Saturday, July 25, 2009

not too long ago i published a post called understanding malware growth, in which i observed that the period of accelerated malware growth we had seen in previous years appeared to be over. unfortunately, according to new av-test.org statistics (as reported by mcafee), accelerated malware growth is back.
now, one way you could take that is that i was wrong. perhaps i was, at least with regard to being optimistic - perhaps optimism was not called for. i wasn't wrong about the rapid expansion being over, though; the growth was more or less constant for the better part of a year, and that is a distinct departure from the growth trend prior to august 2007. a year is a considerable amount of time for malware to languish.
another point i don't think i was wrong about was the nature of malware growth itself. if you read the previously mentioned post carefully you should have come away with the impression that, contrary to the naive malware growth forecasts, malware growth is punctuated. there was very clearly a point at which the rapid expansion of malware growth started, and just as clearly there was a point where that expansion came to an end - a point where whatever driving factor was behind the higher malware production rates finally reached its peak and the malware ecosystem returned to a kind of balance.
i had in mind the possibility that another disruption in malware growth could occur - if it can happen once, it can happen again - but i didn't mention it because i didn't want to be negative; i didn't want to jinx our collective good fortune. had more up-to-date data been available, however, i would have known our good fortune had already come to an end when i wrote that post.
another disruptive event does appear to have occurred. continuing with some of the postulates from the previous post (namely that the mechanism for creating the bulk of the variants is server-side polymorphism), if we were to then go with the postulate that the web is the dominant malware delivery vehicle (as opposed to malware being carried directly in email or some other channel), then one of the key limiting factors in the automated creation of new minor variants is the problem of getting victims to view malicious content.
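to make that postulate a little more concrete, here's a toy sketch in python of what server-side polymorphism boils down to. real engines re-pack or re-encrypt the payload for each request; random trailing bytes stand in for that here, and the payload itself is just a harmless placeholder:

    import hashlib
    import os

    # harmless stand-in for whatever the server would actually distribute
    base_payload = b"placeholder payload bytes"

    def serve_variant():
        # the essence of server-side polymorphism: vary the bytes per
        # request so every download looks like a 'new' sample by hash
        return base_payload + os.urandom(8)

    for _ in range(3):
        print(hashlib.md5(serve_variant()).hexdigest())  # three different hashes

the point being that every additional victim who views the content can yield another unique variant at essentially zero cost to the attacker.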
there are 2 things in recent memory that i think could have led to more browsers loading malicious content than before. one of those things is innovation in the field of social engineering - we've been seeing more fake celebrity news (especially deaths) recently than i seem to recall seeing in the past. hammering on a new hook that the public at large hasn't yet grown wise to could increase the success rate of social engineering that uses that hook and thus trick more people than usual into browsing to malicious content.
the second thing that could be contributing to an increase in browsing malicious content is the increasing trend of mass website compromises through SQL injection. increasing the number of legitimate sites that serve malware (even if indirectly) increases the chances of web surfers stumbling across malicious content entirely by accident.
both of these things can lead to an increase in the browsing of malicious content, which in turn gives the server-side polymorphic engines responsible for handing out that content the opportunity to create that much more of it in an automated fashion. that said, both of these things will also reach a peak in their influence. the public will eventually grow wise enough to the fake celebrity death reports that the success rate of that ploy will drop back down to previous levels. successful sql injection will also eventually taper off as the content management systems being targeted get hardened by their vendors, or as the pool of potential victims shrinks when the entities in it realize they need to take steps to prevent this sort of compromise.
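as an aside on that hardening: the standard fix for sql injection is the parameterized query. here's a minimal sketch in python (using the standard sqlite3 module and a made-up articles table) of the difference between an injectable query and a hardened one:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE articles (id INTEGER, title TEXT)")
    conn.execute("INSERT INTO articles VALUES (1, 'hello world')")

    def find_articles_vulnerable(title):
        # string concatenation: input like "x' OR '1'='1"
        # changes the meaning of the query itself
        query = "SELECT * FROM articles WHERE title = '" + title + "'"
        return conn.execute(query).fetchall()

    def find_articles_hardened(title):
        # parameterized query: the driver treats the input
        # strictly as data, never as sql syntax
        return conn.execute(
            "SELECT * FROM articles WHERE title = ?", (title,)
        ).fetchall()

    print(find_articles_vulnerable("x' OR '1'='1"))  # returns every row
    print(find_articles_hardened("x' OR '1'='1"))    # returns nothing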
when these things happen malware growth will once again plateau (it might already be there, but it's far too soon to tell) - until the next disruptive malware innovation, that is.
Monday, July 06, 2009
on the efficacy of passphrases
so now that we've all had a good think about whether masking passwords is really the right way to go, let's question another bit of wisdom - let's question whether passphrases are really as secure as we think they are.
you see, with all the thinking out loud recently about the password authentication system, one of the topics that got discussed in various sectors was the passphrase. the passphrase idea is that instead of using a relatively short string of bytes (which we'll all hope, for the sake of argument, isn't a dictionary word) as the shared secret you use to authenticate yourself with some system, you use an actual complete sentence which, although made up of dictionary words, is quite a bit longer and thus theoretically harder to guess or attack by brute force.
the theory goes that the more characters are in your shared secret (be it a password or passphrase or whatever) the more possible combinations a program would have to go through before it eventually found the one that matched. additionally, since a passphrase is a proper sentence made out of proper words the consensus seems to be that it would be easier to remember.
but are those things true? let's look at the length issue. one of the things you'll often encounter when selecting a password is the need to make it complex. complex in this context means that it draws characters from as wide an array of the printable ascii character set as possible and that it doesn't contain dictionary words - basically, the closer it looks to a random string of characters the better. the reason for this is 2-fold: the wide array of characters maximizes the number of possible values for each character in the password, so that the total number of possible passwords a brute force attack would have to go through is maximized; and the random, non-dictionary-word property removes any constraints on what the various characters might be that would otherwise lower the total number of possible passwords (for example, in english the letter Q is almost always followed by the letter U). natural language introduces considerable constraints - although it takes 7 bits per character to represent the entire set of printable characters, english text is generally estimated to contain a little over 3 bits of information per character because of all those constraints. that means that an english passphrase of 20 characters is about as strong as a random password about half that length - at least if that were the only issue involved.
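to put rough numbers on that claim, here's a back-of-the-envelope sketch in python. the 95 printable ascii characters and the roughly 3 bits per character for english are the estimates discussed above, not exact figures:

    import math

    PRINTABLE_ASCII = 95                        # printable ascii characters
    BITS_RANDOM = math.log2(PRINTABLE_ASCII)    # ~6.57 bits per character
    BITS_ENGLISH = 3.1                          # rough estimate for english text

    passphrase_len = 20
    passphrase_bits = passphrase_len * BITS_ENGLISH        # ~62 bits
    equivalent_random_len = passphrase_bits / BITS_RANDOM  # ~9.4 characters

    print(f"20-char english passphrase: ~{passphrase_bits:.0f} bits")
    print(f"equivalent random password: ~{equivalent_random_len:.1f} characters")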
now, how is your memory? a while back, when i discussed choosing good passwords, i mentioned that you should have a different password for each account you make so that if one gets compromised the rest remain secure. nothing about using passphrases obviates the need to have a distinct and different one for each account, so can you remember 20, 50, 100 different sentences? can you remember them all perfectly? can you remember which one goes with which site? perhaps not.
how could an average user make that easier (besides reusing passphrases)? the most obvious tactic would be to use culturally significant phrases. things like:
"The rain in Spain falls mainly in the plains"
"Are you smarter than a fifth grader"
"Rudolph the red-nosed reindeer had a very shiny nose"
...

i say this is the most obvious tactic because we've already seen it in the media - if you'll recall, way back there was a television program called millennium starring lance henriksen. henriksen's character was recruited by a clandestine organization that supplied its recruits with passphrase-protected computers, and in henriksen's case the passphrase was "Soylent Green is people" (a quote from the movie soylent green).

this adds another layer of constraints on the set of probable values, and my gut instinct is that this constraint is so significant that rather than mounting a full brute force attack it would be feasible to simply mount a dictionary attack. i will concede that there are a very large number of culturally significant phrases that people might choose from, but i doubt the cardinality of that set differs significantly from that of the set of words in an actual dictionary. furthermore, it wouldn't be too difficult for an attacker to trick people at large into generating a dictionary of passphrases for him/her simply by setting up a free website that requires an account logon in order to access porn.
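to illustrate why that matters, here's a minimal sketch in python of such a dictionary attack against an unsalted sha-256 hash. the phrase list is obviously hypothetical, and a real attack would use a far larger list and whatever hashing scheme the target actually uses:

    import hashlib

    # hypothetical dictionary of culturally significant phrases - in
    # practice this would be scraped from song lyrics, movie quotes,
    # nursery rhymes, and so on
    phrase_dictionary = [
        "The rain in Spain falls mainly in the plains",
        "Are you smarter than a fifth grader",
        "Rudolph the red-nosed reindeer had a very shiny nose",
    ]

    def crack(target_hash):
        # trying a few million candidate phrases takes seconds, versus
        # centuries for a brute force over all strings of that length
        for phrase in phrase_dictionary:
            if hashlib.sha256(phrase.encode()).hexdigest() == target_hash:
                return phrase
        return None

    stolen = hashlib.sha256(b"Are you smarter than a fifth grader").hexdigest()
    print(crack(stolen))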
as such, i question both the idea that passphrases will be easier to remember when used properly and the idea that a passphrase is more computationally expensive to attack. i think the strength gained by increasing the length is a phantom improvement - the added strength is thrown away due to the various additional constraints on the set of possible values. i think there will be no ease-of-use benefit either, as the sheer number of passphrases will still require them to be recorded somehow (and since they're so much longer than normal passwords, recording them all will be more onerous). i like the idea of expanding password fields to accept more characters, but i see no reason to use that extra space for anything other than longer strong passwords.
Tags:
passphrases,
passwords,
security
Wednesday, July 01, 2009
about face on bruce schneier
well, it was nice to see something as fundamental as the issue of password masking questioned (usability expert jakob nielsen's original article, bruce schneier's reaction), and then answered (rik ferguson's response, graham cluley's response). i wonder if that indicates we're collectively ready (some of us have been individually ready for a while) to question something equally fundamental, like the concept of 'security experts'.
i say this because, as was alluded to by rik ferguson, bruce schneier seemed to be pulling 'blatantly evident factoids' out of his ass (as 'self-appointed authorities' tend to do).
empirical evidence of the decline of shoulder surfing, even if it exists, wouldn't be able to support the implied assertion that it would stay in decline if password masking went away. in fact, the opposite seems much more likely. while it's true that a determined shoulder surfer would be able to figure out your password from your keystrokes, without the password mask even the most casual shoulder surfer would easily be successful. remove the control that makes something rare and it won't be rare anymore.
but i digress. this post isn't about password masking, it's about 'security experts' - and i find myself in a bit of a conundrum because, while i would normally be decrying schneier's posting of material obviously outside his field of expertise, as someone who not only doesn't lay claim to the title of expert but actively rejects it, would i not be practicing hypocrisy?
to answer my own question: yes, yes i would. stop that.
the problem isn't (or shouldn't be) that schneier (or anyone else for that matter) posts on topics outside their field of expertise. he should be afforded the freedom to post uninformed opinion just like everybody else. in that regard the problem isn't really bruce schneier at all (even if he is a FUD spreading self-described media whore), the problem, dear reader, is you - for not recognizing how narrow a scope expertise has, where schneier's is, and consequently when to hold his statements up to greater/lesser scrutiny (note: if you have figured out that cryptography is his area of expertise and that you should question everything else then i'm not actually talking to you but everyone else).
and yes, i am aware of the implication that you should then question everything i say - that is actually precisely what i want. one of the benefits from not being an expert that i'd like to enjoy is people not blindly accepting everything i say and actually challenging me when something i say doesn't fit with something they know. i actually like arguing, i find it to have useful properties in the gaining of knowledge, and i think it's a shame that so many seem to value the finding of consensus over the finding of correctness (nevermind the tendency to use authoritative quotes as a replacement for critical thinking). oh well, at least nick fitzgerald and vesselin bontchev were still willing to expound at length on the topic of malware last i checked (though it has been a while).
Tags:
bruce schneier,
security expert
month of 'do as i say'
well, today is the first day of july, and that means it's also the first day of the month of twitter bugs. as with past 'month of X bugs' projects, each day a security vulnerability relating to the subject product/service (in this case twitter) will be disclosed to the public.
why are they disclosing these vulnerabilities to the public? as with any exercise in full disclosure the idea is to force the company(ies) involved to clean up their act and fix the disclosed vulnerability. full disclosure puts the information into the public sphere so that many people are aware of it and something as attention-grabbing as a 'month of X bugs' captures even more mind-share and gets it that much closer to the mainstream.
in theory this is good for the public. in theory the researchers doing the disclosure are doing a good deed because they care about security and the company(ies) they're pantsing are lazy, arrogant boors who don't care about security or the public and who need to be forced into doing the right thing. and supposedly fixing the individual bugs the vulnerability researchers identify is the right thing to do.
but what if the theory is wrong? not all companies are created equal (nor are all vulnerability researchers for that matter). nobody knows what goes on inside a company except the people inside that company. it's possible that a given company may actually be working on a comprehensive upgrade to their security posture that would obviate swaths of bugs including the ones that get identified by outside researchers. but that would have to be delayed and the company's time wasted so that they can chase down these individual bugs one at a time in order to appear to be doing something. such is the consequence of shifting vulnerability notification to the public sphere.
now, i'm not going to be disingenuous and suggest this is what happens all the time or even most of the time, but being in software development i feel confident that it is happening some of the time.
external full disclosure practitioners have no way to know that, however, nor have i seen any indication that they care. despite the warm and fuzzy ideology behind the practice, full disclosure is an application of force. it is an exercise in coercion, an example of using public relations to bend a company to the vulnerability researcher's own will because supposedly the researcher knows better than the company what is best for the company's users.
i'm no more a fan of lazy, arrogant, boorish companies than the next guy, but belligerent 3rd parties leave a bad taste in my mouth too.
Tags:
ethics,
full disclosure,
month of X bugs,
twitter,
vulnerability
live by the sword, die by the sword
so yesterday i got a rather rude surprise by email. google, in their infinite wisdom, had decided to label this blog as spam and had locked it, preventing me from publishing what i had planned to publish. if i hadn't acted within 20 days the blog would have been deleted, apparently.
yes, it was the real google and not a spear phishing campaign (it would have made for good social engineering but who'd spear phish me?). they used the exact email address that i only gave to google for my blogger account, and furthermore, when i visited my blogger dashboard (not following the link in the email) it showed the warning about it there plain as day so it was definitely for real.
according to the literature i was seeing, the system by which they identify splogs is automated. they mentioned fuzzy logic, but we know what that means - heuristics (though not the same heuristics used in anti-malware, obviously). although i'm not exactly some hardcore heuristic fanboy i can't help but feel there's a bit of irony in me (or more specifically this particular blog) catching the business end of a heuristic false positive.
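i obviously have no idea what features google's system actually looks at, but a toy score-based heuristic in python (with entirely made-up features, weights, and threshold) illustrates how false positives fall naturally out of this kind of approach:

    # toy splog heuristic - hypothetical features and weights,
    # purely to illustrate how score-based detection misfires
    def splog_score(post):
        score = 0.0
        if post["links_per_post"] > 10:
            score += 0.4    # link farms link heavily
        if post["repeated_phrases"] > 5:
            score += 0.3    # generated text repeats itself
        if post["posts_per_day"] > 20:
            score += 0.3    # humans rarely post 20+ times a day
        return score

    THRESHOLD = 0.5

    # a legitimate link-heavy post can cross the threshold just as
    # easily as a real splog - that's a false positive
    legit_post = {"links_per_post": 15, "repeated_phrases": 6, "posts_per_day": 2}
    print(splog_score(legit_post) >= THRESHOLD)  # True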
there is, of course, an appeal process they give you (so that your blog doesn't get deleted after the 20-day grace period), but i found it a little annoying that there was no indication anywhere that i'd successfully initiated that process - it was only when i checked again today and found the splog notification gone that i had any clue that anything had gone through. the real test, of course, is publishing, and that's part of what this post is meant to achieve - if you're seeing this then the blog is back to normal.
Tags:
google,
heuristics,
spam,
splog