Wednesday, September 29, 2010

whole product testing / whole attack testing: two sides of the same coin

(this has actually been sitting in the drafts pile for a while)

several weeks ago brian krebs asked me for my thoughts about a new NSS Labs test which i was happy to provide. aside from the fairly predictable spike in traffic that resulted from brian's subsequent article, i also found an unexpected treat in my inbox - NSS' rick moy reached out to me so that we could discuss a few things. now, this post isn't intended to bring up anything anyone did or said in private email, but i do want to thank rick because if he hadn't prompted me to engage on this topic further than i had already done with brian i might not have gotten to this point in understanding the duality of whole product testing vs. whole attack testing.

the idea of whole attack testing came to me as i was contemplating what little information i could find freely available about NSS' most recent test of how well anti-malware products prevented drive-by downloads. the name was a play on the term "whole product testing" that has become so popular in anti-malware testing circles, and which NSS themselves are big proponents of (after all, they try to use their own attempts at whole product testing as a differentiating factor to set themselves apart from other testing organizations). i thought it was a natural extension of the line of reasoning that brought us whole product testing, maybe even the logical conclusion of that line of reasoning. after all, if you're only testing against part of a multi-stage attack it seems like you encounter similar biases to the ones you get when you only test part of a multi-layer product.

but now that i've thought about it some more i realize what that really means. you literally can't have whole product testing without whole attack testing. if you only test one part of a multi-stage attack then you're only testing the parts of a product that are designed to deal with that particular stage of attack. if you're testing exclusively with neutered or otherwise benign exploits, for example, then it doesn't matter if you're testing entire products against those exploits; only the parts of the products designed to deal with exploits will be capable of raising an alert. as a result, the biases you encounter aren't just similar to the ones you encounter in testing individual parts of a product, they're identical - because you will effectively still only be testing individual parts of the product.

in order to get a true measure of how well a product prevents compromise in the face of real attacks it is necessary to test the whole product against real, whole attacks. as difficult, expensive, and painful as that may be, if we really want to produce tests that tell laypeople what they expect tests to tell them, this is what has to be done.

what is whole product testing?

whole product testing is a form of anti-malware testing that aims to measure the effectiveness of entire anti-malware products rather than just testing the known malware scanner or the heuristic engine within the product.

whole product testing came about in answer to the problem that testing individual parts of an anti-malware product in isolation didn't give an accurate view of how well the product as a whole could perform (for example a threat might slip by the known malware scanner but be picked up by a behavioural technique that wouldn't show on a scanner test) and there was no way to combine the results of tests of the various parts to represent the effectiveness of the whole product. only by giving every part of a product the opportunity to stop a threat can we have an idea of whether that threat would have been stopped on an end user's machine.

because of the wide array of passive and active defenses anti-malware products provide, whole product testing requires each malware sample in the test set to be launched and then the system checked for indications of how well or poorly the anti-malware product stopped the malware sample from compromising the system. after this the system has to be returned to a known-clean state (generally by restoring an image of the drive). this is quite a bit more time and labour intensive than simply running a scanner against a directory full of malware and as a result often requires the size of the test bed to be more modest due to practical considerations (not enough hardware, manpower, etc). while a smaller test bed size may potentially raise questions about statistical significance (depending on how small it is) the ability of the results to map more directly to what an end user can expect makes this type of testing much closer to ideal than earlier tests of a product's individual parts.
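to make the procedure concrete, here's a minimal sketch (in python) of what one cycle of a whole product test might look like. the helper functions here are hypothetical stand-ins for whatever VM control and forensic tooling a tester actually has - this is the shape of the loop, not anyone's actual methodology:

    # hypothetical whole product test harness - one full cycle per sample
    def restore_snapshot(vm):
        """revert the test machine to a known-clean disk image (stub)."""
        raise NotImplementedError("wire this up to your VM controller")

    def run_sample(vm, sample, timeout=300):
        """copy the sample into the guest, launch it, and wait (stub)."""
        raise NotImplementedError

    def system_compromised(vm):
        """inspect files, registry, processes, and network traffic for
        signs the sample got past every layer of the product (stub)."""
        raise NotImplementedError

    def whole_product_test(vm, samples):
        results = {}
        for sample in samples:
            restore_snapshot(vm)        # start from a known-clean state
            run_sample(vm, sample)      # give EVERY layer a chance to act
            results[sample] = system_compromised(vm)
        return results

the expensive parts are exactly the ones the loop makes obvious: a full restore and a full forensic check per sample, which is why whole product test beds tend to be small.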

back to index

what is anti-malware testing?

anti-malware testing is a means by which a qualified organization measures various properties of anti-malware software, such as speed, memory footprint, malware prevention effectiveness, or even malware removal effectiveness.

in theory, anti-malware testing should be straightforward. we want the test results to tell us what we would experience if we used the anti-malware ourselves in the real world so that we can make better decisions about what product to use, so it stands to reason that a test should simulate real world usage. in practice such simulation can actually be very difficult and a variety of shortcuts have been introduced over the years to make anti-malware testing more practical.

unfortunately, as we have found out, even small deviations from the real world can often have a big impact on the actual meaning of the test results such that they can't actually be interpreted the way we intended. one of the challenges that the community faces is understanding how these shortcuts affect the meaning of the results, determining if the new meaning is still useful in some way, and developing new testing methodologies that have fewer and/or less impactful shortcuts so that the tests can come ever closer to approaching the ideal state where their results will actually have the meaning we intend for them to have.

back to index

Monday, September 27, 2010

stuxnet revisited

(some of you may have seen a very early draft of this in your RSS feeds - a slip of the finger caused a publishing mishap)

even though it wasn't that long ago that i posted a number of scathing criticisms of the stuxnet worm, new revelations about the worm and also some of the discussion in this computer world article that asks "is stuxnet the best malware ever?" (and many others i've seen since starting this post) have prompted me to re-examine my opinion on stuxnet.

there have actually been a number of really good technical analyses of stuxnet, but things seem to fall down when people try to turn their technical analysis into a tactical analysis.

what does stuxnet have?
  1. 4 0-day exploits
  2. additional non-0-day exploits
  3. the ability to determine if it's running on a plant floor vs. a corporate network so that it can avoid using some of those exploits in environments where the 'noise' they produce would be noticed by IPS/IDS
  4. a windows stealthkit (also erroneously known as a rootkit)
  5. a SCADA PLC stealthkit
  6. digital signatures on 2 versions of its code using private keys stolen from 2 different sources
  7. a centralized command and control communications channel (now controlled by Symantec)
  8. a P2P update communications channel
  9. the ability to alter the way the SCADA system controls a very particular (and as yet unknown) process
  10. the ability to spread itself over the internal network of an organization via network shares and vulnerabilities
  11. the ability to spread itself beyond the confines of a particular organization's network using removable media and the 0-day exploit for the LNK vulnerability (and an unorthodox implementation of autorun before the LNK exploit was added)
  12. at least 3 distinct versions (the one prior to the inclusion of the LNK 0-day, the first version containing the LNK 0-day compiled in march, and a second containing the LNK 0-day compiled in june and using a different digital signature)
  13. an infection counter to (in theory) limit the spread of the worm
4 0-days is impressive, no doubt about that. the SCADA specific payload obviously required an engineer with knowledge and experience and (in all likelihood) access to a SCADA system that matched the intended target. many of stuxnet's properties are impressive, but some of them have additional significance.

the stealthkits are intended to provide stealth (obviously) so as to keep the window of opportunity for the attack to succeed open longer than it might otherwise be. this implies a persistent presence will be required for the attack to succeed.

the digital signatures on the code also provided some stealth from the heuristic engines of anti-malware products.

the IPS/IDS avoidance also qualifies as a kind of stealth.

the C&C channel (aside from making stuxnet a botnet on top of everything else) implies that the attack is not 100% autonomous. certain actions only happen when stuxnet receives commands to do them. as such, stuxnet will be waiting when it isn't being given commands and this will require a persistent presence.

the update functionality also implies an intent to maintain a persistent presence; and not just persistent over a short term, persistence over a long enough time frame that some part of the attack code becomes no longer fit for use and needs to be updated.

the release of the version with the second digital signature extended the useful lifetime of the signed binaries by several years, as the first was set to expire in june of this year.

as you can see, a considerable number of stuxnet's properties point towards a protracted operation. the payload shows a number of indications that a persistent presence on affected systems would be required for the intended attack scenario to play out as planned.

at the same time, however, the delivery mechanism thoroughly compromises that objective by being noisy, and ultimately it is the reason the worm and its significance were uncovered. with each new system a virus tries to infect, the probability that the infection will fail catastrophically (and thereby draw attention to the virus' presence) goes up. while there was a mechanism in place meant to limit the self-replication (and therefore the probability of that catastrophic failure occurring) a simplistic infection counter was obviously not enough to keep the worm from spreading far and wide and drawing attention to itself. you don't take this risk unless you can't be more targeted (or unless you don't know what you're doing).
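the arithmetic behind that rising risk is worth spelling out. if each new infection attempt independently carries some small probability p of failing catastrophically, then after n attempts:

    P(\text{no catastrophic failure}) = (1 - p)^n
    \qquad
    P(\text{at least one}) = 1 - (1 - p)^n \to 1 \text{ as } n \to \infty

no matter how small p is, enough attempts push the probability of a noisy failure toward certainty. a per-host infection counter caps n for each infected machine but not across the whole population, so the worm's total exposure kept growing with every generation anyway.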

once the worm was found out, the fact that it was a technical marvel worked entirely against it. if stuxnet had simply been just another dumb autorun worm it probably would have remained in obscurity (and indeed the earlier version that was an autorun worm did remain in obscurity despite having been discovered previously), but because of the novelty of its execution technique (the LNK 0-day) additional attention was paid to it and the SCADA-targeting payload was discovered and everything snowballed from there.

i have stated previously that i consider stuxnet a failure, and that much hasn't changed. the fact that it's a technical marvel doesn't mean it can't also be a tactical failure. the history of viruses is littered with examples of technically sophisticated viruses that never even made it into the wild while buggy, braindead viruses somehow proliferated.

stuxnet at least made it into the wild, but the conflicting objectives between its payload and its distribution mechanism (one was targeted, the other was not; one was silent and patient, the other was more like a smash and grab) mean that if the people behind it haven't already accomplished their objective, it's unlikely they will now. the C&C channel is already lost to them, the P2P channel will almost certainly be monitored for new versions of the worm with new instructions and/or a new C&C channel. the entire population of infected machines they built up is now a complete write-off because they didn't know how to maintain harmony between the distribution mechanism and the payload.

furthermore, since they were still releasing new versions as late as june 14, it stands to reason they had not yet achieved their objective at that point.

to date, siemens have only found 14-15 SCADA systems that have been infected and, as i understand it, none have had their PLCs altered. there really doesn't seem to be much evidence to suggest stuxnet's creators achieved their goals.

while there's a lot of speculation floating around that i don't agree with, i am willing to speculate that the people behind stuxnet are relatively new to the world of doing bad things with computers - i don't mean vulnerability research, by the way, since they clearly have some talented people in that arena - i'm talking about being new at mounting actual attacks. cybercriminals have adopted a proven strategy of 'keep it simple, stupid' (KISS) and it has served them well. on the other hand the stuxnet creators tried too hard, made their attack too complex, and generally didn't show the same kind of polish or experience at launching a successful targeted attack that cybercriminals have shown.

i think being relatively new at this is actually compatible with the possibility of them being state-sponsored. while we often like to attribute supernatural powers to government efforts in the technical arena (ex. NSA's cryptographic capabilities are often believed to be light years beyond what the private sector can do), the US government made it abundantly clear that (especially when it comes to attacks in cyberspace) that faith is not always well founded. i don't expect nation states to have the experience that cybercriminals do because they aren't out there actually mounting attacks as frequently as cybercriminals are (if they were, the 'pain' suffered as a result of all those attacks would have triggered a war by now).

after being reminded of the US military's incompetence in 2008, i'm now more willing to believe that this failure was the work of a nation state. however, i'm still not completely ruling out other possibilities. while the industrial process altering payload does indeed change this from an issue of espionage to an issue of sabotage, that doesn't (in my mind) rule out rivalry between businesses. certainly legitimate businesses are not generally known for attempting to sabotage their competitors or others, but less legitimate businesses (say those with ties to traditional organized crime) certainly are.

the one piece of speculation i absolutely cannot abide, however, is the one about the target of the stuxnet worm. the idea that a nation was the target is ridiculous - do you know how easy it would have been to limit the worm to only spread on computers running inside that nation? surely the geniuses behind it could have made the distribution mechanism much more targeted than it was had a nation been the target (or had a nation contained the entire target population). this more recent theory that it was targeted at a particular iranian nuclear facility means that whoever was behind it was willing to risk causing an environmental disaster, so you'd tend to think those who'd have something to lose by being nearby would know better than to try such a thing. one of the most ridiculous ideas is that stuxnet was targeted at a single system, unique in all the world, and that it's got a fingerprint of that system that it's looking for. in order to generate such a fingerprint in the first place, the attackers would need unprecedented access to such a target; the kind of access that would completely obviate the need for an untargeted distribution mechanism.
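for illustration, a coarse region gate of the sort i'm describing is only a few lines of code - conficker famously did something comparable in reverse, avoiding machines with a ukrainian keyboard layout. this python sketch is my own invented example (the check and the placeholder country code have nothing to do with stuxnet's actual internals):

    import locale

    def host_in_region(country_code="XX"):
        """hypothetical gate: only proceed on hosts whose default
        locale ends with the given ISO country code."""
        lang, _ = locale.getdefaultlocale()   # e.g. ('en_US', 'cp1252')
        return bool(lang) and lang.upper().endswith("_" + country_code)

    # a careful attacker would combine locale, timezone, and keyboard
    # layout checks, but even a crude test like this would have confined
    # the spread far better than an infection counter did.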

but people persist in thinking that iran was in some way the target, and i think i know why. it's because people are thinking of stuxnet like it's some sort of military-grade cyber missile. they see a pocket of high infection density and think they're looking at the electronic fallout of a cyber bomb. under these conventional kinetic warfare sorts of analogies you expect the target to be somewhere around the epicenter. but, and i cannot stress this enough, this is the WRONG mindset to use when you're talking about a virus - and we are talking about a virus! if you're thinking about this in those sorts of kinetic warfare terms then your head is in entirely the wrong place (interpret that as you will). computer viruses behave like a disease - stop thinking about ground zero and start thinking about patient zero - stop thinking of blast radius and start thinking about epidemiology. think about how difficult it is to control or even predict the movements of a biological vector in a biological attack. without an agent friendly to the cause doing the dispersal, you can't know where it's going to go first or most often - and even if you do have a friendly agent doing the dispersal you can't know where the disease will spread to afterward or where it will thrive best.

you cannot tell who or what the target was by looking at where the most infected machines were. that only tells you where the worm enjoyed the most reproductive advantage - and most importantly (as kaspersky's alexander gostev rightly points out) the infection populations change over time. the only way you're going to find out what the target was forensically is by finding the PLC(s) it was designed to alter. then, and only then, will you actually know what the target is - and without knowing who/what the actual target was, you cannot make reasonable guesses about the specific motivations behind the attack, and by extension you cannot infer attribution based on who had the most to gain.

but maybe these questions should be reversed. instead of trying to figure out the likely culprit based on who the target was, perhaps it would be better to track down the culprit and ask them who the target is. the two private keys stolen from two different companies in the same area in taiwan seem unlikely to be a coincidence. someone there is involved - even if their sole involvement was selling the keys they stole to a 3rd party, that gets you a lot closer to the people responsible than searching for a PLC needle in a haystack.

Monday, September 20, 2010

anatomy of a snake oil campaign

a certain piece of snake oil was brought to my attention over the weekend and i thought it might prove useful to highlight some of the questionable things i saw.


(screenshot of the zonealarm promotion in question; originally found here)
  1. trusted by 100% of fortune 100 companies? what does that even mean? do you think all 100 of those companies use zone alarm? really? not a single one uses norton? that would be pretty amazing considering norton is in the #1 position in this industry. obviously trusting and using must be two very different things. it seems to me like this is just a clever way to put 100% on the page without claiming something false like 100% detection or 100% protection. instead they say something completely meaningless but still get the benefits of having 100% prominently displayed on the page. how many people do you think will come away from this page and think that 100% was actually in reference to something meaningful like detection even though such a claim, had they actually made it, would have been false? yeah. very sneaky.
  2. a financial trojan virus? really? financial trojan, sure. virus? maybe, i don't know for sure that it doesn't self-replicate. but to put those two terms together like that seems like the work of someone who didn't know what they were talking about. a common ploy is to throw out technical-sounding jargon in order to lend an air of credibility - but when you don't know what you're talking about you have a tendency to combine terms inappropriately. there's a fine line to walk when it comes to jargon. obviously there are times when a vendor needs to use these terms in order to convey particular information. but what also happens sometimes is that vendors will use jargon unnecessarily to confuse the audience and make themselves look smarter and more important. some vendors are good at this - checkpoint? not so much. at least not here.
  3. comparing products on the basis of virustotal results. some time ago i wrote about using virustotal for comparing anti-malware products. i wrote that those of us who know better will laugh at you when you do it. i'm laughing at you right now, checkpoint, and i don't think i'm the only one. the rule of thumb is this: virustotal is for testing malware, not anti-malware. vendors who want to be taken seriously should try to remember that. consumers should probably try keeping it in mind too. virustotal is a great tool, but it's a quick and dirty tool, there's a lot of functionality in modern anti-malware software that it doesn't (and probably can't) leverage.
  4. only zonealarm can protect you? the whole page hypes up the threat of this one variant of zeus. in one breath they tell you that zeus changes often (it does this by way of many, many variants) and then make a big deal out of protecting against this one variant. imagine advertising a bulletproof vest on the basis that it's the only thing that can protect against bullets with a particular striation pattern. all things considered, do you really think you're likely to be fired at with those particular bullets? i didn't think so.
  5. complete protection against new threats is almost textbook snake oil. nothing can protect the user completely, much less protect them completely against new threats. why are vendors still trying to pull crap like this? how have practitioners of this kind of snake oil salesmanship not gone under yet?
<sarcasm>really folks, just get zonealarm, it'll cure what ails you. or your computer. or your dog. </sarcasm>

folks, you need to learn to be more discerning consumers so that the pool of money that supports this sort of intellectual dishonesty dries up. vote with your wallet - steer clear of manipulative marketing and the companies that engage in it.

Wednesday, September 15, 2010

are companion infectors contrived?

while reading a recent threatpost article i was rather taken aback by the following quote:

However, one security researcher said that the vectors for using EXE files in this kind of attack are unlikely to be seen in the real world. HD Moore, CSO of Rapid7 and founder of the Metasploit Project, said that he'd seen some cases of other file types being vulnerable to this kind of attack, but didn't think widespread exploitation was likely.
"Most of the EXE cases are contrived vectors, not realistic for exploits," he said.
i suppose path precedence companion viruses must be contrived then. but if that's so then mr. moore must be using a meaning of contrived that i'm not familiar with, because not only did they work reasonably well in their day, but they still operate quite well even now.

to be clear, and to avoid hyping the issue, i should point out that they aren't much of an issue for users of windows explorer. the way explorer works and the way it's used doesn't necessarily lend itself to this attack. but if you use the command line or happen to write and/or use scripts then planting *.EXE binaries can most definitely still pose a security problem - and there are still users in that group, many of them IT or infosec professionals. i would hope that such people would have an awareness of such a threat, but i've seen increasing evidence that people (even security folks) just don't get viruses in general (even after over 1/4 of a century) much less an obscure, ancient kind like this.
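to see why the attack still works, consider how the windows command interpreter resolves a bare program name: the current directory is searched before the directories on PATH, and the PATHEXT extension list (.COM before .EXE, and so on) breaks ties. a rough python model of that lookup (simplified - the real cmd.exe has more wrinkles) makes the planting attack obvious:

    import os

    def resolve_bare_name(name):
        """simplified model of cmd.exe command lookup: current directory
        first, then each PATH directory, trying PATHEXT extensions in order."""
        exts = os.environ.get("PATHEXT", ".COM;.EXE;.BAT;.CMD").split(";")
        dirs = [os.getcwd()] + os.environ.get("PATH", "").split(os.pathsep)
        for d in dirs:
            for ext in exts:
                candidate = os.path.join(d, name + ext)
                if os.path.isfile(candidate):
                    return candidate    # first hit wins
        return None

    # drop a malicious FOO.COM (or FOO.EXE) into a directory a victim
    # runs commands from, and resolve_bare_name("FOO") hands back the
    # planted file instead of the real one - a path precedence
    # companion in action.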

Tuesday, September 14, 2010

buckshot yankee: cowboys and indians in cyberspace

if you've been following security news in the past month or so you've probably heard about the DoD revealing that an autorun worm managed to get onto classified systems. maybe you were even curious when they attributed it to an unspecified (and possibly unknown) foreign intelligence agency. maybe you were even surprised to learn that this was the genesis of the US cyber command.
 
my first reaction to hearing these things was something along the lines of:
holy crap, the origins of the US cyber command are a farce!
now, don't get me wrong, i think the idea of the US cyber command is probably a good one. but the idea that it was formed because of a run of the mill autorun worm, a profound skills/knowledge deficit (disabling autorun was a security best practice even then and there was a similar incident with NASA earlier that same year so what were their infosec people thinking?), and a hammer&nail mentality (when all you have is a hammer everything looks like a nail, when all you have is military training everything looks like the work of enemy agents) is actually kind of scary.

not only is it scary because of how badly they can blow banal malware incidents out of proportion, but also because in all the investigation and subsequent reorganization to form the cyber command they never seem to have overcome that skills deficit enough to realize their error and get that realization to the top level decision makers. so we're going to have a military body enforcing its will in cyberspace, developing new and interesting ways of exercising its authority, but still unable to distinguish an attack with direct human intent from the actions of an autonomous software agent.

they don't call them viruses just because it sounds cool, folks. these things spread by themselves like a disease. they don't need to be aimed, they don't need someone intentionally helping them along by setting up websites or sending commands or any of that junk. heck, earlier that same year another autorun worm managed to spread to computers on the international space station. you think viruses in space was an intended goal? if it were then it wouldn't have had code to steal online gaming passwords. it boggles my mind how after over 25 years, more than a quarter century, people (even security folks) don't get that computer viruses spread by themselves like a disease without the need for intentional assistance. that's why it's called self-replication.

as such, without clear evidence of intent (and i've yet to hear about any such evidence nearly a month later), occam's razor dictates that we have to assume it wasn't an intentional act by a foreign intelligence agent ($deity help us if the military of the most powerful country in the world sees fit to ignore occam's razor). the supposed foreign agent is most likely imaginary and the military has spent the past 2 years engaging in make-believe. all that time, effort, and money that went into buckshot yankee and the development of the cyber command would have been better spent on overcoming their skills deficit and the institutional issues that allowed that deficit to persist.

that is of course unless the department of defense is actually telling us a partial fiction and the cyber command arose out of early speculation that a foreign power might be involved. i imagine, however, that they'd have a much harder time selling the new budgetary requirements for such a development on speculation alone, so an imaginary foe would have been required, and a virus infecting classified systems would have provided excellent context for selling that story.

maybe they should ban computers while they're at it

(look what i found in my drafts folder - i knew i had written about this at the time but i've been having a devil of a time trying to find it in order to refer back to it. how did i not hit the publish button? apparently this incident triggered what has now become known as operation buckshot yankee.) 

[originally written nov. 22, 2008]

so the US military, unable to get a handle on the spread of malware by USB drives, has opted to ban the drives outright in order to deal with the malware...

there are easier ways to deal with autorun malware (like disabling autorun, deploying application whitelisting technology, or even setting up something like sandboxie to only open USB drive content in a sandbox)... banning USB drives strikes me as a move made out of desperation by someone who doesn't understand the nature of malware and why/how it spreads...
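for reference, disabling autorun comes down to a single registry value (in an environment like the military's it would normally be pushed out via group policy). here's a python sketch using the standard winreg module - windows-only, needs administrative rights; the value 0xFF disables autorun for every drive type:

    import winreg

    # set NoDriveTypeAutoRun under the Explorer policy key; 0xFF
    # turns autorun off for all drive types machine-wide
    key = winreg.CreateKeyEx(
        winreg.HKEY_LOCAL_MACHINE,
        r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer",
        0,
        winreg.KEY_SET_VALUE,
    )
    winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0, winreg.REG_DWORD, 0xFF)
    winreg.CloseKey(key)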

any organization (be it a company, school, government agency, or the military) that wishes its members to utilize computers for some non-trivial task is going to employ some kind of division of labour... that means different people will work on different parts, often at the same time and so requiring different computers... in order to combine the fruits of their labour and achieve the organization's goals it is necessary to share the outputs of their efforts... removable media like USB drives are just one path through which data can be shared between computers... any path that can be used for sharing data can also serve as a vector for malware...

banning usb drives may block this one form of malware (though since they don't appear to be implementing technological preventative controls, but rather administrative preventative controls backed by technological detective controls, i suspect they won't be entirely successful), but it leaves open the threat from CDs/DVDs/floppy disks, email, network connections in general, etc... if you're going to try to block malware by banning its attack vector then you might as well do it for all the attack vectors, but that's going to make the division of labour in a computer-dependent project impossible to support because it will leave no paths for sharing data... without the division of labour, non-trivial computer-based tasks become next to impossible, leaving only trivial computing tasks... but then, if you can only do trivial computing tasks, what good is a computer to you? not much...

Monday, September 13, 2010

don't be too proud of this technological terror you've created

lots of folks have been posting about the 'here you have' mass mailing email worm that's been making the rounds. it's strange that such an old-school technique should inspire so much discussion, but it has and some of it's actually interesting.

one of the discussions comes from the enterprise application whitelisting blog, in other words, it comes from application whitelisting vendor bit9. they are, perhaps understandably, quite bullish about the fact that their technology would have stopped the threat before it could have spread while the anti-virus software vendors were supposedly left to scramble to get detection added after the fact.

while it's true that a classical blacklist or known-malware scanner would require updating after the threat becomes known, it seems that at least some of the anti-virus software vendors that harry sverdlove was taking a shot at were actually able to detect the threat heuristically (see f-secure's post or kaspersky lab's post for example).

it also deserves to be said that many anti-virus software vendors are bundling whitelisting in their suites these days, so people using that feature of those offerings would have been just as safe as if they'd been using bit9's.

most importantly, though - if social engineering can be used to get people to extract malware from a password protected archive sent as an attachment and then run that malware (and we have historical examples of successful email worms that used precisely this technique), social engineering can be used to get people to add that malware to the whitelist.

whitelists do not make you magically immune to this threat. i'm not even convinced they raise the bar a significant amount when you consider how easily people can be tricked into doing all sorts of dumb things. perhaps an enterprise would be in a better position because relatively few would (in theory) have access to modify the whitelist, but administrative users aren't above doing dumb things.

Friday, September 10, 2010

scareware and human frailty

about two weeks ago the folks at trend published a blog post about the persistence of fake av that really got under my skin. it was around the time of my birthday, which is ironic because the following quote really takes the cake:
Online, however, FAKEAV is a good example of a social engineering “success story.” By leveraging human weakness, FAKEAV effectively utilizes social engineering techniques such as blackhat search engine optimization (SEO) to trick users.
if there's one time a vendor should not be laying the blame for users being fooled on "human weakness" it's when talking about scareware.

scareware generally presents itself to the user in very much the same way legitimate security products do. vendors should consider that maybe the reason scareware purveyors can be so effective while imitating legitimate security vendors is how close legitimate security vendors' messaging is to being an outright scam in and of itself.
use our software. we'll protect you.
worry free computing
we take care of X so you don't have to
you need our solution
the virus problem is solved
etc.

while those wearing vendor-coloured glasses may see the average user's propensity to believe the messaging put forward by illegitimate security vendors as nothing out of the ordinary (and certainly nothing to do with the vendors themselves), i see over 2 decades of marketing and media training the populace to be as unquestioning, as unthinking as a pack of lemmings in a mindless frenzy when it comes to what security vendors say (whether they're really security vendors or not).

it's not human frailty at work here, it's bad guys figuring out how to exploit the one thing that security vendors are loath to change: their marketing and business practices. legitimate (or so-called legitimate) security vendors made the market for scareware. the scareware purveyors are just showing up to the party, putting their hands out, and having a slice of the pie handed to them on a silver platter.

if the security industry really wants to do something about scareware purveyors, they should stop acting so much like them and start fostering skepticism amongst the populace - not only skepticism in what others say but also in what you yourselves say. stop creating an environment where scareware flourishes. stop doing their market development for them and actually start dismantling that blind-trust based market in spite of the fact that it's paid you so well in the past.

the bad guys are milking your cash cow, vendors. it's time to stop treating customers like cattle. it's time for you to lead rational critical thinkers rather than herd livestock. it's time for you to stop being part of the problem.

what is scareware?

scareware (also called rogueware, fake AV, rogue AV, and a host of other names) is a type of malware that pretends to be legitimate security software, often for the purposes of extorting money from the user.

the traditional method of operation for scareware once it runs on the victim's machine is that it will pretend to have found security threats on the system but inform the user that in order to remove them the user must pay to register the fake security software first. the security threats that it claims to find are used to scare the user into complying with the request for registration, but they are generally either non-existent or they were purposefully planted there by the scareware.

scareware can be introduced into a system by any number of means, including drive-by downloads, installation by other malware such as bots, droppers, or downloaders, or even a fake initial scan performed directly from the web page that sells the scareware (sometimes with hilarious inconsistencies, like scanning windows folders when you're browsing from a mac).

because scareware pretends to be something which it is not in order to socially engineer the user into paying the malware writers who created it, it qualifies as a type of trojan horse program. it doesn't have to try too hard in order to trick the user, as users are very willing to believe anything claiming to be security software, especially when it says the user is unsafe, in large part because legitimate security vendors have long trained users to trust them without question.

back to index

Tuesday, September 07, 2010

well mine's infinity big

so sophos seems to be a little late to the mine's bigger party, but at least they're following the go big or go home philosophy to make sure everyone notices them. which is unfortunate - for sophos.

see, while the numbers game that most vendors play sometimes has the faint aroma of snake oil, kris braun's bold statement that sophos 'currently detects an infinite number of malicious files' (with a big infinity graphic, no less) has an overpowering stench of snake oil.

it's one thing to use big but meaningless and impossible to verify numbers, it's another thing to claim things that are literally false. sophos can't currently detect an infinite number of malicious files because an infinite number of malicious files do not currently exist.

claiming to detect not only more than you can verify you detect, but more than actually exist is ridiculous. you might as well say you recognize 15 different boolean outcomes or can write out all 57 letters of the english alphabet.


sophos may, arguably*, have the potential to detect an infinite number of things, but that's a lot different than being able to currently detect an infinite number of things, and you can bet that the average person not only can't tell the difference but will look at the claim of infinite detection much like they would a claim of perfect detection (in spite of kris' explanations to the contrary).

so congratulations go to sophos for the ignominious feat of innovating in the field of snake oil. now please do yourselves a favour and stop being so boastful.

(*technically, an infinite number of files would require that some of those files be infinitely long, since limiting them to a finite length would also limit them to a finite number. since there is not yet a computer capable of holding an infinitely large file let alone indexing one, there's strong evidence to suggest that sophos cannot actually detect those malicious files that happen to be infinitely large and thus cannot currently detect an infinite number of malicious files.)
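for anyone who wants the counting argument made explicit: a file is a finite string of bytes, so the number of distinct files of length at most L bytes is a finite sum, however large you make L:

    # number of distinct byte strings of length 0 through L: sum of 256^k
    def files_up_to(L):
        return sum(256 ** k for k in range(L + 1))

    print(files_up_to(3))   # 16843009 - big, but emphatically finite

any finite length cap yields a finite count, so an infinite set of files forces unbounded file lengths - which is exactly the point of the footnote above.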

Wednesday, September 01, 2010

of logic and malware

i've been accused of (among other far more odious things) having formal training/education in the area of logical debating. i don't actually have said training, but i do know a thing or two about logic, so when i read ed moyle's post on security curve about the industry's flawed logic as it relates to malcon it didn't take me long to realize how ed's logic (rather than the industry's) had gone pear-shaped.

to quote:
  • Major premise:  All conferences that provide details on how to create malware are a “bad idea”
  • Minor premise:  Blackhat/Defcon provide details on how to build malware (e.g. the Invisible Things Blue Pill presented at Defcon 2006; stated goal, “creating 100% undetectable malware”)
  • Conclusion:  Blackhat/Defcon is a “bad idea”.
i can use a similar pattern to produce equally questionable results:
  • major premise: cats are furry
  • minor premise: marmaduke is furry
  • conclusion: therefore marmaduke is a cat
this is the sort of logic statement i recall seeing on tests as a child. often they'd be worded in tricky ways to test our ability to judge the validity of a supposed logical argument. in my example the primary problem is the major premise. while it is true, it's not specific enough to be useful; many things other than cats are also furry.
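for those who like their fallacies formalized, my example is the classic undistributed middle. in first-order terms:

    \forall x\,(\mathit{Cat}(x) \rightarrow \mathit{Furry}(x)),\quad \mathit{Furry}(m) \;\not\models\; \mathit{Cat}(m)

both premises can be true while m is a furry non-cat (a dog, say), so the conclusion simply doesn't follow from them.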

likewise ed's premises have problems. starting with his minor premise, the details about how to make the blue pill were actually not given out. those details were behind a pay-wall rather than being freely handed out at the talk. furthermore, the classification of the blue pill as malware is questionable at best. just because it's a so-called 'rootkit' doesn't mean it's malware - the reason being that current use of the term 'rootkit' has become so twisted (by which i mean anything that hides things gets called a 'rootkit' now) that even anti-malware products have been called rootkits. the blue pill was a novel stealth proof of concept. it could have been used in conjunction with actual malware, but the blue pill itself was not malware.

that tells me, at the very least, that the blue pill was the wrong example for ed to use. we can correct that, however, by using a better example. the race to zero would be a much better example because it involved the creation of actual malware (modifying existing malware to make something that has never been seen before is for all intents and purposes the creation of new malware), which is precisely what malcon aims to facilitate and so makes for a much closer analogy.

unfortunately, even if we replace the reference to the blue pill with a reference to the race2zero, ed's minor premise is still problematic. is the race to zero still not a good enough example? is there a better one? the fact is, no matter what blackhat/defcon presentation you select as an example you will never be able to improve the premise because it would still be just one presentation. blackhat/defcon are about more than just the race to zero or the blue pill. the blackhat/defcon conference pair focus on a wide variety of security issues, many of which not only deserve to be highlighted but also contribute to the betterment of the security condition in three well defined ways. they highlight problems that:
  1. should not have happened
  2. can be fixed
  3. can be avoided in future designs now that we know what to watch out for.
by way of contrast (since ed's argument compares blackhat/defcon to malcon simply by substituting one for the other in his logical framework above), malcon focuses explicitly and exclusively on the advancement of malware creation which is (in general) incapable of providing the same contribution to the security condition. this is the age old distinction between vulnerability research and malware 'research'. with the exception of exploits, malware can't be fixed or avoided because it relies on properties that are intrinsic to the general purpose computing platform.

we also gain no technical benefit by supposedly trying to open a dialog between malware writers and anti-malware researchers.
  • for reactive defenses the only prospective benefit would be to help analysts understand the malware. but going back as far as 2006, the average piece of malware could be processed in as little as 5 minutes, so understanding malware doesn't really seem to be something analysts need help with. 
  • for proactive defenses the hypothetical benefit would be in letting the analysts know what sort of things are coming so that anti-malware products can catch them before they've even seen them. unfortunately this model is based on predicting the future precisely enough that we'd know specifically what to look for and, as such, is unworkable. the proactive defenses that work are the ones that actually know less, not more, about specific threats whether past present or future (thus why they're called generic techniques).
now, before i stray from the immediate topic any further, let's get back to ed's logical problems. the major premise that "All conferences that provide details on how to create malware are a “bad idea”" is a poor premise as demonstrated by the blackhat/defcon example. one of the necessary properties of a premise is that it's something both parties in an argument can agree on, but this premise is overly broad. as discussed above, blackhat/defcon covers a wide variety of things - can we really say blackhat/defcon is bad as a whole because one of those things might be bad? that seems pretty ridiculous. malcon, on the other hand, is much more narrowly focused on just that one bad thing; so if we rewrite the premise to be more specific, perhaps something like "All conferences that exist solely to provide details on how to create malware are a “bad idea”", then we can include malcon and exclude blackhat/defcon.

now the question one might be asking is, if ed's logic is flawed, what logic would be better? well, for starters i really don't like the major premise, minor premise, conclusion construct - i prefer the premise, inference [, inference...], conclusion construct.
  • premise: malware is bad
  • inference 1: since malware is bad, creating malware is bad (with the exception of benign exploits)
  • inference 2: since creating malware is bad (with one exception), helping others create said malware by doing things that can reasonably be avoided is bad
  • conclusion: since malcon will help people create malware by doing something that could reasonably be avoided, malcon is bad.