in the past, the argument many experts have used against putting in the effort to actually correct people's misconceptions about what a computer virus is, what it does, and how it differs from other forms of malware has centered on the idea that it doesn't really matter to a victim what kind of malware they have. when they have malware on their computer, all they care about is getting rid of it. i can understand this school of thought, even though i've never agreed with it. however, the world has changed.
this is a post-stuxnet world, and defenders/victims aren't the only ones we need to consider anymore. we now have people barking orders to create digital weapons, and the details of those orders matter. if they say "make me a virus" then someone will make them a virus, even if they didn't understand what they were asking for - and there's evidence to suggest they don't.
stuxnet is generally believed to have been a targeted, covert operation, but it used a noisy, untargetable* type of malware - a computer virus (or, more specifically, a worm). there were and are better ways to achieve the ends we believe the creators of stuxnet were after, so one can only assume that high-level decisions were made in ignorance of the differences between malware types and of the consequences those differences would have for that type of operation.
the question, then, of whether to put in the effort to educate people about the proper use of the term virus can no longer be answered by looking exclusively at the victims who want their computers to be clean. it is now also necessary to consider aggressors who want to use malware as a weapon to serve national interests. as misguided as such behaviour is, we have to accept that it's happened and will continue to happen, and actually knowing the differences between malware types may mean the difference between a surgically precise operation and one with a lot of collateral damage.
this isn't to say that i think we should start helping aggressors create and/or launch their digital weapons. i still don't believe in helping the bad guys, and even if i believed such nationalistic aggressors weren't bad guys (which i don't), i don't believe there's any way to help them without also helping those who are much more unambiguously bad. what i am saying, however, is that this particular form of ignorance that experts have been too lazy to address can cause real harm and ignoring it means ignoring the opportunity to reduce the unintended harm such people will cause.
(*many believe stuxnet was highly targeted, but there's a distinction to be made. while its destructive payload was highly targeted to a very specific environment, its self-replication was not - it spread far beyond its intended target)
devising a framework for thinking about malware and related issues such as viruses, spyware, worms, rootkits, drm, trojans, botnets, keyloggers, droppers, downloaders, rats, adware, spam, stealth, fud, snake oil, and hype...
Showing posts with label stuxnet. Show all posts
Thursday, June 13, 2013
Monday, July 30, 2012
the folly of offensive cyberwarfare
i often feel like i can't speak freely about cyberwarfare (due almost entirely to my principles about not helping or giving ideas to those who make things worse, be they criminals or warmongers), but it's hard to deny the importance of the subject, and frankly, when i read what others have written, i can't help but think they haven't thought things through very well.
when it comes to the development and use of digital weapons there are a couple of key points whose implications need to be understood and kept in mind. the first of these is the problem of attribution. the difficulty in attributing the source of a computer attack is both tactically advantageous, and strategically constraining. the advantages should be obvious - you can attack an opponent without the opponent knowing who is responsible for the attack (unless you screw up and reveal yourself). the problems begin, however, when you consider that the opposite is also true - one or more of your opponents can attack you without you being able to tell who it was.
consider what that means. if you can't tell who is attacking you, how can you possibly retaliate? imagine you're blindfolded, your ears are plugged, you're handed a gun, and you're stuck in a room with other people who may or may not also have blindfolds, earplugs, and guns. if someone starts shooting at you, how can you realistically return fire to defend yourself without knowing where to shoot? without the ability to target your opponent you cannot retaliate; you cannot end him before he ends you. further, when the threat of retaliation becomes empty like this, deterrence no longer works. as a result, so-called cyberweapons have no defensive value.
in the absence of attribution, a conflict must consist entirely of first strikes. there is no retaliation, there is no deterrence, there is no scaring an enemy off by showing what you can do, there is no point to visibly stockpiling armaments. that is significantly different from most conventional models of warfare. this is one of the reasons why cyberwarfare must only ever accompany traditional warfare - only then can combatants avoid firing blindly in the dark.
another important aspect of digital weapons to keep in mind is the fact that they're digital. they're code, bits and bytes inside a computer. what is the one thing computers are exceptionally good at doing with those bits and bytes? copying them. imagine a world where it's expensive to develop guns and tanks and bombs from scratch, but it costs virtually nothing to copy them. that is the world of cyberwarfare, and that is a world that actually does not favour the attacker, per se, but rather one that favours the forager (one of the things sun tzu teaches is to forage on the enemy) because s/he gets the most benefit (a sophisticated digital weapon) for the least cost.
when weapons cost a lot to develop from scratch but very little to copy, what conditions would make their development and use make sense? if you could eliminate the possibility of copying and re-use, or if the weapon assured you a decisive victory over your opponent, then it wouldn't matter that it falls into your opponent's hands simply by your using it. unfortunately, in the real world a nation has many opponents. they cannot all be fought at once, and so a decisive victory against all whose hands such a weapon may fall into is not possible.
what's more, not all of those opponents are necessarily other nation states. the low cost of copying weapons means that the barrier to entry on this battlefield is lowered and more mundane opponents like terrorists or even sophisticated criminals can join the fray. as you can well imagine, those kinds of opponents are far less disciplined and restrained than a nation state would be.
our best example of a digital weapon thus far is stuxnet. it's believed to have cost millions of dollars and many man-years of effort to develop, and now anyone who wants a copy can download it for free from the internet. i would be remiss if i failed to point out that by now stuxnet is pretty well neutered (since the windows vulnerabilities it exploited have been patched and most anti-malware will detect its presence) and it would actually take a fair bit of time and money to replace the neutered bits so it could be re-used; but there was a time before that was true when stuxnet was still in many people's hands and could have been re-used at a much lower cost. as strange as it may sound, the malware's discovery and subsequent neutering actually served to mitigate the potential for its re-use. its creators are lucky it happened before the malware could be re-used against them, their allies, or other interests they might have. that might not be the case next time.
it's a peculiar irony that the people most capable of developing digital weaponry (the technologically advanced and dependent) are the same people who have the most to lose if such weaponry is used against them. this should make it obvious that defense, not offense, is where one's money and effort would be better spent. just so i'm not that guy who makes overly general, hand-wavy suggestions, here are some ideas that are more specific than just "you should do defense":
- fault tolerant designs
- redundancy is already something we know how to do, but we don't always do it well (as the 2003 blackout clearly demonstrated). the internet is said to be so fault tolerant that if part of it goes down the rest will just route around it - there are many paths to the same destination. obviously that's a property we want for power, communications, water, etc. it's something we should be designing for, and unfortunately, because it costs, it's something we need to pay for.
- ease of recovery is something we perhaps don't think quite as much about. how easy is it to replace physical equipment that no longer operates as intended? how easy is it to overwrite logical systems from backups? how many minutes, hours, or days does it take? aiming to minimize that time also serves to minimize the impact of anything unfortunate happening to the system in question.
- system hardening
- vulnerability research and patching is something that already enjoys a certain measure of success in consumer and enterprise environments. if a nation wants to protect its critical infrastructure then perhaps more money and energy should be poured into researching vulnerabilities in that critical infrastructure.
- eliminating or rethinking external connections (including both network connections and removable media) stands in direct opposition to the trend of hooking more and more of our most important systems up to one of the most dangerous networks on the planet (the internet). as with most things, the business incentives driving the current trend need to be accounted for. the cost-saving benefits of remote connections are understood, but there are other ways of achieving that goal without resorting to the internet - that's simply the cheapest/easiest option.
- whitelisting of code and possibly even data on critical infrastructure systems, because quite frankly why should new unknown material be introduced to these systems? it may make sense to occasionally and in a very controlled way apply fixes or make changes corresponding to changes in the industrial processes those systems are a part of, but in general those machines should be unchanging and that should probably be enforced. as a corollary, eliminating dual use is probably a good idea too. there's no reason you should be writing your TPS report on a machine that can control whether the lights stay on.
- early warning detection
- while some may argue otherwise, training of personnel actually can work, and i can think of no better example of this right now than the recent case of a planted malware-laden USB flash drive that an employee decided was suspicious and took directly to IT.
- sensors within protected systems performing things like integrity monitoring (to help minimize and scrutinize changes to those systems), behavioural profiling of software components, logging, etc.
- evasion
- disinformation can be useful in a couple of ways. it can raise the cost of successfully performing an attack by tricking the attacker into doing useless things, and it can also trick the attacker into doing something that sets off an alarm (i.e. they walk into a trap).
- decoy systems that look and act for all intents and purposes just like the real ones can reduce the impact and success of attacks, especially if they have the same warning sensors the production systems do, by turning the problem of attacking the right system into a game of chance for the attacker. holding out baits for the attacker to reveal their presence and/or intentions can certainly confer advantages on a defender.
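the decoy idea above can be sketched as a simple game of chance. a minimal monte carlo illustration follows; it assumes (and this is a big assumption) that the attacker truly cannot distinguish decoys from the real system, and that every decoy carries an alarm, so only hitting the real system first counts as a quiet success:

```python
import random

def attacker_hits_real_system(num_decoys: int, rng: random.Random) -> bool:
    # one trial: the attacker picks one of several indistinguishable
    # systems to hit; every decoy probe trips an alarm, so the attack
    # only succeeds quietly if the real system is picked first
    systems = ["real"] + ["decoy"] * num_decoys
    return rng.choice(systems) == "real"

def estimate_quiet_success(num_decoys: int, trials: int = 100_000) -> float:
    # monte carlo estimate of the attacker's quiet-success rate
    rng = random.Random(42)  # fixed seed so the estimate is repeatable
    hits = sum(attacker_hits_real_system(num_decoys, rng) for _ in range(trials))
    return hits / trials

# with no decoys the attacker always hits the real system quietly;
# with k indistinguishable decoys the quiet-success rate is roughly 1/(k+1)
print(f"{estimate_quiet_success(0):.2f} vs {estimate_quiet_success(4):.2f}")
```

in other words, each decoy added both lowers the attacker's odds and raises the chance they trip an alarm before reaching anything real - the defender wins value from decoys twice.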
i've made a few veiled (and not so veiled) references to sun tzu. while some people may argue that "the art of war" is over-played and not particularly relevant to information security, when it comes to warfare of any kind i think it's very relevant:
Sun Tzu said: The good fighters of old first put themselves beyond the possibility of defeat, and then waited for an opportunity of defeating the enemy.

that is to say, of course, that we need to take up a defensible position first before we start attacking. by most accounts (including president obama's) we aren't there yet.
Tags: cyberwar, cyberwarfare, cyberweapon, stuxnet, sun tzu
Sunday, June 03, 2012
correcting a rebuttal
so if you haven't read it yet, mikko hypponen wrote a non-apology for why his company and companies like his failed to catch the flame worm. in response, attrition.org's jericho wrote a rebuttal, taking mikko to task for the perceived bullshit in the aforementioned non-apology. while i think his heart was in the right place, a number of the specific criticisms jericho makes are unfortunately based on an understanding of the AV industry that is too shallow.
as for what else AV offers - there are tools (other than scanners) that typically aren't packaged with their consumer product because they require a higher level of technical expertise to operate than home users tend to have. IT folks in the enterprise are more likely to have the necessary know-how to use these tools. when mikko refers to consumer-grade anti-virus products in this context, he's talking about the technologies that require next to no knowledge to use, ones where the knowledge is baked in in the form of signatures. that kind of product isn't going to stand up well against nation-states. technologies which require more of the user, ones where it's the user him/herself looking for anomalies or where the user decides which code is safe to run or what programs should be allowed to do have a better chance of helping you defend yourself against a nation-state (at least when paired with a talented user of that technology).
also, once again, these pieces of malware had exploits for these vulnerabilities, not the vulnerabilities themselves. and even if the vulnerabilities were known, programmatically determining if an arbitrary program exploits a particular vulnerability is as reducible to the halting problem as programmatically determining if an arbitrary program self-replicates. if it's a known exploit it should be findable just as known-malware is findable, but otherwise don't hold your breath (unless you're willing to let it happen and detect it after the fact).
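the undecidability claim above can be sketched with the usual diagonalization (a paraphrase of fred cohen's classic argument about virus detection, not a formal proof; the notation is mine):

```latex
% suppose, for contradiction, a perfect, total detector D exists:
%   D(p) = \text{true} \iff p \text{ self-replicates when run}
% then construct the program q:
q:\quad \textbf{if } D(q)\ \textbf{then}\ \text{halt}\ \textbf{else}\ \text{replicate()}
```

if D(q) is true then q halts without replicating, and if D(q) is false then q replicates. either way D is wrong about q, so no exact, total detector can exist - only approximations that accept false positives, false negatives, or both. the same construction works for "does p exploit vulnerability v", which is the point being made above.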
i do think jericho has hit the nail on the head about evolving, though. what i don't think many people appreciate is the path which the industry's evolution needs to take. prevention is always going to have failures like what has happened with stuxnet or the flame worm or even the garden variety malware while it's still new. we as users need to grow up and start accepting that fact. there is no magical prevention fairy - if you're old enough to not believe in santa, the easter bunny, and the tooth fairy, then you're old enough to realize that prevention has limitations and always will have limitations.
the evolution the industry needs is to help users come to terms with this fact and to help them deal with it by providing tools for detecting preventative failures in addition to the tools they already provide for prevention. to a certain extent they used to do this, but those tools appear to have disappeared from the mainstream. at least a few vendors had integrity checkers back in the day. i remember one from kaspersky labs (which was painfully slow, as i recall) and one from frisk software. i've long wondered (and have my suspicions) about what happened to wolfgang stiller, because his product "integrity master" was excellent. i believe adinf (advanced diskinfoscope) is still around, but it's very much not part of the mainstream - most people have never heard of it, and certainly wouldn't think to use it because it's not part of a larger AV suite (which is what most people have been trained to look for).
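for what it's worth, the core idea behind those integrity checkers is simple enough to sketch. this is a minimal, assumption-laden version (real products like integrity master also protected their own code and baseline database, which this toy does not):

```python
import hashlib

def snapshot(paths):
    # record a baseline: path -> sha256 digest of the file's contents
    baseline = {}
    for path in paths:
        with open(path, "rb") as f:
            baseline[path] = hashlib.sha256(f.read()).hexdigest()
    return baseline

def changed(baseline):
    # report files that no longer match the baseline; a hit is a
    # detected preventative *failure* to investigate, not proof of malware
    findings = []
    for path, digest in baseline.items():
        try:
            with open(path, "rb") as f:
                current = hashlib.sha256(f.read()).hexdigest()
        except FileNotFoundError:
            findings.append((path, "missing"))
            continue
        if current != digest:
            findings.append((path, "modified"))
    return findings
```

notice that this approach needs no knowledge of any particular piece of malware - it detects that *something* changed, which is exactly the kind of after-the-fact detection that complements prevention.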
the strategy most people use to keep themselves safe is what they learn from the AV industry, but the industry mostly just tells them to use product X and product X is almost invariably prevention only (maybe with a dash of cleanup). it is probably the most simplistic and rudimentary strategy possible, but i don't hold much hope the AV industry will ever change that. in the process of teaching the public a more sophisticated security strategy they would in effect be making users more sophisticated and less susceptible to the marketing manipulations that vendors currently employ in their competition with each other. on top of that they'd have to stop using those manipulations themselves and risk losing their existing users to another vendor who hasn't stopped before they have a chance to make the users proof against those manipulations. the principles behind the PDR triad (prevention, detection, recovery) are foreign to most people, and among the rest many have perverted it into something almost as mindless as "just use X".
AV companies are businesses and, like all businesses, their primary concern is their own financial interests. those interests don't align with the security interests of their customers or the public at large, and no amount of hand wringing or foot stomping is going to change that. i don't see any way that the industry can drive the change that's needed while serving their own interests, and i'm not going to hold my breath waiting for them to sacrifice their own interests for the common good. the AV industry responds to market demand, and if you truly believe evolution is needed then it's incumbent upon you to help build demand for tools and techniques that go beyond prevention.
(update 2012/06/03 5:11pm - it appears that jericho has been remarkably responsive to comments from others on twitter so the rebuttal itself is in a state of flux. i'll have to wait and see how this turns out)
When we went digging through our archive for related samples of malware, we were surprised to find that we already had samples of Flame, dating back to 2010 and 2011, that we were unaware we possessed.

In the second paragraph of this bizarre apology letter, Mikko Hypponen clearly states that the antivirus company he works for found or detected Flame, as far back as 2010. In this very same article, he goes on to say that F-Secure and the antivirus industry failed to detect Flame three times, while saying that they simply can't detect malware like Flame. Huh? How did he miss this glaring contradiction? How did Wired editors miss this?

i can tell you right now how this apparent contradiction isn't actually a contradiction at all. AV companies receive many submissions per day, more than can possibly be examined by humans, and a great many of those submissions are not actually malware. AV companies use automated processes* that they develop in-house to determine if a sample is likely to be malware or not and (if possible) what malware family it belongs to. not everything that goes through these automated processes gets flagged as malware, and that technical failure to recognize the sample as malware 2 years ago is almost certainly the failure that mikko was trying to explain. they still keep the sample, mind you, even though they don't have reason to believe it's malware, and that's how mikko was able to find it in their archives.
(* those automated processes can't easily be distributed for customer use, by the way, as they require too much expertise to use, not to mention too much data as a comparison against a large corpus of other known malware is part of those processes)
really, when you think about it, if they'd known it was malware 2 years ago, they'd have added detection for it (even if they didn't look any closer - adding detection is also largely automated, especially for something that doesn't try to obscure itself or otherwise make the AV analyst's job harder) and then would have trumpeted the fact that they've been protecting their customers from this threat for years when it was finally revealed what a big deal it was. i think we've all seen this scenario play out before, and it would certainly serve their business goals better than admitting failure would.
hopefully this explains why this sort of rage...
Rage. Epic levels of rage at how bad this spin doctoring attempt is, and how fucking stupid you sound. YOU DETECTED THE GOD DAMN MALWARE. IT WAS SITTING IN YOUR REPOSITORY.

... is misplaced.
this frustration is something i see coming from a lot of security professionals. unfortunately the truth is that there are technical and theoretical limitations on what can be done with "detection". it frustrates me that others can't seem to recognize and accept this fact. detection requires knowledge of what one is looking for, whether that be a binary or a behaviour or something else. you can't look for something without knowing something about what you're looking for.

It wasn't the first time this has happened, either. Stuxnet went undetected for more than a year after it was unleashed in the wild, and was only discovered after an antivirus firm in Belarus was called in to look at machines in Iran that were having problems.

For those of us in the world of security, hearing an antivirus company say "we missed detecting malware" isn't funny, because the joke is so old and so very tragically true. The entire business is based on a reactionary model, where bad guys write malware, and AV companies write signatures sometime after it. For those infected by the malware, it's too little too late. It's like a couple of really inept bodyguards, who stand next to you while you're getting beaten up and say, "I will remember that guy's face next time and ask management to not let him in." Welcome to the world of antivirus.
while it is true that AV vendors claim their products have heuristics - technology that can detect as yet unknown malware - that is still based on knowledge gained from past malware. it's reasonably well suited to detecting derivative work, but anything truly novel (and stuxnet certainly was that) is going to get through.
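to make the limitation concrete, here's a toy illustration (purely illustrative - real scanners use far more than whole-file hashes, but the epistemic problem is the same at every level of sophistication):

```python
import hashlib

def fingerprint(sample: bytes) -> str:
    # stand-in for whatever "knowledge of past malware" a scanner encodes
    return hashlib.sha256(sample).hexdigest()

# the signature database only encodes knowledge of samples already seen
known_bad = {fingerprint(b"payload-v1")}

def detect(sample: bytes) -> bool:
    return fingerprint(sample) in known_bad

assert detect(b"payload-v1")       # known malware: caught
assert not detect(b"payload-v2")   # novel sample: invisible until...
known_bad.add(fingerprint(b"payload-v2"))  # ...a sample is obtained
assert detect(b"payload-v2")       # ...and only then does it become "known"
```

the gap between a sample existing in the wild and its fingerprint landing in the database is exactly the window that flame and stuxnet enjoyed for so long.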
When researchers dug back through their archives for anything similar to Stuxnet, they found that a zero-day exploit that was used in Stuxnet had been used before with another piece of malware, but had never been noticed at the time.

This statement is a red herring Mikko. Stuxnet used five vulnerabilities, only one of which was 0-day. One local privilege escalation used in Stuxnet was unknown at the time, the rest were documented vulnerabilities. If your excuse for the AV industry is that Stuxnet wasn't detected because it used a 0-day, your argument falls flat in that you should have detected the other three code execution / privilege escalation vulnerabilities (the fifth being default credentials).

a couple things wrong with this. first, what i think jericho meant to say is that AV should have detected the exploits rather than the vulnerabilities, as the vulnerabilities weren't in stuxnet but rather in the software that stuxnet was attacking.
second, as the provided chart clearly indicates, the disclosure date for 4 of the 5 vulnerabilities is after the discovery date for stuxnet (2010-06-17). i'm pretty sure that means the only previously documented vulnerability was the default password. either that or jericho is actually right and simply using poor/contradictory evidence to back up his point.
finally, and probably more to the point, mikko wasn't offering the 0-day exploit as an excuse for why AV failed to detect stuxnet. he was pointing to it as a previous example of something being missed despite part or all of it being in their archives. he was trying to explain how just because something is in their archives doesn't mean they are aware of its significance. try thinking of AV vendors as being like pack rats - the only reason they'd throw something away is if they've already got a copy of it (and even then, i'm not so sure).
The fact that the malware evaded detection proves how well the attackers did their job.

Again, it didn't evade detection, according to Mikko Hypponen, Chief Blah Blah at an Antivirus Company. He said so in the second paragraph of an article I read. That said, the fact is antivirus companies miss hundreds of pedestrian malware samples every day. Is this because the authors did so well, or that your business model and detection capabilities are flawed? One could easily argue that they are intentionally so (reference that bit about 4 billion dollars).

and again, jericho misinterpreted that second paragraph. now as for AV missing hundreds of pedestrian malware samples a day, i suspect the number is much higher, but that it doesn't miss them for long. a great deal of malware can't be detected before signatures for it are added, and those signatures can't be added until after the vendors get a sample. what sets flame and stuxnet apart from those cases is the length of time the malware was in the wild before signatures got added to the AV products. is this a flaw? once again, you can't look for something if you don't know anything about what you're looking for. in the sense that such a limitation prevents the system from being perfect i suppose it could be considered a flaw; but show me something that is perfect - you can't, can you, because nothing is perfect.
"And instead of trying to protect their code with custom packers and obfuscation engines - which might have drawn suspicion to them - they hid in plain sight. In the case of Flame, the attackers used SQLite, SSH, SSL and LUA libraries that made the code look more like a business database system than a piece of malware."

"Where to begin. First, custom packers and obfuscation engines have worked very well against antivirus software for a long time. I don't think that would have drawn any more suspicion. Second, Flame is 20 megs, around 20x more code than Stuxnet. In the world of antivirus, where you are usually scanning very small bits of obfuscated code, this should seem like a godsend. If it isn't using obfuscation, then what is the excuse for missing it? Are you really telling me that your industry is just now realizing the "hide in plain sight" method, in 2012?"

first, while custom packers have worked well against the AV software that's distributed to customers, they don't work nearly as well against the processes, procedures, and techniques used by AV companies when processing sample submissions. as a result such custom packers only prevent detection at the customer's site for a relatively small amount of time (though obviously long enough for some customers to get pwned).
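to see why a custom packer buys that window at the customer's site, consider this toy packer (purely illustrative - a single-byte XOR stands in for what real packers do with compression, encryption, and a decryptor stub). each build with a new key produces a sample that shares no bytes with the last one, so an existing whole-sample signature no longer matches, even though the behaviour is identical:

```python
import hashlib

# toy 'custom packer': prepend a key byte and XOR the payload with it.
# the format is invented for this sketch; real packers are far more
# elaborate, but the effect on byte-level signatures is the same.
def pack(payload: bytes, key: int) -> bytes:
    return bytes([key]) + bytes(b ^ key for b in payload)

def unpack(packed: bytes) -> bytes:
    key = packed[0]
    return bytes(b ^ key for b in packed[1:])

payload = b"identical malicious payload"
build_a = pack(payload, 0x41)   # key would be chosen fresh per build
build_b = pack(payload, 0x7f)

# same behaviour once unpacked, but different fingerprints, so a
# signature derived from build_a won't catch build_b:
print(unpack(build_a) == unpack(build_b) == payload)   # True
print(hashlib.sha256(build_a).hexdigest() ==
      hashlib.sha256(build_b).hexdigest())             # False
```

the sample-processing pipeline at the vendor defeats this because it unpacks (or behaviourally executes) submissions before classifying them - which is why the protection only lasts until the sample gets submitted.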
second, being huge and not obfuscated works against being recognized as malware precisely because it is so out of character for malware (and thus it's not just the AV industry that's just now realizing the "hide in plain sight" method). unless you're under the mistaken belief that AV companies still reverse engineer every sample they get each day (numbering in the tens of thousands, last i heard), it should be obvious how the lack of obfuscation provides no help in determining if something is malware.
"The truth is, consumer-grade antivirus products can't protect against targeted malware created by well-resourced nation-states with bulging budgets."

"No, the truth is, consumer-grade antivirus can't protect against garden variety malware, but it can apparently detect the well-resourced nation-state malware. Oh, that makes me wonder, what do you offer that is better than consumer-grade? Other than a bigger price tag, does it do a better job detecting malware?"

it's actually the reverse of what jericho says here; AV tends to add detection for garden variety malware quickly (thus closing the window of opportunity for the malware to successfully compromise the AV's customers and consequently protecting most of them from it) while they tend to add detection for state-sponsored malware quite slowly (and thus not doing much to protect anyone from it initially). i specifically express this as tendencies because there are exceptions (unless the energizer RAT, which took 3 years for people to recognize, was actually state-sponsored and nobody bothered to say anything).
as for what else AV offers - there are tools (other than scanners) that typically aren't packaged with their consumer product because they require a higher level of technical expertise to operate than home users tend to have. IT folks in the enterprise are more likely to have the necessary know-how to use these tools. when mikko refers to consumer-grade anti-virus products in this context, he's talking about the technologies that require next to no knowledge to use, ones where the knowledge is baked in in the form of signatures. that kind of product isn't going to stand up well against nation-states. technologies which require more of the user, ones where it's the user him/herself looking for anomalies or where the user decides which code is safe to run or what programs should be allowed to do have a better chance of helping you defend yourself against a nation-state (at least when paired with a talented user of that technology).
"And the zero-day exploits used in these attacks are unknown to antivirus companies by definition."

"And again, this is bullshit. Stuxnet had 3 known vulnerabilities (CVEs 2010-2568, 2010-3888/2010-3338, 2010-2729), 4 if you count the default credentials in SIMATIC that it could leverage (CVE 2010-2772). Flame apparently has 2 known vulnerabilities (CVEs 2010-2568, 2010-2729). Even worse, if antivirus companies had paid attention to these samples sitting in their archives, they may have ferreted out the vulnerabilities before they were eventually disclosed."

once again, i can't find any details suggesting these vulnerabilities associated with stuxnet were known before june 2010. i'm not doubting that that may be the case for one or more of them - i do seem to recall mention of something like that for a vulnerability exploited by stuxnet - but if it's in the documentation i can't find it.
also, once again, these pieces of malware had exploits for these vulnerabilities, not the vulnerabilities themselves. and even if the vulnerabilities were known, programmatically determining if an arbitrary program exploits a particular vulnerability is as reducible to the halting problem as programmatically determining if an arbitrary program self-replicates. if it's a known exploit it should be findable just as known-malware is findable, but otherwise don't hold your breath (unless you're willing to let it happen and detect it after the fact).
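the undecidability claim above can be made concrete with the usual diagonalization argument (in the spirit of fred cohen's result on virus detection - the detector below is a hypothetical stand-in invented for this sketch, not any real product):

```python
# sketch of the diagonalization: suppose a perfect detector existed,
# then construct a program that consults it and does the opposite.

def hypothetical_detector(source: str) -> bool:
    """pretend this perfectly decides 'does this program replicate?'.
    any concrete rule will do for the sketch; this one flags any
    program whose text mentions replicate()."""
    return "replicate()" in source

# the contrarian program asks the detector about its own source and
# then behaves contrary to the prediction:
contrarian = (
    "if hypothetical_detector(contrarian):\n"
    "    pass             # flagged as a replicator, so it never replicates\n"
    "else:\n"
    "    replicate()      # declared clean, so it replicates\n"
)

# whichever answer the detector gives, the contrarian makes it wrong;
# here it answers True, yet the flagged branch does nothing at all:
print(hypothetical_detector(contrarian))
```

the same construction works no matter what rule the detector uses, which is why 'does this program exploit vulnerability X' can't be decided in general for arbitrary programs - only known exploit code can be matched, after the fact.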
"As far as we can tell, before releasing their malicious codes to attack victims, the attackers tested them against all of the relevant antivirus products on the market to make sure that the malware wouldn't be detected."

"Way to spin more Mikko. This is standard operating procedure for many malware writers. I believe VirusTotal is the first stop for many bad guys."

this may have been true once upon a time, but the way virustotal now shares their samples with vendors makes them a poor choice for doing malware q/a. and they've been sharing samples for quite some time now.
"When a big malware event makes the news, it only helps you. Antivirus firms are the first to jump on the bandwagon claiming that more people need more antivirus software. They are the first to cry out that if everyone had their software, the computing world would be a much safer place. The reality? Even computers with antivirus software get popped by malware. They don't detect all of those banking trojans and email worms like you claimed in this article. They don't protect against the constant onslaught of new threats. Your industry, quite simply, has no reason to improve their detection routines."

as a matter of fact, the individual member companies in the industry have quite a compelling reason to improve their detection routines. AV companies may not need to be better than the bad guys, but they certainly need to try and be better than each other. to adapt the old adage, they aren't trying to outrun the bear, they're trying to outrun each other. they compete. do you think mcafee simply rolled over and accepted the number 2 market position and let symantec have #1? no of course not. they'll try and take that market share if they can and symantec meanwhile will try to hold on to their lead. part of that is done with marketing, but part of it is also done with technological advancement.
"I remember in the early 90's, the next big thing in the land of viruses was polymorphing virus code. Antivirus vendors were developing "heuristics" that would detect this polymorphing code. In almost 20 years, how has the development of that gone? Obviously not stellar. Antivirus companies spend their time cataloging signatures of known malware, because that sells. "We detect 40 squirrelzillion viruses, buy our software!" What happened if you developed reliable heuristics and marketed it? "Buy our software, once, and the advanced heuristics can catch just about any malware!" There goes your business model. So of course you don't want to evolve, you want to wallow in your big pile of shit because it is warm and comfortable. You can ignore that overwhelming smell of shit that comes with your industry, because of the money. Don't believe me? Let's hear it from the pro:"

actually, traditional polymorphism was attacked in a generic way through emulation. it worked pretty well, but has no effectiveness against server-side polymorphism due to lack of access to the server-side processes. as for developing reliable heuristics, why do you think malware writers perform malware q/a? their new pieces of malware will already evade known-malware scanners simply by virtue of not being known malware. the only reason to take that extra step of performing malware q/a is because the heuristics actually are fairly effective. what the heuristics are not is perfect. they can be fooled (in some cases quite easily, but fooling an automaton has never been considered difficult). i suggested a heuristic countermeasure to malware q/a but i have no idea if anyone actually tried it.
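for those who haven't seen it, the generic emulation attack on traditional polymorphism looks roughly like this (a toy sketch with an invented sample format - a real engine executes the decryptor stub in an emulated CPU rather than knowing where the key lives):

```python
# every polymorphic sample carries the same body under a different XOR
# key, so scanning the raw bytes fails, but 'emulating' the decryptor
# recovers a constant body that an ordinary signature can match.
BODY = b"constant viral body"
BODY_SIG = b"viral body"            # signature for the decrypted body

def make_polymorphic_sample(key: int) -> bytes:
    # invented layout for the sketch: [key byte][XOR-encrypted body]
    return bytes([key]) + bytes(b ^ key for b in BODY)

def emulate(sample: bytes) -> bytes:
    """stand-in for a CPU emulator running the sample's decryptor."""
    key = sample[0]
    return bytes(b ^ key for b in sample[1:])

samples = [make_polymorphic_sample(k) for k in (0x21, 0x5a, 0x77)]

print(any(BODY_SIG in s for s in samples))           # False: raw bytes all differ
print(all(BODY_SIG in emulate(s) for s in samples))  # True: emulation exposes the body
```

server-side polymorphism defeats this precisely because the mutation logic never ships with the sample - there is nothing on the client side to emulate.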
i do think jericho has hit the nail on the head about evolving, though. what i don't think many people appreciate is the path which the industry's evolution needs to take. prevention is always going to have failures like what has happened with stuxnet or the flame worm or even the garden variety malware while it's still new. we as users need to grow up and start accepting that fact. there is no magical prevention fairy - if you're old enough to not believe in santa, the easter bunny, and the tooth fairy, then you're old enough to realize that prevention has limitations and always will have limitations.
the evolution that the industry needs to do is to help users come to terms with this fact and help them deal with it by providing tools for detecting preventative failures in addition to the tools they already provide for prevention. to a certain extent they used to do this, but those tools appear to have disappeared from the mainstream. at least a few vendors had integrity checkers back in the day. i remember one from kaspersky labs (which was painfully slow as i recall), and one from frisk software. i've long wondered (and have my suspicions about) what happened to wolfgang stiller, because his product "integrity master" was excellent. i believe adinf (advanced diskinfoscope) is still around but it's very much not part of the mainstream - most people would have never heard of it, and certainly wouldn't think to use it because it's not part of a larger AV suite (as most people have been trained to look for).
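the core idea behind those integrity checkers is simple enough to sketch (my own minimal illustration, not the actual design of integrity master or adinf - real tools also track file metadata and protect their own baseline database from tampering):

```python
import hashlib
from pathlib import Path

# minimal integrity checker: record a baseline fingerprint for each
# watched file, then later report anything that changed or vanished.
def fingerprint(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def baseline(paths) -> dict:
    """take a snapshot of the current state of the watched files."""
    return {str(p): fingerprint(p) for p in paths}

def detect_changes(snapshot: dict) -> list:
    """compare the current state against the snapshot."""
    changed = []
    for name, digest in snapshot.items():
        p = Path(name)
        if not p.exists() or fingerprint(p) != digest:
            changed.append(name)
    return changed
```

note what this detects: any preventative failure that modifies a watched file, whether or not anyone has ever seen the malware before - which is exactly the after-the-fact detection capability the prevention-only suites gave up on.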
the strategy most people use to keep themselves safe is what they learn from the AV industry, but the industry mostly just tells them to use product X and product X is almost invariably prevention only (maybe with a dash of cleanup). it is probably the most simplistic and rudimentary strategy possible, but i don't hold much hope the AV industry will ever change that. in the process of teaching the public a more sophisticated security strategy they would in effect be making users more sophisticated and less susceptible to the marketing manipulations that vendors currently employ in their competition with each other. on top of that they'd have to stop using those manipulations themselves and risk losing their existing users to another vendor who hasn't stopped before they have a chance to make the users proof against those manipulations. the principles behind the PDR triad (prevention, detection, recovery) are foreign to most people, and among the rest many have perverted it into something almost as mindless as "just use X".
AV companies are businesses and like all businesses their primary concern is their own financial interests. those interests don't align with the security interests of their customers or the public at large and no amount of hand wringing or foot stomping is going to change that. i don't see any way that the industry can drive the change that's needed while serving their own interests, and i'm not going to hold my breath waiting for them to sacrifice their own interests for the common good. the AV industry responds to market demand and if you truly believe evolution is needed then it's incumbent upon you to help build demand for tools and techniques that go beyond prevention.
(update 2012/06/03 5:11pm - it appears that jericho has been remarkably responsive to comments from others on twitter so the rebuttal itself is in a state of flux. i'll have to wait and see how this turns out)
Tags:
flame,
jericho,
mikko hypponen,
stuxnet
Monday, September 27, 2010
stuxnet revisited
(some of you may have seen a very early draft of this in your RSS feeds - a slip of the finger caused a publishing mishap)
even though it wasn't that long ago that i posted a number of scathing criticisms of the stuxnet worm, new revelations about the worm and also some of the discussion in this computer world article that asks "is stuxnet the best malware ever?" (and many others i've seen since starting this post) have prompted me to re-examine my opinion on stuxnet.
there have actually been a number of really good technical analyses of stuxnet, but things seem to fall down when people try to turn their technical analysis into a tactical analysis.
what does stuxnet have?
- 4 0-day exploits
- additional non-0-day exploits
- the ability to determine if it's running on a plant floor vs. a corporate network so that it can avoid using some of those exploits in environments where the 'noise' they produce would be noticed by IPS/IDS
- a windows stealthkit (also erroneously known as a rootkit)
- a SCADA PLC stealthkit
- digital signatures on 2 versions of its code using private keys stolen from 2 different sources
- a centralized command and control communications channel (now controlled by Symantec)
- a P2P update communications channel
- the ability to alter the way the SCADA system controls a very particular (and as yet unknown) process
- the ability to spread itself over the internal network of an organization via network shares and vulnerabilities
- the ability to spread itself beyond the confines of a particular organization's network using removable media and the 0-day exploit for the LNK vulnerability (and an unorthodox implementation of autorun before the LNK exploit was added)
- at least 3 distinct versions (the one prior to the inclusion of the LNK 0-day, the first version containing the LNK 0-day compiled in march, and a second containing the LNK 0-day compiled in june and using a different digital signature)
- an infection counter to (in theory) limit the spread of the worm
the stealthkits are intended to provide stealth (obviously) so as to keep the window of opportunity for the attack to succeed open longer than it might otherwise be. this implies a persistent presence will be required for the attack to succeed.
the digital signatures on the code also provided some stealth from the heuristic engines of anti-malware products.
the IPS/IDS avoidance also qualifies as a kind of stealth.
the C&C channel (aside from making stuxnet a botnet on top of everything else) implies that the attack is not 100% autonomous. certain actions only happen when stuxnet receives commands to do them. as such, stuxnet will be waiting when it isn't being given commands and this will require a persistent presence.
the update functionality also implies an intent to maintain a persistent presence; and not just persistent over a short term, persistence over a long enough time frame that some part of the attack code becomes no longer fit for use and needs to be updated.
the release of the version with the second digital signature extended the useful lifetime of the signed binaries by several years, as the first was set to expire in june of this year.
as you can see, a considerable number of stuxnet's properties point towards a protracted operation. the payload shows a number of indications that a persistent presence on affected systems would be required for the intended attack scenario to play out as planned.
at the same time, however, the delivery mechanism thoroughly compromises that objective by being noisy, and ultimately it is the reason the worm and its significance were uncovered. with each new system a virus tries to infect, the probability that the infection will fail catastrophically (and thereby draw attention to the virus' presence) goes up. while there was a mechanism in place meant to limit the self-replication (and therefore the probability of that catastrophic failure occurring), a simplistic infection counter was obviously not enough to keep the worm from spreading far and wide and drawing attention to itself. you don't take this risk unless you can't be more targeted (or unless you don't know what you're doing).
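to see why a simplistic per-copy counter can't contain the spread, consider this toy model (the 3-infections-per-copy figure matches what was reported for stuxnet's USB limiter, but everything else here is a simplifying assumption of my own): even if every copy honours its counter, the total population still grows geometrically.

```python
# toy spread model: each infected machine may infect at most `limit`
# new machines before its counter runs out. the counter caps fan-out
# per copy, not the size of the population.
def population_after(generations: int, limit: int = 3) -> int:
    infected = 1    # patient zero
    frontier = 1    # machines whose counters are still unspent
    for _ in range(generations):
        new = frontier * limit   # every frontier machine uses its full counter
        infected += new
        frontier = new
    return infected

# after 10 generations a per-copy cap of 3 still yields tens of
# thousands of infections (1 + 3 + 9 + ... + 3^10):
print(population_after(10))  # 88573
```

what would have capped the population is a limit on total generations or an environmental check in the spreader, not a per-copy counter - which is part of why the worm drew so much attention to itself.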
once the worm was found out, the fact that it was a technical marvel worked entirely against it. if stuxnet had simply been just another dumb autorun worm it probably would have remained in obscurity (and indeed the earlier version that was an autorun worm did remain in obscurity despite having been discovered previously), but because of the novelty of its execution technique (the LNK 0-day) additional attention was paid to it and the SCADA-targeting payload was discovered and everything snowballed from there.
i have stated previously that i consider stuxnet a failure, and that much hasn't changed. the fact that it's a technical marvel doesn't mean it can't also be a tactical failure. the history of viruses is littered with examples of technically sophisticated viruses that never even made it into the wild while buggy, braindead viruses somehow proliferated.
stuxnet at least made it into the wild, but the conflicting objectives between its payload and its distribution mechanism (one was targeted, the other was not; one was silent and patient, the other was more like a smash and grab) mean that if the people behind it haven't already accomplished their objective, it's unlikely they will now. the C&C channel is already lost to them, and the P2P channel will almost certainly be monitored for new versions of the worm with new instructions and/or a new C&C channel. the entire population of infected machines they built up is now a complete write-off because they didn't know how to maintain harmony between the distribution mechanism and the payload.
furthermore, since they were still releasing new versions as late as june 14, it stands to reason they had not yet achieved their objective at that point.
to date, siemens have only found 14-15 SCADA systems that have been infected and, as i understand it, none have had their PLCs altered. there really doesn't seem to be much evidence to suggest stuxnet's creators achieved their goals.
while there's a lot of speculation floating around that i don't agree with, i am willing to speculate that the people behind stuxnet are relatively new to the world of doing bad things with computers - i don't mean vulnerability research, by the way, since they clearly have some talented people in that arena - i'm talking about being new at mounting actual attacks. cybercriminals have adopted a proven strategy of 'keep it simple, stupid' (KISS) and it has served them well. the stuxnet creators, on the other hand, tried too hard, made their attack too complex, and generally didn't show the same kind of polish or experience at launching a successful targeted attack that cybercriminals have shown.
i think being relatively new at this is actually compatible with the possibility of them being state-sponsored. while we often like to attribute supernatural powers to government efforts in the technical arena (ex. NSA's cryptographic capabilities are often believed to be light years beyond what the private sector can do), the US government has made it abundantly clear that (especially when it comes to attacks in cyberspace) that faith is not always well founded. i don't expect nation states to have the experience that cybercriminals do because they aren't out there actually mounting attacks as frequently as cybercriminals are (if they were, the 'pain' suffered as a result of all those attacks would have triggered a war by now).
after being reminded of the US military's incompetence in 2008, i'm now more willing to believe that this failure was the work of a nation state. however, i'm still not completely ruling out other possibilities. while the industrial process altering payload does indeed change this from an issue of espionage to an issue of sabotage, that doesn't (in my mind) rule out rivalry between businesses. certainly legitimate businesses are not generally known for attempting to sabotage their competitors or others, but less legitimate businesses (say those with ties to traditional organized crime) certainly are.
the one piece of speculation i absolutely cannot abide, however, is the one about the target of the stuxnet worm. the idea that a nation was the target is ridiculous - do you know how easy it would have been to limit the worm to only spread on computers running inside that nation? surely the geniuses behind it could have made the distribution mechanism much more targeted than it was had a nation been the target (or had a nation contained the entire target population). this more recent theory that it was targeted at a particular iranian nuclear facility means that whoever was behind it was willing to risk causing an environmental disaster, and you'd tend to think those who'd have something to lose by being nearby would know better than to try such a thing. one of the most ridiculous ideas is that stuxnet was targeted at a single system, unique in all the world, and that it's got a fingerprint of that system that it's looking for. in order to generate such a fingerprint in the first place, the attackers would need unprecedented access to such a target; the kind of access that would completely obviate the need for an untargeted distribution mechanism.
but people persist in thinking that iran was in some way the target, and i think i know why. it's because people are thinking of stuxnet like it's some sort of military-grade cyber missile. they see a pocket of high infection density and think they're looking at the electronic fallout of a cyber bomb. under these conventional kinetic warfare sorts of analogies you expect the target to be somewhere around the epicenter. but, and i cannot stress this enough, this is the WRONG mindset to use when you're talking about a virus and we are talking about a virus! if you're thinking about this in those sorts of kinetic warfare terms then your head is in entirely the wrong place (interpret that as you will). computer viruses behave like a disease - stop thinking about ground zero and start thinking about patient zero - stop thinking of blast radius and start thinking about epidemiology. think about how difficult it is to control or even predict the movements of a biological vector in a biological attack. without an agent friendly to the cause doing the dispersal, you can't know where it's going to go first or most often - and even if you do have a friendly agent doing the dispersal you can't know where the disease will spread to afterward or where it will thrive best.
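the epidemiological point can be made concrete with a toy model. the regions, spread rates, and leak fraction below are all invented for illustration - the only claim is structural: seed an outbreak in one region and the densest pocket of infection can still end up somewhere else entirely, wherever the worm enjoys the greatest reproductive advantage.

```python
# toy model: infection density does not reveal where an outbreak started.
# regions differ only in local spread rate (reproductive advantage);
# a small, fixed fraction of contacts crosses region boundaries.

def simulate(betas, seed_region, steps, leak=0.01):
    infected = {r: 0.0 for r in betas}
    infected[seed_region] = 1.0          # patient zero lives here
    for _ in range(steps):
        total = sum(infected.values())
        infected = {
            # local compounding growth plus a trickle of cross-region spread
            r: infected[r] * (1 + betas[r]) + leak * betas[r] * total
            for r in betas
        }
    return infected

# the outbreak starts in region A, but region B offers a better habitat
betas = {"A": 0.05, "B": 0.40, "C": 0.10}
result = simulate(betas, seed_region="A", steps=30)
densest = max(result, key=result.get)
print(densest)  # B dominates despite patient zero being in A
```

the 'epicenter' reasoning would name B the target here, and it would be wrong.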
you cannot tell who or what the target was by looking at where the most infected machines were. that only tells you where the worm enjoyed the most reproductive advantage - and most importantly (as kaspersky's alexander gostev rightly points out) the infection populations change over time. the only way you're going to find out what the target was forensically is by finding the PLC(s) it was designed to alter. then, and only then, will you actually know what the target was - and without knowing who/what the actual target was, you cannot make reasonable guesses about the specific motivations behind the attack, and by extension you cannot infer attribution based on who had the most to gain.
but maybe these questions should be reversed. instead of trying to figure out the likely culprit based on who the target was, perhaps it would be better to track down the culprit and ask them who the target is. the fact that the two private keys were stolen from two different companies in the same area in taiwan seems unlikely to be a coincidence. someone there is involved - even if their sole involvement was selling the keys they stole to a 3rd party, that gets you a lot closer to the people responsible than searching for a PLC needle in a haystack.
Monday, August 09, 2010
numbers, context, and background
one of the things i've come across while reading various sources is an attempt to pin down an intended target nation for the stuxnet worm based on prevalence data. the theory goes something like 'since nation X is where most instances of stuxnet are found, therefore nation X was the intended target (because obviously more work was put into spreading it there)'.
this theory has some problems, however. first and foremost is that not all the numbers agree. while we have symantec saying that ~60% are in Iran, we also have eset saying that ~60% are in the US just 3 days earlier. they can't both be right - or can they? and if they are, what are the implications for the targeted nation theory?
as is always the case when there are contradictory numbers, we have to look at how those numbers were arrived at. in fact, even when there aren't contradictory numbers, we should still be paying close attention to how those numbers were arrived at.
close examination of vikram thakur's post on the symantec site suggests that their number represents actual infected machines trying to connect to their C&C server (on top of everything else, stuxnet is also a botnet) during a 3 day period between july 19 and july 22. they were able to gather this data because they redirected the domains hosting the C&C servers to themselves so it seems like it would be a pretty accurate snapshot of the pool of infected machines at a particular point in time.
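a sinkhole census of this kind amounts to tallying the unique hosts that phone home to the redirected domains during the window. a sketch, with invented log entries (the real data would be connection logs keyed on something identifying the machine):

```python
# sinkhole-style counting: once the C&C domains resolve to your own server,
# each unique host that phones home in the window is one infected machine.
from collections import Counter

connections = [  # (source_ip, country) pairs pulled from sinkhole logs
    ("10.0.0.1", "IR"), ("10.0.0.1", "IR"),   # repeat contact, same machine
    ("10.0.0.2", "IR"), ("10.0.0.3", "US"),
]

unique_hosts = {(ip, cc) for ip, cc in connections}   # dedupe repeat contacts
by_country = Counter(cc for _, cc in unique_hosts)
total = sum(by_country.values())
shares = {cc: n / total for cc, n in by_country.items()}
print(shares)  # per-country share of the infected pool at that point in time
```

note that repeat connections from one machine collapse to a single entry - this counts infected machines, not infection events.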
eset's numbers in david harley's post came from their installed clients throughout the world. their cloud-based technology reported the instances - however, since stuxnet employs stealth it's more likely that rather than reporting infected machines (where it would be active and hidden) it's actually reporting infected USB drives. it could also be reporting both if eset's products can see through the stealth, but the key point is that eset's numbers almost certainly include infected USB drives while symantec's do not. the USB numbers are important because that's how this worm spreads and if one were going to work on targeting a particular nation, spreading infected USB drives in that nation would be the way to do it.
furthermore eset's numbers appear to be from the time detection for the worm was added until the time the statistic was reported, rather than just the 3 day period covered by symantec's figures. this means that eset's figures represent a measurement of how many instances of the worm there were over its detected lifetime to that point, while symantec's figures represent a measurement of how many infected machines remained at that point. this is important because by the time symantec started collecting its data, negative population controls had already been in effect for some time.
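the difference between the two measurements can be illustrated with a toy model (all rates invented): one region generates many detections but cleans up aggressively, another generates fewer detections but retains its infections, so each region 'leads' depending on which metric you pick.

```python
# cumulative detections over the worm's lifetime (eset-style) vs. a
# snapshot of machines still infected at the end (symantec-style sinkhole)

def run(new_infections_per_day, cleanup_rate, days):
    cumulative, active = 0, 0.0
    for _ in range(days):
        active += new_infections_per_day
        cumulative += new_infections_per_day
        active -= active * cleanup_rate   # av products remove a share daily
    return cumulative, active

# region X: lots of detections but aggressive population controls
cum_x, act_x = run(new_infections_per_day=100, cleanup_rate=0.5, days=30)
# region Y: fewer detections but weak population controls
cum_y, act_y = run(new_infections_per_day=40, cleanup_rate=0.05, days=30)

print(cum_x > cum_y)   # X leads on lifetime detections
print(act_y > act_x)   # Y leads on the point-in-time snapshot
```

both measurements are accurate; they just answer different questions.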
these controls, like the worms they intend to control, are not necessarily uniformly effective across the entire globe. some products have greater market share in some regions than they do in others, and the dominant product in certain regions might be poor at controlling particular worms and thus allow those worms greater reproductive advantages in those regions than they might find in others. the presence of population controls like anti-malware software affects both the death and birth rates of worm instances and, as anyone who's heard johnny long discuss hackers for charity knows, such controls are not uniformly present or effective across all regions.
there are actually a variety of other factors, in addition to such controls, that contribute to how well and in what way a worm or virus spreads, as discussed in some detail by jeff kephart, david chess, and steve white in "Computers and Epidemiology". some of these factors, like the degree of connectedness of susceptible hosts (and how often adequate contact between such hosts happens), can be influenced by computing culture, which in turn can be influenced by culture in the more general sense, geopolitical climate, and even socioeconomic considerations. hypothetically speaking, a nation that is cut off from US technologies due to trade sanctions (as mentioned by brian krebs) could well exhibit a higher rate of software sharing as part of 'alternative' procurement techniques and in so doing raise the region above the epidemic threshold for some unspecified worm.
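the epidemic threshold idea can be sketched as a simple birth-rate/death-rate ratio. the numbers below are invented; the point is that the same worm, facing the same cleanup rate, can be sub-epidemic in one computing culture and epidemic in a higher-contact one.

```python
# SIS-style threshold: a worm sustains an epidemic roughly when its
# effective birth rate (per-contact infection rate times average number
# of contacts) exceeds its death rate (cure/cleanup rate).

def epidemic_ratio(beta, avg_contacts, delta):
    """birth rate / death rate; > 1 means the worm can sustain itself."""
    return (beta * avg_contacts) / delta

# same worm, same av cleanup rate, different software-sharing cultures
low_sharing  = epidemic_ratio(beta=0.02, avg_contacts=10, delta=0.3)
high_sharing = epidemic_ratio(beta=0.02, avg_contacts=40, delta=0.3)

print(low_sharing < 1 <= high_sharing)  # only the high-contact region crosses the threshold
```

nothing about the worm itself changed between the two cases - only the connectedness of the host population.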
ultimately both symantec and eset could be right since they were measuring different things over different periods of time. what that means for the targeted nation theory is that things aren't as clear-cut as either set of numbers would suggest on their own. what we do know is that stuxnet appeared to enjoy more reproductive success in Iran than elsewhere. whether that's down to purely epidemiological factors or intentional injection of the worm into the local computing population by a malicious actor is unclear, but eset's data would support an argument against the latter option as the effort seems to have been expended elsewhere. on the other hand, if we were to entertain the notion that the US was the target based on the amount of infected materials floating around the computing population, then we are left once again with the conclusion that stuxnet was a failure since in spite of all that effort the prevalence of actual infected machines in the US was minuscule.
i don't think much can be read into the fact that there were more infected machines in Iran than elsewhere since such pockets of infection are actually normal - especially for self-replicating malware that must be spread by physical media. some region had to draw the short straw and this time it was Iran.
Friday, August 06, 2010
why the stuxnet worm is an abject failure
in a previous post i called the stuxnet worm an abject failure, and intimated that it was because it was a worm. some might wonder why i said that when there have been a number of worms in recent memory (conficker, for example) that have spread far and wide and were considered successful, not just in spite of becoming widespread, but rather because of it. why should the stuxnet worm be subjected to an apparent double standard?
to understand why, you have to look at who the intended pool of victims is for the malware in question. for most common worms or viruses the goal is to infect or infest as many machines as possible. there is no special subset that is being targeted, they're just looking to add computing resources to their attack platform; where that attack platform can be used for anything from simply infecting/infesting still more machines to something as complex as building a botnet. anything where more is better.
in this kind of scenario there is a rather large number of machines for which public knowledge of the attack makes little or no difference with respect to whether the attack succeeds. the virus or worm will be able to thrive within this population because the machines are poorly administered, have misconfigured anti-malware software (assuming it's present and enabled at all), and no special mitigating steps have been taken to deal with the possibility of getting infected by the virus or worm (like disabling autorun, for example). if you guessed these were home user machines you'd be right (although there are a significant number of these in other environments as well). these are generally considered low value targets so the people charged with taking care of them generally don't do all that good a job.
the stuxnet worm, in contrast to this, was targeting machines that controlled industrial processes like manufacturing, power generation, water treatment, etc. these are, quite clearly, near the opposite end of the spectrum of valuable targets. as a result the people tasked with taking care of these machines are generally trained professionals who, in all likelihood, will take special steps to mitigate the threat of a new attack that specifically targets those machines once that attack becomes public knowledge.
and therein lies the rub. self-replicating malware does not stay below the radar. malware that makes copies of itself always winds up drawing attention to itself. each copy it makes increases the chance that someone will catch on, and when someone does that's the first step in the inevitable process of the attack becoming public knowledge.
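the arithmetic behind this is worth making explicit. with any fixed (even tiny) chance of a given copy being noticed, the probability that the attack stays secret decays geometrically with the number of copies - a sketch, using an invented per-copy discovery probability:

```python
# each copy a self-replicating program makes is another independent
# chance of being noticed; the odds of staying hidden collapse as the
# population grows, no matter how small the per-copy probability is.

def p_discovered(per_copy_prob, copies):
    # complement of every single copy going unnoticed
    return 1 - (1 - per_copy_prob) ** copies

for n in (10, 1_000, 100_000):
    print(n, p_discovered(0.0001, n))
```

at a hundred thousand copies even a one-in-ten-thousand per-copy chance makes discovery a near certainty - which is why self-replication and covertness work against each other.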
consider the difference between plant eating animals and meat eating animals. plants don't generally react to protect themselves when they sense a plant eater coming near them, but prey animals definitely do react to protect themselves so meat eaters have to behave in a way that prevents their prey from knowing they're coming. if meat eaters bumbled along like plant eaters they'd never catch anything and eventually starve. they'd be abject failures just as the stuxnet worm is.
Wednesday, August 04, 2010
digital signatures are not a poor man's whitelist
back when mcafee had their catastrophic false positive i made the suggestion that av vendors in general (and mcafee in particular) could use a whitelist of critical system files to avoid false alarms that render systems unbootable/unusable. basically the idea being that known trusted files could be ignored by anti-malware components prone to false alarm. in the comments of that post didier stevens suggested that checking the digital signatures of files could be used as an alternative. at first i thought that was an acceptable alternative to developing a proper whitelist of critical files, but recent events have made me rethink that.
as kaspersky's alexander gostev described, a digital signature will cause a file to be regarded as trusted by security software, much like didier suggested in his comment, and if it's malware that means it will be effectively hidden from the anti-malware software.
i think that's a pretty glaring problem and underscores the fact that digital signatures are a poor substitute for a proper whitelist. a proper whitelist is constructed of items that are known and trusted, but with digital signatures it's neither the anti-malware vendor nor the user who's constructing this implied whitelist. instead, the implicit whitelist is constructed jointly by every tom, dick, and harry who happens by hook or by crook to get his/her hands on a valid digital certificate. that means the people who make the determination of whether a file is safe and/or trustworthy are (generally) the same ones who created it. however, those people could lie, they could fail to actually check the safety/integrity of the file they're signing, or they might not even be qualified to check the safety/integrity of the file they're signing. even if the owner of the digital certificate used to sign the file is a trustworthy entity, that doesn't mean the file is also trustworthy. first and foremost, trust is not transitive. besides that, though, as the stuxnet example shows us, the certificate can fall into the wrong hands. treating digital signatures as whitelist entries creates a situation where altogether too many entities have influence over what is considered trusted and safe.
thus digital signatures cannot tell us what we need a whitelist to tell us, namely that the file is trusted and safe. the presence of a valid digital signature only tells us two things: that the file was signed with certificate X (which may or may not be under the exclusive control of the party it was issued to), and that the file hasn't changed since it was signed. whether it's safe, whether it's fit for use, is entirely outside the scope of what a digital signature can tell us.
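to make the distinction concrete, here's a minimal sketch of what a proper whitelist check looks like - membership of a file's hash in a vendor-curated set of known-good files. the file contents below are stand-ins; the point is that nothing about a signature enters into the decision.

```python
# a proper whitelist answers "is this exact file known and trusted?" -
# something a signature check alone cannot answer.
import hashlib

# built by the av vendor from files it has actually verified as safe
known_good = [b"contents of a verified critical system file"]  # stand-in data
TRUSTED_HASHES = {hashlib.sha256(f).hexdigest() for f in known_good}

def is_whitelisted(file_bytes: bytes) -> bool:
    return hashlib.sha256(file_bytes).hexdigest() in TRUSTED_HASHES

print(is_whitelisted(known_good[0]))      # the exact known file passes
print(is_whitelisted(b"signed malware"))  # a valid signature changes nothing
```

the trust decision here belongs entirely to whoever curated the hash set, not to whoever holds a code-signing certificate.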
i think the idea of using a whitelist to avoid false alarms is a good one, but using digital signatures in its place is a shortcut. av vendors need to stop cutting corners and implement proper whitelists for this particular application. there are already tens of thousands of digitally signed malware samples in f-secure's collection so i'm at a loss trying to figure out why this technique of false positive avoidance hasn't been abandoned already. now that news has spread that detections of stuxnet didn't start until after the digital signature expired, malware authors will surely be looking to exploit this behaviour more and more.
Tuesday, August 03, 2010
APT or WAPT?
... and by WAPT i mean wannabe APT.
so, one of the more colourful stories this past little while has been the stuxnet worm. apparently some people are having fun speculating about whether it's an example of a nation state targeting the critical infrastructure of another.
really i think we're just so uncertain about APT style threats that we're trying to find examples so as to make things more clear. does this case qualify? that's the question of the hour, isn't it. i guess i'll throw in my own 2 cents in the speculation game.
the components of this malware certainly lend themselves to a conclusion that it was part of an attack launched by an APT level of attacker. it's got a 0-day exploit to auto-execute itself when the directory containing the malware is viewed in explorer, a stealthkit to hide its presence, digitally signed binaries using digital certificates from multiple well known companies to cause anti-malware software to overlook them, and a payload that targets a particular brand of SCADA (supervisory control and data acquisition) system.
do those properties really mean what they seem to mean though?
- we assume that the 0-day security flaw was developed by the attacker, which would seem to make the attacker technically advanced - but it is conceivable that the vulnerability was instead purchased, presumably for a very high price considering the calibre of the vulnerability, so this could instead be an example of the attacker being well funded and thus probably satisfying the persistent criteria for APT.
- stealthkits aren't really that earth shattering these days, there are books and websites dedicated to teaching the reader how to make them so there's not much that can be inferred just from that.
- getting access to a well known company's digital certificate in order to sign one's binaries seems like a rather mysterious feat that could point to advanced skills, or insider access gained as part of the kind of detailed plan you'd expect from someone with persistence, except that it could also have been done with a turnkey crimeware kit like zeus. getting access to the digital certificates of two companies in the same geographic location makes the probabilities of advanced skills or persistence much less likely and simple opportunity much more likely.
- the SCADA-specific payload rather unambiguously points to a targeted attack (which is what a persistent threat would carry out), and also suggests access to similar SCADA systems for the purposes of R&D (which would probably tend to imply some financial backing), but it was put in a piece of self-replicating malware (malware that spreads itself in an automated fashion) which is pretty much the antithesis of targeted.
the people behind stuxnet certainly seem likely to have had financial backing, and the targeting conclusion seems unavoidable, but if they were advanced at all it was in a purely academic sense. they may have come up with the 0-day exploit and thereby qualified as researchers of some skill but they clearly don't have experience designing full attack campaigns from scratch. they don't understand the strategic strengths and weaknesses of the pieces they cobbled together and seem to have a somewhat antiquated idea of the malware threat landscape. if they were backed by a government, officially or otherwise, then that government must be in pretty dire straits to have employed the services of someone so green. it could, i suppose, have also been an attempt at industrial espionage, but either way the attackers' inexperience has tipped off the entire world to their efforts and that's pretty much an abject failure.