so if you haven't read it yet, mikko hypponen wrote a non-apology for why his company, and companies like his, failed to catch the flame worm. in response, attrition.org's jericho wrote a rebuttal taking mikko to task for the perceived bullshit in the aforementioned non-apology. while i think his heart was in the right place, a number of the specific criticisms jericho makes are unfortunately based on too shallow an understanding of the AV industry.
When we went digging through our archive for related samples of malware, we were surprised to find that we already had samples of Flame, dating back to 2010 and 2011, that we were unaware we possessed.
In the second paragraph of this bizarre apology letter, Mikko Hypponen clearly states that the antivirus company he works for found or detected Flame, as far back as 2010. In this very same article, he goes on to say that F-Secure and the antivirus industry failed to detect Flame three times, while saying that they simply can't detect malware like Flame. Huh? How did he miss this glaring contradiction? How did Wired editors miss this?
i can tell you right now how this apparent contradiction isn't actually a contradiction at all. AV companies receive many submissions per day - more than can possibly be examined by humans - and a great many of those submissions are not actually malware. AV companies use automated processes* that they develop in-house to determine whether a sample is likely to be malware and (if possible) what malware family it belongs to. not everything that goes through these automated processes gets flagged as malware, and that technical failure to recognize the sample as malware 2 years ago is almost certainly the failure mikko was trying to explain. they still keep the sample, mind you, even when they have no reason to believe it's malware, and that's how mikko was able to find it in their archives.
(* those automated processes can't easily be distributed for customer use, by the way, as they require too much expertise to operate, not to mention too much data - comparison against a large corpus of other known malware is part of those processes)
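(to make that concrete, here's a toy sketch in python of the general shape of automated sample triage. every name, threshold, and check here is made up by me for illustration - real back-ends are vastly more elaborate and data-hungry - but the important part is the last branch: a sample can be kept forever without ever being flagged.)

import hashlib
import math
from collections import Counter

known_malware_hashes: set[str] = set()   # imagine millions of entries here

def entropy(data: bytes) -> float:
    # shannon entropy in bits per byte - packed/encrypted code scores near 8
    if not data:
        return 0.0
    counts = Counter(data)
    return -sum((c / len(data)) * math.log2(c / len(data)) for c in counts.values())

def triage(sample: bytes) -> str:
    if hashlib.sha256(sample).hexdigest() in known_malware_hashes:
        return "known-malware"           # exact match: add detection, done
    score = 0
    if entropy(sample) > 7.2:            # looks packed or encrypted
        score += 2
    if b"CreateRemoteThread" in sample:  # one crude 'suspicious import' check
        score += 2
    if score >= 3:
        return "probable-malware"        # queue for signature generation
    return "archive-only"                # kept on file but never flagged -
                                         # this is where flame sat for 2 years

print(triage(b"MZ" + b"just a big boring business app" * 1000))  # archive-only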
really, when you think about it, if they'd known it was malware 2 years ago they'd have added detection for it (even if they didn't look any closer - adding detection is also largely automated, especially for something that doesn't try to obscure itself or otherwise make the AV analyst's job harder). then, when it was finally revealed what a big deal flame was, they'd have trumpeted the fact that they'd been protecting their customers from this threat for years. i think we've all seen this scenario play out before, and it would certainly serve their business goals better than admitting failure would.
hopefully this explains why this sort of rage...
Rage. Epic levels of rage at how bad this spin doctoring attempt is, and how fucking stupid you sound. YOU DETECTED THE GOD DAMN MALWARE. IT WAS SITTING IN YOUR REPOSITORY.
... is misplaced.
It wasn't the first time this has happened, either. Stuxnet went undetected for more than a year after it was unleashed in the wild, and was only discovered after an antivirus firm in Belarus was called in to look at machines in Iran that were having problems.
For those of us in the world of security, hearing an antivirus company say "we missed detecting malware" isn't funny, because the joke is so old and so very tragically true. The entire business is based on a reactionary model, where bad guys write malware, and AV companies write signatures sometime after it. For those infected by the malware, it's too little too late. It's like a couple of really inept bodyguards, who stand next to you while you're getting beaten up and say, "I will remember that guy's face next time and ask management to not let him in." Welcome to the world of antivirus.
this frustration is something i see coming from a lot of security professionals. unfortunately the truth is that there are technical and theoretical limitations on what can be done with "detection", and it frustrates me that others can't seem to recognize and accept this fact. detection requires knowledge of what one is looking for, whether that be a binary or a behaviour or something else. you can't look for something without knowing something about what you're looking for.
while it is true that AV vendors claim their products have heuristics - technology that can detect as yet unknown malware - that is still based on knowledge gained from past malware. it's reasonably well suited to detecting derivative work, but anything truly novel (and stuxnet certainly was that) is going to get through.
When researchers dug back through their archives for anything similar to Stuxnet, they found that a zero-day exploit that was used in Stuxnet had been used before with another piece of malware, but had never been noticed at the time.
This statement is a red herring Mikko. Stuxnet used five vulnerabilities, only one of which was 0-day. One local privilege escalation used in Stuxnet was unknown at the time, the rest were documented vulnerabilities. If your excuse for the AV industry is that Stuxnet wasn't detected because it used a 0-day, your argument falls flat in that you should have detected the other three code execution / privilege escalation vulnerabilities (the fifth being default credentials).
a couple of things are wrong with this. first, what i think jericho meant to say is that AV should have detected the exploits rather than the vulnerabilities, as the vulnerabilities weren't in stuxnet but rather in the software that stuxnet was attacking. second, as the provided chart clearly indicates, the disclosure date for 4 of the 5 vulnerabilities is after the discovery date for stuxnet (2010-06-17). i'm pretty sure that means the only previously documented vulnerability was the default password - either that or jericho is actually right and simply using poor/contradictory evidence to back up his point.
finally, and probably more to the point, mikko wasn't offering the 0-day exploit as an excuse for why AV failed to detect stuxnet. he was pointing to it as a previous example of something being missed despite part or all of it being in their archives. he was trying to explain that just because something is in their archives doesn't mean they're aware of its significance. try thinking of AV vendors as being like pack rats - the only reason they'd throw something away is if they've already got a copy of it (and even then, i'm not so sure).
The fact that the malware evaded detection proves how well the attackers did their job.
Again, it didn't evade detection, according to Mikko Hypponen, Chief Blah Blah at an Antivirus Company. He said so in the second paragraph of an article I read. That said, the fact is antivirus companies miss hundreds of pedestrian malware samples every day. Is this because the authors did so well, or that your business model and detection capabilities are flawed? One could easily argue that they are intentionally so (reference that bit about 4 billion dollars).
and again, jericho misinterpreted that second paragraph. now, as for AV missing hundreds of pedestrian malware samples a day: i suspect the number is much higher, but also that it doesn't miss them for long. a great deal of malware can't be detected before signatures for it are added, and those signatures can't be added until after the vendors get a sample. what sets flame and stuxnet apart from those cases is the length of time the malware was in the wild before signatures got added to the AV products. is this a flaw? once again, you can't look for something if you don't know anything about what you're looking for. in the sense that such a limitation prevents the system from being perfect, i suppose it could be considered a flaw; but show me something that is perfect - you can't, can you, because nothing is perfect.
And instead of trying to protect their code with custom packers and obfuscation engines - which might have drawn suspicion to them - they hid in plain sight. In the case of Flame, the attackers used SQLite, SSH, SSL and LUA libraries that made the code look more like a business database system than a piece of malware.
Where to begin. First, custom packers and obfuscation engines have worked very well against antivirus software for a long time. I don't think that would have drawn any more suspicion. Second, Flame is 20 megs, around 20x more code than Stuxnet. In the world of antivirus, where you are usually scanning very small bits of obfuscated code, this should seem like a godsend. If it isn't using obfuscation, then what is the excuse for missing it? Are you really telling me that your industry is just now realizing the "hide in plain sight" method, in 2012?
first, while custom packers have worked well against the AV software that's distributed to customers, they don't work nearly as well against the processes, procedures, and techniques used by AV companies when processing sample submissions. as a result, such custom packers only prevent detection at the customer's site for a relatively small amount of time (though obviously long enough for some customers to get pwned).
second, being huge and not obfuscated works against being recognized as malware precisely because it is so out of character for malware (and thus it's not just the AV industry that's just now realizing the "hide in plain sight" method). unless you're under the mistaken belief that AV companies still reverse engineer every sample they get each day (numbering in the tens of thousands, last i heard), it should be obvious that the lack of obfuscation provides no help in determining if something is malware.
The truth is, consumer-grade antivirus products can't protect against targeted malware created by well-resourced nation-states with bulging budgets.
No, the truth is, consumer-grade antivirus can't protect against garden variety malware, but it can apparently detect the well-resourced nation-state malware. Oh, that makes me wonder, what do you offer that is better than consumer-grade? Other than a bigger price tag, does it do a better job detecting malware?
it's actually the reverse of what jericho says here; AV tends to add detection for garden variety malware quickly (thus closing the window of opportunity for the malware to successfully compromise the AV's customers, and consequently protecting most of them from it), while they tend to add detection for state-sponsored malware quite slowly (thus not doing much to protect anyone from it initially). i specifically express these as tendencies because there are exceptions (unless the energizer RAT, which took 3 years for people to recognize, was actually state-sponsored and nobody bothered to say anything).
as for what else AV offers - there are tools (other than scanners) that typically aren't packaged with their consumer products because they require a higher level of technical expertise to operate than home users tend to have. IT folks in the enterprise are more likely to have the necessary know-how to use these tools. when mikko refers to consumer-grade anti-virus products in this context, he's talking about the technologies that require next to no knowledge to use - ones where the knowledge is baked in in the form of signatures. that kind of product isn't going to stand up well against nation-states. technologies which require more of the user - ones where it's the user him/herself looking for anomalies, or where the user decides which code is safe to run or what programs should be allowed to do - have a better chance of helping you defend yourself against a nation-state (at least when paired with a talented user of that technology).
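(as a toy example of a technology that requires more of the user, here's an application whitelist sketched in python - entirely hypothetical, and real implementations hook into the operating system rather than being consulted politely:)

import hashlib

approved = {
    # hashes of binaries the admin has personally vetted; this one happens
    # to be the sha256 of the empty file, to make the example self-checking
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def may_execute(binary: bytes) -> bool:
    return hashlib.sha256(binary).hexdigest() in approved

print(may_execute(b""))          # True - it's on the list
print(may_execute(b"\x90\x90"))  # False - unknown code is denied by default

notice the knowledge problem is inverted: instead of the vendor needing to know every bad program, the user needs to know their own good programs - which is exactly why it demands a more talented user.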
And the zero-day exploits used in these attacks are unknown to antivirus companies by definition.
And again, this is bullshit. Stuxnet had 3 known vulnerabilities (CVEs 2010-2568, 2010-3888/2010-3338, 2010-2729), 4 if you count the default credentials in SIMATIC that it could leverage (CVE 2010-2772). Flame apparently has 2 known vulnerabilities (CVEs 2010-2568, 2010-2729). Even worse, if antivirus companies had paid attention to these samples sitting in their archives, they may have ferreted out the vulnerabilities before they were eventually disclosed.
once again, i can't find any details suggesting these vulnerabilities associated with stuxnet were known before june 2010. i'm not saying that couldn't be the case for one or more of them - i do seem to recall mention of something like that for a vulnerability exploited by stuxnet - but if it's in the documentation, i can't find it.
also, once again, these pieces of malware had exploits for these vulnerabilities, not the vulnerabilities themselves. and even if the vulnerabilities were known, programmatically determining whether an arbitrary program exploits a particular vulnerability is as reducible to the halting problem as programmatically determining whether an arbitrary program self-replicates. if it's a known exploit it should be findable, just as known malware is findable, but otherwise don't hold your breath (unless you're willing to let it happen and detect it after the fact).
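(for the theory-minded, the standard diagonalization argument sketches out like this, in the spirit of fred cohen's proof that perfect virus detection is impossible. the python below is illustrative only; the whole point is that the oracle it assumes cannot be built:)

def perfect_detector(program_source: str) -> bool:
    # pretend this always correctly answers 'does this program exploit CVE-X?'
    raise NotImplementedError    # stands in for the impossible oracle

contrary_program = '''
if perfect_detector(MY_OWN_SOURCE):   # 'you say i exploit it?'
    exit()                            # ...then i do nothing at all
else:                                 # 'you say i am clean?'
    launch_exploit()                  # ...then i exploit it after all
'''
# whichever verdict the oracle returns for contrary_program is wrong, so a
# general-purpose 'does X exploit Y' decider can't exist - matching known
# exploits after the fact is the best that can be done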
As far as we can tell, before releasing their malicious codes to attack victims, the attackers tested them against all of the relevant antivirus products on the market to make sure that the malware wouldn't be detected.
Way to spin more Mikko. This is standard operating procedure for many malware writers. I believe VirusTotal is the first stop for many bad guys.
this may have been true once upon a time, but the way virustotal now shares their samples with vendors makes them a poor choice for doing malware q/a. and they've been sharing samples for quite some time now.
When a big malware event makes the news, it only helps you. Antivirus firms are the first to jump on the bandwagon claiming that more people need more antivirus software. They are the first to cry out that if everyone had their software, the computing world would be a much safer place. The reality? Even computers with antivirus software get popped by malware. They don't detect all of those banking trojans and email worms like you claimed in this article. They don't protect against the constant onslaught of new threats. Your industry, quite simply, has no reason to improve their detection routines.
as a matter of fact, the individual member companies in the industry have quite a compelling reason to improve their detection routines. AV companies may not need to be better than the bad guys, but they certainly need to try to be better than each other. to adapt the old adage: they aren't trying to outrun the bear, they're trying to outrun each other. they compete. do you think mcafee simply rolled over, accepted the number 2 market position, and let symantec have #1? no, of course not. they'll try to take that market share if they can, and symantec meanwhile will try to hold on to their lead. part of that is done with marketing, but part of it is also done with technological advancement.
I remember in the early 90's, the next big thing in the land of viruses was polymorphing virus code. Antivirus vendors were developing "heuristics" that would detect this polymorphing code. In almost 20 years, how has the development of that gone? Obviously not stellar. Antivirus companies spend their time cataloging signatures of known malware, because that sells. "We detect 40 squirrelzillion viruses, buy our software!" What happened if you developed reliable heuristics and marketed it? "Buy our software, once, and the advanced heuristics can catch just about any malware!" There goes your business model. So of course you don't want to evolve, you want to wallow in your big pile of shit because it is warm and comfortable. You can ignore that overwhelming smell of shit that comes with your industry, because of the money. Don't believe me? Let's hear it from the pro:
actually, traditional polymorphism was attacked in a generic way through emulation. it worked pretty well, but it has no effectiveness against server-side polymorphism due to lack of access to the server-side processes. as for developing reliable heuristics - why do you think malware writers perform malware q/a? their new pieces of malware will already evade known-malware scanners simply by virtue of not being known malware. the only reason to take the extra step of performing malware q/a is because the heuristics actually are fairly effective. what the heuristics are not is perfect. they can be fooled (in some cases quite easily, but fooling an automaton has never been considered difficult). i suggested a heuristic countermeasure to malware q/a but i have no idea if anyone actually tried it.
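(for those who weren't around back then, here's a crude python illustration of why emulation worked against traditional polymorphism - a toy xor loop stands in for a real polymorphic decryptor, and the signature is made up:)

BODY_SIGNATURE = b"payload-v1"   # hypothetical signature for the constant body

def make_variant(key: int) -> bytes:
    # every 'generation' carries the same body under a different xor key,
    # so no two variants share a static byte signature
    body = b"payload-v1 does bad things"
    return bytes([key]) + bytes(b ^ key for b in body)

def emulate_and_scan(sample: bytes) -> bool:
    # 'run' the decryptor the sample carries, then scan what it produces
    key, encrypted = sample[0], sample[1:]
    return BODY_SIGNATURE in bytes(b ^ key for b in encrypted)

variant_a, variant_b = make_variant(0x41), make_variant(0x7f)
print(variant_a == variant_b)                              # False - no static match
print(emulate_and_scan(variant_a), emulate_and_scan(variant_b))  # True True

this works because the decryptor travels with the sample; with server-side polymorphism the equivalent machinery lives on the attacker's server, so there's nothing local to emulate - hence the lack of effectiveness i mentioned.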
i do think jericho has hit the nail on the head about evolving, though. what i don't think many people appreciate is the path the industry's evolution needs to take. prevention is always going to have failures like what happened with stuxnet or the flame worm, or even with garden variety malware while it's still new. we as users need to grow up and start accepting that fact. there is no magical prevention fairy - if you're old enough to not believe in santa, the easter bunny, and the tooth fairy, then you're old enough to realize that prevention has limitations and always will.
the evolution the industry needs to undergo is to help users come to terms with this fact, and to help them deal with it by providing tools for detecting preventative failures in addition to the tools they already provide for prevention. to a certain extent they already used to do this, but those tools appear to have disappeared from the mainstream. at least a few vendors had integrity checkers back in the day. i remember one from kaspersky labs (which was painfully slow, as i recall), and one from frisk software. i've long wondered (and have my suspicions about) what happened to wolfgang stiller, because his product "integrity master" was excellent. i believe adinf (advanced diskinfoscope) is still around, but it's very much not part of the mainstream - most people have never heard of it, and certainly wouldn't think to use it because it's not part of a larger AV suite (which is what most people have been trained to look for).
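(for anyone who never saw one of these tools, the core idea behind an integrity checker is almost embarrassingly small. a hypothetical python sketch - integrity master, adinf and friends were of course far more sophisticated about what they recorded and how they protected the baseline from tampering:)

import hashlib
import json
import os
import sys

def digest(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def check(baseline: dict) -> None:
    for path, old in baseline.items():
        if not os.path.exists(path):
            print("MISSING", path)
        elif digest(path) != old:
            print("CHANGED", path)   # legit update? or a preventative failure?

if __name__ == "__main__":
    cmd, files = sys.argv[1], sys.argv[2:]
    if cmd == "init":     # record a baseline: python ic.py init file1 file2 ...
        with open("baseline.json", "w") as f:
            json.dump({p: digest(p) for p in files}, f)
    else:                 # later: python ic.py check
        with open("baseline.json") as f:
            check(json.load(f))

no signatures, no heuristics, no prior knowledge of any malware required - it just tells you, after the fact, that something you didn't change has changed. that's detection of preventative failure in a nutshell.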
the strategy most people use to keep themselves safe is what they learn from the AV industry, but the industry mostly just tells them to use product X, and product X is almost invariably prevention-only (maybe with a dash of cleanup). it is probably the most simplistic and rudimentary strategy possible, but i don't hold much hope that the AV industry will ever change it. in the process of teaching the public a more sophisticated security strategy, they would in effect be making users more sophisticated and less susceptible to the marketing manipulations that vendors currently employ in their competition with each other. on top of that, they'd have to stop using those manipulations themselves, and risk losing their existing users to a vendor who hasn't stopped, all before they have a chance to make those users proof against such manipulations. the principles behind the PDR triad (prevention, detection, recovery) are foreign to most people, and among the rest, many have perverted it into something almost as mindless as "just use X".
AV companies are businesses, and like all businesses their primary concern is their own financial interests. those interests don't align with the security interests of their customers or the public at large, and no amount of hand wringing or foot stomping is going to change that. i don't see any way the industry can drive the change that's needed while serving their own interests, and i'm not going to hold my breath waiting for them to sacrifice those interests for the common good. the AV industry responds to market demand, and if you truly believe evolution is needed then it's incumbent upon you to help build demand for tools and techniques that go beyond prevention.
(update 2012/06/03 5:11pm - it appears that jericho has been remarkably responsive to comments from others on twitter so the rebuttal itself is in a state of flux. i'll have to wait and see how this turns out)