Thursday, January 03, 2013

imperva's anti-virus study is garbage

Enough is enough! I have had it with these motherf#$%ing flakes on this motherf#$%ing train of thought - (what i imagine samuel l. jackson might say if he were following this nonsense about imperva)

in case you are unfamiliar, imperva (a security vendor of some sort) commissioned a bunch of students from the technion-israel institute of technology to perform an evaluation of the efficacy of anti-virus (all anti-virus as a whole, apparently, rather than comparing them to each other) by uploading 82 found samples to virustotal. yes, you read that right, it's another virustotal-based test.

these days i have a number of alternative avenues to express myself that i didn't have when this blog was still young, and that can often sate my need to express my feelings on some topic. i can make snide comments on twitter, or even parody tweets from a satirical twitter account. in fact i can even make memes about it. unfortunately none of that has proven sufficient in this case because the hits just keep coming.

you see, imperva keeps shopping this quackery out to more and more media outlets where it gets gobbled up and regurgitated uncritically by writers/editors (who really ought to know better if reporting on this sort of topic is part of their actual job) and thus gets put in front of more and more eyeballs of those who realistically can't know better. along the way it can even collect somewhat supporting voices from venerated members of the security community like robert david graham or wim remes.

let me be clear, however - this is all wrong. as has been repeated over and over again, virustotal is for testing samples, not anti-malware. they say it themselves on their about page:
The reason is that using VirusTotal for antivirus testing is a bad idea.
and
BAD IDEA: VirusTotal for antivirus/URL scanner testing
those statements alone should be enough, but because virustotal later talks specifically about comparative tests, imperva (and others) have tried to argue that their test is OK because it doesn't compare products to each other. however...
VirusTotal's antivirus engines are commandline versions, so depending on the product, they will not behave exactly the same as the desktop versions: for instance, desktop solutions may use techniques based on behavioural analysis and count with personal firewalls that may decrease entry points and mitigate propagation, etc.
this makes it pretty clear that the product a customer installs is very much a different thing from the program that virustotal uses - they will in most cases behave very differently and so the results that virustotal spits out cannot be considered representative of what actual users of anti-malware products will experience.
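to make the distinction concrete, here's roughly what such a virustotal-based "test" boils down to. this is just a sketch (the endpoint URLs and response fields come from virustotal's public API v2; the api key, sample name, and wait time are placeholders, not anything from imperva's actual tooling):

```python
# a sketch of a virustotal-based "test" using the public API v2.
# note what it measures: hits from the static, commandline engines
# virustotal runs - not the desktop products customers install.
import time
import requests

API_KEY = "your-api-key-here"  # placeholder
SCAN_URL = "https://www.virustotal.com/vtapi/v2/file/scan"
REPORT_URL = "https://www.virustotal.com/vtapi/v2/file/report"

def scan_sample(path):
    """upload a sample and return the resource id for its report."""
    with open(path, "rb") as f:
        resp = requests.post(SCAN_URL, files={"file": f},
                             data={"apikey": API_KEY})
    return resp.json()["resource"]

def static_detection_rate(resource):
    """fetch the report and compute positives/total - a number that
    says nothing about behavioural blocking, cloud lookups, firewalls,
    or any other layer a real desktop product adds."""
    resp = requests.get(REPORT_URL,
                        params={"apikey": API_KEY, "resource": resource})
    report = resp.json()
    return report["positives"] / report["total"]

resource = scan_sample("sample.exe")  # placeholder sample
time.sleep(60)  # the public API is rate-limited; reports take time
print("static engine detection rate:", static_detection_rate(resource))
```

that positives/total ratio is the only number such a script can produce, and as virustotal themselves say above, it is not a measure of what an installed product would actually block.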

(ironically, a product that appears to fare best in a virustotal-based test may actually be the worst: a heavy focus on the kind of static (often signature-based) detection that virustotal measures best could be compensating for weak or absent generic/dynamic detection capabilities.)
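a quick toy simulation (my own illustration - every probability below is invented, not taken from imperva's study or any real product) shows how badly a signature-only measurement can understate a layered product's protection:

```python
# toy model: a product blocks a sample if ANY of its layers catches it,
# but a virustotal-style test only ever sees the signature layer.
# all probabilities are made up purely for illustration.
import random

random.seed(1)
SAMPLES = 1000

signature_hits = 0
product_blocks = 0
for _ in range(SAMPLES):
    signature   = random.random() < 0.05  # fresh samples, few signatures yet
    heuristic   = random.random() < 0.40  # generic/static heuristics
    behavioural = random.random() < 0.60  # runtime behaviour blocking
    signature_hits += signature
    product_blocks += signature or heuristic or behavioural

print(f"signature layer alone: {signature_hits / SAMPLES:.0%}")  # what a VT test sees
print(f"all layers combined:   {product_blocks / SAMPLES:.0%}")  # what users get
```

the numbers are invented, but the shape of the gap is the point: measure one layer, report it as the whole product, and you guarantee a misleadingly low score.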

but don't just take my word for it, let's hear from a couple of people who actually work at virustotal
yes, that's right, imperva's study is a joke. this shouldn't be surprising to long-time readers of this blog: when i first wrote about this problem four years ago, the first reason i gave for why you might want to avoid performing virustotal-based tests was that those of us who know better will laugh at you. i'm sure a number of people are laughing at imperva's gross incompetence (hanlon's razor makes me choose this explanation over the more sinister alternatives), but i'm afraid i can't consider the mess they're making to be a laughing matter.

promulgating ignorance in a security context has the potential to do real harm, and that is where i draw the line. that's why i'm writing this, that's why the title gets straight to the point, and that's why i'm going to start naming some names of people/organizations who have helped make this mess and who really ought to have known better. imperva has behaved like a dung beetle, persistently rolling this turd around, but somehow it keeps getting bigger like some katamari damacy of bullshit, and i think it's important to see the scale and scope of it and hold the people responsible accountable. it's worth noting, however, that somewhere deep down someone at imperva must also have seen the potential for their message to do harm - that's why the caveat that they weren't advising eliminating AV was added (as an apparent afterthought).

a non-exhaustive list of people/orgs who really should have known better, tried harder, and ought to be held to account for this growing mess is as follows:
(i'm aware there's a lot more than this that you can simply find by googling sentences from the press release - i wish i had the time to make this list exhaustive. that said: reuters, the new york times, and the wall street journal... that definitely caught a lot of eyeballs)

now, perhaps you're thinking i'm being too hard on the journalists involved here. after all, they aren't experts. frankly, however, they don't have to be experts to see what's wrong with this test. if you're the type of reporter who reports on this type of technology then you should already know about virustotal and about how it can and can't be used. this isn't rocket science, or even some obscure nuance that only matters every 5th wednesday - not in the context of reporting on this subject. this is something reporters covering security technology ought to know. it's table stakes. you need to be this tall to get on the ride.

perhaps you think i'm being too hard on the students and their supervisor(s)? but this is academia we're talking about. they're expected to do their research, and i don't just mean the experimental research, i mean looking up and reading about the issues involved in designing and performing tests on anti-malware products. and their supervisor(s) should have made sure they were doing their due diligence in this regard. frankly, in my time i've seen lone rank amateurs perform better tests than this with fewer resources. this is not acceptable academic performance.

and as for imperva themselves, well... if you intend to occupy part of the security industry that hopes to steal some of the AV industry's market, then you'd better know this stuff like the back of your hand. the institutional incompetence going all the way up the chain of command to the chief technology officer is astonishing, and i'm surprised they managed to find someone with too many dollars and too little sense to give them funding - but i guess p.t. barnum was right about there being one born every minute.

imperva - do yourselves a favour and put a stop to this mess before it gets any bigger. you can't defend this junk computer science; the truth will eventually come out (it seems to have already started). you can't sweep it under the rug either - you've let things get too out of hand. the kind of smear campaign you're currently running was already attempted by the whitelisting industry years before you, and while that industry is still around and may even still be pumping out this same kind of junk, the campaign didn't stop it from drifting back into obscurity. the way i see it, the only way you can move forward sustainably is
  1. admit your error
  2. publicly retract your study
  3. reach out to the journalists whose reputations have been tarnished by listening to you and apologize
  4. assist the students you dragged into this in learning the error of the experimental methodology they followed (you can probably find a lot of good info either on or linked to from the anti-malware testing blog)
  5. start over with a more intelligent methodology and try to make your case again with valid data
and if you can't manage to follow these steps then i'll be glad to watch you fade away or get swallowed up in a few years time, because the kind of incompetence you've been proudly displaying so far is not the path to success.

8 comments:

Anonymous said...

You are just attacking the messenger and are guilty of the same thing you are ranting against: unqualified statements with no facts to back them.

Know what you can do to make your argument better? Take those 82 samples yourself and pit them against your chosen AV desktop solution, using the signatures for the timeframes denoted in the studies. With hard, valid data of your own, your argument would be an actual argument instead of a rant.

While not without flaws, the study points at something we've known for some time: that AV's entire functional model is based around negative (known-bad) matching, and the people doing malware are perfectly aware of how it works and how best to get the better of the AV companies. You didn't address any of this.

kurt wismer said...

you're probably right, i didn't address any of the myths you insist on believing.

anti-virus products are a lot more than just signature scanners and they have been for many, many years.

that's part of what makes imperva's experimental design invalid - they tested only a subset of the protective capabilities those products have (the subset everyone is most familiar with), and misrepresented them as the full protective capabilities in order to support patently fallacious conclusions.

and as for redoing the test myself with their samples, since they don't provide those samples or information about them (other than how they acquired them) it's really not possible.

Anonymous said...

Kurt, can you point to any independent study that refutes the accuracy claims of Imperva? If not, then I agree with the first commenter: beyond expressing opinions, the AV community should reply with whatever it believes is proper research.

kurt wismer said...

Anonymous#2:

as it so happens i have already linked to a story that quotes representatives from multiple independent testing labs. here's the link again in case you missed it: http://securitywatch.pcmag.com/none/306552-experts-slam-imperva-antivirus-study

in it you should find a quote from andreas marx (of av-test.org) saying that the lowest detection rates they see in real-world zero-day testing are in the 64-69% range, not the less-than-5% range claimed by imperva. that's the lowest, by the way; on average it's apparently more like 88-90%.

another point of contradiction was simon edwards' (dennis labs) observation that, contrary to imperva's finding that free products offer the best protection, dennis labs has consistently found that paid products offer better protection.

finally, randy abrams (nss labs) was quoted saying "It is rare that I encounter such an incredibly unsophisticated methodology, improper sample collection criteria, and unsupported conclusions wrapped up in a single PDF."

those are all independent testing organizations.

Anonymous said...

Would you run an unknown piece of malicious software which was just released and was probably not analyzed by antivirus vendors on your own computer with all antiviruses installed? Answer yourself.

kurt wismer said...

Anonymous#3:

that's a strawman argument. no one is saying AV is perfect or can protect you in all situations. what i'm specifically saying in this post is that imperva's study is garbage.

it doesn't even matter what the results of imperva's test are. the way they were arrived at demonstrated a lack of understanding of AV, virustotal, and even the threat landscape itself.

the way you arrive at a conclusion is as important as the conclusion itself. if you use a stupid methodology then you might be right once or twice, the way a stopped clock is right twice a day, but generally your conclusions won't be worth the paper they're printed on.

these comments really make me kind of sad. they demonstrate a culture bereft of critical thinking. people who value facts over knowledge, answers over understanding, and probably people who don't even realize there's a difference between those things.

Anonymous said...

I've seen a lot of people beating up Imperva over this but I haven't seen anyone who has given a possible ulterior motive for them producing such a study. They don't do desktop products. They don't even do anti-malware at all. They bashed an industry they don't even compete in. And they are right.

Anyone who has worked in pentesting knows it is so trivial to bypass desktop AV that those vendors are selling a sham and auditors are lemming idiots for considering them an effective control.

Regardless of how or why Imperva did the study, those products simply are not effective any more. And that is the real point.

kurt wismer said...

Anonymous#4:
i'm sorry, but if you think they aren't competing with the anti-malware industry i think you need to take a closer look.

they do in fact compete, maybe not for home users, but certainly for corporate users. their product line may not look anything like a security suite from an anti-malware vendor, but they do sell products meant to protect against attacks and they're trying to make others look bad in order to make themselves look more appealing to potential customers.

as for how easy it is for a person to bypass AV, that's a foregone conclusion. computers are not smarter than people - never have been and probably never will be.

the thing is, having an intelligent attacker directly involved enough to be comparable to a pentesting scenario simply doesn't scale as well as dumb software-only attacks do (and cybercriminals certainly like their economies of scale). and those dumb software-only attacks don't have the same advantages even the dumbest pentester does.

sure there are attacks that bypass security suites easily, but they're the minority because they rely on a resource that isn't digital and thus can't be copied - malicious human beings.

no matter how you slice it, facing a dedicated attacker is a relative rarity in the threat landscape.