Saturday, November 02, 2013

AV complicity explained

earlier this week i wrote a post about the idea of the AV industry being somehow complicit in the government spying that has been all over the news for months. some people seemed to really 'get it' while others, for various reasons, did not; so i thought i'd try to be a little more clear about my thoughts on the subject.

the question that the EFF et al have put towards the AV industry (besides having already been asked and answered some years ago) is a little banal, a little pedestrian, a little sterile. real life is messy and complicated and things don't always fit into neat little boxes. i wanted to try to get people to think outside the box with respect to complicity, what it means, what it would look like, etc. but i think some people have a hard time letting go of the straightforward question of complicity that has been put forward so let's start by talking about that.

has the NSA (or other organization) asked members of the AV industry to look the other way and has the AV industry (or parts thereof) agreed to that request? almost certainly the NSA has not made such a request, for at least a few reasons:

  1. telling people about your super-secret malware is just plain bad OpSec. if you want to keep something secret, the last thing you want to do is tell dozens of companies and their armies of reverse engineers to look the other way.
  2. too many of the companies that make up the AV industry are based out of foreign countries and so are in no way answerable to the NSA or any other single intelligence organization.
  3. there's quite literally no need. there are already well established techniques for making malware that AV software doesn't currently detect. commercial malware writers have been honing this craft for years and it seems ridiculous to suggest that a well-funded intelligence agency would be any less capable.


now while it seems comical that such a request would be made, to suggest that the AV industry would agree to such a request would probably best be described as insulting. whatever you might think of the AV industry, there are quite a few highly principled individuals working in it who would flat out refuse, in all likelihood regardless of what their employer decided (in the hypothetical case that the pointy-haired bosses in AV aren't quite as principled).

now please feel free to enjoy a sigh of relief over the fact that i don't think the AV industry has secretly agreed to get into bed with the NSA and help them spy on people.

done? good, because now we're going to take a deeper look at the nature of complicity and the rest of this post is probably not going to be nearly as pleasant.

here's one of the very first things wikipedia has to say about complicity:
An individual is complicit in a crime if he/she is aware of its occurrence and has the ability to report the crime, but fails to do so. As such, the individual effectively allows criminals to carry out a crime despite possibly being able to stop them, either directly or by contacting the authorities, thus making the individual a de facto accessory to the crime rather than an innocent bystander.

in the case of government spying we may or may not be talking about a crime. the government says they broke no law and observers speculate that that may be because they've subverted the law (much like they subverted encryption algorithms). so let's consider a version of this that relates to ethical and/or moral wrong-doing instead of legal wrong-doing:
an individual is complicit in wrong-doing if he/she is aware of its occurrence and has the ability to alert relevant parties but fails to do so. as such, the individual effectively allows immoral or unethical people to carry out their wrong-doing despite possibly being able to stop them either directly or by alerting others who can, thus making the individual a de facto accessory to the wrong-doing rather than an innocent bystander.

in this context, could the AV industry be complicit with government spying? perhaps not directly, not in the sense that they saw what the government was doing and failed to alert people to that wrong-doing. however, what about a different wrong-doing by a different entity but still related to the government spying?

hbgary wrote spyware for the government. this became public knowledge in the beginning of 2011. by providing the government with tools to perpetrate spying they became accessories to that spying.

hbgary was and is a partner of mcafee. now what is the nature of this partnership? hbgary is an integration partner. they make technology that integrates into mcafee's endpoint security product to extend its functionality. mcafee does marketing/advertising for this technology and by extension for hbgary, giving them exposure, lending them credibility, and generally helping them make money. that money is almost certainly re-invested into research and development of hbgary's products, which include governmental malware that's used for spying on people/organizations. there are mcafee customers out there right now whose security suite includes components that were written by known malware writers and endorsed by mcafee (although mcafee makes sure to weasel out of responsibility for anything going wrong with those components with some fine print).

mcafee didn't break off the partnership when hbgary's status as an accessory to government spying became known, and since they didn't break off the partnership you can probably make a safe bet that they didn't warn those customers that part of their security suite was made by people aiding the government in spying either. even if we ignore the fact that mcafee aids a business that writes malware for the government, mcafee's failure to raise the alarm about the possibly compromising nature of any content provided by hbgary makes them accessories to hbgary's wrong-doing. by breaking ties with hbgary and warning the public about what hbgary was up to, they could have had a serious impact on hbgary's cash flow and hurt their ability to win contracts and/or execute on their more offensive, espionage-assisting projects. they did none of that, and that makes them complicit in the sense discussed a few paragraphs earlier.

the rest of the AV industry may not be directly aiding hbgary's business but, like mcafee, they have failed to raise any alarm about hbgary. they could have done much the same as mcafee by warning the public, with the added bonus that they would have hurt one of the biggest competitors in their own industry while they were at it and that would have benefited all of them (except mcafee, of course). again, failing to act to help prevent wrong-doing makes them a de facto accessory to that wrong-doing. the AV industry as a whole is complicit in the sense discussed earlier.

of course, the AV industry isn't alone in being accessories to an accessory to government spying, and that brings up a consideration that should not be overlooked because there is a larger context here. historically, the culture of the AV industry has been one that values being very selective in things like who to trust, who to accept into certain groups, etc. add to that a very narrowly defined mission statement (to fight viruses and other malware) and it's little wonder that the ethical boundaries that developed in the early days were so dead-set against hiring, paying, or doing anything else that might assist malware writers or possibly promote malware writing. heck, i knew one member who wouldn't even engage virus writers in conversation, and another who said he was wary of hiring anyone who already knew about viruses just in case they came by that knowledge through unsavoury means. aiding malware writers, turning a blind eye to their activities, etc. are things that normally would have violated AV's early ethical boundaries.

by contrast, the broader security industry is highly inclusive and has long viewed the AV industry's selectivity as unfair elitism. that inclusivity means that the security industry isn't actually just one homogeneous group. there are many groups, from cryptographers to security operations personnel to vulnerability researchers to penetration testers, etc. each one has its own distinct mission statement and its own code of ethics. what do you think you get from a highly inclusive melting pot of security disciplines? well, in order for them to tolerate each other, one necessary outcome is a very relaxed ethical 'soup'. many quarters openly embrace the more offensive security-related disciplines such as malware creation. in order for AV to integrate into this broader security community (and they have been, gradually, over time), AV has to loosen its own ethical restrictions and be more accepting.

so while the AV industry failed to raise the alarm about hbgary, the broader security industry failed as well. the difference is that ethics in the security industry don't necessarily require raising an alarm over what was going on. hbgary is a respected company in security industry circles and its founder greg hoglund is a respected researcher whose proclivity for creating malware has been known for a long, long time. as far as the security industry is concerned, hbgary's activities don't necessarily qualify as ethical wrong-doing. there will probably be those who think it does, but in general the ethical soup will be permissive enough to allow it, and without being able to call something "wrong-doing" there can be no complicity. this is where AV is going as it continues to integrate into the broader security community. in fact it may be there already. maybe that's the reason they didn't raise the alarm - because they've become ethically compromised, not as a result of a request from some intelligence organization, but as a result of trying to fit in and be something other than what they used to be.

in the final analysis, if you were hoping for a yes or no answer to the question of whether AV is in any way complicit in the spying that the government has been doing (specifically, the spying done using malware), i'm afraid you're going to be disappointed. it depends. based on AV's earlier ethics the answer would probably be yes. based on the security community's ethics the answer may well be no. where is the AV industry now? somewhere between what they were and what the broader security community is. ethical relativity is unfortunately a significant complicating factor. then again, i'm an uncompromising bastard, so i say "yes" (after all, i did grow up with those old-school ethics).

Tuesday, October 29, 2013

what would AV's complicity in government spying look like?

as you may well have heard, the EFF and a bunch of security experts have written an open letter to the AV industry asking about any possible involvement by them in the mass spying scandal that has been in the headlines for much of this year. at first i thought this was old news for AV, since the issue of government trojans has actually been around a lot longer than the current spying revelations. i thought these people had simply failed to do their homework but, as time passed, the wheels began to turn and i started thinking differently. now i think the question we should all be asking ourselves is, what would AV's complicity look like?

some background, first. the subject of government trojans has been around for over a decade. magic lantern, for example, dates back to 2001 (or at least public awareness of it does). so it should come as little surprise that the question of whether the AV industry looks the other way has come up before. in 2007 cnet ran a story where 13 different vendors were asked about this very thing. they all more or less denied being a party to such shenanigans, but i suggest you read the article and pay careful attention to the answers.

now earlier this year one of the first controversial spying revelations to come about was about a program called PRISM which a whole bunch of well known, big name internet companies (including google, microsoft, yahoo, facebook, etc) were apparently involved with. the companies all denied it of course, and it turns out they may be legally required to do so.

that adds an interesting wrinkle to the question now being put towards the AV industry; would they be allowed to admit to any complicity that might be going on? they say actions speak louder than words, so maybe we should look for something other than the carefully crafted assurances of multi-million dollar corporations. maybe what we should be looking for is the same thing that alerted us to the mass spying in the first place - a leak. maybe then we can get a glimpse of their actions.

back in early 2011 a rather spectacular breach occurred. security firm hbgary was breached by some members of anonymous, and one of the things that leaked out was the fact that hbgary wrote malware for the government. in fact, it doesn't take much imagination to suppose that this would be the very type of malware the EFF et al are concerned the AV industry may have been asked to ignore.

it's unknown whether any AV vendor actually did field such a request. i have my doubts since traditional commercial malware writers seem to be perfectly capable of creating undetected malware without making such requests. that being said, one fact that became rather suspicious in light of the revelations about hbgary was the fact that they were partners with mcafee, one of the biggest AV vendors around and certainly one of the best known names in AV. i wrote about this apparent ethical conflict back in february of 2011, and then again in march of 2011 to note the tremendous non-reaction from the industry. i even went so far as to create a blog specifically for keeping an eye on the industry (though as an outsider myself there was little i could do on my own).

the EFF and others want to know if the AV industry has been complicit in the government's spying. well, one AV vendor was notably evasive when asked by cnet in 2007 about their handling of governmental trojans/police spyware. that same AV vendor was and still is partnered with a company that wrote government malware (in all likelihood for the very purpose in question). furthermore, in the intervening years, nothing has come of it. no other vendor has said anything or done anything to call attention to or raise awareness of this partnership. even after the mass surveillance controversy started earlier this year, not a one bothered to raise the alarm and suggest that mcafee might at least in principle be compromised by that partnership, even though they certainly could have benefited from disrupting mcafee's market share. no one thought they could profit from it? no one thought it was their duty to warn people of a potential problem? to raise concerns that the protection mcafee's customers receive may suffer in some way because of their close ties with government malware writers? to give voice to the doubts this partnership creates even after publicly wringing their hands over the wrongness of what the government itself was doing?

AV vendors may or may not have been asked to turn a blind eye to government malware - we may never know, and it's impossible to prove a negative. but they've done a heck of a job turning a blind eye to the people who make government malware and to those in their own ranks who got in bed with government malware writers. i asked at the beginning what AV complicity would look like and i think when it comes to those whose job it is to raise an alarm, complicity would probably have to look like silence (and something about silence makes me sick).

(2013-10-29 13:21 - updated to change the open letter link to point to the blog post that includes the list of intended recipients as well as a link to the letter itself)

Wednesday, October 16, 2013

my experiences at #sectorca in 2013

well, another year, another sector conference. i almost got another of my colleagues at work to go too (an actual security operations sort of guy at that) but in the end it didn't happen. i'm going to have to see if there's anything more i can do to make it happen next year. in fact, i'm pretty sure some of the folks at work would have preferred if i hadn't gone either (just so much to do) but it was already paid for, so...

the first thing that struck me this year (aside from the great big gaping hole where the street around union station used to be) was that the staff at the metro toronto convention center could accurately guess where i was trying to go just by looking at me. i guess that must mean i look like i belong with the crowd of other sector attendees, even if i've never really felt like i do (what with not being an information security professional and all).

the second thing that struck me was the badge redesign. more space was dedicated to the QR code than to the human readable name. almost as if my interactions with machines are more important than my interactions with people.

the first keynote of day one was "how the west was pwned" by g. mark hardy. i suppose it was a kind of cyberwar talk (that's certainly how it was introduced), but really it focused more on economic/industrial espionage, theft of trade secrets and intellectual property and that sort of thing. there were some interesting bits of trivia, like china's cyber warrior contingent having a comparable number of people to the entire united states marine corps. also an interesting observation about the global form of government (that being the system that governs us on a global scope rather than simply within our own nations) being anarchy. i'd never thought of it that way before, but there really isn't anyone governing how nations interact with each other or how people interact with foreign nations.

the first normal talk of day one that i attended was a so-called APT building talk. specifically it was "exploiting the zero'th hour: developing your advanced persistent threat to pwn the network" given by solomon sonya and nick kulesza. i kinda knew going in that this wasn't going to be the best quality APT talk just by the title. they clearly believe APT is simply a kind of advanced malware rather than realizing that APT is people. i can't say references to "the internet cloud" improved my opinion any. add to that the fact that anyone who took an undergrad systems programming course would have recognized most of the concepts they were talking about and i was pretty "meh" about the talk. the rest of the audience, however, was clearly very impressed based on the applause. all but one, that is. he called them out on their amateurish malware (about the only part of the APT acronym they got right was persistent, and even that is debatable). he also called them out on their releasing of malware (i swear he wasn't me, even though it probably seems like something i would do) that really wouldn't help anyone defend but certainly would help arm the shallower end of the attacker gene pool. i quite agreed with his opposition, but the applause again from the rest of the audience when one of the speakers said he could sleep quite well at night made it clear who the community was siding with here.

that all left a bad taste in my mouth so i decided to skip the next round of talks. that wasn't a difficult decision to make since the entire time-slot was filled with sponsored talks which i've long found to be a disappointment. so instead i took the time to look around and see what and who i could see.

i happened to luck out and stumble across chris hoff. i'm not entirely sure he remembered/recognized me but that doesn't come as a huge surprise since i'm not the most memorable person in the world and my appearance has changed significantly since the days when he did remember/recognize me. also, and perhaps more to the point, someone like chris has got to get approached by so many people that there'd be no way he could remember them all. that's part of being a "security rock star". anyway, we chatted briefly and he asked me if i was a speaker or listener. i'm definitely not a speaker and i told him i've sorta been down the speaking path before and it didn't work out so well (part of being on a panel involves speaking, right?). he shared an anecdote of his own which frankly put my bad experience to shame. still, if i went to the effort to develop that skill, what would i do a talk about? "everything you know about anti-virus is wrong"? i expect that would go over about as well as a lead balloon. my specialty is in something that has little or no respect in the information security community, so even if i did by some miracle make it past the CFP stage, i can't imagine there'd be much of a turn-out.

after that i saw a familiar face i never would have expected. an old colleague from work, joel campbell, who i gather now works at trustwave and was manning their booth on the expo floor. we chatted a bit about work of course, but also about security conferences like sector and how they compare with some of the ones in the states. sector is apparently small, which rationally i knew since i did once attend RSA, but i guess with little else to compare it to in more recent times, sector seems big to me.

the lunch keynote given by gene kim about DevOps interested me in an "i know someone who'd probably be interested in this" sort of way. i can't wait for the video to become available so i can share it with some of my higher-ups in the dev department at work (we do have an ops guy sort of embedded with us devs, i wonder what DevOps would say about that). there was also a very interesting observation about human nature; apparently when we break promises we compensate by making more promises that are even bolder and less likely to be kept. i think i've seen that play out on more than one occasion.

after lunch i attended kelly lum's talk ".net reversing: the framework, the myth, the legend", which was pretty good despite the original recipe bugs that kept her distracted at the beginning. i actually saw a .net hacking talk last year as well (i'm a .net developer, it stands to reason i'd be interested in knowing how people can attack my work) but this one spent less time talking about all the various gadgets you could use to attack .net programs and more time talking about the format such that one could possibly use it as a starting point for creating one's own .net reverse engineering tools. that'll certainly be filed away for future reference.

following that i attended leigh honeywell's talk "threat modeling 101", only it wasn't really a talk. this was one of the more inventive uses of the time-slots speakers are given, as she actually had us break up into groups to play a card game called elevation of privilege. it's quite an interesting approach to teaching people to think about various types of attacks and i've already talked about the game at work and shared some links. hopefully i can get some of my coworkers to play.

for the last talk of day 1 i attended "return of the half schwartz fail panel" with james arlen, mike rothman, dave lewis, and ben shapiro. this was apparently a follow-up to a previous fail panel that i never saw, but that didn't seem to matter because this one didn't seem to reference the original at all. i didn't find it particularly cohesive, i guess because the only common theme it was designed to have running throughout was failure, but one interesting thing i took away was the notion of venture altruism. it's a different way of looking at things than i'm used to as i tend to frame things more as 'noblesse oblige', but it certainly appears as though quite a few people really do have their hearts in the right place in that they're trying to make the world a better place in their own particular, security-centric way.

i decided to opt out of the reception afterwards. i felt guilty about it because i know i really ought to have gone but the truth is that in all the times i've gone before i've never really felt comfortable among all those strangers in a purely social environment. plus there were last year's (and possibly other years as well, but definitely last year) shenanigans where your badge would get scanned in order for you to get drink tickets, and then the company doing the scanning would send you email as though you had actually shown interest in them and visited their booth. i know the conference is an important tool for generating leads for sales, but over drink tickets? really? i suppose if they're paying for the drinks then it's hard to argue against them getting your contact info in return, but at least when facebook asks you to trade your privacy for some reward you have some kind of idea that that's what's going on. it made participating in the reception feel like bad OpSec; and you know, if you add enough disincentives together you're eventually going to inhibit behaviour.

the day 2 morning keynote was another panel, and if i'd gotten the impression from the fail panel that panels lacked cohesion, this one dispelled it. "crossing the line; career building in the IT security industry" with brian bourne, leigh honeywell, gord taylor, james arlen, and bruce cowper as moderator focused very strongly on the issue of crossing legal, ethical, and moral lines and whether that was necessary to get ahead and be taken seriously in security. i came into the keynote thinking it would be more about career building (which hasn't been that interesting to me in the past since i'm perfectly happy not being in InfoSec) but the focus on the law, ethics, and morals is much more interesting to me as the frequent mentions of ethics on this blog can probably attest. i was pleased to see both leigh and gord take the position that crossing those lines is not necessary and holding themselves up as examples. james was careful to point out that those lines are not set in stone (they're "rubber" as he put it, though he also made a point that that doesn't mean they aren't well defined), and certainly there's a point there at least with the relevancy of the law as there are some really poorly written laws as well as some badly abused laws (as the prosecution of aaron swartz certainly highlights).

of course as the amateurish malware distributors from day 1 demonstrated, crossing ethical and moral lines is still widely accepted and embraced in the information security community. one might want to draw a comparison between that and the lock pick village, which teaches people how to breach physical security, but the lock picking at least has a dual use (beyond simple education) in that it allows you to regain access to things that you have a legal right to but would otherwise be unable to access because you lost a key, for example.

the AV community was historically much more stringent about not crossing those lines, and much closer to having (or at least implicitly obeying) a kind of hippocratic oath; and having literally grown up with that influence i'm certainly in favour of it, though when leigh mentioned the hippocratic oath it did not seem that well received. james pointed out that ISC^2 has a rule against consorting with hackers and yet gives credits for attending hacker conferences - which to me just makes them seem like they're either hypocrites or toothless. i could probably write an entire post about this topic alone, or rather another entire post about this topic since i already did once years ago that's kind of begging for a follow-up.

the first regular talk i attended the second day was schuyler towne's "how they get in and how they get caught", which turned out to be a lock picking forensics talk (in the security fundamentals track, no less). after having seen a number of talks about lock picking over the years, seeing one on detecting that lock picking has occurred rounded things out really nicely. the information density for the talk was high, there was even a guy in front of me taking picture after picture of the diagrams being shown on the screen, but schuyler is really passionate about the subject matter and did a good job of keeping the audience's interest in spite of all the details and photos of lock parts under high magnification.

after that talk i finally relented and attended one of the sponsored talks, specifically "the threat landscape" by ross barrett and ryan poppa of rapid7. i suppose it's only fitting that a vendor would hand out buzzword bingo sheets. certainly it's good that they acknowledge that as vendors they're expected to throw out a lot of buzzwords. but i think it kind of backfired for the talk because rather than paying attention to what they were saying i found myself paying attention to what buzzwords i could cross off my sheet. buzzword bingo is a funny joke, but if you make it real i think you wind up sabotaging your talk. on the other hand, perhaps that acts as a proxy for actual engagement of the audience, so that people will come away feeling better about the talk than they otherwise might have.

the lunch keynote by marc saltzman was really more entertainment than information. flying cars? robots? virtual reality? ok. lunch was good, though.

after lunch i attended an application security talk given by gillis jones. this one wasn't in the schedule so i can't look up the actual name of the talk. it replaced james arlen's "the message and the messenger" which i've already seen on youtube. i guess whenever they say app sec they must be talking about web application security, because i can't say i've seen much in the way of winform application security talks (unless .net reversing counts). i'm not a web guy, i don't do web application development (yet) so i sometimes find myself out of my depth, but (perhaps because it was in the security fundamentals track) gillis approached the topic in a way that would help beginners understand, and i certainly feel like i have a better handle on some of the topics he covered. in fact, i started trying to find XSS vulnerabilities at work the very next day.

for the final talk of the conference i attended todd dow's "cryptogeddon" which was a walk-through of a cyber wargame exercise. it had a very class-room like approach to working through a set of clues in order to gain access to an enemy's resources. that format works well, i think, and i can see why educators would want to use todd's materials for their classes.

and that was pretty much my experience of sector 2013. it's taken me several days to write this up - certainly enough time for me to come down with the infamous "con-flu", but i never do. i'm not certain, but i have a feeling that my less social nature makes me less likely to contract it somehow. i don't shake as many hands, or collect as many cards, or stand face to coughing/sniffling/sneezing face with as many people as some of the more gregarious attendees do.

Wednesday, August 07, 2013

if google gives you security advice, get a second opinion

originally posted on secmeme
remember back when google's chrome browser was shiny and new and their vaunted sandboxing technology didn't actually place plug-ins in a sandbox even though the plug-ins, with their existing body of vulnerabilities and research, would have been the most likely vector of attack for a brand new browser? seems like kind of a glaring oversight, right?

and who could forget google's chris dibona ranting about android not needing anti-malware and sellers of such products being scammers and charlatans? of course now google themselves are hard at work trying to stem the tide of android malware with things like bouncer, but that's far from perfect.

heck, even google's infamous tavis ormandy had to take a second stab at executing his sophail vendetta* because his first attempt was so laughably bad.
[* i refer to it as a vendetta because a) it followed then sophos representative graham cluley publicly chewing tavis ormandy out for what has since become official google policy (disclosing vulnerabilities after a ridiculously short period of time), and b) the entire sophail effort from start to finish spanned years.]

now comes news that google's chrome browser doesn't require the user to enter a master password before displaying saved passwords. and not only that, but it also comes with a condescending head of chrome security, justin schuh, defending the design by claiming that master passwords breed a false sense of security by making people think it's safe to share their computer with others or leave it unlocked and unsupervised. he repeatedly falls back on the trope of "once the bad guy got access to your account the game was lost". never mind the fact that most people will assume the passwords are protected regardless of what chrome does because that's how most browsers have behaved for years (so not protecting the passwords is even worse than protecting them partially), nor the fact that attackers are also capable of bypassing the user account protection chrome is abdicating password security responsibility to. no protection is perfect, but that doesn't mean we throw out the imperfect ones or we'll eventually be left with none at all.

it's almost enough to make you think google never gets anything in security right the first time. but wait - it's not like password storage is an innovative new concept. there's been an established pattern around for years that they could have simply followed. it's not even like they could claim to not be aware of it when other browsers follow that pattern. frankly, if the folks at google really think they know password storage security better than everyone that came before them, from a UK software developer to mozilla engineers to bruce freaking schneier, then i respectfully suggest that they pull their heads out of their asses and get with the program. if they were really concerned about a false sense of security then maybe they shouldn't be storing passwords in the first place, after all it's not unheard of for a browser to be tricked into revealing the contents of its password store to a remote attacker when visiting a specially crafted malicious webpage.
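for anyone wondering what that established pattern actually looks like, here's a minimal sketch (my own, in python with the widely used `cryptography` package - not any browser's actual code, and all the names are mine) of master-password-protected storage:

```python
# minimal sketch of the master-password pattern: the master password
# itself is never written to disk; only a key derived from it can
# unlock the stored credentials, so casual access to the machine
# doesn't reveal anything
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(master_password: bytes, salt: bytes) -> bytes:
    # stretch the master password into a 32-byte key; the slow kdf
    # makes offline guessing expensive
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(master_password))

def store_password(master: bytes, secret: bytes) -> tuple[bytes, bytes]:
    # what actually gets persisted: a random salt plus ciphertext
    salt = os.urandom(16)
    token = Fernet(derive_key(master, salt)).encrypt(secret)
    return salt, token

def reveal_password(master: bytes, salt: bytes, token: bytes) -> bytes:
    # displaying a saved password requires re-entering the master
    # password; a wrong guess raises an error instead of leaking
    return Fernet(derive_key(master, salt)).decrypt(token)

salt, token = store_password(b"master phrase", b"hunter2")
assert reveal_password(b"master phrase", salt, token) == b"hunter2"
```

it's not perfect protection (nothing is, as discussed above), but it's strictly better than handing the plaintext to anyone who can click a button.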

Thursday, June 13, 2013

expert misuse of the term "virus"

in the past, the argument many experts have used against putting in the effort to actually correct people's misconceptions about what a computer virus is and what it does and how it's different from other forms of malware has centered on the idea that it doesn't really matter to a victim what kind of malware they have. when they have malware on their computer, all they care about is getting rid of it. i can understand this school of thought, even though i've never agreed with it. however, the world has changed.

this is a post-stuxnet world and defenders/victims aren't the only ones we need to consider anymore. we now have people barking orders to create digital weapons, and the details of those orders matter. if they say "make me a virus" then someone will make them a virus even if they didn't understand what they were asking for - and there's evidence to suggest they don't.

stuxnet is generally believed to have been a targeted, covert operation, but it used a noisy, untargetable* type of malware - a computer virus (or more specifically a worm). there were and are better ways to achieve the ends we believe the creators of stuxnet were after, so one can only assume that high level decisions were made in ignorance of the differences between malware types and the consequences those differences would have on that type of operation.

the question, then, of whether to put in the effort to educate people about the proper use of the term virus can no longer be answered by looking exclusively at the victims who want their computers to be clean. it is now also necessary to consider aggressors who want to use malware as a weapon to serve national interests. as misguided as such behaviour is, we have to accept that it's happened and will continue to happen, and actually knowing the differences between malware types may mean the difference between a surgically precise operation, or one with a lot of collateral damage. 

this isn't to say that i think we should start helping aggressors create and/or launch their digital weapons. i still don't believe in helping the bad guys, and even if i believed such nationalistic aggressors weren't bad guys (which i don't), i don't believe there's any way to help them without also helping those who are much more unambiguously bad. what i am saying, however, is that this particular form of ignorance that experts have been too lazy to address can cause real harm and ignoring it means ignoring the opportunity to reduce the unintended harm such people will cause.

(*many believe stuxnet was highly targeted, but there's a distinction to be made. while its destructive payload was highly targeted to a very specific environment, its self-replication was not - it spread far beyond its intended target)

Monday, May 27, 2013

more on bromium and snake oil

in my previous post about bromium i looked at claims that a technology reporter made about their technology (that it would kill all malware forever), noting that it was for all intents and purposes snake oil, and suggesting that the folks at bromium were doing the public a disservice by failing to dispel the false sense of security that that sort of reporting/opinion generates.

bromium's tal klein took exception to this based on what he believed to be my misunderstanding of their technology, and suggested i go read their white paper, among other things. here's some friendly advice for all you vendors out there: when someone calls you out for snake oil and you tell them to go read your white paper because they don't know what they're talking about, you better make sure they don't find more snake oil in your white paper - especially not in the second paragraph. and i quote:
It defeats all attacks by design.
that's a rather bold claim, don't you think? sort of suggests perfect security, doesn't it? but wait, there's more in the third paragraph:
It revolutionizes information protection, ensures compliance – even when users make mistakes, eliminates remediation, and empowers users – saving money and time, and keeping employees productive.
the emphasis above is mine. "eliminates remediation" or "eliminates the need for remediation" is something of a recurring theme in bromium's marketing materials. you can find it in their introductory video, and even hear a version of it from the mouth of simon crosby himself in their video on isolating and defeating attacks.

the only way you can eliminate remediation is if prevention never fails. but there is no such thing as a prevention technique that never fails. all preventative measures fail sometimes. if you believe otherwise then i've got a bridge to sell you (no, not really). perfect prevention is a pipe-dream. it's too good to be true, but people still want to believe and so snake oil peddlers seize on it as a way to help them sell their wares.

so it would appear that things are actually worse than i had originally thought. not only is bromium letting 3rd party generated snake oil go unchallenged, they're actively peddling their own as well. now just to be clear, i'm not saying that vsentry isn't a good product, from what i've read it sounds quite clever, but - even if you have the best product in the world, if you make it out to be better than it is (or worse still, make it out to be perfect) and foster a false sense of security in prospective customers, then you are peddling snake oil.

customers may opt to ignore the possibility of failure and the need to remediate an incident, but i wouldn't suggest it. to re-iterate something from my previous post, it's an isolation-based technique. although they often like to gloss over the finer details, their isolation is not complete - they have rules (or policies if you prefer) governing what isolated code can access outside the isolated environment, as well as rules/policies for what can persist when the isolated code is done executing. this is necessary. you're probably familiar with the aphorism about if a tree falls in a forest and no one is around to hear it, does it make any sound? well:
 if code runs in an environment that is completely isolated, does it do any useful work?
the answer is a resounding no. useful work (the processing of data to attain something that can itself be used by something else) has not occurred. all isolation-based techniques must allow for exceptions because of the nature of how work gets done. we divide labour, not just between multiple people, but also between multiple computers, multiple processes, multiple threads, and even multiple subroutines. we need exceptions to isolation, paths through which information can flow into and out of isolated environments, so that the work that gets done in isolation can be used as input for yet more work. this transcends the way the isolation is implemented, it is an inescapable part of the theory of isolating work.

and that is a weakness of every isolation-based technique - the need to breach the containment it affords in order to get work done. someone or something has to decide if the exception being made is the right thing to do, if the data crossing the barrier is malicious or will be used maliciously. if a person is making the decision then it boils down to whether that person is good at deciding what's trustworthy or not. if a machine is making the decision then, by definition, it's a decidability problem and is subject to some of the same constraints as more familiar decidability problems (like detection - after all, determining if data is malicious is as undecidable as determining if code is malicious). in the case of vsentry, a computer is making the day to day decisions. the decisions are dictated by policies written by people, of course, but written long before the circumstances prompting the decision have occurred, so people aren't really making the decision so much as they're determining how the computer arrives at its decision. the policies are just variables in an algorithm. the decisions made by people involve what things vsentry will isolate (it only isolates untrusted tasks, not all tasks), but people deciding what to trust and what not to trust is basically the same thing that happens in a whitelisting deployment or when people think they're smart enough to go without anti-virus software, and we already know the ways in which that can go awry.
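to make that concrete, here's a toy sketch (my own construction, not bromium's actual mechanism, and all the names are hypothetical) of the kind of rule-based gate every isolation technique ends up needing:

```python
# toy sketch of a rule-based isolation gate: the policies are written
# by people long before any request arrives, so at the moment that
# matters it's the machine, not a person, making the decision
from dataclasses import dataclass

@dataclass
class Request:
    task: str        # which isolated task is asking
    resource: str    # what it wants to touch outside its container
    operation: str   # "read" or "write"

# exceptions to the isolation, decided ahead of time
POLICY = {
    ("browser", "~/Downloads", "write"),  # downloads may persist
    ("browser", "clipboard", "read"),     # pasting into the sandbox
}

def allowed(req: Request) -> bool:
    # a rule that's too loose here is a hole in the containment;
    # a rule that's too tight means no useful work gets out
    return (req.task, req.resource, req.operation) in POLICY

print(allowed(Request("browser", "~/Downloads", "write")))   # True
print(allowed(Request("browser", "~/.ssh/id_rsa", "read")))  # False
```

the policies are just variables in an algorithm, exactly as described above; whether the gate does the right thing depends entirely on whether the people who wrote the rules anticipated the circumstances correctly.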

vsentry may have scored a perfect score in an evaluation performed by NSS Labs using malware and an operator utilizing metasploit, but that doesn't mean it's perfect any more than receiving the VB100 award makes an anti-virus product perfect. they weren't able to find a way past vsentry's defenses because vsentry is still new and still novel. it will take time for people to figure out how to effectively attack it, but eventually they will. the folks at bromium need to tone down their claims and take these famous words to heart:
Don't be too proud of this technological terror you've constructed - Darth Vader

Monday, May 20, 2013

no, bromium will not kill all malware forever

over the weekend a discussion broke out on twitter (as discussions are wont to do) about a somewhat overly optimistic article concerning the new anti-malware apple of the security community's eye: bromium.

the primary tactic that bromium uses (or at least the primary one that people focus on) is isolation/sandboxing. bromium's vsentry product uses virtualization on a per-process basis to isolate every process from the system and from each other. that level of granularity for isolation is a lot higher than most sandboxing efforts can give you. while there are certainly benefits to that granularity, there are also drawbacks.

perfect isolation is actually not desirable; we want and even need to be able to use the results of one process inside another one. the more sandboxes you have, the harder this is to manage. the folks at bromium have opted to address this issue using rule-based systems to decide what something in a sandbox can access as well as what to do with any changes that are left when the sandboxed process is finished. rules which, in all likelihood, the administrator can modify to suit their needs.

now, while the article in question is reasonably good at explaining what bromium's vsentry does, the author (jason perlow) takes the arguably naive view that this sandboxing technique can stop all possible malware (as evidenced by the article's headline: "Bromium: A virtualization technology to kill all malware, forever"). the reality, however, is that there are limits to what sandboxing can do, and as clever as the folks at bromium are, they aren't clever enough to deliver on the promise that headline makes.

that's a problem, because people are going to read that headline, see nothing in the article to actually contradict it, and believe that it's actually true. have we seen claims like that before? sure we have - saying it can kill all malware forever is not intrinsically different from claiming 100% protection. it's classic snake oil, only in this case it's not the vendor that's spreading it (as far as we know - we don't know exactly what the folks at bromium may have said to mr. perlow, only that they say the headline is his words, not theirs).

i suppose that should mean there's no problem, right? the vendor's hands are clean, after all. the snake oil is being spread by a third party. the vendor isn't doing anything about it in this case or previous cases that have arisen because, let's face it, they benefit from it. it's good for bromium's business if people think vsentry is better than it actually is, at least in the short term. in the long term, the kinds of mismatched expectations that creates are the same kind that the AV industry struggles with daily.

it is bromium's responsibility to control how their products are perceived, and by failing to take action they are giving tacit approval to the snake oil being spread on their behalf. their hands are not actually clean, they are dirty through negligence. however, i didn't really expect any better of them (though i did give them an opportunity to surprise me) and you probably shouldn't either. tread carefully - caveat emptor.

know your enemy: security vendors

just to be clear, i'm not suggesting that vendors are waging some kind of war against their own customers - they aren't (usually) that kind of enemy. but by the same token, vendors are not your friends either. when it comes to laying out strategies for protecting yourself and your stuff, it's important to know in which category to place the various players involved, and vendors are best thought of as adversaries.

to better explain what i mean, imagine you're sitting around a table with your friends playing the classic board game monopoly. although these people really are your friends, in the context of the game, their goal is to win at everyone else's expense. in serving their own interests, they act in ways that don't serve yours and in fact may sometimes be in direct opposition to your interests. in this way it can be said that you and your friends have competing interests.

the customer and the vendor are generally not competing with each other in the conventional sense, but their interests are not aligned and in some cases the interests do compete. you as a customer have an interest in keeping your computers, intellectual property, banking credentials, etc. safe and secure. vendors also have an interest in that to a certain extent, but protecting you and your stuff is not a vendor's highest priority.

vendors are companies. as such their highest priority is the bottom line. without the bottom line, the company ceases to be. companies don't just start up out of thin air, they need money; which means they have investors and those investors expect a good return on their investment, or else it's not a good investment and they might not invest anymore in the future, or maybe even pull out their stake in the company. companies also have operating expenses. they need to pay to keep the lights on and the machines running, and they need to pay their employees who themselves have expenses (families they need to feed and put roofs over their heads). therefore the company has to make profit its priority. the way vendors make money is by vending - they sell a product and the more product they sell the more money they make.

in theory if the product is good then they'll sell more of it, but it doesn't need to be good enough to stop all the threats to you or your stuff - vendors aren't competing with the bad guys, they're competing with each other, so they only need to be better than other vendors. what's more, since technical 'goodness' is difficult for customers to accurately quantify, the vendor only needs their product to be perceived to be good. technical quality is still required up to a point, of course, because you can't fool all the people all the time. but, since your buying decisions as a customer are based on perception, and that perception can be altered/manipulated more cheaply through marketing than through technological advancement, companies engage in this kind of shortcut to help them maintain or even advance their market position.

how does this compete with your interests as a defender of yourself and stuff? well, in a few different ways, actually:

  1. by conventional falsehood, they make their product out to be better than it is and so draw you away from something that may actually suit your needs better (example: look at any vendor that's ever claimed to be able to take care of all/100% of any kind of threat)
  2. by omission, they make solving your security problems seem easier than they really are because nobody wants to make the customer swallow a bitter pill about how much work is really involved in staying safe, especially when their competitors aren't doing it (example: how many vendors will tell you about what you need to do when their product doesn't work? how many will even talk about that scenario?)
  3. by framing the issue, they make the customer think about the customer's security issues in the vendor's terms, thereby favouring the vendor's proposed 'solution' rather than formulating strategies to meet the customer's own unique, individual needs (example: a number of anti-malware vendors used to provide generic detective controls in the form of integrity checkers, but those seem to be mostly gone now and vendors instead talk about technologies based on having varying degrees and types of knowledge about threats, while 'generic detection' (of a different sort) has become a glossed over, value added feature of their scanners)
all of these work against your interests in protecting yourself and your stuff. they work against you finding the best tool for your job, or figuring out everything you need to do, or even knowing there's more to it than just using the vendor's product.

before you get the wrong idea, i don't want you to think this is a condemnation of the people who work for vendors. individually, many of them may well be much closer to being your friend and being on your side than the company they work for as a whole is. their interests are never perfectly aligned with yours, of course. you won't see them sacrificing their own interests (their families, their money, their jobs) for your benefit, and you wouldn't really expect them to, would you? some of them (a scant few when you consider the total number that security vendors employ) will sacrifice some of their time and energy to help people (whether their company's customers or no) learn about the threats that are out there and thus be better armed against those threats. just because someone works for a vendor doesn't mean their character is a reflection of the character of the corporate entity that employs them. yes, companies are run by people but it's their collective behaviour that makes the character of the company. the phrase "none of us are as cruel as all of us" doesn't just apply to anonymous, nor does it just apply to cruelty. 

i also don't want you to think this is a condemnation of vendor companies either. remember, they're not exactly enemies in the conventional sense, but rather adversaries. as much as i tend to refer to them as bad actors, or irresponsible, or any number of other judgmental labels, i can't really see how they could work any other way. the judgments are really just a way of highlighting the divergence of interests between the vendor and the customer. there is some variation in the degree to which they do the things that they do, of course. smaller companies are more easily influenced by noble ideals, in part because of size and in part because they have less at stake and so can afford to be more 'innovative' in how they operate. it doesn't always work that way, and it doesn't mean their bottom line isn't still the bottom line, but some take a more scenic route to their goals.

that being said, the fact remains that vendors' interests do not align with those of their customers (i.e. you). that means it's important to take what they say with a grain of salt and to evaluate whether the things they say or do or produce are really of actual benefit to you. pick over what they have to offer, take what you can use and throw away the rest. in essence, forage on the enemy.

Tuesday, April 30, 2013

the abc's of security

over the years i've found myself becoming increasingly dissatisfied with the boiler plate advice i formulated when i was younger, as well as all the other boiler plate advice i've seen/heard given by other people, and even the very concept of boiler plate advice itself. this includes things like best practices (aren't you done practicing yet?) and really any simple, prescriptive answer to questions involving how to keep oneself secure. more and more they seem like incomplete or obsolete anachronisms that aren't suited to the diverse and ever changing circumstances in the real world. never mind the fact that everyone's values (and thus their priorities) are slightly different from each other so boiler plate advice is rarely a really good fit - and of course people's priorities change over time, too.

i've grown and evolved as a security user (a user of security), and no boiler plate seems capable of reflecting my reality anymore. it's just not how i think about or approach the problem of keeping myself secure, and i find it difficult to direct others down such fixed, one-dimensional paths.

and yet i know people still need advice and direction in order to grow themselves. the subject of first principles and fundamentals occasionally comes up and so i thought to myself what is the most fundamental thing in all of security? if there was just one thing about security that i could impart to another human being, what would it be? the answer is surprisingly simple, surprisingly complex, and surprisingly not limited to just security but in fact really a life lesson that happens to have meaning within security.

the most important thing for anyone to remember when it comes to defending yourself and the things and people you care about is this:

always be changing.

when i say this, i don't mean changing mindlessly like some derivative of the crazy ivan maneuver from the movie the hunt for red october (although being unpredictable certainly has tactical advantages) but rather that you change what you do to protect yourself in intelligent, mindful ways. you should always be learning, always growing, always evolving, always adapting, always improving. don't stand still because your adversaries certainly won't be and you don't want to fall behind (or at least any further behind).

there are no easy answers, no matter how many people may be offering them (it seems like everyone does), and no matter how well-intentioned they may be.

Wednesday, February 06, 2013

debating AV effectiveness with security experts

a rather disheartening conversation took place on twitter over the weekend. as public conversations sometimes do, it grew beyond any capability i have to do it justice through description, so instead i'll provide some screenshots and links to a couple of branches of the discussion.

because i don't follow either dan kaminsky or robert graham, i knew nothing about this discussion until someone retweeted the tweet pictured below (i included as much context as i could):

what first made me take interest in this was that robert graham seemed to be talking about 2 different things as though they were the same. the AV that's only 4% effective (or 0% when he's done with it) is different than the AV that organizations pay 40% of their budget on.

the apparently ineffective AV is actually the scanner component of the AV; as you can see he describes his methodology for bypassing it - a methodology that essentially amounts to malware q/a, which happens to be a countermeasure against heuristic detection, which is a feature of scanners.

the AV that organizations spend 40% of their budget on (assuming that's an accurate figure, i wouldn't know) is the enterprise security suite, which includes other things beyond just the scanner. for example, the tweet by dan kaminsky that seems to have started the entire conversation alludes to the failure of symantec's product to stop 44 out of 45 pieces of malware in the recently publicized attack on the new york times. but as symantec rightly pointed out, their product included a reputation system which, for all intents and purposes, behaves much like a whitelist - if something doesn't have a good reputation (and new things have no reputation at all) then it will be flagged. that is about as different from a traditional scanner as one can imagine, and bypassing it isn't nearly as straightforward.
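
to make the distinction concrete, here's a rough sketch in python of the whitelist-like logic being described. everything in it (the names, the scores, the threshold) is invented for illustration - this is not symantec's actual design:

# hypothetical reputation store: file hash -> prevalence/trust score
KNOWN_REPUTATIONS = {
    "hash-of-widely-deployed-signed-binary": 0.98,  # long history, known good
    "hash-of-rarely-seen-unsigned-binary": 0.15,    # little history
}

def should_flag(file_hash, good_threshold=0.8):
    # flag anything that can't demonstrate a good reputation. note the
    # default for unknown hashes: a brand new file has *no* reputation,
    # so it gets flagged - the opposite of a traditional scanner, which
    # only flags what it already recognizes as bad
    reputation = KNOWN_REPUTATIONS.get(file_hash, 0.0)
    return reputation < good_threshold

print(should_flag("hash-of-widely-deployed-signed-binary"))  # False
print(should_flag("hash-never-seen-before"))                 # True

the practical consequence is that the malware q/a described earlier doesn't help much here - making a brand new sample undetectable to scanners does nothing to give it a good reputation.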

talking about 2 different "AV"s as though they were the same is symptomatic of not being able to see beyond the abstraction that is AV. conceptually AV is an abstraction that encompasses a variety of disparate preventative, detective, and recovery techniques. most people, however, just see AV as a magic box that you turn on and it just protects things. the only component that behaves anything like that is the real-time scanner, but it is not the only component in a security suite (especially an enterprise security suite) by any stretch of the imagination.

failing to see beyond the abstraction means, unfortunately, that you will fail to argue intelligently about the subject. it also means you will probably fail to make effective use of AV. if you don't know how a tool works, how can you possibly hope to use it to its fullest advantage? furthermore, how can you value something you don't understand? just as you can't price a car based on the effectiveness of the wheels, you shouldn't value AV based on the supposed effectiveness of the scanner.

one of the things that also became clear as i read some of the subsequent tweets was that robert seems to think there's nothing special about his attacks. but the fact is his attacks are special. as a penetration tester he launches targeted attacks. targeted attacks take more effort, more human capital, to execute, and he himself described some of that extra effort. this basically means targeted attacks are more expensive to launch than the more automated variety and that they don't scale quite as well. consequently, targeted attacks represent a minority of the overall attacks being performed. note, however, that that doesn't necessarily mean targeted attacks are a minority of the attacks a particular organization sees, as it's entirely possible that an organization may be a juicy enough target to receive a great deal of attention from targeted attackers.

from what i can tell, dan kaminsky also has difficulty seeing beyond the abstraction of AV. much of what he says above reflects the idea that AV is a magic box that you simply turn on and it should protect you. in the preceding example it appears he thinks there's an expectation that organizations solve all their security problems with scanners when in fact the expectation is simply that they have AV suites in their toolbox and that they use the appropriate tool (which may or may not be part of the suite) for the job (as rik ferguson attempted to explain).

dan also quoted the DBIR as showing AV to only be effective 3% of the time. i wondered about that so i looked a little deeper. DBIR stands for Data Breach Investigations Report. let the meaning of that phrase sink in a little bit. a data breach investigations report is a report about data breach investigations. data breach investigations are investigations of data breaches, and data breaches only occur when all the effort you put into preventing them failed.

you can't judge how successful something is by only looking at its failures.

one of the consequences of this is that dan has actually failed to understand the statistic he's reported. the 3% where AV detected the breach still represents a failure because it was a detection after the breach had happened. this can happen due to things like signature updates (a scanner can detect more today than it could detect yesterday).
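
to see why that 3% figure can't tell you what dan thinks it tells you, consider a toy simulation (the numbers below are invented for illustration - they are not DBIR figures). even an AV that blocked the vast majority of attacks would show up in breach-only data as having 'helped' almost never, because every attack it blocked never became a breach to investigate:

import random
random.seed(42)

BLOCK_RATE = 0.90        # assume the AV stops 90% of attacks outright
LATE_DETECT_RATE = 0.03  # assume 3% of breaches get noticed later via AV

attacks = 100_000
# a breach is an attack the AV failed to stop
breaches = sum(1 for _ in range(attacks) if random.random() > BLOCK_RATE)
# of those failures, a few are detected after the fact (e.g. following a
# signature update) - those are the only 'AV detections' a breach
# investigations report would ever record
detected_late = sum(1 for _ in range(breaches)
                    if random.random() < LATE_DETECT_RATE)

print("overall effectiveness:", 1 - breaches / attacks)     # ~0.90
print("breaches flagged by AV:", detected_late / breaches)  # ~0.03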

another consequence is that trying to use the DBIR to evaluate the effectiveness of AV suffers from self-selection bias, because the failure itself is what causes an event to be included in the study - a success would have excluded it. now, one might have entertained the possibility that dan simply wasn't familiar with selection bias but, as we will see, that appears not to be the case.

it appears that dan has in fact heard of selection bias before, not to mention the WildList too (bravo). unfortunately it doesn't appear that he can use them properly.
  • AV testers don't define the WildList, WildList reporters do (in a sense, but it's probably more accurate to say that the WildList is the result of a particular type of sampling)
  • AV testers typically don't do WildList testing, although virus bulletin does offer a WildList-based certification in addition to larger, more inclusive tests
  • if a product only detects 90%+ of the WildList, it's generally considered to be crap, because the WildList is the absolute bottom of the barrel of performance metrics. anything less than 100% is an embarrassment.
  • AV testers don't define the set of malware they're going to test against, they cull samples from as wide a variety of real-life sources as they can and describe them as being 'in-the-wild' so as to distinguish them from malware that only exists in a 'zoo' or malware that was whipped up in a lab for testing purposes (something that's generally frowned on)
  • defining what you're going to measure is not actually selection bias. "Selection bias occurs when some part of the target population is not in the sampled population" ("Sampling: Design and Analysis" by Sharon L. Lohr) (now THAT's a textbook definition of selection bias - good thing i was minoring in statistics in university). if testers defined their target to be the samples they already had then by definition there couldn't possibly be any selection bias, because the target population and the sampled population would be the same set.
this isn't to say there isn't selection bias in the tests performed by AV testers. it's entirely possible that some classes of malware (perhaps even targeted malware, for example) are harder to find samples of due to circumstances outside the testers' control. that being said, that bias is a lot more subtle than looking exclusively at failures.
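
to make the textbook definition concrete, here's a trivial illustration with hypothetical malware categories standing in for real populations:

def selection_bias(target_population, sampled_population):
    # per the Lohr definition: selection bias occurs exactly when some
    # part of the target population is not in the sampled population
    return target_population - sampled_population

collected = {"mass mailers", "banking trojans", "ransomware"}

# if the testers define the target to be the samples they already have,
# the target and the sample are the same set - no bias is possible
print(selection_bias(collected, collected))       # set()

# but if the target is 'malware in the wild' and targeted implants are
# hard to come by, that gap is a real (if subtle) selection bias
in_the_wild = collected | {"targeted implants"}
print(selection_bias(in_the_wild, collected))     # {'targeted implants'}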

now, it just so happens that i continued digging into the DBIR beyond just figuring out what went into it, and came across a rather interesting chart.
i highlighted the part that should really be interesting here. just to be clear, this only covers organizations that have to be compliant with PCI, but unless organizations that are legally obligated to run up-to-date AV are somehow magically more stupid than the rest, the remaining organizations actually have less motivation to run up-to-date AV, so their numbers are probably just as low, if not lower.

now what this means is that there is really no way at all to use the DBIR to evaluate the effectiveness of AV, because it appears that most of the organizations included in the report can't even follow the most basic of AV best practices. it also suggests that dan hasn't read his own sources thoroughly enough. if i'm not mistaken, his 3% figure comes from the year when only 53% of PCI-bound organizations were running up-to-date AV. the subsequent year, only 1% of breaches were discovered by signature-based anti-virus, and in the most recent one, AV doesn't appear to have helped after the fact at all.

that ever-decreasing percentage of organizations running up-to-date AV is actually kind of disturbing, and it makes you wonder what's hurting organizations more: the amount of money they pay to AV vendors, or the amount of attention they pay to security experts pontificating on subjects they are demonstrably ignorant of?

i anticipate that there will be those thinking that all i do is criticize and that i have nothing constructive to offer, so let's think about how we'd really measure AV effectiveness. the independent AV tests are apparently not good enough - in fact dan kaminsky went so far as to say this:
so how would we measure effectiveness really? like effectiveness in the field, where AV is actually getting used? well, first of all we stop limiting our data collection to just those instances where AV failed, because that's just ridiculous. no, we'll need to collect data on each and every time the AV raises an alert as well, to supplement our failure statistics. oh, and we'll have to follow up each of those alerts to make sure they aren't false positives, because those certainly don't contribute to the effectiveness of AV. we'll also have to use all of the AV suite, rather than just parts of it, so that people can determine whether the effectiveness justifies the money they pay for the entire suite. additionally, we'll need to control for variables - different organizations have different security measures and controls that they deploy in conjunction with AV that may stop some malware before the AV gets a chance. that's not a bad thing, of course, and if they all used the same security measures then we could collect data on how effective AV is under those particular circumstances. but because each organization has different measures, they'll affect the AV results to differing degrees and that will skew the measurement. so we'll have to get organizations to either all use the same complementary measures or get them all to stop using any complementary measures, neither of which seems very likely in production environments. that leaves us with trying to simulate what happens in the field - but then we get back into AV testing lab territory, which apparently 'no competent soul on the planet' trusts.
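
a sketch of the bookkeeping that kind of measurement would require might look something like this (the event log and its fields are hypothetical - the point is simply that every alert gets recorded and followed up, not just the failures):

from dataclasses import dataclass

@dataclass
class Event:
    alerted: bool      # did the AV raise an alert?
    was_malware: bool  # ground truth, established by following up

def field_effectiveness(events):
    tp = sum(1 for e in events if e.alerted and e.was_malware)
    fp = sum(1 for e in events if e.alerted and not e.was_malware)
    fn = sum(1 for e in events if not e.alerted and e.was_malware)
    return {
        "detection rate": tp / (tp + fn) if (tp + fn) else None,
        "false alarm share": fp / (tp + fp) if (tp + fp) else None,
    }

log = [Event(True, True), Event(True, False), Event(False, True),
       Event(True, True), Event(False, False)]
print(field_effectiveness(log))  # detection rate 2/3, false alarm share 1/3

and even with bookkeeping like that, the confounding from complementary controls described above remains - which is exactly the point.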

the reality is that it doesn't matter what kind of test you do, it's never going to match people's anecdotal experience. that's because testing is inherently designed around the idea of arriving at a single set of results representing how effective AV can be - it's never going to be able to reflect what happens when variables such as complementary controls, sub-optimal operation, targeting motivation, etc. aren't controlled for - and in the real world they aren't controlled for. tests necessarily reflect ideal circumstances and your mileage may (probably will) vary.

were i to be overly judgmental, i might sign off this post with this little gem from none other than robert graham himself, which i found yesterday while going through my RSS backlog:
The problem with our industry is that it's full of self-styled "experts" who are adept at slinging buzzwords and cliches. These people are skilled at tricking the masses, but they have actually zero expertise in cybersecurity.
but i prefer the school of thought from my own post about security experts from 2006 - security is just too big for anyone to be an expert in all parts of it. it seems to me that the expertise of dan and robert lies elsewhere.

it's important for people to recognize their own limitations, and i believe it's just as important to recognize the limitations of the authorities you're listening to, lest you give credence to well-meaning but uninformed experts. anti-malware is a complicated field, more complicated than i think either dan or robert realize, and if they have difficulty seeing beyond the AV abstraction, imagine how many other people do as well. i hope someday dan and robert and other experts like them can gain a deeper appreciation for how complex it is, so that they can pass that along to those who depend on them to do the heavy cognitive lifting.

Thursday, January 03, 2013

imperva's anti-virus study is garbage

Enough is enough! I have had it with these motherf#$%ing flakes on this motherf#$%ing train of thought - (what i imagine samuel l. jackson might say if he were following this nonsense about imperva)

in case you are unfamiliar, imperva (a security vendor of some sort) commissioned a bunch of students from the technion-israel institute of technology to perform an evaluation of the efficacy of anti-virus (all anti-virus as a whole, apparently, rather than comparing them to each other) by uploading 82 found samples to virustotal. yes, you read that right, it's another virustotal-based test.

these days i have a number of alternative avenues to express myself that i didn't have when this blog was still young, and they can often sate my need to express my feelings on some topic. i can make snide comments on twitter, or even post parody tweets from a satirical twitter account. in fact i can even make memes about it. unfortunately none of that has proven sufficient in this case, because the hits just keep coming.

you see, imperva keeps shopping this quackery out to more and more media outlets, where it gets gobbled up and regurgitated uncritically by writers/editors (who really ought to know better if reporting on this sort of topic is part of their actual job) and thus gets put in front of more and more eyeballs belonging to those who realistically can't know better. along the way it has even collected somewhat supportive voices from venerated members of the security community like robert david graham or wim remes.

let me be clear, however - this is all wrong. as has been repeated over and over again, virustotal is for testing samples, not anti-malware. they say so themselves on their about page:
The reason is that using VirusTotal for antivirus testing is a bad idea.
and
BAD IDEA: VirusTotal for antivirus/URL scanner testing
those statements alone should be enough but, because virustotal later talks specifically about comparative tests, imperva (and others) have tried to argue that imperva's test is OK because it doesn't compare products to each other. however...
VirusTotal's antivirus engines are commandline versions, so depending on the product, they will not behave exactly the same as the desktop versions: for instance, desktop solutions may use techniques based on behavioural analysis and count with personal firewalls that may decrease entry points and mitigate propagation, etc.
this makes it pretty clear that the product a customer installs is very much a different thing from the program that virustotal uses - they will in most cases behave very differently and so the results that virustotal spits out cannot be considered representative of what actual users of anti-malware products will experience.
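
the mechanics of such a test make the mismatch obvious. here's roughly what a virustotal-based 'test' boils down to, sketched against the public v2 API of the era (the key and sample hash are placeholders, and i'm not suggesting you actually do this):

import requests  # third-party; pip install requests

API_URL = "https://www.virustotal.com/vtapi/v2/file/report"
params = {
    "apikey": "YOUR_API_KEY_HERE",          # placeholder
    "resource": "sha256-of-a-sample-here",  # placeholder
}
report = requests.get(API_URL, params=params).json()

# all you get back is a count of command-line engine verdicts for one file
print(report["positives"], "of", report["total"], "engines flagged it")

a number like that measures the static verdicts of command-line engines on a single sample - it says nothing about what the behavioural, firewall, or reputation layers of the installed products would have done.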

(ironically, a product that appears to fare best in a virustotal-based test may actually be the worst, because a heavier focus on the type of static (often signature-based) detection that virustotal best measures could be covering for a weakness in (or absence of) more generic/dynamic detection capabilities.)

but don't just take my word for it, let's hear from a couple of people who actually work at virustotal
yes, that's right, imperva's study is a joke. this shouldn't be surprising to long-time readers of this blog, since when i first wrote about this problem four years ago, the first reason i gave for why you might want to avoid performing virustotal-based tests was that those of us who know better will laugh at you. i'm sure a number of people are laughing at imperva's gross incompetence (hanlon's razor makes me choose this explanation over the more sinister alternatives), but i'm afraid i can't consider the mess they're making to be a laughing matter.

promulgating ignorance in a security context has the potential to do real harm, and that is where i draw the line. that's why i'm writing this, that's why the title gets straight to the point, and that's why i'm going to start naming some names of people/organizations who have helped make this mess and who really ought to have known better. imperva has behaved like a dung beetle, persistently rolling this turd around, but somehow it keeps getting bigger like some katamari damacy of bullshit, and i think it's important to see the scale and scope of it and hold the people responsible accountable. it's worth noting, however, that somewhere deep down someone at imperva must also have seen the potential for their message to do harm - that's why the caveat that they weren't advising eliminating AV was added (as an apparent afterthought).

a non-exhaustive list of people/orgs who really should have known better, tried harder, and ought to be held to account for this growing mess is as follows:
(i'm aware there's a lot more than this that you can simply find by googling sentences from the press release - i wish i had the time to make this list exhaustive. that said: reuters, the new york times, and the wall street journal... that definitely caught a lot of eyeballs)

now, perhaps you're thinking i'm being too hard on the journalists involved here. after all, they aren't experts. frankly, however, they don't have to be experts to see what's wrong with this test. if you're the type of reporter who reports on this type of technology then you should already know about virustotal and about how it can and can't be used. this isn't rocket science, or even some obscure nuance that only matters every 5th wednesday - not in the context of reporting on this subject. this is something reporters covering security technology ought to know. it's table stakes. you need to be this tall to get on the ride.

perhaps you think i'm being too hard on the students and their supervisor(s)? but this is academia we're talking about. they're expected to do their research, and i don't just mean the experimental research - i mean looking up and reading about the issues involved in designing and performing tests on anti-malware products. and their supervisor(s) should have made sure they were doing their due diligence in this regard. frankly, in my time i've seen lone rank amateurs perform better tests than this with fewer resources. this is not acceptable academic performance.

and as for imperva themselves, well... if you intend to occupy part of the security industry that hopes to steal some of the AV industry's market, then you better know this stuff like it's the back of your hand. the institutional incompetence going all the way up the chain of command to the chief technology officer is astonishing and i'm surprised they managed to find someone with too many dollars and too little sense to give them funding, but i guess p.t. barnum was right about there being one born every minute.

imperva - do yourselves a favour and put a stop to this mess before it gets any bigger. you can't defend this junk computer science; the truth will eventually come out (it seems to have already started). you can't sweep it under the rug either - you've let things get too out of hand. the kind of smear campaign you're currently running was already attempted by the whitelisting industry years before you, and while that industry itself is still around and may even still be pumping out this same kind of junk, it didn't stop them from drifting back into obscurity. the way i see it, the only way you can move forward sustainably is to:
  1. admit your error
  2. publicly retract your study
  3. reach out to the journalists whose reputations have been tarnished by listening to you and apologize
  4. assist the students you dragged into this in learning the error of the experimental methodology they followed (you can probably find a lot of good info either on or linked to from the anti-malware testing blog)
  5. start over with a more intelligent methodology and try to make your case again with valid data
and if you can't manage to follow these steps then i'll be glad to watch you fade away or get swallowed up in a few years' time, because the kind of incompetence you've been proudly displaying so far is not the path to success.