there are two primary questions he tries to answer, the first of which is whether or not AMTSO is serious about improving anti-malware testing. he concludes that the answer is no, and holds up VB100 as an example to support this conclusion: he thinks that if AMTSO were serious about improving anti-malware testing they'd ban the VB100 test on the basis that it misleads the public. of course, in reality AMTSO hasn't done that because they can't - they don't have the power to do so. AMTSO is trying to create improved standards, but they don't have the authority to enforce those standards. all they can do is use indirect means to exert pressure on testing organizations to improve their methods. anyone reading the AMTSO FAQs, especially the one about their charter, can plainly see that enforcement is neither explicitly mentioned nor implicitly referred to.
additionally, VB100 isn't actually a test in its own right. it's a certification/award based on a subset of the results of a larger comparative review. kevin would have known this had he bothered to read the first sentence of the VB100 test procedures. furthermore, the VB100 award itself is not misleading. this is one instance where we really ought to be shooting the messenger, because the misleading is being done by vendor marketing departments, which happen to use the VB100 award in an incredibly superficial and manipulative way (and frankly there's little that testers could do to stop that). there actually isn't all that much wrong with the VB100 award, except that, being based on the WildList, it has lost most of its relevance. that said, certifications in general have limited relevance, as all they really do is help establish a lower bound on quality. a lot of people don't understand this, or even what VB100 really is, but that lack of understanding is hardly the fault of virus bulletin, especially when most people don't even go to the virus bulletin site to learn what the results mean.
before i move on to the second of kevin's main questions, i'd like to take an aside and look at something he wrote about the WildList itself:
"this latency means that, almost by definition, the Wild List includes little, if any, of the biggest threat to end-users: zero-day malware"

what kevin and many before him have failed to realize is that the reason zero-day malware is as big a threat as it is today is that its competition has been largely eliminated thanks to a focus on the WildList. without that focus we'd still be getting compromised by the exact same malware year after year, because the stuff that was demonstrably in the wild wouldn't be getting higher priority treatment.
the second of kevin's main questions had to do with whose interests AMTSO is really serving. he concludes that they serve the vendors' interests rather than the end users', based on his assumptions about the reasoning behind their adherence to the rule about not creating new malware, but also based on his decision to buy into the spin being put forth by NSS Labs CEO rick moy.
for starters, i can't believe that after all these years people are still getting bent out of shape over the 'no malware creation' rule or trying to read ulterior motives into it. it's one of the oldest and most fundamental ethical principles in the anti-malware community. if people found out that the CDC was creating new diseases they'd be up in arms - worse still if one of those new diseases got out (something which has happened in the malware world) - but in the case of the anti-malware community, outsiders assume the rule exists because everyone in the community has vendor ties and the vendors don't want to look bad in tests. we're not talking about the 'we mostly frown on malware except when it's useful to us' community, it's the ANTI-malware community. you can't really call yourself anti-X if you go around making X's. that would just make you a hypocrite.
furthermore, and speaking directly to the following rather uninformed rhetorical question kevin puts forward about the 'no malware creation' rule:
"Why not? How can you test the true heuristic behavioral capabilities of an AV product without testing it against a brand new sample that you absolutely know it has never experienced before?"

it is already possible to test anti-malware products against malware they've never seen before without creating new malware, and it has been possible for a long, long time. it's called retrospective testing, and anyone familiar with tests (not even testing issues, just the tests themselves) knows that retrospective tests make vendors look terrible. detection rates around 40% used to be the norm, and in more recent times they've edged up closer to 50%. there are still some below the 40% mark, though, and even some below the 20% mark.
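the idea behind retrospective testing is simple enough to sketch: freeze the product (signatures, heuristics and all) at some cutoff date, then score it only against samples that first appeared in the wild after that date - every sample in the test set is then provably one the product has never seen. here's a minimal illustration in python, with entirely made-up sample data and a made-up representation of what a frozen engine flags (real retrospective tests obviously run the actual frozen product against real samples):

```python
from datetime import date

# hypothetical frozen product: whatever the engine, as of the freeze
# date, is able to flag (via signatures, generics, or heuristics)
freeze_date = date(2010, 1, 1)
frozen_detections = {"sig_a", "sig_b", "sig_c"}

# hypothetical samples, each labelled with the date it was first seen
# in the wild and what (if anything) the frozen engine flags it as
samples = [
    {"id": "sample1", "first_seen": date(2010, 2, 1),  "flagged_as": "sig_a"},
    {"id": "sample2", "first_seen": date(2010, 2, 10), "flagged_as": None},
    {"id": "sample3", "first_seen": date(2010, 3, 5),  "flagged_as": None},
    {"id": "sample4", "first_seen": date(2010, 3, 20), "flagged_as": "sig_b"},
    {"id": "sample5", "first_seen": date(2009, 12, 1), "flagged_as": "sig_c"},
]

def retrospective_detection_rate(samples, detections, cutoff):
    """score the frozen product only against samples newer than the
    cutoff, so every sample is one it has demonstrably never seen."""
    test_set = [s for s in samples if s["first_seen"] > cutoff]
    detected = [s for s in test_set if s["flagged_as"] in detections]
    return len(detected) / len(test_set)

rate = retrospective_detection_rate(samples, frozen_detections, freeze_date)
print(f"retrospective detection rate: {rate:.0%}")  # 2 of 4 post-freeze samples
```

note that sample5 predates the freeze and is excluded entirely - counting it would credit the product with detections it could have gotten from an ordinary signature update, which is exactly what this methodology is designed to rule out.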
as for believing the NSS spin and using the 2 test reviews available on the AMTSO site (yes, what an incredibly small sample size) to try to support that view, i offer the following counter-argument: retrospective tests have made far more vendors look far worse than NSS's test did, and no one is challenging those results. no one is using the AMTSO review process to dismiss those tests, as kevin phrased it. how can the conspiracy theory about protectionism in AMTSO be true if nobody is trying to discredit tests that are even more damning and damaging than NSS's? if you think it's because the tests are too obscure, think again - they're produced by some of the top names in independent anti-malware testing (even NSS's own vikram phatak recognized one of the organizations as being independent in that video i've referenced twice before), who also happen to be part of AMTSO.
kevin believed the spin, i suspect, because he was predisposed to. previous posts on his blog show an existing bias against AMTSO, apparently due in part to the involvement of vendors. there is a very sad tendency in the general security community to be unable to see past a person's vendor affiliations. apparently people think that if you work for a vendor you're nothing more than a mouthpiece for your employer, and that the entire company is one big unified collective entity. no attempt is made to distinguish between divisions within the company, or to recognize the huge difference between the technical people and the business people in those companies (you never know what you're going to get when the two overlap, though - just compare frisk with eugene kaspersky). it's not the business people, the marketroids (so called in order to distinguish them from actual human beings), or the HR departments participating in AMTSO, it's the researchers.
one final idea that kevin put forward in his post is the importance of the user - going so far as to suggest that users should be part of AMTSO, that users should determine whether tests are any good, etc. i don't know what on earth he was thinking, but the layman hasn't the tools to divine good science from bad. most users (and i say this as a user myself) haven't got the first clue about what makes a test good or biased. in fact, most users don't read or interact with tests at all. the only thing they know about tests is what they read in vendor marketing material (usually on the cover of the box), which, as previously mentioned, neither testers nor AMTSO have any control over. i really don't see what users could bring to AMTSO, but i do see something that AMTSO could bring to users - namely, tools to help them understand the tests, to put them in the proper perspective, and yes, to pick out the good ones from the bad.
to be perfectly honest, i understand some of the indignation kevin is directing towards testers and, by extension, AMTSO, but i think it's misdirected. for most people, marketing is the first and sometimes only voice they hear with respect to security. it's marketing's job to distort and/or omit facts in order to make the company and its product/service look good. of course, marketing does this at the behest of management - of CEOs and shareholders, of people whose concerns are business and profit rather than the good of the user. none of that has anything to do with testing or AMTSO, however.