at the black hat security conference this year tavis ormandy presented his research into the way an anti-virus product, namely sophos' product, operates, in order to shine a light on something that seems like it would be an important subject to consider when judging the efficacy of a product. the paper associated with the presentation can be found here.
tavis was not particularly kind in his evaluation of the code (which he apparently performed by reverse engineering the product). sophos' response is very measured and diplomatic, which is pretty much the perfect way (from a PR perspective) to respond to the kind of criticism being leveled at them. as usual, however, i don't have to be diplomatic.
tavis' paper betrays a conceit that i suspect is more common in those who break things than in those who make things. developers, upon dealing with someone else's code, inevitably learn an important lesson: the code tells you what the software does, but it doesn't tell you why it does it that way. tavis writes as though he knows all he needs to know, but he only had the code to go by, so when it comes to why certain things were done the way they were, the only thing he could reasonably do was make educated guesses. in some cases those guesses may well have been quite good, but in others they were not.
i first realized this was going to be the case on the second page of the paper, where he describes how weak the encryption used on the signatures is - often just an XOR with an 8-bit key. if you were to guess that such encryption was meant to protect the signatures from an attacker, as tavis seems to have, then you'd be dead wrong. the primary purpose of encrypting the signatures is to prevent other anti-virus products from raising an alarm on the signature database (something that used to happen in the very early days).
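to make the point concrete, here's a minimal sketch of single-byte XOR obfuscation (the key and signature bytes here are made up for illustration - this is not sophos' actual scheme). note that it's trivially reversible, and that's fine, because the goal is to hide the bytes from other scanners, not from attackers:

```python
def xor_obfuscate(data: bytes, key: int) -> bytes:
    # XOR each byte with a single repeating key byte; applying the
    # same routine twice round-trips, so it both encodes and decodes
    return bytes(b ^ key for b in data)

# hypothetical signature bytes and key, purely for illustration
signature = bytes.fromhex("e8300000005b81eb")
stored = xor_obfuscate(signature, 0xA5)

assert xor_obfuscate(stored, 0xA5) == signature
# the stored form no longer contains the raw pattern, so another
# scanner sweeping the signature database won't match on it - which
# is all this "encryption" ever needed to accomplish
```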
on page 3 it's mentioned that the heavy use of CRC32 in the signatures makes it easy to maliciously create false alarms, by crafting files that have the same CRC32 values a particular signature is looking for, in the same places. now i ask you, the reader: if someone is maliciously planting files on your network that are designed to raise an alarm, is that alarm really false? it may be a false identification, but there really is something malicious going on that an administrator needs to investigate.
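for the curious, forcing a chosen CRC32 really is that easy: CRC32 is linear, so you can always compute 4 bytes to append to a block of data that steer its checksum to any value you like - no brute force required. a minimal sketch of the standard technique (generic table-driven CRC32, nothing sophos-specific):

```python
import zlib

# build the standard reflected CRC32 table (polynomial 0xEDB88320)
TABLE = []
for i in range(256):
    c = i
    for _ in range(8):
        c = (c >> 1) ^ (0xEDB88320 if c & 1 else 0)
    TABLE.append(c)

# the top bytes of the 256 table entries are all distinct, which is
# what lets us run the CRC backwards
REV = {TABLE[i] >> 24: i for i in range(256)}

def forge_crc32(data: bytes, target: int) -> bytes:
    """return 4 bytes to append to data so its crc32 equals target."""
    reg = zlib.crc32(data) ^ 0xFFFFFFFF  # register state after data
    # walk backwards from the desired final state to recover the
    # table index each of the four patch steps must use
    cur = target ^ 0xFFFFFFFF
    indices = []
    for _ in range(4):
        idx = REV[cur >> 24]
        indices.append(idx)
        cur = ((cur ^ TABLE[idx]) << 8) & 0xFFFFFFFF
    # walk forwards, picking each patch byte to hit the required index
    patch = bytearray()
    for idx in reversed(indices):
        patch.append((reg ^ idx) & 0xFF)
        reg = (reg >> 8) ^ TABLE[idx]
    return bytes(patch)

data = b"any innocuous file contents"
fake = data + forge_crc32(data, 0xDEADBEEF)
assert zlib.crc32(fake) == 0xDEADBEEF
```

a handful of files like that, dropped where the scanner will find them, lights up whichever detections the prankster picked.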
also on page 3 he criticizes the quality of the signatures by stating they appear to ignore the context of the programs they come from - that they're for irrelevant, trivial, or even dead code. perhaps tavis expanded on this in his live presentation, but the paper doesn't make clear whether or not he actually looked at the malware the signatures were supposed to detect. if he didn't, then the criticism about ignoring context would be particularly ironic. so let's assume he did. how many malware samples did he examine? if only a handful, then there's a not insignificant chance he was dealing with bad examples that aren't really representative of the overall signature quality. did he ensure that his samples were actually malware? did he ensure that his samples were being identified by the right signatures? his previous criticism (on the same page!) about false identifications should highlight the fact that he may have been looking at the wrong code when judging the quality of the signatures. but more important than that, there isn't a 1-to-1 relationship between signatures and samples. one signature may be intended to detect many (tens, hundreds, even thousands of) related samples - and however pointless tavis may think those sections of the malware code are, they may represent the optimal commonality between all those samples (see the sketch below).
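to illustrate with a toy (this is not how sophos actually generates signatures - just the simplest possible way to express the idea of picking signature bytes by commonality): if your selection criterion is the longest run of bytes shared by every sample in a family, what gets picked can easily be a boring function prologue or other "trivial" code, because commonality is what matters, not how interesting the code looks:

```python
def longest_common_run(samples: list[bytes]) -> bytes:
    # brute force: try ever-shorter slices of the first sample until
    # one appears in every other sample (fine for toy-sized inputs)
    first = samples[0]
    for length in range(len(first), 0, -1):
        for start in range(len(first) - length + 1):
            candidate = first[start:start + length]
            if all(candidate in s for s in samples[1:]):
                return candidate
    return b""

# hypothetical family of related samples that share only an
# unremarkable stub, while their payloads all differ
family = [
    b"\x55\x8b\xec\x90\x90\xc3" + b"\xaa" * 8,
    b"\x12\x34" + b"\x55\x8b\xec\x90\x90\xc3" + b"\xbb" * 8,
    b"\x55\x8b\xec\x90\x90\xc3" + b"\xcc" * 8 + b"\x77",
]
print(longest_common_run(family).hex())  # 558bec9090c3 - a boring prologue
```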
around page 8, tavis makes a point of highlighting the fact that the emulation the product does, in order to let malware reveal its unencrypted/unpacked form, only goes for about 500 cycles. this highlights a failure to understand one of the core problems in malware detection: the halting problem. for any other criterion the code might look for in deciding it's seen enough, there's no guarantee that criterion will ever be encountered on a particular input program. there has to be a hardcoded cut-off point or the process runs the risk of never stopping - and that would severely impact the usability of the AV software. likewise, if the hardcoded cut-off point isn't reached soon enough, that also impacts the usability of the AV software.
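a minimal sketch of the trade-off (a made-up toy, not sophos' emulator - the class and method names are invented for illustration): without the hard budget, the loop below never returns for a program that never reaches an "unpacked" state, and every cycle you add to the budget is a cost paid on every file you ever scan:

```python
class ToyUnpacker:
    """stand-in for a packed program that decrypts itself over n steps."""
    def __init__(self, steps_to_unpack: int):
        self.remaining = steps_to_unpack

    def step(self):
        self.remaining -= 1

    def looks_unpacked(self) -> bool:
        return self.remaining <= 0

def emulate(program, max_cycles: int = 500) -> str:
    # run the emulator for at most max_cycles steps; without this hard
    # cap, a program that never "finishes" unpacking hangs the scan
    for _ in range(max_cycles):
        program.step()
        if program.looks_unpacked():
            return "unpacked in time - scan the revealed image"
    return "budget exhausted - scan what we have and move on"

print(emulate(ToyUnpacker(steps_to_unpack=200)))    # caught within budget
print(emulate(ToyUnpacker(steps_to_unpack=10**9)))  # would hang without the cap
```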
there may yet be other examples of poor guesswork that i didn't see. my own knowledge of why certain design choices might be made is limited, as i've never actually built an anti-virus product. i have considered it, of course, since i am a maker of things. perhaps tavis ormandy would benefit from more experience making things rather than breaking them. perhaps this was an unorthodox expression of the oft-repeated concept that the skills needed to attack are not the same as the skills needed to defend.