Tuesday, July 26, 2011

careful who you let write the history books

over the course of the previous two weeks, kevin mcaleavey has been publishing a series of blog posts on the infosecisland site (parts one, two, three, four, five, and six) about the history of the anti-virus industry. rob lewis thought this might be a subject that would interest me and he was right. unfortunately, rather than finding it mostly informative, i found with each passing part an increasing desire to post a serious critique. i'll take them one part at a time.

part one

to start with i think we need to pay attention to how the author presents himself. each part refers to him as a long time industry insider, but doesn't go into anything more verifiable or specific than that. with such vague credentials (and those are meant as credentials - there's no reason to put them there except to try to convince the reader that the author is an authority on the subject he's writing about) it really feels like he's saying "trust me, i'm a security expert". now i've never heard of him, but that alone doesn't say much; the anti-virus industry employs hundreds if not thousands of people and i'm really only familiar with a comparative handful of them. in part three we get to find out more specifics, but for the time being let's just say that such nebulous credentials make me very suspicious.

to be fair i should point out my own credentials, so that i'm not being a hypocrite. unlike kevin, i am not now, nor have i ever been an industry insider. i have never been employed by any company in the anti-virus industry, i have never received financial remuneration for anything i've said or done involving the anti-virus field, and frankly i've only ever met a handful of people who were part of the industry. that being said, what i am (in terms relevant to this discussion) is a long time observer of the anti-virus industry. i've always been on the outside looking in, but from about the age of 15 onwards i basically grew up interacting with security hobbyists, security professionals, security software engineers, security researchers, and even some of the big names that kevin mentions in part two.

in theory, being an outsider seems like it should mean i have a more superficial view of the anti-virus industry.  we shall see.

the first part of kevin's series focuses mostly on where we are today so a lot of the things he mentions should sound familiar to people following the security news. one of the things he mentions is the rustock botnet. he presents it as a single piece of malware with a 5 year half-life. this is a bit misleading for a couple of reasons, the first being that rustock is actually a family of malware - there have been many versions since 2006, in part because the anti-virus industry keeps interfering with the utility of existing versions by detecting them. additionally, the term "half life" has a very specific meaning which doesn't really apply that well to a botnet that has for all intents and purposes been killed now that microsoft controls the command and control server.

subsequently he wrote the following:
"TDL4" however has publicly caused the security industry to transition into full panic mode and literally throw in the towel
this may seem pedantic, but in order to literally throw in the towel, there has to be an actual physical towel, and someone has to throw it. an argument could be made, i suppose, for saying that they figuratively threw in the towel, or metaphorically threw in the towel. maybe even virtually threw in the towel - but if he insists that they literally threw in the towel then i have five words - pics or it didn't happen.

now there was some early misinterpretation of the word "indestructible" in less knowledgeable media circles - specifically, the claim that the malware itself was indestructible, rather than the observation that the malware's authors were trying to create an indestructible or bulletproof network of compromised computers - but a knowledgeable industry insider should have been able to see through that. furthermore, from my perspective the only panic i saw was the feeding frenzy in the media over a statement which, like so many more purposeful scams, was simply too spectacular to be true.

i got the distinct impression upon reading part one that he tends to take mainstream media as gospel when it suggests that all is lost in the fight against malware. that seems strange to me. why is an industry insider putting so much credence in what the mainstream media says? he acknowledges that the industry has called this interpretation wrong, and that they're trying to correct that misinterpretation but he seems to suggest that the "corrections" are a product of public relations rather than a genuine attempt to correct factually erroneous statements that spread fear, uncertainty, and doubt.


another statement i'd like to draw attention to is the following:
To see this public admission that 1980's technology has utterly failed is nothing short of breathtaking.   
admission by whom? where? what are the details? none are given and we must take him at his word that such an admission actually took place. or perhaps he thinks the misinterpretations spread by mainstream media represent that admission. could it be that he's unfamiliar with the degree to which they botch things up on a regular basis? it really makes me question his credibility if he puts more stock in the words of reporters than in the words of researchers.

part two


part two is where he actually starts talking about history. he starts right at the beginning but there's a problem. he mistakenly thinks the brain virus 'destroyed' many hard disks. brain came out at a time when hard disks were a rarity. moreover, brain specifically avoided infecting hard disks. the link kevin himself provided says as much, so i can only assume that he's not actually reading the sources he's providing to the reader.

he also seems to mistakenly think brain required a reinstall to recover from. this is, of course, false. there have always been less drastic ways of restoring boot sectors; the main problem was that information about doing so wasn't as easy to come by back then as it is now. still, thinking there wasn't a way to do it is essentially a form of ignorance that you wouldn't expect to find in a long time industry insider.

yet another point on the brain virus; he seems to think the backlash from it forced the developers who made it out of business. i guess he never saw this video by f-secure's mikko hypponen, where mikko found the developers of the virus still working at exactly the same address they were at 25 years ago when they originally wrote the virus.

brain isn't the only piece of malware whose details he gets wrong, though. he mistakenly thinks popureb requires a windows reinstall to remove, much like brain. in reality what's required is to restore the MBR with the recovery console. this seems to be another example where kevin has taken mainstream media at their word instead of digging deeper and getting actual inside information, the way you'd expect an insider to do.

he also seems to think that stoned was the first virus to go memory resident and infect any disk that was inserted in the drive. stoned was a boot sector infector, however. all boot sector infectors infect (more or less) every disk inserted into the drive - it's kinda how they spread - and since stoned wasn't the first of that sort of virus i can't imagine how he got the impression it was the first one to go memory resident.

more generally he seems to think the majority of viruses in the 90's were jokes and pranks and programs designed to delete files. my recollection is that most actually gave no outward indication of infection whatsoever. virus writers quickly decided they liked the notoriety that came with one of their viruses spreading far and wide, and the best way to help that happen was to make sure the virus didn't make itself known by messing up people's computers. the ones that stuck out in people's memories were oftentimes the jokes or pranks or the ones with destructive payloads, however.

it's not just malware he seems to get wrong, however. he appears to be under the impression that VIRUS-L was a private echomail conference available to SysOps (system operators) of BBSes that carried FidoNet. VIRUS-L was actually an internet mailing list which was gated into usenet in the form of the newsgroup comp.virus. i have no doubt it was further gated into FidoNet - i know alt.comp.virus was as my first encounter with it was through a usenet-to-FidoNet gateway. i have a feeling i encountered comp.virus the same way (without ever being a SysOp). that said, it's possible kevin could have been thinking of something else; maybe VIRUS or VIRUS_INFO (or even VIRUS_NFO) which were in fact echomail conferences on FidoNet (the first two of which have me as their most recent moderator). they also aren't private, however; i really can't seem to figure out where that part of his recollection might have come from.

another curious thing i noticed is that he seems to think eugene kaspersky is the only member of the old guard of virus hunters still in the field after john mcafee, alan solomon, and peter norton (?) left. he also mentioned frisk software international, but sort of as an afterthought, and somehow failed to mention frisk the person. frisk the person (and the company) are still out there, still working. they may not be attracting attention but they're still there. as such, kaspersky isn't the last, so kevin's depiction of the industry as having been lobotomized seems all the more inaccurate. what's more, kevin only seems to be acknowledging a handful of the icons in the anti-virus industry. there were and are a lot more people from the old days out there. a great example would be aryeh goretsky, who was, from the sounds of it, john mcafee's right hand man, and who is currently with eset. then there's the venerable vesselin bontchev, once a well respected virus expert at the university of hamburg and now a well respected virus expert at FSI. then there's jimmy kuo, formerly of symantec, then mcafee, and now microsoft. there are many great minds from the early days that are still with us today, so this focus on just a handful of icons, and the suggestion that since most of those are gone the industry has been lobotomized, seems like a decidedly superficial view.

part three

the third part in kevin's series was about the demise of the anti-virus industry (it's not dead, it's just sleeping). it's also where we finally find out about his actual credentials. he was part of privacy software corporation, the company behind BOClean, which was apparently originally a cleanup tool for the back orifice remote access trojan but was later expanded to cover more malware. apparently this company was bought by comodo in 2007, but it doesn't sound like kevin stayed on with comodo for very long, and even if he were still with them today, 4 years isn't exactly "long term".

he presents himself as a long term industry insider in a discussion about anti-virus vendors, but his actual first hand inside experience with the kinds of companies he talks about and criticizes in this series of posts only went on for 2 1/4 years according to his linkedin profile. that seems rather underwhelming. oh sure, before that he spent a long time in a different part of the anti-malware industry, but he spends very little time actually talking about the part of the industry where most of his actual experience is from - except to extoll the virtues of the techniques used by BOClean, of course. in any event, it's clear by this point why he was so vague about his credentials in earlier parts.

back to the demise of the anti-virus industry, however. kevin focuses on the point at which trojans started to take off as the death knell of the industry. it was, after all, the point at which his company was forced to start dealing with malware. he seems to think that the early failures to handle trojans reflected inability on the industry's part. just to be clear, when trojans began to take off the industry did hesitate, and i don't mean for a second or a minute or an hour or a day or a week or a month or even a year. the anti-virus industry hesitated to redefine itself. it took a lot of doing to overcome the philosophical inertia that kept them out of non-viral malware detection. trojans were not viruses, remote access 'tools' arguably had legitimate uses, and the industry was gun-shy with regards to litigation. i hope kevin can appreciate that last point, as it was his revered dr solomon who explained it to me. apparently they couldn't even add generic detection for MtE based polymorphic viruses using the existence of the MtE algorithm as an indicator, because some brainiac went and put the algorithm in a 'legitimate' code obfuscation tool. the industry did eventually overcome its philosophical qualms about dealing with non-viral malware, but it took the escalation of the trojan problem to make people (both inside and outside the industry) realize that the industry would be justified in going after this new kind of threat that was previously outside of its purview.

a great deal of kevin's characterization of the failure of the industry seems to focus on what his company handled better. the reader is left to take his word for it that his company did a better job. he's clearly not an impartial 3rd party, so his perceptions should be taken with a grain of salt. he talks about a number of techniques used by the anti-virus industry and what their failings are - a number of which are almost certainly exaggerated. i've seen my share of alternative anti-malware approaches whose proponents go on ad nauseam about how their approach is the superior one (zvi netiv stands out as a remarkable example of this). better is always subjective; there are always good points and bad points.

he describes BOClean's approach of only looking for the malware in memory, rather than trying to find it on disk like traditional anti-virus products do, as superior. there are benefits, of course. packers and cryptors have no effect on detection because that transformation is undone at runtime. however, there are also drawbacks. once the malware is in memory it is active. it's possible for active malware to interfere with security software in any number of ways. it could use stealth, it could shut down the security software (which i gather was a problem for BOClean), or it could even use the security software's own filesystem enumerating behaviour to find new host programs to infect (if you're dealing with a file infecting virus). once the malware is active the security software has lost an important tactical advantage. imagine you're in a street fight - do you wait for your opponent to strike first or do you try to strike before they're even in a position to defend themselves? and remember, i'm talking about a street fight here, not some fair fight with referees and judges. you do not wait for the malware to go active in memory if you can avoid it. the code that gets control first wins, and once the malware is active in memory, it's gained control. it's only by kevin's good fortune (or BOClean's low profile) that malware creators didn't try to stomp out or otherwise interfere with BOClean. the AV industry has already faced that, going back all the way to brain (which had stealth).

some of his exaggerations devolve into more factual errors. for example he calls win32/ska a simple file infector and criticises the industry for taking months to handle it. the fact is that win32/ska's infection technique was nothing like the traditional file infection technique kevin laid out elsewhere. win32/ska was not some appending file infector; it didn't simply insert itself into the winsock DLL, it carried a modified copy of the winsock DLL with it and replaced the original with that modified version. as such, recovery of the original winsock DLL was not comparable to recovering from a traditional file infector. it may have taken months to build the capability to recover from this type of infection into their general purpose tools, but that was in part because there were special purpose tools available to mitigate the problem and lower the priority of getting that capability into the general purpose products. there also happened to be manual instructions for removing win32/ska (i know because i have given those instructions to people), which also mitigated the problem and thus lowered the priority.

he also points out that systems are still getting compromised with back orifice 12 years later and characterizes that as an example of the anti-malware industry's failure. the fact is that his BOClean is just as much a failure in that regard as the rest of the anti-malware industry is - that is, if you can really call it a failure. stopping that trojan from ever getting used again, making it become extinct, is just not something anyone can actually do. and it doesn't really have that much to do with whether the malware can be detected or removed by software tools, but more with the heterogeneity of the computer population and the mind-share of the malware in question amongst those who would use it. old viruses never die, and apparently old trojans don't either.

he places the blame for why all windows malware works at all on microsoft. unfortunately it's not that simple. any general purpose computer is capable of supporting malware, no matter what the operating system. in this way malware can be said to be a 'feature' of general purpose computing. no matter how tightly you try to control things, there will always be the potential for software to maliciously do something you don't want it to do.


kevin seems to think the AV industry has for the past 30 years (which is longer than the industry has actually existed) simply been coming up with signatures and not looking for patterns that could be useful for future variants. the fact that we have the concept of malware families at all invalidates this line of reasoning. it wouldn't be possible to classify new samples as belonging to an existing family without looking for such patterns. this line of reasoning also ignores heuristic alerts that say "possibly modified variant of X". it also ignores the practice of consolidating many specific signatures into a few generic signatures.
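to make that concrete, here's a toy sketch (entirely my own illustration, not any vendor's actual engine) of the difference between an exact signature for one sample and a generic signature that uses wildcard bytes to cover multiple variants of the same family:

```python
# toy illustration of exact vs. generic (wildcard) signatures.
# the byte patterns and sample "variants" below are made up for the example.

EXACT_SIG = bytes.fromhex("deadbeef0102")          # matches one specific sample

# generic signature: None acts as a wildcard byte, so small changes
# between variants of the same family don't break detection
GENERIC_SIG = [0xde, 0xad, None, None, 0x01, 0x02]

def matches_generic(data: bytes, sig) -> bool:
    """slide the wildcard pattern over the data and report any match."""
    n = len(sig)
    for i in range(len(data) - n + 1):
        if all(s is None or data[i + j] == s for j, s in enumerate(sig)):
            return True
    return False

variant_a = b"\x00\xde\xad\xbe\xef\x01\x02\x00"    # the original sample
variant_b = b"\x00\xde\xad\x11\x22\x01\x02\x00"    # a slightly modified variant

print(EXACT_SIG in variant_a)                      # True  - exact signature finds the original
print(EXACT_SIG in variant_b)                      # False - exact signature misses the variant
print(matches_generic(variant_b, GENERIC_SIG))     # True  - generic signature still hits
```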

he also calls automated analysis 'cheating' and characterizes it as simply creating an MD5 or SHA1 hash. this seems to ignore the fact that you can't logically identify that a sample is a variant of another malware family by simply creating a hash. it also clearly ignores the concept of automated classification, whereby a sample is compared with many other samples in order to determine which ones it's most similar to and thus which family it belongs to. one also has to wonder how exactly vendors are supposed to deal with 100,000 new samples a day without some kind of automation. even if you had a way to detect most of those new samples ahead of time with generic signatures, you still need to test and make sure each of those new samples is actually malware and is actually detected.
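similarly, here's a rough sketch of why a bare hash is not the same thing as automated classification (again, a toy of my own making, not anyone's production pipeline) - a hash only identifies a byte-for-byte identical file, while even a crude similarity measure can still group a new sample with the family it most resembles:

```python
# toy sketch of similarity-based classification vs. exact hashing.
# the "families" and samples are made up purely for illustration.

import hashlib

def ngrams(data: bytes, n: int = 4) -> set:
    """break the data into overlapping n-byte chunks to use as features."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """similarity of two feature sets: size of overlap over size of union."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

known_families = {
    "family_x": b"aaaa-common-core-code-aaaa",
    "family_y": b"bbbb-completely-different-bbbb",
}

new_sample = b"aaaa-common-core-code-AAAA"   # a lightly modified variant of family_x

# exact hashing: no match unless the file is byte-for-byte identical
print(hashlib.sha1(new_sample).hexdigest() ==
      hashlib.sha1(known_families["family_x"]).hexdigest())        # False

# similarity-based classification: the variant still lands in the right family
scores = {name: jaccard(ngrams(new_sample), ngrams(ref))
          for name, ref in known_families.items()}
print(max(scores, key=scores.get), scores)                         # family_x scores highest
```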

rather unsurprisingly at this point, he describes heuristics as bad because they emulate the malware while trying to catch it in the act of doing something bad. this is only (somewhat) true of dynamic heuristics, not static heuristics. it's also hypocritical coming from someone championing the idea of letting the malware become active in memory before trying to do anything about it.

he also referred to heuristics as creating "massive" false positive problems. in reality false positives are relatively rare. a single false positive can, of course, cause problems for many people if it's on a system file, but those are rare even amongst regular false positives. those really high profile failures have only happened a handful of times.

part four

the fourth part of kevin's series focuses on operating systems, mostly windows, but covering mac os x and linux too. i actually agree with a lot of what kevin has to say in this part (which makes me wonder if my knowledge of OS security might be a little too shallow), but there are a few things that are worth pointing out.

i tend not to agree with his suggestion that file associations based on file extensions are a bad thing and should be replaced by some kind of intelligent parsing. for one thing i think the parsing idea is logically infeasible - intelligent parsing would require windows to know about every file format, even the one i just invented yesterday, which it obviously can't. he also envisioned that the parser could generate a warning for the user to indicate what sort of action windows was about to take, but that warning could just as easily have been generated with file associations. the warning is based on the outcome, and has little to do with how the outcome is decided upon (be it parsing or association).
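to illustrate what i mean about the warning being independent of the mechanism, here's a toy sketch (the associations and magic numbers below are pared down and partly made up for the example) in which the exact same warning can be driven either by an extension lookup or by content sniffing:

```python
# toy sketch: the user-facing warning depends on the action that would be taken,
# not on whether that action was chosen by extension lookup or content parsing.

ASSOCIATIONS = {".txt": "open in an editor", ".exe": "execute as a program"}

MAGIC_NUMBERS = {b"MZ": "execute as a program"}   # windows executables start with 'MZ'

def action_by_extension(filename: str) -> str:
    dot = filename.rfind(".")
    return ASSOCIATIONS.get(filename[dot:].lower(), "unknown") if dot != -1 else "unknown"

def action_by_content(header: bytes) -> str:
    for magic, action in MAGIC_NUMBERS.items():
        if header.startswith(magic):
            return action
    return "unknown"

def warn(action: str) -> str:
    return f"warning: this file will {action}. continue?"

# either way of deciding the action can drive exactly the same warning
print(warn(action_by_extension("invoice.exe")))
print(warn(action_by_content(b"MZ\x90\x00...")))
```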

on the matter of unix, kevin appears to be of the opinion that pure unix was secure. this would seem to ignore the fact that the original academic treatment of computer viruses (where the term actually got coined) had a virus spreading on a professionally administered unix system without the admin's assistance. how secure is that?

he also seems to believe that linux used to be secure in the beginning, but seems to forget or not care that rootkits worked on linux way back close to its beginning. he admits that there was malicious software but contends that it got removed from distros quickly after being introduced. this ignores the fact that once end users start using such an OS, malware doesn't need to be bundled into the distro. if a user can run it, they will, and linux didn't and doesn't prevent them from running malware.

part five

the fifth and penultimate part of the series deals with the concept of defense in depth, or, as kevin prefers to call it, layered security. he spends an inordinate amount of time talking about firewalls and how awful they are. he also presents defense in depth as something of a caricature, focusing only on the layers that gained widespread adoption and suggesting that anything else was outside of the regular user's price range. frankly i don't think integrity checking was ever priced that way, nor behaviour blocking, nor any number of other techniques that he conveniently ignores.

what's more, he criticizes each layer for being insufficient. it's as if he doesn't understand the purpose of having multiple layers. each one has flaws and weaknesses, sure - all security measures do - but in aggregate those weaknesses are diminished. they're never completely eliminated, of course, because there's no such thing as perfect security, but it's certainly something we can approach. one might argue that we could approach it by re-engineering one particular layer instead of having many, but some of those weaknesses are inherent rather than being design or implementation flaws, and inherent weaknesses can only be compensated for by additional, complementary layers.
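here's a quick back-of-the-envelope illustration of what i mean by weaknesses being diminished in aggregate. the miss rates are invented, and it optimistically assumes the layers fail independently (real layers rarely behave that well), but it shows the general shape of the argument:

```python
# hypothetical per-layer miss rates; assuming independent failures,
# the combined miss rate is the product of the individual ones.

miss_rates = [0.30, 0.20, 0.25]

combined_miss = 1.0
for m in miss_rates:
    combined_miss *= m

print(f"worst single layer misses {max(miss_rates):.0%} of threats")
print(f"all three layers together miss {combined_miss:.1%}")   # 1.5%
```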

part six

the final part of kevin mcaleavey's series was meant to present solutions to the problems mentioned in the previous 5 parts. when i read the final part in the series all the pieces finally clicked into place. all the confusion i previously had was gone, all the WTF moments finally made sense. i finally knew what he was on about and what the series' true purpose was. kevin's got a product to sell, and the series was an extraordinarily long sales pitch.

it followed a familiar pattern too, now that i think about it. first he dished out FUD about his competitors (and since he's now a secure systems provider, his competitors include both the anti-malware software vendors and the operating system vendors), and then at the end he hyped up his own product and made it look like the blatantly obvious choice for avoiding the horrible, horrible problems with everything else. he even threw in a smattering of snake oil phrases like "absolute security and protection" for good measure.

the product and/or service in question (KNOS) appears to my untrained self to be something like a freeBSD-based LiveCD composed entirely out of carefully audited modules, with the provision that you can request custom configurations from them to suit your needs. the idea appears to be that the code that is allowed to run is carefully controlled and limited by them (rather than being something the user can add code to him/herself) and that exploitable vulnerabilities have supposedly been eliminated. this is supposed to keep users safe both from malware and from themselves.

unfortunately, kevin seems to have fallen prey to the engineer's conceit - the (often mistaken) belief that a particular problem can be solved through carefully engineered technology alone. KNOS does not offer absolute protection, and saying it does offers users a false sense of security. it may well stop all the malware that kevin can imagine, but (to paraphrase something that's often said in cryptographic circles) it's easy to come up with a security system so advanced that you yourself can't figure out a way around it - what's hard is coming up with one that other people can't figure out a way around either.

obviously i don't know enough of the details of KNOS to suggest specific scenarios where its security can fail, but i do know enough about computation in general to see what kevin (and others who aim to ensure only good/safe code is allowed to run) has missed. what many people don't realize, what society's conventional thinking about computers fails to hint at, is that data is code. the distinction we draw between the two is little more than a mental construct that simplifies the task of building systems. it doesn't represent how computation actually works, and it isn't necessarily adhered to by the people who break systems.

unless and until someone can come up with a way to control which data gets processed as scrupulously as they control which code gets executed, data will remain an open window through which systems with locked down code can be attacked. and since determining the safety of data will ultimately require the data to be processed in some way, we arrive at a catch-22.
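to show what i mean by data being code, here's a deliberately contrived toy of my own (it has nothing to do with KNOS specifically): the 'trusted' code below never changes, yet what it actually does is decided entirely by the data it's asked to process.

```python
# a contrived 'document viewer' whose code is fixed and audited, but whose
# behaviour is driven by directives found in the documents it processes.

SECRET = "the user's saved password"

def view(document: str) -> None:
    for line in document.splitlines():
        cmd, _, arg = line.partition(" ")
        if cmd == "text":
            print(arg)
        elif cmd == "calc":       # a "convenient" rich directive for power users
            print(eval(arg))      # rich enough processing turns data into a program

view("text hello\ntext world")    # harmless data, harmless behaviour
view("calc SECRET.upper()")       # same locked-down code, hostile data wins
```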

i fully expect that KNOS probably has some impressive security capabilities, but absolute security is a fantasy, and the more traditional security layers that kevin derides have managed to keep me essentially malware free (except for an externally non-addressable RAT installation on a sacrificial secondary system) for over two decades. beware security historians who are trying to sell you something - chances are they'll rewrite history to suit their sales objective.

Thursday, July 07, 2011

maybe we should blame the victim

pardon my iconoclasm, but a twitter conversation with jerome segura and maxim weinstein got me thinking about this. it was sparked by maxim's blog post "stop blaming the victims" where he argued that we shouldn't be blaming people for failing to follow security best practices (such as keeping web servers up to date). personally i consider this to be a form of infantilization. i've argued against coddling users before but i want to expand on the idea here.

the principle and practice of not blaming the users basically sends them the message that they're OK, they didn't do anything wrong, and they can keep doing things the way they have been. this is a marked departure from many of the other messages we send users trying to get them to be more aware of security and to make better decisions in security contexts. that makes the "don't blame the victim" dogma a substantially mixed message. have they really done nothing wrong? oftentimes there are things they could/should have done differently, things they've been told about in the past but still failed to consider. can they be entirely free from responsibility for what happens to them in such a circumstance? i don't believe so. do we really want to send the message that they did nothing wrong and don't have to change? how will we ever get people to take better care of their security if we do that? many people are poorly adapted to the realities of the modern world, and if there's no force giving them pushes in the right direction they'll never improve.

more fundamental than that is the fact that victims are victims of the word "victim". by acknowledging someone as a victim we accept and embrace the notion of powerlessness that the word engenders. recognizing people as victims gives them a license to be victims and to remain victims. when someone is taken advantage of we shouldn't be treating them as some helpless and fragile thing, we should be helping them to become empowered so that they don't get taken advantage of again and again and again. by telling them they're helpless victims we rob them of the opportunity to better master their fates and gain confidence in their abilities. perpetuating the notion of the victim keeps the lay-person down.

therefore, not only do i think we should hold people at least partially responsible for the consequences of their actions or inactions (to blame the victim in normal parlance), but i also think we should blame the people who say "don't blame the victim". their well-meaning but ultimately misplaced mollycoddling holds people back and stymies our collective growth and advancement. we can never adapt if we're taught that we can't change our fates.

Monday, July 04, 2011

i wandered lonely as a cloud

relax, i'm not about to start waxing poetic about daffodils. rather i'm thinking about cloud-based anti-malware software.

it's something i've been thinking about for a little while now but i've finally decided to commit my thoughts to a more permanent format and share them with others.

for the past couple of years the major anti-malware vendors have been deploying cloud technology to improve the effectiveness of their products. often this has been an optimization specifically for their known malware scanners, although some have also taken the opportunity to build reputation systems.

it occurred to me that the cloud could be used for a great deal more than just that. think about what those reputation systems are doing. the user is faced with a complex question - is file X safe? - and the cloud answers. the cloud can do this either because there are experts feeding the cloud its answers or because there's a community feeding the cloud its answers (or both, come to think of it). the point is that the cloud reduces the complexity for the user.

now think for a moment about all those technologies that have sprung up and then fallen by the wayside over the years. how many of them fell out of favour because they required too much knowledge, because they asked too much of the user? do you see where i'm heading yet? the cloud as a complexity reducing technology (alright, it technically transfers and collates that complexity, but from the user's perspective it reduces it) seems like it actually has the potential to breathe new life into virtually all of those other techniques, be they sandboxing, whitelisting, behaviour blocking, or even integrity checking.
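to make the idea concrete, here's a rough sketch of the kind of lookup i'm talking about (the service URL and response format are hypothetical, not any vendor's actual API): the user's hard question - 'is this file safe?' - becomes a simple hash lookup answered by whoever feeds the cloud.

```python
# hypothetical cloud reputation lookup: hash the file locally,
# ask the service what it knows, and get back a simple verdict.

import hashlib
import json
from urllib import request

REPUTATION_SERVICE = "https://reputation.example.com/lookup"   # hypothetical endpoint

def file_reputation(path: str) -> str:
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    req = request.Request(REPUTATION_SERVICE,
                          data=json.dumps({"sha256": digest}).encode(),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp).get("verdict", "unknown")   # e.g. "good", "bad", "unknown"

# verdict = file_reputation("downloaded-installer.exe")
```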

and of course, as i was originally coming up with that list i was reminded of the fact that many of them have actually been augmented with some kind of cloud technology to help take the complexity out of their operation. those efforts simply haven't been particularly mainstream. the biggest vendors have been slow to recognize the opportunity to augment these technologies (which can be superior in the right hands) with complexity reduction as a service, and the smaller vendors that are taking a chance on this don't necessarily have the stability to keep it going. it would be nice if those other options saw more mainstream deployment and adoption.

quick thought on cyberwarfare

one of the topics that keeps coming up in discussions of cyberwarfare is 'attribution' - the ability to know where an attack came from, who's responsible for it, and so on. it keeps coming up because many of us recognize that it's very difficult to do with attacks in cyberspace.

this is a source of confusion for many because our model of warfare involves things like deterrence, counter attack, and appropriate response. without attribution these things aren't possible.

cyberattacks are often likened to missiles or other kinetic warfare weapons where attribution is a much more straightforward process - i think the confusion is rooted in this comparison. instead of thinking about overt warfare, cyberwarfare would be better likened to covert warfare - black ops, wet work, that sort of thing. there is no meaningful attribution with this sort of warfare, and so there is no meaningful deterrence or response to such attacks. it is an area of warfare where there is attack without counter attack, where one attacks simply because it is strategically advantageous.

don't picture these so-called cyberweapons as being like electronic missiles that anyone can launch simply by pressing a button; that gives entirely the wrong sort of character to the topic. if you must consider them cyberweapons at all (ie. if you must focus on the tool instead of the attack itself), think of them as guns with silencers on them. or better yet, think of them as knives, used with surgical precision and giving away no clue to those in the vicinity as to where the attack came from.