the EICAR (european institute for computer antivirus research) conference was held earlier this week and something which may at first blush seem quite curious happened... fred cohen (a keynote speaker, the father of computer viruses, and the man behind the seminal academic treatment of the subject) said that anti-virus doesn't work...
i wish i'd been there to see that (actually no i don't, i almost certainly would have rolled my eyes, crossed my arms and gone to sleep), but since i'm not part of the industry and flying over to berlin on my own dime is kinda out of the question i had to miss out... it seems i wouldn't have even heard about it if not for randy abrams posting about it here... i've waited patiently for a couple of days now for anyone else to write something about it so i could get more details of what was actually said, but that doesn't seem to be forthcoming... even eicar's site only tells me that the title of his talk was supposed to be "Computer Virology: 25 Years From Now"...
so, since i don't have first hand experience of the conference to give me a clue as to what on earth he was thinking, i went and did some research to see if anything he's said or written in the past might shed some light on it, and i believe i've found what i was looking for...
in an article on his site titled "Unintended Consequences" it starts to become clear that he feels the computer security industry is in the practice of detecting and reacting to bad things instead of preventing and deterring them, and he paints that in a rather negative light... when i read this i was struck - if he thinks anti-virus (probably one of the most mainstream parts of the computer security industry) is about detection and not prevention then he's doing it wrong!... in my earlier post on defensive lines in anti-malware i illustrate pretty clearly how anti-virus technologies (both the ones that security numpties consider av and the ones they don't) are employed as preventative measures, preventing such things as access to threats, execution of threats, and behaviour of threats (among other things)...
the thing is, fred cohen is really not the sort of person you'd expect to be using av wrong... thankfully i found a pair of interviews with him - one called "Three Minutes With Fred Cohen, Virus Trends Tracker", and the other "Fred Cohen - Not all virus writing is a crime"... although anti-virus is certainly used for prevention, its underlying mechanisms are largely detection-based - however when cohen talks of prevention he's referring to the type where the underlying mechanism is sort of like an immunity... it all became clear to me when he said we should be using systems that are less susceptible to viruses in the first place - systems that have more limited functionality... this relates back to his 1984 paper Computer Viruses - Theory and Experiments where, in the section on prevention (i really should have looked there earlier), he discusses the 3 key properties of a system that allow viruses to spread - sharing, transitivity of information flow, and the generality of interpretation...
systems with limited functionality refers specifically to the third of these properties - the generality of interpretation - by limiting the functions that a system can perform you limit what can be done as a result of interpreting data as code... cohen poses the examples of turning off macros in ms word, or turning off javascript in the browser - i would follow those up with more contemporary examples such as microsoft recently announcing the disabling of autorun for flash media, or the increasing trend (at least among security conscious users) to use alternate PDF viewers as opposed to Adobe's own Acrobat Reader which has been bloated with unnecessary (and frankly unsafe) functionality for a long time...
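just to make the 'limited functionality' idea concrete, here's a minimal sketch (in python, using the standard winreg module) of the sort of policy tweak that disabling autorun boils down to - it assumes a windows system and administrative rights, and it's emphatically not the mechanism microsoft actually shipped, just an illustration of switching a piece of general-purpose functionality off...

```python
# minimal sketch of limiting functionality by policy - assumes windows and
# administrative rights; not microsoft's actual autorun fix, just an example
import winreg

EXPLORER_POLICY_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer"
DISABLE_ALL_DRIVE_TYPES = 0xFF  # bit mask covering every drive type

def disable_autorun():
    # open (or create) the explorer policy key under HKLM and set the
    # documented NoDriveTypeAutoRun policy value
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, EXPLORER_POLICY_KEY,
                            0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0,
                          winreg.REG_DWORD, DISABLE_ALL_DRIVE_TYPES)

if __name__ == "__main__":
    disable_autorun()
    print("autorun disabled for all drive types (takes effect at next logon)")
```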
i'm not going to try and detract from his argument by saying the idea doesn't have merit - it does... i even considered whether my defensive lines post needed another line regarding system immunity, though i later realized that limited functionality is already covered by existing lines (as it's just another method of preventing execution and/or particular behaviours)... the problem isn't whether his argument in favour of using this technique has merit, the problem is figuring out how he could think anti-virus is broken and this technique isn't - or what he thinks the security industry can do about limiting functionality...
the very existence of the security industry underscores the fact that there has historically been no (and largely still is no) place for security in the rest of the computer industry... limiting functionality is the domain of the original product developers, not 3rd party security companies - the nortons and mcafees of the world don't have the freedom to cripple Acrobat Reader, for example (at least not without getting sued)... and the original product developers by and large aren't limiting functionality because that goes against consumer expectations for technology... technology is supposed to empower us to do more, to do things better/faster, etc. - so it's a given that when we're choosing something we expect to empower us, we aren't going to go with the less powerful option... this is borne out by history as well, because there have always been less functional/powerful options available and the market has consistently selected against them in favour of the bloated product suites and application platforms...
furthermore, limiting applications is all well and good but there's plenty of malware out there that exists as native binary executables, and limiting the operating system's functionality (or worse, that of the hardware itself) to prevent things like self-replication or botnet command and control communication is a far trickier proposition...
so given the difficulty in getting more adoption of limited systems, and given the fact that even cohen isn't suggesting limiting things to the point of systems with fixed first order functionality, i can't help but wonder how he could consider this any less flawed an approach in practice than anti-virus... it is clear to me that it will be just as possible to work around the limitations he's proposing as it is to work around anti-virus technologies, and limited functionality defenses are a lot less agile than things like known-malware scanning - so when those special cases come along, it will cost a great deal more to redesign a system to further limit its functionality in order to deal with them...
that should give you a clue as to my opinion of limited functionality-based defenses - they have promise as a complement to anti-virus techniques... anti-virus may not work in the sense that it's not completely effective at preventing malware infestation, but limited functionality systems aren't either... anti-virus may not work in the sense that it's a computationally expensive approach to prevention, but limited functionality has logistical costs for development, marketing, and maintenance that we may never completely overcome... i fully endorse the idea of selecting software that only has as much power and functionality as you need and no more, but i still think you should use anti-virus right alongside that...
frankly, at the end of the day, the main difference between systems that are designed to have more limited functionality and ones where anti-virus is deployed is that the former's limitations are baked in while the latter's limitations are bolted on (because like it or not when an anti-virus stops a piece of malware from executing on your system it is effectively limiting your system's ability to perform the function that malware represents)... that appears to be what cohen's real contention is about - that baked in is better than bolted on... in some ways that may be true, but in others (like the matter of agility) it's definitely not (because it's easier to change the bolt than it is to change the whole system)...
Wednesday, May 06, 2009
defensive lines in anti-malware revisited
now, i realize i've covered the topic of defensive lines in anti-malware before, about 2 years ago, but i've refined my thinking since then and now i've found the motivation to share (guess i was just being lazy before)... this thread on wilders security forums made me realize that there are some misconceptions about defensive lines in anti-malware... specifically the ideas of what can be the first or last line of defense seem to require some refinement so here goes...
if you're thinking about multiple lines of defense then it makes sense to consider that there is a sequence of defensive lines at various 'distances' from the final outcome of complete system compromise... that is to say that there are various opportunities (some earlier, some later) during the course of a piece of malware's progression towards compromising a system at which you can prevent a system from becoming compromised, and at each of these points a defense can be mounted... the first line of defense, therefore, should be the one furthest away (or at the earliest opportunity) from that possible outcome... likewise the last line of defense necessarily must be the last chance for preventing that outcome...
so what is the first possible line of defense then? well, the first opportunity you have for avoiding having your computer get infested with malware is the point of initial exposure - if you don't encounter the malware in the first place then there's nothing else to say about it... when it comes to defending oneself the main defense here is actually risk aversion behaviours like not going to so-called 'dodgy' sites, not plugging strange flash drives you found on the ground into your computer, etc... when it comes to defending others, any malicious site take-down effort will also help prevent people from coming across the malware on malicious sites that get taken down... somewhere in the middle are tools like mcafee's site advisor, or google's malware warning functionality that can help users determine which sites to avoid...
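(neither site advisor nor google's warnings work this simply, but as a toy sketch of the underlying idea - checking a hostname against a list of known-bad domains before visiting - something like the following would do... the blocklist file and anything in it are entirely hypothetical, and real reputation services draw on far richer data than a flat list of bad hostnames)...

```python
# toy sketch of domain-reputation checking - the blocklist file and its
# contents are hypothetical; real services use much richer reputation data
from urllib.parse import urlparse

def load_blocklist(path):
    # one known-malicious hostname per line, '#' starts a comment
    with open(path) as f:
        return {line.strip().lower() for line in f
                if line.strip() and not line.startswith("#")}

def is_risky(url, blocklist):
    host = (urlparse(url).hostname or "").lower()
    # flag the host itself or any subdomain of a listed domain
    return any(host == bad or host.endswith("." + bad) for bad in blocklist)

if __name__ == "__main__":
    bad_domains = load_blocklist("blocklist.txt")  # hypothetical file
    for url in ("http://example.com/download.exe",):
        print(url, "-> avoid" if is_risky(url, bad_domains) else "-> no match")
```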
the next logical point at which you can prevent compromise is preventing the transfer of the malware onto the machine we're trying to protect... for automatic transfers (such as drive-by downloads or vulnerability exploiting worms) the primary defense is closing the avenues by which such automated transfer can take place, usually by applying security patches (at least where browser exploits are concerned)... a basic NAT enabled router can also protect machines behind it from certain types of automatic transfer by virtue of preventing unsolicited traffic to those machines... for manual transfers, risk aversion behaviours again play an important role - only downloading software from reputable sites (preferably direct from the vendor's site) being a prime example... a somewhat after-the-fact but related defense is on-demand scanning of all incoming materials (whether you think they could carry malware or not) before allowing them to be accessed for any other purpose (sort of like a mandatory quarantine period)...
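as a minimal sketch of that 'mandatory quarantine' idea, the following hashes everything in an incoming directory and compares the results against a list of known-malware hashes - the directory name and hash-list file are assumptions, and real known-malware scanners match signatures and apply heuristics rather than relying on whole-file hashes alone...

```python
# minimal sketch of on-demand scanning of a quarantine/incoming directory -
# the 'incoming' directory and 'known_bad_sha256.txt' hash list are
# assumptions; real scanners use signatures and heuristics, not just hashes
import hashlib
from pathlib import Path

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_incoming(incoming_dir, hash_list_path):
    with open(hash_list_path) as f:
        known_bad = {line.strip().lower() for line in f if line.strip()}
    for item in Path(incoming_dir).rglob("*"):
        if item.is_file():
            verdict = "KNOWN MALWARE" if sha256_of(item) in known_bad else "ok"
            print(f"{item}: {verdict}")

if __name__ == "__main__":
    scan_incoming("incoming", "known_bad_sha256.txt")  # hypothetical paths
```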
after potential malware has successfully been transferred onto the system, the next opportunity to prevent its progression to full compromise is preventing access to it... this is where on-access scanners come into play... if the user can't access the malware then there should be no way they can trigger it...
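real on-access scanners intercept file operations down in the kernel, but as a rough approximation of the idea here's a sketch using the third-party watchdog package (pip install watchdog) that checks files against a known-bad hash as they appear - the hash is a placeholder, and reacting to file creation is only a stand-in for true open-time interception...

```python
# rough approximation of an on-access check using the third-party 'watchdog'
# package - real on-access scanners intercept file opens in the kernel;
# this merely reacts to files as they are written
import hashlib
import time

from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

# hypothetical sha-256 of a known-bad sample (placeholder value)
KNOWN_BAD = {"0" * 64}

def sha256_of(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

class AccessScanner(FileSystemEventHandler):
    def on_created(self, event):
        if event.is_directory:
            return
        try:
            if sha256_of(event.src_path) in KNOWN_BAD:
                print(f"blocked (known malware): {event.src_path}")
        except OSError:
            pass  # file vanished or is locked; a real scanner would retry

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(AccessScanner(), path=".", recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    finally:
        observer.stop()
        observer.join()
```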
if your malware access controls fail to prevent access (say because the malware is packaged within some obfuscating wrapper, or perhaps the malware is just new) then the following point at which full compromise can be avoided is by preventing execution of the malware... on-access known-malware scanning can prevent execution as well, at least for known malware (if it's in some obfuscated wrapper like a dropper, then when the malware is 'dropped' it will no longer be obfuscated and that camouflage will no longer be protecting it from the scanner)... application whitelisting can also prevent a great deal of malware (even that which is too new to be considered 'known' yet) from executing (by virtue of preventing everything that isn't already explicitly allowed to execute from executing) so long as the user doesn't give the malware permission to run...
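here's a minimal sketch of hash-based application whitelisting - only executables whose hashes appear in an allow list get launched... the allow-list file is hypothetical, and real whitelisting is enforced by the operating system (through execution policy) rather than by a wrapper script, but the decision logic is the same...

```python
# minimal sketch of hash-based application whitelisting - the allowlist file
# is an assumption, and real whitelisting is enforced by the operating system
# (e.g. through execution policy), not by a wrapper script like this
import hashlib
import subprocess
import sys

def sha256_of(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def load_allowlist(path):
    # one sha-256 per line for every executable explicitly permitted to run
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def run_if_allowed(program, args, allowlist):
    if sha256_of(program) not in allowlist:
        print(f"execution denied (not on the whitelist): {program}")
        return 1
    # everything not explicitly allowed has already been refused above
    return subprocess.call([program, *args])

if __name__ == "__main__":
    allowed = load_allowlist("allowlist_sha256.txt")  # hypothetical file
    sys.exit(run_if_allowed(sys.argv[1], sys.argv[2:], allowed))
```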
after malware has successfully begun to execute on a system, any certainty about preventing that malware from compromising that system is gone, but that doesn't mean there aren't still possibilities for prevention... it may still be possible to prevent compromise by preventing one or more of the malware's bad behaviours using behavioural blacklisting, behavioural whitelisting, or some combination thereof (all of which fall under the general umbrella of behaviour blocking)... insofar as certain behaviours involve accessing system resources that can have their access restricted, operating as a user who doesn't have access to those resources (ie. running as a non-administrative user, following the principle of least privilege) will prevent a great deal of existing malware from being able to operate properly and thus can prevent compromise...
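to make the blacklisting/whitelisting distinction concrete, here's a toy sketch of the two policies (and their combination) - the behaviour names are made up, and a real behaviour blocker hooks these operations inside the operating system rather than being asked politely...

```python
# toy sketch contrasting behavioural blacklisting and whitelisting - the
# behaviour names are made up; a real behaviour blocker intercepts these
# operations inside the operating system
BLACKLIST = {"write_to_mbr", "disable_security_service", "inject_into_process"}
WHITELIST = {"read_own_files", "write_own_files", "open_network_connection"}

def blacklist_allows(behaviour):
    # blacklisting: everything is allowed except known-bad behaviours
    return behaviour not in BLACKLIST

def whitelist_allows(behaviour):
    # whitelisting: nothing is allowed except known-good behaviours
    return behaviour in WHITELIST

def combined_allows(behaviour):
    # a combination: must be known-good and also not known-bad
    return whitelist_allows(behaviour) and blacklist_allows(behaviour)

if __name__ == "__main__":
    for b in ("write_own_files", "write_to_mbr", "install_driver"):
        print(f"{b}: blacklist={blacklist_allows(b)}, "
              f"whitelist={whitelist_allows(b)}, combined={combined_allows(b)}")
```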
finally, if malware gets past all those previously described defensive lines, there's still one possible opportunity to prevent full system compromise... if you can avoid the consequences of the malware's behaviour by being lucky enough (or by managing the circumstances well enough) that the malware was actually running in a sandbox rather than on the main host system, then it will be the sandbox that gets compromised rather than the main host system... sandboxes are such that their compromise is usually inconsequential because they can be regenerated easily - and though extrusion of sensitive data may still be a problem, if the malware was contained within a sandbox then compromise of the host will have been avoided... otherwise the system will have become compromised, prevention will have completely failed, and it would now be an issue of detecting the preventative failure, diagnosis to determine the extent of that failure, and recovery from it...
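as a minimal sketch of how cheaply a sandbox can be regenerated, the following throws away a dirtied working copy and restores it from a pristine snapshot - the directory names are assumptions, and real sandboxes (virtual machines, dedicated sandboxing products) restore whole system state rather than a single folder tree...

```python
# minimal sketch of regenerating a disposable sandbox area from a pristine
# snapshot - directory names are assumptions; real sandboxes (VMs, dedicated
# sandboxing products) restore whole system state, not just a folder tree
import shutil
from pathlib import Path

PRISTINE = Path("sandbox_pristine")  # hypothetical known-clean snapshot
ACTIVE = Path("sandbox_active")      # hypothetical working copy that gets dirtied

def regenerate_sandbox():
    # throw away whatever the malware did to the working copy...
    if ACTIVE.exists():
        shutil.rmtree(ACTIVE)
    # ...and restore it from the known-clean snapshot
    shutil.copytree(PRISTINE, ACTIVE)

if __name__ == "__main__":
    regenerate_sandbox()
    print(f"{ACTIVE} rebuilt from {PRISTINE}")
```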
you can make any one of these stages your own first line of defense, but only by ignoring earlier opportunities to defend yourself... likewise, you can make any of these your last line of defense, but only by ignoring subsequent opportunities to defend yourself... think about what that statement means: ignoring opportunities to defend yourself... is that really something you want to do? does it sound wise? it certainly doesn't sound that way to me...
tags: anti-malware, defense in depth, prevention