devising a framework for thinking about malware and related issues such as viruses, spyware, worms, rootkits, drm, trojans, botnets, keyloggers, droppers, downloaders, rats, adware, spam, stealth, fud, snake oil, and hype...
Tuesday, November 20, 2007
looking for information
a little different from the normal fare for this blog, i just wanted to point people towards a comment that many probably wouldn't have otherwise seen (because they rarely leave their feed reader)... vesselin is looking for some info about a number of office-related vulnerabilities/exploits... i would imagine there must be some people out there who can help...
Monday, November 19, 2007
defense in depth revisited
so as a result of my previous post on the use of multiple scanners as a supposed form of defense in depth i was pointed towards this set of slides for a presentation by sergio alvarez and thierry zoller at n.runs:
http://www.nruns.com/ps/The_Death_of_AV_Defense_in_Depth-Revisiting_Anti-Virus_Software.pdf
the expectation was that i'd probably agree with its contents, and some of them i do (ex. some of those vulnerabilities are taking far too long to get fixed), but my blog wouldn't be very interesting if all i did was agree with people, so thankfully for the reader there are a number of things in the slides i didn't agree with...
the first thing is actually in the filename itself - did anyone catch that death of av reference in there? obviously this is qualified to be specific to a more narrow concept but the frame of reference is still fairly clear...
i'm not going to harp on that too much because that's just a taste of what's in store, actually... the main thrust of the presentation these slides are for seems to be that defense in depth as it's often implemented (ie. with multiple scanners at multiple points in the network) is bad because of all the vulnerabilities in scanners that malware could exploit to do anything from simply bypassing the scanner to actually getting the scanner to execute arbitrary code...
this says to me that instead of recognizing that you can't build defense in depth using multiple instances of essentially the same control, the presenters would rather call the construct a defense in depth failure and blame the failure on the scanners and the people who make them (and make no mistake, there certainly is some room for blame there)... the fact is that it was never defense in depth in the first place and if you want to assign blame, start with the people who think it is defense in depth because they clearly don't understand the concept... in a physical security context, if my only defensive layer is a wall and i decide to add a second wall (and maybe even a third) i add no depth to the defense... an attack that can be stopped by a wall will be stopped by the first one and one that can't be stopped by the first wall probably won't be stopped by subsequent walls...
the slides also have some rather revealing bullet points, such as the one that lists "makes our networks and systems more secure" as a myth... this goes back to the large surface area of potential vulnerabilities; the argument can be made that using such software increases the total number of software vulnerabilities present on a system or in a network - however this is true for each and every piece of software one adds to a system... i've heard this argument used in support of the idea that one shouldn't use any security software and instead rely entirely on system hardening, least privileged usage, etc., but it's no more convincing now than it has been in the past... yes, the total number of vulnerabilities increases, but there's more to security than raw vulnerability counts... the fact is that although the raw vulnerability count may be higher, the real-world risk of something getting through is much lower because of the use of scanners... there aren't legions of malware instances exploiting these scanner vulnerabilities, otherwise we'd have ourselves an optimal malware situation...
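to make that "net change" idea concrete, here's a toy back-of-the-envelope calculation... every number in it is invented purely for illustration (they aren't measurements of any real scanner or network), it's just meant to show how adding a small new risk can still reduce the overall risk...

```python
# toy illustration only - the probabilities below are invented for the sake
# of argument, not measured from any real scanner or network

# chance that a given piece of malware gets through with no scanner in place
p_malware_through_no_scanner = 0.90
# chance that it gets through when a scanner is in place
p_malware_through_with_scanner = 0.20
# chance that the scanner itself gets exploited by something it inspects
p_scanner_exploited = 0.001

# rough "something bad happens" figure for each configuration
# (simple addition is used as a crude upper bound for the combined risk)
risk_without = p_malware_through_no_scanner
risk_with = p_malware_through_with_scanner + p_scanner_exploited

print(f"risk without scanner: {risk_without:.3f}")
print(f"risk with scanner:    {risk_with:.3f}")
# the scanner adds a new (small) risk but removes a much larger one,
# so the net change in risk is still strongly downward
```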
another point, and one they repeat (so it must be important), is the paradox that the more you protect yourself the less protected you are... this follows rather directly from the previous point: using multiple scanners is bad because of all the vulnerabilities... the implication, however, is that if using more scanners makes you less secure then using fewer scanners should make you more secure and thus using no scanners would make you most secure... i don't know if that was the intended implication, i'm tempted to give the benefit of the doubt and suggest it wasn't, but the implication remains... again, there's more to security than what they're measuring - they're looking at one part of the change in overall security rather than the net change...
yet another point (well, set of points, really) had to do with how av vendors handle vulnerability reports... as i said earlier, some of the vendors are taking far too long, but some of the other things they complain about are actually quite reasonable in my eyes and i find myself in agreement with the av vendors... things like not divulging vulnerability details when there is neither a fix nor a workaround ready yet (no point in giving out details that only the bad guys can actually act on), condensing bugs when rewriting an area of code (i develop software myself, it makes perfect sense to me that a bunch of related bugs with a single fix would be condensed into one report), fixing bugs silently (above all else, don't help the bad guys), and spamming vulnerability info in order to give credit to researchers (if you're in it for credit or other rewards you'll get no sympathy from me)...
finally, perhaps the most novice error in the whole presentation was the complaint that scanners shouldn't flag archive files as clean if they're unable to parse them... there is a rather large difference between "no viruses found" and "no viruses present"... scanners do not flag things as clean (at least not unless the vendor is being intellectually dishonest) because a scanner cannot know something is clean - all a scanner can do is flag things that aren't clean, and the absence of such flags cannot (and should not, if you know what you're talking about) be interpreted to mean a thing is clean... scanners tell you when they know for sure something is bad, not when they don't know for sure that something isn't bad... if you want the latter behaviour then you want something other than a scanner...
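for what it's worth, here's a minimal sketch of what that distinction might look like in code... the types and names are entirely hypothetical (no real scanner exposes this api), it's just meant to show that "nothing found" and "clean" are different answers...

```python
from enum import Enum, auto

class ScanResult(Enum):
    DETECTED = auto()      # known-bad content was found
    NOT_DETECTED = auto()  # nothing known-bad was found - NOT the same as "clean"
    UNSCANNABLE = auto()   # e.g. an archive the engine couldn't parse

def report(result: ScanResult) -> str:
    # note there is deliberately no way to say "this file is clean" -
    # a known-malware scanner can only ever report what it did or didn't find
    if result is ScanResult.DETECTED:
        return "malware found"
    if result is ScanResult.UNSCANNABLE:
        return "could not be scanned"
    return "no malware found (which is not a guarantee of cleanliness)"

print(report(ScanResult.UNSCANNABLE))
```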
all this being said, though, i did take away one important point from the slides: not only is using multiple scanners not defense in depth, using multiple scanners in the faux defense in depth setup that many people swear by comes with security risks that most would never have considered...
Monday, November 12, 2007
the myth of optimal malware
this is going to be an interesting myth-debunking post because the object of the myth actually exists...
there are any number of various optimal properties that a given piece of malware can have, such as novelty (new/unknown), rarity (targeted), stealth, polymorphism, anti-debugging tricks, security software termination, automatic execution through exploit code, etc...
likewise, there are any number of optimizations a malware purveyor can adopt, such as continually updating the malware, targeting the malware to a small group of people, spreading it with botnets, only using malware which is itself optimal, etc...
there really is malware using some or perhaps most of those tricks and there really are malware purveyors using some or all of those techniques, so you may be wondering where the myth comes in... the myth comes in when we start considering these optimizations as being universal or at least close to it - that most if not all malware is as optimal as it can possibly be and that most if not all malware purveyors use the most optimal deployment techniques they possibly can...
while each of those optimizations on its own seems to have become commonplace, few instances of malware can truly be considered optimal... for example, most malware is not targeted - sure, the instances we hear about make for a great story and draw lots of readers, but they're a drop in the bucket next to the total malware population... novelty is seemingly even more popular since all malware starts out as new at some point, but as i've said before novelty is a malware advantage that wears off... polymorphism attempts to keep that novelty going indefinitely, but polymorphism, novelty, and targeting were really only ever effective against known-malware scanning - they hold no particular advantage against anti-malware techniques that don't operate by knowing what the bad thing looks like...
the same holds true for malware purveyors as well; few do what it really takes to get the most out of malware... otherwise malware would be much more successful than it is... even security-conscious folks like you and me would be getting compromised left, right, and center because our anti-malware controls would just not be effective...
but that doesn't stop people from believing or falling into the logical trap that is optimal malware... i'm sure you've seen and perhaps even constructed arguments based on this fallacy... the anti-virus is dead argument is based in this as it posits that scanners are not effective because of new/unknown malware despite the fact that that malware doesn't stay new/unknown for long and that the effectiveness of known-malware scanning is precisely the reason the malware creators have to keep churning out new versions of their wares... the school of thought that says software firewalls are useless because malware can just shut them down or tunnel through some authorized process is likewise based on the myth of optimal malware because although some malware certainly does bypass software firewalls, not all do, and so they remain at least somewhat effective as a security control... in fact, any similar argument that says security technology X is useless because malware can just do Y to get around it is based on the myth of optimal malware as there is plenty of malware that doesn't do Y... i think i've even fallen prey to this fallacy on occasion when constructing arguments (so there's no need to point examples out to me, i know i'm not perfect)...
so keep this in mind the next time you run across a school of thought that attributes near supernatural abilities to malware - with truly optimal malware the malware purveyors would be able to get past most if not all our anti-malware controls all of the time (not unlike fooling all of the people all of the time), and since that isn't happening we can conclude that most malware is in fact not optimal...
Tags:
anti-malware,
malware,
myth
the user is part of the system
dave lewis posted a short observation on how XSS gets discounted and in the process touched on something much bigger... a lot of people want to discount anything in security that depends on the user...
daniel miessler said more or less this very thing when he wrote about the new mac trojan, and marcin wielgoszewski seems to have agreed with him... then there are those who discount the notion of user education as something that doesn't work or is wasted effort... more ubiquitous than that are the security models and software designs that do their best to exclude or otherwise ignore the user in order to devise purely technological solutions to security problems...
perhaps this is something that not everyone learned in school (like i did) but the user is part of the system... sure, the user can be considered a complete and whole thing on its own - the user is a person, an individual who can exist and be productive without the system if need be - but can we say the same about the system? does the system do what it's intended to do without the user? does the work that the system needs to complete get done without the user? if the answer is no (and it generally is) then the system is not complete without the user... that means security models that ignore or exclude the user are models of systems missing a key component - and so-called solutions designed to work without regard for the user wind up getting applied to problem environments that don't match the ideal user-free world they were designed for...
including the user in one's analysis is hard and messy, i know, but excluding the user trades that difficulty in for another in the form of reduced applicability to the way things work in practice... after all, treating user-dependent risks as second-class security problems certainly doesn't make a lot of sense when social engineering is proving to be more effective than exploiting software vulnerabilities in the long run...
Tags:
security
Saturday, November 10, 2007
using multiple scanners is not defense in depth
i often come across nuggets of information that i want to respond to (often because they represent fundamental assumptions that i think are wrong) that don't really have anything to do with the main point of the article, so i'll leave it as an exercise for the reader to guess who mentioned using 2 different scanners as being a part of defense in depth...
that post didn't have anything to do with av and this post doesn't really have anything to do with that post but the idea that using one vendor's scanner at the gateway and a different vendor's scanner on the desktops qualifies as defense in depth is actually fairly old and oft-repeated so this really goes out to a fairly broad audience...
using multiple scanners is NOT defense in depth... at best it's defense in breadth... known malware scanners all have essentially the same strengths and weaknesses, they all look for and block essentially the same sorts of things, there's going to be very little caught by one that isn't also caught by the other so they don't really complement each other...
the premise of defense in depth is that any given defensive technique has both strengths and weaknesses and overall defense can be stronger if that technique is combined with one or more other defensive techniques that are strong where the first one is weak... no layer in the defense is impenetrable but in combination the layers together approach much closer to impenetrability...
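to put that premise in concrete terms, here's a rough sketch contrasting redundant layers with complementary ones... the controls and the attack properties are entirely made up for illustration, they aren't models of any real product...

```python
# entirely hypothetical controls and attack properties, for illustration only

# each "control" blocks attacks that have a property it can see
def known_malware_scanner(attack):
    return attack.get("known_signature", False)

def behaviour_blocker(attack):
    return attack.get("suspicious_behaviour", False)

# a brand new sample: no known signature yet, but it still has to misbehave
new_malware = {"known_signature": False, "suspicious_behaviour": True}

# redundant layers: two controls that key off the same property
redundant = [known_malware_scanner, known_malware_scanner]
# complementary layers: controls that are strong where the other is weak
complementary = [known_malware_scanner, behaviour_blocker]

def blocked(layers, attack):
    # an attack is stopped if any layer catches it
    return any(layer(attack) for layer in layers)

print("two scanners stop the new sample:     ", blocked(redundant, new_malware))      # False
print("scanner + behaviour blocker stop it:  ", blocked(complementary, new_malware))  # True
```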
so defense in depth requires complementary techniques/technologies and in so far as av companies are increasingly providing that in their suites, using similar products from multiple vendors doesn't get you any more defense in depth than you could have gotten with a single product because similar products are not complementary... what it can get you, however, is best of breed - some scanners may have features that make them better suited to gateway usage than others...
of course one could argue that they regard defense in depth as having defenses at multiple perimeters (the gateway and the host machines) but again, if those defenses are mostly the same then the inner layers of defense won't really be adding that much more to the overall defense... so using multiple similar products at different perimeters doesn't really add to the depth of your defenses, instead it adds redundancy which is the primary ingredient of fault tolerance...
Wednesday, November 07, 2007
what is a drive-by download?
a drive-by download is a form of exploitation where simply visiting a particular malicious website using a vulnerable system can cause a piece of malware to be downloaded and possibly even executed on that system...
in other words it's a way for a system to be compromised just by visiting a website...
the vulnerability (or vulnerabilities) exploited in order to cause a drive-by download can be in the web browser itself or possibly in some other component involved in rendering the content of the malicious page (such as a multimedia plug-in or a scripting engine)...
drive-by downloads are particularly pernicious for two reasons... the first is that it can be hard to avoid being vulnerable and still maintain the functionality people have come to expect from the web... all software has vulnerabilities at least some of the time, and there may be quite a few pieces of software on a given system that deal with web content (such as real player, quicktime, flash, adobe acrobat reader, etc.)... add to that the fact that vulnerabilities aren't always fixed right away and that many users don't apply patches or updates as soon as they're available and you wind up with a fairly large pool of potential victims...
the second reason they are so pernicious is that it can be hard to avoid being exposed to an exploit leading to a drive-by download... the exploit can be delivered through legitimate, high profile, mainstream sites by way of the advertising (or other 3rd party) content on the site... if the ad network that supplies the advertising content is infiltrated by cyber-criminals (which has been known to happen) then they can sneak a malicious ad into the network's ad rotation and get it inserted into otherwise trusted and trustworthy sites... for this reason the old advice of only visiting trusted sites can't really protect you from this type of threat...
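as a toy illustration of one classic delivery pattern - an invisible iframe injected into an otherwise legitimate page - here's a minimal sketch that flags such frames... both the page content and the heuristic are contrived for illustration and are nothing like a real detection tool...

```python
import re

# contrived page content - a legitimate-looking page with an injected,
# invisible iframe pointing at a third-party host (a classic drive-by
# delivery pattern)
page = """
<html><body>
<h1>perfectly normal news site</h1>
<iframe src="http://ads.example.net/banner" width="468" height="60"></iframe>
<iframe src="http://bad.example.org/exploit" width="0" height="0" style="display:none"></iframe>
</body></html>
"""

# crude heuristic: iframes that are zero-sized or hidden deserve a closer look
iframe_pattern = re.compile(r"<iframe[^>]*>", re.IGNORECASE)
suspicious = [
    tag for tag in iframe_pattern.findall(page)
    if 'width="0"' in tag or 'height="0"' in tag or "display:none" in tag
]

for tag in suspicious:
    print("suspicious iframe:", tag)
```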
back to index
Tags:
definition,
drive-by download,
exploit,
malware