in point of fact, both anti-av and pro-av are absurd... i'll explain why in a moment...
given my comments about dancho danchev's anti-av leanings in my previous post, i anticipate there will be a number of people thinking (though not necessarily bothering to say it to my face) that i'm the polar opposite - fanatically pro-av... indeed, this has already happened in the past as there are those who wonder about my sanity, consider me an av extremist, and think i belong on the list of the 6 dumbest people in IT security (an homage to ranum's largely ill-conceived 6 dumbest ideas in IT security)...
let me ask you something, though... does it make sense to be pro-hammer? how about anti-screwdriver? would you sit on the fence about a tape measure?
av, even if we accept the dumbed-down interpretation of the term as just known-malware scanning (an interpretation that's somewhat more forgivable from average joe public, who has an excuse for not knowing better), is just a tool - no more and no less... it's a tool that is remarkably good at a narrowly defined set of tasks: detecting known malware for the purposes of prevention, and connecting known-malware incidents that aren't prevented to expert knowledge of the malware in question (by identifying the malware) for the purposes of diagnosis...
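to make that 'narrowly defined set of tasks' concrete, here's a minimal sketch (python, with a made-up hash/name entry - real scanners use far more sophisticated signatures than whole-file hashes) of what known-malware scanning boils down to: match the object against a catalog of known samples, and when it matches, name the malware so the incident can be connected to expert knowledge about it...

import hashlib

# hypothetical signature catalog: hash of a known sample -> name of the malware
# (the entry below is made up purely for illustration)
KNOWN_MALWARE = {
    "0123456789abcdef0123456789abcdef": "example.worm.a",
}

def scan(data):
    """return the name of the known malware these bytes match, or None"""
    return KNOWN_MALWARE.get(hashlib.md5(data).hexdigest())

verdict = scan(b"some downloaded content")
print(verdict)   # None - not known malware (which is not the same thing as clean)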
being anti-av is like being anti-hammer or anti-screwdriver (and likewise for being pro-av, pro-hammer, pro-screwdriver)... it's a tool (not a religion or a political party), and there are times when it's the appropriate tool to use... those who would completely drop it in favour of some other supposedly superior technology, or who would complain that they relied on av and it let them down, are the very people who never learned the lesson about how, when all you have is a hammer, everything looks like a nail... known-malware scanning is a tool, whitelisting is a tool, sandboxing is a tool, behaviour blocking is a tool, heuristic analysis is a tool, etc, and i use and advise on the use of most if not all of them - security practitioners, better than anyone, should understand the importance of having a well equipped security toolbox with a variety of tools for a variety of jobs... only the naive would think that the entire malware problem could be comparable to just hammering nails and thus require just one tool...
and since i do make use of all 3 preventative paradigms, it's hard to imagine how i could simply be pro-av... i'm pro-knowledge - i think people should know their tools, what they can do and what they can't, and know the problem so that they can choose and use the tools appropriately... it's really a shame when people commit the anti-malware equivalent of trying to hammer screws into wood, and not nearly as funny...
Sunday, July 27, 2008
the ongoing n.runs saga
you may recall my previous post about n.runs... well, it seems i wasn't the only one who saw FUD, as ryan permeh wrote on mcafee's blog about what n.runs was saying specifically about mcafee... now it seems that thierry zoller of n.runs has responded to the mcafee post, or at least he tried to...
he didn't do a particularly good job of it, however... despite explaining that the graphs come from data gleaned from publicly available 3rd party vulnerability catalogs (something that was clear from their original press release and not in need of additional explanation), he didn't address the issue ryan raised about not being able to verify n.runs' figures when looking at the raw data... instead he mistakenly or intentionally misled the reader into thinking that ryan was looking to verify the 800 figure (which was n.runs' own) when it was clear from ryan's post that he was only trying to verify the figures that applied directly to mcafee and that were supposed to have come from the 3rd party databases...
thierry also denied making the claim and/or believing that running av makes you less secure in spite of the fact that an n.runs slide deck i posted about last november makes exactly that claim...
additionally, where ryan claimed there was no evidence of these vulnerabilities in mcafee's product being exploited in the wild, thierry responded by saying that's because of the way the vulnerabilities are reported - apparently ignoring the fact that being used in the wild means there should be malware samples implementing the exploit(s), and that mcafee should have seen some of those samples by now...
one thing that ryan didn't really bring up and so wasn't addressed by thierry is the absurdity of aggregating the vulnerability count across an entire industry (where the 800 vulnerabilities figure is supposed to come from)... it's not an actionable metric, it doesn't say anything about any particular product or vendor within that industry, and only serves to scare people... this is the kind of marketing that john mcafee (long absent from the company bearing his name) used back in the days of the michelangelo virus (have i just invoked the anti-malware industry's version of godwin's law?)... even if there technically are that many vulnerabilities across the product lines of the entire set of vendors in the av industry, it's an entirely pointless measurement...
and while we're on the subject of marketing, am i the only one who's noticed that dancho danchev has put rather a lot of effort into providing a platform for n.runs to spread their marketing message from? one might wonder if he were still as 'independent' as he claims to be, though a more reasonable explanation might be that his rather obvious anti-av leanings (he's frequently made disparaging insinuations on his blog in the past) have been kicked up a notch so that, given the obvious ammo this 800 vulnerabilities claim could provide to an anti-av agenda, he either doesn't care or isn't aware that it's a marketing message he's helping to spread... given a more recent post where he misleads by misusing terms that someone in his position has no legitimate excuse to mix up (samples != variants != families != signatures, so counts of one can't be compared to counts of another), this latter explanation seems all the more plausible...
Tags:
dancho danchev,
fud,
nruns,
ryan permeh,
thierry zoller
Friday, July 25, 2008
i'm going to exploit DNS
which is to say i'm going to exploit everyone's interest in the DNS vulnerability dan kaminsky discovered in order to draw eyeballs... in other words: made you look!...
actually, i read on the recurity labs blog that every computer security blog on the planet had written something about the DNS vulnerability, and since i hadn't done so yet i was technically making a liar out of that blogger and that's no good...
problem is, though, that for the most part i really have nothing to say about the DNS vulnerability... i don't like writing about things i don't know, and DNS is one of the many things i don't know all that much about...
the disclosure argument that ensued was somewhat familiar territory but it wasn't until i read andre gironda's comments to rich mogull's post about whose interests are being served that things finally crystallized in my mind about this... in a nutshell, the information is such that the vast majority of people have no legitimate need or use for it and so to keep it out of the hands of the black hats it really shouldn't be broadcast for the world to see, but at the same time there are enough people that may have a legitimate need or desire to know the information that any centrally coordinated effort to inform them will fail to include many and will only result in an exercise in elitism...
believe it or not, there's actually an old lesson from the AV field to be learned here, because the AV community has had to deal with a very similar situation somewhere on the order of about a million times now... long, long ago they came up with a solution that actually works pretty well: a kind of darknet where links are formed between individuals who know and trust each other's motives, competence, and judgment - basically a web of trust for sharing malware samples... the benefits are that the adherents to this approach avoid contributing to the sharing of information/materials with people who have no business handling them, while at the same time giving everyone who may have a legitimate need or desire for the information/materials a fair opportunity (but not a guarantee) to acquire them by virtue of being connected to someone else (or better still, many others) who has the means and desire to acquire them...
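to sketch the mechanics (python, with invented names and relationships - this is an illustration of the idea, not a description of anyone's actual system): materials only ever move along direct, vetted relationships, but anyone who puts in the work to get vetted by someone already inside has a path to what they need...

# purely illustrative web of trust - an edge means the first person has personally
# vetted the second's motives, competence, and judgment
trust = {
    "alice": {"bob"},
    "bob": {"carol"},
    "carol": set(),
}

def may_hand_to(holder, requester):
    """samples/info only change hands along a direct, vetted relationship"""
    return requester in trust.get(holder, set())

def reachable_from(source):
    """everyone who could eventually obtain the material via a chain of vetted peers"""
    seen, frontier = set(), [source]
    while frontier:
        for peer in trust.get(frontier.pop(), set()):
            if peer not in seen:
                seen.add(peer)
                frontier.append(peer)
    return seen

print(may_hand_to("alice", "carol"))        # False - alice hasn't vetted carol directly
print("carol" in reachable_from("alice"))   # True - but carol can still get it via bob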
it's not a perfect system, of course... there have been some instances where the wrong person was trusted, prompting people to rethink how they decide who to trust... there has also been no shortage of lazy bums who can't be bothered to put in the work necessary to actually earn the trust of at least one of their peers and instead whine about how unfair and elitist it is, like some petulant child who feels entitled to receive whatever s/he asks for... there's nothing stopping them from building the relationships necessary to participate in such a network but some people just don't understand that nothing worthwhile in this world is free...
i really think that the wider security community would benefit from adopting a similar approach to 'disclosure', and certainly in the case of the DNS vulnerability there could have been benefits if such a distributed trust-based network had already been established and was utilized... i say "already been established" because it can take time for a trust-based network to get connected and mature; but really, from a tactical point of view, information/intelligence gathering/sharing in hostile territory (and when we talk about broadcasting things on the global stage, the presence of hostile agents is beyond doubt) requires the use of covert channels such as this...
Tags:
andre gironda,
disclosure,
DNS,
rich mogull,
tactics,
vulnerability
Sunday, July 13, 2008
if i have a whitelist, do i still need AV?
on reading this dark reading article about a texas bank dumping AV in favour of application whitelisting technology i was reminded of an email conversation i had with a reader who was dealing with almost the exact same issue only days earlier... i've come to realize this is a question more and more people are going to be wrestling with as time goes on, so instead of expecting them to contact me privately i'm going to answer the question of whether you need AV software if you have a whitelist here... the glib answer is no, you don't need it...
of course that answer continues with: you don't need the whitelist either, or your computer for that matter - people survived perfectly fine before any of that stuff existed...
clearly, the glib answer isn't all that useful...
the real answer is that i can't decide for anyone whether they need AV... in practice security has a lot to do with making trade-offs and which trade-off is right for any particular person or organization is best decided by those who would have to live with the consequences of that decision... what i can do, however, is point out weaknesses in both individual technologies and describe some of the benefits of using them together...
it's no secret that known-malware scanners are ill-suited to detection of new/unknown malware, updates can be a hassle for large organizations, and the ongoing subscription fees have in the past seemed like a necessary evil but are increasingly seeming less necessary as application whitelisting works its way into the mainstream...
of course, as we know from figures provided by bit9, the set of good software is larger and growing faster than the set of malware, so a vendor-supplied whitelist will almost certainly be worse from an update point of view (and maybe subscription-wise too)... the customer-editable whitelist is much more palatable in that regard because you only put on it those things you actually need in your production environment... coming up with the initial whitelist (not to mention modifying it if/when the needs of that environment change or when software needs updating) puts the customer in the position of having to decide what's safe or not... certainly one should only trust software from known, reputable sources, but those sources aren't perfect (even microsoft has accidentally distributed malware in the past), so the first benefit of continuing to use a known-malware scanner even though you've chosen to use a whitelist is that you can check the software you intend to whitelist (as whitelist vendors often do for whitelists they provide)... this is an example of the axiom "trust but verify"...
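in code terms the 'trust but verify' step is small... a minimal sketch (python; the scanner hook and the hash-based whitelist are assumptions made purely for illustration, not any particular product's api) of gating additions to a customer-editable whitelist behind a known-malware scan:

import hashlib

whitelist = set()   # hashes of executables approved for the production environment

def file_hash(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def scan_with_av(path):
    """hypothetical hook into whatever known-malware scanner is deployed;
    returns the name of any known malware found, or None"""
    return None   # stub for the purposes of this sketch

def add_to_whitelist(path):
    verdict = scan_with_av(path)   # trust the reputable source, but verify
    if verdict is not None:
        raise ValueError("refusing to whitelist %s: detected as %s" % (path, verdict))
    whitelist.add(file_hash(path))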
that alone may not seem like it's enough to warrant keeping the desktop av, however, so consider this: just as known-malware scanners don't recognize everything that's malware, application whitelist software doesn't recognize everything that's a program... will the whitelist software block office macros if the office binaries are on the whitelist? will it block batch, perl, kixtart, etc. scripts if their respective interpreter is on the whitelist? will it block javascript from the myriad of ways it can be launched on the system? as an example, i use the application launch control functionality of sunbelt personal firewall as a whitelist and i discovered quite by accident recently that it does not block individual batch files... a known-malware scanner would detect known malware in a batch file, however, and in an office macro, etc... that's a second benefit of keeping known-malware scanners around in a whitelist deployment, and it's one that applies even at the desktop level...
that's not all though... there's also the issue of exploit code that exploits vulnerabilities in whitelisted applications... if the exploit needs to launch additional applications that aren't on the whitelist then the whitelisting software will have interfered with and potentially blocked the exploit, but what if all the applications it needs are on the whitelist? the whitelisting software would be helpless to stop it but a known-malware scanner might still have at least some chance to do so...
known-malware scanners are weak against that which is novel while application whitelists (assuming no malware gets whitelisted) are weak against that which is exotic... together they're only weak against that which is both novel and exotic... this is the essence of what defense in depth means when it comes to the anti-malware world - not using multiple different scanners but rather using multiple and entirely different types of technology so that the second (or third, or fourth, etc) line of defense can stop at least some of what gets through the cracks in the previous line(s) of defense... it's still up to the people affected to decide whether the benefits of a multi-layered strategy warrant the cost but they should definitely consider it as no technology is an island, perfect unto itself...
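put another way, the layered decision is tiny... a sketch (python, with the hooks passed in as callables since they're hypothetical stand-ins for a real scanner and a real whitelisting product) of why the combination only fails when something slips past both layers, i.e. when it's both novel and exotic:

def allowed_to_run(item, is_known_malware, is_recognized_program, is_whitelisted):
    """sketch of the layered decision - the three callables stand in for a
    known-malware scanner and an application whitelisting product"""
    if is_known_malware(item):          # scanner layer: catches the known, however exotic the form
        return False
    if is_recognized_program(item):     # whitelist layer: programs it can see must be approved
        return is_whitelisted(item)
    return True   # scripts/macros the whitelist can't see - only the scanner stood in the way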
Wednesday, July 09, 2008
lies, damn lies, and statistics
nruns (who, by the way, have a product designed to fix the problem they're playing chicken little over) are reporting that there are 800 vulnerabilities in anti-virus products...
that's a pretty scary number, isn't it? 800... wow...
but wait, that's not 800 vulnerabilities in your av product, or in my av product, that's the aggregate number across more or less all av products... when was the last time anyone reported an undifferentiated vulnerability metric for an entire class of software? when has that even been interesting in the past?
imagine how many vulnerabilities there are in operating systems - not just one operating system, but all of them combined... and while we're imagining this metric, let's make sure that we count each distribution of linux independently, just so we can see how high we can make the metric go...
i wonder how many vulnerabilities there are in browsers or media players or word processors... not just individual products but the entire classes... again i'm sure the numbers are big - see how using browsers or media players or word processors becomes part of the problem? because that is one of the arguments nruns is making - security software increases the attack surface and therefore makes you less secure... which obviously completely ignores the positive contributions that software makes to security - you need to examine both positive and negative contributions, folks... you have balanced a checkbook or budget at some point, right?
while we're examining things, let's examine the independent figures that nruns themselves point out... 50 security advisories for the period from 2002 to 2005, and 170 for the period from 2005 to 2007... first of all, this looks like growth but could just as easily be the result of increased focus on finding vulnerabilities in this class of software... second, 220 (170 + 50) is a far cry from 800... i'd be interested in knowing EXACTLY how that number was arrived at (since such knowledge is how one avoids being misled by bad statistics)... is each vulnerability distinctly different? if 50 applications have the same vulnerability does it get counted 50 times? if the vulnerabilities were found by forensic analysis of binaries, were any vulnerabilities counted multiple times in the same product due to appearing in multiple places in the code? etc...
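to illustrate how much the counting method matters, consider this toy sketch (python, with entirely made-up data) - the same handful of advisories can be made to look several times bigger depending on whether you count distinct issues or per-product entries:

# made-up advisory data: (advisory id, affected product)
advisories = [
    ("ADV-1", "scanner-a"), ("ADV-1", "scanner-b"), ("ADV-1", "scanner-c"),
    ("ADV-2", "scanner-a"),
    ("ADV-3", "scanner-b"), ("ADV-3", "scanner-c"),
]

per_product_entries = len(advisories)                 # 6 - the headline-friendly way to count
unique_issues = len({adv for adv, _ in advisories})   # 3 - distinct vulnerabilities actually found

per_product_counts = {}
for _, product in advisories:
    per_product_counts[product] = per_product_counts.get(product, 0) + 1
worst_single_product = max(per_product_counts.values())   # 2 - the only number a user of one product feels

print(per_product_entries, unique_issues, worst_single_product)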
at a time when treating the av industry as security's whipping boy is at an all-time high, such sensationalistic numbers probably make for good marketing for nruns' product, so long as people don't recognize it for the opportunistic FUD that it is... nruns has what sounds a bit like a scanner sandboxing product, which on the face of it sounds like it might be a pretty good idea, but they clearly have a vested interest in making the av industry look bad (because that drives demand for their product) so even if you don't believe the 800 vulnerabilities figure is intentionally poorly defined or an example of opportunistic FUD, you should at least recognize that it's a figure that should be taken with more than a few grains of salt...
Tags:
anti-virus,
nruns,
vulnerability
Monday, July 07, 2008
the future of malware past
in the two most recent posts on eset's threatblog david harley has been talking about old malware... i don't just mean a couple of months or even years old - seriously old malware from over a decade ago that nonetheless continues to cause problems for people...
in the first post david asked how this could be (and in the second pointed out that my answer was more to a 'why' question than a 'how' one - just the kind of kick in the butt i need to remember to think twice and speak once)...
indeed, if you've got an on-access scanner you ought to be protected from old malware without even thinking about it, right? that simplistic view is certainly what most people have learned, but if the infected disk isn't accessed until the computer is booting (a time when neither the on-access scanner nor the operating system itself is loaded yet) then that same simplistic view is proven to be a false sense of security...
that being said, the reason people don't think about or talk about or otherwise take special cases like this into consideration is the widespread (and false) belief that malware eventually becomes extinct... i've already stated numerous times that old viruses never die, but what harm could that little mental shortcut really do? well, for one thing, when left unchallenged it becomes a dominant, pervasive belief... a belief so strong that certain best practices and safe hex behaviours - like manually scanning disks before using them (even when they come from your own backups), or changing the boot priority in the BIOS to prevent possible boot sector infectors from getting an opportunity to execute - fall out of common use and knowledge... a belief so strong that vendors may actually remove virus signatures from their signature databases for performance reasons, eventually allowing viruses that have no good reason to still be around to once again cause problems for many people...
on the other hand, there's no way any reasonable person would accept a belief system that says old malware remains as much of a threat now as it was when it was first released, so how should we think about this problem? i suggest that we think of old malware the way we think of landmines - long forgotten and unused but not entirely gone, and one false step and you may be hosed...
this goes for all malware, however one may rightly suggest that some malware won't age as well as others... malware that relies on some sort of infrastructure probably won't do so well in 10 years (when its command and control network no longer exists or no one's listening to the domain it sends its logs to)... older (pre-commercial) malware didn't tend to rely on such things, though, and viruses (due to their infectious nature) are more likely to find their way into backups and so be re-encountered weeks/months/years later... boot sector infectors (BSIs) specifically may well prove to be the longest lived in practice because of their independence from, and execution priority over, the OS or most any other software component of the system... and of course, since they're some of the oldest viruses (the first pc virus was a boot sector infector) and one of the first types to fall out of fashion, and since they're still causing problems, they're already off to a fine start...
foolishly forgetting or recklessly ignoring the threat posed by old malware will ultimately make utilizing backups, archives, and just plain old media a bit like strolling through a minefield without a map... old malware won't go away, it will just lie dormant in the nooks and crannies of the computer world until we take that one wrong step and it comes back anew...
Tags:
boot sector virus,
david harley,
eset,
malware,
virus
Saturday, July 05, 2008
the malware plateau
i'm sure the title of this post makes it pretty clear i'm referring to findings recently posted on the mcafee blog by toralv dirro that show that malware growth is slowing... matt hines has also written about it, wondering and postulating about what might be going on...
while it still may be too early to tell exactly what's going on, the very idea that it might be even just temporarily slowing down has brought to mind an interesting train of thought:
exponential growth (in anything) is not sustainable indefinitely
there's no question that the set of all malware has undergone incredible growth in numbers over the past several years... just as unquestionable, though, is the fact that the resources being consumed in order to effect the apparent exponential growth in malware numbers are themselves either fixed or growing at a significantly less than exponential rate... by resources, i'm not just talking about money (though that certainly is part of the equation) but also manpower, time, the pool of susceptible marks, etc...
reaching a point of equilibrium was a foregone conclusion... even if this isn't that point, a plateau like this is inevitable...
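for the mathematically inclined, the usual way to picture growth feeding on a finite pool of resources is the logistic curve... a toy sketch (python, with invented numbers that don't represent any real measurements) of how exponential-looking growth flattens into a plateau as the limit is approached:

# toy logistic model: growth proportional to both the current amount of malware
# and the remaining, finite pool of resources/marks available to fuel it
capacity = 1_000_000
rate = 0.5
count = 1_000.0

for year in range(1, 41):
    count += rate * count * (1 - count / capacity)
    if year % 5 == 0:
        print(year, round(count))
# early iterations look exponential; as count approaches capacity the yearly
# increase shrinks toward zero and the curve flattens out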
Tags:
malware,
malware creation,
matt hines,
mcafee,
toralv dirro
Wednesday, July 02, 2008
how not to comment spam
i recently received some comment spam for this blog... that's not altogether unusual, and certainly not noteworthy in itself, but this comment was something else... i think i'll need to show you what i saw:
now i've seen a number of things over the years that have triggered my internal spam detector (including people trying to put html tags in the name field - a great way to get your comment rejected, just so you know) but never has anyone put so much individual effort into blowing smoke up my ass...
add to that the name this person entered: seo is a common abbreviation for search engine optimization, which is something that comment spammers try to accomplish...
and of course, the most obvious sign of them all, the link that has nothing to do with the article the commenter was commenting on nor the comment itself... links that aren't a natural part of the conversation shouldn't go in the comment... if you want to raise awareness for a site, stick the url in the website field of the comment form...
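none of these signals require much intelligence (artificial or otherwise) to spot... a toy sketch (python, with invented weights and an invented threshold) of the kind of checks my internal spam detector was effectively running:

import re

def looks_like_comment_spam(name, links_in_comment, post_topic_words):
    """toy scorer for the signals described above - the weights and threshold are invented"""
    score = 0
    if re.search(r"<[a-z]+[^>]*>", name, re.I):        # html tags in the name field
        score += 2
    if "seo" in name.lower():                          # a name chosen for search-engine value
        score += 1
    for link in links_in_comment:
        if not any(word in link.lower() for word in post_topic_words):
            score += 2                                 # a link unrelated to the conversation
    return score >= 2

print(looks_like_comment_spam("seo expert", ["http://example.com/cheap-pills"], ["malware", "av"]))   # True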
in fact, if you really want to successfully raise awareness for something, take a look at rob lewis' comments... it became clear to me rather quickly that rob was trying to drum up interest in a product but i accepted his comments anyways... why? because he asked questions, he asked for opinions, he engaged in conversation and posted links in a manner that fit with the conversation... in short he was an evangelist rather than a mindless comment spammer...
oh well, just more grist for the lolthreats mill i suppose...
Tags:
comment spam
take my content, please!
rich mogull exacted his revenge against a blog scraper today by calling the scraper out on his blog, thereby getting that callout scraped and put on the scraper's site...
i've seen people complain about scrapers before but i just don't get it, especially when they're not monetizing their own content in the first place...
now, i have no idea if the scraper in question also scrapes this blog but i know there are others who do and it's never really bothered me... that's not because i'm a crank (though i know some people like to think so) or because i'm desperate for readers... it's because (perhaps counter-intuitively) i don't write content for the purposes of drawing people to my blog...
i write content for precisely 2 reasons - to get things off my chest and to share what i know - and neither of those things is negatively impacted by scrapers republishing my content... in fact, republishing actually helps me share what i know because it makes the information (rather than just links to my site) easier to find by virtue of being in more places...
and if they're selling ads around my content? even better, they have an extra monetary incentive to spread the word of kurt and the money doesn't come out of my pocket...
i'm not trying to monetize my content and i'm not trying to build a brand out of my content... i want people to learn and i have no interest in trying to control how or where they get the information i try to make available...
this isn't to say that other people with other motives shouldn't have the right to try and control such things about their own content, mind you, i just think trying to exert such control is silly...
so with that in mind, to all you scrapers out there - if you want to republish my content then you're free to do so... in fact it was one of the possible uses i had in mind when i decided which creative commons license to use... my only request (and it is just a request) is that you use the full content, not partial text feeds... i despise those abridged post summaries as they ruin the usability of the message (far more than my wacky writing style does)...
Tags:
administrivia,
rich mogull