Monday, June 30, 2008

suggested reading

- TaoSecurity: What Would Galileo Think
i don't know if i agree with it yet or not, but it's interesting nonetheless to think about whether you can arrive at (or approach) a secure setup by experimental procedure... my gut says it'll only lead to resistance against whatever attacks happen to be current... in other words i suspect it would be weak against real novelty (and we already know something that's weak against novelty, don't we)...

- ThreatExpert Blog: New Rustock Switches to Hotmail
in answer to the rising use of traffic monitoring to detect malware, malware authors will start making their malware operate the same way you do - and this webmail-using spambot is an excellent example...

- Schneier on Security: Ransomware
i often think bruce should stick to his strengths when he starts talking malware, but in this case he's got it bang on... ransomware should be a non-issue - when recovery is as easy as restoring from backups, why give it so much special attention?

- Jeremiah Grossman: Why most WAFs do not block
jeremiah brings us an interesting quote from dan geer concerning default-permit/default-deny along with a discussion of its implications for webappsec... the quote itself is perfect, though, and i suspect it applies to just about every branch of security... it certainly has some strong implications for application whitelisting...

- T2W --> Trojan to Worm - PandaLabs
and people think worms and viruses are going extinct... they aren't, they're just a feature that's gone out of fashion like stealth did for a while - and like stealth they'll come back into style at some point and tools like this will help that happen...

- and I say we are detecting between 400,000 and 10,000,000 malware! - McAfee Avert Labs Blog
an excellent post on counting malware threats - the take-away is that the most bloated numbers are those based on samples rather than variants or families, due to having multiple copies of what is for all intents and purposes the same threat...

- R.I.P. CISSP | tssci security
anyone recall me forecasting the end of security experts? i doubt i'll make any friends by saying so, but this is a symptom/manifestation of what i was talking about before...

- Errata Security: Apple malware
0-day or 1-day exploits for the mac in the wild... neither alternative is good, and these are things more mac users really need to pay attention to - too bad most have been trained not to... some have made a point of saying that despite vendors reporting it to be in the wild there's no evidence that it actually is - you have to congratulate such people on knowing more than the folks whose business it is to know these things...

- Another way of restoring files after a Gpcode attack
hahaha, after all these years the folks making gpcode still haven't figured out how to implement a cryptosystem properly - the ability to use plaintext/ciphertext pairs to decrypt other ciphertext tells me they don't understand stream ciphers like rc4 at all... apart from the various steps one needs to follow to use rc4 safely, they might want to consider that unless they're encrypting an actual stream there really isn't much reason to use a stream cipher...
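to illustrate the kind of mistake being mocked here, below is a minimal sketch (toy key and messages, everything hypothetical) of keystream reuse with rc4 - since ciphertext is just plaintext XORed with keystream, a single known plaintext/ciphertext pair hands over the keystream needed to decrypt anything else encrypted the same way:

```python
# toy demonstration of stream cipher keystream reuse - NOT how to use rc4 safely

def rc4_keystream(key: bytes, length: int) -> bytes:
    """generate `length` bytes of rc4 keystream for `key`."""
    s = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    out, i, j = [], 0, 0
    for _ in range(length):                   # pseudo-random generation
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(s[(s[i] + s[j]) % 256])
    return bytes(out)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"same key reused for every file"       # the mistake in question
known_plain = b"a file whose contents we already know"
known_cipher = xor(known_plain, rc4_keystream(key, len(known_plain)))
secret = b"a victim's encrypted document"
other_cipher = xor(secret, rc4_keystream(key, len(secret)))

# no key recovery needed: the known pair leaks the keystream directly
keystream = xor(known_plain, known_cipher)
print(xor(other_cipher, keystream))           # decrypts the other file
```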
Tags:
suggested reading
Tuesday, June 24, 2008
that sucking sound ain't the av industry
eva chen of trend micro (yes, i'm sensing a trend too) was recently quoted saying the following:
“For me for the last three years I’ve been feeling that the anti-virus industry sucks. If you have 5.5 million new viruses out there how can you claim this industry is doing the right job?”
let that sink in a little...
are you surprised? i know i am... i haven't been this surprised since symantec's john thompson said the virus problem was solved, and yes i'm surprised for essentially the same reason this time as last time - what an incredibly stupid thing for someone in the av industry to say...
the number of new viruses has everything to do with the people creating them and nothing to do with the av industry... the industry is not now, nor has it ever been nor will it ever be in the business of preventing their creation... i'm having difficulty imagining what kind of orwellian world we'd need to be living in for any group to be capable of preventing people from creating malicious software... maybe eva knows something i don't, but as far as i'm aware the industry's business is preventing their customers from getting infected, not preventing the creation of new viruses...
Tags:
anti-virus,
eva chen,
trend micro
trend micro boycott
while i'm talking about trend micro i might as well mention this post advocating a boycott of their products over their patent infringement suit against barracuda networks...
apparently some have the idea that trend is trying to attack clamav and its users through this suit, but i'm sure nothing could be further from the truth...

the fact is that clamav has been openly derided in the av community as inferior technology for years... nobody bothered trying to sue clamav when it was on its own, and nobody tried when they were gobbled up by sourcefire... it would have been pointless and it would have lent credibility to a technology that has none...
the suit has nothing to do with the scanner technology and everything to do with the application of that technology... if trend has a valid patent (and since they've sued more mainstream av vendors over the same patent it seems like it might be) then people who make products using techniques covered by that patent need to work out a deal with trend... that's just the way patents work... i'm no intellectual property maximalist (heck, i classify DRM as a type of malware) but i recognize the law of the land when i see it...
barracuda isn't being singled out, clamav isn't the focus of some conspiracy by trend, other av vendors have had to deal with exactly the same intellectual property issue and somehow managed to resolve it... barracuda needs to do so as well, and some open source zealots need to get over themselves...
Tags:
barracuda,
clamav,
trend micro
trend micro and cloud watching
last thursday mike rothman wrote the following:
Is traditional signature-based AV dead? It's definitely on life-support, as Trend announces a cloud-based something or other. Will it work? Who knows, but clearly the sacred cow of AV will be served for dinner sooner rather than later.

- Trend Micro release
if you're like mike and think this has anything to do with moving away from signature-based technology then you need to read that release a second time... or a third time... or an nth time - it doesn't really matter how many times you've read it before you need to read it again, especially the following passage:
By storing the majority of pattern files in an Internet cloud database and keeping them at a minimum on the endpoint, Trend Micro helps stop Web, file and email threats before they reach the end-user or the corporate network. This new approach lightens bandwidth consumption on customers’ networks and endpoints and provides faster and more comprehensive up-to-date protection.
in other words, it's still signature-based, they're just putting the signatures in the cloud rather than the endpoint now... it's not your grandmother's scanner but it's still signature-based av...
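to make the distinction concrete, here's a minimal sketch (hypothetical hashes and function names throughout) of the two models - note that either way a detection is still a match against a known-bad signature, the only thing that changes is where the signature database lives:

```python
import hashlib

# endpoint model: the signature set is shipped to the client in updates
LOCAL_SIGNATURES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def cloud_lookup(digest: str) -> bool:
    """stand-in for a query against the vendor's 'in-the-cloud' database;
    a real client would make a network request here instead."""
    cloud_signatures = {
        "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
    }
    return digest in cloud_signatures

def is_known_malware(path: str) -> bool:
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    # the same signature-based test either way - only the database moved
    return digest in LOCAL_SIGNATURES or cloud_lookup(digest)
```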
one of the benefits of letting this post stew a little is that i've gotten to see what others have said about the subject... as a not-so-accurate barometer of av innovation, amrit williams thinks it's the most innovative thing to come out of the av industry in the past decade but alan shimel rightly points out that it sounds a lot like something panda has been doing for some time now... i would add that both are conceptually reminiscent of the digital immune system developed by ibm (starting more than a decade ago) and sold to symantec (and we know what symantec does with the things it buys, or rather we don't)... it would be hard to imagine that either trend or panda came up with their 'in-the-cloud' architectures without taking a few lessons away from this progenitor and so i wonder if we can really say this innovation came about within the past decade at all...
the failure-focused mindset
schneier wrote some time ago about what he calls the security mindset, in which he uses a rather broad brush to paint a rather unflattering picture of engineers...
schneier is certainly no engineer if he thinks engineers don't think about failure... although i don't have a degree in engineering (nor the iron ring one receives as part of the ritual of the calling of an engineer) i was in the engineering program at uoft for a few years and one of the things they drilled into our heads (besides the mass/energy balance and the fact that you can't push a rope) is that people die when engineers don't think about failure... buildings collapse, bridges crumble, dams break, gas tanks explode, planes fall out of the sky, etc...
some of the comments on schneier's blog post point out that engineers actually do think about failure, though some of the commenters seem to think engineers only think about the consequences of failure - this is wrong... they think about the conditions under which a failure can occur first, before they think about the possible consequences... knowing those conditions makes knowing how to cause the failure straightforward, maybe even trivial depending on the circumstances, because all you need to do is figure out how to produce those conditions...

some also think engineers are only concerned with natural failure, rather than failures caused by intelligent attackers, and cite things like buildings as examples (i.e. building engineers are concerned with the force that wind can exert on the outside of a building but not the force a person can exert on the outside of a building)... the truth is there's not an awful lot a person can do to a building to cause it to fail, and those things a person can do (blow it up?) can't easily (or cost-effectively) be addressed through engineering... there are, however, plenty of things where a person can cause a failure, and the engineers who deal with those things do consider those types of failures...

now, i'm willing to accept that the failure-focused mindset isn't natural for a lot of people, but singling out groups and saying "it's not natural for these people, it's not what they do" is ridiculous - worse still, to use that brush to paint a group whose failure-focused mindset has saved all of our lives dozens of times over (if not more) is an insult... the failure-focused mindset may not be common, but it's not rare enough to make you special for having it either... anyone who thinks otherwise needs to get over themselves...
Tags:
bruce schneier,
security
debunking the mythology of whitelist practicality
while i don't normally listen to podcasts, it does occasionally happen and the risky business podcast episode 66 mentioned on the tenable security blog was one of those times... one of the topics discussed was the practicality of whitelists over blacklists and it amazed me (again) that people actually think this way...
why, when the number of developers making good software far outnumbers those making bad software, do people insist on believing that there's more bad software than good and that it's easier for vendors to keep track of good software than to keep track of bad software?

it's a pretty popular belief these days that it's not practical to keep track of all bad software anymore and that vendors should be keeping track of the good software instead because that's somehow more practical, but that belief starts to look a little ridiculous when you consider the origins of the good and bad software in the world... just like most people in the world are actually good people (police states would be a necessity otherwise), most programmers are good people too, so they're not writing malware... if most of the programmers in the world are writing good software rather than malware then it stands to reason that the production of good software outpaces the production of malware, and since it has always been this way good software should also outnumber malware...

as such, good software far outnumbers malicious software and is produced at a faster pace than malicious software... however big the set of malicious software seems and however fast it seems to be growing, you need to ask yourself how much more aware you are of those stats for malware than for good software (a lot less attention is paid to those figures for good software)... i've mentioned before (and i'll probably mention again) that bit9 actually has some figures on both the total number and rate of production of good software, and it's shocking the degree to which they dwarf the same measures for malicious software... billions of good programs while there were still fewer than a million malicious ones, and millions more good programs produced each day while malicious software is still in the range of thousands for the same period...

for the average person this may not seem intuitive; indeed, how could microsoft alone produce 500,000 new files each day - they certainly don't have that many products... the reason for the discrepancy is at least 3-fold... 1) the average person doesn't understand how many different things actually qualify as programs and would need to be kept track of if vendors were to supply whitelists, 2) the average person doesn't realize how many programs go into a single product, and 3) the average person doesn't actually have any idea how many products a company like microsoft actually produces, because microsoft produces software for such disparate sets of people... you think ms word is just one program? it's not, it's many different programs that interoperate to give you the functionality and user experience you're used to... if it were a single program there would be little or no need to install it, you could just run it as a standalone application... the same holds for excel, and powerpoint, and outlook, and so on and so forth... do you think the hundreds (if not thousands) of megabytes that windows takes up is all because of data? what data does an operating system need? it's mostly programs...

a common refrain these days is that blacklisting just isn't working, but the problem with common notions is that they're often oversimplified... blacklisting just isn't working well enough all on its own... it is a challenge to keep up with the malware production rate, so just imagine how much more of a challenge it is to keep up with the good software production rate... sure, whitelisting companies like bit9 seem to be able to do it, but you wanna know how? by using the same blacklists people think are failing in order to determine what's safe to put on their whitelist... it shouldn't take a rocket scientist to figure out that such a whitelist will be no more accurate than the blacklist it's based on - anything the blacklist misses will get onto the whitelist, and then what will you do?
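to spell that last point out, here's a minimal sketch (file names entirely hypothetical) of a whitelist populated by admitting whatever a blacklist fails to flag:

```python
# an incomplete blacklist, as all blacklists inevitably are
KNOWN_BAD = {"old_worm.exe", "ancient_trojan.exe"}

def build_whitelist(candidate_files):
    """admit everything the blacklist doesn't flag - the approach in question."""
    return {f for f in candidate_files if f not in KNOWN_BAD}

incoming = {"notepad.exe", "old_worm.exe", "brand_new_trojan.exe"}
whitelist = build_whitelist(incoming)

# the blacklist's false negative is now a trusted, whitelisted application -
# the whitelist can be no more accurate than the blacklist it was derived from
assert "brand_new_trojan.exe" in whitelist
assert "notepad.exe" in whitelist
```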
microsoft patents sandboxed behavioural profiling?
so microsoft is trying to patent running suspect samples in a virtual environment and using behavioural analysis to determine if the sample is malware...
can we say prior art? isn't this what norman's sandbox technology has been doing since (i think) the previous millennium?
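for what it's worth, whoever can rightly claim the idea, the technique itself reduces to something like this minimal sketch (rule names and weights entirely made up) - run the sample in an emulated environment, log its behaviours, and score the log against heuristic rules:

```python
# entirely hypothetical heuristic weights for behaviours logged during emulation
SUSPICION_WEIGHTS = {
    "writes_to_system_dir": 3,
    "sets_autorun_key": 2,      # persistence via registry
    "opens_smtp_socket": 2,     # mass-mailer behaviour
    "hooks_keyboard": 3,        # keylogger behaviour
}

def classify(observed_behaviours, threshold=5):
    """score behaviours observed while the sample ran in the sandbox;
    flag the sample once the total crosses the threshold."""
    score = sum(SUSPICION_WEIGHTS.get(b, 0) for b in observed_behaviours)
    return "flag as malware" if score >= threshold else "inconclusive"

print(classify({"writes_to_system_dir", "sets_autorun_key", "opens_smtp_socket"}))
```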
Tags:
behavioural analysis,
microsoft,
norman,
sandbox
learning from the past
saw this post on the agnitum blog about how an older version of their technology was being detected as a stealthkit... basically they were keeping integrity information about files that had been previously scanned to optimize future scanning (no need to scan a file that hasn't changed) and hiding that integrity information using all too familiar means...
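the optimization itself is unremarkable - something like this minimal sketch (cache format hypothetical)... the contentious part, as you'll see below, is where that integrity information lives and how it gets protected:

```python
import hashlib

def file_digest(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def needs_rescan(path: str, cache: dict) -> bool:
    """skip files whose contents haven't changed since the last scan;
    `cache` maps paths to the digest recorded at that scan."""
    digest = file_digest(path)
    if cache.get(path) == digest:
        return False            # unchanged - no need to scan again
    cache[path] = digest        # new or modified - scan it and remember
    return True
```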
it reminded me of two things - the first is yisrael radai's nearly 15-year-old paper on integrity checking (due to his conclusion that to truly protect integrity information from attack it must be stored offline) and the second is the witch hunt that resulted from the misguided redefinition of rootkit to be anything that hides things (which has already dinged other security vendors - especially kaspersky, for remarkably similar reasons)...
as the saying goes - those who cannot remember the past are condemned to repeat it...
Tags:
agnitum,
stealthkit
clearing the backlog
well folks, my drafts folder runneth over, so i'm going to try and post all (or as many as i can) of the things in it just to get them off my mind and off my chest... please bear with me, some of this stuff is going to be pretty old and not necessarily fleshed out to the extent that my normal posts are...

and to start off i'm going to do something completely unlike my normal posts and deal with some emails i received some months back... my apologies for not dealing with these sooner - i put them to the side while i figured out how i should deal with reader emails and then never got back to them... in retrospect, i imagine that when people send me emails pointing me to stories or blog posts they're probably doing so because they'd like to see my reaction, so to show my belated appreciation for those who cared enough to send such emails i'm going to do a postbag type of post and give some sort of response here...
the first email came from joe hepperle and concerned a page on the web (i won't link to it but i'm sure anyone with even the slightest bit of google-fu can find it) that seems to suggest i'm a pervert... no, it's not true, i'm not a pervert... those who know me well know that i'm about as far from being a pervert as one can be (at least for now, perhaps i'll become more balanced when i get older), and certainly far from the sexual predators the page in question compares me to... i was actually already aware of the page in question as the notorious usenet troll who created it has posted the link in various newsgroups on more than one occasion... as counterpoint to the page i would direct the curious to check out some google searches on the 2 most common variations of his/her pseudonym (pcbutts, pcbutts1) though i would warn against image searching as you may run across images hosted on the troll's domain which are not only not safe for work, they aren't safe for anywhere (that which is seen cannot be unseen)...
the second email comes from andreas clementi, pointing me to an internet storm center blog entry about a keylogger called 'tiny keylogger 2.0' being missed by av products (i did mention these emails were old, right?)... my abbreviated reaction is similar to a commenter's on the same entry - the keylogger in question is probably greyware and the vendors have either chosen not to bother with it or require you to enable their greyware detection capabilities... on reading the feature list for this keylogger i would tend to lean toward the former, because i find it hard to believe something so lame could be a credible threat to anyone... maybe if it were combined with a RAT in some ad hoc multi-stage attack...
the third email comes from james manning and concerns an article singing the praises of whitelisting over blacklisting... it didn't really seem to me that the article was saying anything that hadn't been said before or that i haven't countered before... in retrospect, however, it should be noted that while the article claims whitelists don't require virus or spyware definition updates, they do require goodware definition updates (basically updates to the whitelist)... furthermore, while i've mentioned before that good software is far more numerous than bad and produced at a far faster rate than bad (thus leading to larger and faster growing signature databases), it turns out that whitelisting companies have a tendency to use blacklist software to keep the bad stuff out of their whitelists... a whitelist based on the complement of a blacklist can be no more accurate than that same blacklist...
the fourth email is also from james and concerns a bit of snake-oil in the form of comodo's promise of a worry-free malware-free pc... it's actually more like a money-back guarantee because comodo were going to cover the cost of recovery when (not if) their product failed to prevent a malware infestation, but either way they're telling the customer that using their product/service means not having to worry about malware anymore and if that's not selling a false sense of security i don't know what is...
the final email is from luke tan and concerns the compromise of both trend and avast sites... i really ought to have posted a heads-up/PSA when he sent me that email (as i have with the others he's sent me), i'm not sure why i didn't (though if memory serves the trend incident was well publicized)... he also asks whether security companies should be embarrassed by such incidents... i don't think they should be, at least not any more than any other company, unless they specialize in web security... security is too big a field to expect anyone or any company to do all parts of it perfectly...
Tags:
postbag
Thursday, June 12, 2008
no such thing as trusted sites anymore
darn, rich mogull beat me to the publish button - that will teach me to put work and home repair first... i think i'm going to post what i had anyways, though, because mine isn't exactly the same and frankly, when it comes to principles that i anticipate needing to hammer home repeatedly, it's nice to have them all in one place... also, actual incidents like yahoo mail serving malware and not cleaning up promptly drive this point home better than xss vulnerabilities found on security vendors' sites; i'm a security blogger and even i rarely visit security vendor sites, so i don't imagine the average person does much either - yahoo mail, on the other hand, gets LOTS of traffic from average folks... so here's what i had in my draft folder, with the addition of some links i was waiting to find time to look up...

once upon a time there used to be this piece of advice about online security that said don't go to dodgy sites and you should be just fine... the principle behind it is that you won't get compromised by malicious web content if you only ever run trustworthy web content, and if you only ever go to trustworthy sites then trustworthy web content should be all your browser is exposed to...
internet explorer's security zone model has this very principle in mind, some sites are trustworthy and some aren't and those that aren't don't get to take advantage of as much rich web-based functionality as those that are...
even the mighty noscript firefox plugin depends on this basic premise to protect those firefox users who use noscript (in fact, adding a site to noscript's whitelist is in many ways the same as adding a site to ie's trusted sites zone - only easier, more convenient, and more intuitive)...
the principle makes sense, and the advice (even in the absence of the technologies that try to make it automatic) has been one of the more successful bits of security know-how at gaining widespread adoption... unfortunately the principle is falling apart because malicious web content is increasingly finding its way onto what would otherwise be considered trustworthy sites... dancho danchev often informs his readers of instances of web sites being directly compromised to serve malware, and sandi hardmeier regularly informs her readers of instances of sites serving malware indirectly by virtue of malvertizements (malicious advertisements) infiltrating the 3rd party ad networks the site owners use...

when (as this zdnet article suggests) such compromises are up 400% over last year, and when affected sites include such well known internet properties as yahoo mail, cnn (among others), or the superbowl, it raises the question "is there any such thing as trusted sites anymore?" and the answer, i think, has to be either "no" or "not for much longer"...

now, of course the efficacy of tools like noscript isn't quite as affected as a merely careful internet user would be, since the tools can look at where the content is coming from rather than just what the current site is, but it really makes you wonder about those oh-so-clever users who go around not using any anti-malware software and thinking they're fine because they don't go to dodgy sites...
prudence alone isn't really enough anymore, you need good tools to help control what web content is allowed to run (i.e. some kind of whitelist like noscript), in what environment it runs (i.e. some kind of web sandbox whether it's multiple browsers/browser profiles, or sandboxing software like sandboxie, or even a full virtual machine like vmware), and to detect when something slips through the cracks (i.e. a scanner, preferably one that implements an lsp) to help prevent it from stealing data you enter and/or using your browsing session as a staging point for an attack on other things on your network (like your router) or the internet at large...
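to see why judging content by its origin matters, here's a minimal sketch (trusted list and urls hypothetical) of the noscript-style check described above:

```python
from urllib.parse import urlparse

TRUSTED_ORIGINS = {"mail.yahoo.com"}   # sites the user has chosen to trust

def allow_script(script_url: str) -> bool:
    """judge the script by where it's served from, not by the site
    whose page happens to embed it."""
    return urlparse(script_url).hostname in TRUSTED_ORIGINS

# a trusted page can still serve a malvertizement from a 3rd party ad network,
# but origin-based checking catches the foreign script anyway:
print(allow_script("http://mail.yahoo.com/static/inbox.js"))       # True
print(allow_script("http://shady-ad-network.example/payload.js"))  # False
```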
Saturday, June 07, 2008
stop trying to decrypt your data
according to a new network world article, gpcode is back... for those who don't remember, it's a piece of malware whose payload encrypts your data and holds it for ransom... you have to either pay the bad guys to get the decryption key or hope that the av vendors figure out how to crack it (and probably pay them for their product so you can use it to decrypt your data)...
this is all misdirected effort, though... for all intents and purposes this is a data corrupting payload - the fact that the transformation it performs on your data is reversible is a red herring meant to make people spin their wheels and eventually capitulate and give the bad guys what they want while distracting you from the fact that a data encryptor is no harder to recover from than a data destroyer...
you're prepared for drive failures, right? you have a plan for when something comes along and hoses your data irrecoverably, right? of course you do, they're called backups... those same backups work equally well against maliciously encrypted data as they do against maliciously or accidentally destroyed data so the question i have for those who are concerned about this new version of gpcode is "what's the big deal?"...
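in code terms the point looks something like this minimal sketch (paths hypothetical) - notice that nothing in the recovery depends on how the file was lost, so the attacker's key never enters into it:

```python
import os
import shutil

BACKUP_DIR = "/mnt/backup"   # hypothetical location of your existing backups

def restore(damaged_path: str) -> None:
    """recover a file that was encrypted, corrupted, or deleted - the
    procedure is identical in all three cases: copy the backup back."""
    backup_copy = os.path.join(BACKUP_DIR, os.path.basename(damaged_path))
    shutil.copy2(backup_copy, damaged_path)
```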
and for those working on a way to crack this thing my question is "why bother?"... for all you know the malware writers screwed up their crypto code again but this time in such a way that the data actually is unrecoverable... and if they didn't screw it up then you'll pile huge amounts of effort into cracking the key and either fail or succeed and force the malware writer to take the rather trivial step of creating a new key and releasing a new version of the malware... why bother making the distinction between this and a true data loss event for the user? if the user has backups they're fine and if they don't then this is the kind of event that they actually need in order to learn how important good backups really are... yes it would suck to be them but this is the real world and the real world has consequences that they need to know about rather than be sheltered from... consequences are what help us learn...
*non-update* (since i found out before i managed to post this article): apparently the folks at kaspersky are trying to organize a combined effort to crack the key... there's still nothing to convince me this isn't a waste of time but i should at least acknowledge the difference in opinion... then again, i suppose businesses have a tough time trying to get away with tough love, so i suppose it does make business sense for them to try and coddle their customers in this way...
Tags:
backup,
encryption,
gpcode,
kaspersky,
malware
Monday, June 02, 2008
suggested reading
changing the name of these posts due to a change in frequency... it won't be weekly anymore (that was starting to feel like noise), don't expect to see another until the end of the month...
- Schneier on Security: How to Sell Security
so apparently we prefer sure gains and taking a risk with our losses - an interesting discussion of the bias in our decision-making process...

- ThreatBlog » Blog Archive » The Race to Zero
yet another response to the race to zero (credibility) contest... randy nails it by pointing out that it's a script kiddie contest, and then nails it again by observing the lack of spine on the part of the organizers...

- ThreatBlog » Blog Archive » The AV Industry from the Outside In and the Inside Out
another great post from randy... i've said a number of the same things about the industry in the past, but i'm afraid i've never really been able to credibly relate this perspective on how the industry operates to others, in part because i'm not part of the industry... coming from me it would be second-hand, so go read it from someone who is part of the industry...
Tags:
suggested reading