an expert is someone that YOU feel has advanced knowledge or skills in a certain field...
the classification is an entirely subjective and personal matter... calling someone an expert is a sign of respect that one person (or group of people) can show another, however there are certain ways in which the label can lose its meaning...
one way the title of expert can lose its meaning is when the person holding the title gave him/herself that title... this is the self-proclaimed expert that most people recognize as not being a REAL expert, at least when they recognize that the title was self-given... you see, it's easy to identify a self-proclaimed expert when the person actually calls themselves an expert, but it can be much harder to identify them when they simply act like they're experts... one telltale sign of an implicit self-proclaimed expert is making claims and then failing to back them up in a reasonable and logical way... it's not always so clear, however, so the line between the implicit self-proclaimed expert and the person who is just confident that s/he is right can become quite blurred... that's one of the reasons why i occasionally take it upon myself to explicitly state that i am not an expert, so that people won't mistake me for a self-proclaimed (or even real) expert...
another way the title of expert can lose its meaning is when it's applied through the 'friend of a friend' approach... this is where person A says person B is an expert and, instead of taking that declaration with a grain of salt (as one should) and determining for yourself if person B really deserves to be called an expert, you simply take person A's word for it for whatever reason... this effectively gives person B respect (your respect) that person B hasn't earned - and respect is something that should definitely be earned, not just given away...
devising a framework for thinking about malware and related issues such as viruses, spyware, worms, rootkits, drm, trojans, botnets, keyloggers, droppers, downloaders, rats, adware, spam, stealth, fud, snake oil, and hype...
Tuesday, May 30, 2006
flame on dick morrell
dick morrell has been having a little email argument with someone via his blog here and here over mr. morrell's views on okopipi...
in the first one, rather than address the points made by the emailer, morrell laughs at the emailer for not being able to figure out how to jump through the hoops necessary to leave an actual comment on the blog... the hoops are there to prevent blog spam - i can understand why a so-called anti-spam expert would want to prevent blog spam - but the hoops are non-obvious, as i soon found out... it's not enough to enter your name, email address, webpage, comment, and captcha code in the comment entry webform like you would on most other blogs - no, for this one you have to hunt down the login link, create a user account, check your email for the password, go back and log in, and then enter your name, email address, webpage, comment, and captcha code... i went through all of those steps and then left a comment to the effect that perhaps the emailer failed to figure out the comment process because the comment process was non-obvious... there's nothing particularly unfriendly in pointing out that the user experience of his blog could use some work - heck, just putting a login link in the comment webform area would probably have made a huge improvement...
in the second one morrell pulls the old who-does-he-think-he's-talking-to routine, again instead of addressing the emailer's points... for this one i didn't pull any punches - i plainly stated that the emailer was talking to someone who was clearly too full of their own self-importance to do anything other than say "i'm an expert and i say it's like this"... i pointed out that just because he's supposedly an expert doesn't mean his claims don't need to be backed up, and then i called him on his support of gadi evron's FUD and his own smearing of the principals involved in okopipi...
and he deleted my comments... both of them, as it turns out... and what's more, he sent me email - not to the anonymous email address i entered (to avoid spam) in the comment webform, but rather to the disposable email address i had to use (to avoid spam) when creating a user account, in order to keep the password secret (i wouldn't want someone reading the email to the anonymous email address and posing as me)... the email went something like (but not exactly like, since unlike mr. morrell i follow the letter of rfc1855, so i'm paraphrasing here) this:
i've deleted your comments because you don't show the proper respect for me or gadi or any of the other people who work really hard to stop spammers the correct way.
i have a lot more experience than you or any of your team on . . .
ok, first of all respect is something you earn, not something that's given to you and certainly not something you take as an entitlement for being a so-called expert... second, who do you think you're talking to, dick? "my team"? what team? i'm not part of okopipi, i'm just someone who thinks you and gadi need to explain yourselves and support the claims you've made with logically valid arguments - just being an 'expert' isn't enough...
but wait, there's more... not long after he sent a second email that went a little like this:
i hope you don't think my deleting your comments breaches the freedom of speech i support.
the email address you use [place disposable email address here] says it all.
stand and be counted and have a reasonable discussion instead of acting like some guy sitting in his darkened room in front of a glowing computer monitor.
[sarcasm]no... deleting my comments doesn't breach freedom of speech at all...[/sarcasm]
if you really supported freedom of speech you wouldn't delete comments that were 'undesirable' to you... clearly if i want freedom of speech i'm going to have to take it thusly... and as for knocking my use of a disposable email address - are you for real? stand up and be counted my ass, it's a farking anti-spam measure and a real anti-spam expert should have respected that...
then again, a real anti-spam expert should probably have also thought twice about sending me unwanted email in the first place... i don't appreciate the out-of-band communication, dick - if you wanted to engage me in discussion you should have responded to my comments where i left them... i didn't enter an address into your blog's anti-spam system so that you could send me patronizing email that i don't want, i entered it because i was forced to, because it was the only way to leave a comment... you abused that information... it's a good thing i anticipated that and used a disposable address - consider that address disposed of...
Tags:
anti-spam,
dick morrell,
okopipi
Sunday, May 28, 2006
the merits of the blue security anti-spam approach
if you hire someone to send a million messages on your behalf, you better be prepared for a million responses...
that was the gist of blue security's blue frog anti-spam technology, and it's the approach that okopipi hopes to build upon... i posted about blue security going down for the count previously and i predicted that a group like okopipi would try to fill blue security's shoes (not through any sort of prescience, mind you, but rather just because it would fit an established pattern of human behaviour), but it seems okopipi have stirred up a hornet's nest of controversy in the process...
let's look at the criticisms... the one with the most technical merit concerns the anti-spam registry that the blue security approach had... essentially it was a list of hashes of email addresses that the spammers could use to remove blue frog users from their spam lists... the fact that email addresses were hashed (a non-reversible transformation) prevented spammers from finding any new addresses directly from the registry; however, it did allow them to identify which addresses in their own spam lists belonged to blue frog users, and that allowed them to retaliate against those users... there is no way, though, to tell spammers which email addresses to remove without identifying those email addresses - the only alternative is to not offer the spammers any kind of remediation process at all and say "sucks to be you", which clearly would have had a much worse chance of a productive outcome... a zero tolerance approach may be safer for the users, but it can't get their names removed from the spammers' lists - it's predicated on getting the spammers to give up their business entirely instead of simply adjusting their approach, and i suspect that nobody is that persuasive...
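the trade-off described above can be sketched in a few lines of python - a hypothetical hashed registry, where the addresses and the choice of sha-1 are purely illustrative (blue security's actual scheme may have differed):

```python
import hashlib

def hash_address(email):
    """Hash an email address the way a do-not-spam registry might:
    normalized, then run through a one-way hash so the raw address
    can't be recovered from the registry itself."""
    return hashlib.sha1(email.strip().lower().encode()).hexdigest()

# the registry publishes only hashes, never raw addresses
registry = {hash_address(a) for a in ["user@example.com", "frog@example.org"]}

def scrub_spam_list(spam_list, registry):
    """A spammer can honour the registry by dropping any address whose
    hash appears in it - but in doing so also learns which addresses
    already on the spam list belong to registry users."""
    return [a for a in spam_list if hash_address(a) not in registry]

clean = scrub_spam_list(["user@example.com", "victim@example.net"], registry)
# "user@example.com" is dropped; "victim@example.net" stays
```

note that the scrub only works on addresses the spammer already has - the registry leaks no new addresses, but it does reveal which existing ones belong to users, which is exactly the retaliation surface the criticism points at...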
another criticism is that the blue security approach could be used against innocent merchants... the idea was that if one sent out spam advertising a competitor, that competitor would then have to deal with a deluge of complaints they could do nothing about... this arose out of the more general concern about whether it's possible for blue frog to target the wrong site, and what happens then... the thing is, blue frog could only send complaints to sites that blue security enabled it to, and blue security took pains to confirm that those sites were appropriate places to lodge complaints... the process they followed can be reviewed here...
[edited to add this paragraph] still another criticism is that the sites are hosted on hacked machines and just move around from one hacked machine to another... that kind of thing can be detected, however, since blue security contacted the isp as well as the merchant site... also, a fly-by-night operation wouldn't have gone unnoticed given an examination period exceeding 10 days... if a site didn't stick around for at least a couple of days, blue security would have had no reason to develop a script to send complaints to it...
there's also a criticism that sending opt-out requests to the merchant sites constituted a DDoS... this is a rather ridiculous thing to say - each person who receives a spam has a right to complain about it, and since the spammer was merely acting as an agent of the merchant when s/he sent out the spam (and since spammers go to great lengths to not be reachable themselves), the merchant is the appropriate entity to address one's complaints to... at most one opt-out would be sent per spam the blue frog user received, and each opt-out was a response to an incoming spam message... while that may result in service disruptions for the merchant, it's no different than if each spam recipient manually went to the merchant's site and complained (and let's face it, the spam invites each recipient to visit the merchant's site)... the blue frog client automated the process of filing a complaint initiated by the user, nothing more...
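the one-opt-out-per-spam rule is the crux of the DDoS rebuttal, and it's simple enough to sketch - the class and message-id scheme below are hypothetical, just to show the constraint, not blue frog's actual implementation:

```python
class ComplaintClient:
    """Minimal sketch of the 'at most one opt-out per spam received'
    rule: a complaint can only be filed for a message the user actually
    received, and repeat complaints for the same message are suppressed."""

    def __init__(self):
        self.complained = set()  # message ids already complained about

    def file_complaint(self, spam_msg_id, merchant):
        if spam_msg_id in self.complained:
            return None  # never more than one opt-out per message
        self.complained.add(spam_msg_id)
        return f"opt-out request to {merchant} re message {spam_msg_id}"

client = ComplaintClient()
first = client.file_complaint("msg-001", "merchant.example")
second = client.file_complaint("msg-001", "merchant.example")  # suppressed
```

because the complaint volume is capped by the spam volume, the merchant can never receive more traffic than its own advertising campaign generated - which is the whole point of the argument above...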
an even more outlandish claim is that the blue frog clients installed on user machines constituted a botnet... a botnet is ultimately controlled by a central controller, a bot master... the blue frog clients were operated by the blue frog users themselves, not blue security - blue security just sent out updates to those clients (which included instructions on how to send the complaints but not instructions to actually send them)... these kinds of claims by so-called security experts are pure FUD... those spreading the FUD appear to be parroting the opinions of others and simply claim there is universal agreement rather than actually backing up their claims - that kind of argumentation is fallacious and hopefully more people will be able to see that now...
blue security's approach was designed in such a way that the easiest way to resolve the problem was to remove the specified addresses from their spam lists (and they had safeguards in place to prevent their system from being abused to hurt legitimate merchants) - unfortunately, while we humans often take the easiest way out, sometimes we don't, and in this case that manifested itself in a significant DDoS attack against blue security (bringing down their website and service) and its users (sending them orders of magnitude more junk mail than usual)... some spammers didn't like being told what to do and had the means to retaliate, and now blue security is no more... okopipi hopes to develop a similar system but one that is less prone to attack (though nothing is invulnerable)... i hope they employ equivalent safeguards against abuse, and if so, more power to them...
Tags:
anti-spam,
blue frog,
blue security,
okopipi,
spam
Friday, May 26, 2006
does µTorrent contain adware / spyware?
i recently became aware of some controversy over the possibility that the popular bittorrent client µTorrent contains adware and/or spyware...
there are a number of threads on the µTorrent forums about it, so i thought i'd use this as an object lesson in how to make these determinations for yourself - and as an added bonus, we're going to do it without even downloading the software...
we can do that by looking at what the software's author had to say about the issue here...
“Nothing is placed on the user's machine [when the NanoTorrent browser opens,]” Ludvid explains. “It's an advertisement inside the web browser only, the ad comes from a webserver owned by me, and it's removed when the window is closed. No cookies at all are installed, not even my own…The ads are generated by the script on the webserver. The µTorrent client as such does not contain any ads. They are generated by the webserver and shown through a php script to the webbrowser when the user searches.”

so it does in fact display ads - and not just by accident, the browser is directed to his own site to display ads hosted there... the fact that the ads are not contained in the bittorrent client is not really an issue - adware doesn't need to contain the ads it displays, it just needs to display ads... the fact that the client isn't actually displaying the ads in a window that's part of the client itself but instead uses a browser window is also not really an issue - plenty of adware uses browser windows to display ads (it saves them from having to reinvent the wheel)...
the interesting thing (at least to me) is that there is no 3rd party adware client software - the ad-serving functionality is built by the same person(s) who built the µTorrent client... it's an option i didn't cover in my post about software developers who go the adware route, but it's an important one nonetheless... if a software developer finds s/he needs ad revenue, creating the ad-serving engine in-house is an excellent way to avoid the pitfalls of potentially nefarious behaviour from 3rd party adware clients... a software developer has the potential to offer users a far more benign adware client going this route than by just accepting whatever some 3rd party hands them...
unfortunately that potential seems to be lost in this case... for one thing, the ad-supported nature is not disclosed on the front page of the site nor on the download page - probably because the author doesn't want to admit that it's adware (just because you make it yourself doesn't mean it's not adware)... potential users need to be informed about this sort of thing, they need to be able to make an informed decision about whether they want to install software that displays ads before they download and install it... not disclosing it (whether one is in denial or for some other reason) is a breach of good faith...
the other thing that negates its potential to be benign is the spyware issue...
The latest version of µTorrent, version 1.5, contains an integrated search feature. The end user can opt to search several of the major search engines, such as Mininova, ThePirateBay, TorrentSpy, and isoHunt. Once the search is conducted, an independent browser window is opened. Instead of going to the Mininova.org domain however, the browser is directed to NanoTorrent.com.

that basically tells me that my search queries are being sent somewhere other than the search engine they were intended for... oh, they get to their intended destination eventually, but not before going through a middle-man first... we're assured that the information is not collected or used in any way, but that's not the point (nor is it something we can verify)... spyware isn't spyware because the information it gives a 3rd party is abused, it's spyware because it gives a 3rd party (in this case nanotorrent.com, owned and operated by the same guy) your information (in this case your search queries)... personally, i would rather my search queries not go through middle-men - not because i search for things that i'd be ashamed to let others know about, but purely on the basis of principle - what i search for is between me and my search engine of choice...
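to make the middle-man point concrete, here's a sketch of what routing a search through an intermediary looks like - the endpoint path and parameter names are guesses for illustration only (the post doesn't document nanotorrent.com's actual interface), but the principle is the same either way:

```python
from urllib.parse import urlencode, parse_qs, urlparse

def build_search_url(query, engine="mininova.org"):
    """Sketch of a search routed through an intermediary: the query
    string is addressed to the middle-man's server, which then forwards
    the user on to the real engine. Path and parameter names here are
    hypothetical."""
    return "http://nanotorrent.com/search?" + urlencode(
        {"q": query, "engine": engine})

url = build_search_url("some search terms")
host = urlparse(url).netloc                    # who receives the request
query = parse_qs(urlparse(url).query)["q"][0]  # what they receive
```

the host the query travels to is the intermediary, not the search engine - so the intermediary sees every query in full regardless of what it promises to do with it, which is exactly why the "we don't collect it" assurance misses the point...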
Tags:
adware,
bittorrent,
malware,
spyware,
µTorrent
Wednesday, May 17, 2006
the future of blue security
i know it sounds like a strange subject to blog about, now that blue security has given up... the thing is, this is the internet - and i've been on the internet long enough to know that neither good ideas nor bad ideas ever happen just once... everything gets repeated - over and over and over and over and over again...
so the spammers knocked the company off the face of the web with a DDoS (distributed denial of service) attack... the very fact that they mounted such a counter attack is an indication that the technique blue security was using was working... lashing out the way they did is a sign of weakness, not strength...
unfortunately (for the spammers) the spammers are apparently unfamiliar with the streisand effect... their actions have served to advertise their vulnerability and all the people who thought blue security's idea was a good one and a bunch of new people who hadn't even heard of it before are now going to recognize that vulnerability for what it is and put that information to use...
you see, you might be able to kill a commercial venture like blue security through force and intimidation, but you can't kill an idea quite so easily... you remember what happened when the original napster went down? hundreds of knock-offs popped up in its place, and peer-to-peer filesharing has been an unstoppable hydra ever since...
it's likely that something similar will happen here - the same cycle of analyzing past efforts, identifying points of weakness, and innovating to overcome those weaknesses that continues on the p2p front to this very day...
and what about that one russian spammer threatening to take down the entire internet if he can't send his spam? well, let's just say that there are all kinds on the internet, and i'm not naive enough to think that there aren't some people out there so fed up with spam as to be willing to endure the internet version of scorched earth in order to effect a final solution to the spam problem...
blue security may be gone, but i don't think the story is over... not by a long shot... the spammers clearly won this battle, but they may have just lost the war...
Tags:
anti-spam,
blue frog,
blue security,
spam
Tuesday, May 16, 2006
mcafee-siteadvisor says what about search engine results?
perhaps you've heard - search engines lead users to dangerous content... actually, that's what siteadvisor (a company and technology recently purchased by mcafee) is saying...
siteadvisor, as you may or may not know, provides technology to tell you which of the search engine results on the result page are safe and which ones aren't... they've basically come up with a list of known bad sites and they mark up the search engine results to show you what's bad and what isn't... why they only do this for search engine results i don't know - search engine result pages are not the only pages that link to other pages, so it should be possible to mark up all web pages to indicate link safety...
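the basic idea is easy to sketch... a minimal, hypothetical version of the markup step might look like this (the site names, blacklist contents, and annotation format are all invented for illustration - this is not siteadvisor's actual implementation):

```python
# hypothetical sketch of blacklist-based link markup...
# BAD_SITES and the [BAD]/[ok] annotations are invented for illustration
from urllib.parse import urlparse

BAD_SITES = {"malware-example.test", "spyware-example.test"}  # assumed blacklist

def rate_link(url):
    """annotate a link according to whether its host is on the blacklist."""
    host = urlparse(url).hostname or ""
    verdict = "BAD" if host in BAD_SITES else "ok"
    return f"{url} [{verdict}]"

def mark_up_links(urls):
    """works on links from ANY page, not just search engine result pages."""
    return [rate_link(u) for u in urls]
```

note that nothing in the sketch cares whether the links came from a search engine result page or any other page - which is rather the point...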
at any rate, singling out search engines this way is FUD... the implication is (and this is borne out by all the alarmist revisionist headlines on articles talking about this siteadvisor study) that search engines aren't safe... search engines just point to web pages, just like other web pages point to web pages... it's not search engines that are unsafe, it's the web in general - the same malware infested pages you can encounter using a search engine you can also encounter by following links on other pages (which is necessarily true in order for the unsafe pages to get a high enough page rank to be in the first 5 pages of google results)... search engines just point to the rest of the web - their results are an organized reflection of the web pages that exist...
but what is the siteadvisor study really saying? well, that their technology for detecting unsafe results returned from search engines is detecting unsafe results being returned from search engines... thanks, siteadvisor, for something very close to a tautology, and congratulations on validating your raison d'etre... but y'know - it's not unlike when experts on internet addiction do studies that show internet addiction is a growing problem - whether it's true or not, it's completely self-serving...
mcafee-siteadvisor are basically tooting their own horn trying to get attention, and i think i know what kind of attention they're hoping for... you see, back in the day it used to be a pretty big deal to be the anti-virus company that microsoft used in-house... it was a badge of honour, a matter of prestige and great PR... in today's world the internet (rather than the PC) is becoming the new platform - the web represents the file system, the search engine index represents the file allocation table, and the search engine itself is analogous to the OS... how prestigious would it be to be the scanning technology licensed by google or yahoo or msn to clear the bad pages out of their indices? mcafee-siteadvisor doesn't even have to try hard, they're the only game in town, they just have to hype things up a bit and raise some awareness so as to generate market pressure on the search engines to bite (thus the focus on search engines being unsafe)...
3.1% of organic search results and 8.5% of sponsored search results are bad - yup, that should work... not too outrageous, not too easy to ignore, it sounds 'just right'...
Tags:
fud,
mcafee,
siteadvisor
Monday, May 15, 2006
pro-active vs. reactive technologies and techniques
we've all heard the rhetoric... known virus/malware scanning is reactive rather than pro-active - it's essentially a dead technology... we need pro-active technologies to deal with today's threats... pro-active technologies that look for virus/malware-like behaviour...
if you're like most security lemmings you're probably nodding your head in agreement at this point so i'm going to have to debunk some myths...
is known virus/malware scanning (more generally, blacklisting) reactive? developing known virus/malware scanners is certainly reactive since you have to wait for the virus or malware to actually exist before you can write a routine to identify it - so it's reactive in the scope of developing a technology for global consumption... at the local scope, the end user's machine, the application of a blacklist is a preventative measure - it stops the malware it's able to stop before the malware can activate, before the virus can infect anything, before sensitive data is compromised... that is the very definition of pro-active...
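to make the local-scope argument concrete, here's a minimal sketch of what the use of a blacklist looks like on the end user's machine (the hash database is hypothetical, and real scanners match code patterns rather than whole-file hashes - this is just to show where the check sits relative to execution):

```python
# minimal sketch of the local-scope use of a blacklist... the check runs
# BEFORE execution is allowed, which is what makes it preventative...
# the signature database here is hypothetical - real scanners match code
# patterns, not just whole-file hashes
import hashlib

KNOWN_BAD_HASHES = {
    # example entry only (this happens to be the sha-256 of empty input)
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def allow_execution(file_bytes: bytes) -> bool:
    """on-access check: refuse to run anything on the blacklist."""
    return sha256_of(file_bytes) not in KNOWN_BAD_HASHES
```

the key detail is that the check happens before the file ever gets to run - at this scope, detection is prevention...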
is behavioural virus/malware detection pro-active? developing the technology is certainly pro-active since you can write a routine to detect anything that performs behaviour X before most of the things that actually do perform behaviour X are even written - so it's pro-active in the global scope... at the local scope, however, the application of behavioural monitoring software is reactive by definition - think about it; the malware has to run, it has to become active, it has to try something naughty before the behavioural monitor can do anything... it reacts to bad behaviour from software... it's not prevention if it kicks in after...
let's look at some other technologies... take the application whitelist, the other preventative technology... its development is pro-active since it can address malware before the malware is ever written... its use is also pro-active as it stops the execution of any software that isn't known to be good... maybe this is the real champion of pro-active technologies - but wait: cataloging all good things is even more unmanageable than cataloging all bad things (blacklist) so a vendor supplied whitelist isn't such a great option... that means the real work, deciding what goes on the list and what doesn't, is left to the end user - that's a recipe for failure... oh, don't get me wrong, it's still a valuable tool, and software firewalls (implementing network connection whitelists) have shown us that it can work pretty well, but deciding what to trust and what not to trust (and thus what to add to the whitelist and what not to add) is a problem we already know end users aren't good at so some failures are going to happen...
how about change detection? its development is pro-active too - all malware changes something, so as long as your change detector can monitor that particular something for changes, you can detect those changes even if they're made by malware that wasn't even thought of when the change detector was developed... unfortunately, in use change detection is reactive - it detects the changes after they have been made... then too, the work of deciding which changes are ok and which changes represent malware activity is largely left up to the end user...
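a toy change detector makes the reactive nature obvious... this sketch (the snapshot/diff design is illustrative, not any particular product) records a baseline of file hashes and later reports what differs:

```python
# sketch of change detection: snapshot a baseline of file hashes, then
# later report anything added, removed, or modified... note the reactive
# nature - changes are only seen AFTER they've been made
import hashlib
import os

def snapshot(root):
    """map each file path under root to a hash of its contents."""
    baseline = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                baseline[path] = hashlib.sha256(f.read()).hexdigest()
    return baseline

def detect_changes(old, new):
    """diff two snapshots; judging the changes is still left to the user."""
    added    = [p for p in new if p not in old]
    removed  = [p for p in old if p not in new]
    modified = [p for p in new if p in old and new[p] != old[p]]
    return added, removed, modified
```

by the time detect_changes has anything to report, the changes have already happened - and it's still up to the user to decide which of them represent malware activity...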
now, i don't think anyone would argue that prevention isn't the preferred outcome... with prevention there is no clean-up, with prevention there is no lost data, with prevention there are no bank account passwords or credit card numbers to change - the alternatives to prevention are much messier... does prevention happen at the global scope of things? does simply creating the technology stop the malware? no, of course not... prevention happens at the end points, at the local scope, where the techniques actually get put to use... it is in that scope that 'pro-active or reactive' should be determined - the conventional wisdom on this matter is entirely backwards...
further, it needs to be realized that the more of the difficult task of figuring out what is trustworthy and what isn't you can push back onto the vendors, the better... security works best when decisions are made by informed users, so the more relevant information the security software can give them the better - and the vendors are in a much better position than end users to come up with and disseminate that information...
so is known virus/malware scanning really dead? no... in fact it is the cleanest and most cost effective technique that exists for dealing with malware... it does fail, but all preventative measures fail, that's the nature of things... that's why reactive techniques like behaviour monitoring and change detection exist, to help detect when preventative measures fail... the idea that scanning should be scrapped in favour of behaviour based detection systems is entirely wrong-headed; they should be used in conjunction with each other, they complement each other, they constitute defense in-depth... all of the above mentioned techniques have their place in a multi-layered anti-virus/anti-malware strategy...
Tags:
anti-malware,
anti-virus
Sunday, May 14, 2006
what is a hybrid / blended threat?
a malware hybrid is a combination of 2 or more types of malware... for example, osx/leap.a is an instant messaging worm and a type of executable file infecting virus known as an overwriting infector...
although it isn't generally well known, a piece of malware can be a virus and a worm and a rat and a rootkit and any number of other malware types all at the same time - the various malware types are not mutually exclusive in any way... anti-malware vendors (anti-virus vendors in particular) don't generally do a great deal to make this obvious to the general computer-using public, often preferring to treat one type as taking precedence over the others... occasionally one may see a write-up that lists something as a "spyware worm" or the like, but generally not...
this may be one of the more detrimental things that the industry practices because it misrepresents the breadth and scope of the threat that a particular pigeon-holed piece of malware poses... no malware type is an island unto itself, they can all be combined with one another and that is an important point to remember when dealing with the issue of what type of malware something is...
another (better known) term for this, at least the way some people (like kaspersky) use it, is "blended threat"... symantec, on the other hand, reserves the term blended threat for those hybrids that include exploit code as one of the malware types in the combination... according to nick fitzgerald, symantec coined the term to mean just that, so that is the more formal meaning - however i can see no reason why exploit code should be so special as to deserve a special term for its hybrids, and clearly others agree...
back to index
Tags:
blended threat,
definition,
hybrid
Saturday, May 13, 2006
what is an exploit?
an exploit is something that takes advantage of (or exploits) a defect or vulnerability in either an existing piece of software or hardware.
unlike most forms of malware, an exploit is not necessarily a program in the traditional sense. while it can be used to refer to a program specially written to communicate bad input to a vulnerable piece of software, it can also be just the bad input itself. any bad input (or even valid input that the developer just failed to anticipate) can cause the vulnerable application to behave improperly. in the sense that all data is code and all inputs form languages, however, an exploit can be thought of as a program written in the input language of the vulnerable software.
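a toy illustration of the "exploit as bad input" idea (a contrived example, not a real vulnerability): a naive record format where the developer failed to anticipate the field separator appearing inside a field...

```python
# contrived example: the 'exploit' here is nothing but data - a value
# containing the record separator that the developer failed to anticipate

def make_record(username):
    # developer assumes usernames never contain ';'
    return f"{username};guest"

def parse_record(line):
    """expects 'user;role' records separated by ';'"""
    user, role, *_ignored = line.split(";")
    return {"user": user, "role": role}

# valid input behaves as intended:
parse_record(make_record("alice"))          # {'user': 'alice', 'role': 'guest'}

# bad input - a 'username' smuggling in an extra field - silently
# elevates the attacker, with no executable malware involved:
parse_record(make_record("mallory;admin"))  # {'user': 'mallory', 'role': 'admin'}
```

no program in the traditional sense is delivered - the vulnerable code itself does the misbehaving, driven entirely by input it wasn't written to expect...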
because exploits may simply be bad input rather than an application, they may act in the context of an exploited application without first being saved to disk. this is a problem for conventional anti-malware apps that focus on scanning for malware on disk hopefully before it has a chance to execute (often just before they execute if you're using an on-access scanner). in the case of an exploit it can be a legitimate but vulnerable application that is running and performing the malware functions so a known malware scanner would have to scan the vulnerable application's input in order to address this case.
although malware is generally not based on exploiting software vulnerabilities and therefore not inherently dependent on them, it is possible to make malware in part or completely out of exploit code. such malware would then depend on vulnerabilities by definition.
if the malware is completely dependent on vulnerabilities it can be an exception to the general rule about disclosing malware not leading to the closure of any window of exposure, since its disclosure would make the affected vendor(s) and community aware of the vulnerability and place pressure on the vendor(s) to get it fixed. however disclosing a working exploit still arms the bad guys - it should be possible to disclose the vulnerability that the malware exploits without giving out the actual malware (thereby helping to close the window of exposure without arming the bad guys as they'd have to come up with their own exploit), so disclosure of exploit based malware is still a bad thing to do.
like adware, exploits aren't necessarily always malware. a non-weaponized exploit can be used to test for the presence of a known vulnerability after a patch for the vulnerability has been made available so that people can determine if they need the patch and/or if the patch installed properly and/or if the patch is effective. this kind of exploit disclosure actually facilitates the closure of the window of exposure for the vulnerability in question.
(updated to integrate a somewhat language-theoretic view of exploits)
back to index
Tags:
definition,
exploit
Friday, May 05, 2006
socketshield hope and hype
i recently heard about an anti-malware app called socketshield that, like most new apps, is being hyped as being the best thing since sliced bread so i decided to look a little deeper...
essentially the product is a known-malware scanner (specifically a known-exploit scanner) that operates at the socket level, meaning that anything that connects to the internet would have its traffic filtered by this scanner... that's not hugely interesting in itself, though it does fill a niche, as other somewhat comparable products i've heard of restrict themselves to specific protocols like smtp or http... instead they play up their intelligence gathering efforts, hoping (probably correctly) that most prospective users won't realize that much of that is entirely analogous to methods used by more conventional anti-malware vendors... for example:
- an extended network of human researchers exists in the anti-virus industry, one need only look at the list of contributors to the wildlist to see this...
- honey pots and search bots and the like are used routinely in the anti-malware domain... just look here, here, and here for a few examples...
- a "technology that creates a filter for known and suspected exploit distributor sites" sounds an awful lot like automatic signature extraction... i know they're not exactly the same thing but they both boil down to technologies to generate matching criteria algorithmically...
- a "community of ... users who allow information about attempted exploitation of their computers to be transferred back to..." is very much like the statistics gathered by many online virus scanners...
- a "correlation engine" that collects all the intelligence gathered by various means and distributes it back to the users sounds suspiciously like an auto-update facility... theirs is real-time instead of periodically polled - big whoop...
they're just trying to differentiate themselves from the countless other security vendors on the market, and i can understand that but it seems like some people misunderstand both the originality and application of the technology... it's not an anti-rootkit technology (or an anti-stealthkit technology) per se, it's just a known malware scanner operating on network traffic and focusing on exploit code that would otherwise be used to download/install/execute more conventional malware (which i suppose makes it anti-exploit technology - which is fitting since it comes from a company called exploit prevention labs)... they're also not doing anything wildly unique when it comes to gathering information about new exploits - they're just playing it up more because the average consumer doesn't understand or appreciate the implications of what is really noteworthy about the technology - it scans for malware on the wire rather than on the disk, so it has an opportunity to stop exploits for your browser, email client, or other internet-facing software from reaching their intended targets... conventional scanners often have difficulty with this because they focus on scanning files on the local hard disk and more and more frequently systems are being initially compromised by code that never reaches the hard disk or reaches it after it's already been run...
on that matter, they do have a claim on their product information page that i must take issue with... they claim "Anti-virus and anti-spyware programs only detect exploits after the damage has been done." which is technically false - these programs detect malware on disk... sometimes that winds up being after the damage has been done, but historically they've dealt with the kind of malware that has to be a file on your hard disk before it can run and so that window of opportunity allowed their products to be used for prevention as well as cleanup...
that said, as the internet becomes increasingly ubiquitous and feature-rich, we're moving closer and closer to a network computing platform and this technology represents the network computing analog of on-access scanning - we need anti-malware vendors to make this kind of technology more ubiquitous as well... malware that runs in the browser or in a plug-in or some other network-related application is not as vulnerable to being scanned on disk before execution and the anti-malware world needs to catch up...
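the on-the-wire idea itself is simple enough to sketch... this hypothetical filter (the signatures are invented placeholders, and socketshield's actual detection is surely more sophisticated) scans traffic for known exploit signatures before it's handed to the vulnerable application:

```python
# rough sketch of the on-the-wire idea: scan traffic for known exploit
# signatures BEFORE it reaches the vulnerable application, instead of
# waiting for a file to land on disk... signatures below are invented
# placeholders, purely for illustration

KNOWN_EXPLOIT_SIGNATURES = [
    b"\x90\x90\x90\x90",     # e.g. a nop-sled fragment (illustrative)
    b"EVIL-PAYLOAD-MARKER",  # placeholder byte pattern
]

def filter_stream(chunk: bytes) -> bytes:
    """pass a chunk of traffic through only if no known signature matches."""
    for sig in KNOWN_EXPLOIT_SIGNATURES:
        if sig in chunk:
            raise ConnectionAbortedError("known exploit signature detected")
    return chunk
```

the point is where the scan happens - on the input stream, before the exploit reaches its target, rather than on a file that may never touch the disk...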
Thursday, May 04, 2006
FUD from bruce schneier
well, bruce schneier is at it again, spreading FUD about the anti-virus industry colluding with sony-bmg to prevent detection of their stealthkit (what passes for a rootkit these days)... and his kangaroo court readership buys it hook, line, and sinker, thinking the 'evidence' posted in the original thread was conclusive...
clearly people haven't actually read the original thread for meaning so i'm going to have to dissect the so-called evidence for them...
- f-secure, whose blacklight product was already detecting xcp (the sony/bmg stealthkit), admitted that they didn't immediately alert the public to the (low risk) threat because they knew the script kiddies would jump on the news of a large population of pre-compromised systems (and when the news came out that's exactly what happened) and were in talks with sony when mark russinovich broke the story... it's never been disclosed what those talks were about and it may seem like fun to assume the worst (collusion) but that's just the same paranoia that makes people want to believe anti-virus companies hire virus writers... there are far more reasonable possibilities, such as f-secure following responsible disclosure by trying to convince sony-bmg/first4internet to close the vulnerability in their stealthkit or possibly remove it altogether before information about how the vulnerability could be exploited was released to the public (and scores of script kiddies)...
- norman's sandbox technology was also able to detect xcp before the news broke...
- symantec was implicated in the collusion through a claim that they had approved xcp but that claim was later explicitly corrected (link points to the exact google cache page pointed to in the aforementioned original thread)...
- multiple av companies downplayed the threat that xcp posed... again, one could be a conspiracy nut and believe it's because they were in on it, or one could be reasonable and recognize that it's quite normal for people to downplay the significance of their own failures...
- first4internet (the people who made xcp in the first place) apparently claimed to have worked with the big anti-virus companies to make sure their software was safe, but names were ultimately not given, and first4internet had their own arses to protect so there was plenty of motivation to try to fabricate vague legitimizing circumstances...
bruce schneier, who claims to like the "be part of the solution, not part of the problem" metric, is sowing fear, uncertainty, and doubt about an entire class of security company when he says things like:
You might have expected your antivirus software to detect Sony's rootkit. After all, that's why you bought it. But initially, the security programs sold by Symantec and others did not detect it, because Sony had asked them not to. You might have thought that the software you bought was working for you, but you would have been wrong.
and that can lead people away from the only real tools there are for dealing with malware... that certainly seems more like being part of the problem than part of the solution if you ask me...
of course, when he also says:
McAfee didn't add detection code until Nov. 9, and as of Nov. 15 it doesn't remove the rootkit, only the cloaking device.
he demonstrates that he's provably ignorant of the malware domain... under current parlance, the cloaking device IS the rootkit... under the more classical definition xcp was never a rootkit at all...
the security guru has a blind spot and that's malware, but people accept his word on the subject as security gospel - unable to apply basic logic and available facts to recognize when he's in error, unable to think for themselves... so who owns your opinions? you or some guy who's supposed to know about these things?
Tags:
anti-virus,
bruce schneier,
first4internet,
fud,
malware,
rootkit,
sony bmg,
stealthkit,
xcp
automated malware classification? how cool is that?
i was quite impressed when i read halvar flake's blog post about the automated malware classification they've developed at sabre security... the basic premise is that you have a binary difference engine (some technology that can analyze two programs and determine how similar or different they are) and a large corpus (a body of work, a set of reference samples, like a kind of library) of existing binaries that have already been analyzed, classified, and named and you use the binary difference engine to compare new samples against those in the corpus to determine which one(s) in the corpus the new sample is most similar to - which in turn allows you to say with some confidence that the new sample is the same type, family, or even the same malware as the one in the corpus, depending on the accuracy of the match...
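the premise above only requires *some* pairwise similarity measure plus a labeled corpus, so here's a toy sketch of it in python... the real bindiff2 engine compares disassembled structure via ida; as a deliberate stand-in (my simplification, not sabre security's method) this uses jaccard similarity over byte 4-grams, and the corpus samples are made-up byte strings:

```python
# corpus-based malware classification sketch: compare a new sample against
# every labeled reference sample and name it after the closest match.

def ngrams(data, n=4):
    """set of overlapping byte n-grams occurring in a sample."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def similarity(a, b):
    """jaccard similarity of two samples' n-gram sets (0.0 to 1.0)."""
    ga, gb = ngrams(a), ngrams(b)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

def classify(sample, corpus, threshold=0.5):
    """name the new sample after its closest corpus match, or report it
    as unknown if nothing is similar enough to trust the match."""
    best_name, best_score = "unknown", 0.0
    for name, known in corpus.items():
        score = similarity(sample, known)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else ("unknown", best_score)

# made-up corpus: two "families" represented by reference byte strings
corpus = {
    "family-a": b"\xde\xad\xbe\xef" * 8 + b"payload-a",
    "family-b": b"\xca\xfe\xba\xbe" * 8 + b"payload-b",
}
variant = b"\xde\xad\xbe\xef" * 8 + b"payload-X"  # slight variant of family-a
print(classify(variant, corpus)[0])  # -> family-a
```

the threshold is the important design knob here - a high-confidence match can go straight to automated processing while a weak one gets escalated to a human analyst...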
the idea of binary comparison technology isn't particularly new... over a decade ago zvi netiv was using a kind of binary comparison technology to identify infected executables based on a suspect sample... of course that technology is in no way comparable to the bindiff2 that halvar wrote about... zvi's ivx program would compare programs suspected of being infected by a particular virus with one known (or at least believed to be) infected by the same virus - different programs that all had a very similar chunk of code in them were thought likely to be infected by the same virus and so the end user was supposed to be able to use this to help detect files infected with previously unknown viruses... bindiff2, on the other hand, apparently leverages the reverse engineering prowess of the ida disassembler... i kind of expected to hear about binary comparison technology like this when i read about how f-secure generates those call graphs of theirs because visual comparison seems like it could be pretty inaccurate in some cases...
automated classification isn't particularly new either... one of the classes i took getting my undergraduate computer science degree had a project where we were to perform automated classification of natural language documents (by making comparisons with a representative corpus as above but using vectors to represent the documents and judging their similarity based on how close the vectors were to each other) and we were graded on our accuracy (it probably sounds more complicated than it really was)...
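the vector approach from that course project looks roughly like this sketch (the corpus text and labels here are invented for illustration - the actual project details are long gone): documents become word-count vectors and a new document takes the label of whichever reference vector is closest by cosine similarity...

```python
# vector-space document classification: represent each document as a
# sparse word-count vector and label a new document by its nearest
# reference document (highest cosine similarity).

from collections import Counter
from math import sqrt

def cosine(a, b):
    """cosine similarity of two sparse count vectors (Counters)."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(text, corpus):
    """label text by the closest reference document in the corpus."""
    vec = Counter(text.lower().split())
    return max(corpus,
               key=lambda label: cosine(vec, Counter(corpus[label].lower().split())))

# made-up reference documents, one per category
corpus = {
    "security": "virus worm malware scanner signature detection",
    "cooking": "recipe flour oven bake sugar butter",
}
print(classify("a new worm evades the virus scanner", corpus))  # -> security
```

swap the word-count vectors for code features (n-grams, call graph properties) and the same nearest-neighbor machinery becomes the malware classifier described above...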
it's the combination of the two ideas that's really interesting and i think it's got some great potential, especially when combined with some of the other automated technologies that have been developed over the years... imagine, a new sample comes into the virus lab and the first thing that happens is it's run through this automated classification system that compares it to every other sample the company has on record... if it turns out to be similar to something already known it's fed into an automatic signature extraction system and given to a researcher to double check the findings... further, if it was automatically classified as being related to a known virus it could be run through an automatic and controlled virus execution system to determine whether or not it was a real virus or just an intended virus... something similar could also be done if it was classified as a worm...
all of which makes the virus analyst's job more efficient and less tedious (because who wants to look at 1,000 different samples that all come from the same 5 families?)... virus analysts probably aren't in danger of becoming redundant anytime soon, of course, but the more efficient they get the faster the companies can react to new threats and that's good for everyone...
i also suspect the technology could aid in the CME's deconfliction process...
some people think this technology could help solve the naming problem, but i really don't agree... this technology alone will not address the real issues behind the naming problem - it will not tell a group of vendors which of them discovered a particular piece of malware first and it won't tell them what name that discoverer has already given the malware... without that information each vendor is forced to make up a name so they can make signatures available to their customers as fast as possible... that's why the naming problem exists...
Tags:
classification,
halvar flake,
malware,
sabre security
Tuesday, May 02, 2006
"PCs, not Macs" - snake oil straight from apple?
if you haven't heard about it yet, apple has produced a bunch of commercials comparing the mac to the pc and one of them gives the distinct impression that macs don't get viruses...
of course mac computers are not immune to viruses, they had viruses in the past and a little over a month ago they got their first osx virus (osx/leap.a) so the mac's virus immunity is definitely the stuff of fiction, which makes the commercial in question all the more damning...
you see it's pretty much a foregone conclusion that when the average person hears the dialog:
PC - last year there were 114,000 known viruses for pcs
MAC - pcs, not macs
they're going to get the impression that macs don't get viruses, which is false...
but do you think a false advertising charge is going to hold up in court? maybe not... if you look closely at what is actually being said, there are no unambiguously false statements... the mac guy could have simply been saying the virus figures for the pc apply only to the pc and not the mac, which is true - or he could have been saying that the viruses for the pc are for the pc and not the mac, which is also currently true...
the mac guy was awfully ambiguous, and the commercial walked a very fine line between giving a false impression and making an actual false statement - just enough to get the benefits without any legal problems... and that's what makes me think it was intentional and why i have no qualms about calling it snake oil (rather than just an accident)... you see you don't have to explicitly lie in order to be a snake oil peddler, you just have to make people believe something that isn't true in order to make your product look better and that is exactly what apple is doing...
one of the biggest security problems in the mac world, especially in light of all the vulnerabilities that have been uncovered recently and proof of concept malware that's been developed, is denial... mac users don't see security as being a problem they need to worry about and apple is actively fostering this delusion with the commercial in question and with statements like that of bud tribble that macs are designed to be run "without the need for firewalls or additional security software"... in that sense, apple is being part of the problem rather than part of the solution...
Subscribe to:
Posts (Atom)