Sunday, August 31, 2008

suggested reading

viruses on the international space station

this past week there was a lot of buzz surrounding the news that an autorun worm had infected 2 laptops aboard the international space station... i wasn't sure i was going to bother saying anything about it at first but then i decided it might serve as an interesting object lesson so let's look at what we can learn from this event and what could have been done differently...

my first reaction when reading of the event was that this just goes to show how pernicious and autonomous self-replicating malware truly is... that notion (that viruses/worms are somehow worse or more autonomous than other forms of malware) has been scoffed at in the past but viruses in space stand as a testament to their ability to get into places no one intended or would have imagined... no other form of malware besides self-replicators would have been able to find new victims in that sort of environment...

another thing we can learn from this is to stop clinging to the fantasy that the only kind of malware we need to worry about anymore is the new stuff, that old-style viruses and worms aren't worth worrying about anymore... this wasn't brand new malware, it wasn't state of the art, and it wasn't something researchers on the bleeding edge would have taken notice of even when it was new... the malware threat landscape isn't composed exclusively of novelties, there's a heck of a lot of banality out there as well...

yet another lesson that can be learned is that for all the whining about how AV is failing, at least some of the evidence used to support that argument (in other words, some of the failures) is actually a result of not using AV in the first place, not keeping it up to date, or not following the various other best practices for AV...

actually using an AV program is the first thing the astronauts and/or NASA could have done differently... while i'm sure there are plenty of arguments for why one might not want an anti-virus program on those machines, such as highly critical real-time processing of experimental data, these were laptops running windows and so were already unsuitable for real-time processing ('what do you mean the OS must have been busy doing something else during that time period?')... i assume someone up there must have had AV or else we wouldn't have a name for the malware...

failing that, they could have used some other sort of anti-malware technology like application whitelisting... in fact, considering the environment that might even be a more appropriate approach since it's unlikely that astronauts need to introduce new software to those machines very often... that is unless part of their job requires them to rewrite or apply patches to software being used in the experiments to collect/analyze data... come to think of it, that might actually be the case - it's not like the folks designing the experimental payloads have a lot of chances to test and debug their software under real-world conditions when the real-world in question is actually out of this world...

the astronauts could have operated the machines under non-administrative accounts - actually there isn't really anything to suggest they didn't, nor is there anything specific to an autorun worm's replication technique that should require administrative access... despite a previous post i made highlighting the ways in which least privilege can fail to stop malware, it still is fairly effective against a lot of existing malware...

they could have disabled autorun on those machines - in fact, they probably still should disable it... autorun is purely a convenience feature for the technologically inept; hopefully that's not the sort of folks NASA is sending into space (then again, they did get infected by somewhat old malware)...
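
for the curious, disabling it doesn't take much - here's a minimal sketch in python of the registry change involved (it assumes the documented NoDriveTypeAutoRun policy value and the standard winreg module; i'm not suggesting this is how NASA would or should do it, and you'd want to test it on a disposable machine first):

    # minimal sketch, assuming the documented NoDriveTypeAutoRun policy value;
    # 0xFF is a bitmask covering every drive type (removable, fixed, cd-rom, etc)
    import winreg

    EXPLORER_POLICIES = r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer"

    def disable_autorun_for_current_user():
        key = winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, EXPLORER_POLICIES,
                                 0, winreg.KEY_SET_VALUE)
        winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0, winreg.REG_DWORD, 0xFF)
        winreg.CloseKey(key)

    if __name__ == "__main__":
        disable_autorun_for_current_user()   # takes effect for new explorer sessions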

finally, they could have used something other than windows machines... although technically not immune to malware, macs and linux machines have a far smaller pool of threat agents to worry about and the lower population density means that they are less connected to other similar endpoints that could pass on something they'd be susceptible to... of course, once again this is likely subject to what the machines are being used for - if they're running or monitoring experiments with them then they may be stuck with whatever the people who designed those experiments wrote their software for (and considering the cost of doing anything in space, cutting corners on the ground and using cheap windows developers is pretty likely)...

according to NASA this is not the first time they've had a virus infection in space... let's hope they also look at these sorts of events as learning experiences and figure out how to do things better in future...

what is an autorun worm?

an autorun worm is a type of worm that is carried (literally) from one machine to another on removable media (such as CDs, DVDs, or USB flash drives) and uses the autorun feature of windows to either automate or at the very least facilitate its execution when the media is put into the next computer...

this type of worm makes use of the autorun facility by copying not only itself to removable media but also placing an autorun.inf file on the media that contains the instructions necessary to run the worm's program when the media is inserted into a machine that has the autorun facility enabled...
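
to make that concrete, the contents of such an autorun.inf might look something like the following (a hypothetical example - the file name 'worm.exe' is invented for illustration, and the action/icon lines are only there so the entry masquerades as the normal 'open folder' option; exact behaviour varies somewhat between windows versions):

    [autorun]
    ; hypothetical example - the file name is invented for illustration
    open=worm.exe
    action=Open folder to view files
    icon=worm.exe,0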

although the autorun facility works by default for CDs and DVDs (as you've no doubt noticed when inserting some of them into your computer), it doesn't work by default for standard USB flash drives - something called autoplay (which shows the user a menu of convenient actions s/he can take such as playing audio/video, opening the drive in explorer, etc) is initiated instead...

that said, there are changes one can make to a computer to make it initiate autorun instead of autoplay, there are specially designed USB flash drives that lie to windows about what kind of device they are in order to make use of autorun, and other USB devices can't reasonably be expected to identify themselves as standard USB flash drives when that's not what they are and so pose the potential of initiating autorun when used... also, even if autorun doesn't automatically initiate as soon as the media is inserted into the computer, it may initiate when you double-click on the drive in explorer... additionally, for contexts where autoplay is initiated, the autorun.inf file can specify actions (such as executing the worm) to be added to the top of the menu that the user is presented with, and these can be presented in a deceptive manner so as to trick the user into choosing the malicious action added by the autorun.inf file...

back to index

Tuesday, August 26, 2008

can facebook sanitize application content?

while reading through some blogs this evening i happened across this article by ryan naraine pointing to a register article by dan goodin about an apparent security vulnerability in facebook's 3rd party application platform that allows arbitrary (and possibly malicious) javascript to be executed...

what caught my eye was the notion that facebook was failing to sanitize application content...

this struck me as an odd statement... i've heard of sanitizing input/data - that's something that makes sense since data should adhere to a strictly defined format... names don't go where dates should be, numbers should actually be numeric, escape sequences should be filtered, etc... it's possible to reject everything that doesn't meet the pre-defined criteria...
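
as a toy example of the kind of default-deny checking i mean (the field names and formats here are invented purely for illustration):

    import re

    # strictly defined formats - anything that doesn't match is rejected outright
    DATE_FORMAT = re.compile(r"^\d{4}-\d{2}-\d{2}$")          # e.g. 2008-08-26
    NAME_FORMAT = re.compile(r"^[A-Za-z][A-Za-z' -]{0,49}$")  # letters plus a few separators

    def sanitize_record(name, date_of_birth):
        # default-deny: no attempt is made to guess what malformed input 'meant'
        if not NAME_FORMAT.match(name):
            raise ValueError("name does not conform to the expected format")
        if not DATE_FORMAT.match(date_of_birth):
            raise ValueError("date does not conform to the expected format")
        return name, date_of_birth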

i'm going to put on my computer scientist hat and say the same is not applicable to applications written in a turing complete language (which javascript is)... the only way to truly 'sanitize' them is to come up with an entirely new and decidedly less than turing complete language for people to use instead... that is likely outside the scope of what facebook or any other application platform provider is willing to invest their time in - especially since it creates a far more restrictive and less rewarding environment for 3rd party application developers to work with, leading to the very likely result of driving them to other, less restrictive platforms...

that's not to say there aren't some specific instruction sequences that facebook can try to filter out, however with all the ways to obfuscate javascript i doubt it would accomplish much - and even if obfuscation weren't a problem the turing completeness would be; you'd never be able to list (never mind filter) all the possible malicious sequences of instructions, nor could you definitively say whether those instructions would ever actually be interpreted by the javascript interpreter of a visitor's browser... detecting malicious sequences of javascript instructions in 3rd party applications is similar to detecting malicious programs like viruses or worms - in fact, they are identical problems and so it should come as no surprise that detecting such malicious sequences of javascript instructions (a necessary precursor to sanitizing the javascript) is equivalent to solving the halting problem... therefore facebook may technically be correct in saying there's no vulnerability in the sense that most people would consider (though i have no idea if they actually analyzed the issue in this manner)... just because you can do bad things doesn't mean there's a vulnerability... sometimes it just means there's more flexibility than you were expecting or wanted...
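
for anyone who wants the reduction spelled out, here's a rough sketch of the argument in python-flavoured pseudocode - is_malicious() is the imaginary perfect detector and do_something_malicious() stands in for any behaviour everyone agrees is bad; neither actually exists, which is the point:

    def is_malicious(script_source):
        # the imaginary perfect detector of malicious code; the argument below
        # shows why no such thing can actually be built
        raise NotImplementedError("no such detector can exist")

    def would_halt(script_source, script_input):
        # wrap the script so the unambiguously bad behaviour only happens *after*
        # the script finishes running on the given input
        wrapped = (
            "input_data = " + repr(script_input) + "\n"
            + script_source + "\n"
            + "do_something_malicious()\n"
        )
        # a perfect detector would flag the wrapped script exactly when the
        # original script halts on that input, so it would also be solving the
        # halting problem - which is known to be impossible
        return is_malicious(wrapped)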

at any rate, facebook has other ways of dealing with malicious application content - they block the entire application once they discover it's bad... after all, even if they did somehow manage to sanitize javascript, what about flash? or java? or activex? it's not feasible to manually audit all applications before they're allowed to go live on facebook, and it's not possible to find all the malicious code with automated auditing because of the halting problem - so, like more traditional forms of malicious code, malicious facebook applications are something that must inevitably be dealt with after the fact...

Monday, August 18, 2008

xkcd, diebold voting machines, and anti-virus

i think by now just about everyone has seen this xkcd comic:

[xkcd comic: "voting machines"]

lots of people have posted their thoughts about it but unfortunately quite a number just don't get what it's about and missed the point by a wide margin...

the comic, titled "voting machines" (not "anti-virus"), is about voting machines (not anti-virus)... imagine that... the artist compares av to condoms as an entirely acceptable preventative measure under ordinary circumstances but then points out that there are some circumstances where its presence indicates something else has gone horribly wrong... in this case, that something is the voting machine itself - there's no reason (other than diebold/premier election solutions being cheap and lazy) for a voting machine to be capable of running viruses, let alone anti-virus software, or even windows for that matter...

voting machines need to do one very narrowly defined thing, and they have to be incredibly reliable and resistant to tampering (after all, the future of your government depends on them)... those design criteria call for special purpose (rather than general purpose) computers - computers that are physically incapable of doing anything more than carrying out the very narrowly defined set of tasks they were designed for... you've seen them before - cheap pocket calculators (as opposed to the fancy scientific ones), wrist watches, etc... putting extra power needlessly into the voting machines made them less reliable and less secure, but it's cheaper to use off-the-shelf components and it's cheaper to pay for developers who work on general purpose platforms than those who work in embedded systems...

as the old saying goes: cheap, fast, good - pick 2...

(oh, and i should thank the comic's author, randall munroe, since, for a brief period anyways, guess who was the top result for searches on 'diebold antivirus')

Sunday, August 17, 2008

terminology rant

remember me saying the word infect (and its derivatives) was overused/misused? probably you don't - this blog had a lot fewer readers back then...

but infect is still being overused and here's an example... describing the deployment of a drive-by download attack as putting up an infected website... viruses are what infect things but viruses only infect programs (or computers in the case of the subset known as worms)... websites aren't programs (though they may contain one or more of them in the form of scripts/flash/etc) and what gets put on them for drive-by downloads are generally not viruses but rather some non-viral form of malware...

misusing the term 'infect' like this reminds us that collectively we still haven't gotten over calling all malware a virus, and i think by now we've all realized that such imprecise thinking/communicating only leads to ambiguity and confusion...

so, like i did when i suggested an alternate term for when a computer has non-viral malware, i'm going to suggest a term (2 actually) for the websites used in drive-by downloads... the first should be rather obvious, when the entire website itself is put up by the blackhat then it's a malicious site - no need to mince words, the site attacks visitors in one way or another so it's malicious... the other comes about because sometimes the malicious content is on what is otherwise a completely legitimate site like yahoo or cnn or the superbowl... we can't exactly call those malicious sites, however we can call them tainted sites (maybe even poisoned sites for those who prefer more pejorative terms)...

china, disclosure, and malware

not too long ago, amrit williams wrote on his experience traveling in south-east asia and specifically his observation of their attitudes towards malware disclosure:
What I was shown was the most active and open distribution of malware, kits, and exploits I have ever witnessed. I will refrain from the details but considering the perceived insular nature of China and the openness of the US, I can tell you from the sharing of knowledge perspective we are way behind.

I asked some questions about disclosure and was met with puzzled looks and shaking heads.
though it's not completely unambiguous, i get the distinct impression (especially from the wording of the statement that we're way behind) that he thinks we should be more like them... but i have to wonder what all that openness with regards to malware disclosure has actually done for china... are they better off than we are? are they better equipped to keep the malware problem in check?

i suppose perhaps if you were only looking at the technological side of the malware problem the easy availability of malware for study should theoretically be of help... but i have to wonder: when most of the population is not involved in the creation of tools to help defend against malware, and when those that are involved have fairly open access to malware even over here in the west, what advantage is china's undifferentiated openness really giving them in practice?...

if we don't just look at the technological side, however, if we include the social component as well (especially as it relates to malware creation) then that openness comes under a different light... what do we know about china and malware in broad terms? well, although finjan's figures suggest china is not the biggest host of malware, according to kaspersky they are the largest producer of malware...

sharing malware materials helps learning about malware, there's no doubt about that, but uncontrolled/undifferentiated sharing (as opposed to the more reserved 'only if i know and trust you' type of sharing) helps the creation of malware more than it does the defense against malware, and china may well be serving as an unrecognized object lesson in that fact... so the next time we look at how other cultures handle issues differently, before concluding that they're doing things better because they're adhering more closely to abstract principles that we value, let's make sure we look at the larger picture and not lose the forest amongst the trees...

look who's drinking the whitelist koolaid now

from a recent blackhat-related post on symantec's blog:
Symantec has been stressing for quite some time that we are on the cusp of a critical inflection point where the number of unique malicious code instances is surpassing the number of legitimate code instances.


the inflection point (shouldn't that really be an intersection of two curves?) will not and cannot be reached... symantec is ignoring the figures provided by bit9, whose core business is application whitelists (so when they say good software is growing at a rate that is greater than what anyone says malware is growing at, despite the fact that it's in their interests to suggest the opposite, you better believe they know what the heck they're talking about)...

symantec is also ignoring the kind of basic logic i used 2 years ago when i said (without the benefit of figures) that good software outnumbers malware and grows faster than malware, and that i described in detail more recently: basically that malware writers are a small subset of the set of all programmers and there's no realistic way for a small group to outproduce a large group...
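
to put toy numbers on that (and these are pulled out of thin air purely to illustrate the shape of the argument, not actual measurements):

    # toy figures, invented purely for illustration
    all_programmers = 1000000
    malware_writers = 10000              # a small subset - say 1% of all programmers
    programs_per_person_per_year = 10    # assume both groups produce at the same rate

    legit_per_year = (all_programmers - malware_writers) * programs_per_person_per_year
    malware_per_year = malware_writers * programs_per_person_per_year

    print(legit_per_year / malware_per_year)   # -> 99.0, legit code outpaces malware ~99 to 1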

this reminds me of an earlier symantec gaffe wherein john thompson claimed the virus problem was solved... sometimes i wonder if symantec works on the philosophy of not letting the facts get in the way of good marketing...

Saturday, August 09, 2008

is sympatico training their users to be victims?

sympatico, for those of you who don't know, is one of the largest (maybe the largest) ISPs in canada, owned and operated by bell canada (virtually our sole major telecom up here in the great white north)... i happen to be a sympatico subscriber, as i imagine many canadians are, so obviously i get to see all the various emails they send out... most of it is junk, complete rubbish that i have no interest in receiving, telling me about products/services they're offering that i have no interest in using let alone paying for... unfortunately i can't very well block email from my ISP because someday far in the future they may possibly send me an email that's actually important... i don't think it's happened yet, but i can't rule out the possibility that it might happen in the future so i'm stuck weeding through mandatory spam...

i've been a sympatico subscriber for many years now, however, so this really isn't news... i've become largely numb to their marketing messages but there was one this past week that set off all sorts of alarms - it promises to enter the user into a draw for various valuable prizes if they'll download and run some executable that's described as internet check-up software... it's like something out of a malware spreader's social engineering playbook... even thunderbird thought it was a scam...


the completely wild thing is that all the links point to sympatico/bell, it appears to be a genuine (if ill-conceived) offer... i thought for sure the email was only pretending to be from sympatico, that while many of its URLs might point to actual sympatico content there would still be one key url pointing to malicious content, but i was wrong... even the link to the software itself is theirs...

it seems to me that this highlights a need for adaptation from a group you normally wouldn't think would be particularly impacted by security threats: marketing departments... they need to change the way they market wares in order for their marketing message to not appear nefarious, but at the same time the black hats are going to continue to adopt any new marketing styles that marketing professionals come up with in order for their social engineering to appear legitimate... it's a strange concept to imagine that marketing needs to stay one step ahead of the bad guys, but at the end of the day i suspect that security threats (of all things) are going to change the face of marketing on the internet...

(and no, i'm not going to install the checkup software even though it appears to be legit - the whole thing just creeps me out too much)

on locks and keys

perhaps you've come across this story elsewhere, but boing boing has a piece on security researchers supposedly cracking the security of some physical lock system by making duplicate keys based on photos of the original...

is it just me or does that seem like a non-issue? what key-based lock isn't susceptible to replica keys? isn't that pretty much the nature of any token-based security system that if you can produce a duplicate token you're in? and aren't keys basically archaic tokens?

to me it seems that such research borders on captain obvious' territory, but here's a suggestion for lock-makers to help avoid this specific form of attack - retractable keys (hey, if they can do it for USB flash drives they ought to be able to do it for keys too) that you only extend directly into the lock mechanism... that way, in practice, the key portion should never need to be visible and thus wouldn't be susceptible to photographic acquisition...

Friday, August 01, 2008

suggested reading

oops, a little on the late side, but not by too much...