Thursday, May 27, 2010

man 'infects' self with computer virus?

that's pretty much what the story says (read it here) and if it were true (or even possible) then it would be a pretty boneheaded thing to do. and they call this guy "doctor".

but if you haven't figured it out by now, the whole thing is complete rubbish. over 4 years ago a bunch of researchers came up with some less than amazing research that showed that certain RFID tagging systems could be susceptible to compromise by malware stored on RFID tags themselves (which i wrote about here). specifically, the data on the tags could exploit programming flaws in the back-end database the system runs off of. in essence you could write malware (even viruses or worms) that operates in the context of the database, and you could use RFID tags as a storage and distribution medium.
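to make that a little more concrete, here's a minimal sketch (in python, with a made-up middleware and made-up tag contents - none of this is lifted from the actual paper) of the kind of flaw that research described: a back-end that pastes tag data straight into a SQL statement, so a crafted tag gets its own commands run in the context of the database.

    # toy model of a back-end that logs RFID tag reads - the table, the
    # middleware functions and the tag contents are all invented for
    # illustration, not taken from the original research
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE tags (tag_id TEXT, last_seen TEXT)")

    def log_tag_unsafe(tag_data):
        # vulnerable: the tag's contents are pasted directly into the SQL
        # statement, so a crafted tag can smuggle in extra commands
        conn.executescript(
            "INSERT INTO tags VALUES ('%s', datetime('now'))" % tag_data
        )

    def log_tag_safe(tag_data):
        # the obvious fix: parameterized queries treat tag contents as data only
        conn.execute("INSERT INTO tags VALUES (?, datetime('now'))", (tag_data,))

    # a benign tag just gets logged...
    log_tag_unsafe("TAG-0001")
    # ...while a hostile tag's payload executes in the database's context
    # (here it just plants an extra record, but it could just as easily
    # rewrite the data that gets copied onto other tags - the worm scenario)
    log_tag_unsafe("x', datetime('now')); INSERT INTO tags VALUES ('PWNED-BY-TAG")
    print(conn.execute("SELECT tag_id FROM tags").fetchall())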

4 years later, another bunch of researchers (including one dr. mark gasson) have the (not so) bright idea to stick one of those RFID tags in a person. yeah, that really is all that happened here. that and some effective huckstering to get into the media spotlight (even if only for a short while).

it really is quite unspectacular though. dr. gasson took something that supposedly contained a computer virus, put it inside yet another container (his own body), and then called that container 'infected' as if that's all it took. by the same token i could put one of those tags in a tupperware container and call that 'infected' - and wouldn't that be something: not only 'infecting' something that's inanimate but something that's well and truly inert. what's more, i could take one of these specially prepared RFID tags and put it between two slices of bread and make an 'infected' sandwich. i could even put one in a hole in the ground and 'infect' the planet. clearly this notion of 'infection' by containment is absurd.

but it gets a little more absurd, because dr. gasson claims to be the first. now a very clever commenter over at boingboing pointed out (here) that you'd get basically the same effect by sticking an infected USB thumb drive in your rectum. while you may consider that to be a juvenile observation to make, his assertion that it constitutes prior art has merit. USB thumb drives are small and portable and easily inserted into a variety of bodily cavities for the purposes of hiding or smuggling data. furthermore, USB thumb drives are notorious for hosting things like autorun worms. as such, with the bizarre "containment == 'infection'" logic being used, it's almost a certainty that dr. gasson was actually not the first human to be 'infected' by a computer virus in this (non)sense; he's simply the first to make a media spectacle of himself over the issue. (one might argue that unlike an RFID tag, a USB thumb drive wouldn't be able to pass on the infection once in your rectum - but it was never specified that the business end couldn't be left sticking out)

i can't discuss this topic without touching upon what i think is (at least partially) to blame for the ridiculous and rather scary statement being made by dr. gasson, though. it's one of my long-standing pet peeves: terminology misuse. you may have noticed my persistent decoration of words related to the word "infect". that's because it's been used in a sense so far from any reasonable meaning of the word that it doesn't deserve to be treated as a normal word. terminology misuse has become de rigueur to such an extent that even well known anti-malware personalities such as graham cluley and mikko hypponen endorse it for some terms. you can argue about natural semantic drift of words over time until you're blue in the face, but technical jargon is not (and should not be) subject to the whim and whimsy of the unwashed masses (even when it comes to subjects that affect them). this particular instance of sensational absurdity is a consequence of, and an argument against, unfettered semantic drift in the realm of technical jargon. in no reasonable sense of the word was this person ever infected by a computer virus. absurd statements like the one the researcher is making are only possible because of slightly less absurd statements made before it that haven't been corrected yet, which in turn can eventually be traced back to basic terminology misuse.

Saturday, May 15, 2010

KHOBE extortion

not long after my last post on the subject of KHOBE, david harley posted a collection of links about it on the AMTSO blog, and one in particular caught my eye: the one by ralf benzmüller on the gdata blog.


what's interesting about it is that it shows the KHOBE media circus in a new light that i think deserves a lot more attention. 

it appears that there was a very good reason for the sensationalistic headline that matousec used when announcing their research (the reference to an 8.0 earthquake sounded positively cataclysmic) - they're looking to cash in. they're expecting the companies they claim are affected to fork out a ton of cash for the paper containing the details (and they're offering their services to those companies too). how much is a ton of cash? the combined amount for all companies will apparently be in the six figure range (ka-ching!). on top of that, their correspondence is anonymous (what?! why use anonymity in this context?) which, when taken together with the money grab they're attempting and their sensationalism in announcing the attack, takes what once looked like just an irresponsible action (one that happens altogether too often in security research circles) and makes it seem downright shady.


whichever way you look at it, if ralf benzmüller's account of events is accurate then it's clear that matousec's interests do not lie in helping the community become more secure, and i can only hope the majority of the companies named in their initial release avoid the kinds of back-alley dealings these mustache-twirling individuals seem to have in mind.

understanding KHOBE

there's been quite a lot of press given to the research surrounding KHOBE this week, and while the coverage on the normal anti-malware blogs i follow was quite good, coverage outside that community left something to be desired (accuracy? comprehension?).

to start with, KHOBE is not so much an attack as it is a technology. KHOBE stands for Kernel HOok Bypassing Engine. it's technology developed by matousec to do exactly what that name suggests - bypass kernel hooks. it does this with a technique that matousec calls argument switching but which has in the past been called a time-of-check-to-time-of-use attack (or TOCTTOU attack; see this post on the eset threatblog).

this sort of attack is an active countermeasure, meaning a piece of malware implementing it has to be active in memory in order for the attack to be performed. this means, first and foremost, that in order to mount this sort of attack the malware already has to be able to get past traditional scanning technologies without using argument switching, since those technologies examine the program before it's allowed to execute and become active.

as such, the anti-malware techniques that argument switching is a countermeasure for are behavioural techniques (including, but not limited to, certain self-defense techniques that anti-malware products use to prevent being shut down by malware that becomes active). by switching the contents of memory after a chunk of code has been checked for potentially malicious intent but before it's passed to the system to execute, this attack prevents behavioural detection/prevention techniques from seeing the code that is actually going to be executed. in that way it constitutes a kind of stealth that works against behavioural techniques (though only against behavioural techniques that are implemented in a particular way - using the particular kernel hooks that KHOBE was designed to bypass).
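to show the general shape of that check/use race (and only the shape - this is a toy sketch in python, not the real kernel-level mechanics, and every name in it is invented), here's what argument switching boils down to:

    # toy model of a time-of-check-to-time-of-use race: a 'hook' checks a
    # shared request and then hands it off, while another thread swaps the
    # request's contents in the window between the check and the use
    import threading
    import time

    request = {"args": "benign operation"}  # shared memory the hook inspects

    def hooked_call(req):
        # the 'security hook' part: inspect the arguments first
        if "malicious" in req["args"]:
            print("hook: blocked", req["args"])
            return
        # the gap between checking and using is deliberately exaggerated
        # here so the race is easy to win
        time.sleep(0.1)
        # the 'use' part: whatever is in the request *now* gets acted on
        print("kernel executed:", req["args"])

    def switcher():
        # the attacker's thread: wait until the check has already passed,
        # then swap the arguments for something the hook would have blocked
        time.sleep(0.05)
        request["args"] = "malicious operation"

    threading.Thread(target=switcher).start()
    hooked_call(request)  # ends up printing "kernel executed: malicious operation"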

behavioural stealth is interesting at least insofar as it demonstrates some of the limits of behavioural techniques to protect systems from attack (all techniques have limitations). from a strategic point of view, once a piece of malware is able to execute, every opportunity you had to prevent compromise outright has already passed - at that stage the only thing left is to contain the damage using behavioural techniques and/or sandboxing. behavioural stealth could clearly render containment by behavioural blocks ineffective (again, if those blocks are implemented using the kernel hooks in question) and might even bypass sandboxing technology, depending on how it's implemented.

although the attack has apparently been known about for some time (much longer, it seems, than the folks at matousec realized), it hasn't yet been used. however, that may now change as the current threat ecosystem is much different from when TOCTTOU was originally discussed. matousec has raised the attack back out of the depths of obscurity and there are now any number of computer criminals out there who would be more than willing to take matousec's research and use it for their own gains (much like what happened to eEye's bootroot research). this is the risk one takes when publishing attack research for the world to see, so you have to weigh your options very carefully to see if it's really worth arming the bad guys. and of course don't just follow full disclosure dogmatically simply because it seems like that's what everybody else does - as emerson said "a foolish consistency is the hobgoblin of little minds".