Saturday, May 15, 2010

understanding KHOBE

there's been quite a lot of press given to the research surrounding KHOBE this week, and while the coverage on the normal anti-malware blogs i follow was quite good, coverage outside that community left something to be desired (accuracy? comprehension?).

to start with, KHOBE is not so much an attack as it is a technology. KHOBE stands for Kernel HOok Bypassing Engine. it's technology developed by matousec to do exactly what that name suggests - bypass kernel hooks. it does this with a technique that matousec calls argument switching but which has in the past been called a time-of-check-to-time-of-use attack (or TOCTTOU attack; see this post on the eset threatblog).

this sort of attack is an active countermeasure, meaning a piece of malware implementing it has to be active in memory in order for the attack to be performed. this means, first and foremost, that in order to mount this sort of attack the malware already has to be able to get past traditional scanning technologies without the help of argument-switching, since those technologies examine the program before it's allowed to execute and become active.

as such, the anti-malware techniques that argument-switching is a countermeasure for are behavioural techniques (including, but not limited to, certain self-defense techniques that anti-malware products use to prevent being shut down by malware that becomes active). by switching the contents of memory after a chunk of code has been checked for potentially malicious intent but before it's passed to the system to execute, this attack prevents behavioural detection/prevention techniques from seeing the code that is actually going to be executed. in that way it constitutes a kind of stealth that works against behavioural techniques - though only against behavioural techniques that are implemented in a particular way, using the particular kernel hooks that KHOBE was designed to bypass.
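to make that check/use window concrete, here's a minimal sketch in python. this is not matousec's actual code - the hook, the timing, and all the names are illustrative, with an ordinary function standing in for a behavioural kernel hook and a thread standing in for the malware's argument switcher:

```python
import threading
import time

def hooked_syscall(arg, log):
    """toy stand-in for a hooked kernel service: check, wait, use."""
    # time-of-check: the behavioural hook inspects the argument buffer
    if b"malicious" in bytes(arg):
        log.append("blocked")
        return
    time.sleep(0.05)  # the window between the check and the use
    # time-of-use: the "kernel" acts on whatever is in the buffer NOW
    log.append("pwned" if b"malicious" in bytes(arg) else "benign")

def attack():
    arg = bytearray(b"benign...")  # passes the check at check time
    log = []

    def switcher():
        time.sleep(0.02)        # wait until the check has gone by
        arg[:] = b"malicious"   # swap the argument inside the window

    t = threading.Thread(target=switcher)
    t.start()
    hooked_syscall(arg, log)
    t.join()
    return log[0]

print(attack())  # the hook approved "benign...", but the use saw "malicious"
```

the hook sees the benign contents at check time, but by the time the simulated kernel acts on the buffer the switcher thread has replaced them - that's the whole trick: the check and the use look at the same memory at two different times.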

behavioural stealth is interesting at least insofar as it demonstrates some of the limits of behavioural techniques to protect systems from attack (all techniques have limitations). from a strategic point of view, once a piece of malware is able to execute, every opportunity you had to prevent compromise outright has already passed - at that stage the only thing left is to contain the damage using behavioural techniques and/or sandboxing. behavioural stealth could clearly render containment by behavioural blocks ineffective (again, if they're implemented using the kernel hooks in question) and might even bypass sandboxing technology, depending on how it's implemented.

although the attack has apparently been known about for some time (much longer, it seems, than the folks at matousec realized), it hasn't yet been used in the wild. that may now change, however, as the current threat ecosystem is much different from the one that existed when TOCTTOU attacks were originally discussed. matousec has raised the attack back out of the depths of obscurity, and there are now any number of computer criminals out there who would be more than willing to take matousec's research and use it for their own gains (much like what happened with eEye's bootroot research). this is the risk one takes when publishing attack research for the world to see, so you have to weigh your options very carefully and decide whether it's really worth arming the bad guys. and don't just follow full disclosure dogmatically simply because that's what everybody else seems to do - as emerson said, "a foolish consistency is the hobgoblin of little minds".


Vess said...

It's not stealth - it's tunneling. Or at least this is what we used to call such techniques in the good old days of DOS.

kurt wismer said...

hmmm, i'm not sure argument switching is the same as or analogous to tunneling.

that being said, it seems to me that even tunneling would constitute a kind of behavioural stealth.

i'm not talking about normal stealth, just so we're clear, i'm talking about behavioural stealth - hiding from behavioural checks. tunneling qualifies, as far as i'm concerned.