Saturday, January 05, 2008

eEye and malware creation ethics

wow, i just got done throwing out a mention to the new MBR stealthkit and now i find out its pedigree...

folks, it's time you go over your malware researchers with a fine-tooth comb because i don't think anybody in the industry really wants to be responsible for another proof of concept getting into the wild...

that is, unless you're eEye digital security, apparently... not only was it their researchers derek soeder and ryan permeh who created bootroot and presented it at blackhat in 2005, but even as i type eEye is hosting that malware on their website right now...

this is absolutely NOT responsible behaviour and the people who were and are responsible for both the stealthkit and the company should be ashamed of themselves for being part of the problem instead of part of the solution... anti-malware researchers and companies should not be creating and distributing malware - this is so fundamentally wrong i'm actually tongue-tied trying to describe how wrong it is... needless to say i won't be touching anything of theirs with a 10 foot barge pole and others might want to consider the same...

*update* - tyler reguly caught or otherwise helped me see a factual error i made... the stealthkit in the wild is not the exact one developed and distributed by eEye but rather is a modified version of it... that doesn't really change much, however, because they still armed the bad guys...

8 comments:

cdman83 said...

Researchers must play a fine balancing act between revealing too much information and too little. Both of these approaches have problems:

- Too much information means that you have practically given the "bad guys" a ready-to-use weapon

- Too little information means that "good" people don't have enough information to counteract the threat or that they have to waste their time reproducing the exact details

Also, one must not forget that "guns don't kill people, people kill people": it is easy to blame the researchers for a flaw (because they "revealed" the information and provide a single target for attacks), but:

- the designer / implementer of the system actually created the problem (either consciously as a compromise between different aspects like usability, security and cost or unconsciously as a bug)

- the problem may already be known to a group of people who might or might not be "malicious"

- the people exploiting the problem to cause damage are actually the "bad guys"

kurt wismer said...

"Researchers must play a fine balancing act between revealing too much information and too little."

and when it comes to malware it should be understood that creating and distributing the malware is going well over the line...

"Too much information means that you have practically given the "bad guys" a ready-to-use weapon"

which is precisely what has happened...

"Too little information means that "good" people don't have enough information to counteract the threat or that they have to waste their time reproducing the exact details"

a) this naively assumes that your only options are to give information to everyone or no one - the anti-virus industry works on the basis of giving sensitive information only to people the giver knows s/he can trust...
b) giving information about a threat and creating the threat in the first place are 2 wildly different things...

"Also, one must not forget that "guns don't kill people, people kill people": it is easy to blame the researchers for a flaw (because they "revealed" the information and provide a single target for attacks), but:"

i've tried to make this clear on multiple occasions in the past but i'll put it bluntly here as well - malware research is not comparable to vulnerability research... what works for one doesn't necessarily work for the other as far as disclosure goes...

but even if they were comparable and it made sense to disclose findings in the same way for both, you'd certainly not expect researchers to create new flaws so why should we accept it when they create new threat agents?

both in general and in this particular case the creation of malware does not highlight the existence of a software vulnerability but rather an inherent capability of a system, therefore the researchers in question did not advance any vulnerability disclosure goals...

cdman83 said...

It's nice arguing with you :-)

First of all, the researchers didn't "create" the flaw. They didn't make Microsoft implement access to the "PhysicalDrive0" pseudo-device in such a way that it allows writing to the first sectors (thus allowing the MBR to be overwritten). They didn't "make" the PC platform use the first sector of the hard-drive as initialization code. Security research is about using available systems in creative ways. It's about working inside of some strict restrictions (which are - hopefully - getting stricter and stricter). Again: security researchers do not "create" the flaw, they just point to it.

Second of all: you are right, there are backchannels / "trusted areas" where supposedly only people from the industry communicate. However, such an approach is not scalable. You can't create a "separate Internet" just for a particular industry and (a) expect that everybody who matters be part of it and (b) make sure that the "bad guys" don't get in. (Just imagine the thought process: who do we include? anti-virus companies? how about anti-spyware? how about personal firewall makers? web security companies? anti-spam? penetration testers? how do you decide? and how do you know that somebody won't leak information? as they say: "what two people know, isn't a secret anymore")

Lastly: exploit code (aka POC) is the best documentation for a flaw. I can fill pages with techno-speak to describe a flaw or I can give you half a page of source code which exploits the flaw. Getting the source code means efficiency, because you don't have to spend time recreating it; you can immediately use it to test systems you have permission to test (as a penetration tester, for example). While I advocate being responsible and trying to contact and work with the vendor first, I also think that publishing a POC (eventually) is a good thing.

kurt wismer said...

"First of all the researchers didn't "create" the flaw."

the ability to support malware is not a flaw... it is an inherent property of a system...

"They didn't make Microsoft implement access to the "PhysicalDrive0" pseudo-device in such a way that it allows writing to the first sectors (thus allowing the MBR to be overwritten)."

the ability to write to sector 0 from ring3 might arguably be considered a flaw, but it's demonstrated by writing to that sector, not by what you put in that sector...

"They didn't "make" the PC platform use the first sector of the hard-drive as initialization code."

if it had been the second sector or the last sector or a sector in the middle would it have made a difference? no... 'initialization code' has to go somewhere and that's as good a place as any and probably better than most...
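for the curious, the layout being discussed here is simple enough to sketch - boot code in the first 446 bytes of the sector, four 16-byte partition table entries, and a mandatory 0x55AA signature at offset 510... here's a minimal illustrative parser (the field names are mine, and it's run against a synthetic in-memory sector, not a real disk):

```python
import struct

def parse_mbr(sector: bytes):
    """parse a classic 512-byte master boot record."""
    if len(sector) != 512:
        raise ValueError("an mbr is exactly one 512-byte sector")
    if sector[510:512] != b"\x55\xaa":
        raise ValueError("missing 0x55AA boot signature")
    entries = []
    for i in range(4):  # four 16-byte partition entries start at offset 446
        offset = 446 + i * 16
        status = sector[offset]          # 0x80 marks the bootable partition
        ptype = sector[offset + 4]      # partition type byte
        lba_start, sectors = struct.unpack_from("<II", sector, offset + 8)
        entries.append({"bootable": status == 0x80, "type": ptype,
                        "lba_start": lba_start, "sectors": sectors})
    return entries

# synthetic sector: one bootable type-0x07 partition starting at LBA 63
sector = bytearray(512)
sector[446] = 0x80
sector[450] = 0x07
struct.pack_into("<II", sector, 454, 63, 1024)
sector[510:512] = b"\x55\xaa"
parts = parse_mbr(bytes(sector))
```

the point being: this format is ancient and fully documented - there's nothing in it a stealthkit author would learn that wasn't already public...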

"Second of all: you are right, there are backchannels / "trusted areas" where supposedly only people from the industry communicate. However, such an approach is not scalable."

darknets most certainly are scalable...

"You can't create a "separate Internet" just for a particular industry and (a) expect that everybody who matters be part of it and (b) make sure that the "bad guys" don't get in."

the web of trust may not be perfect, but it does work...

"(Just imagine the thought process: who do we include? anti-virus companies? how about anti-spyware? how about personal firewall makers? web security companies? anti-spam? penetration testers? how do you decide? and how do you know that somebody won't leak information? as they say: "what two people know, isn't a secret anymore")"

this is fundamentally NOT how a darknet works...


"Lastly: exploit code (aka POC) is the best documentation for a flaw."

the only thing mildly resembling a flaw is the ability to write to the mbr from user code, which is demonstrated by the loader and doesn't require a stealthkit at all... using the loader to deploy an mbr stealthkit was them stroking their own egos ("look at me, i can turn anything into important 'rootkit' research")...

cdman83 said...

It's me again :-)

My point with all the talk about MBR and initialization code was that the researchers didn't "set the stage". They had to work inside of a very well defined system.

And another point relating to the POC being the best documentation: it was only after Joanna wrote a POC demonstrating how she could infiltrate kernel mode by writing to the hard disk (to the page file, more specifically) that Microsoft took some measures to prevent that, which IMHO demonstrates that 1 POC can achieve something that 10 weeks of technical argument can't.

With regards to darknets / web of trust: IMHO they would only serve to marginalize people who are new (because "hacking" is not a skill, it's a mindset and anybody can possess it, regardless of their technical knowledge) and make them turn to the "dark side" (which would be easier to get into than the legitimate discussions). It would do nothing to prevent "leaks", since it is enough to have just one out of the many tens of thousands of members be a "mole" for the whole thing to be compromised.

My conclusion (at least to this comment) is: research can be carried out the fastest when there is a free flow of information. Anything aimed at censoring it (be it with the best of intentions like "national security") will only slow it down without any other benefits.

kurt wismer said...

"My point with all the talk about MBR and initialization code was that the researchers didn't "set the stage". They had to work inside of a very well defined system."

and my point on that subject was that it wasn't a flaw and therefore not a valid subject for vulnerability disclosure (and even if it were, it's over 20 years old and well known)...

"And another point relating to the POC being the best documentation: it was only after Joanna wrote a POC demonstrating how she could infiltrate kernel mode by writing to the hard disk (to the page file, more specifically) that Microsoft took some measures to prevent that, which IMHO demonstrates that 1 POC can achieve something that 10 weeks of technical argument can't."

yes it can achieve something but what precisely do you think it achieved? was there an actual flaw involved? was there a mistake that could be fixed and/or learned from?

writing a proof of concept exploit for a flaw demonstrates the existence of the flaw and points to why it needs to be fixed... writing proof of concept malware has nothing to do with flaws or learning but rather turns a theoretical attack into a practical one for which countermeasures are needed...

there are a countably infinite number of possible attacks - it's not only impractical, it's infeasible to enumerate and add countermeasures for them all... as such, the only practical approach is to stick to the ones where the risk isn't purely theoretical - and supposed white hats do nobody any favours by increasing the size of that set...

"With regards to darknets / web of trust: IMHO they would only serve to marginalize people who are new (because "hacking" is not a skill, it's a mindset and anybody can possess it, regardless of their technical knowledge) and make them turn to the "dark side" (which would be easier to get into than the legitimate discussions)."

a) i'm not talking about using darknet channels for all research communications, rather i'm suggesting its use exclusively for the exchange of sensitive materials (ie. malware)...
b) i've been preaching the virtues of using the web of trust for malware sharing (something the av community has practiced for decades) for years and you are the first verifiable good guy to whine about it being too hard to earn trust... that difficulty is the price of doing things the right way - things worth doing aren't always easy...

"It would do nothing to prevent "leaks", since it is enough to have just one out of the many tens of thousands of members be a "mole" for the whole thing to be compromised."

that's like saying an umbrella with a small hole in it is useless because the rain can still get through... yes the web of trust is not perfect, but it's better than nothing (and nothing is exactly what's used by granting unrestricted public access)... not only is it an entirely specious argument, history has proven it false because it is in use and although occasional leaks have happened those leaks have resulted in people refining their circles of trust to exclude parties they mistakenly trusted in the past...

"My conclusion (at least to this comment) is: research can be carried out the fastest when there is a free flow of information. Anything aimed at censoring it (be it with the best of intentions like "national security") will only slow it down without any other benefits."

and what you fail to realize is some research (such as malware creation) doesn't benefit defenders in the first place...

cdman83 said...

"and my point on that subject was that it wasn't a flaw and therefore not a valid subject for vulnerability disclosure (and even if it were, it's over 20 years old and well known)..."

If it wasn't a flaw, why are we talking about it? If it is harmless, then there should be no objection to publishing the data.

"yes it can achieve something but what precisely do you think it achieved? was there an actual flaw involved? was there a mistake that could be fixed and/or learned from?"

What it achieved is that the raw disk is no longer writable in Windows Vista. The flaw was that this was permitted in the first place. And yes, we learned a lot from it (in fact, when going public, the research contained suggestions on how to fix it, including avoiding swapping out the kernel address space and/or encrypting the swap file).

"writing a proof of concept exploit for a flaw demonstrates the existence of the flaw and points to why it needs to be fixed... writing proof of concept malware has nothing to do with flaws or learning but rather turns a theoretical attack into a practical one for which countermeasures are needed..."

Code is code. It's written in a programming language and compiled down to machine code and executed by the CPU. There is no "evil bit" saying "this is an exploit" or "this is malware". Code is code is code. Malware is code. An exploit is code. In fact, the definition of malware is so loose that it can cover exploits. Malware can use exploits to spread. Exploits can drop / execute / download malware during their execution.

My point: code is code is code. Separating "exploit research" and "malware research" and having different standards for them is illogical, since both are basically the same.

"there are a countably infinite number of possible attacks - it's not only impractical, it's infeasible to enumerate and add countermeasures for them all... as such, the only practical approach is to stick to the ones where the risk isn't purely theoretical - and supposed white hats do nobody any favors by increasing the size of that set..."

This is just (IMHO) a "stick our heads in the sand" attitude. If people studying today's malicious code can extrapolate with some reasonable level of confidence what tomorrow will bring, they should do so (and of course take the next logical step and prepare our defenses). There are many shades of gray (or in this case white - like telling / not telling the vendor in advance, how much time is given to the vendor, etc.), but I'll take, any time, a researcher who goes public with her/his results over hoping that "nobody from the bad guys will find it" and assuming that if somebody does, we'll get to find out before it wreaks havoc.

I think (and please correct me if I'm wrong, I don't want to put words in your mouth) that the whole ANI file exploit is a good example of what you advocate and of its shortcomings. It was first discovered in the wild; it was not published by a security researcher, but rather directly incorporated into the attack mechanisms of the "bad guys". When the news finally broke, they had already used it to compromise many, many computers (again, how can we guarantee that if an exploit appears on the black market we would get word of it in a timely fashion?). Now compare this to all the problems reported to Microsoft where they were first fixed in patches and public exploits appeared only afterwards (either by the original researcher or by reverse engineering the patch). Which situation caused less damage?

Finally (because I have some work to do now, sorry, but I'll get back as soon as possible) to comment on an older remark of yours

"the ability to write to sector 0 from ring3 might arguably be considered a flaw, but it's demonstrated by writing to that sector, not by what you put in that sector..."

No, the flaw demonstrated was the injection of code into kernel mode from real mode. Also they didn't write "malware" (if we choose to create some arbitrary categories - which - as I mentioned earlier - is illogical), they created code. Code is code is code.

kurt wismer said...

"If it wasn't a flaw, why are we talking about it? If it is harmless, then there should be no objection to publishing the data."

not being a flaw is different from being harmless... a baseball bat can be very harmful to my head, but that is not due to any flaw...

"What it achieved is that the raw disk is no longer writable in Windows Vista. The flaw was that this was permitted in the first place."

and that flaw could have been demonstrated by writing anything to the disk... this is the same flaw used by the loader for the mbr stealthkit; and it was the loader, not the stealthkit, that demonstrated it... the stealthkit was unnecessary... a hello world mbr would have worked just as well and been less useful to the bad guys...
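and to be concrete about how little is needed: a "hello world mbr" really is a dozen lines - an inert marker in place of boot code, zero padding, and the mandatory signature... a hypothetical sketch (obviously never to be written to a real drive):

```python
def hello_world_mbr() -> bytes:
    """build a benign 512-byte boot sector: an inert marker string
    (in place of real 16-bit boot code), zero padding, and the
    mandatory 0x55AA signature at offset 510."""
    payload = b"hello, world"             # inert marker, not executable boot code
    sector = payload.ljust(510, b"\x00")  # pad out boot code + partition table area
    return sector + b"\x55\xaa"           # boot signature

mbr = hello_world_mbr()
```

writing any such sector through the raw-disk path would demonstrate the ring3 write capability exactly as well as a stealthkit loader does, with nothing for the bad guys to reuse...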

"Code is code. It's written in a programming language and compiled down to machine code and executed by the CPU."

mammals are mammals, they have cells, they eat, they excrete - does that make me the same as my dog? no...

"There is no "evil bit" saying "this is an exploit" or "this is malware"."

what there is is the fact that the class of malware known as exploits depend on and therefore demonstrate/prove the existence of vulnerabilities while the entire rest of the malware set does not... the only way a virus or worm or stealthkit or rat or bot program can demonstrate a vulnerability is if it uses exploit code - exploit code that can (and should) exist separately and demonstrate the vulnerability without the viral/stealth/bot-like baggage...

"My point: code is code is code. Separating "exploit research" and "malware research" and having different standards for them is illogical, since both are basically the same."

both are threats but one is completely unnecessary for demonstrating the existence of a vulnerability - how is that basically the same?

"This is just (IMHO) a "stick our heads in the sand" attitude. If people studying today's malicious code can extrapolate with some reasonable level of confidence what tomorrow will bring, they should do so"

a) extrapolating what tomorrow will bring and sharing samples of it with the world are 2 entirely different things...
b) it remains to be seen if that extrapolation is possible in general...
c) we'll never know if that extrapolation was possible in this case because the research sample was what the attack was based on - instead of predicting the future they helped shape it...

"I think (and please correct me if I'm wrong, I don't want to put words in your mouth) that the whole ANI file exploit is a good example of what you advocate"

if white hats had discovered the ANI vulnerability first and released benign exploits for it i wouldn't have had a problem with that... the reason? because there was an actual vulnerability involved and an exploit is the only reasonable way to demonstrate a vulnerability... if one of them had built a worm or some other type of malware out of the exploit, however, i would have taken aim at them as soon as i found out about it...

"Now compare this to all the problems reported to Microsoft where they were first fixed in patches and public exploits appeared only afterwards (either by the original researcher or by reverse engineering the patch). Which situation caused less damage?"

clearly when a vulnerability gets patched before exploits appear in the wild, that is the situation with less damage... however this has absolutely nothing to do with the question of making benign exploits vs. making other forms of malware...

"No, the flaw demonstrated was the injection of code into kernel mode from real mode. "

that is not a flaw, it's a foregone conclusion - when you control the machine at a lower level than the operating system there's nothing that can be done to protect the operating system from that...

"Also they didn't write "malware" (if we choose to create some arbitrary categories - which - as I mentioned earlier - is illogical),"

they created a stealthkit, which was unnecessary to prove the existence of the flaw of allowing direct writes to the mbr from user code...

"they created code. Code is code is code."

if all code is created equal, as you are absurdly suggesting, then why are there anti-malware applications but no anti-word-processor applications?