Saturday, March 21, 2009

the best laid plans of mice and men often go awry

by now news of the bbc's botnet blunder has spread pretty far... the bbc's actions seem to have violated the law, and there's no question that they were unethical and that prevx's complicity also points to an ethics problem in that company...

many people have written about it already, but alex eckelberry's point about the potential for unintended consequences in taking down a botnet reminded me of a discussion at securosis that i had intended to revisit here but never got around to...

the securosis post in question involved a proposal to set up some system by which an authority could shut down botnets - basically to nullify the legal and ethical hurdles that currently keep most malware researchers from taking down the botnets they're studying...

my two comments on the subject were as follows:
#
kurt wismer Feb 17

you make a compelling argument - botted machines are a public security hazard and some hazards are grievous enough to warrant unauthorized intervention…

i instinctively rebelled against this notion because i don’t like the idea of authorities mucking around on my computer out of some potentially misguided notion that they know better than i do… but i can’t find any flaw in the applicability of your analogy…

the only problem i foresee is that if the bad guys can’t hide their creations behind legal red tape then they’ll hide them behind something equally compelling, like commands to self-destruct and wipe the host machines (to get rid of evidence and also to just be mean) if the network is tampered with… this switch from legal to technical controls may mirror anti-tampering efforts in other domains… if they can figure out a way to make killing the botnet do more harm than good then it will be equivalent to the situation we’re in now and no change in law will affect such a technological adaptation…
#
kurt wismer Feb 17

@rich:
i’m not sure the good it would do would outweigh the bad… when 1,000,000 people suddenly have no operating system, what do you think will happen? steve ballmer is already balding, the rest of his hair would be gone the instant microsoft started receiving support calls from all the victims… and that’s just the home users…

what happens when some of those machines are in the enterprise? or in government or military? what if they’re part of critical infrastructure? worse still if it’s in such machines in other countries - taking down the botnet could cause an international incident…

self-destructing botnets are something i wouldn’t want to touch with a 10 foot pole…


currently, ethical malware researchers steer clear of tampering with botnets or the machines they're on - if we change the rules of engagement, if the forces (whatever they may be) that currently prevent us from tampering with botnets went away and we started behaving like the bbc did then the malware authors would have to adapt and self-destructing botnets are an obvious technical approach to regaining the tamper-resistance they currently enjoy thanks to the good behaviour that makes the good guys so 'good'...

2 comments:

pbust said...

While I agree with most of your comments on the ethical issues, and leaving them aside for a moment, there's one point you did not touch. And that is that there's always the possibility of testing whether the self-destroy/uninstall command of a botnet works as expected, without negative impacts on the OS or common applications. If such validation is performed then most of the arguments against taking the botnet down seem to lose importance (again, without going into the ethics).

kurt wismer said...

in theory, yes, you could test... but how accurate and globally applicable would the results of said test be?

there are a variety of complications that would make me avoid a take-down action on the scale of a typical botnet even with encouraging test results - not the least of which being things like unexpected/unintended behaviour due to bugs or interaction with other software, as well as the familiar prospect of malware writers putting code in their malware to check for test environments so as to behave differently in them than in the real world...
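to make that last point concrete, here's a minimal sketch of the kind of environment check malware analysts routinely run into - every name and heuristic below is illustrative (made up for this example), not taken from any real sample, and real samples use far more signals... the point is only that a read-only check like this is trivial to write, so a "clean" result in the lab tells you little about behaviour in the wild:

```python
# hypothetical indicator lists - illustrative only, not from any real sample...
# real anti-analysis code checks many more artifacts than these
VM_MAC_PREFIXES = ("08:00:27", "00:05:69", "00:0c:29")  # common virtualization vendor prefixes
ANALYSIS_PROCESS_HINTS = ("wireshark", "procmon", "vboxservice")


def looks_like_test_environment(mac_address: str, running_processes: list[str]) -> bool:
    """return True if common virtualization/analysis artifacts are present..."""
    # a MAC address from a known virtualization vendor suggests a VM
    if mac_address.lower().startswith(VM_MAC_PREFIXES):
        return True
    # the presence of well-known analysis tools suggests a researcher's machine
    procs = [p.lower() for p in running_processes]
    return any(hint in p for p in procs for hint in ANALYSIS_PROCESS_HINTS)


# a bot that branches on a check like this could uninstall cleanly during
# validation and still do something entirely different on real hosts
print(looks_like_test_environment("08:00:27:ab:cd:ef", ["explorer.exe"]))
print(looks_like_test_environment("aa:bb:cc:dd:ee:ff", ["explorer.exe"]))
```

nothing here does anything harmful - it only reads its inputs - but a branch on its return value is all it takes to make test results non-representative...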