so ed moyle over at security curve has responded to some points i made in a previous post about the anti-malware community's ethical stance on malware creation.
ed's response is twofold. first he countered my assertion about what would happen if the CDC went around creating new diseases with a practical example, pointing out that some biologists actually do create new viruses. a little further on he mentions the concept of ethical relativity, and this is important because ethics are relative in a number of different ways. not only can ethics be relative in terms of degree (A is ethically worse than B) but also in terms of frame of reference (the ethical rules for one group don't necessarily apply to a different group - for example, there are things that would be unethical for a doctor to do but might be fine for you or me). i chose the CDC specifically because, with their focus being on the control/prevention of disease, they are more analogous to the anti-malware community than biologists in general are. if there were such a thing as computer virologists (or more specifically, if there were ones who hadn't already chosen a side in the pro/anti-malware battle) they might be more in line with biologists ethically. from my perspective, though, i have to wonder if that makes them amoral with respect to malware.
philosophically (which is where ed's mention of ethical relativity actually came from) ed made the argument that something normally considered unethical might be considered alright if there were a bigger ethical 'win' as a result. what he's actually getting at is something that might be more readily recognized as the concept of the lesser of two evils. he contends that there might be scenarios in the realm of research where the good done as a result of creating malware outweighs the bad. i'm going to do something totally unexpected and agree with him, but with a caveat that you'll see in a minute.
from early on, fred cohen held out the possibility of beneficial viruses (no doubt there are even earlier citations possible but this will do), and in the beginning i thought they were possible too, until i read vesselin bontchev's paper Are "Good" Viruses Still a Bad Idea? vesselin made perhaps the most salient point of all about the criteria by which a supposed good virus can be determined to be actually good: the "good" end result has to be something that can't be achieved any other way.
now vesselin had it slightly easier here because he was looking specifically at viruses, at self-replicating malware, which is a more narrowly defined problem than 'good malware' or 'good reasons to create malware in the lab'. vesselin's argument didn't leave a lot of room for good viruses - virtually everything you can think of doing with a self-replicator can also be done with non-replicative code and thus without the risks inherent in self-replication.
i mention vesselin's paper because that salient point extends to this case as well. unspoken in ed moyle's bank robbery example is that there is only 1 way to keep the hidden girl alive - by lying. if there were another way, would lying to save the little girl's life still be ok? if you choose the lesser of 2 evils when a 3rd option with no evil whatsoever is available, doesn't choosing the lesser evil mean that you're still doing evil unnecessarily?
that's where things stand in the anti-malware community. although it may be hypothetically possible to construct a scenario where malware creation is the least evil option, to my knowledge no one has managed to present such a scenario (with the exception of exploit code* for demonstrating the presence and importance of vulnerabilities), and so the no-malware-creation rule has no other good exceptions yet. the need for new malware in testing (the root of the current discussion of malware creation ethics) can already be met in 2 different ways that don't involve malware creation at all: retrospective testing, or real-time/real-life testing against suspect samples as they're discovered.
(* 2010/07/19: edited to add the exception case for exploit code, as pointed out by vesselin bontchev)
4 comments:
My colleagues fall on both sides of this argument, and there are valid points to each side. Ultimately, though, I don't see the benefit of the "ethical" creation of malware. To study it? By creating it, the study is invalidated -- it simply is not what the "bad guys" are creating, so studying something "ethically" created provides little (if any) benefit. What other possible good reason is there to create "ethical" malware? There is none.
Thanks again for a great article!
i can't honestly think of a situation where it works out for the best either. maybe someday someone will come up with one. i know it's been a couple of decades now and no one has really come up with anything good yet, but the landscape changes over time so there's no way to know what the future might bring.
A few points here.
First of all, despite being an honorary member of AMTSO, I somewhat dislike this document of theirs. It's too politically correct. They did their best to present all possible arguments from both sides (which is good) - while being way too careful to avoid actually taking a stance on this matter. I don't like the latter. I think an organization like AMTSO should state clearly their ethical position. Either they think that virus creation is acceptable (in which case I would resign), or they do not (in which case they should clearly say so).
Also, I very intentionally cover only virus creation in my paper - not malware creation in general. This is because I think that in some cases the creation of non-replicating malware, when done in a responsible way, can be useful and cannot be avoided. For instance, if you want to demonstrate a security flaw in a particular product, often you have to do it by writing an exploit for it. (Just pointing to what the product does wrong is often not sufficient, because it brings "but it's probably not exploitable" arguments from the vendor.) Note that I didn't say "publish the exploit on the Internet". At the same time, I have never been able to come up with a useful task that can only be performed by creating a virus. (And, no, testing AV products can't be done properly this way, either.)
Finally, regarding the "greater good" argument. Yes, I would agree that sometimes it is necessary to do what you consider evil just to prevent a greater evil. But that's not universal! There are degrees of "evil". Yes, I consider the creation of viruses unethical. Would I create a virus, in order to save the human race from nasty alien invaders? Yes. But would I create one to locate or otherwise inflict harm on what some government has decided to dub "terrorists"? Most adamantly no, even if I strongly disagree with the actions of the said terrorists.
@bontchev:
you got me - i forgot about exploit code when i posted my previous comment about not being able to think of a case where making malware might be the least evil option. i believe i have stated elsewhere on this site that exploit code is a bit of a special case in that regard. i should clear that up in the main body of this post - thanks for reminding me.