one thing that the current controversy over the consumer reports av test brings to light is that people really don't understand the ethical considerations involved...
if you create a virus and no one ever hears about it or encounters it there is no ethical problem... it's analogous to that old question of if a tree falls in the forest and no one is around to hear it, does it make a sound?... sound being merely an interpretation of vibrations in a medium, without anyone or anything to measure and/or interpret those vibrations there is no sound (of course this assumes an incredibly empty forest)...
if someone simply hears about it then there is a problem, as it lends credence to the argument (put forth by those who are not careful or even slightly responsible with the viruses they create) that what they're doing is OK...
if they encounter it as an advertised virus sample (ie. something to be handled with care to avoid the live infection scenario) then it is up to that person to treat it carefully and responsibly... and if that person should share it with someone else, that person must also treat it carefully and responsibly or else it might escape... the more people it gets shared with, the greater the likelihood of its escape, thus creating the live infection scenario... this is a problem because of the impact such an escape can have and the fact that once you share it with someone you basically have no control over whether that escape happens or not...
if someone does encounter it as a live infection then there is a problem, because viruses cost time and money to get rid of, they damage the integrity of everything they infect, and they can (either intentionally or not) destroy data and render services and/or resources inaccessible or inoperable... worse still, live infections can persist for a long time after their release - unlike an exploit, which stops being a threat after the people using it move on to something else, a virus will just keep going and going without intentional assistance... the monkey virus, for example, was in the wild for over a decade despite the fact that anti-virus software was able to detect and remove it for almost that entire time...
what, then, if the purpose for making the virus outweighs the risks? vesselin bontchev covered this hypothetical condition in his paper Are "Good" Viruses Still a Bad Idea? and the conclusion was that no matter what the virus was intended to do, the function could just as easily be performed by non-replicative code, and thus without many of the risks inherent in self-replicative code...
vesselin didn't specifically cover anti-virus testing as one of his examples in that paper, but then again the viruses used in virus detection tests don't actually carry out any function, good or bad - they just sit there waiting to either be detected or not...
obviously viruses are needed in order to carry out virus detection testing, but do we need to create new viruses for the task? viruses are being created by the bad guys at an ever-increasing rate and there's no sign of that ever stopping, so there's certainly no shortage of viruses to use, not even if you constrain yourself to just new viruses... furthermore, while the risk of virus escape is pretty much the same whether it's a virus captured from the wild or a virus you create in a lab, the implications of the escape of a virus you create are much worse... accidental escape of a real-world virus means you're responsible for contributing to the spread of an existing threat, while accidental escape of a virus you create means you're responsible for both the spread and the creation of a brand new threat... that is the very definition of being part of the problem rather than part of the solution...
of course critics will say "but you can take precautions against accidental release, so it's really not an issue"... this is foolish; preventative measures are never 100% effective - the av industry, in spite of the extreme care it takes, has still had accidents... never mind the fact that proper virus detection tests don't use first-generation samples, they often replicate the samples to 3 or more generations to ensure they actually are viruses - a high-risk activity... add to that the fact that for the test to be good science it has to be exactly reproducible, which means other people have to be able to get samples of the viruses - at which point the precautions would be moot because the viruses would no longer be under your complete control, and never would be again...
given that, and the fact that tests pitting slightly old anti-virus products against viruses that appeared after those products were released give essentially the same results (ie. the av products don't fare well at all) as tests using viruses you create in a lab, we once again arrive at the same principle vesselin's conclusion gave us - that making viruses represents an unnecessary risk, and not just a risk to one's own computers or data but a risk to society at large... creating new risks and deciding unilaterally that society should be subject to those risks so that you can achieve some goal (especially when that goal can be achieved without those risks) is quite clearly unethical...
2 comments:
I would disagree. Virus creation can be a good thing.
Wildness isn't an issue if your intended target is taken down before that virus can be saved and passed to others.
you may not realize it, but you've just contradicted yourself. you're suggesting viruses can be good (apparently when used as a weapon) if they kill the target machine before they can spread.
if it doesn't need to spread then it doesn't need to be a virus and can be something else (like a trojan).
you should definitely check out vesselin bontchev's paper. it discusses at some length the question of whether something that can be done without self-replication can still be considered good when done with self-replication.