just a quick update / mea culpa.
i stand by the general sentiment expressed in my previous post about research not always being victimless, but i've now finally gotten a chance to look at the specific example of the polypack service (i was unable to before because the site was down, so i had to go by what was written about it rather than what was actually on the site).
i don't know if this is a change from how things were previously, but the polypack service is currently not open to the public. that's great news. although it's still possible that some among the select few who do gain access will be untrustworthy, at least it's not a free-for-all; the people behind it did put some thought into the potential consequences - something that's all too rare these days.
it would still be better if they weren't creating new malware at all (why not pack the eicar standard anti-malware test file instead?), but i felt obliged to at least give them credit for not being completely naive about openness.
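for those unfamiliar with it, the eicar test file i'm referring to is just a 68-byte ascii string that anti-malware vendors have agreed to detect as though it were malware. a quick sketch (python; the function name is mine, purely for illustration) of how a packing experiment could use it as a harmless stand-in for a real sample:

```python
# the eicar test string - a harmless, industry-agreed stand-in for malware.
# any decent scanner flags it, so it exercises detection pipelines safely.
EICAR = r'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*'

def make_test_sample() -> bytes:
    """return the eicar test file contents as bytes, ready to feed to a packer."""
    data = EICAR.encode('ascii')
    # per the eicar spec, the file is exactly 68 bytes and starts with 'X5O!'
    assert len(data) == 68 and data.startswith(b'X5O!')
    return data
```

note that merely writing this string to disk will (by design) trip most resident scanners - which is exactly why it makes a safer test subject than freshly packed live malware.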
Monday, August 10, 2009
Saturday, August 08, 2009
research isn't always victimless
i read david maynor's negative reaction to an article by roel schouwenberg on the ethics of a particular instance of security research dubbed 'polypack' and, well, i'll be honest - my initial reaction was that someone needed to whack maynor upside the head with a clue-stick.
i didn't run right out and blog about it, mind you. as frustrating as this pervasive lack of understanding is for me (maynor's opinions echo those of other security pundits i've heard countless times before), standing up on a soapbox and declaring 'this is wrong' has not been a particularly persuasive course of action.
this has been particularly frustrating to me because these issues are so clear-cut to me, and i've been unable to understand why other people can't see them as clearly as i do (or as roel seems to, for that matter). to me there is clearly a significant ethical dimension to things such as online services that turn known malware into unknown malware. that said, i'm not the kind of person to shy away from re-examining my own basic assumptions, and one of the biggest assumptions i make about other people is that they are, more or less, just like me - they know the same things i know, they think about the same things i think about, they have an awareness of and appreciation for the same things i do. i know, i know, i assume way too much - while it's a nice theory to think that we are more similar than different, there are obviously more differences than i tend to account for. it's difficult to account for those differences since i only really know how i work, not how others do - maybe if i could read minds it would be different.
so, throwing out that assumption i recalled a comment david harley left on a post i made about fred cohen not too long ago. he said he suspected fred didn't really spend all that much time thinking about viruses anymore. it stands to reason that most people, even most security folks (so long as they're outside of the anti-malware subdomain of security) don't actually spend all that much time thinking about malware issues. specifically, i suspect that they don't really spend that much time thinking about the potential consequences of efforts like the polypack project, or the race2zero, or any number of other things i could name. why should they? the consequences are so far removed from them in their ivory research towers (not to mention their lack of focus on the very domain where those consequences would manifest themselves) that they're unlikely to learn of any consequences should they become real.
that's not always the way it goes, of course. look at bootroot, created by derek soeder and ryan permeh of eEye digital security. that was the basis of the mebroot mbr malware that became so well known and prevalent in the wild. the consequences of their actions are really quite plain to see - they armed the bad guys and the bad guys pulled the trigger. they are at least partially responsible for the damages done by this malware - and i say this not as some holier-than-thou security preacher, but rather as someone who has himself quite possibly unwittingly aided the bad guys who made conficker (though i'd like to think it wasn't quite as predictable as the bootroot case). there are real consequences to your words and actions in this field that i think people remain largely ignorant of and i think that if people were more aware of this and had a better appreciation for this then they'd likely see things through the same sort of ethical lens that i do.
i know of one well documented case where this awareness was arrived at a little too late. mike ellison, once known as stormbringer (among other names), was a virus writer in the vx group phalcon/skism who published an open letter in alt.comp.virus in 1994 announcing his retirement from the virus writing scene and recounting his encounter with a victim of his efforts (from google's archive):
Greetings,

For those of you who know me, you may know my various handles, my activities, and causes for what little fame I may possess... I am Stormbringer of Phalcon/SKISM, Black Wolf, and Jesus of the Trinity, and even some others probably no one would recognize. And today, I am retiring.

Last night, I got email from a guy in Singapore.... a nice guy, really, extraordinarily polite for the circumstances.... he had been infected with one of my viruses, Key Kapture 2, by some guy who wanted to fuck with his computer. It was filling up his drive (he had only 2 megs left) with captured keystrokes, and he had no way to disinfect it, so he wrote me, asking me for help to cure one of my own creations that had attacked his computer... I called him up voice, and talked with him.... and even then he was kind, almost like he didn't really blame me, although I feel he should.

Now, I never released my viruses against the public myself, never wanted them to be in the wild, but it happened. Fortunately, I also never wrote destructive viruses, so I didn't trash the poor guy's computer. It will be fixed, with the main harm being his time and security.

For some reason, this really shook me. I had always written my viruses as educational, research programs for people to learn more from, and for myself to explore my computer and a type of program that I found almost obsessively interesting. All of my viruses were written to explore something new that I had learned, something different, something cool..... and yet, I still managed to hurt someone...

A lot of you people are probably thinking that I'm a wuss, or whatever, and I really could care less.... fucking up people's stuff was never my intention, and yet it happened. I have decided that it is time for me to quit writing viruses, and continue on with something more productive, perhaps even benificial to others.

Don't really know what else to say, 'cept that it was an interesting journey..... and I'll still be hanging around somewhere on the nets.....

Cheers,
Stormbringer, Phalcon/Skism
Ex-Virus Writer

here we have someone who you and i might have considered a bad guy but who didn't consider himself a bad guy. someone who was exploring and researching with computer viruses, and when he finally became aware of the consequences his actions had had, when he discovered how real those consequences were, that 'research' came to an abrupt end.
without that awareness, though, it's not so much a matter of ethics for people because they aren't consciously choosing an obviously evil path. saying that they lack ethics is a little bit like calling someone a liar when they don't know what they're saying is false. i'm as guilty of this as anyone, perhaps even more so (just look at how often i've used the ethics tag on this blog). those of us who are aware of and appreciate the consequences refer to it as a matter of ethics because we see it that way - because for us it would be a matter of ethics - and we (perhaps incorrectly) think that others should be able to see the ethical dimension as well, should be aware of the consequences of their actions, especially with the not so subtle hint that we think someone's ethics are lacking.
hinting isn't working, though, so let's try some thought exercises instead. for the first exercise i want you to imagine that you've discovered a new way to attack something. now let's say you publish your findings and you put a demonstration up on the web so that people can play with and build upon your creation (for some of you this might be easier to imagine as you've actually done this). and then let's say someone did just that - they built on your work, but they did so with malicious intent. now i want you to imagine having a conversation with a former small business owner whose company went belly up because of the various costs of downtime due to malware based on your designs. do you express regret or remorse? do you feel for this person? how about the mother who's lost the photographic memories of her child because malware you helped create wiped all her data - how would that make you feel? or then there's the retiree who now supplements his diet with pet food because his investments were wiped out with the aid of tools you inadvertently helped create and his pension isn't really enough on its own to make ends meet. how do you think it would feel to know that you had darkened people's lives?
these are not impossible or even improbable outcomes, but i can imagine that there would still be some reluctance to accept this view. it's very natural - it's called denial. so for the second thought exercise i'm going to stay away from the hypothetical and instead deal with the historical - specifically, your history, gentle reader. i want you to remember something. i want you to think back to a point in your past where you hurt someone unintentionally. maybe it was physical, maybe emotional, maybe something else, but regardless of the type of hurt it should be something where you really didn't think anything would go wrong. i want you to try and remember it as clearly as possible. remember how easily it seemed to just happen, and remember how hurt the person was. finally, i want you to try and remember how you felt when you realized it was your fault, that it could have been avoided if only you'd done things differently, if only you'd thought about the consequences of your actions. i want you to remember what it was like both before and after you gained that awareness, and compare it to how things are for you now - are you really aware of the consequences of your or other people's actions with respect to malware now?
most people should have at least one point in their lives like this that they can think back to. if you don't then you're either some kind of saint, or you lack the self-awareness to realize what an asshole you've been all your life. awareness is a funny thing, though. those of us who have it usually take it for granted, and those of us who don't rarely know it's missing. i'll probably still call it a lack of ethics in the future when people create malware or do things that help others create it, but maybe i'll be more conscious of the possibility that they simply know not what they do, and maybe those who still don't see the ethical dimension to aiding the bad guys will at least have a better idea of where people like me are coming from when we get on their case about it. research or proving a point seem like hollow justifications when victims enter the picture.
(edited: corrected mike ellison's name - thanks graham)