... and some are considerably less equal than others.
one of the most fundamental problems with URL shorteners in general is that they obfuscate the final destination. when you follow one of these shortened URLs you're often putting your online safety at risk because you literally can't see where you're going.
some URL shortening services try to mitigate this problem. tinyurl.com, for example, allows anyone to add the preview subdomain to a shortened url (ex. tinyurl.com/whatever -> preview.tinyurl.com/whatever) in order to change the behaviour of the shortened URL. instead of taking you directly to a potentially suspect site, it takes you to a safe page on tinyurl.com's own site that displays the URL of your final destination so that you can examine it and make an informed decision about whether you want to continue. it's a nice feature, but it's hidden - only people familiar with tinyurl.com know about it and have the know-how to use it.
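for the curious, the rewrite is mechanical enough that you can script it yourself. here's a minimal sketch (my own illustration, not anything tinyurl.com publishes) that turns a tinyurl.com link into its preview form so you can check the destination before following it:

```python
from urllib.parse import urlsplit, urlunsplit

def to_preview(url):
    """rewrite a tinyurl.com link into its preview.tinyurl.com form.
    only handles the tinyurl.com convention described above."""
    parts = urlsplit(url if "://" in url else "http://" + url)
    if parts.netloc.lower() in ("tinyurl.com", "www.tinyurl.com"):
        return urlunsplit(parts._replace(netloc="preview.tinyurl.com"))
    return url  # not a tinyurl link; leave it alone

# ex. to_preview("http://tinyurl.com/whatever")
#     -> "http://preview.tinyurl.com/whatever"
```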
other services like bit.ly forgo letting the user decide on safety and actually filter bad URLs themselves, so as to protect users even when they don't care to or don't know how to protect themselves. this is also a nice feature, but there are limits to the effectiveness of filtering badness (as readers of this blog surely are aware).
yesterday, however, i encountered one of the worst URL shortening behaviours i've seen in quite some time. the th8.us URL shortening service, instead of redirecting you to the original URL like most URL shorteners do, shows the contents of the original URL in a frame underneath some google ads (at least that's how it appeared in iron - a privacy-enhanced derivative of google's chrome that i used in a sandbox after seeing some rather strange behaviour in firefox). this means that you normally can never see where you're going, even after you've gotten there. it gets worse, though.
if you're like me and normally use firefox with noscript, what you'll likely see when trying to visit a th8.us shortened URL is a blank page with a noscript icon on it, indicating that you have to add the site to your whitelist in order to see any of the content. that means that not only can you not see where you're going, you have to lower your defenses in order to get there without knowing where 'there' is. it gets even worse than this, however. not only do you need to add th8.us to your whitelist, you also have to whitelist the site with the actual content (so technically noscript is giving you at least some idea of where you're going) even though that site wouldn't require whitelisting if you went there directly. this essentially amounts to forcing an unnecessary privilege escalation for a site that would normally have been usable and (more importantly) examinable without it - a privilege escalation, moreover, before you have any information with which to judge the safety of the page or evaluate whether whitelisting is truly required for your purposes.
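if you'd rather not whitelist anything at all, one workaround (a rough sketch of my own, and it assumes the destination actually appears as a frame src in the shortener's html rather than being built up with javascript) is to fetch the page yourself and pull out any frame targets before deciding whether to visit:

```python
import re
import urllib.request

def peek_frame_targets(short_url):
    """fetch a shortener page and list any frame/iframe src values found
    in the raw html - without running any of its scripts. if the page
    builds its frame with javascript, nothing will show up here."""
    req = urllib.request.Request(short_url, headers={"User-Agent": "Mozilla/5.0"})
    html = urllib.request.urlopen(req, timeout=10).read().decode("utf-8", "replace")
    return re.findall(r'<i?frame[^>]+src=["\']([^"\']+)["\']', html, re.IGNORECASE)
```

it's not a general solution - the frame target could itself be another redirect - but at least it gives you a destination to judge before you lower any defenses.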
this is pretty horrendous behaviour from a security perspective (ironic, then, that the person i found using it was a representative of a security software company) and i hope that people see fit to steer clear of services like this, and that th8.us finds some way to improve the security footprint of its service. while they do actually have a preview feature similar to tinyurl.com's, it suffers from the same drawback of being hidden functionality that only people familiar with the service would know about (and i dare say that there are far fewer people familiar with th8.us than with tinyurl.com).
Wednesday, July 28, 2010
making sense of the cyberwar debate
if you follow the security blogs you've probably realized that there's some considerable disagreement about the subject of cyberwar. while i did touch on the subject once before, i don't think that really served to clear up anything.
making things more clear is actually rather important because, although us average folks may not be directly targeted in the course of a cyberwar, the consequences of one would affect us as surely as the consequences of a regular war would. sorting out the cyberwar debate is important because we need to know whether we have cause to be afraid, so that we can act accordingly.
there are basically two opposing viewpoints to this debate. on the one hand you have people like richard bejtlich saying cyberwar is real, and on the other hand you have people like bruce schneier saying the threat of cyberwar has been grossly exaggerated or robert graham who outright says that cyberwar is fiction.
the difficult thing is that both sides actually make excellent, compelling arguments - and if you're like me you probably feel like both are correct. but how can they be when they seem to be contradicting each other?
evidence is usually a good determining factor in a debate, and bejtlich presents some compelling evidence in the form of online attacks (repeated security breaches of the joint strike fighter program) probably leading to a military outcome (the advantage of the military hardware in question being lost, and the US subsequently scaling back its plans for it). that seems pretty convincing to me.
schneier expresses doubt about cyberwar most eloquently in a video of a debate he participated in - in order to have cyberwar you need regular war; cyberwar doesn't make sense without regular war. that's really hard to argue against, and it makes a lot of intuitive sense.
the schneier debate video is interesting to me because as i watched it something seemingly obvious struck me and i was amazed that none of the participants seemed to come to the same conclusion i did. schneier himself came closest when he wrote about two different meanings of 'war'. later, as i read yet another cyberwar post from bejtlich, specifically the quotes from the DoD joint publication 1, the idea that i was on the right track was reinforced.
the reason both sides can seem to be right is that they're talking about two different things. i mentioned that schneier pointed to two different meanings for 'war', and i suppose you could leave it at simple ambiguity of the term (though it seems strange to think of 'war' as being ambiguous), but sometimes ambiguity arises from the fact that there's actually a better/more accurate term.
it turns out there is a word that is similar to 'war', that describes a concept very much related to 'war', that is often used interchangeably with 'war', and often is replaced with the word 'war' simply as a mental/verbal shortcut; and yet a word that actually means something different than 'war'. can you guess what that word is?
warfare.
as closely related as 'war' and 'warfare' are, there are important distinctions to make between them, and in this context specifically it's that warfare can exist outside the strict confines of a formally declared state of hostilities between two or more nation states (aka. a war).
warfare is going on all the time in the form of activities meant to prepare for war - and not even necessarily a specific war, but just war in general. espionage is one example; though it's not generally considered an act of war, the use of spies is so critical to warfare that sun tzu dedicated an entire chapter of "the art of war" to that very subject. there are any number of military exercises that also qualify as warfare, as does the development of new/better tools, techniques, and means of attack. peace-time cyberwarfare could reasonably be understood to include the ongoing enumeration of weaknesses, probing, and (hopefully) non-disruptive breaches and theft of secrets in a wide variety of one's adversaries' networks and systems. war-time cyberwarfare would, by extension, be the disruption of those systems and networks using what was previously found at times that are most advantageous.
from a north american perspective there is no currently ongoing cyberwar because there is no accompanying war to associate it with (at least none where there's compelling evidence that the adversary has included the 'cyber' theatre of combat). furthermore there's nothing i'm aware of to suggest that such a war is anywhere on the horizon. as such the threat of cyberwar can be considered to not be credible at this time. that said, there's no reason to believe that peace-time cyberwarfare isn't going on right now. nation states that intend to enter the 'cyber' theatre during war-time at some unspecified point in the future need to first be prepared to do so, which means gathering information on weaknesses and gaining access beforehand (ie. now). should we be concerned about that? sure, but only to the extent that we would be concerned about any military build-up, and even then we should temper that with the realization that at least part of the build-up is due to the new-ness of this sort of offensive capability (ie. they'll be starting more or less from scratch as opposed to a build-up above and beyond some established baseline) and not take it as a sign of impending attack.
we don't want an opposing nation state to be able to launch a debilitating attack successfully and so finding and eliminating the weaknesses they would try to take advantage of is certainly important, as is developing the abilities to detect attacks and recover from them. but there's no reason to believe they'll be trying to disrupt critical infrastructure anytime soon. as such our reaction shouldn't be characterized by fear, but rather by purpose and informed direction. being prepared is always preferable to the alternative.
Tuesday, July 27, 2010
privacy implications of cloud-based security
in my continuing efforts to get caught up on my rss feeds (i'll get there eventually) i came across an interesting post at the stopbadware blog about establishing expectations for av vendors. it raises some interesting concerns about data collection and lack of transparency / informed consent.
this is a tough call. recall that when trend bought hijack this and added a button to send the log to trend for analysis, people went ape over the idea that it was sending data without informing the user (even though the behaviour seemed clear to me just by looking at what the UI stated).
all cloud-based security inherently is going to collect data and send it into the cloud - that's how you leverage the cloud for security. clearly there is the potential for a privacy violation if the vendor isn't careful and there is certainly room for people to assume there already is a privacy violation. some people are inevitably going to cry foul if you tell them that their security software is sending data into the cloud.
and yet on the other hand if you don't tell them then they can't make intelligent, informed decisions about whether they want to accept the risk associated with cloud-based security technologies and so may fall into a false sense of privacy (not unlike a false sense of security).
i'm a big fan of informed decisions - i think that, given enough information, reasonable people will be able to make intelligent decisions, and hopefully that works out in the vendor's favour (if they're doing enough to protect customers' privacy).
i think security vendors absolutely need to inform their users, and i think the failure to do so should be considered a badware sort of behaviour. i think the risk of backlash can be mostly mitigated by informing the user HOW their privacy is being protected in spite of the data collection (ie. exactly what data is being collected and how the data is being anonymized - maybe even let them audit the data being sent). assuming the vendor is doing an adequate job of protecting users' privacy, only unreasonable people should continue to have a problem in the face of such transparency - and there isn't really much you can do for unreasonable people. all cloud-based technologies involve data collection, so those unreasonable folks will simply have to learn to be reasonable or seek out one of the dwindling number of products that don't have any cloud-based components.
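to make that concrete, here's a minimal sketch of the kind of transparency i'm talking about (purely illustrative - not any vendor's actual implementation, and all the field and function names are my own): collect a hash of the file rather than the file or its path, and let the user dump exactly what would be sent before it goes anywhere.

```python
# illustrative only - not any vendor's actual telemetry format.
import hashlib
import json
import os

def build_cloud_query(path):
    """what a hypothetical cloud lookup would send: a hash of the file's
    contents plus a little context, rather than the file itself or any
    personally identifying path information."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "sha256": digest,
        "size_bytes": os.path.getsize(path),
        "extension": os.path.splitext(path)[1].lower(),  # no directories, no filename
    }

def audit(path):
    """show the user exactly what would leave the machine, before it does."""
    print(json.dumps(build_cloud_query(path), indent=2))
```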
Monday, July 26, 2010
sandboxes, sandboxes everywhere
lysa myers drew my attention to some interesting developments in the sandboxing arena last week. at first i thought she might be referring to security companies adding sandboxing to their arsenal (which i'd really like to see more security vendors do), but that's not it at all. it's actually sandboxing being added to individual applications. not only will adobe reader now come with its own sandbox, but dell is coming up with a sandboxed version of firefox too.
i have mixed feelings about this sort of development. on the one hand it's nice to see sandboxing getting more attention and use, and these efforts will surely bring the technique to the masses. but on the other hand i worry about the effects of baking sandboxing into applications - especially applications that might call each other.
if something is running inside a sandbox and it executes an external program, that program should also run inside the same sandbox (otherwise the sandbox is rather simple to escape from). what happens, then, when the dell kace secure browser (which runs in its own sandbox) launches adobe reader (as it would generally be expected to do when you click on a pdf link)? would we have adobe's sandbox nested within the browser's sandbox? how would that work - specifically, how well (or poorly) would it work? you may recall that google's chrome had its own sandboxing right from day 1 but that there were problems coexisting with other sandboxing technology (that i discussed here). the conflict between chrome and sandboxie seems to have been resolved but, as more and more applications come with sandboxing baked in, further inter-sandbox incompatibilities seem like a pretty likely outcome.
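to illustrate the first point, here's a toy sketch (emphatically not a real sandbox - the class and the wrapper command are stand-ins i made up) of the principle that anything a sandboxed program launches has to be launched through the same sandbox, otherwise the helper runs unconfined and the sandbox is trivially escaped:

```python
# toy illustration only: a stand-in for a real sandbox (sandboxie, chrome's
# sandbox, etc.) to show why child processes must inherit the sandbox.
import subprocess

class ToySandbox:
    def __init__(self, name, wrapper_cmd):
        self.name = name
        # wrapper_cmd is whatever confines a process on your platform;
        # ["firejail"] on linux is one example (my assumption, not something
        # any of the products mentioned above actually use).
        self.wrapper_cmd = wrapper_cmd

    def spawn(self, cmd):
        # the important part: the child is started *through* the sandbox,
        # so a pdf reader launched from a sandboxed browser inherits the
        # same confinement instead of running wide open.
        return subprocess.Popen(self.wrapper_cmd + cmd)

# calling subprocess.Popen(cmd) directly from inside the sandboxed process
# (i.e. skipping spawn) would start the helper outside any confinement -
# which is exactly the escape described above.
```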
not only is cross compatibility between sandboxes a problem, but the question of implementation efficacy is an issue too. chrome's baked in sandbox was easy to escape because it wasn't complete - it was only able to sandbox a narrowly defined set of processes specific to the browser itself, and it couldn't even sandbox the plugins it ran in order to render rich content like flash. it stands to reason that any baked in sandboxing technology is only going to be good enough to sandbox the application it's baked into. if the sandboxing technology were good enough to handle all the secondary processes that an application might launch then it might as well be made into a general purpose sandboxing product instead of being baked into an application. some implementations may be better than others, and people may be lulled into a false sense of security by thinking that the sandbox baked into application X is as good at protecting them as the sandbox baked into application Y.
i'm a big proponent of sandboxing, but i don't think sandbox sprawl is a good thing. it would eventually replace a few discrete sandboxes with known properties and known shortcomings with a ridiculous number of sandboxes with unknowable properties, unknowable shortcomings, and cross compatibility issues. i'd prefer to see users using one or two general purpose sandboxes rather than dozens of custom ones.
Tags: adobe, allysa myers, dell, firefox, sandbox, sandbox sprawl
Thursday, July 22, 2010
disclosure options
kevin townsend drew my attention to this post on the google online security blog about responsible disclosure (should you read the post, pay special attention to the list of authors - if you've been paying attention to the disclosure debate lately you'll recognize one of the names). it discusses the fact that responsible disclosure sometimes fails to provide adequate motivation for software vendors to fix their code and how that actually puts users in danger because it means the vulnerabilities in question won't get fixed.
unfortunately the security folks over at google fall into pretty much the same trap that a lot of the security community falls into - the trap of binary thinking (i'm sure there's a loftier term but this will do). when we talk about disclosure many people only consider two main alternatives: full disclosure and responsible disclosure. in reality, however, there are additional alternatives between those two and even some beyond those two.
unless you're quite new to the security field or have been living under a rock these past couple of years, you've probably heard of partial disclosure - dan kaminsky famously used it for the DNS vulnerability discovered back in 2008. whatever your opinion of partial disclosure might be, it clearly highlights the fact that there are more than 2 options when it comes to disclosure. in fact, partial disclosure opens up an entire range of options depending on how much information you disclose. the number of options for partial disclosure is limited only by one's own imagination and creativity.
what that means in practice is that it's actually possible to apply a strategy of 'graduated response' if you find a vendor is being too stubborn. it's not necessary to jump directly from responsible disclosure to full disclosure, you can gradually apply increasing amounts of pressure while at the same time not arming malicious attackers with new digital weapons.
unfortunately the disclosure debate has become religious. this is a big problem because it prevents people from considering alternatives. it polarizes opposing sides (which is odd when you consider that the true opposite of full disclosure is not responsible disclosure but actually non-disclosure) and creates an environment where people exhibit what emerson might well describe as 'a foolish consistency'. people need to get over themselves and their dogmas. disclosure (like many other things) is just a tool, and it's important to use the right tool for the job. sometimes the right tool may well be full disclosure, other times it may well be responsible disclosure, and still other times it might be something in between. the google folks are right to put the focus on protecting the users, but focus alone won't really help if they can't figure out how to use the right tool for the job (and one of them, at least, appears unable to).
Wednesday, July 21, 2010
why the disclosure debate does in fact matter
some time ago dennis fisher published a post on threatpost explaining why the disclosure debate doesn't matter.
from the article:
In recent discussions I've had with both attackers and the folks on enterprise security staffs who are charged with stopping them, the common theme that emerged was this: Even if every vulnerability was "responsibly" disclosed from here on out, attackers would still be owning enterprises and consumers at will. A determined attacker (whatever that term means to you) doesn't need an 0-day and a two-week window of exposure before a patch is ready to get into a target network. All he needs is one weak spot. A six-year-old flaw in Internet Explorer or a careless employee using an open Wi-Fi hotspot is just as good as a brand-spanking-new hole in an Oracle database.
his argument seems to be that since a determined adversary is going to get in regardless of whether people practice full disclosure or responsible disclosure, the method of disclosure makes no difference. if they don't use the vulnerability in question then they'll just use something else.
what this basically boils down to in practice (whether dennis likes it or not) is 'since they're going to get in anyways we might as well make it easy for them'. does that seem right to you? it doesn't to me. how about this - if it doesn't matter whether we keep a particular vulnerability out of the attacker's toolbox (since they'll just find some other way to get in), why does it matter if we fix the vulnerability at all? whether the vulnerability is kept hidden or made non-existent, it should have the same effect, namely that it doesn't get exploited, so if one of those is pointless doesn't that mean the other one is too?
this strikes me as the security equivalent of nihilism, which quite frankly is not conducive to progress. as such i have an exercise for all those who agree with dennis' sentiments (that the disclosure debate doesn't matter) to rouse them from their apathy:
publicly post your full personal details, including name, address, phone number, bank account number, credit card number, social security number, etc, etc.
after all, if someone really wants to steal your identity they're going to do it anyways, so you might as well hand the bad guys the tools they need on a silver platter, assume you're going to get pwned (in accordance with the defeatist mindset that has become so popular in security these days), and start the recovery process. just bend over and think warm thoughts.
"that's not the same thing" you say? well of course not. in one instance you're handing over tools that enable attackers to victimize somebody somewhere (often many somebodies all over the place) and in the other you're handing over tools that enable attackers to victimize YOU. clearly things are a lot different when it's your own neck on the line than when it's some nameless faceless mass of people who are out of sight and out of mind.
will responsible disclosure prevent attackers from victimizing people or organizations? in the most general sense, no. but there is definitely value in making things harder for them, and it should be blatantly obvious that there is no value in making things easier for them. the concept of not making the attacker's job easier is why there's a disclosure debate in the first place, and the fact that so many people still don't understand that is why it's still important.
Tuesday, July 20, 2010
full disclosure as disarmament
i suppose it was only a matter of time before i linked to some article that touched on the sad story of tavis ormandy (backstory for those living under a rock: he disclosed a serious windows flaw to the public after a rather pitiful amount of negotiation with microsoft over the patch timeline). i'm a little too late to the party to bother with vilifying him, but the arguments used to support him could stand and be reused in the future and those need to be addressed.
the most interesting of those arguments that i've witnessed so far comes from lurene grenier who wrote a guest article for zdnet about how tavis supposedly acted responsibly by fully disclosing the details of the vulnerability. what interests me so much about her argument is the idea that tavis was actually disarming the bad guys:
So we must ask, were the actions that Tavis took responsible? Would it have been more responsible to allow a company to sit on a serious bug for an extended period of time? The bugs we are discussing are APT quality bugs. Disclosing them removes ammunition from APT attackers.
now i shouldn't have to remind people that full disclosure actually puts ammunition INTO the hands of bad guys - and this case was no exception. it was only a matter of days before people started seeing attacks leveraging tavis' discovery in the wild. that doesn't contradict lurene's argument, though, because her argument is carefully constructed around the concept of APT (advanced persistent threat) attackers. she contends that when vulnerabilities of this calibre are disclosed they are no longer of use to "serious attackers".
i propose a different way of looking at things:
- when an attack's days become numbered, an attacker who had been sitting on it with plans on using it might just let it fade away OR such an attacker could decide to pull the trigger right then and there to get as much use out of it as possible in the time s/he has left. patches and mitigations take time to develop and deploy so any disarmament sort of effect would not be immediate.
- it seems entirely probable to me that a so-called serious attacker, specifically one who qualifies as an advanced persistent threat, would not just sit on the attack and wait. such an attacker would have used the attack to gain access to whatever high-value target they had in mind long ago. it is easier to retain access (which can be done through more mundane methods) than to gain it in the first place. so while it may well be possible to liken the disclosure of a high value vulnerability to the removal of ammunition, it seems likely that that ammunition has already been spent, and removing spent ammunition doesn't have quite as big a benefit for us as lurene probably had in mind.
- the implication that only APT style attackers are serious, that they're the only ones we really need to worry about, is quite frankly farcical. while it's true that they are one of the few credible threats to power generation facilities and other infrastructure level targets, to imply that they are the only ones we need to worry about displays the same narrow-mindedness that we've previously seen applied to financially motivated cybercrime. there are more things in heaven and earth than are dreamt of in your philosophy if all you care about is APT.
- finally, even if we were to accept the argument that full disclosure takes ammunition away from APT-style attackers, it demonstrably puts it in the hands of other attackers. taking ammunition out of the hands of a highly selective minority and putting it into the hands of a far larger, less discriminating and less controlled adversary doesn't seem like all that clear-cut a case of making things better. it may lessen the potential impact of the attack in theory (arguably), but it demonstrably increases the probability and scope of the attack in practice.
Monday, July 19, 2010
what is APT?
APT is an acronym that stands for advanced persistent threat. it is a classification for attackers that fall outside the normal attacker bell curve. advanced persistent threats are highly skilled (advanced) and well funded so that they can spend a great deal of time and effort (persistent) trying to compromise a single organization.
unlike conventional cybercriminals who will go after as many victims as possible in order to achieve their goal of financial profit, advanced persistent threats have relatively few targets because what they're after isn't so widely available that everyone has it. APT is well suited to espionage (including industrial espionage) and sabotage, especially (in the case of so-called cyberwar offensives) sabotage of infrastructural operations such as power generation or water pumping stations.
Tags: advanced persistent threat, apt, definition
Friday, July 16, 2010
offensive full disclosure
in my previous post about full disclosure i came up with some pretty restrictive guidelines for when full disclosure was ok. that was specific to disclosure of vulnerabilities in legitimate software/services. some time ago richard bejtlich published a post about performing full disclosure for vulnerabilities found in attack tools - in other words the sort of software used by the bad guys to victimize innocent users.
while there is admittedly some gray area where legitimate software and attack tools might conceivably overlap, the tools richard described aren't anywhere near being legitimate - they are geared to exploit vulnerable systems and there's virtually no legitimate reason for doing that.
obviously that changes the ethical landscape a lot with respect to full disclosure. the users of such software are not innocent by definition, so actions that put them in harm's way aren't nearly so poorly regarded as those that put innocent users in harm's way. in fact, when cybercriminals are subjected to the same sorts of digital attacks as those they might otherwise perpetrate on others, some might even consider it poetic justice.
richard doubts the tactical efficacy of full disclosure against attack tools for 2 basic reasons:
- it will alert the attack tool developers to the problem and ultimately result in the vulnerability being fixed
- white hats have to follow rules that prevent them from forming waves of attacks against black hats the way black hats would do if a vulnerability in IE were disclosed
reason #2 is also true, but it assumes that the only people interested in attacking the black hats are the white hats and that simply isn't true. taking into account the low risks of getting caught that cybercriminals face, if i were a cybercriminal then the idea of letting some other poor shmuck run a criminal campaign and then simply breaking in and stealing their loot actually seems pretty appealing to me. there's less work involved, less visibility (and so even less risk than the poor shmuck who did things the hard way), etc. stealing from other thieves after they've already done all the hard work of collecting and aggregating whatever it is they were after just seems like the smarter way to go.
furthermore, from a white hat perspective having the black hats attacking each other is also appealing. the more they focus on each other the less they focus on the rest of us. of course, the point richard made about white hats no longer being able to use the vulnerability for their own operations against the attackers would still be true but only if full disclosure were followed in the case of all vulnerabilities. since that isn't the case for vulnerabilities in legitimate software, since only some of the vulnerabilities people know about get disclosed, i see no reason to worry about that happening in the case of offensive full disclosure. some vulnerabilities will no doubt be held back for precisely those sorts of operations.
frankly, i really like the idea of offensive full disclosure - it seems like a win-win proposition to me.
Thursday, July 15, 2010
some thoughts on full disclosure
long time readers may recall that i'm not actually an opponent of full disclosure. i suppose this may come as a shock to some newer readers but it's true, i actually support full disclosure - but only in some very specific circumstances which i intend to refine here.
in contemplating my post about responsibility the concept of full disclosure came to mind - specifically the question of how it works. how does full disclosure convince vendors to act? why is it persuasive? where does full disclosure get its teeth from? these are important questions because, as it turns out, the answers affect how we consider full disclosure in the context of responsible behaviour.
the concept of full disclosure is often framed so as to make it appear to be a vendor vs researcher issue where the researcher is the champion of some abstract notion of security and the vendor is being recalcitrant, perhaps even practicing outright denial that a problem exists. unfortunately this sort of framing ignores two very important players - the malicious attackers and the vendor's customers. of course the argument can be made that those two groups are taken care of when the vendor submits to the researcher's authority and fixes the vulnerability the researcher disclosed; and maybe that's true, but it glosses over a lot of the inner workings of the process.
now, given that the most reasonable characterization of a vendor is that their primary interest is the bottom line, it stands to reason that full disclosure persuades them to act by threatening their bottom line. the bottom line, of course, comes from their customers. but most customers don't read security forums or mailing lists. they aren't aware of any security problems, and they certainly aren't changing their behaviours or buying patterns based on things they aren't even aware of. malicious attackers, on the other hand, do read a variety of sources for information about new vulnerabilities, and they use that information to launch attacks which affect the vendor's customers. when a group of people are being victimized and the commonality between them is discovered to be the vendor's product/service, that harms the vendor's brand, and that leads to fewer customers, which ultimately affects the vendor's bottom line.
so let's look at full disclosure this way: it operates by giving attackers the information they need to launch attacks. the attackers then launch those attacks and victimize the customers. attackers have often proven able to produce attacks using a vulnerability faster than the vendor can create a fix for it (often a matter of days compared to weeks or months from the vendor's side). as a result the vendor rushes out a fix in order to protect as many of their customers (as much of their cash cow) as possible. at this point all is supposedly right with the world, unless you take into consideration the people who patch late or the fact that rushing things out the door generally results in poor workmanship and has the potential to cause more problems than it solves.
it needs to be said that this process, this chain of events, breaks down if the attackers don't attack. they have to attack at least most of the time in order for the threat to the vendor's bottom line to be credible. if full disclosure didn't result in attacks a significant amount of the time then there would be no reason for vendors to believe the disclosure would affect their bottom line and full disclosure would cease to be effective at persuading vendors to bow to the whims of researchers. consequently, whether researchers are aware of it or not, or whether they're willing to admit it or not, hoping for full disclosure to effect change means hoping that attackers mount successful attacks as a result of full disclosure. they might hope that their particular disclosure is an exception to this rule, but that's more than a little unrealistic.
so if full disclosure only works by leveraging the bad guys, if it's a process that manipulates attackers into behaving in a way that forces the vendor's hand and throws innocent users under a bus along the way, then why on earth would i not be entirely against it? because under certain circumstances it's better than the alternatives. what circumstances would those be? when the attackers demonstrably already have the info without the benefit of the researcher's disclosure (better still if it's mainstream because then it's unlikely the disclosure will even raise the vulnerability's profile amongst attackers) and the vendor actually is in a state of denial. if the researcher's contribution to harming users will be demonstrably negligible and the vendor is stubborn beyond reason and really needs a swift kick in the ass then the ethical arguments against full disclosure break down.
Tags: ethics, full disclosure, responsibility
Wednesday, July 14, 2010
whose yer members
so apparently i've been defending AMTSO a bunch lately. well, i think it's time to restore a little bit of balance to the force and show that i can be an equal opportunity critic.
you see, there's something i just don't understand about AMTSO. something that just doesn't make sense. i know its members are people - its members (sorry if the assumption here is wrong, i just can't tell which individuals are in it and which aren't for the most part) even admit that AMTSO is made of people - so why does the members page list companies instead of people almost exclusively?
way to present yourselves as sell-outs, guys and gals. it's no wonder people worry about AMTSO being a tool of the 'av industrial complex' with a membership that's so impersonal and corporate.
it's not as though you can't list the person along with their organizational affiliation - that's how the WildList reporters have been listed for as long as i can remember.
recently a joint post by several members was published that spoke about (among other things) transparency - so where's the transparency in AMTSO's membership? who exactly are you and why are most of you hiding behind vendor names when AMTSO is supposedly not a tool of the vendors?
i mentioned in a previous post that i mostly agreed that AMTSO wasn't responsible for the misconceptions people have about them, but on this one point in particular i think they actually are responsible. membership is basically secret - although i can find out names of who's on what committee, the union of those committees does not appear to represent the entire membership - and with a secret membership i'm left wondering if AMTSO qualifies as a not-so-secret secret society (least secret secret society evar!).
i really think AMTSO is sending the wrong message by listing companies instead of people as members of the organization. i think the members should stand and be counted, take ownership of their participation and the views they bring to the table, even if those views happen to not be in their employers' best interests. also if the membership page showed people's names it would be harder for people to fraudulently claim membership (it may not be a problem yet, but as AMTSO gains more mindshare it will be).
Tags:
amtso
Tuesday, July 13, 2010
i see a standards organization
ed moyle prefaced his recent post about how AMTSO is perceived by the industry by saying that he really didn't want to continue talking about this subject (he has, after all, penned a number of posts about AMTSO recently). having seen this blog go more or less dark in the past, i have no qualms about following whatever path my interests and creativity take. if the subject doesn't bore me, i see no reason not to write about it.
and the subject of how AMTSO is perceived has a few interesting bits to it, i think. first and foremost, while david harley may bend to the notion that the use of the word "standard" in AMTSO's name might mislead people, i think the use of the word "standard" is entirely appropriate. if people are misled into thinking AMTSO is anything like ISO, it is actually ISO and organizations like it that have misled people into thinking enforcement has anything to do with standards. a few pertinent definitions for standard:
noun: a basis for comparison; a reference point against which other things can be evaluated
noun: the ideal in terms of which something can be judged ("They live by the standards of their community")
developing a basis upon which anti-malware tests can be evaluated or an ideal which testers should strive for is precisely what AMTSO is about. it is not about enforcement - following the standards is entirely voluntary. if enforcement were on the table at all then testers wouldn't participate for 2 reasons:
- many testing organizations were (and perhaps still are) too far away from the ideal. signing up for obligations at a time when one cannot meet them makes little or no sense. with voluntary standards the obligation, instead, is to keep improving and moving closer to the ideal.
- enforcement would mean that the standards were actually rules, and nobody thinks vendors should be involved in making rules for testers.
page three of that same document has something for ed as well. he asks the following:
So if it’s not the role of AMTSO to standardize, it’s also clearly not their role to accredit. But aren’t they doing just that?
the answer is no, they are not. AMTSO makes no judgments or endorsements of reviewers or products. for an AMTSO member to suggest otherwise is considered misrepresentation. the analysis of reviews is just that, analysis. the output serves as an interpretive aid for individuals wishing to know how close to ideal a particular review was. the review analysis that ed looked at as an example (the analysis of NSS' review) was actually quite close to the ideal (though apparently not close enough for NSS' liking). only 2 real problems were found, and they've been described by members as 'minor'. in the analysis itself, the explanation for the first one even goes so far as to say that NSS' test is still better than most out there in spite of the problem. ed moyle interprets this as a pass/fail sort of judgment and i suppose in the strictest sense the NSS test did fail to reach the ideal but it's hard to say the analysis of their test is calling it a failure when it's clearly stating it's better than most out there.
of course as an interpretive aid, the reader is free to pick and choose the ideals that are important to him/her - as ed does when he discusses testing features individually. the ideal that the testing industry is trying to move towards is whole product testing. the reason is that different products use very different technologies and thus have different ways of stopping different threats, and it's especially difficult for testers to devise testing methodologies that aren't biased in favour of certain technologies. if i test feature X and product A blocks 5 things while product B only blocks 3, how do i show that product B's feature Y blocks everything B missed in the feature X test, while product A gains nothing because it doesn't even have feature Y? and if i can't show that then is what i'm presenting really relevant? isn't the important thing that B blocks the threats in one way or another? does it really matter whether it uses feature X or Y to do it? current opinion in the anti-malware community is no, it shouldn't matter, which is why whole product testing is becoming the standard. NSS themselves bang the drum of whole product testing pretty loudly, so it seems ironic to me that they failed to test the whole product (seemingly testing everything but the spam filter).
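just to make that hypothetical a little more concrete, here's a small worked example (entirely made up by me - the products, features and threats are invented, not drawn from any real test data) showing how the same five threats can produce very different scores depending on whether you score feature X alone or the whole product:

threats = {"t1", "t2", "t3", "t4", "t5"}

products = {
    "A": {"X": {"t1", "t2", "t3", "t4", "t5"}},         # A does all its blocking with feature X
    "B": {"X": {"t1", "t2", "t3"}, "Y": {"t4", "t5"}},  # B splits the same work across X and Y
}

# a test of feature X alone: A blocks 5, B blocks 3 - B "loses"
for name, features in products.items():
    print(name, "feature X only:", len(features.get("X", set()) & threats))

# whole product testing: the union of everything each product blocks - a tie
for name, features in products.items():
    blocked = set().union(*features.values())
    print(name, "whole product:", len(blocked & threats))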
of course, as interpretive aids go, even AMTSO's analysis isn't necessarily perfect. i say this because point 7 of the NSS review analysis is interpreted by ed one way, and a different way by myself. i don't know if ed's interpretation is correct or if the analysis is implicitly assuming domain knowledge of NSS' practices. ed quotes the following from the analysis:
Does the conclusion reflect the stated purpose? No. The report’s Executive Summary states that test’s purpose was to determine the protection of the products tested against socially-engineered malware only. Later in the report (Section 4 - product assessments) it says: “Products that earn a caution rating from NSS Labs should not be short-listed or renewed.” This is clearly a conclusion that you can’t make out of the detection for socially‐engineered malware only, as the products have other layers of protection that the test did not evaluate.
ed's interpretation is that the conclusion supposedly didn't reflect the stated purpose simply because NSS failed to include spam filters in their test. my interpretation differs in part because i know that NSS breaks malware down into 2 categories and "socially engineered malware" is only one of those categories - so making purchasing recommendations on the basis of the results of the socially engineered malware test alone seems like a premature conclusion to me. i suspect that the spam filters were only 1 of many features that weren't tested since the other malware category NSS recognizes involves drive-by downloads and other sorts of malware that don't involve user intervention. but clearly, someone who doesn't know what i know may interpret the meaning of the analysis in an entirely different way than i did.
i understand why ed feels that perception is important, but the key to making perception match reality is knowledge and understanding, and there's only so much anyone can do to impart those things to others. people have to be willing to look past their preconceptions and actually acquire new knowledge and understanding.
Tags:
amtso,
anti-malware testing,
ed moyle,
nss labs
Monday, July 12, 2010
certainty of bias
with all the talk of anti-malware testing recently, one of the subjects that has come up is the appearance of bias. more specifically, when vendors are involved in any way with the execution of the test, the development of the testing methodology, or even if they just funded the test, the suspicion is that those vendors have somehow influenced the test in subtle or not so subtle ways so that they'll come out better in the end.
this is why testing organizations often strive to maintain independence from vendors - so that they can avoid the appearance that their tests have been unduly biased by an association with a vendor.
so there seems to be a certain amount of irony at play here because for all NSS Labs' claims of independence, in fact of being one of the only truly independent testing organizations out there, vikram phatak (either CTO or CEO of NSS, depending on whether you go by how he's referenced in the media which says the former or by his linkedin profile which says the latter) sure seems cavalier about throwing all the bias minimizing benefits of independence away by openly declaring favourites, in public, on camera.
in the source boston anti-malware panel video that i've referenced a few times already, at approximately 55:30 minutes in, andrew jaquith asks what he characterizes as a "naughty question" - he asked the panelists to list their 2 most favourite and their 2 least favourite products. the fact that the panelists were told from the start that it was a "naughty question" should have been a great big neon sign of a clue that answering the question would cause trouble.
to his credit, av-comparatives' peter stelzhammer refused, without hesitation, to answer the question in the spirit it had been asked. in fact, he refused twice. it was a textbook example of how an independent tester should respond to that sort of question. mario vuksan of reversing labs didn't do too bad a job either - he beat around the bush a bit but the gist of it was that he couldn't give a real answer because he didn't have enough recent data about the full capabilities of all the products. vikram phatak, in contrast with the other 2 panelists, wasted no time nor minced any words in his answer - his favourites are trend and mcafee, and his least favourites are panda and avg.
it's hard to imagine that a testing organization led by someone with such clear and unambiguous favourites, not to mention an apparent disregard for the consequences that picking favourites has, would manage to develop a testing methodology that doesn't express that favouritism, that bias, in some subtle way. you might then expect that trend and mcafee do well in NSS tests (trend does, apparently). you might also expect avg and panda to do poorly - and given both avg and panda lent their support to sophos in requesting a review of an NSS report (PDF) that seems like a safe bet too.
at this point you could be thinking that vikram was just expressing the ceiling and floor of the results of recent testing and poorly wording it as 'favourites'. unfortunately, that interpretation doesn't quite explain why he later compares avg and panda to cheapskate american football owners (see the same video starting at approximately minute 87:00). there's no question in my mind that his bias against avg and panda goes beyond simple test performance explanations.
so the question i put to you the reader is this: how can party A be expected to judge party B in a fair, unbiased, and impartial way when party A has such clear animosity towards party B?
Thursday, July 08, 2010
responsibility? what's that?...
... i don't wanna think about it. we'd be better off without it. </music>
responsibility is a concept that gets bandied around frequently in the security domain. accusations of irresponsibility fly one way, denials and dogmatic ideology fly the other. i wonder, though, if the concept has become so overused that it's become little more than an abstraction.
i suspect we've all been accused of being irresponsible, back when we were teenagers, and that's the first thing that pops into my head when i think about irresponsibility. i think that association between irresponsibility and immaturity is pretty strong and probably a serious driving force in some of the knee-jerk reactions to the claim.
the fact is, though, that it's not really a binary trait. you don't magically stop being irresponsible when you grow up. instead you (hopefully) become less irresponsible when you grow up - but you're always going to be a little irresponsible in one way or another.
part of the growing up process means learning to do things even when (or especially when) you don't want to because those things need to be done. that's part of what it means to be responsible - to be aware of and responsive to one's obligations to those around us, to society at large, etc. before we grow up, however, we don't think as much about consequences, or the big picture, or our obligations. as children we mostly think about ourselves. we begin as thoughtless beings and gradually become aware of more and more. much of our irresponsible childhood behaviour is rooted in this thoughtlessness - if we were aware of how we were affecting others (and not just in an abstract way) we probably would behave differently.
awareness and thoughtfulness, like responsibility, aren't binary traits. no one is perfectly thoughtful or completely aware as adults, so there is still room for those to lead to irresponsible behaviour, even in adults. drunk drivers are generally not aware of how impaired they are, even though they know they're drunk. they don't know how badly their reaction time or judgment has been affected. when we buy certain products we aren't aware of all the things that went into making that product and getting it to us. we aren't aware of the environmental or perhaps social consequences that supporting the industry that produces that product has. there's plenty of room for us, even as adults, to be more responsible than we actually are.
now when i was growing up, my favourite superhero, without a doubt, was spiderman. spiderman, as it turns out, is pretty much the poster boy for responsibility - always obsessing over doing the right thing, always blaming himself and beating himself up over his failures to protect those around him. he's even haunted by an admonition from his late uncle "with great power there must also come great responsibility". i mention this because it seems to me that in the security field we actually have an astounding amount of power. the things we say and do can sometimes affect millions of people - and i don't just mean the security researchers who put entire user populations in harm's way by disclosing new vulnerabilities to the public before they're fixed - even someone like myself who mostly just talks about security can inadvertently put many people in harm's way. it stands to reason, then, that since we have the potential to do so much harm we should be holding ourselves up to a much higher standard of responsibility than the average person. that's easier said than done, however.
if our irresponsible behaviour as adults is still rooted in a lack of awareness of some sort then you can't just say we should be more responsible and expect it to magically come true. having recently been lucky enough to deduce a different sort of lack of awareness in myself i can attest to the fact that if you lack awareness of something you probably won't know it. you won't be aware of it, and if you should become aware of some sort of awareness deficit that doesn't automatically mean you'll also gain the awareness you were lacking. it's like when you finally realize you don't know something - that simply means you've become aware of your own ignorance, the ignorance itself isn't eliminated.
oftentimes we're only aware of the things that directly affect us. we aren't aware of the impact we have on the world because the world doesn't always give us feedback. mike ellison (aka stormbringer, a virus writer from way way back) got that feedback and it changed him profoundly. in the absence of anything like that, however, we're left with what can be the surprisingly difficult task of trying to remain aware of the impact we have on individual people, companies, etc. all over the world. if that's not something that already comes naturally, nor a skill you've managed to develop, then maintaining that awareness (or even the presence of mind to try and maintain that awareness) can be very difficult - and that assumes you're even aware of the need to maintain that awareness.
if you're in security and never experience those peter parker-like moments where you question what you're doing, if you're not exercising restraint even when you don't want to, if you're just going through the motions of your day-to-day life, not thinking about the world and your obligation to the people in it to not cause them harm, then maybe you have actually been irresponsible. maybe you need to work harder to be better than you currently are.
Tags:
responsibility,
security
Tuesday, July 06, 2010
testing testing 1 2 3
last month ed moyle published a pair of posts about an incident in which a particular piece of open source IRC server software was found to have a backdoor planted (intentionally, by an unknown party) in the source code archive on the software's official site. the backdoor wasn't discovered for over half a year and in that time the trojanized copy of the software apparently made it into the gentoo linux distribution (which has since been corrected, but if you're a gentoo user/admin and you didn't hear about this then you'll want to go check some things real fast).
all of which calls into question the accuracy of linus' law which states (more or less) that 'given enough eyeballs all bugs are shallow'.
so the question is how can you know if your flaw finders (be they auditing code to find security flaws, or testing the product to find regular bugs, or even something else entirely) are doing a good job? how can we measure it? how can we test our testers or our code auditors or whatever other flaw finders we might have? as a software developer myself, i'm sensitive to the issue of software flaws, so this was a question that interested me and almost immediately a thought popped into my head - introduce your own flaws and see how good your flaw finders are at finding them. so long as they're fully documented you should be able to remove them before the final release of the code, and by measuring how many of these particular flaws get found you can get an estimate of how good a job your flaw finders are doing.
additionally, by comparing the number of artificially introduced flaws found to the total number of flaws being found you can even get an estimate of the size of the total flaw population. animal population sizes are often estimated this way, with one exception - usually it doesn't involve making new animals. that underscores one of the biggest problems with the idea; the flaws you artificially introduce may have little in common with natural flaws, and as such finding them may not be of comparable difficulty.
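for what it's worth, here's a quick sketch (mine, with made-up numbers - nothing here comes from a real project) of the kind of estimate i'm describing: if your flaw finders catch some fraction of the flaws you deliberately planted, you can use that same fraction to guess how many natural flaws exist in total:

def estimate_natural_flaws(seeded_total, seeded_found, natural_found):
    # seeded_total  - artificial flaws deliberately introduced (all documented)
    # seeded_found  - how many of those the flaw finders caught
    # natural_found - real (non-seeded) flaws the flaw finders caught
    if seeded_found == 0:
        raise ValueError("no seeded flaws found - can't estimate a catch rate")
    catch_rate = seeded_found / seeded_total
    estimated_total = natural_found / catch_rate
    return catch_rate, estimated_total, estimated_total - natural_found

# example: 20 flaws seeded, 15 of them caught, alongside 45 natural flaws found
# -> catch rate 0.75, roughly 60 natural flaws in total, so about 15 still unfound
print(estimate_natural_flaws(20, 15, 45))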
when estimating wild animal populations it's more common to capture some, tag them, release them, and then go on a second round of capturing to see how many of the tagged animals are captured a second time. doing this with software flaws would necessitate having 2 groups of flaw finders (or separating the ones you have into 2 groups) so that the flaws found and tagged by one group in the first pass are used to evaluate the other group in the second pass.
2 groups are necessary because, unlike animals, flaws don't move, so the group who found and tagged the flaws are going to be a little too good at finding them the second time. ideally they should also not discuss the flaws they tagged with the other group or they could wind up giving them hints that skew the score. keeping the 2 groups really separate would be what complicates this approach and where the artificially introduced flaws would have an advantage, since those would be easier to keep secret.
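again, just to illustrate (the numbers are invented), the tag-and-recapture version would look something like the following - it's the same lincoln-petersen style estimate used for animal populations, applied to the flaws each group finds:

def capture_recapture_estimate(group1_flaws, group2_flaws):
    # group1_flaws, group2_flaws - sets of flaw identifiers found by each group
    tagged = set(group1_flaws)               # found and "tagged" in the first pass
    recaptured = tagged & set(group2_flaws)  # tagged flaws found again by group 2
    if not recaptured:
        raise ValueError("no overlap between the groups - the estimate is undefined")
    # total flaws ~= (tagged in pass 1 * found in pass 2) / found by both
    return len(tagged) * len(group2_flaws) / len(recaptured)

# example: group one tags 30 flaws, group two independently finds 40,
# and 20 of those 40 were already tagged -> an estimate of about 60 flaws total
print(capture_recapture_estimate(range(30), range(10, 50)))  # -> 60.0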
a compromise where the two approaches are combined could also be possible, and if the naturally occurring flaws are adequately classified it should be possible to use that information to draft more relevant artificial flaws. in addition this would enable finer grained metrics to be collected so as to find out if there are some types of flaws your flaw hunters have more trouble with than others.
of course, this has probably all been thought of before, but just in case it hasn't i just thought i'd throw it out there.
Monday, July 05, 2010
no i will not disable my anti-malware...
... or any part thereof just because you think it's causing your software to crash on my system.
and who is giving users the 'lower your guard for a better experience' advice today? why google of course.
Other software has incompatibilities with Google Chrome, but you may be able to solve your crash issues without disabling the entire software.
If you have Internet Download Manager (IDM), disable the 'Advanced browser integration' option within IDM (go to Options > General).
If you have NVIDIA Desktop Explorer, try removing the nvshell.dll library using the steps on this site: http://www.spywareremove.com/security/how-to-remove-dll-files/
If you have FolderSize, try the fix on this site: http://sourceforge.net/tracker/?func=detail&aid=2900504&group_id=127365&atid=708425
If you have NOD32 version 2.7, upgrade to the latest version of NOD32 or disable internet monitoring in NOD32 2.7.
now, to be fair they did also suggest upgrading, and that is a good option - in fact it should have been the only option they offered when it came to potential anti-malware software conflicts, for a couple of reasons:
- turning things off is a lot less hassle than upgrading so if people think it'll work they're more likely to take that option
- the current landscape is such that anti-malware software is most people's only defense against malware.
and yes, i did notice that they only suggested disabling the internet monitoring; well guess where most new malware is coming from these days - yeah, that's right, the internet. some parts of an anti-malware product may be a little less useful than others but internet monitoring, when the internet is a major malware vector, is really kind of important.
i know google has a few smart folks when it comes to security, but they were also rather famously compromised using ancient microsoft software. one wonders if they disabled anti-malware protection in that case too. i'd like to see some of the security know-how that a few of them have distributed more broadly across the entire company so that well-meaning but misguided engineers don't give out dangerous advice in the name of improving the user experience with the browser.
Tags:
anti-malware,
google,
malware
Friday, July 02, 2010
to create malware or not to create malware
so ed moyle over at security curve has responded to some points i made in a previous post about the anti-malware community's ethical stance on malware creation.
ed's response is twofold. first he countered my assertion about what would happen if the CDC went around creating new diseases with a practical example by pointing out that some biologists actually do create new viruses. a little further on he makes mention of the concept of ethical relativity and this is important, because ethics are relative in a number of different ways. not only can ethics be relative in terms of degree (A is ethically worse than B) but also in terms of the frame of reference (the ethical rules for one group don't necessarily apply to a different group - for example there are things that would be unethical for a doctor to do but might be fine for you or me). i chose CDC specifically because, with their focus being on the control/prevention of disease, they are more analogous to the anti-malware community than biologists in general would be. if there were such a thing as computer virologists (or more specifically if there were ones who hadn't already chosen a side in the pro/anti-malware battle) they might be more in line with biologists ethically. from my perspective, though, i have to wonder if that makes them amoral with respect to malware.
philosophically (where ed's mention of ethical relativity actually came from) ed made the argument that something that is normally considered unethical might be considered alright if there was a bigger ethical 'win' as a result. what he's actually getting at is something that might be more readily recognized as the concept of the lesser of two evils. he contends that there might be scenarios in the realm of research where the good done as a result of creating malware outweighs the bad. i'm going to do something totally unexpected and agree with him, but with a caveat that you'll see in a minute.
from early on, fred cohen held out the possibility of beneficial viruses (no doubt there are even earlier citations possible but this will do), and in the beginning i thought they were possible too until i read vesselin bontchev's paper Are "Good" Viruses Still a Bad Idea? vesselin made perhaps the most salient of all points about the criteria by which a supposed good virus can be determined to be actually good. the "good" end result has to be something that can't be achieved any other way.
now vesselin had it slightly easier here because he was looking specifically at viruses, at self-replicating malware, which is a more narrowly defined problem than 'good malware' or 'good reasons to create malware in the lab'. vesselin's argument didn't leave a lot of room for good viruses - virtually everything you can think of doing with a self-replicator can also be done with non-replicative code and thus without the risks inherent in self-replication.
i mention vesselin's paper because that salient point he made extends to this case as well. unspoken in ed moyle's bank robbery example is that there is only 1 way to keep the hidden girl alive - by lying. if there were another way, would lying to save the little girl's life still be ok? if you choose the lesser of 2 evils, when a 3rd option with no evil whatsoever were available, then doesn't choosing the lesser evil mean that you're still doing evil unnecessarily?
that's where things stand in the anti-malware community. although it may be hypothetically possible to construct a scenario where malware creation is the least evil option, to my knowledge no one has managed to present such a scenario (with the exception of exploit code* for demonstrating the presence and importance of vulnerabilities), and so the no-malware-creation rule has no good exceptions yet. the need for new malware in testing (the root of the current discussion of malware creation ethics) can already be met in 2 different ways (retrospective testing or real-time/real-life testing that tests against suspect samples as they're discovered) that don't involve malware creation at all.
(* 2010/07/19: edited to add the exception case for exploit code, as pointed out by vesselin bontchev)
Thursday, July 01, 2010
AMTSO revisited
in keeping with my habit of subtracting 1 from infinity, i've found that kevin townsend's recent post about AMTSO is just calling out for correction. the challenge, it seems, is where to start.
there are two primary questions he tries to answer, the first of which being whether or not AMTSO is serious about improving anti-malware testing. he concludes that the answer is no and holds up VB100 as an example to support this conclusion because he thinks if they were serious about improving anti-malware testing they'd ban the VB100 test on the basis that it misleads the public. of course, in reality AMTSO hasn't done that because they can't. they don't have the power to do so. AMTSO is trying to create improved standards but they don't have the authority to enforce those standards. all they can do is use indirect means to exert pressure on testing organizations to improve their methods. anyone reading the AMTSO FAQs, especially the one about their charter, can plainly see that enforcement is neither explicitly mentioned nor implicitly referred to.
additionally, VB100 isn't actually a test in its own right. it's a certification/award based on a subset of the results of a larger comparative review. kevin should have known this had he bothered to read the first sentence of the VB100 test procedures. furthermore, the VB100 award itself is not misleading. this is one instance when we really ought to be shooting the messenger because the misleading is being done by vendor marketing departments which happen to use the VB100 award in an incredibly superficial and manipulative way (and frankly there's little that testers could do to stop that). there actually isn't all that much wrong with the VB100 award except that, due to its being based on the WildList, it has lost most of its relevance. that said, certifications in general have limited relevance as all they really do is help to establish a lower bound on quality. a lot of people don't understand this or even what VB100 really is, but that lack of understanding is hardly the fault of virus bulletin, especially when most people don't even go to the virus bulletin site to learn what the results mean.
before i move on to the second of kevin's main questions, i'd like to take an aside and look at something he wrote about the WildList itself:
this latency means that, almost by definition, the Wild List includes little, if any, of the biggest threat to end-users: zero-day malware
what kevin and many before him have failed to realize is that the reason zero-day malware is as big a threat as it is today is because its competition has been largely eliminated thanks to a focus on the WildList. without that we'd still be getting compromised by the exact same malware year after year because the stuff that was demonstrably in the wild wouldn't be getting higher priority treatment.
the second of kevin's main questions had to do with whose interests AMTSO was really serving. he concludes that they serve the vendors' interests rather than the end users' based on his assumptions about the reason behind their adherence to the rule about not creating new malware, but also based on his decision to buy into the spin being put forth by NSS Labs CEO rick moy.
for starters i can't believe that after all these years people are still getting bent out of shape or trying to read ulterior motives into the 'no malware creation' rule. it's one of the oldest and most fundamental ethical principles in the anti-malware community. if people found out that the CDC was creating new diseases they'd be up in arms - worse still if one of those new diseases got out (something which has happened in the malware world) - but in the case of the anti-malware community outsiders assume it's because everyone in the anti-malware community has vendor ties and the vendors don't want to look bad in tests. we're not talking about the 'we mostly frown on malware except when it's useful to us' community, it's the ANTI-malware community. you can't really call yourself anti-X if you go around making X's. that would just make you a hypocrite.
furthermore, and speaking directly to the following rather uninformed rhetorical question kevin puts forward about the 'no malware creation' rule:
Why not? How can you test the true heuristic behavioral capabilities of an AV product without testing it against a brand new sample that you absolutely know it has never experienced before?
it is already possible to test anti-malware products against malware they've never seen before without creating new malware. it's been possible for a long, long time. it's called retrospective testing, and anyone with familiarity with tests (not even testing issues, just the tests themselves) knows that retrospective tests make vendors look terrible. detection rates around 40% used to be the norm but in more recent times they've edged up closer to 50%. there are still some below the 40% mark, though, and even some below the 20% mark.
as for believing the NSS spin and using the 2 test reviews available (yes, what an incredibly small sample size) on the AMTSO site to try and support that view i offer the following support to the counter-argument: retrospective tests have made far more vendors look far worse than NSS's test did and no one is challenging the results. no one is using the AMTSO review process to dismiss those tests, as kevin phrased it. how can the conspiracy theory about protectionism in AMTSO be true if nobody is trying to discredit tests that are even more damning and damaging than NSS'? if you think it's because the tests are too obscure, think again - they're produced by some of the top names in independent anti-malware testing (even NSS' own vikram phatak recognized one of the organizations as being independent in that video i've referenced twice before), who also happen to be a part of AMTSO.
kevin believed the spin, i suspect, because he was predisposed to. previous posts on his blog show an existing bias against AMTSO, apparently due in part to the involvement of vendors. there is a very sad tendency in the general security community to not be able to see past a person's vendor affiliations. apparently people think that if you work for a vendor you're nothing more than a mouthpiece for your employer and that the entire company is one big unified collective entity. no attempts are made to distinguish between divisions within the company and recognize the huge difference between the technical people and the business people in those companies (you never know what you're going to get when the two overlap, though - just compare frisk with eugene kaspersky). it's not the business people, the marketroids (so called in order to distinguish them from actual human beings), or the HR departments participating in AMTSO, it's the researchers.
one final idea that kevin put forward in his post is the importance of the user - going so far as to suggest that users should be part of AMTSO, that users determine whether tests are any good, etc. i don't know what on earth he was thinking, but the layman hasn't the tools to divine good science from bad. most users (and i say this as a user myself) haven't got the first clue about what makes a good test or a biased test. in fact most users don't read or interact with tests at all. the only thing they know about tests is what they read in vendor marketing material (usually on the cover of the box), which, as previously mentioned, neither testers nor AMTSO have any control over. i really don't see what users could bring to AMTSO, but i do see something that AMTSO could bring to users - that being tools to help them understand the tests, to put them in the proper perspective, and yes to also be able to pick out the good ones from the bad.
to be perfectly honest, i understand some of the indignation kevin is directing towards testers and by extension AMTSO, but i think it's misdirected. for most people, marketing is the first and sometimes only voice they hear with respect to security. it's marketing's job to distort and/or omit facts in order to make the company and its product/service look good. of course marketing does this at the behest of management, of CEOs and shareholders, and people whose concerns are business and profit rather than the good of the user. none of that has anything to do with testing or AMTSO, however.
there are two primary questions he tries to answer, the first of which being whether or not AMTSO is serious about improving anti-malware testing. he concludes that the answer is no and holds up VB100 as an example to support this conclusion because he thinks if they were serious about improving anti-malware testing they'd ban the VB100 test on the basis that it misleads the public. of course, in reality AMTSO hasn't done that because they can't. they don't have the power to do so. AMTSO is trying to create improved standards but they don't have the authority to enforce those standards. all they can do is use indirect means to exert pressure on testing organizations to improve their methods. anyone reading the AMTSO FAQs, especially the one about their charter, can plainly see that enforcement is neither explicitly mentioned nor implicitly referred to.
additionally, VB100 isn't actually a test in its own right. it's a certification/award based on a subset of the results of a larger comparative review. kevin would have known this had he bothered to read the first sentence of the VB100 test procedures. furthermore, the VB100 award itself is not misleading. this is one instance where shooting the messenger is actually warranted, because the misleading is being done by the vendor marketing departments that use the VB100 award in an incredibly superficial and manipulative way (and frankly there's little that testers could do to stop that). there actually isn't all that much wrong with the VB100 award except that, due to its being based on the WildList, it has lost most of its relevance. that said, certifications in general have limited relevance, as all they really do is help establish a lower bound on quality. a lot of people don't understand this, or even what VB100 really is, but that lack of understanding is hardly the fault of virus bulletin, especially when most people don't even go to the virus bulletin site to learn what the results mean.
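to make the 'lower bound' point concrete, here's a minimal sketch in python of how a binary certification can be derived from a small subset of a much larger comparative review. the sample data and the exact criteria (detect every in-the-wild sample, raise no false positives) are my own simplified assumptions for illustration, not virus bulletin's actual procedure:

    # illustrative sketch only - simplified certification criteria, made-up numbers
    def earns_award(results):
        # results is a dict summarizing just the subset of the review
        # that feeds the award; the full review measures much more
        # (zoo collections, scanning speed, etc.) but none of that counts here
        detected_all = results['wildlist_detected'] == results['wildlist_total']
        no_false_positives = results['false_positives'] == 0
        return detected_all and no_false_positives

    review = {'wildlist_detected': 812, 'wildlist_total': 812, 'false_positives': 0}
    print('award earned' if earns_award(review) else 'no award')

a binary award like this can only tell you that a product cleared the bar, not how far it cleared it - which is exactly why it works as a lower bound on quality and nothing more.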
before i move on to the second of kevin's main questions, i'd like to take a brief detour and look at something he wrote about the WildList itself:
"this latency means that, almost by definition, the Wild List includes little, if any, of the biggest threat to end-users: zero-day malware"
what kevin and many before him have failed to realize is that the reason zero-day malware is as big a threat as it is today is that its competition has been largely eliminated thanks to a focus on the WildList. without that focus we'd still be getting compromised by the exact same malware year after year, because the stuff that was demonstrably in the wild wouldn't be getting higher priority treatment.
the second of kevin's main questions had to do with whose interests AMTSO is really serving. he concludes that they serve the vendors' interests rather than the end user's, based on his assumptions about the reason behind their adherence to the rule against creating new malware, but also based on his decision to buy into the spin being put forth by NSS Labs CEO rick moy.
for starters i can't believe that after all these years people are still getting bent out of shape or trying to read ulterior motives into the 'no malware creation' rule. it's one of the oldest and most fundamental ethical principles in the anti-malware community. if people found out that the CDC was creating new diseases they'd be up in arms - worse still if one of those new diseases got out (something which has happened in the malware world) - and yet outsiders assume the rule exists only because everyone in the anti-malware community has vendor ties and the vendors don't want to look bad in tests. we're not talking about the 'we mostly frown on malware except when it's useful to us' community, it's the ANTI-malware community. you can't really call yourself anti-X if you go around making X's. that would just make you a hypocrite.
furthermore, and speaking directly to the following rather uninformed rhetorical question kevin puts forward about the 'no malware creation' rule:
"Why not? How can you test the true heuristic behavioral capabilities of an AV product without testing it against a brand new sample that you absolutely know it has never experienced before?"
it is already possible to test anti-malware products against malware they've never seen before without creating new malware. it's been possible for a long, long time. it's called retrospective testing, and anyone with any familiarity with tests (not even testing issues, just the tests themselves) knows that retrospective tests make vendors look terrible. detection rates around 40% used to be the norm, though in more recent times they've edged up closer to 50%. there are still some below the 40% mark, and even some below the 20% mark.
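for anyone unfamiliar with the methodology, here's a minimal sketch in python of the idea behind a retrospective test, using entirely hypothetical sample data: freeze a product's signatures at some cut-off date, then measure detection against only the samples that first appeared after that date, so every hit has to come from heuristics or generic detection rather than a specific signature:

    from datetime import date

    # hypothetical corpus: (sample name, date first seen, detected by the frozen product)
    corpus = [
        ('sample_a', date(2010, 6, 1), True),
        ('sample_b', date(2010, 6, 3), False),
        ('sample_c', date(2010, 6, 10), True),
        ('sample_d', date(2010, 6, 12), False),
        ('sample_e', date(2010, 6, 20), False),
    ]

    freeze_date = date(2010, 5, 31)  # product's signatures frozen here

    # only samples first seen after the freeze count - by definition the
    # product can never have had a specific signature for any of them
    new_samples = [s for s in corpus if s[1] > freeze_date]
    detected = sum(1 for s in new_samples if s[2])
    print('retrospective detection rate: %.0f%%' % (100.0 * detected / len(new_samples)))

with this toy data the rate works out to 40% - in the same ballpark as the real-world figures mentioned above - and crucially it's measured against malware the product genuinely had never seen, without anyone having to create anything new.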
as for believing the NSS spin and using the 2 test reviews available on the AMTSO site (yes, an incredibly small sample size) to try and support that view, i offer the following counter-argument: retrospective tests have made far more vendors look far worse than NSS's test did, and no one is challenging those results. no one is using the AMTSO review process to dismiss those tests, as kevin phrased it. how can the conspiracy theory about protectionism in AMTSO be true if nobody is trying to discredit tests that are even more damning and damaging than NSS'? if you think it's because those tests are too obscure, think again - they're produced by some of the top names in independent anti-malware testing (even NSS' own vikram phatak recognized one of the organizations as independent in that video i've referenced twice before), and those testers also happen to be a part of AMTSO.
kevin believed the spin, i suspect, because he was predisposed to. previous posts on his blog show an existing bias against AMTSO, apparently due in part to the involvement of vendors. there is a very sad tendency in the general security community to be unable to see past a person's vendor affiliations. apparently people think that if you work for a vendor you're nothing more than a mouthpiece for your employer and that the entire company is one big unified collective entity. no attempt is made to distinguish between divisions within a company or to recognize the huge difference between the technical people and the business people (you never know what you're going to get when the two overlap, though - just compare frisk with eugene kaspersky). it's not the business people, the marketroids (so called in order to distinguish them from actual human beings), or the HR departments participating in AMTSO, it's the researchers.
one final idea that kevin put forward in his post is the importance of the user - going so far as to suggest that users should be part of AMTSO, that users should determine whether tests are any good, etc. i don't know what on earth he was thinking, but the layman hasn't the tools to divine good science from bad. most users (and i say this as a user myself) haven't got the first clue about what makes a test good or biased. in fact most users don't read or interact with tests at all. the only thing they know about tests is what they read in vendor marketing material (usually on the cover of the box), which, as previously mentioned, neither testers nor AMTSO have any control over. i really don't see what users could bring to AMTSO, but i do see something that AMTSO could bring to users - tools to help them understand the tests, to put them in the proper perspective, and, yes, to pick out the good ones from the bad.
to be perfectly honest, i understand some of the indignation kevin is directing towards testers and by extension AMTSO, but i think it's misdirected. for most people, marketing is the first and sometimes only voice they hear with respect to security. it's marketing's job to distort and/or omit facts in order to make the company and its product/service look good. of course marketing does this at the behest of management, of CEOs and shareholders - people whose concerns are business and profit rather than the good of the user. none of that has anything to do with testing or AMTSO, however.