first, waves upon waves of people complain that signatures are the wrong way to go, that heuristics are too weak and need to be stronger
then, in reaction to that the vendors make the heuristics more strict and guess what happens - false alarms increase and people think anti-virus is getting 'dumb'...
the truth is anti-virus is no more dumb than it ever was, it's simply less tolerant...
now i'm sure there will be waves upon waves of people calling for other things - among them probably smarter heuristics.. to those people i'd like to say this: show me your solution to the halting problem and i'll show you a smarter heuristic...
sure you could make heuristics that are better able to distinguish between past malware and legitimate files without solving the halting problem, and no doubt that is one of the many things designers of heuristic analysis engines are constantly trying to do... unfortunately future malware is the class of malware that most needs alternative detection technologies like heuristics (because past malware is where signatures actually shine)... future malware is, by definition, different from past malware and since malware is created by intelligent adversaries, optimizing for accuracy on past malware doesn't help with future malware... in fact, there's no guarantee stricter heuristics will help either (at least not against those malware writers who use scanners to help optimize their malware for non-detection), but it does have a better chance and frankly, for a business the cost of a false positive is generally a lot less than the cost of a false negative...
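to make the 'less tolerant' point concrete, here's a toy sketch in python - the traits, samples, and thresholds are all invented for illustration and bear no resemblance to any real engine's internals:

```python
# toy illustration only - the traits and samples below are invented,
# and no real heuristic engine works on anything this simple
SUSPICIOUS_TRAITS = {"self_modifying", "packs_own_code", "writes_autorun",
                     "hooks_keyboard", "no_version_info"}

def heuristic_score(traits):
    """count how many known-suspicious traits a sample exhibits"""
    return len(SUSPICIOUS_TRAITS & traits)

def classify(traits, threshold):
    """flag the sample as malware when its score meets the threshold"""
    return heuristic_score(traits) >= threshold

# a legitimate packed installer shares some traits with real malware
legit_installer = {"packs_own_code", "no_version_info"}
known_malware = {"self_modifying", "packs_own_code", "writes_autorun"}

# tolerant threshold: catches this malware, raises no false alarm
assert classify(known_malware, threshold=3) is True
assert classify(legit_installer, threshold=3) is False

# stricter threshold: same engine, same 'intelligence' - but now the
# legitimate installer is a false positive
assert classify(legit_installer, threshold=2) is True
```

nothing about the engine got dumber between the two runs - only the threshold moved, trading false negatives for false positives...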
as for making heuristics that are better able to distinguish between future malware and legitimate files, not only would such a heuristic engine need to be better able to determine what a file does by analyzing its contents (a necessity due to not knowing beforehand what future malware will look like, and a pursuit where the halting problem has real and significant relevance) but it would also need to be better able to determine factors that are actually beyond the scope of computation, such as the context in which such code would be executed (ie. at what point do you definitively label format.com with a possibly different filename as good or bad)... as such, even if we could solve the halting problem we still wouldn't have a perfect heuristic engine, only a smarter one - and without the contextual understanding, smarter might not even turn out to be that much better...
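a minimal sketch of why context matters - everything here (the verdict function, the context fields) is hypothetical, since real context is exactly what a scanner looking only at file contents doesn't have:

```python
# hypothetical sketch - the verdict function and its context fields are
# invented; the point is only that identical bytes earn different labels
def verdict(file_bytes, context):
    """the bytes alone are ambiguous; intent lives in the context"""
    if context["invoked_by_user"] and context["user_consented"]:
        return "legitimate tool"
    return "potentially malicious"

formatter = b"FORMAT-UTILITY-CODE"  # stand-in for format.com's contents

# the very same bytes, two different (and both correct) verdicts
assert verdict(formatter, {"invoked_by_user": True,
                           "user_consented": True}) == "legitimate tool"
assert verdict(formatter, {"invoked_by_user": False,
                           "user_consented": False}) == "potentially malicious"
```

no amount of analysis of `formatter` itself can recover what only the context supplies...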
devising a framework for thinking about malware and related issues such as viruses, spyware, worms, rootkits, drm, trojans, botnets, keyloggers, droppers, downloaders, rats, adware, spam, stealth, fud, snake oil, and hype...
Monday, January 28, 2008
Friday, January 11, 2008
vulnerability research vs. malware research
well, i said i was tempted to write a post on this topic, and even though it turns out i've written about it several times in the past (viruses and disclosure, malware and disclosure, full disclosure to name but a few) i'm going to try it again in hopes that after this one i won't ever have to repeat myself on this subject again (yeah, as if)...
as always, the most interesting debates wind up being about meaning, so let's define some terms for the sake of this discussion... a vulnerability is in essence a mistake that allows a component or service of a system to behave in an unexpected and unintended way that has a negative impact on the security of the system (i know not everyone subscribes to the mistake definition, but it's useful to draw a distinction between those vulnerabilities which can be fixed and those that can't - rest assured i will cover the alternative later)... vulnerability research, then, is the process of discovering these mistakes; sometimes by looking at source code, sometimes by reverse engineering the vulnerable system, sometimes even by accidentally or intentionally triggering the unintended behaviour... now there's a pretty broad consensus (with the exception of the bad guys) that vulnerabilities are bad so naturally we want to get rid of them, and because they're mistakes they can be fixed... however, because vulnerability research is often performed by people other than those who made the vulnerable system in the first place, it becomes necessary to find a way to demonstrate the existence of the vulnerability to others... further, those tasked with fixing the vulnerability require a reliable means of triggering the unintended behaviour in order to determine why that behaviour occurs and in order to ensure that whatever fix they put in place actually fixes the vulnerability... therefore we need an exploit to trigger the unintended behaviour so that the vulnerability can be demonstrated to others to convince them that a fix is needed, and also to test for the presence of the vulnerability when developing (or even applying) a fix...
of course with those uses in mind it's important to make the distinction that what we want in the way of exploits are actually benign exploits (as opposed to malicious/weaponized ones)... for example, when dealing with an arbitrary code execution vulnerability, an exploit that launches notepad is preferable to one that launches tftp with command line parameters to download something malicious from the internet and then subsequently launch that... you see exploits can be used for malicious purposes as well as the positive ones already described, and the less benign the exploit is the easier it is for a bad guy to do something bad with it...
now so far i've described exploits as things that are able to demonstrate the existence of a mistake that we want to fix... clearly when that mistake isn't present the exploit shouldn't work so we can say that the exploit depends on the existence of the mistake/vulnerability...
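to make the fixable-mistake idea concrete, here's a toy sketch in python - the parse_record function and its record format are invented for illustration, not taken from any real system: a 'vulnerability' as an unchecked assumption, a benign exploit that reliably triggers it, and a fix that the very same exploit can then verify...

```python
# invented example: a length-prefixed record parser with a classic
# input-validation mistake - the declared length is trusted blindly
def parse_record(data):
    """return the payload of a length-prefixed record.
    mistake: no check that the declared length fits the actual data"""
    declared_len = data[0]
    return data[1:1 + declared_len]

# benign exploit: a crafted record whose declared length (200) exceeds
# the real payload - it demonstrates the mistake without doing any harm
crafted = bytes([200]) + b"short"
payload = parse_record(crafted)
assert len(payload) != 200        # behaviour diverges from the spec
assert payload == b"short"

# the fixed parser rejects what it previously mishandled
def parse_record_fixed(data):
    declared_len = data[0]
    if declared_len > len(data) - 1:
        raise ValueError("declared length exceeds record size")
    return data[1:1 + declared_len]

# the same benign exploit now fails, confirming the fix - exactly the
# 'exploit depends on the mistake' relationship described above
try:
    parse_record_fixed(crafted)
    exploit_still_works = True
except ValueError:
    exploit_still_works = False
assert exploit_still_works is False
```

note the exploit's payload is deliberately inert - the benign analogue of 'launch notepad' rather than 'download and run something'...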
malware is a different beast entirely because it doesn't (in general) depend on the existence of a mistake or vulnerability... what does it depend on? well, from one of the seminal pieces of research on viruses we know that self-replicating malware depends on 3 things: the ability to share data (necessary for the attacker to get his malicious software from his own machine to the victim's machine), the ability to pass data that's been shared with you along to others (sharing transitivity - necessary for self-replicating malware to spread or pass itself along), and the ability to interpret data that's been shared with you as program code (ie. the ability to add to or change the set of things the computer is willing to execute - the generality of interpretation - necessary for the computer to be able to execute the new incoming malware)... none of these things can really be considered flaws or mistakes, in fact it's hard to imagine computers being as useful as they are without them... without sharing there would be no internet, no floppy/cd/dvd drives or printers (they would allow sharing over the sneakernet), no store-bought software (you'd have to make all the software, including the operating system, yourself), etc... not being able to pass anything along would be equally bizarre (though that's the world that hollywood envisions and tries to make real through the use of DRM), it would also make the division of labour very hard to manage because hierarchies would be impossible (your boss wouldn't be able to pass along to you anything his boss had given him, nor would he be able to pass along to his boss anything you had produced)... 
not being able to interpret data as program code would be the most profound change of all as it is this property that makes the general purpose computer a general purpose computer rather than a pocket calculator - it allows a single computer to be flexible enough to be used for many changing purposes and without that flexibility we'd need a different piece of hardware specially designed for each and every new task that came our way... it should be clear that these are things we neither can nor want to 'fix' and so self-replicating malware will always be possible... it turns out that all sharing transitivity does for self-replicating malware is allow it to spread, so we can therefore say that in general non-replicative malware doesn't need it (though that doesn't necessarily mean some specific instances won't use it, such as for the command and control of more sophisticated botnets)... there are no additional dependencies worth mentioning for non-replicative malware in general (the exception being exploits, which can be considered a type of malware and obviously depend on vulnerabilities) because the underlying functionality they use is the same functionality used by normal software, it's just that that functionality is used to perform actions we don't want performed... communicating with a remote site the way RATs, botnets, and spyware do is just sharing of messages, and we've already covered sharing... writing to the hard disk is pretty fundamental and can even be considered a kind of sharing between sessions or a prerequisite of media-based sharing... outputting to the screen (something adware necessarily has to do) is necessary in order for the user to properly interact with the computer (and really all input and output can be considered as falling under the umbrella of sharing)... since there are no dependencies in general that can be 'fixed', this case therefore covers the alternative of the vulnerability that isn't a mistake...
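the generality of interpretation can be demonstrated harmlessly in a few lines of python - the same string is inert data one moment and running code the next:

```python
# harmless demonstration of the generality of interpretation: the same
# bytes can be inspected, copied, and shared as data, or executed as code
program_as_data = "result = 6 * 7"      # just a string: shareable data

# treated as data: it can be measured, copied, and passed along
assert isinstance(program_as_data, str)
copy_for_a_friend = program_as_data     # sharing transitivity in miniature

# treated as code: the general purpose machine will happily run it
namespace = {}
exec(copy_for_a_friend, namespace)
assert namespace["result"] == 42
```

nothing about the string itself marks it as 'program' rather than 'data' - the distinction is entirely in how the system chooses to treat it, which is precisely the property that can't be 'fixed' without giving up the general purpose computer...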
now a clever reader at this point would be asking him/herself how much sense it makes to perform the previously described type of vulnerability research on a vulnerability that can't be fixed... a knowledgeable reader would know the answer: it doesn't make sense to perform that sort of research... an argumentative reader might point out that such malware can exploit fixable vulnerabilities too, but the answer to that is to research the fixable vulnerability and look at the exploit for that on its own rather than connected to more general malware...
so how does that change the nature of research around malware? well for one thing the previously discussed benefits of creating benign exploits don't apply in malware research because they were all predicated on the idea that what allowed the thing to exist could be fixed... we know that's not generally true for malware and in the cases where it is true it's because the malware contains an exploit that can exist separate from the malware and be used that way instead of connected to the malware (which only serves to make the exploit less benign)... additionally, because malware in general depends on non-fixable properties of a system (not to mention it's less benign), the benefits that society reaps from sharing benign exploits with the public at large also don't apply to malware for essentially the same reason...
another, more intrinsic difference between the two research fields is that in vulnerability research we create a special kind of malware called exploits as indicators of the presence of the vulnerability but in malware research we do not create any kind of malware as an indicator of the presence of malware (that just wouldn't make sense)... creating malware in the course of doing malware research would actually be analogous to creating vulnerabilities in the course of doing vulnerability research... the reason we create indicators for vulnerabilities is that the vulnerabilities themselves are not exactly visible or shareable, they aren't distinct entities in and of themselves, whereas if i want to show you a piece of malware i give you the actual malware...
a third difference between the two research fields is that there is (according to popular informed opinion) a discrete, finite number of vulnerabilities while the number of possible malware is, theoretically, countably infinite - limited in practice only by the available storage space... so while it may make a certain kind of sense to enumerate vulnerabilities, making an exploit for each one so that the vulnerability can be mitigated (hopefully before it gets used by the bad guys), it's a very different idea to try and enumerate all possible malware in hopes of being able to mitigate all of them (because there are just too many of them and because there are so many, most will never actually be made by either the good guys or the bad guys)... furthermore, while it might seem feasible (or at least within the realm of hope) to stay ahead of the bad guys with regards to vulnerabilities, no matter how many pieces of malware we preemptively make in order to mitigate them the chances that we will block a bad guy by beating him to the creation of a particular piece of malware are essentially zero... the malware we might make is practically guaranteed to be different from the malware the bad guys will make...
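the arithmetic behind the enumeration argument is easy to sketch (the numbers below are order-of-magnitude illustrations, not measurements of anything real):

```python
# back-of-the-envelope arithmetic for the enumeration argument: counting
# the distinct byte strings of a given length (each a candidate program)
def distinct_files(length_bytes):
    """every byte has 256 possible values, so counts multiply per byte"""
    return 256 ** length_bytes

# even absurdly tiny 'files' outnumber any feasible enumeration effort
assert distinct_files(1) == 256
assert distinct_files(8) == 2 ** 64       # eight bytes: ~1.8e19 strings
assert distinct_files(100) > 10 ** 240    # a 100-byte file space
```

against numbers like these, the odds of preemptively creating the particular piece of malware a bad guy was going to write are, for all practical purposes, zero - which is the point made above...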
by now you should be starting to see how different malware research is from vulnerability research and how pointless creating malware for the purposes of malware research is compared to creating benign exploits for the purposes of vulnerability research... instead, malware research focuses on malware that has already been made and classes of malware that already exist... rather than creating malware and hoping in vain that the bad guys' malware will have the same properties, malware researchers share data with each other about new malware trends while they're still in their early stages and make educated guesses about what the next step will be... sometimes their predictions are right (such as storm botnet nodes being used in phishing, which was predicted when it was observed that the botnet was becoming segmented) and sometimes they aren't (or at least aren't yet, like some of the more ambitious predictions about mobile malware)... as such, malware researchers also look at the human attackers creating/using the malware - trying to guess their motivations, gauging their skill level, determining their connections with others, etc., because all those things are important in predicting what they might do next... enumerating possible future malware is not nearly as useful or accurate as a predictive tool in malware research as its counterpart is in vulnerability research...
given these profound differences, it really makes very little sense to treat malware research the same as vulnerability research or malware in general the same as a benign exploit...
Tags:
exploit,
malware,
vulnerability
Wednesday, January 09, 2008
follow-up on the ethical conflict in the webappsec domain
well, it's been several days now and there have been a number of reactions (such as the comment thread for the article i originally linked to or christofer hoff's reaction to my previous post on the subject)...
i wanted to post a follow-up to address robert hansen's reaction to my post and i was actually planning on posting it sooner but now i'm glad i didn't because i get to throw in material from mike rothman as well...
i'm going to start with mike, actually, since i can at least link to what he wrote (though past experience trying to comment on his blog has been less than stellar - ie. the comments seem to get lost)... mike posted his reaction in today's daily incite and although he thinks he knows what my argument is about he actually gets it rather spectacularly wrong...
creating threats, increasing the total number of threats out there is not something white hats are supposed to be doing... that contributes to the problem, not the solution...
popularizing the threat means that the bad guys' adoption of this class of threat agent will speed up, in other words the arrival of xss worms and other web-based malware (which are probably an inevitability given our current trend towards web-based computing) happen sooner...
both of which ultimately benefit mr. hansen...
moving right along to robert hansen's reaction (which, for previously described reasons, i'm not comfortable linking to so the quotes will just have to be good enough):
and of course there were numerous comments in support of mr. hansen's actions... from christofer hoff:
from digi7al64:
malware materials belong in the hands of the public about as much as infectious biological disease samples do...
from someone calling themselves spyware:
the ability to support malware such as worms and viruses an inherent property (rather than a flaw that can be corrected) of any system where sharing is allowed, where that sharing is transitive (what i share with you, you can share with others), and where data can be interpreted as program code... nothing in the particular implementations or even in larger patterns one sees in groups of implementations of xss worms is ultimately going to be able to stop xss worms once and for all - only stopping xss itself or all active content stops those things which allow xss worms to be...
from the very briefly named xs:
and from robert (presumably a different robert than mr. hansen who prefers to go by the pseudonym rsnake):
for example, you would not benefit if your personal details (full name, mailing address, telephone number, financial information, etc) became less obscure...
and that's where things stood the last time i checked... nobody (and i mean nobody) seems to get the argument about the contest leading to the threat class entering the mainstream sooner... mr. hansen came closest when he disingenuously tried to refute the argument that he stood to gain from the problem becoming worse... but the real head scratcher, the thing that i've found most frustrating of all is how people like to apply vulnerability research logic to malware research issues... i'm actually tempted to write a new post specifically about just that...
i wanted to post a follow-up to address robert hansen's reaction to my post and i was actually planning on posting it sooner but now i'm glad i didn't because i get to throw in material from mike rothman as well...
i'm going to start with mike, actually, since i can at least link to what he wrote (though past experience trying to comment on his blog has been less than stellar - ie. the comments seem to get lost)... mike posted his reaction in today's daily incite and although he thinks he knows what my argument is about he actually gets it rather spectacularly wrong...
Kurt thinks that this is going to uncover an attack that wouldn't have been discovered otherwise and that's a bad thing.no, i think that the contest is going to a) create new threat agents (ie. worms - that is the point of the contest after all) and b) popularize this class of threat...
creating threats, increasing the total number of threats out there is not something white hats are supposed to be doing... that contributes to the problem, not the solution...
popularizing the threat means that the bad guys' adoption of this class of threat agent will speed up, in other words the arrival of xss worms and other web-based malware (which are probably an inevitability given our current trend towards web-based computing) happen sooner...
both of which ultimately benefit mr. hansen...
"That we shouldn't trust these researchers because they think like hackers."
no, we shouldn't trust them because they act like bad guys... creating malware and/or requesting the creation of malware is fundamentally not a white hat activity because it contributes to the malware problem (the way dropping litter on a littered street contributes to the litter problem)...

"My point is that in all likelihood they are working on smaller footprint and more innovative XSS attacks and they are going to figure stuff out."
indeed they may well do so, but they sure as hell shouldn't receive help from us in order to get there...

"So we need to engage in similar tactics to understand the attack surface and protect our stuff."
similar tactics? defense is not the same as attack, they don't use the same tactics - and even if it were appropriate to use tactics similar to the bad guys (and there's a serious question of how we differentiate ourselves from the bad guys if we do), the bad guys are NOT doing their research in the public sphere... they at least have the good sense to use darknet channels to avoid helping us... that is the tactic we should be copying...

"How will we defend ourselves if we aren't doing similar research?"
by analyzing threats rather than creating new ones... there is no real reason to expect the malware we write and the malware they write will be similar enough for defenses against our own malware to automatically work against theirs as well...

"Kurt's entire argument is based on the assumption that the bad guys aren't going to figure the stuff out anyway."
and once again, spectacularly wrong... my argument is based on the principle of not being part of the problem... it doesn't matter if they figure this stuff out on their own or not, we shouldn't be helping them...

"But playing the ostrich game and hoping the problem goes away doesn't work very well."
playing dumb and ignoring the unintended consequences of your own actions doesn't work very well either...
moving right along to robert hansen's reaction (which, for previously described reasons, i'm not comfortable linking to so the quotes will just have to be good enough):
"Clearly, and admittedly most of these people have no background in the issue and have never read this site"
and then there's me who both has a familiarity with worms and reads the site...

"as there is lots of samples of existing worm code in lots of places on the Internet now. Just because they don’t know about it doesn’t mean it’s not there."
and just because it's out there doesn't mean you should add to it...

"I’ve always said, you don’t understand a problem until you see it and play with it."
playing with malware and creating new malware are not on the same level...

"If working to help the understanding of worm propagation makes me evil, so be it."
understanding may be the ends but those ends don't justify the means... not everything done under the banner of a laudable goal is alright...

"I’d rather be evil and be able to help solve problems than be good and be useless at solving the problem"
though i've never personally referred to mr. hansen as evil, his current path will make the problem worse, not better, and it most certainly won't solve the problem... worms are not a solvable problem...

"Will this empower bad guys? I’d be nieve to say there’s no chance of that."
in truth some of the bad guys will gain at least as much knowledge from this experiment as the good guys will... further, the knowledge this produces will be directly usable by them, not us... the contest is advancing the art of xss worm creation (creating new threat agents is the job of the attacker, not the defender), with the intention that the results will be usable in a second stage of research into xss worm mitigation...

"For people who liken me to an anti-virus company writing viruses,"
i think the closest anyone came to making that comparison was me, but what i actually compared mr. hansen to was an anti-virus vendor motivating others to create malware that would ultimately benefit that vendor... and i made that comparison because, with the exception that mr. hansen isn't an anti-virus vendor, that's exactly what's going on...

"I’d like to point out the fact of the matter which is that I don’t get paid to consult with browser companies on browser security"
fine, the browser companies don't pay him, but the companies hoping to avoid getting hit by xss worms (among other things) do... the more popular that threat becomes, the greater the chance such companies will get hit and therefore the greater the demand for his services...

"To date I also have never been paid by any company who has ever been hit by an XSS worm."
yeah "to date"... that's because the number is still relatively low... but as i said, he does get paid by companies hoping to avoid that fate...

"Also, unlike an anti-virus company, I don’t have a security product in development."
instead he has services and a brand and who knows what he'll be able to leverage that into in the future... jamie butler wasn't part of an anti-stealthkit company when he was writing and popularizing stealthkits - it wasn't until later that he became the CTO of a government funded anti-stealthkit startup aiming to solve the problem he helped create...

"Think the bad guys are going to stop their own research if we stop talking about it?"
no, i just think they won't get a helping hand anymore... or has the idea of not being part of the problem gone out of fashion?

"But through it all, I’m 100% confident that this will lead to previously non-published/understood results about worm propagation"
and i'm 100% confident that he will hasten the onset of the web-based malware problem by popularizing it as he does...

"Time to start working on solutions, rather than trying to keep the research quiet."
said as if those two things were mutually exclusive... they aren't...
and of course there were numerous comments in support of mr. hansen's actions... from christofer hoff:
"I found it rather interesting that Kurt took the tact that he did. I think his point regarding the potential for misuse of code generated as a result of the contest is plausible but unlikely."
it was also the lesser of the two issues i was bringing up... however, since the bad guys have shown that they are in fact willing to use proof of concept code written by the good guys (see: bootroot) it's still something one should take seriously...

"Honestly, PoC code for any sort of exploitable vulnerability has the potential for misuse, so I’m not convinced this is a corner case that deserves the flambe treatment it’s getting."
apparently i'm going to have to get used to saying 'except these are worms, not just exploits' (because i already said it a couple times on hoff's own blog)...

"However, I found it a bit of a reach to accuse you of ethical violations and seeding the world with Malware so you could profit from the results as part of a giant conspiracy theory."
i didn't say he was seeding the world with malware, i said he was increasing the xss worm's mindshare... this contest does increase the raw number of xss worms, but it's worst impact is that it captures the imagination of more bad guys and makes them more apt to jump into this problem space sooner...

"It’s clear that many of those posting their opinions fail to recognize which side of the fence you sit and the contributions toward making the world “better” you have made."
the side of the fence on which he intends to sit and the one he winds up on are not necessarily the same thing... the road to hell is said to be paved with good intentions after all... whatever contributions he may have made, they have to be taken in context of how they were made...
from digi7al64:
"Also in response to this from Dr. Vesselin Bontchev who stated “Respectable security researchers don’t encourage the creation of malware by running contests for it!”. Sir, I don’t believe that a single entity of peers should be solely those with the knowledge to determine who and when the general public should be “allowed” this type of information."
and the reason s/he (sorry, but the pseudonym does not convey a definite gender) believes that is because s/he is either naive, believes malware research is the same as vulnerability research, or is ignorant of malware research/creation history...
malware materials belong in the hands of the public about as much as infectious biological disease samples do...
from someone calling themselves spyware:
"Hiding the problem instead of stopping it? What are we, scared? Afraid for the consequences? Act NOW and you are safe."
this person is obviously someone who believes xss worms can be stopped once and for all (thereby making one 'safe')... that's not going to happen unless xss itself can be wiped out (which doesn't require worm research much less worm creation) or unless the entire web regresses to purely static content...
the ability to support malware such as worms and viruses is an inherent property (rather than a flaw that can be corrected) of any system where sharing is allowed, where that sharing is transitive (what i share with you, you can share with others), and where data can be interpreted as program code... nothing in the particular implementations or even in larger patterns one sees in groups of implementations of xss worms is ultimately going to be able to stop xss worms once and for all - only stopping xss itself or all active content stops those things which allow xss worms to be...
from the very briefly named xs:
"Education should never be considered unethical."
education at the expense of others may well be unethical depending on the nature of that expense...
and from robert (presumably a different robert than mr. hansen who prefers to go by the pseudonym rsnake):
"Security Through Obscurity has never been a good approach,"
relying on obscurity for your security is not a good approach, but there are plenty of things for which removing obscurity does not help you...
for example, you would not benefit if your personal details (full name, mailing address, telephone number, financial information, etc) became less obscure...
and that's where things stood the last time i checked... nobody (and i mean nobody) seems to get the argument about the contest leading to the threat class entering the mainstream sooner... mr. hansen came closest when he disingenuously tried to refute the argument that he stood to gain from the problem becoming worse... but the real head scratcher, the thing that i've found most frustrating of all is how people like to apply vulnerability research logic to malware research issues... i'm actually tempted to write a new post specifically about just that...
Saturday, January 05, 2008
eEye and malware creation ethics
wow, i just got done throwing out a mention to the new MBR stealthkit and now i find out its pedigree...
folks, it's time you go over your malware researchers with a fine-tooth comb because i don't think anybody in the industry really wants to be responsible for another proof of concept getting into the wild...
that is, unless you're eEye digital security, apparently... not only was it their researchers derek soeder and ryan permeh that created bootroot and presented it at blackhat in 2005, but even as i type eEye is hosting that malware on their website right now...
this is absolutely NOT responsible behaviour and the people who were and are responsible for both the stealthkit and the company should be ashamed of themselves for being part of the problem instead of part of the solution... anti-malware researchers and companies should not be creating and distributing malware - this is so fundamentally wrong i'm actually tongue-tied trying to describe how wrong it is... needless to say i won't be touching anything of theirs with a 10 foot barge pole and others might want to consider the same...
*update* - tyler reguly caught or otherwise helped me see a factual error i made... the stealthkit in the wild is not the exact one developed and distributed by eEye but rather is a modified version of it... that doesn't really change much, however, because they still armed the bad guys...
Tags:
bootroot,
derek soeder,
eEye,
ethics,
ryan permeh,
stealthkit
everything old is new again
well, it seems like the past is coming back into style again...
there's behavioural detection in anti-virus suites, there's boot sector malware (even though it never really went away, it just died down to background noise levels) in the form of a new MBR stealthkit (anyone want to hazard a guess what this will mean for the wipe and re-install folks? 'cause format doesn't touch the MBR and fdisk isn't necessarily as straightforward as an average user would like), and now there's home made anti-virus signatures?
yup, jose nazario posts to show how to create clam anti-virus signatures for the latest storm trojan emails...
y'know, i remember when more mainstream anti-virus products had virus description languages that were simple enough that arbitrary people could create their own signatures... however, that was back in the early '90s when they were still growing out of being simple string scanners - when handling polymorphism simply meant using wildcard characters in the scan string (the precursor of today's virus signature, composed of a sequence of bytes often in hexadecimal form)...
i can honestly say i was surprised to find out that such an ancient technique could still be used with clamav; and if you think i'm being too critical of clam, look closely at the instructions in the article - those appear to be quite literally scan strings (and apparently no wildcards) in ascii form...
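the kind of wildcard-capable scan string described above is simple enough to sketch in a few lines of python... (this is a simplified illustration of the general technique, not clamav's actual matching engine - the signature names and the exact wildcard syntax here are invented for the example)

```python
import re

# hypothetical example signatures: a detection name mapped to a hex
# scan string, where '??' matches any single byte and '*' matches any
# gap - loosely modelled on early-'90s wildcard scan strings, NOT on
# any real signature database format
SIGNATURES = {
    "Example.Exact": "deadbeef",
    "Example.Wildcard": "de??be*ef",
}

def hex_sig_to_regex(sig):
    """compile a wildcard hex scan string into a bytes regex"""
    parts = []
    i = 0
    while i < len(sig):
        if sig[i] == "*":            # '*' - any number of bytes
            parts.append(b".*")
            i += 1
        elif sig[i:i + 2] == "??":   # '??' - exactly one arbitrary byte
            parts.append(b".")
            i += 2
        else:                        # literal byte, two hex digits
            parts.append(re.escape(bytes.fromhex(sig[i:i + 2])))
            i += 2
    return re.compile(b"".join(parts), re.DOTALL)

def scan(data):
    """return the names of all signatures found in the data"""
    return [name for name, sig in SIGNATURES.items()
            if hex_sig_to_regex(sig).search(data)]
```

a buffer containing the bytes de ad be ef would match both example signatures, while a variant with a different byte in the '??' position would still trip the wildcard one - which is all the wildcard ever bought you against polymorphism...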
Tags:
anti-virus,
clamav,
jose nazario
no more non-distribution for virustotal
the hispasec folks announced thursday that they would be getting rid of the virustotal feature that allowed people to scan samples without submitting those samples to anti-malware companies when appropriate...
i think that's great news, because it makes it that much harder for the bad guys to use the feature maliciously... now the hispasec folks may want to believe that wasn't really an issue but let's face it, if there are still script kiddies asking for malware on usenet (and it does happen from time to time) then it stands to reason that some malware writers would misuse virustotal to test their malware... they're people, not perfectly rational automatons, they do things that don't always make sense to us - sometimes because what they know and what we know doesn't always completely overlap, sometimes just because they're cheap and lazy (and aren't we all to some extent?)...
the article also points out that there's a technical reason why using virustotal to test malware isn't useful for the malware writers - the results don't tell the whole story with regards to the detectability of the malware sample by the various anti-malware products used because virustotal only uses the scanner portion of those products... now, while non-scanner based anti-malware components in anti-malware suites are nothing new, they did fall out of favour for quite a while... thankfully, as the hispasec folks point out, they're coming back and that renders virustotal results meaningless to a malware writer... what they don't point out, however, is that it also renders virustotal results meaningless for the anti-virus critics who use virustotal results against new malware as proof that anti-malware companies aren't able to stay ahead of the threat...
there are 2 things in the article that baffle me though... first is how anyone could think they're protecting confidential documents by using the non-distribute feature (hello, you're submitting the file to a 3rd party, confidentiality is lost at that moment, not when that 3rd party shares copies with other 3rd parties)... the second is why legitimate anti-malware vendors would need to use virustotal with the non-distribute feature when the bad guys have shown us it's easy enough to set up your own multi-product scanning system...
but all in all i always think it's good when we find ways to be less helpful to the bad guys...
Tags:
anti-malware,
hispasec,
malware,
malware creation,
virustotal
ethical conflict in the webappsec domain
anyone remember back in 2005 when the folks at dvforge decided to give malware authors an incentive to create mac os x malware? well don't look now but something very similar has just happened with regards to xss worms...
yes, folks... robert hansen (aka rsnake), the founder and ceo of sectheory, felt it would be a good idea to hold a contest to see who could create the smallest xss worm... ok, so there's no money changing hands this time, but that doesn't mean the winner isn't getting rewarded - there are absolutely rewards to be had for the winner of a contest like this and that's a big problem because lots of people want rewards and this kind of contest will make people think about and create xss worms when they wouldn't have before...
would you trust your security to a person who makes or made malware? how about a person or company that intentionally motivates others to do so? why do you suppose the anti-virus industry works so hard to fight the conspiracy theories that suggest they are the cause of the viruses? at the very least mr. hansen is playing fast and loose with the public's trust and ultimately harming security in the process, but there's a more insidious angle too...
while the worms he's soliciting from others are supposed to be merely proof of concept, the fact of the matter is that proof of concept worms can still cause problems (the recent orkut worm was a proof of concept)... moreover, although the winner of the contest doesn't get any money, at the end of the day there will almost certainly be a windfall for mr. hansen - after all, what do you suppose happens when you're one of the few experts on some relatively obscure type of threat and that threat is artificially made more popular? well, demand for your services goes up of course... this is precisely the type of shady marketing model i described before where the people who stand to gain the most out of a problem becoming worse directly contribute to that problem becoming worse... it made greg hoglund and jamie butler household names in security circles, and it made john mcafee (pariah though he may be) a millionaire...
and just in case you bought into that argument that the idea was to distill the samples down to the pure essence of what an xss worm is without all that obfuscation muddying the waters, consider this: in every past instance of extreme code size optimization (which is a foregone conclusion in a contest to find the smallest anything) the final outcome has actually been made more obscure by virtue of the hacks and tricks used to squeeze out as many unnecessary bytes as possible...
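the obscuring effect of extreme size optimization is easy to demonstrate with a harmless example (deliberately unrelated to xss or malware - ordinary python, invented purely for illustration):

```python
# readable version: is n a prime number?
def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

# size-optimized version of the same test - every byte saved buries
# the intent a little deeper, which is the general pattern when code
# is golfed down to the smallest possible form
p = lambda n: n > 1 and all(n % d for d in range(2, n))
```

both behave identically on every input, but only one of them communicates what it's doing - and this was a trivial case... the tricks only get uglier as the code being shrunk gets more complex...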
how is it that the security industry has arrived at a state where the people in it promote the threats they're supposed to be helping protect against? how does making problems worse serve the greater good?
*update* - there's now a follow-up to this article here
Tags:
ethics,
robert hansen,
rsnake,
sectheory,
xss worm contest