recently, some old news has become new again. when i first read about DLL search order hijacking in nick harbour's post on mandiant's blog i thought to myself 'gee, that sounds an awful lot like a companion virus, except without the self-replication part'.
apparently i wasn't the only one. robert sandilands made the companion virus connection too, except for some reason he combined execution precedence companion viruses with path precedence companion viruses. path precedence is the only one needed here, and if you really think about path precedence companions (or you go back and read nick harbour's post again) you'll see that all this focus people are giving to the issue of the current directory is taking attention away from a broader issue.
while the current directory could be used to initiate a compromise of a machine, a malicious DLL anywhere in the search path ahead of the legitimate DLL would still pose a problem as an exotic form of persistence. and if an additional vulnerability (in a browser, say) can be used to plant a malicious DLL at the beginning of the search path, then a machine can once again be compromised regardless of the current-directory-centric mitigations that are available right now.
the real problem with DLL search paths is that they exist at all. there should be no searching, no question of which DLL your process is trying to load or where to find it. so long as there is searching for DLLs, there exists the possibility of finding and loading the wrong one (the malicious one).
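to make the mechanics concrete, here's a toy python sketch (a model of first-match resolution, not real DLL loading - the file names are made up): whichever directory comes earlier in the search path wins, while naming the legitimate file by its full path involves no search at all.

```python
import os
import tempfile

def find_dll(name, search_path):
    """return the first file called `name` found along `search_path`.

    this mirrors the basic first-match-wins behaviour of a DLL search:
    whichever directory appears earlier wins, regardless of which copy
    is the legitimate one.
    """
    for directory in search_path:
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate):
            return candidate
    return None

# two directories, each holding a file named "helper.dll"
early = tempfile.mkdtemp()
late = tempfile.mkdtemp()
open(os.path.join(early, "helper.dll"), "w").close()   # the planted copy
open(os.path.join(late, "helper.dll"), "w").close()    # the legitimate copy

# the search finds the planted copy first...
assert find_dll("helper.dll", [early, late]) == os.path.join(early, "helper.dll")

# ...whereas naming the legitimate copy by full path involves no search at all
legit = os.path.join(late, "helper.dll")
assert os.path.isfile(legit)
```

the point of the second assertion is the mitigation: a fully qualified name leaves nothing to search and therefore nothing to hijack.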
devising a framework for thinking about malware and related issues such as viruses, spyware, worms, rootkits, drm, trojans, botnets, keyloggers, droppers, downloaders, rats, adware, spam, stealth, fud, snake oil, and hype...
Thursday, August 26, 2010
what is an alias companion virus?
an alias companion virus is a type of companion virus that takes advantage of the command aliasing functionality of operating systems that support it in order to get executed instead of the program the user intended to execute.
command aliasing is a feature whereby long, complex commands can be given simple, easy to type aliases in order to enhance the usability of the system. MS-DOS didn't support this functionality, nor is there much demand for it in the face of graphical user interfaces, but some alternative DOS command shells did, and various flavours of *nix (including linux) do as well.
if a viral program creates an alias that points to itself but gives that alias the same name as some other program (the so-called host program), and if the operating system in question gives execution precedence to aliases (which they often do), then the alias (or more specifically the viral program it points to) will get executed instead of the host program the alias got its name from.
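the precedence rule can be modeled in a few lines of python (a toy model of a shell's lookup order with made-up names, not any real shell's implementation):

```python
def resolve_command(name, aliases, path_programs):
    """model a shell that gives aliases precedence over programs on the PATH."""
    if name in aliases:
        return aliases[name]          # the alias target runs instead
    if name in path_programs:
        return path_programs[name]
    return None

path_programs = {"ls": "/bin/ls"}
# a hostile alias named after the real program, pointing at its own code
aliases = {"ls": "/tmp/viral_program"}

# the alias shadows the real program...
assert resolve_command("ls", aliases, path_programs) == "/tmp/viral_program"
# ...and with no alias defined, the real program is found as usual
assert resolve_command("ls", {}, path_programs) == "/bin/ls"
```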
back to index
Tags:
alias companion virus,
definition
what is a renaming companion virus?
a renaming companion virus is a type of companion virus that uses the 'rename file' functionality of the operating system to assume the identity of the host program it infects.
a renaming companion virus will rename the host program it's infecting (often to something with the same name but a different file extension so that it's easy to find later) and then rename itself so that it has its host's original name. this way, when people attempt to execute the host program they will be executing the virus instead, and the virus can then locate the renamed host and execute it so that the user doesn't suspect anything is wrong.
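the rename-swap itself is just two filesystem operations. here's a benign python simulation using ordinary text files (the names are hypothetical and no actual executables are involved):

```python
import os
import tempfile

workdir = tempfile.mkdtemp()
host = os.path.join(workdir, "host.exe")
with open(host, "w") as f:
    f.write("legitimate program")

# step 1: rename the host to the same name with a different extension
renamed_host = os.path.join(workdir, "host.dat")
os.rename(host, renamed_host)

# step 2: the companion takes the host's original name
companion = os.path.join(workdir, "host.exe")
with open(companion, "w") as f:
    f.write("companion; runs first, then launches host.dat")

# anyone asking for host.exe now gets the companion,
# while the original survives under the alternate extension
assert open(companion).read().startswith("companion")
assert open(renamed_host).read() == "legitimate program"
```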
back to index
what is a path precedence companion virus?
a path precedence companion virus is a type of companion virus that takes advantage of the precedence or order in which DOS traverses the PATH variable (a delimited list of directories) to find a partially specified executable file when attempting to execute it.
as a simple example, suppose the PATH variable holds just two directory names. if we issue a command to execute a file that happens to be in the second directory but we don't specify which directory it's in, DOS will search the first directory and then the second directory in order to find and execute the file. if another executable with the same name as our intended command happens to exist in the first directory then it will get executed instead of the one we intended.
regardless of where the program we intend to run is, if a companion program exists in a directory closer to the beginning of the PATH, that program will get executed instead.
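python's standard library can demonstrate the same first-match-wins rule that DOS's PATH traversal follows (this uses shutil.which on a unix-like system; the program name is made up):

```python
import os
import shutil
import tempfile

first = tempfile.mkdtemp()
second = tempfile.mkdtemp()

# the intended program lives in the second directory on the PATH...
intended = os.path.join(second, "report")
open(intended, "w").close()
os.chmod(intended, 0o755)

# ...but a companion with the same name sits in an earlier directory
companion = os.path.join(first, "report")
open(companion, "w").close()
os.chmod(companion, 0o755)

# first-match-wins: the companion shadows the intended program
found = shutil.which("report", path=os.pathsep.join([first, second]))
assert found == companion
```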
back to index
what is an execution precedence companion virus?
an execution precedence companion virus is a type of companion virus that takes advantage of the precedence or order in which DOS tries to find a partially specified executable file when attempting to execute it.
the canonical example is one where a DOS command is issued that would normally execute a file with an *.EXE file extension - should there be a file with the same name but a *.COM extension in the same directory, the *.COM file will be executed instead. in order to hide the fact that there was something unusual going on, the *.COM file would generally then explicitly execute the *.EXE file so that the program the user intended to execute actually gets executed.
whenever there are multiple executables with the same name but different file extension in the same directory, the system will always use a set rule to decide which takes precedence over the others. *.COM files take precedence over *.EXE files, which in turn take precedence over *.BAT files, and so on...
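that precedence rule is simple enough to model directly (a toy python sketch of DOS-style resolution, not actual DOS code; the command name is made up):

```python
import os
import tempfile

# DOS resolves a bare command name by extension precedence:
PRECEDENCE = [".COM", ".EXE", ".BAT"]

def resolve(name, directory):
    """return the file DOS-style resolution would pick for a bare `name`."""
    for ext in PRECEDENCE:
        candidate = os.path.join(directory, name + ext)
        if os.path.isfile(candidate):
            return candidate
    return None

d = tempfile.mkdtemp()
open(os.path.join(d, "EDIT.EXE"), "w").close()
assert resolve("EDIT", d) == os.path.join(d, "EDIT.EXE")

# drop a companion EDIT.COM next to it and the *.COM now wins
open(os.path.join(d, "EDIT.COM"), "w").close()
assert resolve("EDIT", d) == os.path.join(d, "EDIT.COM")
```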
back to index
Wednesday, August 25, 2010
a malware writer's conference?
WTF? no, seriously, what the f@%k??
ever since i first read about this a day or so ago on brian krebs' blog i've been struck by the utter stupidity of this idea.
to say that such a conference would have a strong law enforcement contingent present would be stating the patently obvious.
if security companies sent representatives it would be for the same reason - not to learn from these malware writers, but to learn about them. profiling these sorts of people for the purposes of assisting law enforcement is something security researchers have been doing behind the scenes for a long time.
but even disregarding the practical complications of exposing oneself to greater scrutiny by attending, the supposed beneficial objective (and i say supposed because the site itself makes no hint at any beneficial objectives whatsoever) of helping security researchers learn about the next generation of malware is just as stupid.
anti-malware vendors do not have any difficulty figuring out malware. each of the several dozen companies out there has its own army of reverse engineers picking these things apart all the time, and they've gotten so good at it and automated so much of it that in 2006 the average piece of malware could be processed in about 5 minutes. and since it's automated by computer, moore's law should make that process even faster now.
if (and i'm speaking hypothetically here) anti-malware companies have problems, understanding malware isn't among them. keeping up with the volume of malware is perhaps a challenge, but unless the malware writers are offering to stop what they're doing, there's little they can say or do to help ameliorate that problem.
arguably, the vendors also have a problem getting 'ahead of' the threat - which is to say that they can't assist in protecting you until after they've seen the malware (except, of course, they can - you just need to be using the right tools). the reason is that predicting malware (necessary if you expect a known-malware scanner to be able to stop things that haven't been seen yet) is a bit like predicting the weather - no one can do it reliably. i don't see malware writers being able to help in that regard, even if they are showing off so-called next-gen malware. i also don't see them agreeing to stop performing malware q/a, which would nullify any advance in malware prediction they could offer.
last but not least, the concept of an ethical malcoder is laughable at best. comparisons to ethical hackers ignore the fact that hacker was originally a benign term that was later twisted by the media to include criminals. the term ethical hacker simply tries to highlight the benign examples that have been masked by the inclusion of a malicious set. malware creation, on the other hand, has never been benign - the most one could hope to do is highlight the non-financially motivated malware writers that have been masked by the inclusion of the financially motivated set, but those non-financially motivated ones are still pretty far from benign or ethical. until they understand their responsibility to the wider population and their impact on it and stop what they're doing, it will remain that way.
this isn't like vulnerability research, where developing exploits for newly discovered vulnerabilities serves to highlight flaws that need to be fixed. malware (in the general sense) does not depend on any flaw and thus its creation does not provide the same benefits. advancing the state of the art of attack does not serve the common good; it is not a positive contribution and can by no means be considered ethical.
what is a malware writer?
like a virus writer, a malware writer is someone who writes/programs malware. nothing more, nothing less.
the term malware writer is essentially the same as virus writer, only more general to include those who write non-viral malware.
back to index
Tags:
definition,
malware writer
Tuesday, August 24, 2010
market-speak is a tough habit to quit
one of the anti-malware marketing world's greatest victories was installing their market-speak as the lingua franca of anti-malware security, and to have done so in such a way that hardly anyone even notices. i even catch myself sometimes talking about a product offering protection instead of a product offering assistance (products don't protect you, you protect you with the product's help).
i happened upon a marketing video produced by f-secure for their safe and savvy blog not too long ago. it's possible it wasn't intended to be a marketing video, but... well... i think the video speaks for itself in that regard. it clearly tries to sell product. let's follow along and play the market-speak bingo.
so did you notice the nice big "100%"? it didn't stay for very long. the correct answer to the question "how can i be 100% sure i'm safe?" is that you can't. there is no absolute protection and anyone who says differently is trying to sell you snake-oil. offering 100% certainty you're protected is basically equivalent to claiming 100% protection - which is one of the oldest AV snake-oil tricks in the book. for shame, f-secure, for shame.
how about the references to a "solution", did everyone catch that? yeah, unfortunately the only problems these products solve are the business (or similar) problems that state 'thou must useth anti-virus'. actual security problems are not solved by these products - they don't make the problem go away, they don't make it so you don't have to worry anymore (even though they intentionally lead you into a false sense of security by suggesting you can stop worrying). security products are tools, not solutions - they don't solve real problems any more than hammers do.
and did you happen to catch all the times when they said they "protect" you or the product protects you without qualifying that it's only partial protection? <sarcasm>yeah, that's not going to lead to a false sense of security (where people treat the product as install-and-forget security) at all</sarcasm>. why would a person continue to think about security and how to be and stay secure when vendors tell that person that they'll take care of that for them?
now i could sit here continuing to roast f-secure for their snake-oil trifecta, but as i said before even i catch myself falling into the same language patterns - early anti-malware marketing has left quite a mark on us. besides which, there's actually a lot to like in that video. the portrayal of the threat landscape and the technologies brought to bear on it is humanized and relatable. heck, there's even someone labeled "Customer" who takes a tool offered by someone labeled "F-Secure" to chase off a third person labeled "Virus" - even when the words are wrong and give the impression "we protect you", the action itself is right.
oh well, maybe their next video will feature more of what was good in this video and less of what was bad. it's not easy to break out of the pattern. we can only hope they try.
Monday, August 09, 2010
numbers, context, and background
one of the things i've come across while reading various sources is an attempt to pin down an intended target nation for the stuxnet worm based on prevalence data. the theory goes something like 'since nation X is where most instances of stuxnet are found, therefore nation X was the intended target (because obviously more work was put into spreading it there)'.
this theory has some problems, however. first and foremost is that not all the numbers agree. while we have symantec saying that ~60% is in Iran, we also have eset saying that ~60% are in the US just 3 days earlier. they can't both be right - or can they? and if they are, what are the implications for the targeted nation theory?
as is always the case when there are contradictory numbers, we have to look at how those numbers were arrived at. in fact, even when there aren't contradictory numbers, we should still be paying close attention to how those numbers were arrived at.
close examination of vikram thakur's post on the symantec site suggests that their number represents actual infected machines trying to connect to their C&C server (on top of everything else, stuxnet is also a botnet) during a 3 day period between july 19 and july 22. they were able to gather this data because they redirected the domains hosting the C&C servers to themselves, so it seems like it would be a pretty accurate snapshot of the pool of infected machines at a particular point in time.
eset's numbers in david harley's post came from their installed clients throughout the world. their cloud-based technology reported the instances - however, since stuxnet employs stealth it's more likely that rather than reporting infected machines (where it would be active and hidden) it's actually reporting infected USB drives. it could also be reporting both if eset's products can see through the stealth, but the key point is that eset's numbers almost certainly include infected USB drives while symantec's do not. the USB numbers are important because that's how this worm spreads and if one were going to work on targeting a particular nation, spreading infected USB drives in that nation would be the way to do it.
furthermore, eset's numbers appear to be from the time detection for the worm was added until the time the statistic was reported, rather than just the 3 day period covered by symantec's figures. this means that eset's figures represent a measurement of how many instances of the worm there were over its detected lifetime to that point, while symantec's figures represent a measurement of how many infected machines remained at that point. this is important because by the time symantec started collecting its data, negative population controls had already been in effect for some time.
controls which, like the worms they intend to control, are not necessarily uniformly effective across the entire globe. some products have greater market share in some regions than they do in others, and the dominant product in certain regions might be poor at controlling particular worms and thus allow those worms greater reproductive advantages in those regions than they might find in others. the presence of population controls like anti-malware software affects both the death and birth rates of worm instances and, as anyone who's heard johnny long discuss hackers for charity knows, such controls are not uniformly present or effective across all regions.
there are actually a variety of other factors, in addition to such controls, that contribute to how well and in what way a worm or virus spreads, as discussed in some detail by jeff kephart, david chess, and steve white in "Computers and Epidemiology". some of these factors, like the degree of connectedness of susceptible hosts (and how often adequate contact between such hosts happens), can be influenced by computing culture, which in turn can be influenced by culture in the more general sense, geopolitical climate, and even socioeconomic considerations. hypothetically speaking, a nation that is cut off from US technologies due to trade sanctions (as mentioned by brian krebs) could well exhibit a higher rate of software sharing as part of 'alternative' procurement techniques and in so doing raise the region above the epidemic threshold for some unspecified worm.
ultimately both symantec and eset could be right since they were measuring different things over different periods of time. what that means for the targeted nation theory is that things aren't as clear-cut as either set of numbers would suggest on their own. what we do know is that stuxnet appeared to enjoy more reproductive success in Iran than elsewhere. whether that's down to purely epidemiological factors or intentional injection of the worm into the local computing population by a malicious actor is unclear, but eset's data would support an argument against the latter option as the effort seems to have been expended elsewhere. on the other hand, if we were to entertain the notion that the US was the target based on the amount of infected materials floating around the computing population, then we are left once again with the conclusion that stuxnet was a failure since in spite of all that effort the prevalence of actual infected machines in the US was minuscule.
i don't think much can be read into the fact that there were more infected machines in Iran than elsewhere since such pockets of infection are actually normal - especially for self-replicating malware that must be spread by physical media. some region had to draw the short straw and this time it was Iran.
Friday, August 06, 2010
why the stuxnet worm is an abject failure
in a previous post i called the stuxnet worm an abject failure, and intimated that it was because it was a worm. some might wonder why i said that when there have been a number of worms in recent memory (conficker, for example) that have spread far and wide and were considered successful, not just in spite of becoming widespread, but rather because of it. why should the stuxnet worm be subjected to an apparent double standard?
to understand why, you have to look at who the intended pool of victims is for the malware in question. for most common worms or viruses the goal is to infect or infest as many machines as possible. there is no special subset being targeted; they're just looking to add computing resources to their attack platform, where that attack platform can be used for anything from simply infecting/infesting still more machines to something as complex as building a botnet. anything where more is better.
in this kind of scenario there is a rather large number of machines for which public knowledge of the attack makes little or no difference with respect to whether the attack succeeds. the virus or worm will be able to thrive within this population because the machines are poorly administered, have misconfigured anti-malware software (assuming it's present and enabled at all), and no special mitigating steps have been taken to deal with the possibility of getting infected by the virus or worm (like disabling autorun, for example). if you guessed these were home user machines you'd be right (although there are a significant number of these in other environments as well). these are generally considered low value targets so the people charged with taking care of them generally don't do all that good a job.
the stuxnet worm, in contrast to this, was targeting machines that controlled industrial processes like manufacturing, power generation, water treatment, etc. these are, quite clearly, near the opposite end of the spectrum of valuable targets. as a result the people tasked with taking care of these machines are generally trained professionals who, in all likelihood, will take special steps to mitigate the threat of a new attack that specifically targets those machines once that attack becomes public knowledge.
and therein lies the rub. self-replicating malware does not stay below the radar. malware that makes copies of itself always winds up drawing attention to itself. each copy it makes increases the chance that someone will catch on, and when someone does that's the first step in the inevitable process of the attack becoming public knowledge.
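that claim can be made concrete with a little arithmetic: if each copy independently has even a tiny chance of being noticed, the odds of the whole operation staying hidden shrink geometrically with the number of copies. the per-copy probability below is an arbitrary illustration, not a measured figure:

```python
def p_detected(copies, p_notice_per_copy=0.001):
    """chance that at least one of `copies` independent infections gets
    noticed, assuming a small fixed chance per copy. the 0.1% per-copy
    figure is arbitrary, chosen only to illustrate the compounding."""
    return 1.0 - (1.0 - p_notice_per_copy) ** copies

# even a tiny per-copy risk compounds quickly as a worm spreads
for n in (10, 1_000, 100_000):
    print(n, round(p_detected(n), 4))
```

a worm that is successful at spreading is, by the same token, successful at getting itself noticed - which is fatal when the victims are attentive, high-value targets.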
consider the difference between plant eating animals and meat eating animals. plants don't generally react to protect themselves when they sense a plant eater coming near them, but prey animals definitely do react to protect themselves so meat eaters have to behave in a way that prevents their prey from knowing they're coming. if meat eaters bumbled along like plant eaters they'd never catch anything and eventually starve. they'd be abject failures just as the stuxnet worm is.
Wednesday, August 04, 2010
digital signatures are not a poor man's whitelist
back when mcafee had their catastrophic false positive i made the suggestion that av vendors in general (and mcafee in particular) could use a whitelist of critical system files to avoid false alarms that render systems unbootable/unusable - basically, the idea being that known trusted files could be ignored by anti-malware components prone to false alarm. in the comments of that post didier stevens suggested that checking the digital signatures of files could be used as an alternative. at first i thought that was an OK compromise to developing a proper whitelist of critical files, but recent events have made me rethink that.
as kaspersky's alexander gostev described, a digital signature will cause a file to be regarded as trusted by security software, much like didier suggested in his comment, and if it's malware that means it will be effectively hidden from the anti-malware software.
i think that's a pretty glaring problem and underscores the fact that digital signatures are a poor substitute for a proper whitelist. a proper whitelist is constructed of items that are known and trusted, but with digital signatures it's neither the anti-malware vendor nor the user who's constructing this implied whitelist. instead, the implicit whitelist is constructed jointly by every tom, dick, and harry who happens by hook or by crook to get his/her hands on a valid digital certificate. that means the people who make the determination of whether a file is safe and/or trustworthy are (generally) the same ones who created it. however, those people could lie, they could fail to actually check the safety/integrity of the file they're signing, or they might not even be qualified to check the safety/integrity of the file they're signing. even if the owner of the digital certificate used to sign the file is a trustworthy entity, that doesn't mean the file is also trustworthy. first and foremost, trust is not transitive. besides that, though, as the stuxnet example shows us, the certificate can fall into the wrong hands. treating digital signatures as whitelist entries creates a situation where altogether too many entities have influence over what is considered trusted and safe.
thus digital signatures can not tell us what we need a whitelist to tell us, namely that the file is trusted and safe. the presence of a valid digital signature only tells us two things: that the file was signed with certificate X (which may or may not be under the exclusive control of the party it was issued to), and that the file hasn't changed since it was signed. whether it's safe, whether it's fit for use, is entirely outside the scope of what a digital signature can tell us.
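the limited scope of what a signature proves can be sketched in a few lines. HMAC stands in here for a real public-key signature scheme (real code signing uses X.509 certificates and asymmetric keys), but the point carries over: verification answers "signed with key X" and "unchanged since signing", and nothing else. the key and payload names are hypothetical:

```python
import hashlib
import hmac

def sign(key: bytes, data: bytes) -> bytes:
    # stand-in for a real digital signature (HMAC instead of public-key
    # crypto, purely for brevity)
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(key: bytes, data: bytes, sig: bytes) -> bool:
    # verification answers exactly two questions: was this signed with
    # key X, and has the data changed since signing? nothing more.
    return hmac.compare_digest(sign(key, data), sig)

vendor_key = b"certificate X"     # hypothetical signing key
payload = b"driver.sys contents"  # whoever holds the key can sign anything
sig = sign(vendor_key, payload)

assert verify(vendor_key, payload, sig)             # signed with key X, unchanged
assert not verify(vendor_key, payload + b"!", sig)  # tampering is caught
assert not verify(b"some other key", payload, sig)  # wrong key is caught
# nothing above ever asks whether the payload is safe
```

note that nothing in verify() ever asks whether the data is safe or trustworthy - that question is simply outside the scope of the mechanism.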
i think the idea of using a whitelist to avoid false alarms is a good one, but using digital signatures in its place is a shortcut. av vendors need to stop cutting corners and implement proper whitelists for this particular application. there are already tens of thousands of digitally signed malware samples in f-secure's collection, so i'm at a loss trying to figure out why this technique of false positive avoidance hasn't been abandoned already. now that news has come out that detections of stuxnet didn't start until after the digital signature expired, malware authors will surely be looking to exploit this behaviour more and more.
Tuesday, August 03, 2010
APT or WAPT?
... and by WAPT i mean wannabe APT.
so, one of the more colourful stories this past little while has been the stuxnet worm. apparently some people are having fun speculating about whether it's an example of a nation state targeting the critical infrastructure of another.
really i think we're just so uncertain about APT style threats that we're trying to find examples so as to make things clearer. does this case qualify? that's the question of the hour, isn't it? i guess i'll throw my own 2 cents into the speculation game.
the components of this malware certainly lend themselves to a conclusion that it was part of an attack launched by an APT level of attacker. it's got a 0-day exploit to auto-execute itself when the directory containing the malware is viewed in explorer, a stealthkit to hide its presence, digitally signed binaries using digital certificates from multiple well known companies to cause anti-malware software to overlook them, and a payload that targets a particular brand of SCADA (supervisory control and data acquisition) system.
do those properties really mean what they seem to mean though?
- we assume that the 0-day security flaw was developed by the attacker, which would seem to make the attacker technically advanced - but it is conceivable that the vulnerability was instead purchased, presumably for a very high price considering the calibre of the vulnerability, so this could instead be an example of the attacker being well funded and thus probably satisfying the 'persistent' criterion for APT.
- stealthkits aren't really that earth-shattering these days; there are books and websites dedicated to teaching the reader how to make them, so there's not much that can be inferred just from that.
- getting access to a well known company's digital certificate in order to sign one's binaries seems like a rather mysterious feat that could point to advanced skills, or insider access gained as part of the kind of detailed plan you'd expect from someone with persistence, except that it could also have been done with a turnkey crimeware kit like zeus. getting access to the digital certificates of two companies in the same geographic location makes the probabilities of advanced skills or persistence much less likely and simple opportunity much more likely.
- the SCADA-specific payload rather unambiguously points to a targeted attack (which is what a persistent threat would carry out), and also suggests access to similar SCADA systems for the purposes of R&D (which would probably tend to imply some financial backing), but it was put in a piece of self-replicating malware (malware that spreads itself in an automated fashion) which is pretty much the antithesis of targeted.
the people behind stuxnet certainly seem likely to have had financial backing, and the targeting conclusion seems unavoidable, but if they were advanced at all it was in a purely academic sense. they may have come up with the 0-day exploit and thereby qualified as researchers of some skill but they clearly don't have experience designing full attack campaigns from scratch. they don't understand the strategic strengths and weaknesses of the pieces they cobbled together and seem to have a somewhat antiquated idea of the malware threat landscape. if they were backed by a government, officially or otherwise, then that government must be in pretty dire straits to have employed the services of someone so green. it could, i suppose, have also been an attempt at industrial espionage, but either way the attackers' inexperience has tipped off the entire world to their efforts and that's pretty much an abject failure.
Monday, August 02, 2010
what is a crimeware kit?
a crimeware kit is a product and/or service that allows a prospective online criminal to carry out his/her criminal enterprise without needing the technical know-how to build that enterprise in the first place.
the technical aspects of an online criminal enterprise that a crimeware kit might typically provide for the criminal/customer include the malware itself (often some sort of bot), a command and control interface, and potentially a facility for spreading the bot to new victims and thereby increasing the size of the botnet.
by taking all the technically challenging parts of the online criminal enterprise out of the criminal's hands, a crimeware kit could be said to create a turn-key system for online criminal enterprise. its users could likewise be considered the professional criminal equivalent of script kiddies.
back to index