Monday, May 26, 2008

post of the week

Saturday, May 24, 2008

gaming the mac malware game theory

adam j. o'donnell has put up an interesting bit of analysis of malware attack strategies... it caught my attention for 2 reasons: the focus on strategy, and because i can't honestly say i've seen game theory applied to the malware domain before... that said, i think there are some problems with it...

right off the bat there's the explanation of game theory describing how rational players would interact with one another... no doubt this is not a problem that is unique to the current discussion, but not all the players involved in this particular game are rational; on the attackers' side many probably are but some definitely aren't (and you probably need look no further than your junk mail folder for evidence of that), while on the users' side many don't even know they're playing this game in the first place... rationality is a surprisingly subjective thing - it's an attribute characterized by the making of reasoned, logical decisions based on available facts, but how can person A know what facts person B has available to them? a decision where a particular fact is significant will most likely seem illogical to the other party if only one of them knows that fact... this is problematic because one of the conclusions adam draws (that mac malware won't reach a tipping point until the mac reaches 1/6th market share) seems to require the entire malware-writing population to adhere to the rationality that game theory assumes... i don't doubt that game theory can be used to model behaviour in populations, but i think its descriptive value exceeds its predictive value...
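
that said, the tipping point arithmetic itself is easy enough to illustrate... here's a rough sketch of the kind of indifference-point calculation a model like this seems to rest on - this is my reconstruction of the general shape, not adam's actual formulation, and the 80% av coverage figure is purely an assumption for illustration:

```python
# a rough reconstruction of the indifference-point arithmetic behind a
# "tipping point" claim like the 1/6th figure... my guess at the model's
# shape, not adam's actual math - the av coverage number is made up
def attack_payoff(market_share: float, av_coverage: float) -> float:
    """expected fraction of all machines an attacker compromises on one
    platform, assuming returns scale with share and av blocks some attacks"""
    return market_share * (1.0 - av_coverage)

pc  = attack_payoff(market_share=5/6, av_coverage=0.80)  # widely protected
mac = attack_payoff(market_share=1/6, av_coverage=0.00)  # assume no av yet
print(f"pc: {pc:.3f}, mac: {mac:.3f}")  # both ~0.167 - indifference point
```

under those assumptions a perfectly rational attacker becomes indifferent between the platforms right at 1/6th market share - but as i said, perfectly rational attackers are in shorter supply than the model requires...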

the subjectivity of rationality isn't the only problem with this sort of model, however, as adam demonstrates by focusing on a single motivation and thereby ignoring the subjectivity of value... while it is certainly true that financially motivated malware creation is a significant trend these days, there's little reason to think that the people who used malware to acquire other sorts of rewards before the commercial malware trend started have magically stopped doing so... there is a wide variety of motives, and by extension a wide variety of value propositions, for prospective malware creators...

beyond the overly narrow constraints on value, there appears to be an additional problem with the way the game itself is framed... by that i mean it's implied that the attacker has 2 strategies: attack platform A or attack platform B... it's still very early in the evolution of malware for the mac os x platform but we already know there's a 3rd strategy - we've seen it in the wild with the zlob gang's approach i described before: attack platform A AND platform B...

this mischaracterization of the game points to a classic mac vs. pc mindset - a platform rivalry that has more meaning to users than it does to attackers... a rational attacker would recognize that the platforms are not fundamentally different, that attacks that can be applied to one can generally also be applied to the other (ie. the 2 suggested strategies are not mutually exclusive), and that the cost of mounting an equivalent attack on a second platform is merely the cost of porting the malware rather than developing the entire attack from scratch... in this way the minority platform becomes little more than a special case that the attackers may decide to accommodate when revising their attack for the next wave of their malware campaign... and revising their attack is exactly what the more professional, financially motivated, rational attackers do when a malware campaign proves successful... after all, adding features (like mac attack capability) increases their existing payoff rather than trading it for a different (less optimal) one...
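
to put some (entirely invented) numbers on that intuition, compare the payoffs of the 3 strategies - the figures below are made up purely to show the shape of the decision:

```python
# a sketch of why "attack A AND B" beats picking just one platform once
# porting is cheap - every number here is invented purely for illustration
dev_cost  = 100.0  # cost of developing the attack from scratch
port_cost = 10.0   # cost of porting the finished attack to a 2nd platform
payoff_a  = 500.0  # expected return from the majority platform
payoff_b  = 100.0  # expected return from the minority platform

attack_a_only = payoff_a - dev_cost                         # 400
attack_b_only = payoff_b - dev_cost                         # 0
attack_both   = payoff_a + payoff_b - dev_cost - port_cost  # 490

print(attack_a_only, attack_b_only, attack_both)
```

so long as the porting cost is lower than the minority platform's payoff, attacking both platforms strictly dominates - there's no either/or decision to be made...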

Tuesday, May 20, 2008

the user is responsible but ill-equipped

no doubt by now many of you have read about microsoft saying that the reason there was more malware on vista than on windows 2000 was the user... there's been a bit of a knee-jerk reaction against this, but to a certain extent microsoft is actually right...

robert sandilands demonstrates this reaction against microsoft's argument quite well in his post "Is Vista more or less secure than Windows 2000?"... in fact, when he started talking about how users shouldn't need to be security experts and just want to get their jobs done i felt like i was greeting an old acquaintance (i'm referring of course to the argument - robert and i are not actually acquainted yet)...

it's true that computer users shouldn't need to be security experts and it's certainly not realistic to expect they can be, that much i'll agree with, but there's another truth that some don't seem to want to face: being security vegetables isn't really going to work out for the user either...

i shouldn't need to be an automotive expert in order to drive from point A to point B... i just want to get to my destination and don't want to be bothered with all the technical details... that should be possible, shouldn't it? sure is, but if i want to get there safely i have to follow certain safety protocols colloquially known as the rules of the road... the average person (with some notable darwinian exceptions) understands the need for following safety rules while cruising down the highway, but for the most part they aren't even aware of the existence of the security rules for using computers (were you expecting an information superhighway reference here?)...

mostly they just know they need an anti-virus product... maybe some of them have heard of a firewall, but for most people that's the extent of their awareness of secure computing behaviour and unfortunately that is not enough to keep them safe/secure...

people often liken using a computer to using a mundane household appliance like a toaster, but such people should get over themselves because even with a toaster people need to know not to stick a fork in it... there are safety rules for virtually every tool in existence - some are simply a matter of common sense (though really they're often things we pick up from safety awareness initiatives when we're young), some should be a matter of common sense (like not operating a propane barbecue indoors), and some require the consumer be informed of how to use the tool safely...

using a computer is one of those things where the consumer needs to be informed because safe/secure computer use behaviours haven't penetrated our culture yet, and because the cause is often too far removed from the effect for users to make the necessary connection... and like it or not, the cause often involves the user - when the user gets malware on their system it is usually at least partially as a result of something s/he did or did not do...

that makes the user responsible for what happens to their machine... note that this isn't the same as blaming the user - being responsible and being at fault are two different things... you can't blame someone if there wasn't a reasonable expectation for them to know better, and currently such an expectation wouldn't be reasonable... in the grand scheme of things, however, it should be reasonable; we should be able to expect that of computer users - if boy scouts can "always be prepared" then why are we still feeding computer users pablum instead of teaching them to take responsibility for their actions/inactions and the consequences thereof...

Monday, May 19, 2008

the decline of anti-malware protocols

if you've read this blog for any amount of time you've probably encountered posts where i try to debunk the idea that anti-virus is dead or falling behind or something similar... those generally have to do with anti-virus technology or the anti-virus industry, but there is one aspect of av that seems to be doing more than a little poorly these days and that is the application of proper anti-malware protocols - in other words, the practice of AV...

my interest in malware has led me along many seemingly tangential paths in security over the years, and although i won't pretend to have any great insight into most of them i did pick up a few gems along the way... one of the gems i picked up as a result of following sci.crypt for a decade or so was the idea that no matter how secure an encryption algorithm is, a cryptosystem that uses it is not secure unless it uses the algorithm in the proper way... in part that means using best practices for the algorithm (such as not using a one-time pad more than one time, or discarding the biased initial bytes of an rc4 keystream), but it also means following a secure cryptographic protocol...
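
as a concrete illustration of that second best practice, here's a minimal sketch of rc4 with keystream dropping in python - note that the 3072-byte drop count is just one commonly recommended figure, not a universal standard:

```python
# a minimal sketch of rc4 with the biased initial keystream bytes discarded
# (sometimes called rc4-drop)... drop counts of 768 or 3072 bytes are
# commonly suggested - the exact figure here is an assumption, not gospel
def rc4_keystream(key: bytes, drop: int = 3072):
    # key-scheduling algorithm (ksa)
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) & 0xff
        s[i], s[j] = s[j], s[i]
    # pseudo-random generation algorithm (prga), skipping the first
    # `drop` bytes because they leak information about the key
    i = j = produced = 0
    while True:
        i = (i + 1) & 0xff
        j = (j + s[i]) & 0xff
        s[i], s[j] = s[j], s[i]
        produced += 1
        if produced > drop:
            yield s[(s[i] + s[j]) & 0xff]

def rc4_crypt(key: bytes, data: bytes) -> bytes:
    # xor-based stream cipher: the same function encrypts and decrypts
    ks = rc4_keystream(key)
    return bytes(b ^ next(ks) for b in data)
```

the point isn't the code itself, it's that the exact same algorithm is secure or insecure depending on how you use it - which is precisely the principle i'm proposing applies to anti-malware technology too...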

i propose that a similar principle is true for anti-virus, or more generally anti-malware: you cannot gain the full benefits of a particular anti-malware technology unless you use it properly... again, sometimes this means best practices like keeping all your software up-to-date, and sometimes it means following correct procedures (implementing an anti-malware protocol)... this is one of the things i had in mind way back when i wrote my anti-virus cook book, but unfortunately protocols have seen little if any attention in the intervening years (and my cook book is way out of date these days, though some of the underlying principles remain unchanged)...

i understand why there hasn't been much focus on it... there is a pervasive opinion that anti-malware technology should stand on its own, that it shouldn't need informed users to act in a particular way because you can't depend on users in general to be informed or to act the way you think they should... this has led to the unfortunate consequence of making people think there is a purely technological solution to the malware problem when there isn't... it also ignores the fact that not only are some people actually capable of following proper anti-malware protocols, the population in general is at least capable of improving, even if we'll never see complete uniform adherence to a particular code of conduct...

with that in mind, however, i was disappointed to discover that not only have protocols not advanced much in their adoption outside the anti-malware community but they're even being dropped within the anti-malware community... 10 years ago you would not have encountered a well-respected 3rd party tester testing known-malware scanners against active stealth-capable malware, so imagine my dismay at discovering that av-test.org did precisely that, as described in a paper linked to by this darkreading article... the paper describes 2 separate tests: one on XP using anti-malware suites that claim some measure of stealthkit detection, and one on vista using so-called "pure anti-virus" products (ie. known-malware scanners only)...

the issue here is primarily the second test... known-malware scanning is primarily a preventative technological control... it is quite effective at filtering known malware out of incoming materials... to their credit, anti-malware vendors have also made their scanner products useful in the contexts of detection and correction, but with the caveat that the user should use them from a clean environment (by booting from known-clean, non-writable bootable media or by slaving the suspect drive in a known-clean computer)... outside-the-box analysis is the only reliable countermeasure to whatever active offensive and defensive capabilities a particular piece of malware might have - it is a well-known principle that you cannot trust a compromised system to accurately report its state...

while there are technological shortcuts that can prove moderately successful (as demonstrated by av-test.org's XP test), those shortcuts are not reliable enough to completely obviate the need for formal outside-the-box analysis... the benefit of shortcuts is that they allow one to extend the time between performing otherwise costly formal methods like outside-the-box analysis (up to a point) without increasing the risk of something getting through... in that respect the XP test wasn't so bad, since the products tested actually claimed to perform such shortcuts, but the vista test was methodologically unsound because it misused the technology it purported to test by testing it in a compromised environment...

now there have long been proponents of this sort of testing... they would often cite such methodologies as reflecting actual usage conditions because that's the way people use the products in real life... but the fact of the matter is that that's the way most people misuse the products in real life... when you're developing a benchmark the metric that is more interesting is the one that shows what a product is capable of when used properly, not the one that shows what happens when people don't know what they're doing...

when a respected testing body performs tests like this it reinforces worst practices and the mismatched expectation that anti-malware software should just protect us automagically... i can't accept as valid any testing methodology that misuses the technology it purports to test, and i can't believe andreas marx didn't know better, since he referenced 2 of his own previous papers on the use of recovery disks and made only a feeble mention in his conclusion that maybe their use is a better way to go... recovery disks are absolutely the better way to go when you suspect a system might be compromised, and this has been known in the anti-malware community for a long, long time... when the anti-malware community itself starts dropping best practices and proper protocols, the practice of AV doesn't just fall behind, it regresses...

Sunday, May 11, 2008

posts of the week



only 2 that really caught my eye... been a quiet week, i guess... good thing too - my wrists needed a break...

Tuesday, May 06, 2008

not enough innovation

you know the anti-av revolt is gaining momentum when you can start picking out repeating patterns in the propaganda... "anti-virus is dead" was one such pattern, but here's another one i've seen plenty of times: there is "not enough innovation" in the anti-malware space...

whenever i read the "not enough innovation" claim i imagine a small child in upper management's body squinting his eyes, stamping his feet and yelling
i don't care how intractable the problem is - fix it, FIX it, FIX IT!
the essence of it is the belief that the people actually doing anti-malware aren't doing enough or aren't doing it right... the answer to this is really quite simple, however... if you think it can be done better then you must know something the people who are actually doing it don't know... if you know something they don't then it seems like you should be able to do it better... if you think you can do it better then get off your lazy butt and fix the problem yourself (or at least make concrete suggestions) - otherwise shut your pie-hole, or at the very least concede that you're talking out of your arse...

there's little in life that annoys me more than people eager to complain about a situation but unwilling to do anything to fix it, and that's precisely what vague, nebulous claims of "not enough innovation" represent...

the anti-av revolt

i briefly made reference before to a growing anti-av revolt - that is, the customer base for the anti-virus/anti-malware industry rebelling against the industry out of some perceived wrongness in it... i'm sure you've heard of people who swore off anti-virus years ago; likewise, readers of this blog probably recall more overt manifestations of this revolt such as the "anti-virus is dead" campaign or, more topically, the "race to zero" contest...

usually it's been possible to write off individual av detractors as uninformed cranks, but their numbers are growing and their sphere of influence is widening... bruce schneier crying conspiracy when f-secure attempted responsible disclosure with sony was just the tip of the iceberg... now there are a variety of security experts lending their voices to an escalating sequence of expressions of dissatisfaction with the anti-malware industry...

and it's not even like they don't have just cause to be dissatisfied - they do... vendors have let their marketroids run amok for decades, building false expectations of protection in the customer base that are so ingrained we may never be able to undo them, and then predictably failing to live up to those unreasonable expectations...

on the other hand, however, the marketing folks were just telling people what they wanted to hear... there's a school of thought that says if a marketing person isn't showing you a rose-coloured-glasses version of the world then they aren't doing their job - anything less and they hurt their own company by admitting the product/service isn't the best thing since sliced bread... that's generally not a good idea in a competitive market as your competitors will capitalize on that as a display of weakness...

furthermore, there's no good reason for people to actually believe the marketing... a real life whopper doesn't look as perfect and juicy as the one on tv, beer doesn't come with a bevy of buxom beauties all playfully vying for your attention, and cars can't leap over traffic to get you where you're going faster and with a funky soundtrack... we've all learned these things through our real-life experiences and we should have also learned that anti-virus software cannot provide complete protection so why are people getting so bent out of shape over the inevitable failures?...

it really doesn't make a lot of sense, but that's the irrationality of the human element for you... unfortunately i don't think it's good enough to just observe the fact and then go on about your day like it was business as usual - not if (as i suggest) the problem really is escalating...

this may be hard for most to believe but there's actually been a long history of cooperation between competing companies at the more technical levels... this hasn't held true for marketing however; these are businesses after all, they need to make money and generally that's at the expense of their competitors... on a technical level, anti-malware vendors have always had a common enemy - the malware writers - but on a marketing level their opponents have always been each other... now the various marketing departments have a common foe as well (the anti-av movement), but it remains to be seen if they'll recognize their common interests and start working together as the analysts/researchers have...

the industry needs to get serious about image management... they have to start working to repair the damage their marketing departments have done to the public's perception of both the technology and the industry... that means not selling snake-oil in order to pander to unreasonable desires for complete protection (mcafee total protection)... that means putting your creative new technologies into your existing products instead of using them as creative new ways to bilk more money out of customers, thereby reinforcing the image of av vendors as blood-sucking parasites (norton anti-bot)... and that definitely means not saying asinine things like the malware problem is solved...

it also means marketing something other than scanners... scanning is all the vast majority of the public knows about and it's all anyone seems to think the industry produces... it's like they've been going to the same grocery store for years but only ever going down this one particular aisle - they're barely aware the rest of the store exists and they're getting fed up with what they're finding in that one aisle... they need to be made aware that scanning alone isn't enough (something the more technical members of the industry have freely admitted in public forums for a decade or more) and that the vendors have other technologies available besides just scanners...

much of the av industry is at the mercy of the big 2-3 av companies, unfortunately... those companies have the most influence over how people perceive av, but they are also the ones that would suffer the least from the destabilizing effects of their own PR gone awry... those with the most capability to do something positive have the least motivation to shape up, and if they just keep on keeping on then the public's perception is unlikely to change and the anti-av revolt will continue to grow...

harnessing the power of spam

one of the things i really like is the concept of using the bad guy's tactics against them... i enjoy the subtle irony, so when dmitry chan mentioned the possibility of harnessing the power of spam over on the securiteam blog my creative juices started to flow... i thought i'd share the idea i came up with if for no other reason than because i think it's kind of funny...

the idea is that you use spam emails in CAPTCHAs... if you can pick the ham out of the spam then you pass the test... as the bad guys make advances to beat such spam-based CAPTCHA systems, we use their advances in our spam filters and remove the now detectable spams from the spam-based CAPTCHA so that the bad guys have to keep advancing the art of spam detection in order to bust the CAPTCHAs that stand between them and the ability to produce more spam...
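
here's a toy sketch of what that feedback loop might look like - all the names (spam_pool, spam_filter and so on) are hypothetical, invented just to make the idea concrete:

```python
# a toy sketch of the spam-based captcha feedback loop described above -
# the helper names are made up for illustration, not from any real system
import random

def build_challenge(spam_pool, ham_pool, n_decoys=4):
    """present one ham message hidden among n_decoys spams;
    the user passes by picking out the ham"""
    decoys = random.sample(spam_pool, n_decoys)
    ham = random.choice(ham_pool)
    options = decoys + [ham]
    random.shuffle(options)
    return options, options.index(ham)  # challenge + correct answer

def retire_detectable_spam(spam_pool, spam_filter):
    """once our filter (upgraded with the attackers' own advances in spam
    detection) can flag a message, it's no longer a useful decoy"""
    return [msg for msg in spam_pool if not spam_filter(msg)]
```

each round of the arms race shrinks the pool of usable decoys, forcing the spammers to keep advancing the art of spam detection - exactly the irony described above...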

this may well not be workable in practice (i imagine it may simply get too hard for real people to identify the ham) but it's still fun to imagine spammers working against themselves (or more likely against each other since it offers them a new way to compete against each other)...

Sunday, May 04, 2008

posts of the week

  • hype-free: Race to Zero
    this one really stuck in my head... ostensibly it's about race to zero but in the process cdman rightly places the blame for the growing anti-av revolt squarely on the av vendors themselves for letting their marketroids run amok...
  • Viruslist.com - Analyst's Diary - More thoughts on drawing the line
    an interesting look at the asymmetry between obfuscation and deobfuscation as it pertains to the race to zero contest...
  • Security Myths - McAfee Avert Labs Blog
    if i had found more time i would have written this post (or one very much like it) myself... seriously, it was on my list of things to do, and now i don't have to do it...
  • PDF, Let Me Count the Ways… « Didier Stevens
    at first i thought PDF canonicalization would be better suited for heuristics but then i realized seeing through this kind of obfuscation can be as beneficial for known-malware scanning as seeing through packer-based obfuscation or really anything that's 'added on' to the base malware...
  • Emergent Chaos: Quantum Uncertainty
    so much for the superiority of quantum computers - or so it would seem for now...
  • Microsoft botnet-hunting tool helps bust hackers - Network World
    interesting to hear details of how botnet busts go down, and obviously microsoft is in an excellent position to help this happen if they've got intelligence gathering tools on hundreds of millions of enterprise and end user machines...

Saturday, May 03, 2008

race to zero is no pwn2own

from mike rothman on the race to zero controversy:
It's like the PwnToOwn context at CanSec. Some folks will find some interesting holes and the vendors will patch them. Same deal here.
simply put, modifying known malware so that it no longer resembles known malware closely enough for anti-malware products to recognize it is not the same as finding new software flaws that need to be fixed...

with an infinite number of possible modifications it's technically impossible for makers of known-malware scanners to anticipate them all, so they stay out of the pointless business of anticipating them entirely... as such, failing to anticipate the ones used in this contest doesn't represent a flaw that needs to be fixed any more than failing to read minds does... dealing with new/unknown threats is the job of other technologies like behaviour-based HIPS (which i've already shown is available from a surprising number of traditional av vendors)...

coming from the guy who put me on to the phrase "mismatched expectations", this incite was a little off... but i guess i should expect as much when the prevailing wisdom in the security industry can't distinguish between malware research issues and vulnerability research issues...

all in all, the race to zero contest is really nothing like pwn2own... it's more like a cross between anti-virus fight-club and the consumer reports fiasco...

bad really is in the minority

from liam tung's article signature-based antivirus is dead: get over it:
However, there is a problem with the use of blacklists, said Turner. "When the majority of stuff you're handling is malicious, it makes more sense to use a white list because that deals with the exception — blacklists only work if 'bad' is in the minority."
i totally agree with this statement... there's just one thing that turner and just about every other av detractor out there fail to realize... bad really is in the minority... bit9 (an application whitelist vendor) has shown that there are several orders of magnitude more good stuff (on the order of billions) than bad stuff (about a half million at the time) and that microsoft alone produced as many good binaries in a day as there had been bad binaries produced in the previous 20+ years combined (from the bit9 presentation at the international anti-virus testing workshop in 2007)...
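
to put rough numbers on "several orders of magnitude" (the good-binary count below is an order-of-magnitude assumption based on the "billions" claim, not an exact figure):

```python
# back-of-the-envelope math on the bit9 figures cited above - the exact
# good-binary count is assumed (the claim was "on the order of billions")
import math

good = 1_000_000_000  # known-good binaries, order-of-magnitude estimate
bad  = 500_000        # known-bad binaries at the time, per the presentation

print(f"ratio: {good / bad:,.0f} good for every 1 bad")      # 2,000 to 1
print(f"orders of magnitude: {math.log10(good / bad):.1f}")  # ~3.3
```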

furthermore, most of the stuff anyone (other than the anti-malware industry) handles is non-malicious (unless you're looking only at email and counting spam)... most web pages are safe, most binaries are safe, the majority of stuff most regular people encounter on a day-to-day basis is safe, so if you're going to advocate a security technology that focuses on the exceptions you're going to have to get over your perceptual biases and realize that bad stuff is the exception - meaning blacklists make more sense (at least by that logic)...

you've heard the argument that blacklisting is inferior to whitelisting because the list of all bad things is growing too big too fast, but we have quantifiable proof that the list of all good things is far, far bigger and growing far, far faster... that doesn't mean blacklists don't have serious problems (they do) or that whitelists are unusable (they aren't), it simply means that particular argument is fundamentally flawed and if people took the time to become familiar with the reality of the situation they'd know that...

an inconvenient truth about race to zero

from noah shiffman's article av vendors race-to-zero clue:
New viruses will not be created and no modified or variant code will be publicly released.
it's amazing to me how many people don't seem to realize that when you modify something you are effectively creating something new... this has been one of the more prevalent misunderstandings i've seen from people in favour of the race to zero contest at defcon this year and one i really didn't expect...

now, i realize i haven't always had as good a definition of variant as i do now... i should be more understanding of people who may not know as much as i do on the subject of malware... but even long before my understanding of variants reached its current state i still had the logical capacity and intuition to realize that when you modify a virus, especially when you modify it to the point where anti-malware products can no longer detect it, it is no longer the same as the original virus - it is no longer like anything anyone has seen before, it is (dare i say it) new...

the only way the race to zero won't be producing new virus variants is if cdman83 is right about them probably not using actual viruses in the first place (we can only hope they were dumb enough to misuse the terminology)...

even then, though, they will still be producing new malware... creating new threats is really all this contest will accomplish... demonstrating that known-malware scanning can't detect things that no one has ever seen before (ie. things that aren't known) is like demonstrating a spoon can't cut through bone...

what is a variant?

a malware variant is a modified version of a previously existing piece of malware where the modification was performed by something other than the malware itself...

any modification that results in something that isn't a byte-for-byte match with the original is enough to classify the result as a new variant so long as the modification wasn't made entirely by the malware itself... that means flipping a single bit by hand creates a new variant while traditional polymorphism does not...

the reason traditional polymorphism isn't considered to produce new variants is that every possible form the malware can take as a result of the polymorphism is knowable by analyzing the code of that malware... the transformation is known and fixed, while modifications performed by forces beyond the malware itself are unknown and unpredictable...

perhaps confusingly, server-side polymorphism can arguably be considered to produce new variants... this is because the modification is not performed by the malware itself but by a server-side component that malware analysts may not have access to, meaning they cannot predict all possible forms a given piece of malware may take when transformed by it... if the server-side component becomes known (and assuming it's entirely algorithmic) then it becomes possible to detect all possible instances of a piece of server-side polymorphic malware, but it's difficult in general to justify saying that those instances magically stop being different variants once the transformation function becomes known...
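
for concreteness, the definition above boils down to something like the following toy predicate (the names are hypothetical, and obviously real-world classification is nowhere near this mechanical):

```python
# a toy encoding of the variant definition given above - purely illustrative
def is_new_variant(original: bytes, candidate: bytes,
                   modified_only_by_the_malware_itself: bool) -> bool:
    if candidate == original:
        # byte-for-byte identical: the same malware, not a variant
        return False
    if modified_only_by_the_malware_itself:
        # eg. traditional polymorphism: every possible form is knowable
        # from the malware's own code, so no new variant is produced
        return False
    # any externally-made modification, even a single flipped bit, counts
    return True
```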


(thanks to vesselin for the previous discussions on the objective definition of variant)