Tuesday, April 29, 2008

posts of the week

i'm going to try something a little different... it hasn't escaped my attention that this blog can often be a little on the negative side... i'm not particularly comfortable writing up blog posts that do nothing but agree with other blog posts - it makes me feel like a yes man (something i have a deep-seated aversion to)...

but there are good posts out there that genuinely deserve acknowledgment, and this blog could stand to have a little bit of positive energy injected into it so i'm going to take a page out of the blogging playbook of some of the bloggers i read and post some links to articles i thought stood out... i'm thinking once a week should be adequate, though i'm a little on the late side with this first one (these will be posts from last week)...

(update: crap, i should have known linkrolls and feeds wouldn't mix)

oh, and while we're on the subject of change, i've replaced the 'posts of interest' (which were basically posts i thought people ought to read) with a widget from aiderss that calculates post quality/popularity/whatever (you know that saying about leading a horse to water)... i've also added a tagroll as a navigational alternative to the date-based archive (and the ever present search)... now i just have to work on that audio format alternative i keep meaning to do...

some critical thinking on fake alerts for the user

tad heppner over at the mcafee avert blog had an interesting user-centric post about fake alerts pushing rogue anti-malware products... what interested me was that it presented some ideas that a user could theoretically apply (like safe hex) to determine whether a particular security alert is trustworthy or not, but i think he could have taken this further...

his suggestion of using responsible browsing practices is a little vague (and some fake alerts don't even make it obvious that they have anything to do with browsing in the first place), doing a bit of research on what you're being asked to install/buy is a good idea (though i suspect people won't have the presence of mind to do that when scare tactics of "you have a virus" are being used), and looking for secondary indications of infection is just a little too technical...

now, presence of mind when scare tactics are being used will always be a problem, but if a user can keep their head and not panic then this line of thought might come in handy: if the alert is warning you that you have something bad on your computer and then asking you to install/buy something, it stands to reason the alert didn't come from something you already installed/bought... and unless you intentionally went to a website that scans your system without needing an install, that means whatever is giving you the alert examined your system without your permission... i think we all intuitively understand that things shouldn't be searching through your files without your permission, so that alone should signal to the user to stay away from it, whatever it is...

another good (and somewhat related) idea is to be familiar with what the security alerts from your actual security software look like so that when a strange one pops up you'll be able to tell that it doesn't belong... downloading the eicar standard anti-virus test file should cause anti-virus software to pop up an alert so you can see what that alert is supposed to look like and how it's supposed to behave... anti-virus isn't the only security software a user may have installed, mind you, but there aren't necessarily simple standard ways of triggering alerts from those other security products, and that's something someone might want to look into...
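
(and to make that concrete, here's a minimal sketch of creating the eicar test file yourself rather than downloading it - the test string itself is the standard one, but the file name and the use of python are just my own choices for illustration... a decent on-access scanner may well complain the moment the file is written, which is exactly the behaviour you're trying to observe...)

# write the standard 68-byte eicar test string to a file... the string is the
# official eicar test string; the file name is an arbitrary choice for this sketch
eicar = r'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*'

with open('eicar_test.com', 'wb') as f:
    f.write(eicar.encode('ascii'))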

race to zero in the security special olympics

news has started floating around about a contest in virus obfuscation being held at defcon this year... there have been a couple of mentions of it elsewhere as well, such as robert graham's "race to zero" post, sunnet beskerming's post "defcon competition has antivirus vendors complaining", and even an ars technica post titled "antivirus vendors pan free research from defcon contest" by david chartier...

now i'm obviously opposed to this and think it's irresponsible and unethical... not to mention they've chosen just about the worst malware type to play with - viruses... note to contest organizers, participants, and proponents: in the event that something goes wrong, self-replicators have a tendency to go on and on and on long after they are released or spread (old viruses never die)... they aren't called viruses because they make your computer feel bad, they're called viruses because they spread by themselves just like any infectious biological pathogen... this contest and those in it aim to play with fire...

with that out of the way, it's time i got to debunking a lot of the wrong thought surrounding this contest...

contest organizers (as quoted by pcworld):
Contest organizers say that they're trying to help computer users understand just how much effort is required to skirt antivirus products
if i'm not mistaken one of the key aspects to presenting things is to know your audience... the computer users at defcon already know it's easy to evade a scanner...

race to zero website:
The event involves contestants being given a sample set of viruses and malcode to modify and upload through the contest portal. The portal passes the modified samples through a number of antivirus engines and determines if the sample is a known threat.
this is a logical failure... a modified sample is by definition not a known threat until such time as a signature has been added to the scanner... if you're going to put on this kind of contest you might want to make sure you know what the difference between a known and an unknown threat is...

Signature-based antivirus is dead, people need to look to heuristic, statistical and behaviour based techniques to identify emerging threats
this represents a complete failure to understand what the heck they're testing... changing a single bit can often foil signature detection - for contestants to get anywhere in this contest they will have to foil the heuristic, the statistical (also arguably a heuristic), and the behaviour based (heuristic here too) techniques that the scanners also implement... there are no naked known-malware scanners anymore (at least not as far as consumer products go), they all have some kind of heuristic engine in them...
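
(just to make the fragility of exact matching concrete, here's a rough sketch using a cryptographic hash as a stand-in for a very exacting signature - that's an oversimplification of how real signatures work, but the principle is the same: flip one bit and the exact match is gone...)

import hashlib

# a very exacting signature stand-in: a hash over the whole sample...
# real signatures are more sophisticated, but exact matching is just as fragile
original = b'MZ\x90\x00 pretend this is a known malware sample'
modified = bytearray(original)
modified[10] ^= 0x01                 # flip a single bit somewhere in the body

print(hashlib.md5(original).hexdigest())
print(hashlib.md5(bytes(modified)).hexdigest())    # completely different value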

Antivirus is just part of the larger picture,
and this is a failure to understand their own messaging/marketing... how can anti-virus be part of the picture if it's dead?... on the one hand they want to tear down the practice of using anti-virus (anti-virus is dead) and on the other they expect people to keep it around as part of the picture... someone needs to make up their mind...

We are not creating new viruses
contrary to their mistaken belief, modifying existing viruses until such time as they're different enough from the original that neither known-malware scanning nor heuristics can recognize them is creating new viruses...

modified samples will not be released into the wild
one wonders how precisely they're going to keep that from happening? they're presenting contestants with samples, not a locked-down environment that prevents them from taking their samples with them at the end of the day and doing something stupid/careless/malicious with them after the fact... how do i know this? because providing that kind of environment would be prohibitively expensive and restrictive... it might work in an educational institution (where the costs are offset by student tuition) but not as a contest...

robert graham:
The "protectors" (product vendors) have big marketing budgets to tell us their side of the story
it's not their side of the story, it's their attempt to get people to buy their product... on some level we all know that marketing and advertising is just another kind of lying... you don't honestly believe a whopper looks as perfect and juicy in real life as it does on tv, do you? then stop being disingenuous by treating av marketing as anything more factual than that...

We only get one side of the story
that is bullshit... we hear about the failures of anti-virus all the time, we hear about people giving up entirely on anti-virus, and we hear about anti-virus being dead... we get a lot more than just the vendor's side...

Yet, such contests also help customers
defcon isn't a customer education setting, this contest isn't going to teach them anything because they aren't going to be there...

The educating needed here is that the mainstream anti-virus technologies are easily evaded, and that such evasion happens a lot
if it happens so much then why do people need to be educated about it? surely they'll have seen it for themselves or they'll know someone who's seen it and has related the story to them?

Anti-virus vendors publish tests "proving" a 99% detection rate
this is the point where one realizes that robert graham is playing the part of an uninformed crank... anti-virus vendors do not publish tests like that... that kind of self-serving behaviour would absolutely NOT be tolerated by competitors or by the community... vendors point to tests carried out and published by independent 3rd parties...

However, that doesn't apply to customers. Often, the best way to test an anti-virus product is to create your own virus.
this is beyond stupid... you don't test your airbags by smashing your car into things and you don't test your anti-virus product by making new viruses... you leave such testing to the people with the expertise and resources to do it properly... customers generally have neither...

sunnet beskerming:
It should also show up the antivirus tools that are making use of poor signature detection mechanisms
just like the race to zero website, this is a failure to understand what the contestants will really be bypassing... in order to not be detected by the scanners the samples will have to bypass the heuristic engines in those scanners... getting past a good signature scanner can be as easy as changing a single bit (because a good signature scanner will be very exacting so as to avoid false alarms)...

and those that are using weak heuristics to detect previously unknown malware.
heuristics have to be weakened in order to reduce the number of false alarms to an acceptable level... customers are generally unprepared to resolve potentially false alarms...

It is strange, though, how competitions like CTF, or the recent 0-day competition at CanSecWest, do not attract much complaint, but as soon as antivirus or antimalware tools are targeted it is too much for people
an interesting point, but one that highlights the fact that those other contests revolve around software flaws, whereas showing that new malware doesn't get picked up by blacklists is no more a flaw than notepad's inability to act as a hex editor...

david chartier/ars:
Instead of trying to deride Race to Zero, the AV industry could have a chance at working with the contest to harness what, in reality, could turn out to be some of the best research available on new malicious techniques. "You get what you pay for," as the old saying goes, but in the case of Race to Zero, the AV industry could be passing up a veritable gold mine of free ideas on how to better fight new threats.
except there's nothing for the av industry to learn from this contest... it's already known that malware can be modified, it can be modified in a countably infinite number of ways, and if you protect against one the bad guys will just choose (not even find, choose) another... uninformed people think that things would be different if we used heuristics or behaviour blockers or application whitelisting, but the reality is that those can be bypassed in equally numerous ways... their failures aren't discussed as much because few people make use of them (except for heuristics)... and as far as heuristic failures go, people just misinterpret those as failures in signature scanning, as most people involved in or commenting on this contest have already done...

to repurpose a train of thought from the riskanalys.is blog, the chance of any malware authors coincidentally creating the same malware or using the same modifications as the participants in this contest is basically 0 (n/infinity) so the value of trying to anticipate malware creation/modification techniques and use that knowledge for prevention is also generally 0... conversely, the chance of malware creators/users making use of what is revealed by this contest is greater than zero (because they're lazy, just look at eeye's bootroot to see an example of this having happened) so the value in adding detection for these new samples after the fact (or better still, avoiding the creation of those samples in the first place) is greater than zero...
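
(to put that train of thought into rough expected-value terms - the notation below is just my own shorthand for the argument, not anything taken from riskanalys.is itself, with b standing for whatever benefit prevention or detection would bring...)

\[
\text{value of anticipating} \approx \frac{n}{\infty}\cdot b = 0,
\qquad
\text{value of detecting after the fact} \approx p_{\mathrm{reuse}}\cdot b > 0
\quad (p_{\mathrm{reuse}} > 0,\ b > 0)
\]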

ultimately i'm reminded once again of something david harley wrote some time ago... he hit the nail on the head when he said the rest of the security industry still doesn't understand av technology, practice, or issues - what people (including the contest organizers) have been saying about this contest proves that much... worse still, robert graham's maligning the credibility of vendors with false statements underscores one of david harley's other points; that the av industry and community remain hugely untrusted...

one has to wonder about the security industry when it doesn't understand one of its oldest segments (and it's not like the information and people involved aren't available)... the mistrust, on the other hand, is completely understandable under these circumstances - you fear/hate what you do not understand, after all... schneier recently stated that the security industry would be coming to an end and i'll admit i had a bit of a knee-jerk reaction to that (though not so much that i wrote about it - if i did that every time i disagreed with schneier...) but rothman (i think) made a subtle change by saying that security as we know it will come to an end... that's a possibility i almost look forward to - not because i think security should be subsumed by other things, but because the things i'm seeing make me think the security industry has become fundamentally broken (and/or gone mad)... of course, long-time readers might recall i have my own prediction about security...

Monday, April 28, 2008

adapting to malware quality assurance

as previously mentioned, malware quality assurance is an attack against heuristics... the idea is to make the malware sufficiently dissimilar to other malware that most heuristic engines find little if anything to be suspicious about...

and unfortunately, it works... at the time of writing, av-comparatives.org's latest retrospective test shows that most of the results are grouped around a 40% detection rate... there are some outliers (in both directions) but 40% seems to be the general ballpark for most products when it comes to detecting new/unknown malware...

that's not really a score to be proud of... maybe once upon a time, when new malware was comparatively rare, missing the remaining 60% of those few new pieces of malware for the small window of opportunity during which they remained new/unknown wasn't really a big deal... at the current rate of new malware creation, a 60% miss rate is a big deal... users need to adapt and vendors need to adapt too...

users can adapt by taking any number of steps to prevent new/unknown programs (malware or otherwise) from executing or getting significant system privileges... this includes things like running as a limited user rather than an administrator, using behaviour blocking/HIPS software, using application whitelisting, using sandboxing, etc...
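
(for the curious, here's a toy sketch of the whitelisting idea - this is the concept only, not any particular product's implementation, and the approved-hash list is just a placeholder...)

import hashlib
import subprocess
import sys

# the approved set would be maintained by an admin or a whitelisting product;
# the single entry below is a placeholder (it happens to be the hash of an empty file)
APPROVED_SHA256 = {
    'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855',
}

def run_if_whitelisted(path):
    with open(path, 'rb') as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest in APPROVED_SHA256:
        subprocess.run([path])               # known program - let it execute
    else:
        print('not on the whitelist, refusing to run:', path)

if __name__ == '__main__':
    run_if_whitelisted(sys.argv[1])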

vendors can adapt by making those techniques/technologies easier to find, easier to understand, and easier to use... but there's one more thing... at one point i thought malware qa was pretty much the last nail in the coffin of heuristics but it's occurred to me that there's one thing vendors might be able to try to breathe some life back into heuristics...

polymorphism (sort of)... by which i mean changing/tweaking the heuristic algorithm frequently enough so as to make the results of malware qa less useful... the premise of malware qa is that if the malware is undetected right now it will stay undetected until the malware gets found by someone-somewhere, then gets sent to an anti-malware vendor for analysis, gets analyzed, gets added to the signature database, and that signature database update gets distributed to the potential victim population... create new malware fast enough and that window of opportunity, small though it may be in the ideal defender case, is still big enough... if instead that window of opportunity was significantly less predictable, malware quality assurance wouldn't offer the same kind of assurance it does now...

that's a high-level thought, though... i don't pretend to know how feasible it is to implement (i'm hoping that some engines have parameters originally intended to adjust their sensitivity to various conditions, and that those could be randomized)... i do know that this would likely increase the number of false alarms, and probably worse still make those false alarms equally unpredictable, so maybe it's a bad idea but it's an idea nonetheless and i offer it for free to anyone who wants to try it...
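
(here's a very rough sketch of what i have in mind, assuming an engine exposes some kind of sensitivity threshold that can be tuned - every name and number in it is made up for illustration...)

import random

# pretend this is the tunable part of a heuristic engine: a score threshold
# above which a sample gets flagged... malware qa only tells the malware writer
# where the threshold sat when they tested, so shifting it a little with every
# definition update makes that knowledge go stale sooner
BASE_THRESHOLD = 50
JITTER = 10          # how far the threshold may wander per update (made-up number)

def threshold_for_this_update():
    return BASE_THRESHOLD + random.randint(-JITTER, JITTER)

def is_suspicious(heuristic_score, threshold):
    return heuristic_score >= threshold

# a sample scoring just under the base threshold sometimes gets caught anyway
print(is_suspicious(45, threshold_for_this_update()))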

what is malware qa?

malware quality assurance is a process in which malware writers test their malware in order to determine whether the malware meets the degree of effectiveness they had hoped for (and obviously one where they throw out those instances that don't meet the grade)...

malware quality assurance indicates a level of maturity and professionalism on the malware writer's part, and although the term could be used to describe a variety of things (some as banal as running a virus to see if it reproduces), it is generally used to refer to the practice of running a large number of anti-malware scanners on a given sample to determine how likely that sample is to go undetected when used in the wild (which may thereby establish an objective measure of the malware sample's monetary worth)...

it is sometimes incorrectly suggested that anti-detection malware qa underscores a weakness in traditional signature-based known-malware scanning... in reality, any new piece of malware will bypass a good known-malware scanner (because the good ones are so exacting as to eliminate false alarms) so long as it's not a byte-for-byte match with a previously known piece of malware - this is true by definition, and there's no need to perform any quality assurance tests to prove it... malware qa is actually an attack against heuristics, because it is the heuristic engines that would be detecting these new/unknown malware samples the malware writers are testing, not the known-malware scanning engines...

back to index

what are heuristics?

in anti-malware, heuristics refer to a family of techniques/technologies meant to determine if a given program is malware based on a collection of rules (heuristic can be considered to be a fancy word for "rule of thumb") derived from past experience with malware...

heuristic analysis looks for structures/routines/behaviours/etc (depending on the implementation) commonly found in malware rather than looking for actual specific instances of malware... one could think of this as being a measure of similarity, not necessarily to a specific malware sample (though there are heuristic engines capable of reporting that X is a new or modified variant of Y), but rather to malware in general... this gives it the potential to detect new/unknown malware but also carries with it the potential to raise false alarms as there is nothing done by malware that is unique to just malware (rules of thumb aren't an exact science)... in order to prevent an unacceptable number of false alarms the heuristic detection typically gets watered down (by requiring more heuristic conditions to be met, or by requiring a higher heuristic score if the conditions aren't weighted equally, before the heuristic engine decides to raise an alarm), but this also has the effect of lowering the true alarm rate as well...
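
(a toy version of that scoring, just to make the trade-off concrete - the rules, weights, and threshold are all invented for the sake of the sketch...)

# each rule of thumb gets a weight; a sample is flagged only when the sum of
# the weights for the conditions it triggers reaches the threshold... raising
# the threshold waters the detection down: fewer false alarms, but fewer true alarms too
RULES = {
    'writes_to_system_folder': 30,
    'modifies_autorun_keys':   25,
    'no_visible_window':       10,
    'packed_executable':       15,
    'hooks_keyboard':          35,
}

THRESHOLD = 60   # watering down = pushing this number up

def heuristic_score(observed_conditions):
    return sum(RULES.get(c, 0) for c in observed_conditions)

sample = ['packed_executable', 'no_visible_window']
print(heuristic_score(sample) >= THRESHOLD)    # False - suspicious, but not suspicious enough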

heuristics' ability to detect unknown malware is meant to compensate for known-malware scanning's complete inability to do so; however, it cannot detect all unknown malware, and with the constraints placed on it to avoid false alarms that ability is reduced to an even lower level of effectiveness... as the instantaneous population of unknown malware increases, this shortfall in effectiveness becomes an increasingly troublesome problem...

back to index

Sunday, April 20, 2008

is anti-virus software falling behind?

as readers of my blog are probably aware, i have a bit of a penchant for trying to dispel a variety of popular myths... one of the most popular ones i've dealt with is the notion that anti-virus software can't keep up - or, as i put it before, the myth of overwhelming numbers...

when i look at my previous treatments of this myth, however, i don't see something i could point a complete newbie at and have a reasonable expectation that they'd get it so i'm going to try and make this as simple as possible - there is no publicly available evidence that points conclusively to anti-virus software falling behind...

the only kind of evidence that would conclusively point to av vendors failing to keep up is a growing backlog of undetected malware... some people think the growing numbers of people who get hit with undetected malware while using up-to-date av products or the growing number of malware samples that are undetected at any given time is equivalent to this growing backlog but it isn't...

let's use an analogy to demonstrate this... let's say i have a dog... this dog shits on the ground and then i come along and clean up the shit... someone may (unfortunately) step in the shit before i get to it, but it does get cleaned up and so long as i clean up the same amount of shit my dog produces i'm not falling behind... now let's say i get a second dog - all of a sudden the amount of shit that hits the ground on any given day doubles, the chance of someone stepping in it before it gets cleaned up doubles, but so long as i'm still picking up as much as those two dogs drop i'm still keeping up with them and not falling behind...

this can go on and on with an ever increasing number of dogs and at some point i may eventually reach a point where i have so many dogs that i can't keep up, where they produce more shit in a day than i can clean up in a day and that leftover portion is a backlog which will build up day after day and become a real mess before too long... now, unless you're keeping track both of when the dogs shit and when i clean it up you have no way to determine if the increasing amount of shit currently on the ground getting stepped on represents a backlog or simply an increasing amount of shit being produced by an increasing number of dogs... if i were getting close to that undesirable point where the dogs produce more than i can clean up i would hire help to help me pick up after them (and/or maybe i'd build a robot or various other tools to help speed up the process)... with that help there would be more breathing room (figuratively) and we'd be able to increase the number of dogs and still not fall behind...

this is pretty much the same thing that goes on in av companies - as the amount of crap the malware writers produce increases, the companies hire more analysts and develop better automated tools so that they can deal with an increasing amount of malware per day... that doesn't mean that the amount of undetected malware won't grow, it will... preventing that set from growing would require known-malware scanning to be able to detect malware before it was released (i.e. before it was known)... known-malware scanning generally can't do that any more than i can catch dog shit before it hits the ground... so long as they're analyzing as many as the malware writers are producing, though, they aren't falling behind - an increasing amount of undetected malware is not the same as a growing backlog of undetected malware... the growing pile of crap is an unavoidable consequence of the increasing production of crap and has nothing to do with whether or not anyone can keep up...

(yes, i did just compare malware to dog shit)
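
(and for those who prefer numbers to dogs, here's the same argument as a quick sketch - the figures are made up, the relationship between them is the point...)

# made-up numbers, but the relationship matters: the backlog only grows when
# daily production exceeds daily capacity, no matter how large both get
production = [100, 200, 400, 800, 1600]    # new samples created per day
capacity   = [100, 200, 400, 800, 1600]    # samples analyzed per day (scaled to match)

backlog = 0
for day, (made, handled) in enumerate(zip(production, capacity), 1):
    backlog = max(0, backlog + made - handled)
    print(f'day {day}: {made} new samples hit the ground, backlog = {backlog}')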

adware and why tracking usage sucks

phorm, the isp-side adware/spyware technology being tested out in the UK, has been getting a lot of attention in the press for the past couple of weeks... on reading the description of it on the f-secure blog i got to thinking about why exactly this sort of advertising is so bad...

leaving aside the question of all the company's past bad acts (which i know is important but i'm focusing on the type of advertising, not this particular instance) and the isp-side implementation (and all the consequences that has for informed consent), i want to look at just the basic idea of targeting ads based on where a user has browsed in the past...

obviously this requires tracking the user's activities and adware companies will no doubt try to assuage our privacy fears by claiming they don't keep identifying information... for my purposes i'm going to assume they live up to their word, regardless of how unlikely that may seem... my contention is that simply displaying ads based on past browsing behaviour represents a privacy breach because we very often don't use our computers in a vacuum...

what do i mean by that? let's consider the scenario of the road warrior employee... he (or she) has a laptop they take with them on the road and most likely also take home with them... chances are good they will do some personal browsing at some point... chances are also good that at some point they're going to come back into the office and turn on their computer... what happens then is that their co-workers or perhaps even their boss may see ads linked to browsing they were doing while in the privacy of their own home... maybe the ads are of an adult nature (porn), maybe they're of an immature nature (cartoons), or maybe they reveal employment intentions the employee isn't prepared to share with their boss (why are there ads for monster.com on your computer?)...

of course there are those who would say that such an employee shouldn't be doing private browsing on the work computer anyways (ignoring the possibility that they're actually doing work on their own personal laptop - something that can happen in smaller companies) so how about a more reasonable scenario... let's say you're a single guy browsing the web the way single guys do... let's further say you get a new girlfriend and she comes over to spend time with you... at some point she may ask to use your computer to check her webmail because she's been staying over quite a bit and hasn't been home in a while... you concede because letting her use your computer is the least you can do for her considering all the time the two of you have been spending together... then she sees the ads linked to your past browsing behaviours and stops coming over and doesn't return your calls anymore... alternatively, perhaps you're over at her place fairly often and all of a sudden dating site ads start showing up on her computer...

accurately targeted ads are a wet dream for the advertising industry - their efforts would be so much better received and have so much more impact if only they could get the right ads in front of the right people at the right time... developing sophisticated profiles based on past usage makes a certain amount of sense in that context... unfortunately, when it comes to computer use it's pretty much impossible to accurately tell who's looking at the screen, so if your ad targeting is based on anything other than what the user is doing right this instant there's a very real risk of revealing things about people, and what has meaning to them, to the people around them... you cannot capitalize on knowledge of the user through ad targeting without revealing that knowledge to the user and possibly others as well...

when people can start inferring things about you based on ads targeted at you, then the ads themselves represent a privacy breach, regardless of what personal information the ad company may or may not be keeping... the more accurate the targeting the more likely you are to breach the user's privacy... the more you try to protect the user's privacy by throwing random ads into the mix, the less accurate the targeting becomes and the less impact the ads have because they get buried amongst garbage ads...

advertising is big (and arguably legitimate) business, but those considering the kind of optimization that phorm or really any kind of targeted ad serving represents should be aware of these kinds of less obvious privacy implications...

a glimpse behind the kurt-ain

(it's times like this i wish i lived in australia - then i could be the wismer of oz...)

i got some 'link love' from mike rothman earlier this week that caught my eye... no word love, mind you, as it seems mike has decided i generally don't know what i'm talking about... hopefully our disagreements are over value judgments (like our respective moral compasses pointing in slightly different directions) rather than over facts, but that's not really what caught my eye... one of the reasons i read mike's blog is for the pointed criticism; it would be hypocritical of me to turn around and cry foul when that criticism is directed at me... what surprised me was mike's realization that i'm not in the anti-malware industry...

i generally don't write too much about myself on this blog, i don't think it's interesting or topical (though some schools of thought suggest it humanizes the blog and makes it easier for people to relate), but i thought i had been pretty transparent about who i am and where i fit in in the greater scheme of things in my about me post... mike now has me thinking that perhaps that's not the case...

so to start things off: i am not a member of the anti-malware industry... instead i'm a member of the anti-malware community... you know, those people who discuss the concepts with others online, who help people with malware problems, that sort of thing... i've done a bit of infiltration - an enlightening experience, but not really worth repeating... i collected viruses for a while, though i'm glad i stopped; with a half million malware projected for this year that would be like a full time job without the benefit of getting paid... i've done a bit of disassembly, though not so much that i ever got good at it, and there never really seemed to me to be a shortage of malware analysts... what there did seem to be a shortage of (at least at the time many moons ago) were people willing to get their hands dirty and help out their fellow man (or woman) with their immediate problems with viral infection and their more long term problem of learning how to cope with the reality of viruses in the future... at one time i committed myself to making sure that every plea for help in alt.comp.virus got an answer, but these days there are plenty of people down in the trenches now in multiple forums so i feel less pressure to do so myself... there's still that longer term problem, of course, which is why i put up my (admittedly under-maintained) av reference library site (to make the information more available), and why i spend so much time on this blog explaining things (when i'm not ranting on topics i know nothing about, of course)... i suppose you could say i've tried on roles the way some people try on personae (it was certainly the right developmental period)...

not being in the anti-malware industry means i don't have to wear the corporate muzzle, i can say what i like about the anti-malware field without consequences other than making some enemies and possibly making myself less employable in that field (and since i've never tried to be employed in that field and am not sure i could start wearing the corporate muzzle, that works out just fine)... i can criticize any and all vendors if i see fit (and i have)... i can point out their ethical misdeeds, their marketing mistakes, their snake-oil, their FUD, their hype, etc...

not being in the anti-malware industry also means i don't have as much insider knowledge as someone who is... the only insider knowledge that i have is what i've acquired by interacting with those who are part of the industry, but i've done a fair bit of that over the years...

second: i am not an IT/security professional... i'm not the guy that manages the networks or the desktops in my company, i don't decide whether the company rolls out virtualization on its servers or any of that stuff... i write code, and lucky for me i get to implement (and in some cases design) a variety of the security-related functions and features of my employer's product...

i'm also not under the illusion that the security professional's frame of reference is the only game in town... as technical as i may be sometimes, i generally adopt a more consumer-centric frame of reference with regards to the application of security controls... i like free tools and i like safe-hex and i think understanding the threat and the counter-measures helps a lot...

third (and this has nothing to do with mike's post, but it is something that comes up from time to time): i am not an expert... some people think i am, or think i consider myself to be one, but i'm not and i don't... i consider myself to be a specialist in the sense that i have specialized knowledge (something one tends to pick up when following a field for 18 years)... i have not and will not lay claim to the title of expert - i know experts in this field and they're far more knowledgeable than i...

finally (and this is probably of the most general interest to readers of this blog): there is nothing wrong with my shift key... once upon a time it was not entirely unusual for people to neglect using the shift key - and, if you haven't guessed from the distinction i make between hacker and cracker, i'm a little bit old-school... besides, it's not like following traditional capitalization rules imparts any extra information... punctuation (whether properly used or misused as i'm wont to do) is entirely sufficient to mark off the places where one statement ends and another begins... i am capable of following those rules, i just don't generally find it natural or necessary...

Tuesday, April 08, 2008

no such thing as non-executable files

i happened to notice a post by benny ketelslegers presenting a digested view of a symantec post by andrea lelli on file format vulnerabilities... i didn't give the symantec post much thought when i first encountered it, but when i read benny's post i realized that i (and perhaps many others) have been taking the issue of executable files for granted and that the word choices being made could lead to an altogether wrong concept of what a program is and by extension what sorts of things can effectively be malicious agents on a computer...

perhaps my first exposure to the idea that there are no non-executable files came when i was still a teenager learning that for every sequence of bytes there exists a turing machine for which that sequence of bytes is a virus... i didn't even really know that much about turing machines at the time (other than they were a way to model computers) but i understood what it meant - for every sequence of bytes there exists a possible computing environment where those bytes are interpretable as a self-replicating program...

it's an interesting statement, but it's awfully narrowly focused on viruses... it turns out that the focus on viruses was just because of the context... in reality, for every sequence of bytes there exists a possible computing environment where those bytes are interpretable as a program...

that probably still sounds pretty formal to the average person, but not as formal as fred cohen's reference to the generality of interpretation... the simplest way i can express this is that there is no fundamental difference between data and code, that program code basically is data but that it's data for which there happens to be some part of the computer (some other program or subsystem) that will treat it as a collection of one or more instructions... any piece of data you can imagine can potentially be interpreted (or even misinterpreted) as program code, although it may not have much meaning or do anything useful as a program...
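
(a trivial demonstration in python - the bytes below are harmless text, and they stay harmless text right up until something chooses to interpret them as instructions...)

# the same sequence of bytes, first treated purely as data...
blob = b"print('i was just data a moment ago')"
print(len(blob), 'bytes of plain old data')

# ...and then handed to an interpreter willing to treat it as program code
exec(blob.decode('ascii'))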

this ability to treat arbitrary data as code is what allows us to add new, previously unimagined capabilities to our computers in the form of new software... it's what makes our general purpose computers 'general purpose', but there's still more to the story... the ability to treat a particular chunk of data as code in practice may not even be intentional, it can be an emergent property resulting from excessive complexity and/or poor implementation... this leads to what i often refer to as exotic execution and data that makes use of such a property is generally referred to as an exploit...

all this is to say that all data may also be considered by the computer to be program code under the right conditions, and determining if those conditions have or have not been met can be very hard... as such, to say something is not program code, to say that it is non-executable, is an oversimplification that can (especially when uttered by authoritative sources like symantec) promulgate the idea that there are safe, non-executable file types, which in turn gives people a false sense of security when handling those types of files...

indeed, the majority of the file types referenced by the symantec post explicitly support the ability to contain code that their respective reader applications are intended to execute (whether that be javascript in pdf documents or visual basic in microsoft office documents)... the fact that those file types can also carry known exploits should have little bearing and the very fact that people need to be corrected about the safety of such files points to a more endemic false sense of security about data in general... there are in fact no safe file types and no such thing as non-executable files...

Monday, April 07, 2008

mac malware reality check

last month david harley wrote a really good piece about macs and malware... of course david is a writer (who i hope will forgive me if that's too much of an oversimplification) and has been following malware on the mac for as long as i can remember (you knew there was malware on the mac before osx, right?) so its balance and reason come as no great surprise... it was a fine piece of work...

perhaps too fine...

i regularly keep an eye out for new and interesting security related blogs and the name of adam o'donnell's blog NP-Incomplete appealed to the comp.sci. in me but some of his recent posts about malware for the mac (biological niches and malware prevalence, newsflash, its an issue of market share, not security), not to mention a heap of posts by others like rich mogull that i wrote about before, have made me wonder if perhaps a blunt instrument is called for instead... bluntness is something i ought to be able to handle, after all that's what being {c|k}urt is all about...

i think we need a reality check about where we are with respect to malware for the mac osx platform...

first, there are viruses for the mac osx platform... the first of these, osx/leap-a, has apparently even been seen in the wild by at least one anti-malware vendor (note that its prevalence is not at the lowest level, as it is for osx/inqtana-a)...

"but that's the old model", you say... "that's not what you need to worry about these days", you say... "it's all about the commercial malware now", you say... fine, then let's talk about the DNS changing trojan that was discovered last year... a port of zlob from windows to the mac... as craig schmugar pointed out on the avert blog, zlob was one of the most widely reported pieces of malware for windows and it was most definitely commercial malware...

"it's just one piece of malware" i can almost hear you saying - try one family of malware... it wasn't just a single instance that was made for the mac, there were many variants made as mikko hypponen clearly demonstrates... and if you have any doubts about how serious the zlob gang were about spreading malware onto macs you need only take a look at the posts alex eckelberry made for a while, while tracking the new instances found - it's a list that goes on, and on, and on, and on, and on, and on, and on, and on, and on, and on, and on, and on, and on, and on, and on, and on, and on... personally, i'm glad alex stopped filling his blog with these posts, but they certainly are useful for making a point... the point in this case is that some criminals have already started looking at the mac and they've decided it's worth the effort and expense to target them in addition to targeting the windows platform...

"but it's still just one case", you say... WRONG!... did you not hear about the rogue anti-malware apps macsweeper and imunizer? they were (as far as anyone knows) produced by a different group than the folks making zlob so that's at least 2 criminal enterprises that have moved to include the mac platform in their pool of targets...

the mac may not have crossed the chasm in the malware underground yet, but there are already multiple early adopters... as such it's no longer a matter of if the criminals will start targeting the mac, nor is it even a matter of when they will do it... the only question that remains now is how long it will take before it gets bad enough for people to wake up and notice... mac osx's age of innocence is over...

enough with the financially motivated malware

i've mentioned before that the notion that malware purveyors are now looking for money as opposed to fame gets far too much attention... well enough is enough...

stop droning on about it... stop putting me to sleep with this tired old refrain... stop trying to impress me by saying exactly the same thing everyone else has been saying for the past couple of years now...

if you really want to impress me, don't tell me malware purveyors have become money-grubbing scumbags, tell me how you're using their new behaviour patterns against them...

show me, not that you understand the playing field is changing, but rather how you're changing with it... that or admit you aren't and are therefore unimpressive... you're supposed to be fighting the bad guys, after all, not simply observing them... i want to know that you're interfering with their criminal enterprise... i want to know that you possess info that's actually actionable, because the fact that they've switched motivations by itself isn't...

Thursday, April 03, 2008

autoexecuting is baaad, m'kay?

automatically executing code has a long history of having bad consequences for computer users, from the windows autorun functionality that enables malware to spread to our computers as soon as we plug in a usb flash drive/mp3 player/digital picture frame/etc, to auto-executing macros in office documents that were used by macro viruses and other macro malware to compromise our systems as soon as we open the document, to the auto-execute nature of web content that helps drive-by downloads happen and prompted the development of things like the noscript firefox extension...

automatically executing things is convenient, sure, but that convenience comes at the cost of security, so when i saw this lifehacker post about an extension to let you auto-execute your downloads i naturally asked if this wasn't an unwise action to take... following that, my ability to comment on lifehacker was disabled (coincidence? or maybe it's like that time i became persona non grata on hoff's blog) so i can't follow up on responses there and have to do it here...

there are a couple of arguments that one might propose for why in certain circumstances the risks posed by this sort of extension might be mitigated... the first is that this extension is only for those who know and trust the site they're downloading from and therefore know and trust the program they're downloading... this comes from the long running advice to only download programs from sites you know and trust... unfortunately trust isn't transitive in that way (despite what that advice may imply), you can't know and trust code before you've downloaded it... the advice to only download from sites you know and trust is actually intended to get the user to avoid the high-risk behaviour of downloading from a site which may have intentionally malicious downloads on it, but it doesn't completely eliminate the risk of downloading harmful software... you can't say a program is safe just because it came from a trusted source - even microsoft has been known to inadvertently distribute malware... there's also the not-so-little problem of getting users to give trust wisely, and the problem of being able to tell a trusted site from a forgery...

the second argument for why the risks are mitigated is that your on-access scanner will scan the program before it runs anyways so users should be pretty safe... unfortunately, as readers of this blog well know, known-malware scanning (on-access or otherwise) is essentially ineffective against malware in the beginning stages of its life-cycle, and in recent times malware profiteers have been making greater use of ways to exploit that fact (using such things as server-side polymorphism, malware creation kits, or generally any method of creating a large number of different malware instances in a short period of time)...

there are, of course, other techniques for protecting yourself from malware (such as application whitelisting which will have no effect here because you would presumably add your download to the whitelist, or behaviour blocking which would require you to know what behaviour should be allowed) but one of the simplest approaches to this problem is to quarantine the download for a few days/weeks to give your anti-malware signatures time to catch up... this is sometimes referred to as a cooling off period...
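
(a bare-bones sketch of what that cooling off period might look like - the folder names and the number of days are arbitrary choices for illustration, and in practice you'd want to rescan with updated signatures before releasing anything...)

import os
import shutil
import time

QUARANTINE = 'quarantine'       # where fresh downloads sit and cool off
RELEASED = 'ready_to_run'       # where they go once they've aged enough
COOLING_OFF_DAYS = 14           # arbitrary - pick whatever makes you comfortable

def release_cooled_downloads():
    os.makedirs(QUARANTINE, exist_ok=True)
    os.makedirs(RELEASED, exist_ok=True)
    cutoff = time.time() - COOLING_OFF_DAYS * 24 * 60 * 60
    for name in os.listdir(QUARANTINE):
        path = os.path.join(QUARANTINE, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            # a rescan with up-to-date signatures would go here, before the move
            shutil.move(path, os.path.join(RELEASED, name))

if __name__ == '__main__':
    release_cooled_downloads()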

another option is to run the program in a sandbox of some sort like a test machine or VM... you may already be running your browser in a sandboxed environment but i would argue that new downloads should probably not be run in the same sandbox as your browser because it may get access to sensitive information in that sandbox...

in summary, automatically running things you download is something you probably don't want to do... it's risky behaviour... new downloads should be tested for safety first and our ability to do a good job at that in an automated fashion is rather limited...