Monday, March 31, 2008

av-comparatives vs panda

by now i would imagine that most people keeping an eye on the industry have noticed a bit of bad blood emerging between panda and av-comparatives...

luis corrons' post on the panda blog seemed a little like holding someone's feet to the fire - something i've done on occasion so on that level i can't really fault them (and really, sometimes people's feet do need to be held to the fire)... however, there are some very important tricks to doing this successfully... one of the most important being to not have a financial motivation to make the other party look bad - i get away with the things i say in part because i'm not actually in the anti-malware industry, so when i say someone's doing something bad there's little or no possibility for my actions to be attributable to sour grapes... luis, who represents a product that has apparently been tested by av-comparatives in the past, has a financial motivation to call av-comparatives' methods into question if that product didn't fare as well in those tests as one might hope... perhaps his characterization of the testing at av-comparatives is correct, perhaps it isn't, but a panda representative is not in the best position to be pointing it out...

robert sandilands over at the authentium blog recognized this and threw in his own two cents coming from the perspective of one whose product has never been tested by av-comparatives... unfortunately, robert opted for the personal approach, subtracting the problem element that luis encountered but adding nothing good to replace it... personally, i'm not the least bit interested in his reaction to meeting andreas clementi, there's nothing of substance or value to be found in insinuations...

you see, the second important trick to pulling off holding someone's feet to the fire is to cite specific examples of problems, and back them up with evidence if at all possible so that your criticisms are more than just baseless claims (or worse, simple character assassination/defamation)... luis tried at least to point to a specific problem when he brought up the issue of paid services provided by av-comparatives, but botched the job by giving the apparently misleading impression that av-comparatives charges av vendors money to be included in their tests (which would certainly call the independence of av-comparatives into question if it were true)...

of course, even if the fees charged by av-comparatives are for things other than inclusion in the test they're still in an awkward position... it's very difficult to profess independence from av vendors when there's any kind of relationship with them... look at virus bulletin, sister company to sophos, which suffered questions about its independence for years... those concerns fade away a little bit every time sophos fails to achieve a vb100 award, but what a dysfunctional non-competitor relationship that winds up being; one company can only do well at the expense of the other... andreas clementi did well to clarify what av-comparatives' fees were for in his post about fee structure - more transparency may not dispel doubts but it at least gives people the information they need to judge for themselves... still, i fear that so long as av-comparatives derives any financial benefit directly from members of the industry it's supposed to provide metrics for, the independence of av-comparatives and by extension the impartiality and validity of those metrics will be at least somewhat in question...

andreas' initial reaction to the panda blog post was pretty obviously written in the heat of the moment (which is perhaps why the post has been removed) but there are some issues raised that deserve attention, not the least of which being the apparently materially false statements made by panda, as well as how significant any specific problems with the testing at av-comparatives (should they actually exist) are...

(as a side note, this has highlighted a problem with using google cache to link to historical items as both cached documents appear to differ from what was originally seen)

Friday, March 21, 2008

has anti-virus' time come to an end?

no, i'm not talking about the end of the anti-virus industry or the technology, i'm talking about the term itself... that's what came to mind when i read rich mogull's post about the lack of need for anti-virus on the mac...

i think it may be time we retire the term anti-virus... it's no longer a completely accurate description of what the technology (or the people behind it) does... there hasn't been a stand-alone known-malware scanner that just detected viruses for a long time now... it's also become negatively loaded...

rich's post demonstrates this negativity (without being unreasonably anti-av, nor a rabid mac fanboy)... 'do mac users need anti-virus' indeed...

let's phrase it a different way, though - if there exists malware for the mac (and there does, it's in the wild and being actively spread by financially motivated cyber criminals) should mac users use tools to help them protect themselves from that malware?

it's hard to imagine these are actually the same question under the covers, it's so much easier to say no to the first one and so much harder for the second... harder still if you consider that mac users are no more immune to social engineering than anyone else and exploiting the user (rather than the system) has long been the primary method of getting malicious code executed (exploiting the system is generally just an optimization)...

granted all these additional considerations load the question in the opposing direction, but even without them i think you'll find that you'll get a faster no response to "do mac users need anti-virus" than you would to "do mac users need anti-malware"... the term anti-virus has come to be something people have a knee-jerk reaction to and only accept it in situations where it seems like a necessary evil (otherwise it's just an unnecessary evil) while no similar reaction has evolved yet for anti-malware...

anti-malware hasn't been loaded down with the baggage of all the anti-virus industry's mistakes and if those who adopt the terminology shift are smart and reinvent themselves rather than just relabeling themselves then perhaps the term anti-malware will remain baggage-free...

swamped or not swamped?

there's this idea floating around that says the following:
We all know malware is starting to fly under the radar of black list style detection. Low volume malware is flooding the AV labs’ capability to build detection for it.


there are two key erroneous notions in this line of argument, the first is that malware is starting to fly under the radar of known-malware scanners (blacklist style detection)... malware has always flown under the radar of known-malware scanning in the beginning of its life-cycle, the only difference now is that there is a whole lot more of it at that stage of its life-cycle at any given moment...

the second erroneous notion is that av labs are being overwhelmed by the number of malware instances being produced... the implication here is that there is an ever-growing backlog of malware to be analyzed... let's take a hypothetical example - say an av vendor is able to process 5 malware samples a day but the malware writers are producing 6 malware samples per day... over the course of a year that av vendor has slipped behind by 365 samples and we can all see how farked they'd be in that sort of situation...

that is if they were morons who sat on their thumbs doing nothing about the problem, not realizing that it's possible to increase the number of malware samples they can process in a day in the short term by adding more analysts, and in the long term by pouring research dollars into malware analysis automation... there is always room for advancement in automation, and the reverse engineering competitions that go on point to a healthy pool of potential new analysts for the av companies to draw from...

it is true that more and more malware is flying under the radar of known-malware scanning, but only because there's more and more malware period... if malware writers are creating 6 samples per day when they used to only produce 3 per day, that will appear as a 100% increase in the amount of malware flying under the radar regardless of whether av vendors can process 3, 6, or 600 per day... the increased numbers don't necessarily exceed the vendors' abilities, they just make that initial window of opportunity seem more significant...
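to make the arithmetic concrete, here's a little python sketch (using the same hypothetical numbers as above) that simulates the backlog under a fixed processing capacity and then under a modestly increased one...

```python
# hypothetical numbers, purely to illustrate the backlog argument above
def backlog_after(days, produced_per_day, processed_per_day):
    """samples still waiting for analysis after the given number of days"""
    backlog = 0
    for _ in range(days):
        backlog += produced_per_day
        backlog -= min(backlog, processed_per_day)
    return backlog

# a vendor stuck at 5 samples/day slips a full year behind...
print(backlog_after(365, produced_per_day=6, processed_per_day=5))  # 365
# ...while even a modest capacity increase keeps the backlog at zero
print(backlog_after(365, produced_per_day=6, processed_per_day=7))  # 0
```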

the balance between security and complexity

security software makes us less secure?... while i'm in complete agreement that complexity is the enemy of security, i find the idea that the security agents we install on our systems necessarily make them less secure instead of more secure to be oversimplified nonsense...

one wonders if those promulgating the idea have ever balanced a checkbook because (in dennis fisher's explanation at least) the positive contributions those tools make to the net security change are apparently absent...

security agents (also known as security tools), when used properly, most certainly have a beneficial impact on security - they implement access controls, they enforce policies, they detect malicious agents, etc...

security agents also add complexity to the system, making the system more difficult to model and therefore more likely to have vulnerabilities...

but if you only look at the downside of security tools you wind up with a completely unbalanced perspective... you need to consider both their positive and negative impact on a system in order to draw rational conclusions about the overall impact...

the price of anti-virus

i found an interesting opinion on the pricing of anti-virus products over at the agnitum blog a while ago - basically mikhail penkovsky is saying (among other things) that part of the development cost should have gone away a long time ago... well, i have a different opinion...

i've often said that signature updates tell a known-malware scanner what to look for while engine updates tell a scanner where and how to look...
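to illustrate that division of labour (and only to illustrate it - no real scanner is anywhere near this simple), here's a toy python sketch where the signatures are data handed to the engine while the where/how logic lives in the engine's own code...

```python
# a toy illustration of the signature/engine split - real scanners are vastly
# more sophisticated, but the division of labour is the same
class ToyScanner:
    def __init__(self, signatures):
        self.signatures = signatures  # signature updates change WHAT to look for

    def unpack(self, data):
        # engine updates change WHERE and HOW to look - e.g. teaching the
        # engine to see through a new packer would mean new code here
        return data

    def scan(self, data):
        data = self.unpack(data)
        return [name for name, pattern in self.signatures.items()
                if pattern in data]

# the eicar test string contains this substring, making for a safe demo
scanner = ToyScanner({"eicar-test": b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE"})
print(scanner.scan(b"...EICAR-STANDARD-ANTIVIRUS-TEST-FILE..."))  # ['eicar-test']
```

adding a signature is just new data, but handling a new hiding place means new code - which is part of why engine development never stops...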

the need for signature updates is patently obvious - malware authors keep writing more malware so more signatures are required to detect the new malware... you might think that after a while, however, you wouldn't need to update where and how the scanner looks for things, there should only be so many ways and places to hide the malware, right?

wrong - not only are there basically an infinite number of ways for malware to perform all the functions we already know about, as new legitimate software is developed new opportunities for exploitation open up alongside it (ex. create a new document format that contains macros and watch new macro-based malware get developed)... furthermore, there are a number of things that scanners still don't do well enough that require more research and development (like seeing through packers and other obfuscation techniques)...

these are just a couple of the reasons why the underlying scanning technology itself is constantly being redeveloped and reworked and why the cost of that development has never gone away...

if not mortgages then maybe car payments or psp's

one of the things that has interested me in the past but has generally seen little or no attention by the mainstream is the social dimension of malware... ultimately malware comes from people, and contrary to the traditional stereotype the people involved have generally not been anti-social - anything but in fact...

one of the few concepts from this domain to catch mainstream attention (too much so, in fact, like a song that gets played on the radio too often) is the idea that malware has become financially motivated, or more simply that malware creators are now interested in money rather than fame...

this is a description that gene hodges fleshes out a bit, going so far as to suggest that the malware writers have grown up and are paying their mortgages using monies gained from malware related activities...

allysa myers looks at recent arrests and sees something that doesn't fit in with this model, however... many of those arrested are still kids - paying a mortgage doesn't seem like the sort of thing kids would be doing...

to my mind, they're both right and they're both wrong... we like to model the world because it helps us put things in perspective and make sense of things, but models often lack sufficient complexity to accurately represent reality... the malware writer population is more complex than either of them gives it credit for...

when financially motivated cybercrime crossed the chasm in the computer underground, it did not completely supplant existing motivations (indeed, monetary rewards do not replace the need for social rewards - you can buy status symbols but you can't buy acceptance or camaraderie), rather it broadened the spectrum of rewards that one could acquire through nefarious online means and in so doing it has allowed the population to expand and diversify... so now there are amateur kids and professionals and everything in between...

proper viruses aren't really in vogue anymore, of course, so the role models that newbies learn from and emulate are no longer a clique of experienced virus writers like you'd have found in the vx - the newbies are going to be learning the tricks of the trade from (or in some cases be made into patsies by) the more advanced cyber criminals who will quite possibly be paying their mortgages with their ill-gotten gains or (perhaps more likely if they're advanced enough to have developed 'assets' to do their bidding for them) they may never have to worry about mortgages again...

the kids are the most likely to be arrested because they're the least risk-averse and least experienced in criminal enterprise and therefore represent the low-hanging fruit to law enforcement... they brag, they make splashy purchases that attract attention, they fail to adequately hide money that kids have no business having... those that are lucky enough not to get caught will probably eventually turn pro... now that one can make a living at it, malware writing is no longer something that people will largely be growing out of... the more complete its set of rewards becomes, the less those involved in it will need to go outside of it in order to get the rewards they need...

Sunday, March 09, 2008

cold boot attack good for more than just full disk encryption

while walking to the bus stop on my way to work one day last week i was thinking about the idea of measure vs. countermeasure (as i sometimes do) and an interesting juxtaposition of concepts popped into my head: non-persistence in malware and the so-called cold boot attack... bonus points if you already know where this is going...

you may recall that i ate some crow way back when i was describing how active stealth could be countered by outside-the-box analysis and then a certain stealthkit that shall not be named came along that used non-persistence to get around that problem... thanks to the work of ed felten and co. it appears i ate the crow a little too soon... non-persistence as a countermeasure depends on the very same assumptions about the volatility of RAM that many encryption implementations depend on - that RAM is volatile enough that its contents are as good as gone once the system loses power... we now know not only that this assumption is false (there are those who will point out that that much was actually known for quite some time) but also that it's relatively straightforward to exploit...

that means it's technically possible to boot to a dedicated OS on a known-clean medium, dump the contents of memory and detect so-called non-persistent stealth malware using known-malware scanning (for known malware, obviously) or using cross-view diff with a second memory dump taken while the compromised system was still active (for unknown malware)...
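for the sake of illustration, here's a minimal python sketch of the known-malware half of that idea - the dump file name and signature bytes are hypothetical placeholders, and a real implementation would obviously need real signatures and a real memory acquisition step...

```python
# minimal sketch: search a raw memory dump for known byte patterns
# (the dump path and the signature bytes below are hypothetical placeholders)
def scan_dump(dump_path, signatures, chunk_size=1 << 20):
    hits = set()
    overlap = max(len(p) for p in signatures.values())
    tail = b""
    with open(dump_path, "rb") as dump:
        while chunk := dump.read(chunk_size):
            window = tail + chunk
            for name, pattern in signatures.items():
                if pattern in window:
                    hits.add(name)
            tail = window[-overlap:]  # so patterns spanning chunk boundaries aren't missed
    return hits

print(scan_dump("memory.dump", {"hypothetical-stealthkit": b"\xfe\xed\xfa\xce"}))
```

the cross-view variant would instead diff two such dumps (one taken while the compromised system was live, one taken after the cold boot) rather than matching signatures...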

this isn't the only countermeasure for non-persistent virtualized stealthkits, but it's a neat one nonetheless and it shows once again that for every measure there is a countermeasure...

what is non-persistence?

non-persistence in malware is a property whereby the malware doesn't get written to a persistent storage medium like the hard disk but instead resides only in RAM...

the primary advantage of this technique is that on-access scanners won't scan it because the condition that triggers on-access scanning is never met, and application whitelisting won't stop it because a non-persistent program requires exotic execution by definition...

a lesser appreciated benefit is that it circumvents outside-the-box analysis because the contents of RAM are supposed to be lost when the computer shuts down... recent developments suggest this benefit might not actually exist, however...

non-persistence can even have some advantages against behaviour blockers under certain circumstances (ex. if the malware injects itself into a process that is already authorized to perform all the behaviours the malware needs to perform, then the behaviour blocker won't necessarily raise an alarm unless it can detect the injection itself)...

on-demand scanning of RAM should be able to identify known non-persistent malware so long as it doesn't use stealth or any other countermeasures, however... further, since the network is nearly the only point of entry for completely non-persistent malware, scanning at the LSP level has a good chance of catching any known exploits it might use to get into the system in the first place...

back to index

what is on-demand scanning?

on-demand scanning is a type of known-malware scanning where you select a resource (a file, a folder, a disk, etc.) and tell (demand) the scanner software to scan it (hence on-demand)...

unlike on-access scanning, this type of scanning generally does not go on in the background and is not invisible to the user - rather the scanner's user interface is displayed on the screen so that the user can monitor its progress and tell it to abort if need be...

on-demand scanning is the type of scanning that's performed when doing a scheduled scan of a system (though the exact resources that get scanned are generally pre-selected for the user's convenience)... it's also the type of scanning that is performed when scanning a system from outside-the-box...
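a bare-bones python sketch of the concept (the folder and the signature below are hypothetical placeholders) might look like this...

```python
import os

# a bare-bones on-demand scan: the user names a resource, the scanner walks it
def on_demand_scan(root, signatures):
    for dirpath, _dirnames, filenames in os.walk(root):
        for filename in filenames:
            path = os.path.join(dirpath, filename)
            try:
                with open(path, "rb") as f:
                    data = f.read()
            except OSError:
                continue  # unreadable - skip it and carry on
            for name, pattern in signatures.items():
                if pattern in data:
                    print(f"{path}: {name}")  # visible results, unlike on-access

on_demand_scan("/home/user/downloads", {"hypothetical-sig": b"\xde\xad\xbe\xef"})
```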

back to index

what is on-access scanning?

on-access scanning is a type of known-malware scanning that scans file system objects when they get accessed (hence on-access)...

also known as real-time scanning or background scanning, this is the type of anti-malware protection that people are most familiar with as it comes closest to implementing the install-and-forget model that they've been tricked into expecting from anti-malware vendors...

this is also the type of scanning that shouldn't be performed by more than one product at the same time (and thus the reason anti-malware vendors use to justify suggesting you uninstall their competitors' products before you install theirs)... when multiple on-access scanners are fighting over access to a file it can lead to unpredictable and undesirable results...
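real on-access scanners hook file i/o down in the kernel, but purely to show the trigger model (scan when a file gets touched, in the background), here's a rough user-mode approximation in python using the watchdog library...

```python
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

def scan_file(path):
    print(f"scanning {path} ...")  # placeholder for an actual signature check

# a crude user-mode stand-in for kernel-level on-access hooking
class OnAccessHandler(FileSystemEventHandler):
    def on_modified(self, event):
        if not event.is_directory:
            scan_file(event.src_path)

    on_created = on_modified  # newly written files get scanned too

observer = Observer()
observer.schedule(OnAccessHandler(), "/home/user/downloads", recursive=True)
observer.start()
try:
    time.sleep(60)  # runs quietly in the background, unlike on-demand
finally:
    observer.stop()
    observer.join()
```

note that this only sees files after they've been written - a real on-access scanner intercepts the access itself, which is exactly why two of them fighting over the same file causes trouble...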

back to index

what is a cross-view diff?

a cross-view diff is a process for detecting the presence of active stealth techniques by looking for differences across two or more views of the same resource (often the file system)... when one view is different from the other it suggests that something is manipulating the results in an attempt to hide something...

the multiple views are generally accomplished by accessing the resource using multiple techniques - possibly a high-level API for one view and a lower level one for the other, though it's possible that if something is manipulating the results from one access technique it may also manipulate the results from the other...

another possibility is to get one view while the suspect system is running and the other one from outside-the-box... this has the usual benefit of outside-the-box analysis in that the second view cannot be manipulated by any stealth malware because such malware won't be active...

looking for differences is fundamentally indistinguishable from change-detection, so this technique is essentially a type of integrity checking where the integrity of the resource access methods themselves are in question... compromising those methods is necessary for active stealth to work, so any evidence that their integrity is no longer intact is by extension evidence for the presence of stealth - and if an object (like a file) in the resource in question appears to be missing from the less trusted view then that is even stronger evidence for the presence of stealth...
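here's a minimal python sketch of the concept... note that shelling out to 'ls' isn't a genuinely lower-level access technique (it ultimately goes through the same api), it's just a convenient stand-in for a second, independent view for the purposes of illustration...

```python
import os
import subprocess

def high_level_view(path):
    return set(os.listdir(path))  # the "normal" api view

def low_level_view(path):
    # stand-in for a genuinely lower-level technique (raw file system parsing,
    # or a listing taken from outside-the-box) - here we just shell out to ls
    out = subprocess.run(["ls", "-A", path], capture_output=True, text=True)
    return set(out.stdout.splitlines())

def cross_view_diff(path):
    hidden = low_level_view(path) - high_level_view(path)
    if hidden:
        print(f"possible stealth - visible only in the low-level view: {hidden}")
    else:
        print("views agree - no evidence of stealth in this resource")

cross_view_diff("/tmp")
```

on a clean system the two views should agree; stealth malware filtering the results of one access method but not the other shows up as the set difference...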

back to index

what is outside-the-box analysis?

outside-the-box analysis refers to the practice of examining a suspect system from the outside using a trusted external environment while the suspect system's OS (and everything else installed on it) is inactive...

the primary benefit of this practice is that any malware on the suspect system will be unable to actively hide (using stealth) or actively defend itself against the tools used to perform the analysis because the malware won't be active in the first place... it is a long held axiom that you cannot trust a suspect system to accurately report its own integrity because any active malware can force the system to lie or can even attack the software asking the questions... operating in an environment where this can't happen gives you an advantage over such malware...

in common practice there are two main ways of providing a trusted external environment in which to examine a suspect machine... the first is to use a second physical computer which you connect the suspect machine's hard disk to as a so-called 'slave' drive... this is a relatively straightforward method that most people are able to do assuming they have a second machine available and can be talked through taking the hard disk out of the suspect machine and putting it in the second machine...

the second method is to boot the suspect system from an operating system on a known-clean removable medium like a floppy disk, CD, or USB flash drive... LiveCDs are a familiar concept to linux users, and old-timers will probably remember using known-clean bootable floppies on their old dos machines... for windows there is something similar called a pre-installation environment that provides much the same kind of functionality, but microsoft has been slow in recognizing the need to put that functionality in the hands of users, which led to the development of the BartPE disk (which has been mentioned many times on this blog before both by myself, and recently in a comment by vesselin, but is discussed even more by chris quirke in relation to the utility of a maintenance OS)...

back to index

Wednesday, March 05, 2008

mebroot shouldn't be so scary

there's been a lot of attention paid to mikko hypponen's statement that “You can’t execute any earlier than that” in reference to the execution of the MBR of a system afflicted with mebroot (that bootroot derivative i previously posted about)...

the news media must love that sound bite - it sounds scary, doesn't it? after all, as i said for years in usenet (and fidonet before that), the code that runs first wins - and clearly the MBR runs first... but if you remember back far enough then you'll remember that this is pretty much the same problem we had with the entire class of malware known as boot sector viruses (most of which infected the MBR)...

how did we manage to circumvent that little problem? simple, we booted from known-clean external media (back then it was a floppy, but now it would probably be a cd or some other mass storage medium that can house tools for fixing a windows machine)... as such, we had trusted code executing not before the bad code but instead of it... not really difficult, just a little obscure by today's standards (though still something people should know how to do, even in the absence of mebroot, since old viruses never die)...
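for illustration only, here's a sketch of what the check might look like in python - it assumes a known-clean baseline hash recorded earlier (the value below is a placeholder), uses a platform-specific device path, and is meant to be run with administrative privileges from that known-clean boot environment...

```python
import hashlib

# placeholder - a real baseline would be recorded while the system was known-clean
KNOWN_CLEAN_MBR_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def read_mbr(device="/dev/sda"):  # r"\\.\PhysicalDrive0" on windows
    with open(device, "rb") as disk:
        return disk.read(512)  # the MBR occupies the first 512 bytes of the disk

if hashlib.sha256(read_mbr()).hexdigest() != KNOWN_CLEAN_MBR_SHA256:
    print("MBR differs from the known-clean baseline - possible mbr malware")
```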

why anti-virus vendors are having such a hard time

ok, first of all, the fact that there are a bunch of links back to this blog has absolutely nothing to do with why i'm responding to a post about why anti-virus products are having a hard time... that said, wow, i'm glad somebody liked those posts...

the links just spelled out some backstory, as it were... the main thrust was to highlight two reasons why av products are (or at least seem to be) having a hard time...

the two reasons given are technically about why scanners specifically are becoming less effective against malware... i agree that the reasons given are contributing to problems for scanners - packers make it easy to turn a known piece of malware into an unknown piece of malware, and pre-release detection testing helps avoid releasing malware that heuristics would detect...

but both of these things (besides being outside the scope of what known-malware scanning is supposed to handle since they specifically deal with new/unknown malware) are largely out of the av vendor's control... there is something that av vendors do have control over that is contributing even more to av products seeming to have a hard time - that being a failure to adequately manage their users... they've failed to manage user understanding of threats, they've failed to manage user awareness of the tools available for mitigating the risk posed by those threats (leading to the notion that av products are just scanners - a notion that is so pervasive that most security bloggers, including the one whose article i'm responding to, give opinions and pose logical arguments based on the assumption that it's true), and ultimately (and as a result of their other failures) they've failed to manage user expectations about what those tools can do... the real reason scanners specifically are having such a hard time is that they don't get the backup they require... they were never meant to handle all malware problems (and were certainly never capable of it), only known malware problems...

one of the most novel examples of this failure is the rising anti-botnet market as discussed in this eweek article on said market... one of the first anti-botnet applications i heard about in the mainstream was the one being provided by symantec... they released it as a separate stand-alone tool and though i hoped they'd see the light and integrate it into their main anti-malware offering it seems that they've decided instead to treat it as an exciting new potential revenue stream and started charging money for it (not that i think there's anything wrong with charging money for a product, but if it's product-ready then why is it separate from the rest of their anti-malware offerings? or alternatively, if they're going to offer individual tools as well as suites, why aren't there more stand-alone tools?)... this fracturing of the anti-malware market comes at the expense of being able to communicate a clear and comprehensive message to the user/customer about anti-malware security... without anything else to tell them how anti-malware security works (and for the most part there isn't anything else that regular people would be exposed to), the way the technologies themselves are presented implicitly communicates this to the user and fracturing your own set of offerings to make some extra green is a failure to properly manage this implicit message...