Wednesday, January 31, 2007

i think i 'hear' a malicious program

an interesting story that's come out of the recent vista release concerns the ability for a web page with an audio file to execute commands on the user's system through the speech recognition feature (not to be confused with voice recognition, though it often is)...

now, you may recall that i define a program as a collection of instructions meant to be executed or interpreted by the computer for the purposes of carrying out a task... microsoft has just made audio files (regardless of the specific file format) into programs with this speech recognition implementation... by that i mean that audio files can now contain instructions that the computer is able to interpret and carry out (in fact, the same holds true for video files - vista meet youtube)...

this is significant because audio (and video) files in general were not previously considered to be active/executable content... sure there have been cases of vulnerabilities in various media players being exploited in order to launch arbitrary code, or even specific formats allowing the embedding of instructions such as javascript within them, but those were isolated incidents... now all audio/video content, regardless of format, is potentially a program under vista and that is a malware vector that no average person will be expecting...

the technical potential for abuse is sizable but there are some practical limitations - obviously a computer is going to need speakers (quite common) and a microphone (not exactly unheard of) and speech recognition will need to be enabled and configured in order for it to mistake multi-media content for legitimate user commands... anyone who uses speech recognition is going to have it enabled and configured and they're going to have a microphone - and since most people also have speakers, most speech recognition users are going to be vulnerable to this form of remote code execution...

now, vista also has something called user account control (aka UAC, where potentially dangerous tasks require the user to type in the administrative password) and as has been said elsewhere this should limit the damage that multi-media malware can do, but imagine the new social engineering opportunities for something like this... imagine you're browsing along and all of a sudden, out of nowhere, a voice starts speaking to you... he says his name is carl and he works for microsoft as part of their new remote system maintenance and repair service that they're rolling out with windows vista... he says microsoft's servers detected a problem on your machine and in order to fix it he's going to download and install a repair tool that will require your administrative password... do you think people would fall for this kind of multi-media downloader trojan? i have a sneaking suspicion some might - after all, people are still unzipping and executing attachments they get in email without verifying the intent of the sender (speaking of email, i wonder if multi-media files are among the content vista's email client tries to block)...

while we're dreaming up scenarios which may or may not be plausible, let's consider who many of the speech recognition users are going to be... speech recognition is something that greatly benefits those with special accessibility needs... another accessibility technology is the text-to-speech engine that visually impaired computer users use in order to have the textual content on the screen read out to them... if these two technologies get used together then even plain english (or whatever other natural language you happen to use) text could become a program or, worse still, malware (somehow i can't see vista's email client blocking text content)... this takes the complaint about microsoft violating the principle of separating data from code to a whole new level...

Tuesday, January 30, 2007

HIPS: what's in a name?

i was reading this HIPS article and i came upon something that struck me as being considerably less than optimal right in the first paragraph:
There are numerous techniques for providing host-based intrusion prevention capabilities, but eWeek Labs believes there are two that will best complement enterprises' current strategies: vulnerability inspection and application and process vetting.
no, i'm not about to criticize eweek's choice in technologies - i'm about to point out the fact that the term HIPS (host-based intrusion prevention systems) has basically come to mean nothing...

what do i mean? i mean that everything short of virus (or spyware) scanners and simplistic firewalls gets lumped into the HIPS pile... just look at this example: scanning network traffic for exploits (vulnerability inspection) couldn't be more different from application whitelisting (application/process vetting)... then there are the HIPS products that use behaviour blocking, or the ones that try to implement application-centric access control lists, or even known-exploit scanners (which are apparently different/more specific than vulnerability inspectors)...

if you were a consumer looking to compare different HIPS products meaningfully you'd be out of luck - they don't just have different technology (each anti-virus scanner has slightly different technology), they implement entirely different techniques and literally represent different paradigms... comparing an application whitelist to a behaviour blocker or exploit scanner is like comparing apples to oranges, but such distinctions aren't apparent when everything gets called a HIPS...

maybe HIPS was the buzzword bingo that all the vendors wanted in on, but the big losers are average folks who don't know the difference and are misled by this meaningless umbrella term that implies all those technologies are equal - they aren't, and folks are being done a disservice when they aren't told up front what's what... they can't compare them, they can't combine them intelligently, all they can do is pay their money and say thank you sir, may i have some more...

Thursday, January 25, 2007

eEye on malware naming

y'know, when you're in a position where you're supposed to be an authority on a subject and you're talking about something seemingly related to that subject, it behooves you to either know what you're talking about or stop talking...

in marc maiffret's case neither of those paths was taken... it's amazing to me that the chief technology officer at eEye would say something like this:
"In the vulnerability world, we have CVEs [Common Vulnerabilities and Exposures] as a way to know that we're all talking about the same vulnerability regardless of what we might have named it in our product. In the anti-virus world, there's not really anything like that."
has he been asleep for the past year or more?* because the CME (common malware enumeration) has been around since october 2005 (actually 2005 this time)...

it's difficult to take anything he says seriously after such an incredible gaffe but it's also difficult to let such clear (and frankly surprising considering his position) false authority syndrome slide...

you see, marc would have us believe that the vendors are fighting over who gets to name what and that because they're making "really good money" that they have no incentive to address the naming confusion and give users what they "are actually asking for"... apparently in marc's experience if you just put your mind to it you should be able to get 20-30 companies who operate independently (necessarily so since they're producing signatures for use with different technologies) to co-ordinate the naming of hundreds of malware samples per day while not compromising their top priority of getting detection capabilities (which necessarily require a name, any name, good or bad, to identify what is detected) to users as fast as possible...

that was sarcasm, of course... you can't co-ordinate malware naming without slowing down the process of getting signatures to customers and thus compromising that top priority - and i'm pretty sure that most people would choose a speedy signature turn-around (which directly aids in prevention) over harmonized naming (which doesn't)... while you're waiting for those 20-30 companies to figure out if they already have a copy of your to-be-co-ordinated sample and which of their many samples that is, your analysts have already finished their analysis and created signatures to be pushed out to customers...

what they can do (and the main CME page indicates they have done on occasion) is rename the malware after the fact, either to adopt the name other companies are using or to append the CME identifier to the name... unfortunately, this still requires time and effort to co-ordinate a harmonized name and thus cannot possibly be done for each of the hundreds of samples anti-virus companies process each day - especially when most of those hundreds of samples are complete flops in the wild (making the work to harmonize their names wasted effort)... even without explicitly renaming their samples, the CME lists the various names associated with particular CME ids and that resolves much of the naming ambiguity end users are likely to encounter...
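the after-the-fact resolution the CME provides amounts to a simple many-names-to-one-id mapping... a toy sketch of the idea (the id and vendor names below are made up for illustration - they're not real CME entries):

```python
# hypothetical CME-style enumeration: one common id maps to the
# different names each vendor uses for the same malware sample
cme = {
    "CME-123": {"vendorA": "W32/Examplia.A", "vendorB": "Win32.Examplia"},
}

def same_malware(name1, name2, enumeration):
    # two names refer to the same sample if some common id lists them both
    for names in enumeration.values():
        if name1 in names.values() and name2 in names.values():
            return True
    return False

print(same_malware("W32/Examplia.A", "Win32.Examplia", cme))  # True
```

a lookup like this resolves the "are we talking about the same thing?" question without requiring the vendors to agree on a single name up front...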

so next time you see someone attributing the malware naming mess to lack of interest or petty rivalry, take a moment to consider the realities of the situation and ask yourself how or even if those logistical problems can be overcome (without fundamentally changing the anti-malware landscape, since obviously naming wouldn't be a problem if there were magically only one company)...

ADDENDUM: i've been informed by marc maiffret that he was misquoted in the article in question, at least as far as there being nothing like the CVE in the malware world (maybe other things too?)... as such, i apologize for characterizing marc as suffering from false authority syndrome (and for being asleep for a year or more) since it's no longer clear that's the case... it seems the article's author, scott m. fulton, may be more responsible for the false picture that article painted... i do stand by my criticism of the notions put forth in that article though - the problems associated with malware naming are not easily overcome, nor trivially attributable to character flaws in the vendors...

Wednesday, January 24, 2007

the myth of overwhelming numbers

one of the myths put forward by the anti-virus is dead crowd is that anti-virus companies just can't keep up with the malware authors... this article about the death of anti-virus, balanced though it may be, contains the following quote that demonstrates just what i'm talking about:
"The traditional, signature-based technologies are simply not able to keep up with the sheer volume of malware that's out there," Jaquith told TechNewsWorld. "There are over 200,000 unique pieces of malware out there. Some host intrusion vendors say that number is closer to a million.
200,000 indeed - that does seem like an awfully big number, how could anti-virus companies possibly keep up? in actuality, this is an example of lying with numbers... although the 200,000 figure is fairly accurate (at least it's in line with what mcafee and f-secure were saying last time i checked), it is not something you keep up with... keeping up is something you do with a rate (a certain number of things per minute or hour or day or year) and 200,000 isn't a rate, it's a total... if we were to turn that total into a rate we'd need to include the time period over which it occurred, and although 200,000 sounds like a pretty big number, 200,000 over the course of 20 years doesn't sound quite so overwhelming...

i could end there, but that would be misleading in the opposite direction... 200,000 over 20 years works out to about 27 per day but that's not the right figure right now, that assumes a constant rate of malware production over those 20 years and it has been anything but constant... in this article stanislav shevchenko (the head of kaspersky labs virus lab) is quoted as saying that november 2006 saw 10,000 malware samples added to their database - that works out to about 333 per day... what's more, it's apparently 5 times the number from january 2006...

now you're probably thinking we're back in the overwhelming numbers range again, but wait - stanislav has more figures, specifically the time it takes for a good analyst to process an average malware sample is 5 minutes... if you add in the stated constraint of a 12 hour shift a good analyst should be able to process 144 samples per day so november's incredible figures could probably have been handled by a grand total of 3 analysts with time to spare assuming there weren't too many samples that were significantly more complicated than average - and even if that assumption is false, anti-virus labs tend to have more than just 3 analysts...
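the arithmetic above is simple enough to check... all the inputs are the quoted estimates from the articles (the 200,000 total, kaspersky's 10,000 samples for november 2006, 5 minutes per sample, 12 hour shifts), not measurements of my own:

```python
import math

# back-of-the-envelope figures from the quoted sources
total_samples = 200_000                               # rough historical total
years = 20
per_day_historical = total_samples / (years * 365)    # ~27 per day

november_samples = 10_000                             # kaspersky, november 2006
per_day_now = november_samples / 30                   # ~333 per day

minutes_per_sample = 5                                # stated average analysis time
shift_minutes = 12 * 60
per_analyst_per_day = shift_minutes // minutes_per_sample   # 144 samples/shift

analysts_needed = math.ceil(per_day_now / per_analyst_per_day)
print(per_day_historical, per_day_now, per_analyst_per_day, analysts_needed)
```

running the numbers gives roughly 27/day historically, 333/day for november 2006, 144 samples per analyst shift, and thus 3 analysts to cover the load - exactly the "grand total of 3 analysts with time to spare" above...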

so clearly, right now anti-virus companies are not being overwhelmed by samples... that only leaves the future to worry about... if the numbers increased 5 fold from january to november in 2006, what are they going to be like this time next year or the year after that?...

the first graph shown here depicts the increase in the malware production rate over time for the past several years and admittedly it looks pretty grim but i saw a graph with a nearly identical shape (though obviously with very different numbers) ten or more years ago... somehow the world hasn't ended yet, anti-virus hasn't died yet, and life goes on... the more you do a repetitive task, the more patterns start to emerge and with those patterns comes the potential for more automation which in turn leads to faster malware analysis... advances in automation have sped up and will continue to speed up the analysis process so the future doesn't really look all that bleak either...

[edited to fix the year specified - apparently i'm still getting used to this whole 2007 thing]

Tuesday, January 23, 2007

ethical conflict in the anti-'rootkit' domain - part 2

while jamie butler, the creator/distributor of one of the most widely deployed stealthkits in the world (as i pointed out previously), is no longer the CTO of the government/military funded anti-stealthkit startup komoku (oh, to have been a fly on that wall), there's a new conflict to grab the spotlight...

found on the anti-rootkit blog a couple of days ago: apparently the creator of a program called rootkit unhooker has created a stealthkit called unreal that no one (not even his own product) can detect, and he's planning to release it...

tell me something - what kind of a world do we live in where people who are supposed to be anti-X go around making X's? if anti-virus vendors created and released viruses all hell would break loose... could you imagine buying anti-spam technology that you read about in a spam email? would you use an anti-spyware app created by people who make spyware?

so this person, known as EP_XOFF (wow, that instills confidence), expects people to trust his stealthkit detector after he's built and released a stealthkit that not even his own product detects? aside from the fact that he's just demonstrated that neither his nor any other detector is able to protect you (so far only outside-the-box cross-view analysis is going to pick it up and none of the mainstream stealthkit detectors use that technique), you now have to wonder if his detector actually comes with a similarly undetectable stealthkit that you don't know about... how could you know one way or another? and why would you trust someone who's making the technology to bypass all stealthkit detectors available for people to download and modify for their own ends?

i mean, i understand trying to find ways to break the security of something, but when it's the security in applications like the one you yourself make, shouldn't you first come up with ways to prevent that break before giving the attack information to the public? who is he serving by releasing the info before even his own product is able to deal with it? it sure doesn't seem like he's serving the end user... though he may not be earning money off of making the problem worse as others have, there are still plenty of personal rewards he will almost certainly reap, such as notoriety, influence, and social standing in the stealthkit research community...

what this all boils down to is this: EP_XOFF is gaining rewards at everyone else's expense... this is not the way you want your security software provider to behave...

Monday, January 22, 2007

anti-virus is not a faulty burglar alarm

wow, what a great analogy robin bloor makes here... too bad it's his reasoning that is faulty...

one of my favourite turns of phrase lately is mismatched expectations - robin bloor is quite clearly suffering from mismatched expectations by likening anti-virus to a faulty burglar alarm... anti-virus is nothing like a burglar alarm, faulty or otherwise, nor is the problem it is trying to solve amenable to a burglar alarm type of approach...

burglar alarms are pretty simple things - you have one or more sensors that detect basic, easily quantifiable environmental conditions (broken window, open door, motion, etc) and an alarm goes off when the sensor is triggered... this is pretty dumb, all things considered, but it works well enough when the thing you're trying to detect can be broken down to such simple events and even better when the home owner can easily decide whether something is a false alarm...

the malware problem, by comparison, cannot be broken down into simple elements quite so easily and end users are largely incapable of deciding whether an alarm from a malware detector is false or not... this is why behavioural detection techniques (which have been around for over a decade) are still not receiving the same kind of mainstream attention that known-virus scanning receives... instead of going the burglar alarm route, anti-virus incorporates a considerable amount of knowledge about the viruses (now more generally, malware) that the vendor has seen in order to minimize false alarms... anti-virus still has false alarms, of course, but just imagine how bad it would be if av were made to be as dumb as a burglar alarm... if we're going to stick to crime-related analogies, anti-virus is really much more like the criminal databases that have proven so useful in the past to keep known felons out of places they're not supposed to go or out of job positions they shouldn't have... not that those databases are perfect, but they sure do help...

of course, bad analogies are not the only thing robin has up his sleeve in that article... he spreads some clever FUD about the malware pandemic using carefully selected figures from a microsoft study i wrote about once before... yes, the malicious software removal tool removed malware from 5.7 million computers, but what robin fails to tell you is that it scanned over 270 million computers... that gives a malware penetration of just 2.1% - hardly a pandemic...

all this is to build up to his conclusion that application whitelisting is a superior technology that should be used in place of anti-virus... maybe it is superior, maybe it isn't, it depends on things i've mentioned elsewhere, specifically whether the user can accurately decide what is safe to add to the whitelist and whether the whitelist can cover enough of the various types of program execution (it's not just *.exe's out there)... should whitelisting be used in place of blacklisting (anti-virus)? no, in reality blacklists and whitelists complement each other, they partially mitigate each other's weaknesses, so they really ought to be used together...
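the complementary relationship can be sketched as a simple layered check - the blacklist catches known-bad with no user judgment required, while the whitelist blocks the unknown-unvetted stuff the blacklist hasn't caught up with yet... a toy illustration (the filenames here are made up):

```python
def evaluate(program, blacklist, whitelist):
    # blacklist first: known malware is blocked with confidence,
    # no judgment call needed from the user
    if program in blacklist:
        return "block (known malware)"
    # whitelist next: explicitly vetted software is allowed
    if program in whitelist:
        return "allow (vetted)"
    # neither list knows it - this is the gap each list leaves on its own
    return "deny by default (unknown - needs vetting)"

blacklist = {"snowwhite.exe"}       # known malware names (illustrative)
whitelist = {"notepad.exe"}         # vetted applications (illustrative)
for p in ["snowwhite.exe", "notepad.exe", "mystery.exe"]:
    print(p, "->", evaluate(p, blacklist, whitelist))
```

each list handles the case the other can't: the blacklist spares the user a judgment call on known malware, the whitelist covers malware the blacklist hasn't seen yet...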

and just as a point of correction - contrary to mr. bloor's assertion, application whitelisting is not relatively new... the basic idea dates back at least a decade as an anti-virus (gasp!) suite known as thunderbyte anti-virus included a rudimentary form of whitelisting in its tbcheck module... tbav was a fairly well known product in its day and there was plenty of opportunity for whitelisting to become popular, but it didn't... others have languished in obscurity too... anti-virus seems to have remained the most popular in part because it required the least amount of knowledge and/or thinking from the end user - and when it comes to malware, that can actually be a very good thing...

Thursday, January 18, 2007

where do stealthkit detectors belong

mike rothman brought up an interesting point today in a section of his daily incite - where does a stealthkit detector (rootkit detector to those of you drinking the revisionist rootkit koolaid) belong? inside a larger security application suite (or perhaps a larger security application, period) or on its own as a separate tool? although they're often separate right now, mike thinks they probably shouldn't be...

but given how they operate and how best the fundamental technique they employ can be used, is merging them into larger products really reasonable?

most stealthkit detectors employ a generic detection technique known as a cross-view diff or cross-view analysis where essentially you look at some area of the computer in 2 or more different ways and see if there are any discrepancies between them... although most stealthkit detectors are trying to cut this particular corner, outside-the-box analysis is ultimately necessary to truly see through persistent stealthed malware... the av industry knew this back in the 90's (it was called clean booting back then, though microsoft has since recycled that term) but got microsoft's boot up their ass when NTFS became mainstream without any robust method to perform outside-the-box analysis on that type of file system so they've been squeaking by with safe-mode ever since... anti-spyware and other anti-malware apps that grew up in the interim have never even dealt with this methodology because active stealth went out of fashion for a number of years but for an anti-malware that deals specifically with stealth in a generic sort of way at least part of it's operation really should be done outside-the-box and that is going to set it apart from other anti-malware functionality...
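the cross-view idea itself is simple to sketch: enumerate the same objects two independent ways and diff the results... the following is a toy illustration (the two "views" here are just placeholder lists - in a real detector the high-level view would come from the possibly-hooked OS API and the low-level view from parsing the file system directly, ideally from outside the box):

```python
# toy cross-view diff: compare two independent enumerations of the
# same directory and report anything one view sees but the other hides
def cross_view_diff(high_level_view, low_level_view):
    high = set(high_level_view)
    low = set(low_level_view)
    return {
        "hidden": low - high,   # present on disk, hidden from the API
        "ghosts": high - low,   # reported by the API, absent on disk
    }

# simulated result: the stealthkit filters its own file out of the
# API view, but the raw enumeration still sees it
api_listing = ["notepad.exe", "calc.exe"]
raw_listing = ["notepad.exe", "calc.exe", "stealth.sys"]
print(cross_view_diff(api_listing, raw_listing))
```

the "hidden" discrepancy is exactly what a stealthkit detector flags - and why the low-level view has to be trustworthy, which is what outside-the-box analysis buys you...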

as yet the only detector i know of that does an outside-the-box cross-view diff is the WinPE implementation of microsoft's strider ghostbuster rootkit detection technology but that's still just a research project... no one else does it because as yet microsoft have not made WinPE easily available to the public (though i've heard rumors this may change with vista) and the alternative, BartPE, must be generated by the user (due to copyright and licensing restrictions from microsoft) rather than distributed with the stealthkit detector, which is a not-so-simple task from the perspective of average-joe computer users...

but if the anti-stealthkit folks were to do things right, how well would the resulting product (that runs off a bootable cd) fit in with existing security apps? not very well would be my guess... it would be nice if anti-virus and other anti-malware apps could operate in a similar outside-the-box environment since stealth is just a means to hide what eventually becomes known-malware, but that defies centralized management consoles and scheduled scans and a variety of other convenience features (bloat) that they've accumulated over the years... outside-the-box is fundamentally at odds with convenience but it's necessary for reliable generic detection of stealthed objects so how can you reconcile stealth malware detection with the increasingly convenient unified anti-malware product?

Tuesday, January 16, 2007

malware doesn't hide in search results

the article Malware Now Hides in Search Results has got to have one of the most misleading titles i've seen in a long time...

maybe it's just me but when i saw that i was thinking maybe some clever malware purveyor was gaming search engines to get his/her wares installed on victims' computers - maybe some novel exploit or social engineering trick...

instead it's about how there's nothing in the search results... when you use a search engine to search for information on a piece of suspected malware by filename you often won't find anything... yeah, not exactly news to anyone who's been down in the trenches helping people get rid of malware anytime in the last decade or more - i suppose i should have said something sooner so that you wouldn't waste valuable moments of your life reading the article...

but apparently this is something prevx has discovered by analyzing 250,000 filenames... malware is increasingly using deceptive naming tactics - which is true when you consider that virtually all malware uses deceptive naming tactics (when was the last time you encountered properly labeled malware in the wild?) and the set of all malware is increasing in number... supposedly even expert users will find it next to impossible to get malware information this way - though i should hope expert users would already realize how incredibly useless filenames are for identifying malware (or anything else for that matter) what with that expertise including the ability to rename files to anything they choose...

filenames have never been sufficient to identify (and therefore get further information on) a suspected malware sample... what's snowwhite.exe in today's email could be rootkitrevealer.exe in tomorrow's email... that's why known-malware scanners look inside the file instead of just at the filename, and then users use the malware name given by the scanner to look up additional information... the best you can hope for with search results is to use them in a process of elimination sort of way - narrowing down the field of probable suspects by eliminating those which search results suggest are known good files (and then hope that the malware hasn't replaced a legitimate file with itself or actually infected a legitimate file)...
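the point about looking inside the file rather than at the name can be illustrated with a content hash - the same bytes keep the same identity no matter what the file is renamed to... (this is a simplification: real scanners use signatures rather than whole-file hashes, since hashes break the moment a single byte changes, but it shows why the name is irrelevant:)

```python
import hashlib

def content_id(data: bytes) -> str:
    # identify a sample by what it contains, not what it's called
    return hashlib.sha256(data).hexdigest()

# pretend payload - the same bytes circulating under two different names
sample = b"...pretend this is a malware sample..."
as_snowwhite = ("snowwhite.exe", sample)
as_revealer = ("rootkitrevealer.exe", sample)

# identical content yields the identical identifier regardless of filename
assert content_id(as_snowwhite[1]) == content_id(as_revealer[1])
print(content_id(sample)[:16])
```

which is exactly why snowwhite.exe in today's email and rootkitrevealer.exe in tomorrow's can be recognized as the same thing - and why searching on the filename tells you nothing...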

prevx and techworld/cso magazine, in a brilliant portrayal of captain obvious, have essentially just informed the masses that you can't judge a book by its cover...

Monday, January 15, 2007

why 'wipe and reinstall' is wrong-headed

there's been a long running debate about the best way to handle malware... some say using malware removal tools is best while others make a strong argument for wiping the drive and reinstalling from backups or even from original media...

richard bejtlich is a wipe and reinstall from original media proponent*... this is odd because you basically have the guy who popularized the awareness of extrusion in the security context advocating a method that makes determining the extent of extrusion of sensitive data from a malware incident impossible...

you see, historically people have thought of the malware problem as being simply a type of intrusion ('something bad got into my system/network') and that all you needed to do was get rid of the intruder, but now-a-days with the criminally oriented malware (crimeware) out there extrusion of sensitive data ('my passwords/credit info/banking info/etc got out') is increasingly becoming a very real possibility... while prevention is still just concerned about what might get in, recovery must now be concerned both with what got in and what might have gotten out...

back in the day, the one advantage a wipe and reinstall had over the more surgical malware removal was expedience - though never actually necessary, sometimes it was faster/easier to just nuke the drive and rebuild from scratch... no, certainty was not one of its benefits, certainty was not there - just as richard points out that backups could have been compromised, so too can original media be compromised (it's happened in the past and it will happen again in the future)... as such, certainty in malware recovery is unattainable...

as technology has marched forward, the expedience offered by a wipe and reinstall (or similar methods like restoring drive images) relative to surgical malware removal has only increased but it comes at the cost of masking compromises to assets both on and remote from the affected machine... further, that expedience is very tempting to the lazy and/or ill-informed, it becomes the knee-jerk reaction to even a suspected compromise - after all, why bother with anything else if a wipe and reinstall will make it right regardless? why bother even getting a diagnosis?

the answer to that question, of course, is that without a diagnosis (literally, thorough knowledge) of the malware you can't hope to address the consequences of the malware... you need to know what the malware is, what it can do, how it got in, what it might have leaked out, etc... the wipe and reinstall advice that is generally bandied about trains people not to worry about or even think about those things, it implies that all you need to worry about is getting the intruder out... with thorough knowledge, on the other hand, surgical malware removal (preferably by replacing affected software objects with known clean copies from original media or backups where available, or using a removal tool dedicated to that one malware or its family, or as a last resort using general purpose removal functionality built into most known-malware scanners) is possible, as is determining the likely entry vector and assets compromised...

admittedly, diagnosis and surgical removal may not be a speedy process... while home user machines are generally not mission critical, businesses/organizations often can't necessarily afford to have a production machine out of commission for the length of time it takes to find out everything you need to know... even then, wipe and reinstall is not the answer - instead create an image of the drive from the compromised system (or remove the physical drive itself and replace it with a fresh one) and then rebuild the system so that it can go back into production while retaining all the information necessary to complete the diagnosis after the fact... this has to be done with the awareness that one is putting the machine back into a potentially compromising situation before figuring out how to prevent subsequent/additional compromise, however...

[*update: apparently, richard bejtlich didn't mean what i thought he meant when he said the safest method of malware removal was reinstallation from original media... apparently richard was talking about the process of actually removing malware abstracted from the broader concept of malware incident response procedures - my apologies for the mix-up, but let the following sink in and you'll understand how the confusion occurred... for most people the process of actually removing malware is their malware incident response procedure - so much so that malware removal has become synonymous with malware incident response (to the point that it's used in preference of malware incident response simply because it's a more familiar term and because it's less jargon-laden)... no one really talks about literal malware removal abstracted from the larger context of malware incident response either because they don't recognize the difference between them or they do but the removal part by itself just isn't that interesting...

so i misinterpreted richard when he actually did talk about the removal part by itself, but i'm fairly certain he in turn misinterpreted the use of malware removal that he was responding to - malware removal certification will almost certainly include more than just getting the bad stuff out...]

being cautious online

i found this interesting quote from rixstep by way of the security protocols blog:
People regularly download software they cannot claim they trust and just run it with no thought for the consequences. Yes, it's 'only' a computer - but listen to them wail if something goes wrong. They're living in their rose (pink) coloured world and are totally unaware of the threats lurking outside in the dark.
this is incredibly true and more people need to become aware of this... maybe not so much that they learn the exact extent of the threat that each online activity poses, but at least know that everything they click on online has the potential to download something onto their computer, that every time they download something to their computer it is fundamentally the same as putting it in their computer, and that everything they put in their computer (regardless of the source, because even microsoft has been known to inadvertently distribute malware in the past) has the potential to do bad (maybe even costly) things...

if people just used the same care when putting things in their computer as they do when putting things in their mouths, their computers would be a lot safer...

my suggestion to everyone is this: avoid that which you don't know you can trust, and if/when you discover you put your trust in something you shouldn't have then you need to adjust your expectations for the next time...

Wednesday, January 10, 2007

phishing alarmism

there's a new post over on the securiteam blog that seems to be just a little too concerned about a bank of america suggestion to add an email address to customers' address books... the author seems to believe that following the suggestion will make customers more vulnerable to phishing, that the bank is asking them to lower their defenses...

now, it's not like they're telling people to lower the security settings on their browsers in order to view the bank's website (though perhaps they do that too, i don't know), they're just telling their customers to whitelist them so that their emails don't get rejected by spam filters... the author's contention is that phishers will then start using the same address the customers whitelist as the From: address on their own phishy emails and because the address is whitelisted the customers will be exposed to each and every one of those phishing emails...

but here's the thing, even if bank of america didn't advise their customers to whitelist the email address, the phishers would have used it anyways... phishers posing as bank of america will use any address bank of america uses, regardless of what customers do with that address... there's nothing bank of america can do to stop that and there's nothing customers can do to stop that so it makes little difference whether the customers whitelist it or not... whitelisting the address doesn't mean they can start implicitly trusting email apparently sent from that address, it just means that bank of america's legitimate correspondence won't get lost in the junk mail folder...
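to see why whitelisting the From: address can't really change anyone's exposure, consider how trivially that header can be forged... here's a minimal python sketch (the addresses are made up for illustration, they're not bank of america's real addresses):

```python
# the From: header is just text the sender chooses - nothing stops a
# phisher from claiming to be whatever address the bank actually uses
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alerts@bankofamerica.example"  # forged, hypothetical address
msg["To"] = "victim@example.com"
msg["Subject"] = "please verify your account"
msg.set_content("click here to verify your account...")

# a whitelist keyed on this header can't tell the forgery from the real thing
print(msg["From"])  # alerts@bankofamerica.example
```

the point being that the header the whitelist matches on is entirely under the sender's control, so whitelisting it neither helps nor hinders the phisher...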

of course the phishing emails won't get lost there either, perhaps that's the problem? unfortunately, spam filters aren't really any good at stopping phishing emails... the phishers work hard to make their emails look legit (otherwise they wouldn't fool anyone) so they should be affected by, or immune to, spam filters to the same degree as the legitimate bank of america emails are... if the legit emails get through then so will the phishing emails, and if the legit emails don't get through then the users will have to look in their junk mail folders for them and then wind up being exposed to the phishing emails anyways...

basically it's a zero-sum situation as far as phishing goes - nothing the legitimate sender or the receiver does with the From: address can increase or decrease the exposure to phishing so the bank of america customers might as well whitelist bank of america's email address - and security folks might as well wake up to the fact that just because something can be used to a phisher's advantage doesn't mean the user is put at greater risk... spam filtering and anti-phishing are not the same, even though we don't want to see either spam or phishing emails in our inboxes, defenses against one are not necessarily appropriate or applicable to the other... the officially whitelisted email address just means the phishers don't have to work so hard to figure out what email address to forge on their phishing emails - their convenience does not equal lower security for others...

(and if you're wondering why this isn't a comment on the securiteam blog, it's because i can't leave comments there anymore... they've been consistently rejected as spam for months regardless of content, email address used, or the presence of a url - and the blog admin who's supposed to be alerted to the comment in order to correct the issue if it was misclassified never does, nor is there any obvious way to follow the directions which say to contact him/her about it... they sure know how to make a guy feel welcome...)

Sunday, January 07, 2007

marcus ranum has found his saviour . . . or so he thinks

marcus ranum has finally found an execution control app (in other words application whitelisting software, the likes of which i've written about before) that he's happy with and apparently thinks could be the final nail in the coffin of anti-virus software - read it here...

yes, marcus is one of those people who thinks anti-virus is dead, or if it isn't it should be... he's taken shots at it before under the guise of the stupidity of enumerating bad things instead of good things (nevermind the fact that there are far more good things in the world than bad) and in the above article makes reference to it as being a "default permit" technology... clearly, marcus likes to think in firewall terms...

despite that, calling anti-virus a "default permit" type of technology is a rather gross over-simplification... anti-virus software doesn't permit things, it deals with what it knows and what it knows is bad things - when it encounters one of those things it alerts you to the fact so that you can try to protect yourself from it and/or avoid inadvertently doing something dumb... what you do with everything else is entirely up to you, it's not anti-virus software's business to get involved in that... simply put, anti-virus software is a type of blacklist technology...

now, it shouldn't take a rocket scientist to figure out that blacklists and whitelists actually complement each other, especially here... yes, only allowing known good applications to run is more secure than simply filtering out the known bad ones, but how do you determine which ones to allow? predefined blacklists and whitelists provided by people with expertise allow us to shrink the scope of this problem by reducing the number of things we'd need to determine the safety of unaided - but marcus would have us throw out one (or both, considering his reaction to prevx) of those sources of anti-malware intelligence and figure that sort of thing out for ourselves...
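the complementary relationship is easy to sketch - a known-bad list blocks outright, a known-good list allows outright, and only what's left over needs a human decision... here's a minimal python illustration (the sample file contents and lists are made up, not any real product's data):

```python
# blacklist and whitelist working together: each list shrinks the set of
# things the user has to evaluate unaided
import hashlib

def fingerprint(contents: bytes) -> str:
    """identify an executable by its sha256 hash"""
    return hashlib.sha256(contents).hexdigest()

# hypothetical intelligence from the anti-virus (blacklist) and
# execution-control (whitelist) vendors
KNOWN_BAD = {fingerprint(b"malware sample")}
KNOWN_GOOD = {fingerprint(b"trusted app")}

def classify(contents: bytes) -> str:
    digest = fingerprint(contents)
    if digest in KNOWN_BAD:
        return "deny"        # the blacklist catches it
    if digest in KNOWN_GOOD:
        return "allow"       # the whitelist vouches for it
    return "ask-user"        # only this remainder needs unaided judgment
```

with both lists in play the "ask-user" case (the part most people are bad at) only covers things neither source of intelligence has seen...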

maybe i'm wrong, but i think most people use computers to get useful work done, not to waste their time researching whether executable X is good or bad when that information is often already available in downloadable form... most people aren't very good at that anyways (if they were the malware problem would never have become a big problem) and marcus is clearly no exception since he needed software to tell him that windows probably wouldn't operate properly if everything under the windows directory were denied the ability to execute...

when deciding what's ok to run the user needs help, especially with windows... microsoft has put a lot of magic into windows, by which i mean there's a lot going on behind the scenes that they make absolutely no attempt to be transparent about... without expert knowledge it's hard to know which of the hundreds of executables that get included with windows needs to be allowed to run in order for windows to operate properly...

marcus has apparently learned his lesson about the windows directory, but clearly not about the importance of making use of intelligence that has already been gathered... also not about knowing the enemy, and not about knowing oneself - by which i mean he appears to have not considered the weaknesses of application whitelisting technology... there are, and will always be, types of executable content that application whitelists won't be able to recognize (for example, new scripting languages won't be recognized by old whitelist software)... there are, and will probably always be, ways to get software that has already been authorized to do your dirty work for you (for example, internet exploder combined with some of those improperly labeled safe-for-scripting activex controls - or even buffer overflow exploits)... execution control software has a fairly limited notion of how execution can happen, one that doesn't actually reflect reality - it can handle most of the standard ways, but would it have saved you from a WMF or VML exploit? would it have prevented macro viruses from ever becoming a problem? updating a blacklist to catch these things is relatively straightforward since you're just telling it additional things to look for, but updating a whitelist to deal with them involves making additions to how it looks for things being executed...
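the blind spot can be sketched in a few lines of python - execution control authorizes the program being launched, but never sees the content that program goes on to interpret (the app names here are just placeholders):

```python
# execution control only inspects the launch event, not what the launched
# program interprets afterwards
WHITELIST = {"winword.exe", "wscript.exe"}  # hypothetical approved apps

def may_execute(program: str) -> bool:
    """the only check execution control software gets to make"""
    return program in WHITELIST

# the launch the whitelist actually sees:
print(may_execute("winword.exe"))  # True - approved
# ...but the macro inside the document winword.exe opens, or the script
# wscript.exe interprets, never passes through may_execute() at all
```

so an authorized application carrying out instructions smuggled in as data (a macro, a script, an exploit payload) sails right past the whitelist...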

whitelisting is not the great stand-alone solution marcus thinks it is... it's a great complement to an anti-virus application, though, and an excellent technique to apply provided you can deal with figuring out what's safe to authorize...

Thursday, January 04, 2007

stealing your gmail contact list

early tuesday there was a burst of activity on a variety of blogs concerning a vulnerability that allowed an attacker to steal a gmail user's contact list if s/he visited a page with an exploit on it while logged into gmail in another window...

obviously the attacker in this scenario would be someone looking for potential new victims to send spam or phishing emails or some other kind of malicious email to... leaking your own address or the addresses of your contacts to the bad guys is of course not something you want to allow to happen so gmail users were probably quite concerned about this for a while - at least until it was fixed, and it was fixed relatively quickly...

in the meantime people were advised to log out of gmail when not using it, the idea being that if you aren't logged in while you're browsing the rest of the web then if you happen to visit an exploit page it won't be able to do anything... that's all well and good if gmail is the only google service one happens to use but what about all those people who use multiple google services? the way the google account logon works, when you log into one of them you log into all of them... can you imagine trying to make this advice work with google reader? especially with the broken, er, i mean partial feeds that basically require you to visit foreign pages while you're still logged into google... then there are all those blogs that require you to log in before leaving comments - for blogger.com blogs that means logging into google... or perhaps you use the google notebook firefox extension that not only keeps you logged in while you browse, it literally doesn't have an option for logging out, you have to go to the full notebook web page to do that... google's efforts towards single sign-on make being logged into gmail a fairly ubiquitous state to be in and so give the attack a much broader range of potential avenues of success...

another, less friendly suggestion was to not store your contact information in gmail so that there would be nothing to steal (this was usually expressed in the form of 'what were you doing putting that information in there in the first place? you should know better')... of course some of us happen to have a lot of contacts and an email application (whether a client app or a web app) that doesn't have an address book is not all that usable... never mind the fact that google talk/chat/{whatever they call their IM service} requires you to store your IM contacts in that address book (maybe those same people would be telling you you shouldn't be using IM?)...

whatever, google fixed the problem so we don't have to worry about it anymore, right?... WRONG!... this is not the first time a google vulnerability has exposed the gmail contact list and it probably won't be the last... what happens the next time? what happens while the vulnerability is only known to the bad guys (when you won't even know you need to be careful)? logging out isn't all that feasible and clearing out your contact list makes gmail harder to use and breaks google's IM...

some will argue that erecting barriers around your contact info is something google should be doing with more secure coding practices and it's certainly fair to point out that improvements can be made (and maybe even are being made) but there is a fundamental barrier that we might want to consider... google makes one's contact list available from within its other services to make various sharing and collaboration options easier for people to use... if you don't use or rarely use that functionality, wouldn't it be nice to be able to turn it off so that your contact list wouldn't be accessible through a public API at all?... google doesn't offer a way to opt-out of API access to your contact list, unfortunately, but there is a way for end users to get the same basic capability without google's direct assistance - use a secondary google account (perhaps one that isn't connected with a gmail account at all) for as many of the other google services one uses as possible... recall that the 'just log out of gmail' advice is potentially unworkable precisely because using most of google's other services requires you to log right back in again, but if you're logging into a different account for those other services then your gmail contacts won't be accessible through the API and so shouldn't be capable of being leaked the next time a contact list exposing vulnerability comes along...

Monday, January 01, 2007

happy new year

i generally try to keep the administrivia posts to a minimum here, i don't think they're generally all that interesting, but in light of the new year and some reconsidering i've been doing i've decided to do something i previously said i wouldn't do...

i'm enabling comments on this blog (moderated, CAPTCHA'd comments, but comments all the same)... i had to manually enable them on a number of posts (due to some setting changes i made a while ago) so the feed is probably going to show some old posts - oops, sorry about that, i'm not sure why blogger re-inserts things into the feed when i don't make changes to the content of the post but oh well...

i still think usenet is a superior medium for intelligent discourse, but it's clear to me that people want to be able to leave a comment right there on the page and as i use other people's blogs i'm starting to see why...

enjoy the new feedback mechanism...