Friday, September 28, 2007

i know there's no panacea but i still want one, darnit!

have you read this post by lonervamp about the silver bullet syndrome? forgetting his specific example of jericho forums for the moment, i agree whole-heartedly with his observation that although we all seem to agree that there is no panacea, people still seem to talk and act as though they expect there to be one...

need an example besides the jericho one? well anton chuvakin has been kind enough to provide us with one this evening that involves anti-virus... he's abandoned known-malware scanning completely after a friend of his had to rebuild their system despite being 'protected' by a major brand av....

now i'm not going to debate the merits of the decision itself, if he wants to use a whitelist instead of a blacklist, that's his decision and it's certainly a workable one... the problem is the motivation - getting fed up because of an instance of av failing (or even many instances) points to (as mike rothman would put it) mismatched expectations... if you know and agree that there is no panacea then you shouldn't be overly bothered by instances of failure, you should be expecting failure...

so what's going on here? i think that although we more or less all agree that there is no panacea, people don't seem to appreciate what that really means... i suspect that the use of the term panacea itself may be obscuring the real implications so i'm going to put it in simple terms that everyone, expert and novice alike, can understand:
all preventative measures fail
that's right, each and every single last one of them... that is what it means to truly accept that there is no panacea, you have to accept that there will be failures... getting fed up with the failures is an emotional rather than rational reaction, and if you base your decisions on it you are likely to be disappointed in the future when it turns out that the next big thing fails too...

people don't like failure, however, and they certainly don't want to accept it... this is a shame because if you're going to develop a successful security strategy you have to not only accept failure, you have to anticipate it... anticipating failure is really a cornerstone of strategic thinking, without it there would be no impetus to devise contingency plans, and without those a strategy is nothing more than a basic plan and a lot of poorly founded hope... in short you need to learn to succeed by planning for failure rather than running blindly from it...

Wednesday, September 26, 2007

how to partition your google identity

with all the reports lately of vulnerabilities in google i suppose it's time again for me to talk about how you can mitigate the threat these vulnerabilities pose to gmail users...

i last wrote about this subject at the beginning of the year when a similar vulnerability was in the news, and i've also made my feelings on single sign-on and federated identity (the direction identity management seems to be going these days) pretty clear... these google vulnerabilities and those that came before illustrate the problem - the 'one account to rule them all' approach creates a hugely valuable (to attackers) online identity and single sign-on integration between web applications (like that which google or microsoft or any number of other players provide) makes it that much harder to mitigate vulnerabilities by following advice like "log out of gmail"...

so what if you're like me? what if you use more google apps than just gmail? what if you use blogger for example, or google reader, or google notebook, or google groups, etc... if you're like most people you use the same google account for all of them - your gmail account... it's convenient, you only need to remember one username and password, and when you visit an exploit page while still logged in to one of these other google web applications your gmail account gets pwned because logging into one logs into all...

now, of course you could always hope google fixes these problems before you get caught, or use tools like the noscript firefox extension that should be able to help most of the time, but you might not realize (as some security folks hadn't) that you can also use a non-gmail google account for those web applications... then, not only is it easier to stay logged out of gmail while using the other web applications, logging into the account used for those other applications will actually force you to log out of your gmail account...

it's really quite simple:
  1. just head over to the google accounts page and create a new account using whatever non-gmail email address you want and presto - you have a non-gmail google account...
  2. you probably already have data in those other google web applications, but that's not a problem because many of them have ways of sharing that data with other users (ex. google reader exports opml files that can be imported to a different google reader account, google docs and spreadsheets can simply be shared with the other account, blogger lets you add a different account as the blog administrator, etc)... those sharing facilities can make it easy to migrate that data from your gmail account to your non-gmail google account...
  3. then all you have to worry about is remembering another username and password, or do you? i don't, i just use passwordsafe, then i only have to remember the master password and it works across all websites - even entering the username and password for me with the press of a key (see the sketch below for the general idea)... in fact, password managers like passwordsafe work outside of the web too, for virtually any windows application that takes a username and password...
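
for the curious, here's a rough sketch of the general idea behind tools like passwordsafe (this is not passwordsafe's actual design - the file name and the use of python's 'cryptography' package are just stand-ins i picked for illustration) - a single master password gets stretched into a key that encrypts all your per-site credentials, so one secret in your head protects many on disk...

```python
# minimal sketch of the general idea only - not passwordsafe's actual design:
# one master password -> derived key -> encrypted store of per-site credentials
import base64, json, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(master_password: bytes, salt: bytes) -> bytes:
    # stretch the master password into a 32-byte key suitable for fernet
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=200_000)
    return base64.urlsafe_b64encode(kdf.derive(master_password))

def save_vault(entries: dict, master_password: bytes, path: str = "vault.bin") -> None:
    salt = os.urandom(16)
    token = Fernet(derive_key(master_password, salt)).encrypt(json.dumps(entries).encode())
    with open(path, "wb") as f:
        f.write(salt + token)   # salt stored alongside the ciphertext

def load_vault(master_password: bytes, path: str = "vault.bin") -> dict:
    with open(path, "rb") as f:
        blob = f.read()
    salt, token = blob[:16], blob[16:]
    return json.loads(Fernet(derive_key(master_password, salt)).decrypt(token))

# usage: one master password protects many per-site credentials ("vault.bin" is made up)
save_vault({"gmail": ["me@gmail.com", "s3cret"], "blogger": ["me-alt", "0ther"]}, b"master-pass")
print(load_vault(b"master-pass")["blogger"])
```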

now you may have noticed i only described separating your gmail identity from your other google web application accounts - this is because right now gmail seems to be by far the most interesting target for attack (everyone seems to want your contact list or your emails)... you could just as easily have a different google account for each and every web application you use without having to remember anything extra if you feel your data or identity in the other applications warrant similar protection through compartmentalization...

Saturday, September 22, 2007

look who's talking about whitelists now

thanks to an email from james manning i got to see a company called signacert congratulating themselves for being part of the future of security technology...

you see, signacert produces what could be classified as a whitelist type of technology and symantec's canadian VP and GM, michael murphy, was quoted in the media as saying that whitelisting is the future of security technology...

now before i go further i'll make an obligatory disclaimer (because people have gotten the wrong idea in the past) that i am not anti-whitelist, i use whitelisting techniques, i think they can be a worthy addition to a security strategy, but unlike the hypesters i don't sweep their limitations under the carpet...

when a representative from a company as well known and respected as symantec says the type of technology your company happens to produce is the future of security, i suppose it's only natural to want to congratulate yourselves for being on the vanguard - but don't be so quick to pat yourselves on the back that you end up highlighting the words of someone saying foolish things, as you'll only wind up looking foolish yourselves...

you see, michael murphy made a grievous error in his representation of scale leading to media statements like this:
The number of malicious software attacks, including viruses, Trojans, worms and spam, is rising exponentially, dwarfing the number of new benevolent programs being developed, making it increasingly difficult for security firms to keep up.
and this:
With more than 600,000 attacks catalogued – 212,000 of them added since January of this year – “we’re approaching a tipping point,” where there just won’t be room in antivirus databases for all of them, Murphy said. But legitimate applications are about the same in number as they were when only about 15,000 attacks had been documented.
that the signacert blogger wyatt compounded by characterizing the blacklist problem as infinite and the whitelist problem as finite... you cannot favourably compare the scale of the set of all known good programs to that of the set of all known bad programs unless your only intention is to say 'my database is bigger'... the set of good programs is orders of magnitude larger and growing faster than the set of bad programs, a fact that researchers from at least one whitelist vendor apparently concede...

if you're going to use the argument of scale against traditional blacklists then you cannot present a centralized whitelist as a viable alternative... the only conventional whitelist whose scale is more manageable than traditional blacklists is the one where the user him/herself decides what goes on the list (with all the potential for wrong decisions and the security implications thereof)... with whitelists you can have a manageable scale OR accuracy enough to protect users from their own bad decisions, but you can't have both...

this is something i would kinda hope the folks at signacert would already know, and i definitely expected the folks at symantec would know this - but then again, considering their CEO made the ridiculous claim that the problem of worms and viruses was solved, perhaps i should have known better than to expect that from them... that, or i should just know better than to listen to people in non-technical positions talking about technical things...

Saturday, September 15, 2007

the rumours of av's demise are greatly exaggerated

are you as sick of hearing about how 'anti-virus is dead' as i am? wow, what a worn out, tired, washed up meme... maybe now that amrit thinks stand alone av has actually finally died there will be a little less 'av is dead' noise coming from his corner...

this isn't a freak out, though, just a reminder that what i said last year is still true - so long as people still want best-of-breed there will still be a market for stand alone av...

just because gartner changed the way they model the playing field (a necessity given the evolution, not death, of av), and just because vendors are gradually making the components of their security suites play nicer together (imagine that, they're actually managing to improve their products), doesn't mean stand alone av is going anywhere - and thinking it does probably qualifies as some sort of confirmation bias... personally, i'm going to wait for the fat lady to sing before i say it's dead... so long as i can still get an up to date and supported stand alone av app, stand alone av ain't dead...

Thursday, September 13, 2007

anti-virus as a commodity

i was reading the daily incite yesterday (as i tend to do) and i noticed one of the items was about anti-virus... it had an element that was pretty usual fare from mike rothman in that he talked about how this or that just reinforces his point that anti-virus has become a commodity - and i don't necessarily disagree with him, in fact i think i've said things that were more or less in line with that in the past...

however, as i was reading this particular instance i realized that there was a fundamental assumption to the idea that anti-virus is a commodity - the assumption that when it comes to choosing an anti-virus all malware is created more or less equal - and i began to wonder if that was really a justified assumption to make...

this may seem like nothing more than playing devil's advocate but humour me... let's look at what i think is a fairly typical thought pattern for calling av a commodity (from mike's post):
The one thing I come away with is that all the products are decent, thus I'm going to state the obvious. AV (and other malware defense) suites are true commodities. All stop viruses and other malware attacks.
so my question is what if we stopped treating the threats anti-virus deals with as one big amorphous mass and instead looked at the various subsets of malware - more specifically, drew a distinction between what is new and what is not... would av look like a commodity then? is the commoditization perspective born of an oversimplification of the problem? if we started paying specific attention to performance with new malware, wouldn't that provide a basis for vendors to differentiate themselves from the competition in a truly meaningful way? the retrospective testing at av-comparatives.org seems to show some significant variation in performance between the different products available, so it certainly seems that if you drill down into the problem space that anti-malware products are supposed to address, things can look a lot different than they do from a bird's eye view...

this isn't the only example of things looking different when you start considering the details... a few days earlier marcin wielgoszewski posted this question about best of breed vs bundles... i have to admit if i were confronted with this question framed the way it was there i might actually go with bundles, but that really says more about the power of framing than it does about the efficacy of bundles... this is actually something that builds on the 'product class X is a commodity' result from before because it only considers the presence of various broad classes of security technologies in the bundles and not the specific underlying properties of each implementation... if you were to again dig deeper into what the capabilities of the products are and evaluate what kind of coverage you get against the types of threat agents you're trying to defend against, you're going to wind up not only with a much more granular picture but one that could easily lead to a different bundle selection... in fact, i feel rather confident that if you dug deep enough you might even see a picture where no bundle gave you satisfactory coverage... of course then you'd have to decide whether or not that level of granularity is worth it but that's another analysis entirely...

not that any of this is to say that anti-virus is not a commodity, i still think there's a level of abstraction or frame of reference where that's a perfectly valid thing to say - but it's not the only frame of reference... the more general you go the more true it becomes, but at the same time the more details get glossed over (and the devil is in the details)... i think it's important sometimes to question the assumptions that bind us to a particular frame of reference so as to remind ourselves that there are others out there that may be equally good or possibly even better depending on the circumstances...

Sunday, September 09, 2007

spyware terminator forum compromised

are you like me folks? does hearing about security site after security site being compromised make you more and more numb to the whole thing? i know i'm starting to feel desensitized...

isn't it weird that in trying to raise awareness for something important you can actually wind up doing the opposite in the long run...

anyways, luke tan pointed me towards these two threads about the spyware terminator forum being compromised (the second one is on the spyware terminator forum, by the way)...

now, maybe it's just me, but it seems to me that if you're going to run a security forum you might want to follow some basic security best practices and make sure you keep your software up to date!... i mean, come on, barring incidents like this, not following security best practices when you're supposed to know better teaches those who don't know better bad security habits...

then again, when incidents like this happen you serve as an object lesson to your users for what NOT to do... unfortunately it's an object lesson that has the potential to put those very same users in harm's way - do you think they visited the forum with the same precautions in place that they'd use when visiting a suspect site in order to analyze it? probably not...

that is, perhaps, another lesson users could learn from this sort of incident... although you might trust a given site's administrators not to do anything malicious with their site, you should never trust them not to make mistakes that would allow 3rd parties to do malicious things with their site, nor should you trust that the software the site runs on won't allow the same thing regardless of mistakes made or not made by administrators... make sure you have some kind of protection when visiting any site... this is probably one of the better arguments for always browsing from within a sandbox, whether a full virtual machine like the vmware browser appliance or an application sandbox product like sandboxie, so that possible malware intrusions as a result of visiting a supposedly safe site can be contained... there really isn't anyplace on the internet that is perfectly safe, you need some kind of protection in place at all times, for all sites...

and if you're a security site (or any other kind of site, actually) administrator that hasn't been hit yet, don't be the next one to get caught... please, think of the users... also, i might run out of ways to use it as an object lesson... maybe...

Tuesday, September 04, 2007

file infecting viruses vs digital signatures

vesselin's comments to my previous article inspired me to consider actual attack scenarios rather than just weaknesses in a proposed system so i've turned the title of this entry around to indicate more focus on the threats rather than the vulnerabilities...

in one of the responses to the comments i posited a scenario where a malicious entity compromises a legitimate software vendor and infects their software in such a way that they distribute infected programs with valid digital signatures and which can in turn sign executables they infect... today nishad herath posted a similar scenario over at the mcafee avert blog so at least i'm not the only one who thought of this possibility...

but you know what, that's actually a rather complicated scenario so i got to thinking maybe there's a simpler one... then it dawned on me: the premise for this protective technique is to manage the integrity of files and the assumption is that you can't infect files without changing them and thus affecting their integrity... it turns out that this assumption is wrong - certain types of companion viruses can infect a host program without modifying it at all... so if a malicious entity were to get a certificate they could easily sign their companion virus with it, and so long as no one figured out the entity was signing malicious code their certificate would never be revoked and the virus could spread unhindered... and in joanna's world where digital signatures replace all the tricks that are used to detect the presence of viruses there would be nothing to alert the general public that the code was malicious and the certificate should be revoked...
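
to make that concrete, here's a rough sketch (the entity names, keys, and 'loader' logic are all made up for illustration, using python's 'cryptography' package) of the decision a pure digital-signature whitelist makes before letting something run - the only question asked is whether the signature verifies against a trusted, not-yet-revoked key, so a companion virus signed with a validly obtained certificate sails right through...

```python
# sketch of a pure signature-gated loader (all names here are made up for illustration):
# the only question it asks is "does this verify against a trusted, not-yet-revoked key?"
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

revoked = set()   # revocation only helps *after* someone proves the signer is malicious

def allowed_to_run(code: bytes, signature: bytes, signer: str, trusted_keys: dict) -> bool:
    if signer in revoked or signer not in trusted_keys:
        return False
    try:
        trusted_keys[signer].verify(signature, code)
        return True    # a valid signature means "run it" - nothing asks whether the code is safe
    except InvalidSignature:
        return False

# a legitimate vendor, and a malicious entity that simply obtained its own certificate
vendor_key, mal_key = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
trusted = {"acme software": vendor_key.public_key(), "totally-legit co": mal_key.public_key()}

companion = b"pretend this is a companion virus shipped as a separately signed file"
print(allowed_to_run(companion, mal_key.sign(companion), "totally-legit co", trusted))  # True - it runs
```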

ouch, that seems pretty damning, doesn't it... but you know what, companion viruses were always pretty obscure... it wasn't a particularly popular strategy in part because the extra files gave it away to those who were looking for extra files (oh, another one of those nasty virus detection tricks joanna thinks we can do without)... ok, so then how about a virus that inserts the host program into a copy of itself, rather than itself into the host program (the so-called amoeba infection technique) and then signs this new copy with the aforementioned maliciously obtained certificate? once again, a digital signature based whitelist isn't going to stop this from happening...

now that takes care of one of the aspects that kept companion viruses obscure but it doesn't really improve on the obscurity itself, as this technique is even more obscure than companion infection... if you've followed me to this stage perhaps something else has dawned on you - if a virus can sign a copy of itself with a host program inside of it, why shouldn't it be able to sign a host program that it had inserted a copy of itself into using a completely conventional infection technique and the malicious entity's certificate? the answer is there is no reason it can't...

so it would seem that a digital signature based whitelist where vendors sign their own programs (effectively vouching for the safety of their own code) wouldn't really prevent file infecting viruses at all if that were the only thing the world were using... you still need all sorts of tricks to figure out when a vendor's certificate can't be trusted anymore, which points back to a fundamental problem with this kind of self-signing system - it correlates identity (which is the only thing a certificate authority can test) with trustworthiness in spite of the fact that they don't actually have anything to do with each other... just because the vendor's front man is who he says he is and hasn't done anything bad in the past (that anyone knows of) doesn't mean the vendor itself isn't a malicious entity... currently, standard certificates (like the ones used for websites) have become so easy for anyone to get that they've become meaningless, and this led to the creation of extended validation certificates, which simply involve a more in-depth investigation of the entity - an investigation that in turn has no bearing on what the entity will do after getting the certificate... i can see no way for a digital signature system for code to work any differently than the one used for websites so the same problem will apply; and then even if we somehow figure out that the entity cannot be trusted, their virus(es) will continue to spread until a revocation is issued for their certificate and that information trickles down to all the affected systems...

trusting the vendor (or whatever else you want to call the software provider) to attest to the trustworthiness of their own software just seems far too naive from a security standpoint, which is why i originally didn't even consider it to be the model joanna had been talking about... a system where independent reviewers checked programs for malicious code before signing them (essentially certifying programs rather than program providers) seemed to be a safer solution, though it's got the same scaling problems that conventional centrally managed whitelists have... a system that certifies programs rather than program providers would be less vulnerable to the scenarios mentioned here (i think only a variation on the first one should be able to allow viruses to still spread) but either way, both options still allow viruses to operate if used on their own... at best (and by now this should sound familiar) digital signature based whitelists should be something we use with the more conventional tricks we're used to, not instead of them as joanna rutkowska would like you to believe...

Sunday, September 02, 2007

digital signatures vs file infecting viruses

so joanna rutkowska actually talks about things other than so-called rootkits... this time (i won't link to the article for known reasons) it's file infecting viruses...

from the article:
But could the industry have solved the problem of file infectors in an elegant, definite way? The answer is yes and we all know the solution – digital signatures for executable files. Right now, most of the executables (but unfortunately still not all) on the laptop I’m writing this text on are digitally signed. This includes programs from Microsoft, Adobe, Mozilla and even some open source ones like e.g. True Crypt.

With digital signatures we can "detect" any kind of executable modifications, starting form the simplest and ending with those most complex, metamorphic EPO infectors as presented e.g. by Z0mbie. All we need to do (or more precisely the OS needs to do) is to verify the signature of an executable before executing it.

I hear all the counter arguments: that many programs out there are still not digitally signed, that users are too stupid to decide which certificates to trust, that sometimes the bad guys might be able to obtain a legitimate certificate, etc...

But all those minor problems can be solved and probably will eventually be solved in the coming years. Moreover, solving all those problems will probably cost much less then all the research on file infectors cost over the last 20 year. But that also means no money for the A/V vendors.
first things first - this is essentially a whitelist technique (with the added bonus that the cryptographic component allows the proof of whitelist membership to be shipped with the file instead of requiring a lookup in a very big list) with all the associated fundamental problems... think the problem of signing all good programs is small and will probably be solved? maybe for suitably large values of small... if you're going to focus on identifying good files instead of bad ones you have to keep in mind that the good files outnumber the bad by orders of magnitude and grow at an even faster rate... conceptually signing all good programs is simple, but in practice it's very, very hard...
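
just to spell out that parenthetical, here's a rough sketch (hypothetical data throughout, with python's 'cryptography' package used purely for illustration) of the difference between the two approaches - a conventional whitelist needs a lookup in a database of every known-good program, while the signature scheme ships the proof of membership with the file and only needs the signer's public key...

```python
# contrast sketch (hypothetical data): conventional hash whitelist vs signature-based whitelist
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# conventional whitelist: membership is a lookup in a database of every known-good program
known_good_hashes = {"<sha256 of every good program ever published>"}   # enormous, and always behind

def allowed_by_lookup(code: bytes) -> bool:
    return hashlib.sha256(code).hexdigest() in known_good_hashes

# signature-based whitelist: the proof of membership ships with the file itself,
# so the checker only needs the signer's public key rather than a list of all good programs
signer = Ed25519PrivateKey.generate()          # stand-in for a vendor's signing key
signer_public_key = signer.public_key()

def allowed_by_signature(code: bytes, signature: bytes) -> bool:
    try:
        signer_public_key.verify(signature, code)
        return True      # proves who signed it, not that it's good
    except InvalidSignature:
        return False

program = b"some vendor's program"
print(allowed_by_lookup(program))                            # False unless its hash made it into the big list
print(allowed_by_signature(program, signer.sign(program)))   # True, with no list lookup at all
```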

but let's assume we do solve that problem... so if the file isn't signed then it doesn't run and if the file's signature is invalid then it doesn't run... the presence of a valid signature is assumed to mean that the file is a) not bad and b) hasn't had anything bad put into it after signing, but is that a valid assumption? given that mobile spyware can get digitally signed by symbian, i think not, at least not for the first part of the assumption... currently digital signatures like the ones joanna holds up as examples are meant to prove authenticity, not safety... putting the onus on the signatories to determine whether the code they're signing is safe doesn't solve any malware problem, it just offloads it onto the signatory... this is also not a small problem: distinguishing good from bad is and always has been the problem and offloading it onto someone else doesn't make it any easier to solve...

the second part of the assumption, that a verified signature implies nothing bad has been put into the file, may well be true, assuming that the verification system itself hasn't been compromised... the digital signature proves authenticity and one of the prerequisites for authenticity is integrity, and that's really the underlying ingredient here - managing system integrity... any application whitelist worth its salt already keeps track of the integrity of the executables on the whitelist, otherwise it would be trivial to fool it by simply replacing a whitelisted application with a piece of malware with the same filename... but as yisrael radai showed in his paper "integrity checking for anti-viral purposes: theory and practice" (sorry, no suitable non-vx links at this time), systems that detect changes to the integrity of files are subject to attack and one based on digital signatures is no different... the signing key could be stolen (there's been malware designed to steal cryptographic keys in the past) and then generating valid signatures for infected files would be trivial, the key used to verify the signatures could be altered (either on disk or dynamically in memory) by a malicious process that has already been signed, or if the system allows adding new keys then one could be added maliciously that would allow the files to be modified (infected) and then re-signed with the new key to trick the system into thinking the file's integrity is intact... in fact, taking a cue from the developers of stealth technology under windows, one could simply change the result returned by the signature verification function... in order to be immune from attack, an integrity checking system has to be offline and out of reach of the attacker, and that's not compatible with a system that checks integrity in real-time to prevent modified files from running...
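
to illustrate that last trick (this is a deliberately simplified sketch, not a real attack, and the checker class is entirely made up), when the verification happens inside an environment the attacker can already modify, the verification routine is just another function whose result can be replaced...

```python
# deliberately simplified sketch: when the check runs inside an environment the attacker
# can already write to, the verification routine is just another thing to tamper with
class SignatureChecker:
    def verify(self, code: bytes, signature: bytes) -> bool:
        # imagine real signature verification here
        return False          # the infected file genuinely fails verification

checker = SignatureChecker()
infected = b"host program with a virus inserted into it"

print(checker.verify(infected, b"old signature"))    # False - the modification is caught

# a malicious process with access to the checker simply replaces the result
checker.verify = lambda code, signature: True        # analogous to hooking/patching the verify function

print(checker.verify(infected, b"old signature"))    # True - the "check" now says whatever the attacker wants
```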

of course there are other problems too... it's not just deciding what to sign, ensuring the signatures are themselves trustworthy, and finding the resources to sign every good program in existence... there's also the classic whitelist problem of deciding what to do in an environment where programs are being created (or even what's a program in the first place)... are we going to digitally sign word documents? yes? ok, and will that stop macro virus infection? no of course not... there are plenty of macro viruses that infect a document when the document is saved - a point at which a new digital signature would have to be created anyways... then when the person we send the document to opens it the virus runs and then proceeds to infect documents that person creates or modifies (and then signs) and so on and so on...

again, from the article:
So, do I want to say that all those years of A/V research on detecting file infections was a waste time? I’m afraid that is exactly what I want to say here. This is an example of how the security industry took a wrong path, the path that never could lead to an effective and elegant solution. This is an example of how people decided to employ tricks, instead looking for generic, simple and robust solutions.
unfortunately a digital signature based whitelist is no elegant solution either... whitelisting for anti-viral purposes dates back at least as far as thunderbyte anti-virus, and there have always been ways to manually check the integrity of transferred files to make sure they haven't been altered from what the original vendor was distributing by using crc's, hashes, and even digital signatures... a digital signature based whitelist makes certain aspects of usage a little more convenient, but it doesn't mitigate the inherent problems of a whitelist...
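
for anyone who's never done it, the manual version of that check is about as simple as it gets - a rough sketch follows (the file name and published hash are hypothetical, and i'm using sha256 as a modern stand-in for the crc's and hashes of the thunderbyte era)...

```python
# sketch of the old-fashioned manual integrity check: hash the file you received and
# compare it to the value the vendor published (file name and hash are hypothetical)
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

published = "<hash copied from the vendor's web page>"   # hypothetical published value
if sha256_of("downloaded-installer.exe") == published:
    print("matches what the vendor published")
else:
    print("altered in transit, corrupted, or simply a different build")
```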

joanna may have wanted to use this to demonstrate the way security solutions ought to be in an ideal world, but the world is not ideal, and the virus problem as well as the many varied ways of addressing it are not as simple as she portrays them... thus her example of security gone wrong has no legs... in the real world there is a counter-measure for every protective measure, and elegance (subjective as it is) cannot be the basis upon which the measures we take are judged...