Wednesday, October 22, 2008

adobe clickjacking patch is a red herring

by now i'm sure just about everyone with even the slightest interest in security has heard about clickjacking, and most have probably even heard that adobe issued a patch that addresses clickjacking...

the problem is that clickjacking isn't exclusively a flash problem, it's a browser problem that simply gains some extra capabilities when flash is present...

specifically, without the flash patch a clickjacking attack could interact with a user's microphone and/or webcam if either is present, allowing the attacker to spy on the victim...

that's pretty scary from an emotional point of view but not very interesting from a rational point of view... the majority (though not all) of online attacks these days are financially motivated and spying on individuals in the analog world doesn't easily lend itself to traditional models of cybercrime monetization where the victims' information is stolen en masse or their hardware is used to attack others... you might be able to steal information with a webcam or microphone, maybe, but that's something that definitely does not scale so you'd need to either target someone you expect to be able to get a lot of money out of or you won't make enough for it to be worth the trouble or risk...

what an attacker might be able to do is set up some sort of peep show website where the money comes from people paying him/her for access to feeds from compromised machines, but then the attacker would need to publicize his/her service and run an increased risk of capture...

what this ignores, however, is that clickjacking is not just about spying on people (or the other flash-specific things that fall under the clickjacking umbrella), that's just something you can do when flash isn't patched... clickjacking itself is still possible even after flash has been patched and all the attention given to adobe's flash patch may well cloud the issue that there is still a very troubling set of problems with virtually all browsers and, other than using firefox with noscript, very little ordinary people can do about it at the moment that doesn't break the internet for them... so while it is technically true that adobe did release a patch that addresses clickjacking, it only addresses those aspects of clickjacking that specifically affect flash... the rest of the set of attacks collectively known as clickjacking remain a problem for web users, site owners, and browser vendors alike...
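for anyone who hasn't seen the mechanics spelled out, the core trick is nothing more than a decoy page layering an invisible frame of the target site over some bait the victim actually wants to click - something like this toy sketch (the target url, sizes, and pixel offsets are made-up placeholders, not a real attack)...

```python
# toy illustration of the basic clickjacking overlay - the html is what
# matters, python is just used to write it out... the target url and all
# the offsets/sizes below are hypothetical placeholders
DECOY_PAGE = """<html><body>
  <button style="position:absolute; top:100px; left:120px;">
    click here for free stuff!
  </button>
  <iframe src="https://target.example/change-settings"
          style="position:absolute; top:60px; left:40px;
                 width:500px; height:300px;
                 opacity:0; z-index:2;">
  </iframe>
</body></html>"""

def write_decoy(path="decoy.html"):
    # drop the decoy page to disk so any web server can serve it
    with open(path, "w") as f:
        f.write(DECOY_PAGE)

if __name__ == "__main__":
    write_decoy()
```

notice there's no flash anywhere in there - which is exactly why patching flash only trims the edges of the problem...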

Thursday, October 16, 2008

countering malware quality assurance

just a quick post to point out something i just realized - maybe it's obvious to others, maybe not...

i was reading dancho danchev's umpteenth post on malware q/a when it struck me that the recent trend by vendors to put the scanning engine in the cloud effectively kills malware q/a... i suggested before that randomizing heuristic parameters might combat it, but that's probabilistic and comes at the cost of false positives... cloud-based scanning on the other hand ensures that the scanner implementing this new architecture cannot be used effectively (if at all) in traditional malware q/a because the samples will either be given to a server that the av vendor controls (thus destroying the samples' value to an attacker), or if the malware tester manages to sever the ties with the av server then the testing will give an incomplete and misleading result regarding the detectability of the malware in question...
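to make that concrete, a cloud-assisted scanner looks roughly like this toy sketch (the names and the protocol are made up for illustration, not any vendor's actual design) - either the lookup goes through and the vendor gets a lead on the new sample, or it doesn't and the tester is left with a result that means very little...

```python
import hashlib

# hypothetical names throughout - just a sketch of the architecture,
# not any particular vendor's protocol
LOCAL_SIGNATURES = {"hash-of-some-old-known-malware"}  # thin on-disk list

def query_cloud(sample_hash):
    """send the hash to the vendor's server and return its verdict -
    the very act of asking hands the vendor a lead on the sample"""
    raise ConnectionError("simulated: the q/a tester cut the network link")

def scan(path):
    with open(path, "rb") as f:
        sample_hash = hashlib.sha256(f.read()).hexdigest()

    if sample_hash in LOCAL_SIGNATURES:   # the small part that lives locally
        return "malicious (local signature)"
    try:
        return query_cloud(sample_hash)   # the real brains live server-side
    except ConnectionError:
        # cut off from the cloud the verdict is incomplete, so "nothing
        # found" here tells the malware q/a tester next to nothing
        return "unknown (cloud unreachable - result incomplete)"
```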

each new scanner that goes this route is another scanner removed from the pool of scanners that malware q/a testers can use and with symantec, mcafee, trend, and panda (and perhaps more that i can't think of at the moment) having already gone this route that's a significant portion of the av user-base which will soon no longer be at the mercy of malware q/a...

i have no idea if this was intended or serendipitous, but either way it's still a good thing - and once again it proves the point that for every measure there exists a countermeasure...

Tuesday, October 14, 2008

is secunia the new consumer reports?

(well, looks like i'm going to add to the noise about the secunia test... it's already been discussed on the security fix blog, eset's threatblog, the register, the sunbelt software blog, the panda security blog, and the zero day blog)

so secunia did a test with exploits they developed in the lab and found that av products sucked...

well gee, doesn't that sound an awful lot like the consumer reports test? if you don't make the distinction that exploits are a special case of malware then there would really be no difference between this and that terrible consumer reports test where they paid to have 5000 new pieces of malware created...

but exploit code is a special case, we need to create benign exploits, we need to be able to use them in order to determine whether our systems are vulnerable, whether the patches that supposedly fix the vulnerability have been applied properly, whether they truly fix the vulnerability, etc...

so then this test was alright then, right? nope, not by a long shot... first and foremost is the idea that anti-virus/anti-malware products should detect these lab-grown exploits in the first place... the issue is not so much that av is only in the business of detecting malicious software, it's that there are very good reasons why av can't and shouldn't be detecting benign exploits... as i just got finished saying, we need those exploits, we need to be able to use them, but how are you supposed to do that if your anti-virus is blocking access to them? it's one thing to use a benign exploit to test the vulnerable surface area of your systems, it's another thing altogether to turn off your security software to do so... there are a variety of technical, logistical, and legal reasons why anti-malware must be constrained to detecting only those things with a proven malicious pedigree, and if people don't like that it's just too bad - get over it, those reasons aren't going away just because they don't mesh with your ideology... either exploits are legitimate and necessary, in which case anti-malware apps shouldn't be alarming on them because it interferes with the proper use of exploits, or they aren't, in which case secunia acted in bad faith by creating new malware - secunia can't have their cake and eat it too...

the next problem was this notion of detecting exploitation... read that carefully - "detecting exploitation"... is exploitation a thing? no, it's a behaviour, and despite certain claims from various companies about dynamic behaviour-based heuristics, known-malware scanners (and by all indications that's the only part of the security suites secunia actually tested, which raises the question of why they bothered with the suites at all - incompetence maybe?) are built to detect bad actors, not bad actions... that's not to say anti-malware companies don't have offerings to detect and even block bad or unauthorized behaviour - they do have HIPS offerings - but it's fundamentally different technology from what people are accustomed to with anti-malware and it's not always simple to set up/maintain properly, so they don't necessarily bundle it with their anti-malware products or even in their internet security suites...
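to put the actor/action distinction in more concrete terms, here's a deliberately over-simplified sketch (the fingerprints and the behaviour rule are invented for illustration) of what a known-malware scanner is even looking at versus what a HIPS-style monitor is looking at...

```python
# over-simplified, with made-up data - the point is only the difference
# in what each check examines

KNOWN_BAD_FINGERPRINTS = {"deadbeefcafe": "Win32/ExampleBot.A"}  # hypothetical

def known_malware_check(file_fingerprint):
    """bad-actor check: is this *thing* something already judged to be bad?
    a never-before-seen lab-grown exploit simply isn't in this model"""
    return KNOWN_BAD_FINGERPRINTS.get(file_fingerprint)  # None if unknown

def hips_check(event):
    """bad-action check: is this *behaviour* something worth blocking?
    e.g. a document viewer suddenly spawning a command shell"""
    return (event.get("parent") == "pdf_viewer.exe"
            and event.get("action") == "spawn_process"
            and event.get("target", "").endswith("cmd.exe"))
```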

speaking of the distinction between actors and actions, that confusion seemed to be rooted in the use of the term "threat"... i have in the past remarked that "threat" is a bit of an ambiguous term where all kinds of things with "threat" in their name get called simply threats... in this case in particular, anti-malware apps use the term "threat" as a short form of "threat agent" (which is actually one of the more common things that "threat" is used to represent)... exploitation isn't an agent by any stretch of the imagination but because everything gets called simply a "threat" those who don't really understand what's going on (which surprisingly seems to include the folks at secunia) will treat all usages of the term the same and not realize that anti-malware scanners are only designed to catch some of the things that get called "threat"...

of course, a post on this site wouldn't be complete without pointing out the conflict of interest that is also present in this test... secunia's business is about vulnerabilities and exploits - they have a paid product for detecting vulnerable software (a different approach to the same ends as trying to catch/block the exploits) so it's in their financial best interests to publish a test that makes the anti-malware industry look bad (aka FUD) and the exploit problem look important (in other words, hyping up the problem)... it's a classic self-serving study and one wonders if the people responsible think the rest of us were born yesterday...

Wednesday, October 08, 2008

what i did on my sector vacation

well, today was the second/last day of sector '08 and now that it's over i figure i might as well write about my experience there... i don't do a lot of these sorts of posts primarily because i don't go to a lot of security conferences (the only other one i've been to was rsa '02) but since i'd heard good things about the last sector and since it is practically in my back yard (well, ok, it's approximately 1.5 hours away on public transit) i didn't feel all that guilty about broaching the subject with the higher-ups at work (the smaller price tag helps too)... it came at a pretty hectic time for me at work, but thankfully i was still able to attend...

the opening keynote of the first day was with the royal canadian mounted police; it was pretty dry unless you're a fan of alphabet soup - there were a lot of acronyms that i had never heard before and i'm sure i'll never hear again...

the first talk i went to after that was kevvie (kevvie?) fowler's sql rootkits and encryption presentation... this was an excellent talk if for no other reason than it gave me information i can put to direct use when i get to work tomorrow (awesome - instant value for my employers in the first session of the first day)... it was also pretty good about not misusing the term rootkit as so many others are wont to do these days...

the lunch panel was unfortunately not all that memorable for me... maybe i was too busy eating or maybe the people talking just had too short a period in which to make a lasting impression, i dunno... maybe i'm just not a panel person...

the second talk i attended was jay beale's middler presentation... once again there was possible value for my employer here, at least potentially... it's an unsettling realization that there's now an automated tool that can affect the confidentiality, availability, and integrity of web data (by virtue of allowing an attacker to read, withhold, or even modify your data) basically if any part of your session happens outside of ssl...

next up for me was bruce potter's presentation on novel malware detection... now i admit this one was for me and not my employers - the first two sessions were the only ones i could find that looked like they might touch anything relating to work so from here on out it's purely for my own interest... bruce was a very entertaining speaker, however he got on a bit of an anti-av rant that wasn't really part of his presentation (which dealt more with detecting anomalous network activity by analyzing logs)... i just rolled my eyes at the rant - i considered saying something, but since 'the hoff' was just across the aisle and 1 row back i felt certain it would have resulted in a smack upside the head and instructions to stop being such a jerk... ok, not really, but an in-person presentation is a very different forum from the online kind (where perhaps i'm known for being a jerk) in a number of ways, not the least of which being time constraints, and i had no desire to sabotage the presentation (though the length of the rant did that slightly anyways, since bruce wound up running out of time)...

the final talk i attended the first day was matt sergeant's presentation on tracking current and future botnets... there was a fair bit of interesting detail about current and past botnets, about their sizes and how those metrics were generated, about characteristics unique to the emails sent by each, etc., but matt (like bruce potter before him) got a little anti-av, saying they needed a kick in the butt about detecting those emails... my knee-jerk reaction (all internal because i didn't want to sabotage this talk either) was that av is in the business of detecting malicious code, not emails generated by malicious code, but as i let that stew for a while i realized 2 things... the first was that that was remarkably like something i said back in the mid-to-late 90's about av software not detecting trojans... not that i thought they shouldn't detect trojans, just that it was a defensible position to take - obviously detecting trojans was better than not doing so and i'm glad they started, but there was a time when anti-virus software was literally just anti-virus... now that it's morphed into anti-malware it's once again defensible to say that detecting something that isn't malware (and emails aren't) is outside av's scope, but (and this is the second thing i realized) the users would be better served and better protected if av did detect these things - it would serve as a negative control on a botnet's ability to acquire new nodes (at least until the bot designer changes the smtp footprint/fingerprint of the bot)...

so in that respect i think i'll agree with matt sergeant that av could be and perhaps should be doing more... his misapplication of a sophos graph of malware prevalence, however, i won't agree with... he really, really ought to know better than to try to compare a botnet's size with entries on a malware prevalence table... here's why it just doesn't work: a malware prevalence table breaks down malware prevalence on a per variant basis while botnets today are generally heterogeneous from a variant perspective (which is to say there are many times many different variants of a particular family of malware in any given botnet thanks to things like server-side polymorphism) so while a botnet may be huge, the prevalence of any particular variant in that botnet's ecology is still probably pretty low... that being said, something i've been mulling over in my mind for a little while now is whether prevalence tables broken down by family instead of variant are the more interesting metric these days in light of botnets and malware campaigns in general... personally, i'd like to see both types of tables...
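to illustrate the difference between those two kinds of tables, here's a tiny sketch with invented telemetry - each individual variant of the polymorphic family looks insignificant while the family as a whole dominates...

```python
from collections import Counter

# invented telemetry: one (family, variant) pair per detection report
sightings = [
    ("ExampleBot", "A"), ("ExampleBot", "B"), ("ExampleBot", "C"),
    ("ExampleBot", "D"), ("ExampleBot", "E"),
    ("OldWorm", "P"), ("OldWorm", "P"), ("OldWorm", "P"),
]

by_variant = Counter("%s.%s" % (fam, var) for fam, var in sightings)
by_family = Counter(fam for fam, _ in sightings)

print(by_variant.most_common())  # OldWorm.P tops this table with 3...
print(by_family.most_common())   # ...but ExampleBot tops this one with 5
```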

the opening keynote for day two was with stephen toulouse and had the best opening ever ([looks at giant screen] 'dear lord, is that what i look like' - or something to that effect)... stepto thinks us security folks can bring some valuable insights and thought patterns to fields outside of security - i certainly hope so, i'm in software development and while i'm not high enough up the food chain to make the big decisions (and frankly don't want to be) i have been able to direct some things which i hope have been of benefit...

the first talk i went to on the second day was deviant ollam's presentation on lockpicking... i found the lockpicking at sector absolutely fascinating, both in this talk and also in the lockpick village... perhaps it goes back to me breaking into my own home as a kid when i (frequently) lost/forgot my keys, but i just went into sponge mode and absorbed as much as i possibly could... i imagine there were a lot of questions about dudley combination locks since that seems to be what we have up here in place of master combination locks and since they aren't exactly the same (our dials go up to 60, so there)... one of these days i should really put in some time and try to see if i can brute force a dudley lock combination because i have 4 here but only one with a known combination...
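(for the record, the brute force math isn't encouraging but it isn't hopeless either - a back-of-the-envelope sketch, assuming a 3-number combination and ignoring whatever mechanical shortcuts a real dudley lock might allow:)

```python
from itertools import product

DIAL_POSITIONS = 60   # dudley dials go up to 60
NUMBERS_IN_COMBO = 3  # assuming the usual 3-number combination

# naive upper bound: every ordered triple is a candidate combination
combos = product(range(1, DIAL_POSITIONS + 1), repeat=NUMBERS_IN_COMBO)
print(sum(1 for _ in combos))  # 216000 at worst

# mechanical tolerance on most dial locks means nearby numbers still open
# it, so stepping the dial by ~3 positions is usually good enough, which
# shrinks the space to roughly (60 // 3) ** 3 = 8000 tries
```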

day 2's lunch keynote was johnny long's presentation on no-tech hacking... it was very entertaining to see the scope of the average (and often not-so-average) person's obliviousness to security concepts, but it was also a little disheartening especially when he ended the presentation without offering any hope for change... i think we all know there's a scarcity of security awareness in the general population, that's one of the reasons why i started looking into whether memetic engineering might be able to help things along (re: secmeme.com)... if only i had time to work on all the things i want to do (though i'm sure johnny's talk will provide a wealth of inspiration for the security idiot meme)...

the next talk i attended was james arlen's security heretic presentation... this presentation was in a rather unfortunate time slot, since chris hoff's virtualization presentation was going on at the same time (i thought of going to that one but really, the only thing i use virtualization for is sandboxing)... this was also the presentation that seemed to get the least amount of respect from attendees, as people were constantly coming and going (and i picked a seat near the door, uggh!)... unfortunately it was also not the talk i was expecting it to be... while i was expecting to hear about one security pro's journey (as the description suggested) what i got instead was a very large number of calls for a show of hands... i'm sure it all makes sense to people who have been in similar positions but for someone like me who hasn't been, it just didn't help me relate...

the last talk i went to was jason wright's presentation on finding cryptography in object code... strangely enough, i went to a talk on the same subject at rsa '02 where they talked about finding magic constants... jason led off with that (which made me a little bit nervous) but that was only for context, as the meat of the presentation was more about the frequency of occurrence of operations usually only seen in crypto, which was interesting... it also wound up being the shortest talk i saw...
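the magic-constant approach, for anyone who hasn't seen it, really is as simple as it sounds - something like this sketch, which just looks for a couple of well-known constants in a binary (a real tool carries a much larger catalogue, and constants loaded as separate immediates rather than stored in a table will slip past a naive search like this):

```python
# a couple of well-known crypto constants to hunt for in a binary
SIGNATURES = {
    # md5/sha-1 initial state words as they'd sit in little-endian memory
    "md5/sha-1 init values": bytes.fromhex("0123456789abcdeffedcba9876543210"),
    # the first 8 bytes of the aes s-box table
    "aes s-box (start)": bytes.fromhex("637c777bf26b6fc5"),
}

def find_crypto_constants(path):
    with open(path, "rb") as f:
        blob = f.read()
    hits = []
    for name, pattern in SIGNATURES.items():
        offset = blob.find(pattern)
        if offset != -1:
            hits.append((name, hex(offset)))
    return hits
```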

and then it ended, and we gathered for one last time in the keynote/lunch hall, i interrupted hoff fulfilling his security rockstar duties to say hi (sorry i didn't see you later when everyone made for the door, chris, i did look though but i'm sure you'd already attracted another crowd), and then they handed out prizes (prizes! i don't remember that at rsa) and it was done... it was a great experience, i enjoyed the talks a lot, i didn't network as much as i probably should have ('cause i generally suck at that) but oh well, at least i can put some more faces to familiar names now - perhaps if the next time is soon enough (ie. not 6 years in the future) i'll be able to put that to good use...

Saturday, October 04, 2008

do we really need anti-virus

thanks to alan shimel for pointing out this post by kai roer asking if we need anti-virus in 2008...

alan is right, of course, that anti-virus does a lot more than just catch viruses these days, and that anti-virus helps control older virus populations (good on ya alan, most people don't consider that)... kai asked a variety of interesting questions, though, which i tried to answer in his comments... like lonervamp mentions at the start of this post, discussions like this are something i'd prefer not to lose to the sands of time so i'm reposting my comments here (and i may start doing this more often, 'cause it seems like a great idea):
"Have the virus authors started to write smaller virus that stays below the radar - and thus are not detected by the AV-products?"

many of the virus authors of old have simply grown up and found more fulfilling things to do with their lives...

"Are they now only targeting special targets - like particular banks, SCADA or singled out corporations? Or countries and causes? Or are they too busy writing malware to care about virus? "

viruses are malware... non-viral malware, however, seems to be what the cyber-crooks prefer these days... self-replication has a way of getting out of hand and calling attention to the malware...

"Do we really need to pay out on gateway and client AV solutions if there are no virus knocking on the door? "

who says there isn't? just because you aren't hearing about new epidemics doesn't mean new viruses aren't getting written or even that the old ones have stopped... some of the most prevalent email-borne malware are mass-mailing worms that are already a few years old (like netsky.p)...

"Do you believe that there are no more virus out there?"

absolutely not... some people are still getting infected by decades-old boot infectors...

"That other threats are taking over and rendering AV-solutions useless?"

other threats are just as detectable with av as viruses are...

"Is this the whole truth? Or have the AV solutions became so good that they catch everything, even without us noticing? That they are an absolute critical part of the solution for any entity connected to the net?"

let's put it this way - old viruses never die, their populations just shrink to a size too small to accurately report/track... av is one of the things that helps keep those populations small...

and when it comes to newer non-viral malware, av is what helps keep its usability limited... without the blacklist, the bad guys would just find something that successfully bypassed other defenses and keep using it over and over, because other defenses cannot be updated as fast as a blacklist...

Thursday, October 02, 2008

symantec's reputation is in the clouds

the folks at symantec posted something interesting today - It's All About Reputation...

well, they're not the first ones to go into the cloud (obviously, see panda, trend, mcafee, etc)... nor are they the first to go with a reputation system (drive sentry, for starters)... are they the first to put a reputation system in the cloud? i don't know, maybe, but at this point it still doesn't seem like such a big deal...

what gets me, though, is the idea that it's no longer using fingerprints... a reputation system that says X is good, Y is bad, and Z is unknown is basically just a combination of a blacklist and a whitelist - and it's not a bad idea, i've been saying they complement each other well for quite a while now, so actually putting both paradigms into a single product makes a lot of sense... the blacklist is what says Y is bad, the whitelist is what says X is good, and since Z isn't on either list it gets called unknown... the thing is, blacklists use signatures (fingerprints) and in their own way whitelists do too - they have to in order to make sure the thing you're looking at really is the same thing you saw before and determined to be good/bad... it can't work without a signature/fingerprint/whatever... this new reputation system may use a different form of signatures, but it definitely uses them...

and as for how this protects you from brand new threats as the post suggests, i can only imagine it works like this: things on the blacklist are stopped from executing automatically, things on the whitelist are allowed to execute transparently, and things that aren't on either list will cause the user to be given an "are you sure?" prompt... finally, someone's putting dr. solly's perfect.bat (which asked the user if the file being scanned was a virus or not) to good use...
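in other words, strip away the marketing and the decision logic probably boils down to something like this (the fingerprints and lists here are placeholders, and the prompt branch is just my guess at how the unknown case gets handled):

```python
import hashlib

# placeholder lists - the point is the shape of the decision, not the data
BLACKLIST = {"fingerprint-of-a-known-bad-file"}
WHITELIST = {"fingerprint-of-a-known-good-file"}

def reputation_verdict(path):
    with open(path, "rb") as f:
        fingerprint = hashlib.sha256(f.read()).hexdigest()  # still a fingerprint

    if fingerprint in BLACKLIST:
        return "block"           # known bad: stopped automatically
    if fingerprint in WHITELIST:
        return "allow"           # known good: runs transparently
    return "prompt the user"     # unknown: the "are you sure?" moment
```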

the other way it might work is that the unknowns get automatically run in a sandbox of some sort... not a sandbox meant for malware classification, mind you (a number of products already do that), but a sandbox intended to separate the handling of untrusted items from the trusted host system... i mean, since they're already adding 2 of the 3 preventative paradigms into a single product (hopefully seamlessly), wouldn't it be cool if they added the 3rd as well? i won't hold my breath for them actually implementing this, though...
