Tuesday, December 27, 2011

the problem with the "like" trade

earlier today randy abrams posted an interesting take on facebook advertising and how misleading the word "like" can be (http://randy-abrams.blogspot.com/2011/12/facebook-misleading-advertising.html). this reminded me of a beef that i've apparently had going back at least as far as may of this year (judging by the timestamp on the screenshot i took).

specifically, randy said the following:
If I have to “like” a page to get the information I want, I don’t have a problem with that
well, with all due respect to randy, i do have a problem with it. randy makes some good points about the way people's pictures get used in facebook ads when they "like" things, but a point he neglected is that forcing users to "like" or otherwise post about something before they can see the content they've been lured with is a popular tactic in facebook scams.

now, i'm not trying to suggest that security companies making use of this marketing methodology are scam artists (though i am tempted to say that all marketing is in some way a scam), but they should be aware that by utilizing this sort of marketing they are effectively endorsing a methodology (developed by facebook) that breeds victims. i don't expect facebook to care about such things, since such trickery is how they make their money, but i certainly expect security companies (especially ones with as strong a leaning towards empowering users as eset) to know better than to go along with facebook's questionable methods and do things like this:
"like"s are not something to be bought from users in exchange for free or otherwise tempting content. they are an endorsement and as such can't be legitimate until after the user has sampled the content. the idea of exploiting illegitimate user endorsements should be recognized as unethical and should be understood to have consequences. by using the sort of techniques that scam artists thrive on, one is basically training people to be victims. i expect better from security companies and i think you should too.

Sunday, December 04, 2011

privacy in the age of forever

i've written before about what i think privacy is, though classifying it as an obscurity-based strategy for satisfying a basic need for safety was very high level and abstract. it could also be taken the wrong way, since people often think about safety as only applying to their physical person (i.e. physical safety). our physical bodies aren't the only thing we want to keep safe, of course. our families, our property, our reputations, our opportunities, etc. are all things we want to keep safe, all things we want to protect, and all things for which privacy can help offer some protection.

privacy is often described in terms of controlling information, but on reading danah boyd's thoughts on privacy i realized it can and should be expressed a different way. controlling information is the means by which privacy is often accomplished, but it's not what privacy is actually about. while i don't agree with the narrow scope boyd used ('asserting control over social situations'), at its root was a kernel of truth. while the means by which privacy is achieved may be the control of information, the point is the control of outcomes related to that information, whether they be social outcomes, business outcomes, educational outcomes, housing outcomes, health outcomes, political outcomes, legal outcomes, etc.

controlling outcomes is, of course, the point of any strategy. the thing about strategies, though, is that their appropriateness depends heavily on the situation, and what people largely don't realize is that there is a situation which is becoming increasingly ubiquitous under which many traditional privacy strategies don't work very well.

in the online world everything is recorded and stored for consumption at a different time or in a different place. it is essentially a persistent medium through which we can interact with each other. this is a significant point because for most of human history the real world has largely been an ephemeral medium for interaction. our behaviours, the strategies that we develop as we mature in the real world, take great advantage of the ephemeral nature of our interactions with others. if you weren't present the day your best friend made a hurtful comment about you to others in your peer group then you missed out, that experience is gone, "you had to be there" as it were. this ephemeral property of the event, the fact that the information only exists at a very particular point in time and space, serves to restrict access to that information to only those who were present at the same point in time and space.

once we start interacting online that ephemeral property ceases to exist, so access to the information that we might have otherwise expected to be restricted due to its ephemeral nature is no longer restricted in that way. we often don't realize that, however, because we take that 'ephemeral-ness' for granted. it's not easy adapting to a situation where that no longer applies.

for example, imagine for a moment that every word you speak goes into a speech bubble above your head, like in the comic books, except unlike the comics the speech bubble doesn't go away, it stays with you and allows people to read what you said 5 minutes ago or even 5 hours ago. every swear word, every uncharitable thought uttered under your breath in the heat of the moment, everything. can you imagine how you'd adapt to that sort of situation? you'd probably censor yourself a lot more than you currently do - since your utterances have become persistent the natural adaptation that would allow you to continue to control the outcomes associated with what you say is to say far less.

at first blush that might not seem unbearably bad, but let's take things a step farther because that example really only dealt with your words. this time (this is inspired by danah boyd's post, by the way), imagine you are stuck in a very large room and surrounded by everyone you ever have and ever will meet. imagine trying to live your life in this room. how do you play with your toddler in front of your business partner or a potential client? how do you woo your future wife in front of your children or your parents? how do you hang out with your high school friends in front of your future employers? how do you project an image of cool professionalism to people who saw you fall face first in a mud puddle? again, in such a situation, surrounded by people from disparate contexts of your life, the natural adaptation is to reduce the amount of information that you reveal about yourself, but think about those questions; there are certain outcomes that can't reasonably happen without revealing sensitive things about yourself.

these examples may seem absurd, but this is what it means to interact in a persistent medium. anyone, anywhere, at any time can (in theory) see the footprints you've left in that medium. your interactions in a persistent medium transcend time and space, allowing people to effectively 'TiVo' your life (or at least the portion of it that's been recorded).

obviously this represents an unacceptable state of affairs for online interaction. there's very little utility in it if it requires such profound self-censorship. that's the reason that technological privacy controls and privacy settings were invented - to help replace the access control that was lost when the information became recorded. unfortunately the technological controls don't operate the same way that ephemerality does, so trying to achieve a similar outcome with them is complicated and often not intuitive.

sean sullivan (at least i assume it was that sean) made a post on the f-secure blog that highlighted a talk given by clay shirky where he said (as quoted by sean) that "managing privacy isn't natural". technically what shirky said was that managing privacy settings isn't natural. we manage privacy every day in every interaction we make with others, but managing privacy settings by definition can't be natural because the settings themselves are artificial. this has implications for the kind of privacy one can achieve through managing such settings - it is itself an artificial, man-made analog to natural privacy, and prone not only to being incomplete in comparison to its natural counterpart but also to breaking down as all man-made things do.

but as untrustworthy as that sounds, it will have to be good enough, because we can't turn back the hands of time or halt progress. we can't even opt out of the persistent medium. oh, we might get away with staying out of the online world ourselves, but persistence is intruding into the real world more and more. public photography, for example, is turning the public sphere (which used to represent an ephemeral medium) into a much more persistent medium than it used to be. this can be a good thing when it helps to expose things like police brutality, but it poses a not insignificant problem for us as a society.

paul ducklin raised some concerns about this very problem last year on the sophos blog. at the time i didn't think his concept of public privacy made much sense, but when examined through the lens of a traditionally ephemeral medium of interaction being changed into a persistent one without people noticing or appreciating the consequences for their existing privacy strategies, it starts to be clear (to me) that this is a problem that deserves some consideration. i wouldn't consider it an invasion of privacy, per se, but perhaps it would qualify as a subversion of privacy, since it changes the environment to one where the strategies people were using to control outcomes no longer work properly, and it does so without making it clear that anything has changed.

are we ready for the implications of living in a world where our actions live on beyond the moment? i don't really know. certainly we can manage our privacy settings online, and maybe we can obscure our identifying features offline (though that may interact poorly with some of our cultural norms) so that public photography becomes less of an issue. i just wonder if explaining to the next generation what it was like before everything became persistent will be the last time we ever get to use the phrase "you had to be there".

Thursday, November 03, 2011

thoughts on metasploit's impact

i listened to the network security podcast #257 this afternoon, specifically because i wanted to hear what martin mckeay, josh corman, and hd moore had to say about metasploit and what josh corman calls HD Moore's Law. there were a lot of mentions of PCI and being 'this tall to ride the internet', but the comment that really caught my ear (i was listening to it rather than reading it after all) was that metasploit allows people to test their security against the attacks that are readily available.

and then a voice in the back of my head said "yeah, but metasploit is what makes those attacks readily available". it's essentially equivalent to saying that the readily available attacks allow people to test their security against the readily available attacks - i believe the way the internet identifies such tautologies these days is by saying "obvious statement is obvious".

one of the interesting things josh corman brought to the conversation was a breakdown of adversary classes (i encourage you to read his post that i linked to above, not only for that breakdown but also a visualization of their relative success rates against a scale of defender strengths) and it occurs to me that, in the absence of metasploit, these so-called readily available attacks that are in the hands of the casual attacker wouldn't generally be in the hands of the casual attacker (and thus wouldn't be the readily available attacks) but rather in the hands of adversaries of a higher calibre.

one thing that isn't really mentioned but seems fairly obvious is that the higher up you go on the scale of adversary classes, the smaller the population will be (the more skills one has the rarer one becomes) and consequently the smaller the aggregate pool of practical targets will be (since there's a limit to what any one given person can pull off in a given period of time, the manpower available to an attacker is a finite resource). that means that in the absence of metasploit, these attacks would be directly impacting fewer systems - probably more important systems, but fewer systems in total.

now before i go any further, let's address an assumption i've made. i think it's an obvious one. you've probably had it on the tip of your tongue for at least the past two paragraphs. the assumption is that in the absence of metasploit nothing else would pop up to take its place. for my purposes, that's actually not so much an assumption as it is an ideal starting point or degenerate case from which to build a more complex model.

so let's say that another group of well-meaning researchers decided to pick up the gauntlet. i see no intrinsic difference between that hypothetical case and the actual case we have with metasploit right now. that makes it really not that interesting an alternative, because it's not really alternative in any meaningful sense. the more interesting alternative lies in the argument that if the good guys didn't do it, if they were all too principled (for lack of a better word) to follow that path, then the bad guys certainly would.

so in that case let's say a group of ne'er-do-wells decided to produce a similar tool. would it be the same? would it have the same properties and present the same problems? i would argue that it wouldn't - that the incentives in the underground community are different enough that what would be produced either would not be free (and therefore not available to all casual adversaries) or it would not be as capable as metasploit would have been (perhaps because the best exploits would get held back in order to give the developers a competitive or strategic advantage over the attackers they're helping for free). the motive of doing it for the benefit of everyone (that drives the excellence found in the free edition of metasploit) simply isn't compatible with financially motivated cybercrime. greed and selflessness don't mix, so a criminal-driven equivalent to metasploit wouldn't lower the bar as far as metasploit does.

so what am i trying to say? what am i really getting at with all of this? the TL;DR version is that the metasploit folks are too nice to people, including the bad guys. the notion that metasploit represents the attacks that are readily available suggests to me that they lower the bar too much. no one seems to disagree that metasploit is a tool that is used by script kiddies (among others) and so i'm left to wonder very seriously whether there's an actual legitimate use case for metasploit that involves such a completely unskilled user. leave no user behind? i think under the circumstances an exception deserves to be made.

Wednesday, October 26, 2011

marketing bullshit isn't just from marketing departments

so apparently there's a conference going on right now called hacker halted. i heard mention of it a few days ago but paid little attention because frankly there are just too many security conferences to keep track of. what piqued my interest yesterday, however, was a retelling of something george kurtz is supposed to have said in one of the keynotes at the conference - specifically, he's quoted as saying the following (from @InfosecurityMag's tweet):
industry has to move beyond signatures and customers need to demand this from the vendors. We need to change and adapt
now i have to admit i had no idea who george kurtz was. fact of the matter is i have no idea who most people in the security field are (so if you're wondering why i don't follow you back on twitter or add you to a circle on google+, that's a strong contender for the reason why). i thought he was just some crank talking about things he didn't really know much about (more common than you might think, unfortunately) because the AV industry hasn't been relying exclusively on signatures for quite a long time.

imagine my surprise to discover george kurtz is actually the chief technology officer at mcafee, of all places. would such a highly titled representative of an AV company really say such a bizarre thing? well, if symantec's CEO can claim the virus problem is solved, then i guess so, but it still raises the question "what was he thinking?"

thankfully rik ferguson managed to tease a little something extra out of george on twitter
George Kurtz woke up in 2008 today "industry has to move beyond signatures". Helloooo? McFly?
@rik_ferguson maybe you missed the part about the hardware assisted security. Opps.. forgot you don't really have that at Trend.
and there we have it; the quote that people are fawning over (or scratching their heads over) was actually marketing bullshit. oh sure, on the surface it looks like an AV big wig eating crow and admitting that his company isn't doing a good enough job and needs to improve; just the kind of frank confession we're all waiting for the AV industry to make. but with this added wrinkle we see that's not it at all. george's company supposedly already has improved and it's everybody else who still has the problem. mcafee has this licked, mcafee is the solution, buy mcafee.

no, he didn't actually say 'mcafee is the solution, buy mcafee' (to the best of my knowledge), but that is the reality distortion he's setting up - and distorting reality that way is the hallmark of marketing. all that remains is to publicly declare that deepsafe (their hardware assist technology that they announced over a month ago) is how you "move beyond signatures" and the marketing message will be complete with reality suitably distorted to mcafee's benefit and everyone else's detriment.

now you might be thinking to yourself that this can't be true, that such a highly placed and well respected security expert would never stoop to such base gamesmanship. the fact is that not only do most public faces of the industry practice marketing regularly in the process of representing their respective companies, but most high profile speakers rise above the rest not strictly by merit but by effectively selling themselves and building their personal brands - and if they have to stretch the truth or fuzz the facts or distort reality to make their message more palatable to the masses and give themselves more cachet and influence, then so be it.

and the unfortunate consequence of this is that, because the rest of us rely on such speakers to inform us, much of what the majority of us know about the subject matter in question is actually somewhat wrong in subtle (or not so subtle) ways. reality distortion interferes with the formation of accurate mental models and that in turn interferes with people's ability to deal with the parts of the real world those models are supposed to represent. one of the things i've tried to impress on people in the past is that they need to stop listening to marketing, but i realize now that i don't have an easy method for them to recognize it in the first place. at least not without developing much more thorough knowledge than they currently have, and to do so without relying on apparent authorities on the subject in question is no easy task.

no, marketing bullshit isn't restricted to the glossy pages of a magazine or the cover of a box or an ad on tv. it's not just the product of marketing departments. it's woven into the very fabric of what we think we know, and it's hurting us.

Friday, September 23, 2011

facebook's ticker: a ticking privacy timebomb?

there's a lot of pixels and bits being dedicated to the major changes that are underway at facebook, but most of the attention seems to be focused on the timeline feature. i tend to think the ticker feature deserves a lot more attention and, frankly, concern.

first off, it seems clear that facebook wants to be the destination for an ever increasing number of activities - not just farmville or mafia wars, but consuming print, audio, and video media, and even purchasing goods too. fine, facebook wants to be the web portal to end all web portals - the next AOL or compuserve - it's fine to have aspirations like that; although actually being AOL or compuserve doesn't seem to have worked out that great for AOL or compuserve in the end.

but with that breadth of possible activities in mind, the idea that facebook will now be sharing everything you do automatically seemed really rather stupid to me at first. maybe there is too much friction inhibiting sharing right now, maybe clicking a button isn't easy enough, but truly "frictionless" sharing that happens without any action taken on the user's part takes the intent out of sharing. sharing loses all meaning that way. it no longer tells you the sharer thought this article was insightful or that video was funny, it doesn't give any hint whatsoever as to whether the sharer thought something was worthwhile, it just collects everything in one big activity profile. indeed, was the person performing those activities really the sharer in that situation, or was the sharer facebook itself?

at first you might think that the resulting poor signal/noise ratio would render facebook as irrelevant as myspace and its blinking, glittery profile pages have become. the folks at facebook seem to have realized this, though. they don't want people's main feeds to get filled with all that noise. they recognize that from a personal interaction standpoint, this data is too voluminous and unimportant. that's why they've relegated the data to a new place - the ticker.

the question you should be asking yourself right now, however, is this: if this data isn't actually useful to users when they're connecting with their friends, why is facebook interested in automatically sharing it? who is interested in that data? the answer is simple - advertisers. a large profile of everything you read, watched, listened to, and did online is for all intents and purposes your web history. in this case it will be your web history as seen through the eyes of facebook. we in the security community get upset about browser vulnerabilities leaking our browser history, or tracking cookies being used to track where we've been; the data collected for the ticker is not going to be inherently different from the data acquired through those other means. furthermore, it's too voluminous and granular to be useful to anything other than an automated process that looks for certain types of patterns and trends. the kind of process you'd use for the automated targeting of ads - targeting based on your activities, your behaviour.

the ruse of "frictionless sharing" appears to be a trojan horse (not the malware variety one might traditionally think of, though) for introducing behavioural profiling for the purposes of targeted advertisements. social spyware at its best.

but even if that's not the case, even if that data really is meant for a user's friends to see and use, there is a profound implication buried in the automated sharing of everything. you can't control your public image if the choice of what you share about yourself is taken away from you. for all the hand wringing recently about the damage that real name policies do (eliminating your ability to control the personal information that your identity represents), the elimination of your ability to control your public image means the elimination of the persona - something that has been part of the social experience of humans since the dawn of mankind if not longer.

our true selves, the nature that we keep hidden behind the masks that we each present to the world, are something that we innately keep private. i simply cannot believe that our social norms are headed in the direction of completely removing those social masks. giving up that private information in exchange for access to a service is a privacy bargain that we have never faced before.

so, if you thought the ways facebook could violate our privacy couldn't get much worse, you were dead wrong.

Saturday, August 06, 2011

tavis ormandy's sophail presentation

at the black hat security conference this year tavis ormandy presented his research into the way an anti-virus product, namely sophos' product, operates in order to shine a light on something that seems like an important subject to consider when judging the efficacy of a product. the paper associated with the presentation can be found here.

tavis was not particularly kind in his evaluation of the code (which he apparently performed by reverse engineering the product). sophos' response is very measured and diplomatic, which is pretty much the perfect way (from a PR perspective) to respond to the kind of criticism being leveled at them. as usual, however, i don't have to be diplomatic.

tavis' paper betrays a conceit that i suspect is more common in those who break things than it is in those who make things. developers, upon dealing with someone else's code, inevitably learn an important lesson: the code tells you what the software does, but it doesn't tell you why it does it that way. tavis thinks he knows all he needs to know, but he only had the code to go by. so when it comes to why certain things were done the way they were, the only thing he could reasonably do is make educated guesses. in some cases those guesses may well have been quite good, but in others they were not.

i first realized this was going to be the case on the second page of the paper where he describes how weak the encryption used on the signatures was, often an XOR with an 8 bit key. if you were to guess that such encryption was to protect the signatures from an attacker, as tavis seems to have, then you'd be dead wrong. the primary purpose encrypting signatures serves is to prevent other anti-virus products from raising an alarm on the signature database (something that used to happen in the very early days).
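
to illustrate what that sort of 'encryption' is actually for, here's a minimal sketch in python (the key and signature bytes are made up - this is not sophos' actual scheme, just the general idea). the stored bytes no longer match the raw signature, so another scanner walking the database won't alarm on them, and the scanner itself can trivially undo the transformation at load time:

KEY = 0x5a  # hypothetical 8 bit key

def xor_bytes(data: bytes, key: int = KEY) -> bytes:
    # XOR every byte with the key; applying it twice restores the original
    return bytes(b ^ key for b in data)

signature = b"\xeb\xfe\x90\x90\xcd\x13"  # made-up signature bytes
stored = xor_bytes(signature)            # what goes in the database
assert xor_bytes(stored) == signature    # decoded again at scan time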

on page 3 it's mentioned that the heavy use of CRC32 in the signatures means it's easy to maliciously create false alarms by creating files that have the same CRC32 values that a particular signature is looking for and in the same places. now i ask you, the reader, if someone is maliciously planting files on your network that are designed to raise an alarm, is that alarm really false? it may be a false identification, but there really is something malicious going on that an administrator needs to investigate.
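
for those wondering how a CRC32-based signature could be tripped on purpose, here's a toy sketch (the offset/length/checksum layout is my own invention, not sophos' actual format). CRC32 is not collision resistant, so crafting a file whose bytes at the expected location produce the expected checksum is easy, and to the scanner such a file looks identical to the real thing:

import zlib

def matches(data: bytes, offset: int, length: int, expected_crc: int) -> bool:
    # a toy CRC32 signature check: checksum a fixed range and compare
    chunk = data[offset:offset + length]
    if len(chunk) < length:
        return False
    return (zlib.crc32(chunk) & 0xffffffff) == expected_crc

# derive a "signature" from a made-up sample; any file that reproduces
# the same 16 bytes (or any CRC32 collision for them) also matches
sample = b"MZ" + b"\x00" * 14 + b"rest of the file"
sig_crc = zlib.crc32(sample[:16]) & 0xffffffff
print(matches(sample, 0, 16, sig_crc))  # True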

also on page 3 he criticizes the quality of the signatures by stating they appear to ignore the context of the programs they come from, that they're for irrelevant, trivial, or even dead code. perhaps tavis expanded on this in his live presentation, but the paper doesn't make clear whether or not he actually looked at the malware the samples were supposed to be for. if he didn't, then the criticism about ignoring context would be particularly ironic. let's assume then that he did. how many malware samples did he examine? if only a handful then there's a not insignificant chance that he was dealing with bad examples that aren't really representative of the overall signature quality. did he ensure that his samples were actually malware? did he ensure that his samples were being identified by the right signatures? his previous criticism (on the same page!) about false identifications should highlight the fact that he may have been looking at the wrong code when judging the quality of the signatures. but more importantly than that, there isn't a 1-to-1 relationship between signatures and samples. one signature may be intended to detect many (tens, hundreds, even thousands of) related samples - and however pointless tavis may think those sections of the malware code are, they may represent the optimal commonality between all those samples.

around page 8 or so, tavis makes a point of highlighting the fact that the emulation the product does in order to let malware reveal its unencrypted/unpacked form only goes for about 500 cycles. this highlights a failure to understand one of the core problems in malware detection: the halting problem. for any other criterion the code might look for in deciding it's seen enough, there's no guarantee that criterion will ever be encountered on a particular input program. there has to be a hardcoded cut-off point or the process runs the risk of never stopping - and that would severely impact the usability of the AV software. likewise if the hardcoded cut-off point isn't reached soon enough it also impacts the usability of the AV software.
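
a toy illustration of why that cut-off has to exist (the cycle cap and the 'instructions' here are invented for the example - real emulators are vastly more involved). without the cap, an input that simply loops would hang the scanner forever:

MAX_CYCLES = 500  # the hardcoded cut-off point

def emulate(program):
    # a toy emulator loop: run until we see the condition we're hoping
    # for, but never for more than MAX_CYCLES steps
    pc = 0
    for _ in range(MAX_CYCLES):
        op = program[pc % len(program)]
        if op == "decrypt_done":  # the condition we hope to encounter
            return "unpacked - scan the revealed code"
        pc += 1
    return "gave up - scan what we have"  # guarantees termination

print(emulate(["spin"]))                  # never reveals anything
print(emulate(["spin", "decrypt_done"]))  # reveals itself quickly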

there may yet be other examples of poor guesswork that i didn't see. my own knowledge of why certain design choices might be made is limited as i've never actually built an anti-virus product. i have considered it, of course, since i am a maker of things. perhaps tavis ormandy would benefit from more experience making things rather than breaking them. perhaps this was an unorthodox expression of the oft-repeated concept that the skills needed to attack are not the same as the skills needed to defend.

Tuesday, July 26, 2011

careful who you let write the history books

over the course of the previous two weeks, kevin mcaleavey has been publishing a series of blog posts on the infosecisland site (parts one, two, three, four, five, and six) about the history of the anti-virus industry. rob lewis thought this might be a subject that would interest me and he was right. unfortunately, rather than finding it mostly informative, i found with each passing part an increasing desire to post a serious critique. i'll take them one part at a time.

part one

to start with i think we need to pay attention to how the author presents himself. each part refers to him as a long time industry insider, but doesn't go into anything more verifiable or specific than that. with such vague credentials (and those are meant as credentials - there's no reason to put them there except to try to convince the reader that the author is an authority on the subject he's writing about) it really feels like he's saying "trust me, i'm a security expert". now i've never heard of him but that alone doesn't say that much, the anti-virus industry employs hundreds if not thousands of people and i'm really only familiar with a comparative handful of them. in part three we get to find out more specifics, but for the time being let's just say that such nebulous credentials make me very suspicious.

to be fair i should point out my own credentials, so that i'm not being a hypocrite. unlike kevin, i am not now, nor have i ever been an industry insider. i have never been employed by any company in the anti-virus industry, i have never received financial remuneration for anything i've said or done involving the anti-virus field, and frankly i've only ever met a handful of people who were part of the industry. that being said, what i am (in terms relevant to this discussion) is a long time observer of the anti-virus industry. i've always been on the outside looking in, but from about the age of 15 onwards i basically grew up interacting with security hobbyists, security professionals, security software engineers, security researchers, and even some of the big names that kevin mentions in part two.

in theory, being an outsider seems like it should mean i have a more superficial view of the anti-virus industry.  we shall see.

the first part of kevin's series focuses mostly on where we are today so a lot of the things he mentions should sound familiar to people following the security news. one of the things he mentions is the rustock botnet. he presents it as a single piece of malware with a 5 year half-life. this is a bit misleading for a couple of reasons, the first being that rustock is actually a family of malware - there have been many versions since 2006, in part because the anti-virus industry keeps interfering with the utility of existing versions by detecting them. additionally, the term "half life" has a very specific meaning which doesn't really apply that well to a botnet that has for all intents and purposes been killed now that microsoft controls the command and control server.

subsequently he wrote the following:
"TDL4" however has publicly caused the security industry to transition into full panic mode and literally throw in the towel
this may seem pedantic, but in order to literally throw in the towel, there has to be an actual physical towel, and someone has to throw it. an argument could be made, i suppose, for saying that they figuratively threw in the towel, or metaphorically threw in the towel. maybe even virtually threw in the towel - but if he insists that they literally threw in the towel then i have five words - pics or it didn't happen.

now there was some early misinterpretation of the use of the word "indestructible" that got posted in less knowledgeable media circles, specifically that the malware was indestructible rather than the observation that malware authors were trying to create an indestructible or bulletproof network of compromised computers, but a knowledgeable industry insider should have been able to see through that. furthermore, from my perspective the only panic i saw was the panicky feeding frenzy in the media over a statement which, like so many more purposeful scams, was simply too spectacular to be true.

i got the distinct impression upon reading part one that he tends to take mainstream media as gospel when it suggests that all is lost in the fight against malware. that seems strange to me. why is an industry insider putting so much credence in what the mainstream media says? he acknowledges that the industry has called this interpretation wrong, and that they're trying to correct that misinterpretation but he seems to suggest that the "corrections" are a product of public relations rather than a genuine attempt to correct factually erroneous statements that spread fear, uncertainty, and doubt.


another statement i'd like to draw attention to is the following:
To see this public admission that 1980's technology has utterly failed is nothing short of breathtaking.   
admission by whom? where? what are the details? none are given and we must take him at his word that such an admission actually took place. or perhaps he thinks the misinterpretations spread by mainstream media represent that admission. could it be that he's unfamiliar with the degree to which they botch things up on a regular basis? it really makes me question his credibility if he's placing his preference on the words of reporters over the words of researchers.

part two


part two is where he actually starts talking about history. he starts right at the beginning but there's a problem. he mistakenly thinks the brain virus 'destroyed' many hard disks. brain came out at a time when hard disks were a rarity. moreover, brain specifically avoided infecting hard disks. the link kevin himself provided says this so i can only assume that he's not actually reading the sources he's providing to the reader.

he also seems to mistakenly think brain required a reinstall to recover from. this is, of course, false. there have always been less drastic ways of restoring boot sectors. the main problem was information about doing so wasn't as easy to come by back then as it is now. still, thinking there wasn't a way to do it is essentially a form of ignorance that you wouldn't expect to find in a long time industry insider.

yet another point on the brain virus; he seems to think the backlash from it forced the developers who made it out of business. i guess he never saw this video by f-secure's mikko hypponen, where mikko found the developers of the virus still working at exactly the same address they were at 25 years ago when they originally wrote the virus.

brain isn't the only piece of malware whose details he gets wrong, though. he mistakenly thinks popureb requires a windows reinstall to remove, much like brain. in reality what's required is to restore the MBR with the recovery console. this seems to be another example where kevin has taken mainstream media at their word instead of digging deeper and getting actual inside information, the way you'd expect an insider to do.

he also seems to think that stoned was the first virus to go memory resident and infect any disk that was inserted in the drive. stoned was a boot sector infector, however. all boot sector infectors infect (more or less) every disk inserted into the drive, it's kinda how they spread, and since it wasn't the first of that sort of virus i can't imagine how he got the impression it was the first one to go memory resident.

more generally he seems to think the majority of viruses in the 90's were jokes and pranks and programs designed to delete files. my recollection is that most actually gave no outward indication of infection whatsoever. virus writers quickly decided they liked the notoriety that came with one of their viruses spreading far and wide, and the best way to help that to happen was to make sure it didn't make itself known by messing up people's computers. the ones that stuck out in people's memories were oftentimes jokes or pranks or had destructive payloads, however.

it's not just malware he seems to get wrong, however. he appears to be under the impression that VIRUS-L was a private echomail conference available to SysOps (system operators) of BBSes that carried FidoNet. VIRUS-L was actually an internet mailing list which was gated into usenet in the form of the newsgroup comp.virus. i have no doubt it was further gated into FidoNet - i know alt.comp.virus was as my first encounter with it was through a usenet-to-FidoNet gateway. i have a feeling i encountered comp.virus the same way (without ever being a SysOp). that said, it's possible kevin could have been thinking of something else; maybe VIRUS or VIRUS_INFO (or even VIRUS_NFO) which were in fact echomail conferences on FidoNet (the first two of which have me as their most recent moderator). they also aren't private, however; i really can't seem to figure out where that part of his recollection might have come from.

another curious thing i noticed is that he seems to think eugene kaspersky is the only member of the old guard of virus hunters still in the field after john mcafee, alan solomon, and peter norton (?) left. he also mentioned frisk software international but sort of as an afterthought, and somehow failed to mention frisk the person. frisk the person (and the company) are still out there, still working. they may not be attracting attention but they're still there. as such, kaspersky isn't the last so kevin's depiction of the industry as having been lobotomized seems all the more inaccurate. what's more, however, is that kevin only seems to be acknowledging a handful of the icons in the anti-virus industry. there were and are a lot more people from the old days out there. a great example would be aryeh goretsky, who was, from the sounds of it, john mcafee's right-hand man, and who currently is with eset. then there's the venerable vesselin bontchev, once a well respected virus expert at university of hamburg and now a well respected virus expert at FSI. then there's jimmy kuo of symantec and then mcafee and now microsoft. there are many great minds from the early days that are still with us today, so this focus on just a handful of icons, and the suggestion that since most of those are gone the industry has been lobotomized, seems like a decidedly superficial view.

part three

the third part in kevin's series was about the demise of the anti-virus industry (it's not dead, it's just sleeping). it's also where we finally find out about his actual credentials. he was part of privacy software corporation, the company behind BOClean, which was apparently originally a cleanup tool for the back orifice remote access trojan but was later expanded to cover more malware. apparently this company was bought by comodo in 2007, but from the sounds of things kevin didn't stay on with comodo for all that long, and even if he was still with them today, 4 years isn't exactly "long term".

he presents himself as a long term industry insider in a discussion about anti-virus vendors, but his actual first hand inside experience with the kinds of companies he talks about and criticizes in this series of posts only went on for 2 1/4 years according to his linkedin profile. that seems rather underwhelming. oh sure, before that he spent a long time in a different part of the anti-malware industry, but he spends very little time actually talking about the part of the industry where most of his actual experience is from - except to extoll the virtues of the techniques used by BOClean, of course. in any event, it's clear by this point why he was so vague about his credentials in earlier parts.

back to the demise of the anti-virus industry, however. kevin focuses on the point at which trojans started to take off as the death knell of the industry. it was, after all, the point at which his company was forced to start dealing with malware. he seems to think that the early failures to handle trojans reflected inability on the industry's part. just to be clear, when trojans began to take off the industry did hesitate, and i don't mean for a second or a minute or an hour or a day or week or a month or even a year. the anti-virus industry hesitated to redefine itself. it took a lot of doing to overcome the philosophical inertia that kept them out of non-viral malware detection. trojans were not viruses, remote access 'tools' arguably had legitimate uses, and the industry was gun-shy with regards to litigation. i hope kevin can appreciate that last point as it was his revered dr solomon who explained that one to me. apparently they couldn't even add generic detection for MtE based polymorphic viruses using the existence of the MtE algorithm as an indicator because some brainiac went and put the algorithm in a 'legitimate' code obfuscation tool. the industry did eventually overcome their philosophical qualms about dealing with non-viral malware but it took the escalation of the trojan problem to make people (both inside and outside the industry) realize that the industry would be justified in going after this new kind of threat that was previously outside of their purview.

a great deal of kevin's characterization of the failure of the industry seems to focus on what his company handled better. the reader is left to take his word for it that his company did a better job. he's clearly not an impartial 3rd party, so his perceptions should be taken with a grain of salt. he talks about a number of techniques used by the anti-virus industry and what their failings are - a number of which are almost certainly exaggerated. i've seen my share of alternative anti-malware approaches whose proponents go on ad nauseam about how their approach is the superior one (zvi netiv stands out as a remarkable example of this). better is always subjective; there are always good and bad points.

he describes BOClean's approach of only looking for the malware in memory, rather than trying to find it on disk like traditional anti-virus products do, as superior. there are benefits, of course. packers and cryptors have no effect on detection because that transformation is undone at runtime. however, there are also drawbacks. once the malware is in memory it is active. it's possible for active malware to interfere with security software in any number of ways. it could use stealth, it could shut down the security software (which i gather was a problem for BOClean), or it could even use the security software's own filesystem enumerating behaviour to find new host programs to infect (if you're dealing with a file infecting virus). once the malware is active the security software has lost an important tactical advantage. imagine you're in a street fight - do you wait for your opponent to strike first or do you try to strike before they're even in a position to defend themselves? and remember, i'm talking about a street fight here, not some fair fight with referees and judges. you do not wait for the malware to go active in memory if you can avoid it. the code that gets control first wins, and once the malware is active in memory, it's gained control. it's only by kevin's good fortune (or BOClean's low profile) that malware creators didn't try to stomp out or otherwise interfere with BOClean. the AV industry has already faced that, going back all the way to brain (which had stealth).

some of his exaggerations devolve into more factual errors. for example he calls win32/ska a simple file infector and criticises the industry for taking months to handle it. the fact is that win32/ska's infection technique was nothing like the traditional file infection technique kevin laid out elsewhere. win32/ska was not some appending file infector, it didn't simply insert itself into the winsock DLL, it carried a modified copy of the winsock DLL with it and replaced the original with the modified version. as such, recovery of the original winsock DLL was not comparable to recovering from a traditional file infector. it may have taken months to build the capabilities to recover from this type of infection into their general purpose tools, but that in part was because there were special purpose tools available to mitigate the problem and lower the priority of getting it into their general purpose products. there also happened to be manual instructions for removing win32/ska (i know because i have given those instructions to people) which also mitigated the problem and thus lowered the priority.

he also points out that even now systems are still getting compromised with back orifice 12 years later and characterizes that as an example of the anti-malware industry's failure. the fact is that his BOClean is just as much a failure in that as the rest of the anti-malware industry is. that is if you can really call it a failure. stopping that trojan from ever getting used again, making it become extinct, that's just not something anyone can actually do. and it doesn't really have that much to do with whether the malware can be detected or removed by software tools, but more with the heterogeneity of the computer population and the mind-share of the malware in question amongst those who would use it. old viruses never die, and apparently old trojans don't either.

he places the blame for why all windows malware works at all on microsoft. unfortunately it's not that simple. any general purpose computer is capable of supporting malware, no matter what the operating system. in this way malware can be said to be a 'feature' of general purpose computing. no matter how tightly you try to control things, there will always be the potential for software to maliciously do something you don't want it to do.


kevin seems to think that the AV industry has for the past 30 years (which is longer than the industry has actually existed) simply been coming up with signatures and not looking for patterns that could be useful for future variants. the fact that we have the concept of malware families at all invalidates this line of reasoning. it wouldn't be possible to classify new samples as belonging to an existing family without looking for such patterns. this line of reasoning also ignores heuristic alerts that say "possibly modified variant of X". it also ignores the practice of consolidating many specific signatures into a few generic signatures.

he also calls automated analysis 'cheating' and characterizes it as simply creating an MD5 or SHA1 hash. this seems to ignore the fact that you can't logically identify that a sample is a variant of another malware family by simply creating a hash. it also clearly ignores the concept of automated classification, whereby a sample is compared with many other samples in order to determine which ones it's most similar to and thus which family it belongs to. one also has to wonder how exactly vendors are supposed to deal with 100,000 new samples a day without some kind of automation. even if you had a way to detect most of those new samples ahead of time with generic signatures, you still need to test and make sure each of those new samples is actually malware and is actually detected.
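
to make the distinction concrete, here's a crude sketch (byte 4-grams and jaccard similarity are stand-ins for the far more sophisticated features real classifiers use). the exact hashes of two nearly identical samples share nothing, while a similarity measure still shows they belong together:

import hashlib

def ngrams(data: bytes, n: int = 4):
    # the set of overlapping byte n-grams in a sample
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def jaccard(a: bytes, b: bytes) -> float:
    # similarity of two samples as the overlap of their n-gram sets
    ga, gb = ngrams(a), ngrams(b)
    return len(ga & gb) / len(ga | gb)

v1 = b"decrypt payload; connect c2.example.com; log keystrokes"
v2 = b"decrypt payload; connect c3.example.com; log keystrokes"

# exact hashes are useless for relating the two variants...
print(hashlib.md5(v1).hexdigest() == hashlib.md5(v2).hexdigest())  # False
# ...but a similarity measure places them in the same family
print(round(jaccard(v1, v2), 2))  # close to 1.0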

rather unsurprisingly at this point, he describes heuristics as bad because they emulate the malware while trying to catch it in the act of doing something bad. this is only (somewhat) true of dynamic heuristics, not static heuristics. it's also hypocritical coming from someone championing the idea of letting the malware become active in memory before trying to do anything about it.

he also referred to heuristics as creating "massive" false positive problems. in reality false positives are relatively rare. a single false positive can, of course, cause problems for many people if it's a system file, but those are rare even amongst regular false positives. those really high profile failures are something that has only happened a handful of times.

part four

the fourth part of kevin's series focuses on operating systems, mostly windows, but covering mac os x and linux too. i actually agree with a lot of what kevin has to say in this part (which makes me wonder if my knowledge of OS security might be a little too shallow), but there are a few things that are worth pointing out.

i tend not to agree with his suggestion that the concept of file associations based on file extensions is a bad thing and should be replaced by some kind of intelligent parsing. for one thing i think the parsing idea is logically infeasible. intelligent parsing would require windows to know about every file format, even the one i just invented yesterday, which it obviously can't. he also envisioned that the parser could generate a warning for the user to indicate what sort of action windows was about to take, but the warning could just as easily have been done with file associations. the warning is based on the outcome and has little to do with how the outcome is decided upon (be it parsing or association).

on the matter of unix, kevin appears to be of the opinion that pure unix was secure. this would seem to ignore the fact that the original academic treatment of computer viruses (where the term actually got coined) had a virus spreading on a professionally administered unix system without the admin's assistance. how secure is that?

he also seems to believe that linux used to be secure in the beginning, but seems to forget or not care that rootkits worked on linux way back close to its beginning. he admits that there was malicious software but contends that it got removed from distros quickly after being introduced. this ignores the fact that once end users start using such an OS, malware doesn't need to be bundled into the distro. if a user can run it, they will, and linux didn't and doesn't prevent running malware.

part five

the fifth and penultimate part of the series deals with the concept of defense in depth or, as kevin prefers, layered security. he spends an inordinate amount of time talking about firewalls and how awful they are. he sort of presents defense in depth as a caricature, focusing only on the layers that gained widespread adoption and suggesting that anything else was outside of the regular user's price range. frankly i don't think integrity checking was ever priced that way, nor behaviour blocking, nor any number of other techniques that he conveniently ignores.

what's more, he criticizes each layer for being insufficient. it's as if he doesn't understand the purpose of having multiple layers. each one has flaws and weaknesses, sure, all security measures do, but in aggregate those weaknesses are diminished. they're never completely eliminated, of course, because there's no such thing as perfect security, but it's certainly something that we can approach. one might argue that one can approach that by re-engineering one particular layer instead of having many, but some of those weaknesses are inherent rather than being design or implementation flaws. such inherent weaknesses can only be braced by other additional, complementary layers.
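
a back-of-the-envelope illustration of that aggregate effect, under the admittedly unrealistic assumption that the layers fail independently (real layers overlap, so treat the numbers as illustrative only):

layers = [0.20, 0.20, 0.20]  # hypothetical per-layer miss rates

# assuming independence, an attack succeeds only if every layer misses
combined_miss = 1.0
for miss in layers:
    combined_miss *= miss

print(round(combined_miss, 3))  # 0.008 - under 1%, versus 20% for one layer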

part six

the final part of kevin mcaleavey's series was meant to present solutions to the problems mentioned in the previous 5 parts. when i read the final part in the series all the pieces finally clicked into place. all the confusion i previously had was gone, all the WTF moments finally made sense. i finally knew what he was on about and what the series' true purpose was. kevin's got a product to sell and the series was an extraordinarily long sales pitch.

it followed a familiar pattern too, now that i think about it. first he dished out FUD about his competitors (and since he's now a secure systems provider his competitors include both the anti-malware software vendors and the operating system vendors), and then at the end he hyped up his own product and made it look like the blatantly obvious choice to avoid the horrible, horrible problems with everything else. he even threw in a smattering of snake oil phrases like "absolute security and protection" for good measure.

the product and/or service in question (KNOS) appears to my untrained self to be similar to some sort of freeBSD-based LiveCD composed entirely out of carefully audited modules, with the provision that you can request custom configurations from them to suit your needs. the idea appears to be that the code that is allowed to run is carefully controlled and limited by them (rather than being something the user can add code to him/herself) and that exploitable vulnerabilities have supposedly been eliminated. this is supposed to keep the users safe both from malware and from themselves.

unfortunately, kevin seems to have fallen prey to the engineer's conceit - the (often mistaken) belief that a particular problem can be solved through carefully engineered technology alone. KNOS does not offer absolute protection, and to say it does offers users a false sense of security. it may well stop all the malware that kevin can imagine, but (to paraphrase something that's often said in cryptographic circles) it's easy to come up with a security system so advanced that you yourself can't figure out a way around it - what's hard is coming up with one that other people can't figure out a way around either.

obviously i don't know enough of the details of KNOS to suggest specific scenarios where its security can fail, but i do know enough about computation in general to see what kevin (and others who aim to ensure only good/safe code is allowed to run) has missed. what many people don't realize, what society's conventional thinking about computers fails to hint at, is that data is code. the distinction we draw between the two is little more than a mental construct that simplifies the task of building systems. it doesn't represent how computation actually works, and it isn't necessarily adhered to by the people who break systems.

unless and until someone can come up with a way to control which data gets processed as scrupulously as they control which code gets executed, data will remain an open window through which systems with locked down code can be attacked. and since determining the safety of data will ultimately require the data to be processed in some way, we arrive at a catch-22.
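
a deliberately blunt toy example of that catch-22 (eval() stands in for any sufficiently powerful data processor - a document renderer, a script-capable file format, a parser with a bug). the 'vetted' code below never changes, yet what actually happens is decided entirely by the data it processes:

def vetted_processor(user_data: str):
    # the only code that ever runs is this vetted line - the data drives it
    return eval(user_data)  # "just processing data"

harmless = "2 + 2"
not_so_harmless = "__import__('os').getcwd()"  # data acting as code

print(vetted_processor(harmless))         # 4
print(vetted_processor(not_so_harmless))  # the 'data' just ran os.getcwd()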

i fully expect that KNOS probably has some impressive security capabilities, but absolute security is a fantasy, and the more traditional security layers that kevin derides have managed to keep me essentially malware free (except for an externally non-addressable RAT installation on a sacrificial secondary system) for over two decades. beware security historians who are trying to sell you something - chances are they'll rewrite history to suit their sales objective.

Thursday, July 07, 2011

maybe we should blame the victim

pardon my iconoclasm, but a twitter conversation with jerome segura and maxim weinstein got me thinking about this. it was sparked by maxim's blog post "stop blaming the victims" where he argued that we shouldn't be blaming people for failing to follow security best practices (such as keeping web servers up to date). personally i consider this to be a form of infantilization. i've argued against coddling users before but i want to expand on the idea here.

the principle and practice of not blaming the users basically sends them the message that they're OK, they didn't do anything wrong, and they can keep doing things the way they have been. this is a marked departure from many of the other messages we send users trying to get them to be more aware of security and to make better decisions in security contexts. that makes the "don't blame the victim" dogma a substantially mixed message. have they really done nothing wrong? often times there are things they could/should have done differently, things they've been told about in the past but still failed to consider. can they be entirely free from responsibility for what happens to them in such a circumstance? i don't believe so. do we really want to send the message that they did nothing wrong and don't have to change? how will we ever get people to take better care of their security if we do that? many people are poorly adapted to the realities of the modern world and if there's no force giving them pushes in the right direction they'll never improve.

more fundamental than that is the fact that victims are victims of the word "victim". by acknowledging someone as a victim we accept and embrace the notion of powerlessness that the word engenders. recognizing people as victims gives them a license to be victims and to remain victims. when someone is taken advantage of we shouldn't be treating them as some helpless and fragile thing, we should be helping them to become empowered so that they don't get taken advantage of again and again and again. by telling them they're helpless victims we rob them of the opportunity to better master their fates and gain confidence in their abilities. perpetuating the notion of the victim keeps the lay-person down.

therefore, not only do i think we should hold people at least partially responsible for the consequences of their actions or inactions (to blame the victim in normal parlance), but i also think we should blame the people who say "don't blame the victim". their well-meaning but ultimately misplaced mollycoddling holds people back and stymies our collective growth and advancement. we can never adapt if we're taught that we can't change our fates.

Monday, July 04, 2011

i wandered lonely as a cloud

relax, i'm not about to start waxing poetic about daffodils. rather i'm thinking about cloud-based anti-malware software.

it's something i've been thinking about for a little while now but i've finally decided to commit my thoughts to a more permanent format and share them with others.

for the past couple of years the major anti-malware vendors have been deploying cloud technology to improve the effectiveness of their products. often this has been an optimization specifically for their known malware scanners, although some have also taken the opportunity to build reputation systems.

it occurred to me that the cloud could be used for a great deal more than just that. think about what those reputation systems are doing. the user is faced with a complex question - is file X safe - and the cloud answers. the cloud can do this either because there are experts feeding the cloud its answers or because there's a community feeding the cloud its answers (or both, come to think of it). the point is that the cloud reduces the complexity for the user.
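to illustrate, here's a minimal sketch of the lookup at the heart of such a reputation system. the service URL and verdict format are hypothetical stand-ins - real vendors have their own protocols - but the shape of the transaction is the point:

import hashlib
import urllib.request

def file_reputation(path: str) -> str:
    """ask a (hypothetical) cloud service: is file X safe?"""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    # the complex question is reduced to a simple key lookup; experts
    # and/or the community supplied the answers ahead of time
    url = "https://reputation.example.com/lookup/" + digest
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()   # e.g. "safe", "malicious", "unknown"

the user never sees the complexity - they just get an answer.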

now think for a moment about all those technologies that have sprung up and then fallen by the wayside over the years. how many of them fell out of favour because they required too much knowledge, because they asked too much of the user? do you see where i'm heading yet? the cloud as a complexity-reducing technology (alright, it technically transfers and collates that complexity, but from the user's perspective it reduces it) seems like it actually has the potential to breathe new life into virtually all of those other techniques, be they sandboxing, whitelisting, behaviour blocking, or even integrity checking.

and of course, as i was originally coming up with that list i was reminded of the fact that many of them have actually been augmented with some kind of cloud technology to help take the complexity out of their operation. those efforts simply haven't been particularly mainstream. the biggest vendors have been slow to recognize the opportunity to augment these technologies (which can be superior in the right hands) with complexity reduction as a service. the smaller vendors that are taking a chance with this don't necessarily have the stability to keep it going. it would be nice if those other options saw more mainstream deployment and adoption.

quick thought on cyberwarfare

one of the topics that keeps coming up in discussions of cyberwarfare is 'attribution' - the ability to know where an attack came from, who's responsible for it, etc. it keeps coming up because many of us recognize that it's very difficult to do with attacks in cyberspace.

this is a source of confusion for many because our model of warfare involves things like deterrence, counter attack, and appropriate response. without attribution these things aren't possible.

cyberattacks are often likened to missiles or other kinetic warfare weapons where attribution is a much more straightforward process - i think the confusion is rooted in this comparison. instead of thinking about overt warfare, cyberwarfare would be better likened to covert warfare - black ops, wet works, that sort of thing. there is no meaningful attribution with this sort of warfare and so there is no meaningful deterrence or response to such attacks. it is an area of warfare where there is attack without counter attack, where one attacks simply because it is strategically advantageous.

don't picture these so-called cyberweapons as being like electronic missiles that anyone can launch simply by pressing a button - that gives entirely the wrong sort of character to the topic. if you must consider them cyberweapons at all (i.e. if you must focus on the tool instead of the attack itself) think of them as guns with silencers on them. or better yet, think of them as knives, used with surgical precision and giving those in the vicinity no clue as to where the attack came from.

Monday, June 27, 2011

the dawn of insecurity

earlier today i sent out a tweet mentioning a security awareness initiative by the folks at eset and that started off a brief discussion with michael santarchangelo about security awareness and adoption thereof. it could have been a longer discussion, but i quickly realized i had more to say about the subject than could reasonably fit in twitter.

the reason we talk about security awareness is that most people seemingly lack such awareness. but what does that mean? well, one of the things it means is that people don't think about the consequences of their actions, they don't think about the possible outcomes. this isn't just some people, either. as michael pointed out, it's all people - even you and i, to some degree, fail to account for all the possible outcomes. it's also not just about security, but rather about virtually any kind of awareness. some of us are more aware of certain things than others are, and aware of some things more than other things, and of course the amounts are different for each person.

we could, of course, think about such things more, so why don't we? in a word: laziness. now i don't mean that in a judgmental way. although laziness certainly isn't well regarded in this day and age, it's not just some character flaw in humans. the argument could be made that it served a purpose, once upon a time. physical laziness, at least, serves to conserve energy, which would have been an advantageous evolutionary trait back before we started to gain mastery over our environment, when food was harder to come by. wasting energy foolishly could have hastened starvation, so the fact that we developed a tendency to conserve our strength for when we really needed it is probably a good thing, even though the adaptation doesn't serve us nearly as well now that food is (at least in developed nations) relatively plentiful.

i'm not about to suggest that mental laziness shares the same lineage as physical laziness, however. it would be quite the stretch to suggest that thinking too hard could lead to starvation. mental laziness is something i've been thinking about for a while, why it's there and how to overcome it*. at some point it occurred to me that every moment a person spends thinking about outcomes is a moment that one isn't being in the moment. being in the moment is one of the hallmarks of happiness. being focused on the present instead of the past or the future is something one only does when one is content. one could, then, argue that people don't think about outcomes unless they really have to because it means giving up (if only temporarily) a state of mind in which they experience happiness.

that seems a little wishy washy to me, though, and while i was chatting with michael i had an idea about a possible root of mental laziness that is more like the one for physical laziness i described above. having a tendency to focus on the here and now could have been an advantageous evolutionary trait. being lost in thought when you're out in the wild and you're not the top of the food chain is a good way to become lunch for something else. those that spent too long thinking about abstract concepts got eaten while those who maintained a presence of mind lived on. the fact that contentment and happiness are linked to that mental state could be a neurological reward that evolved to reinforce what was once a beneficial behaviour (it feels good so do it more), much like our tendency to prefer sugary/fatty foods would have aided us in prioritizing energy rich foods when food was less plentiful (it tastes good so eat it more).

of course, the world of today is much different than the world in which such evolutionary traits would have developed, so they don't serve us nearly as well as they once might have. those neurological rewards still reinforce the behaviours in spite of the fact that they're no longer advantageous. some of us have, whether through genetics or conscious effort, become better adapted to various realities of the modern world. those of us who have should count ourselves as lucky rather than looking down our noses at those who haven't adapted as quickly. fighting against millions of years of evolution can't be easy and few of us are really that much further along than anyone else.

i offer these thoughts to serve as a form of perspective. it would be nice if we could just read some books or articles, or attend some classes and then magically overcome whatever it is that is holding back our security awareness. but if i'm right then at least part of what holds us back dates back to the dawn of man, if not earlier. such intrinsic aspects of humanity are not so easily changed, and yet we continue to evolve and adapt.


(*to a certain extent i started trying to overcome others' mental laziness with respect to security with http://www.secmeme.com long before i ever started to think in terms of mental laziness. if i were to describe it uncharitably, i'd say i was trying to trick people into thinking more about security.)

Friday, May 27, 2011

the whitelister's dilemma

you remember marcus ranum's 6 dumbest ideas in computer security? #2 on that list was enumerating badness (aka blacklisting), which he believed should be replaced with enumerating goodness (aka whitelisting).

ignoring the fact that his underlying assumption about the relative sizes of the malware and legitimate software populations was incredibly wrong*, there's a much more fundamental problem with turfing blacklisting in favour of whitelisting:
the only meaningful criteria we have for deciding something is good or safe is that we haven't found anything bad in it yet.
oh sure you could assume that a system is currently malware free and start your whitelisting regimen from that (potentially pre-pwned) state. you could assume that software direct from the vendor is safe to add to a whitelist too (because microsoft never accidentally distributed infected materials, right?). you could even assume that things that are digitally signed are safe (it's not like stuxnet was digitally signed or anything).

of course, we know what happens when you assume. the reality is that even if we do adopt whitelisting we have to continue enumerating badness for the purposes of maintaining the whitelist. whitelisting stands on the shoulders of blacklisting - it has to, our only other criteria are assumptions that have all been proven false in practice.

as such, whitelisting can never replace blacklisting, it can only ever complement it.
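to make that dependency concrete, here's a minimal sketch (the hash values and scanner are purely illustrative) of what admission to a whitelist actually looks like in practice:

blacklist = {"hash-of-known-bad-file"}   # enumerating badness
whitelist = set()                        # enumerating goodness

def scan_for_badness(file_hash: str) -> bool:
    """the blacklist check - 'clean' only means nothing bad was found *yet*"""
    return file_hash in blacklist

def admit_to_whitelist(file_hash: str) -> bool:
    # there is no positive test for goodness, so maintaining the
    # whitelist means continuing to enumerate badness first
    if scan_for_badness(file_hash):
        return False
    whitelist.add(file_hash)
    return True

the only gate on the whitelist is a blacklist-style scan - which is the whole point.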

[* according to figures by whitelisting vendor bit9 that i mentioned here, and frankly the idea of a malicious few coders out-producing the benign many seemed silly anyways]

Monday, May 23, 2011

the mac malware phenomenon

i posted something to twitter earlier (which was already too big to actually fit in a normal tweet) but i think there's more to be said.

those who downplay the mac threat landscape by comparing it to the pc are missing the point. the mac will never be the pc, it'll never follow the same path or be in exactly the same place, but the mac community was sold false hope and many of them are either unaware or in denial about the fact that they were lied to.

one of the most crippling things about mac security awareness is the pc comparison. people can't look past the fact that things aren't as bad for the mac. does that really matter? crime in your neighborhood likely isn't as bad as crime in a ghetto (unless you happen to be unfortunate enough to be living in a ghetto), but does that mean it's safe to leave the door to your house or car unlocked? no.

stop thinking about the comparison, stop thinking about the pc entirely. imagine there are no pcs. think about the things that have happened in the mac landscape over the past several years and what they mean to the various subsets of the mac user population.

there are users who believe macs are immune to viruses. that was proven technically false five years ago (i wrote about it here). the viruses that have been produced may have never reached epidemic proportions, but epidemics are a poor yardstick for measuring risk. car crashes aren't exactly an epidemic, but you should still fasten your seatbelt.

there are users who believe macs are inherently secure because of their *nix lineage. this, in spite of the fact that the initial academic investigation of the concept of computer viruses involved successful experiments in a professionally administered unix environment. also in spite of the fact that rootkits originally came from the *nix family of platforms. also in spite of the fact that a security researcher renowned for successfully attacking the platform on multiple occasions has explicitly contradicted the notion that macs are especially secure.

there are those who believe that for something to be a real threat it has to activate by itself without user intervention, that something that requires the user's help is only an issue for dumb users. this despite the obvious success of social engineering attacks like phishing that are already platform agnostic.

there are users who believe macs aren't really a target of criminals yet. they believe that the criminals have bigger fish to go after so the mac isn't worth the effort. they believe criminals have to make an either/or decision about which platform to attack. these beliefs are in stark contrast to a nearly 4-year-old reality of professional cybercriminals attacking the mac platform - specifically the zlob gang taking their already successful windows trojan and porting the important functionality over to the mac (which i mentioned here and here). the more recent example of java based malware is an indication that the cybercriminals are trying to take the either/or question out of the equation entirely.

and then there are those wonderful users who are actually security aware but somehow believe the rest of the mac user community is largely like them so the efforts to raise awareness of security issues are pointless and alarmist. this is even though it's plain to see that security aware people are a minority in any population. and as far as the mac user population goes, apple's marketing was quite clearly designed to appeal to people based on style, image, and simplicity, and told users that security was something they didn't have to worry about. to imagine such a population is somehow better at dealing with security issues than the average person seems unjustifiably optimistic.

macs can be attacked, they have been attacked, their attackers are enjoying increasingly numerous successes, and not enough mac users know it. it's a growing threat and there has yet to be a compelling argument put forward that it won't continue to grow. knowledge is the first prerequisite for people to be able to protect themselves. stop pretending everyone knows what you know or is smart enough to make the threat a non-issue - there's just too much variety amongst humans for that to be true.

Thursday, May 19, 2011

Snake-Oil 'R' Us

it seems that snake-oil is changing with the times, evolving and getting worse.

worse? how could it possibly get any worse?

well, i've mentioned in the past how certain products' very names can represent snake-oil - names like "total protection" or "total security" instill in the user the false belief that they are totally protected and don't have to worry anymore.

well pretty soon there's going to be "total defense" too.

how is that worse? well, "total protection" and "total security" are just product names. total defense? that's apparently going to be a company name. a company that has snake-oil running through its veins, i suppose. probably not a surprising move for updata partners, the technology venture company running the show - a venture company's focus is on making money. they're buying computer associates' internet security business unit, and they aren't existing members of the security or anti-malware community/industry. but they're going to be part of the anti-malware industry, they're buying their way into it, and they're starting from a position without the established norms and ethics of either the community or industry. no wonder the ethical landscape has been eroding over time.

Monday, May 09, 2011

security small talk

[i'm republishing this secmeme post here because, although the topic is more fitting for secmeme, the intended audience is better addressed here - and if you're like me, you probably hate being directed to some outside site when you're going through your RSS feed.]

pursuant to a brief discussion i had with @diami03 (aka michelle k.) on twitter earlier today, some thoughts popped into my head.

specifically, with regards to how well known the concept of the nigerian 419 scam is, i said

she was not happy with that. admittedly it was a rather crass way of expressing the principles i had in mind, but i stand by them (even if i also find them disappointing).

put differently there are two things in play. the first (and probably the one most are familiar with) is that people often prefer to be entertained rather than informed. if i'm being totally honest, i feel the same way sometimes.

the second is that (at least to my mind) a good indicator of how well our culture has assimilated a particular piece of information is how easily/frequently that information finds its way into everyday chatter (i.e. small talk).

now normally my memetic ramblings are intended for the broadest audience i can manage, but this is a special case. injecting security into small talk logically must start with the people who are security aware. many security geeks probably already do this to a certain extent - after all, if people can talk about the weather or last night's game, why not security topics too?

now i'm not the best person to advise on how to engage in small talk (far from it in fact) but there are a few things i think are self-evident. first and foremost is that this is not an opportunity to give a lecture, or to talk like you're presenting at a conference. most people don't want to go back to school and if you start sounding like a teacher they're going to tune you out. so how can you shoot the breeze about security with non-security folks? here are a few strategies:
  1. everybody loves a spectacle so keep your eye out for them and use them opportunistically. database breaches aren't sexy or interesting, but sony's loss of over 100 million private records breaks the boredom barrier by sheer size alone. so much so, in fact, that you may well find that people have already heard about it in the mainstream media. that's a bonus, it means you can talk about something they've already heard about.
  2. if they're really your friends then it stands to reason that they have at least a modicum of interest in how your day was. did you see a nigerian 419 scam in your email today? great, mention that in passing. did you see two or more of them in the same day? even better. after all, how many dead princes (or whatever) can there really be out there? if wealth and death are as strongly correlated as those scam emails suggest then i think i'd rather stay poor.
  3. when you mention things that you think might directly affect them, you're showing concern about them, you're showing an interest in their well-being. everyone wants their friends to be interested in them in some way so that display of interest should make them perk up their ears and take notice. i used this strategy myself with the epsilon breach, sending links to the list of affected merchants to some of my friends so that they could look over the list and see if the breach was likely to affect them personally.
if you're concerned about the quality of information that is passing from person to person, it's up to you to help put better information into the mix. don't be afraid to throw in a few security topics when chatting with friends. they probably already know you're a security geek so they'll understand why you're interested in it, and if you can make it even a little bit interesting for them then they might pass it along.

Thursday, May 05, 2011

thoughts on viral facebook scams

in response to a certain discussion on twitter, i found some ideas floating around in my head that just don't fit in a tweet, so i thought i'd share them here instead.

one of the ongoing problems on facebook is the phenomenon of viral scams - scams that spread in a viral manner across the facebook userbase and trick users into doing various things (whether it be installing a rogue facebook app, clicking an invisible link, or copy-n-pasting javascript into the URL bar).

there are at least 2 contentious aspects to this phenomenon. the first is whether facebook has things under control. there are certainly those who are arguing that it's not under control, that the numbers are ever increasing. there are also those who argue that, on the whole, facebook is acting in a timely manner to deal with these threats to their users. my own experience is that i rarely actually encounter these viral scams, which certainly could support the argument that they're getting killed quickly - quickly enough that they die before they make their way to me. on the other hand, though, when i do encounter these scams it doesn't appear to me that facebook is dealing with them expeditiously at all. a day or more to kill a viral scam campaign? really? i think they could do better - in fact, i have some specific ideas that i intend to share a little further on.

the second contentious aspect is what the best way to deal with these scams is: whether it's better for facebook to police its network more effectively or alternatively to go after the industry whose gray areas are responsible for the lion's share of the scamming (i.e. the cost-per-action / CPA marketing industry). technical defenses employed by facebook will always be a bit of a game of whack-a-mole, but they're relatively quick and easy to implement without involving a lot of other parties. investigation and enforcement of legal authority against CPA firms can certainly have some long-lasting effects but there are problems; it takes a lot of time and coordination from the law enforcement community, and CPA is only the current low-hanging fruit from a malicious business model perspective. that means, ultimately, going after CPA firms or even entire industries will also wind up being a game of whack-a-mole, it'll just be much slower and it will be law enforcement playing the game instead of a technology company.

i believe both technical defenses and legal authority are appropriate tactics. technical defenses are useful for dealing in the near term with that which has not yet been dealt with in the long term, while exercising legal authority tends to have more of a long term effect and creates a much bigger disruption to malicious business models (potentially requiring entirely new malicious business models to be developed to compensate). right now we don't yet have the benefits that legal authority can provide so we need to use technical defenses as a stop-gap at the same time as we pursue legal avenues. if/when legal authority manages to take out most of the CPA abuse channels that the scammers are currently exploiting, those scammers will monetize something else so we'll continue to need those technical defenses as we adjust our legal tactics to their new business models. in essence, technical defenses and legal authority represent compensating controls in a multi-layered approach to the problem.

to that end, i have some specific technical ideas for facebook.

all viral scams must exploit one of facebook's many communications channels. facebook needs to monitor these channels and apply some heuristics to help identify the viral scams.

to start out with, they should apply a k-nearest-neighbor algorithm or some other suitable similarity measure to the communications (do not use hashing - i hardly ever see the scams but even in the few i have seen, i've seen hash busters being used) in a sliding window of time (to limit the size of the corpus facebook would need to analyze). messages, wall posts, events, etc. that cluster together as being highly similar are likely all part of the same viral campaign and should be classified as such. being part of a large cluster should cause the message (or rather the entire set of messages) to be flagged for additional review at the very least. if that review is manual, one could prioritize the review based on the size of the cluster. not all viral communications are bad, mind you - it could just be a really good joke, or a political activist campaign, or something else legitimate - that's why virality alone isn't enough to classify something as bad, but it is a good start for narrowing the scope of the analysis.
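to sketch the idea (this is my own illustration, not a claim about what facebook should literally deploy - jaccard similarity over word shingles stands in for whatever measure they'd actually choose, and the 0.7 threshold is a guess):

def shingles(text: str, k: int = 3) -> set:
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity(a: str, b: str) -> float:
    # hash busters change a few tokens, so near-duplicate detection
    # has to tolerate small edits - exact hashing would miss them
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def cluster(messages: list, threshold: float = 0.7) -> list:
    """greedy single-pass clustering over a sliding window of messages"""
    clusters = []
    for msg in messages:
        for c in clusters:
            if similarity(msg, c[0]) >= threshold:
                c.append(msg)
                break
        else:
            clusters.append([msg])
    # the largest clusters are the best candidates for review
    return sorted(clusters, key=len, reverse=True)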

now, even if facebook stopped at this point, they'd still have something very useful for killing viral scams. identifying the set of nearly identical messages that a particular message belongs to means that you can aggregate data and judgments about those messages. an abuse report for one message is an abuse report for all, a flag to disable display of one message is a flag to disable display of all, a flag indicating one message was verified clean is a flag indicating all are verified clean, etc.
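a minimal sketch of that aggregation, assuming the clustering step above has already assigned each message to a cluster:

cluster_of = {}        # message_id -> cluster_id (from the clustering step)
cluster_status = {}    # cluster_id -> "flagged", "disabled" or "clean"

def report_abuse(message_id: str) -> None:
    """one abuse report disables the entire viral campaign at once"""
    cluster_status[cluster_of[message_id]] = "disabled"

def is_displayable(message_id: str) -> bool:
    return cluster_status.get(cluster_of.get(message_id)) != "disabled"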

if facebook were to go further, though, the next thing they could do as a simple static heuristic is to check to see if the message contains javascript code that is displayed to the user. most users cannot read or understand javascript. for most, the only reason they'd see that in a viral message is if they're supposed to copy and paste it in order to bypass facebook's existing defenses against malicious javascript. that makes it a pretty good indicator of malicious intent.
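a minimal sketch of that check - the token list is illustrative, not exhaustive:

import re

JS_PATTERN = re.compile(
    r"javascript:|document\.cookie|eval\s*\(|\.src\s*=",
    re.IGNORECASE,
)

def visible_javascript(message_text: str) -> bool:
    """flag messages whose displayed text looks like pasteable javascript"""
    return bool(JS_PATTERN.search(message_text))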

another simple heuristic would be the presence of a link whose destination is obscured by a URL shortener. a much more contentious heuristic, of course, but i'm not really suggesting any of these on their own should be taken as proof of malice - only that they each incrementally increase the level of suspicion.
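a minimal sketch of the shortener heuristic, assuming links arrive as full URLs. the domain list is a small illustrative sample - a real deployment would maintain a much longer one:

from urllib.parse import urlparse

SHORTENER_DOMAINS = {"bit.ly", "tinyurl.com", "goo.gl", "ow.ly", "t.co"}

def suspicion_score(links: list) -> int:
    """count links whose destination is obscured by a known shortener"""
    # each hit only raises suspicion - it is not proof of malice on its own
    return sum(1 for link in links
               if urlparse(link).netloc.lower() in SHORTENER_DOMAINS)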

a third simple heuristic would be links to facebook applications where the app developers haven't been registered very long. it's not impossible to hit it big on facebook right out of the gate, but there are nuances that aid in growing an application's popularity that you wouldn't really expect a newbie to know right off the bat. a really big viral cluster for a really new app developer should definitely raise some eyebrows.
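a minimal sketch, with thresholds that are illustrative guesses rather than tuned values:

from datetime import datetime, timedelta

def suspicious_new_developer(cluster_size: int,
                             developer_registered: datetime,
                             min_age_days: int = 90,
                             big_cluster: int = 1000) -> bool:
    """a big viral cluster pointing at a very new developer's app"""
    age = datetime.utcnow() - developer_registered
    return cluster_size >= big_cluster and age < timedelta(days=min_age_days)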

next, as an active heuristic, an automated process attached to a dummy facebook account could be sent to click its way through the trail laid out in any message that has a link in it, in order to see if any path results in the dummy account sending out a message belonging to the same viral campaign it started from. this means adding applications where requested, following links to outside pages, even pasting the contents of the clipboard into the URL bar when the trail leads back to facebook's domain.
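in sketch form (follow_trail is a hypothetical stub standing in for the sandboxed browser automation facebook would actually need):

def follow_trail(message) -> list:
    """click through links/apps as the dummy account; return the
    messages the account ends up sending. stub for illustration."""
    return []   # the real work: browser automation in a sandbox

def is_self_propagating(message, cluster_id, cluster_of) -> bool:
    # did following the trail make the dummy account re-emit a
    # message from the very cluster the trail started in?
    return any(cluster_of.get(out) == cluster_id
               for out in follow_trail(message))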

finally, recognizing that different viral clusters could be related, there needs to be a mapping between the inputs and outputs of those dummy accounts so that facebook can catch the condition where an incoming message from viral campaign A leads to an outgoing message from viral campaign B, which in turn goes through 0-N other viral campaigns as inputs and outputs before arriving back at campaign A.
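treating campaigns as nodes and those input/output observations as edges, the check is just cycle detection. a minimal sketch:

def find_cycle(edges: dict, start: str) -> bool:
    """depth-first search: does any path from `start` lead back to it?"""
    stack, seen = list(edges.get(start, ())), set()
    while stack:
        node = stack.pop()
        if node == start:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(edges.get(node, ()))
    return False

print(find_cycle({"A": ["B"], "B": ["A"]}, "A"))   # campaign A feeds B feeds A: True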


i don't know what methods facebook is using right now, but if they were aggregating nearly identical messages it shouldn't have taken a day or more to kill the last viral scam i encountered and it shouldn't still be possible to easily find something like this half a month later:
those may be neutered to the extent that they can't contribute to viral spread in facebook's environment anymore, but who knows what the pages those point to could be changed to do to people's PCs now that the main campaign is dead. those events should have been deleted a long time ago. leaving remnants of viral scams in people's accounts is a little like leaving remnants of viral code in disinfected programs.

Thursday, April 21, 2011

essential FUD

upon reading mike rothman's recent post on categorizing FUD i was struck with a rather surprising realization. not only has the much reviled APT suffered semantic dilution, but apparently so has the seemingly simple concept of FUD.

i say semantic dilution rather than semantic drift because, rather than taking on a new meaning, the apparent elimination of uncertainty and doubt from mike's description means that it's 2/3rds of the way to having no meaning at all. it seems that anything invoking fear is now some kind of FUD - but can that be true? are fear and FUD interchangeable? do we want to make them interchangeable? wouldn't we really only be saving a single keystroke in the process?

i don't agree with mike's characterization of FUD (which seems only fitting as mike doesn't agree with much i say). although i have tried to define FUD before, i've never gone into enough depth that it would contradict interpretations like mike's. that changes today.

if there is one part of fear, uncertainty, and doubt that could clear this all up if it weren't so often completely overlooked - one word that held a surprising amount of meaning - it would be:

AND

it's not fear, uncertainty, OR doubt, boys and girls, it's AND. we're talking about the intersection of the three, not the union. no single one of fear, uncertainty, or doubt qualifies as FUD on its own. FUD requires the presence of all three. fear is only part of it. i'm tempted to say fear is only the beginning, but that's not true - something as yet unmentioned is the beginning, and fear, uncertainty, and doubt are the consequences.

what's going on behind the scenes with FUD, what makes it such a bad thing beyond the simple fact that it's used as a manipulation, is that it introduces an inaccurate mental model that competes with superior ones and the results are rather insidious. mental models inform our actions. they allow us to predict outcomes and consequently allow us to make plans designed to control outcomes in our favour. they are a tool which allows us to effectively formulate strategies for satisfying our basic human needs.

unfortunately, mental models are never 100% complete. there are always holes, always missing pieces and weak points. these are what FUD models exploit in order to compete with existing mental models. obviously if someone's mental model is more complete and internally consistent they are less susceptible to FUD because they "know better" than to fall for it, but unfortunately many people have mental models that are largely incomplete so a FUD mental model has a good chance of taking hold and effectively competing with the model the person had.

that competition is a problem. it causes a person to be confused, to question what they thought they knew. this is the uncertainty - the U in FUD. subsequent to that a person would then logically start to distrust the sources that had informed them and helped them form their previous mental model. this makes it difficult for those or similar/consistent sources to fill in blanks in the original mental model and thus interferes with a person's ability to build a better mental model. this is the doubt - the D in FUD. finally comes the logical conclusion that if what one thought one knew was wrong then the steps one took based on that knowledge could also be wrong. the consequence of that being that the person is no longer prepared or capable of handling something they needed to handle and the emotional reaction to that is fear - the F in FUD.

recapping then, a more in-depth account of FUD is that it is a communicated inaccurate mental model that causes:
  • uncertainty about what you know
  • doubt in those whom you learned from or could learn from in the future
  • fear that you're no longer going to be able to satisfy some need that you have
it should be noted that that same fear can result when you fill in some of the blanks of an incomplete mental model. what differentiates that from FUD, however, is that although fear can result in the short term, there's no uncertainty or doubt. building a better, more complete mental model results in a person being better able to develop strategies to satisfy their needs in the long run and is thus beneficial, as compared with FUD which stymies a person's ability to develop effective strategies.

now the argument could be made that mike himself was spreading FUD about FUD (meta-FUD). a model of FUD that seemingly allowed for anything involving fear to be called FUD would certainly make people uncertain about what they previously knew about FUD and doubt the people that had previously informed their opinions about FUD. and since mike also opened the door for the possibility of good FUD and suggested that FUD was more widespread than one would have otherwise thought (as a consequence of dropping 2/3rds of the requirements), there would certainly be room for people to be concerned that they no longer knew how to navigate the sea of FUD mike was depicting and thus be afraid of getting duped.

on the other hand, however, the argument could also be made that mike's model of FUD is simply incomplete (seemingly missing uncertainty and doubt) and that what might appear to be meta-FUD is actually inaccurate conclusions drawn as a result of missing pieces of that model.

i'm not going to accuse mike of spreading meta-FUD, primarily because i feel accusations of FUD spreading should be reserved for those who should know better than to believe the model they're communicating. those spreading inaccurate or inferior mental models unwittingly should certainly be notified, however.