Wednesday, June 30, 2010

sample this

i seem to be getting a lot of mileage out of the NSS folks lately. one might think it's because i have some sort of grudge or maybe that i'm one of the av industry's 'pets' and just defending my master (like i did back when i accused a big chunk of the av industry of being snake oil peddlers, i'm sure). the truth is, however, that NSS Labs have simply given me a wealth of material to work with. please, rick or vikram, talk some more.

in a previous post i linked to this video taken at the source boston conference this year. it's a panel discussion with vikram phatak, peter stelzhammer, and mario vuksan, with andrew jaquith acting as moderator. i'm going to be referring to it again in this post - specifically at about the 25:00 mark in the video where vikram talks about exploits.

now i'm going to be generous and overlook the fact that he confuses exploits with vulnerabilities at one point. it's vulnerabilities that are the flaws; exploits are what take advantage of those flaws. the truth is public speaking isn't necessarily easy and things can come out wrong (or not come out at all), as i found out not too long ago, so it seems reasonable enough to me that he actually meant vulnerability when he said "it's an exploit, it's a flaw in IE or in Firefox or..."

what doesn't seem reasonable to me, however, is the idea he was actually talking about vulnerabilities rather than exploits for that entire part of the discussion. i'm fairly certain he was talking about exploits for the most part, so when he trotted out this idea that there are no samples when it comes to exploits my immediate reaction was "that's bullshit".

we exist in a universe of cause and effect. well, ok, determinism breaks down at the quantum level but at the macroscopic level determinism is the overriding trend - and even more so when we're talking about digital computers because they are deterministic finite state machines. the exploitation of a vulnerability doesn't just happen by magic, this isn't some sort of cyber-voodoo, there is a causal agent, an actor, some chunk of data that the computer receives which, when the computer operates on it, results in the vulnerability being exploited.

and guess what that chunk of data is - that's right, it's the supposedly fictional exploit sample. it doesn't matter that it's not an *.exe or that it may not even be saved to disk, that just makes collecting it more challenging. when asked for a sample, that chunk of data (along with a way to replay its delivery to the vulnerable system/subsystem) is what is being requested - an example of the agent which causes the vulnerability to be exploited.

at this point some of you might be thinking about the fact that an exploit for a particular vulnerability can take many different forms and might even be thinking that that would make the previously described sample worthless. 'many forms' is not a new concept in the anti-malware field. the fact that an exploit can come in many forms is just another aspect of polymorphism. that's probably not a word that gets associated with exploits very much, in part because of a rather arbitrary convention of differentiating between 'code' and data. code is data, however, and any data the computer bases a decision on is for all intents and purposes the same as an instruction (the data dictates what the computer will do next) and can be considered a kind of code. so while exploits may not superficially resemble other polymorphic things from the past like polymorphic viruses or server-side polymorphic trojans, exploits are programs of a sort and multiple different exploits for the same vulnerability are functionally equivalent programs - which, when considered in that light, actually does bear some conceptual similarity to server-side polymorphic trojans. as such an exploit sample doesn't become worthless just because there are other forms it could take, it just means that detecting that one form represented by the sample is only the beginning.
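
to make that a little more concrete, here's a toy sketch in python (the parser, the length field, and both inputs are entirely made up - this is not drawn from any real vulnerability). two byte sequences that look nothing alike both drive the same flawed code path, which is all an exploit sample really needs to do:

# toy sketch: an 'exploit sample' is just a chunk of data
# (hypothetical parser and inputs, not based on any real vulnerability)

def vulnerable_parser(packet: bytes) -> bytearray:
    """pretend protocol handler: the first byte declares the payload length,
    and that attacker-controlled length is trusted without being checked
    against the 64-byte buffer - that's the 'vulnerability'"""
    buffer = bytearray(64)
    declared_len = packet[0]
    payload = packet[1:1 + declared_len]
    for i, b in enumerate(payload):
        buffer[i] = b   # no bounds check: overruns (here, raises IndexError) past 64 bytes
    return buffer

# two superficially different 'samples' exploiting the same flaw -
# functionally equivalent, much like server-side polymorphic variants
sample_a = bytes([200]) + b"A" * 200
sample_b = bytes([150]) + bytes(range(150))

for name, sample in (("sample_a", sample_a), ("sample_b", sample_b)):
    try:
        vulnerable_parser(sample)
    except IndexError:
        print(name, "triggered the flaw")

neither of those inputs is an *.exe and neither needs to touch the disk, but handing either one (plus a way to replay it against the vulnerable parser) to a tester gives them everything they need to reproduce the exploitation.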

Tuesday, June 29, 2010

NSS Labs vs. AMTSO?

further to my previous post about the NSS Labs report as blogged about by brian krebs, while reading brian's post i found myself wondering how he got such a negative impression of the AMTSO.

the AMTSO is a bunch of people from the anti-malware community trying to hammer out some better approaches to testing anti-malware products because frankly there are a lot of bad tests out there. the people in question come from independent testing organizations like av-comparatives and av-test.org (as well as others i'm less familiar with), and also from anti-malware vendors. many of them are employed by vendors precisely because that's one of the primary places where one with expertise in this field would find employment. oftentimes those people will even represent their employers but that doesn't make it a "vendor-driven consortium" as claimed by rick moy here, nor any kind of "cartel" as claimed by vik phatak in this video (at approximately 84:15).

in fact, the NSS folks themselves were members of AMTSO at one point but pulled out for some reason. surely if they were members they thought there was merit to the project, so why pull out? one possible reason might be sour grapes (certainly in keeping with the verbiage) over a review of one of their tests which you can read here. while the review was requested by particular vendors (which one might interpret as being vendor driven - though reviews are ancillary to the formation of the testing standards) it was carried out by people from other organizations, two of which were testing organizations themselves.

there are always many sides to any story, but painting AMTSO as a vendor driven consortium (or worse, a cartel) in the current climate, where vendors are a favoured whipping boy of the general security community, effectively demonizes AMTSO and everything they're working towards. that rubs off on people and shows through when brian used words like "cantankerous" and "cobbled" (not to mention the #fail hashtag he used when promoting the post on twitter).

the story brian got for the split between NSS and AMTSO was that AMTSO favoured fairness to vendors over representing reality. one of AMTSO's goals is to (as much as possible) eliminate bias from tests. eliminating bias is fair to vendors and results in a more accurate representation of reality at the same time. do NSS and AMTSO have a fundamental difference of opinion over what constitutes bias? did NSS decide to take their ball and go home when they couldn't get their way? after reading AMTSO's review of the NSS report as well as listening to vik phatak on the subject (he makes an oblique reference at approximately 72:35 in the video) and reading rick moy's blog post, i'm left to conclude that this is the case.

if you're setting out to test a product's ability to protect users from "socially engineered malware" (NSS' wording, not mine) then it stands to reason that you should include in that test the technologies that help to block the social engineering itself (i.e. the spam filter) in addition to the technology that blocks the malware. NSS could easily change their wording to allow themselves a more narrowly defined scope, but that would be moving away from the sort of whole product testing that NSS evangelizes. alternatively NSS could accept that spam filters, though often taken for granted and even dismissed as irrelevant in malware testing, do have protective value, and that by ignoring anti-spam technologies NSS introduced bias into their report. NSS has done neither of these things, instead parting ways with AMTSO and now trying to discredit the organization (with some apparent success).

despite AMTSO's efforts there is never going to be a perfect test. there is never going to be a complete absence of bias or a complete absence of measurement error. there will always be some grounds upon which any test can be criticized and testing organizations who can't take criticism would do well and would serve the public's interests if they got over themselves and learned to take that criticism as an opportunity to improve - because they should always be improving.

Monday, June 28, 2010

of lines and graphs

i made a comment on brian krebs' recent blog post "Anti-virus is a Poor Substitute for Common Sense" that seems to have gotten a number of negative reactions from other readers. i thought perhaps i should expand on my comment here so as to demonstrate why i wasn't just nitpicking.

first we need an example of a line graph like the one in brian's post:
unlike the graph from the NSS Labs report that brian posted about, i've made it clear where my actual data points are so that you can see clearly what interpolation can do to a graph. while i have absolutely no data points above 70 the line still goes up above 70 and then comes back down, just like the graph in the report in question.

now, if you couldn't see where the data points actually were you might easily think that the data actually showed the value went above 70 and then came back down again. in fact one could easily make the mistake that every point on the graph represented what really happened, even the points for which there is no actual data, because a line graph without the data points clearly denoted implies that there is continuous data along the entire graph. but the reality is our data is not continuous, it's discrete. we take a finite number of measurements at fixed points in time.

even with the data points clearly marked on my own example graph above there is the implication that had i actually measured at some point i would have gotten a value on the line, even though that isn't necessarily true, and in fact there are many many points where that is most certainly false. let's say for example that my graph shows how the detection rate changes over time on a fixed set of 10 items. the detection rate can never be 15%, it's simply not numerically possible even though the line implies it is.
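
for anyone who wants to reproduce the effect, here's roughly how such a graph can be generated (a sketch with made-up numbers - not the data behind my graph above and certainly not NSS' data). a smoothed interpolation through discrete measurements happily invents values that no measurement supports:

# sketch with made-up detection rates (multiples of 10% on a fixed set of 10 items);
# not the data behind the graph above and not NSS' data
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import make_interp_spline

days = np.array([0, 1, 2, 3, 4, 5, 6, 7])
rates = np.array([10, 30, 50, 70, 70, 50, 30, 40])   # % detected, measured once per day

smooth_days = np.linspace(days.min(), days.max(), 300)
smooth_rates = make_interp_spline(days, rates, k=3)(smooth_days)   # cubic spline interpolation

plt.plot(smooth_days, smooth_rates, label="interpolated line")
plt.plot(days, rates, "o", label="actual measurements")
plt.legend()

print("highest measured rate:", rates.max())                              # never above 70
print("highest rate implied by the line:", round(smooth_rates.max(), 1))  # overshoots past 70
plt.show()

the cubic spline overshoots above 70 even though nothing was ever measured there, and between the 10% and 30% measurements the line passes straight through values like 15% that a set of 10 items can never produce.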

these are some of the problems you encounter using a continuous data visualization for discrete data. i'm not trying to suggest these are huge problems but they are problems because they mislead the reader. these subtle sorts of things are exactly what it means to lie with numbers/statistics. it makes no sense for there to have been periods of time during which the detection rate on a fixed set of malware went down instead of up (go on, pull my other leg) and that was almost certainly the result of interpolation rather than something reflected by actual data.

the misapplication of continuous data visualizations for discrete data is a hallmark of junk science. i don't know if that is representative of the work NSS Labs actually does (i've yet to successfully penetrate their reg-wall in order to see for myself) or if it was just a simple mistake - greater transparency on their part (as apparently discussed by david harley) would allow better peer review and help eliminate uncertainty about such things. as it stands, however, i have an admittedly small reason to be skeptical of them. their own marketing (PDF) bills them as being scientific and expert but a scientist ought to understand his/her data better than this.

Friday, June 25, 2010

lessons from the past

one of the criticisms i sometimes have of people in the security field is that they seem to fail to learn from the past. this isn't necessarily a fair criticism since often the people i'm directing that criticism towards aren't familiar with the past events i'm thinking of. as such this post is meant to serve a dual purpose - a) to help the people of today become familiar with the lessons of yesterday, and b) to help preserve something that i'd rather not see become a victim of the false sense of posterity that usenet archives like google's afford us. this was actually not my first choice of usenet posts to republish here, but my first choice seems to be well and truly gone.

the events described here took place some 21 years ago, back when the mores and traditions of the anti-virus community/industry weren't quite as strict as they ultimately became. in fact, i'd hazard a guess that the events described here and the lessons learned from them are at least part of the reason those mores and traditions became so strict. read on to find out what can happen when you handle malware either carelessly or even carefully but not quite carefully enough. from frisk's post in comp.virus/virus-l:
From: fr...@rhi.hi.is (Fridrik Skulason)
Newsgroups: comp.virus
Subject: Two serious cases (PC)
Message-ID: <0007.9001031142.AA02943@ge.sei.cmu.edu>
Date: 27 Dec 89 12:47:52 GMT
Sender: Virus Discussion List 
Lines: 65
Approved: k...@sei.cmu.edu

Most virus researchers exchange/distribute viruses only on a strict
need-to-know basis, in order to limit the spread of viruses. However, this
does not work as well as intended. There are now two known cases where
untrustworthy people seem to have obtained viruses from researchers.

Case #1: Icelandic-1/Saratoga

     I discovered the Icelandic-1 virus here in Iceland in June this year.
     When I had disassembled it, I sent a disassembly of an infected file
     to several experts in the USA, UK and Israel, including the HomeBase
     folks (McAfee). Before I sent out the disassembly, I made one small
     change to it. This change had no effect on the operation of the virus,
     but it would make it possible to determine if a copy of this virus found
     outside of Iceland was based on my disassembly or not.

     Looking back, I can see that this was not a very good idea, simply
     because there was a possibility that somebody might select an invalid
     identification string, based on this disassembly. So, those of you having
     a copy of my disassembly, please contact me if you want to correct it.
     This change was also (by accident) included in the Icelandic-2
     disassembly, since I used the Icelandic-1 disassembly as a basis for
     that.

     Now - back to the Icelandic-1 virus.

     Three days after the virus was made available on the HomeBase bulletin
     board, in a restricted area that only a few people had access to, a new
     virus was discovered in Saratoga and uploaded to the HomeBase BBS. Some
     people thought for a while that Saratoga was an older variant of
     Icelandic-1, because it was at first said to have been found "a few
     months earlier", but this turned out to be a misunderstanding.

     Saratoga was just a minor variant of Icelandic-1, but the change I made
     was present in the virus, so it was obviously based on my disassembly.
     When Saratoga was found, I had only sent Icelandic-1 to three or four
     persons in the US - and, as far a I know, it had only been made available
     to other persons in one place (HomeBase).  They believe that the person
     responsible for the creating "Saratoga" has now been found, and his
     access to the restricted area has been terminated.


Case #2: Dbase

     The dBase virus was discovered by Ross Greenberg. It seems to have been
     planted at only a single site, because no other reports appeared for
     several months. Recently Ross made the virus available to a number of
     virus researchers. Within two weeks the first infection reports had
     started to arrive - the virus had escaped.

     We know that at least some of the reported infections were based on the
     copy from Ross, because he made one small change to the virus, before it
     was distributed. One instruction was overwritten by two "harmless"
     instructions, in order to disable the most harmful effect of the virus -
     the disk trashing part. This change is also present in some of the
     infected files that have been found recently. (In other cases the
     original instruction is present)

As I said before, I do not consider it a very good idea to make changes to
viruses, but it paid off in the two cases described above. Who knows how
many other cases of virus infections are (indirectly) the result of virus
collection/distribution by virus experts.

At least it is certain that we have to be a lot more careful in the future.

- -frisk

Thursday, June 24, 2010

these aren't the cyberweapons you're looking for, move along

i was reading robert graham's post about how cyberwar is fiction and i found myself not quite agreeing with it.

to be sure, the concept has been overhyped a lot and had his post been exclusively about that then i would have agreed with him wholeheartedly.

additionally, had it rested on the contention that so-called cyberwar actions that have naively been attributed to nation-states were actually the work of online militias lacking official endorsement from their respective governments, i would have agreed with that too.

but as the title of his post suggests, he goes as far as to say that cyberwar, and even cyberweapons, are complete fiction, and that does not ring true. i'll do my best to avoid committing an act of cyber-douchery by condoning those particular terms, but the ideas they are meant to represent are not so far beyond the scope of reality as robert suggests.

virtual weapons, or perhaps logical weapons, would be the more straightforward of the two supposed fictitious concepts. robert contends that there are no such weapons, only tools. i would counter, however, with the assertion that a tool intended to cause harm (be it physical harm, logical harm, or some combination of the two) is a weapon. followers of the malware field know all too well there's no shortage of such tools in actual existence, so such weapons are far more than just a flight of fancy or an analogy taken too far.

robert's point regarding warfare is a little more involved. he's right that the way cracking works diverges wildly from the traditional western notions of real-world warfare involving tanks and planes and overwhelming forces, but just because the west has supersized everything, including the way they wage war, doesn't mean that's the only way it can be waged. war doesn't have to be mindless (get a bigger dog indeed) and banal, it can also be subtle under the right circumstances. as robert described the opportunistic nature of cracking and how that was fundamentally incompatible with the goal-oriented nature of the military, i found myself thinking about how much better it would fit with the guerrilla warfare approach. i then started to think about the (perhaps apocryphal) stories of CIA operatives fostering insurgent forces in foreign lands to help overthrow governments and install political puppets friendly to US interests. surely that would be a case of one nation waging a secret war against another (if real), and surely it's more opportunistic in its execution than what is normally depicted in war movies and the like.

as such, it seems to me that the concepts of cyberweapons and cyberwar are not fiction, at least not with a sufficiently general definition of weapon and war. the specific terminology may be poorly chosen, and the concepts misapplied in practice, but that's not really the same as being fiction.

Wednesday, June 23, 2010

public privacy is an oxymoron

i was just reading (yes, i know i'm behind) this post by paul ducklin about public unprivacy and i was struck by how absurd it sounded. perhaps things are different in australia but where i come from there is no reasonable expectation of privacy in public spaces. this isn't new, either, it's been this way for a long, long time - long before google arrived on the scene, that's for sure.

apologies to paul, but he seems to be misapplying the concept of privacy outside the scope where it makes sense. i've seen similar things in my professional career as well. even so-called 'privacy experts' seem to want to try and get the benefits of privacy in situations where it just doesn't work.

privacy has limits. there are situations where the strategies that comprise privacy make sense and work well and there are situations where they fail for practical reasons. it doesn't matter if keeping X private would be useful or desirable, if the practical realities prevent it then privacy is the wrong tool. being out in public where everyone can see you is one situation where privacy logically can't work (unless perhaps you wear a shroud, but even then only if lots of other people are wearing shrouds too and only if one can't tell the shrouds apart).
i see london,
i see france,
i see the colour of paul ducklin's pants.
preventing filming in public doesn't protect or restore anyone's privacy. if you're in public then the public can still see you, they can see where you're headed, what you're wearing, what you're holding, who you're with, etc. everything that could be photographed can still be seen by the people around you. all preventing filming will do is make it more difficult to disseminate the information you were misguidedly trying to keep private in public. that information hasn't really remained private, it's just harder to share in some cases.

one wonders, if public photography were limited (and british cops, with their penchant for harassing photographers, would love such a policy) what would come next? would public twittering become outlawed? after all, with a cellphone i could just as easily tweet that tom cruise and katie holmes are walking down john street and give away as much information as i could with a photo, if not more (a photo might not give any geo-location data - and real-time geo-location data is something celebrities really don't want getting out there). perhaps we should outlaw public observation in its entirety.

in order to effectively protect ourselves it's important to know when and how to apply the various protective strategies at our disposal. privacy doesn't work in public - you can't magically stop people from seeing/hearing/sensing you. when we step out into a public space it should be understood that everything people can see or hear is no longer hidden and therefore not private. if there's something we want to keep private it's necessary to keep it hidden away where people can't get access to it, which means not bringing it out into a public space.

Tuesday, June 22, 2010

general purpose, what's it good for? absolutely too much

here's a blast from the past. over at the tenable security blog, this article from january by marcus ranum discusses the possibility of developing special purpose computers for the sake of online banking.

he comes to an interesting conclusion that we may be seeing a market develop for non-general purpose computers, but what we have to keep in mind is that it is not the operating system that determines whether a machine is general purpose or not - if we want special purpose machines we have to start at the hardware layer. i know some people find it hard to believe that it's not the operating system that enables malware to be a problem, but to demonstrate that that is indeed the case let's look back to the old bootsector infectors: viral malware which executes and infects before the operating system is even loaded into memory, using nothing but the hardware APIs. clearly, generality transcends the OS.

in order to produce a machine which is guaranteed malware free (in order to use as a trusted endpoint in online banking) we have to address either the generality of interpretation (which most people don't really understand, but which is the cornerstone of general purpose computing) or the sharing of data (which would be counter to the design requirements of any system that needs to communicate with another system - such as an online banking dumb terminal). those are the 2 key elements of general purpose computing that enable malware to exist, and since we can't really get rid of sharing for the application in question that just leaves getting rid of the generality of interpretation in favour of fixed first order functionality.
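
to illustrate what i mean by the difference (a toy sketch only - no real banking terminal would be built in python, and the operations shown are made up): a general purpose machine will interpret whatever data it's handed as instructions, while a fixed-function device can only ever map incoming data onto a closed set of pre-built operations.

# toy contrast between generality of interpretation and fixed first-order
# functionality (purely illustrative; the operations and requests are made up)

def general_purpose_machine(incoming_data: str) -> None:
    # a general interpreter treats incoming data as a program -
    # this is the property malware depends on: data becomes behaviour
    exec(incoming_data)

FIXED_OPERATIONS = {
    "show_balance": lambda account: f"balance screen for {account}",
    "transfer":     lambda account: f"transfer form for {account}",
}

def fixed_function_terminal(request: str, account: str) -> str:
    # a fixed-function device lets incoming data *select* among pre-built
    # operations, but never define new behaviour of its own
    operation = FIXED_OPERATIONS.get(request)
    if operation is None:
        return "refused: not one of the terminal's built-in functions"
    return operation(account)

print(fixed_function_terminal("show_balance", "acct-001"))
print(fixed_function_terminal("do_something_novel", "acct-001"))   # refused

of course the real distinction has to be enforced in hardware rather than in software the way this sketch pretends to, which is exactly why commodity kit can't deliver it.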

the consequence of doing this (creating a radically different hardware architecture that only allows for fixed first order functionality) means that we can no longer use commodity hardware. that means we would no longer enjoy the benefits of commodity hardware pricing. that in turn means that the cost of ownership of one of these dumb terminals will be rather high compared to commodity computing counterparts. i don't know if the theoretical market for such special purpose computers can withstand the practical realities such a device would entail.

Monday, June 21, 2010

mobile model won't stop malware

thanks to lysa myers for drawing my attention to this slate article about the security advantages of modern mobile device OSes like iOS (the OS for iphones, ipads, etc), android (google's mobile phone OS), and chromeOS (google's netbook OS).

i agree with lysa that it is a fairly well balanced article, in spite of the somewhat sensationalistic headline (and to the credit of a publication whose focus is not strictly computer security). however, i also find it a bit short-sighted.

i say this because the writing is already on the wall with respect to the future of malware, and that future is not encumbered by the modern mobile OSes' attempts to be locked down.

i'm surprised that i haven't written about this concept earlier, i thought i had, but when it comes to chromeOS specifically, the reason such an OS has any chance at all is because more and more applications are moving onto the web, into the cloud. an operating system that only gives you access to the web browser wouldn't be very useful with the world wide web of 10 years ago, but now there are a wide variety of web apps to allow you to be productive with nothing more than the lowly web browser.

and where legitimate applications go, malware is sure to follow. we're already seeing malicious facebook apps, and malicious javascript that changes your router's DNS settings is not unheard of either. worms that spread on social networking sites instead of the user's computer are old news by now, and web-based spyware is out there. a locked down endpoint device is a non-issue to malware that operates in the cloud or finds other ways around actually changing the endpoint device itself.

mass adoption of these more stringently locked down platforms won't be the end of malware, it won't even mark a turning point in the evolution of malware since the development is already in progress. if such adoption takes place it would probably be most appropriate to think of it as punctuation in the evolution of malware.

Friday, June 18, 2010

a whitelist user's perspective on windows update

as i may have mentioned before, i use whitelists - for web content (via noscript), for basic network traffic (via my router's port forwarding functionality), and also for traditional applications (via the application launch control functionality of my software firewall). whitelisting is an important part of my security strategy but if there was one thing that made me wish i didn't use application whitelisting it would be windows update.

you see, i don't add the programs involved in software updates to my whitelist. in fact, i don't add the majority of programs on my system to the whitelist. i don't want them being able to run without my knowledge or authorization, especially those that give no indication that anything is running in the first place, and even more so for components that have any role in installation/update. i'm not about to enable silent installs using existing components. and frankly i expect updates to be new anyways so there's little point adding those. for the most part that's not a big deal, i just take note of what program is trying to run (so that i can catch anything that looks suspicious), give a one-time authorization, and let the program do its thing. even for updating most software that's not a big deal, but when it comes to windows update it's a very big deal.
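
for those unfamiliar with how this kind of application launch control behaves, here's a simplified sketch (my firewall obviously isn't written in python, it hooks process creation rather than being politely asked, and the whitelist entry shown is just a placeholder):

# simplified sketch of hash-based application launch control
# (illustrative only - a real personal firewall hooks process creation,
#  and the whitelist entry below is a placeholder digest)
import hashlib

WHITELIST = {
    "0" * 64: "a program i've chosen to trust permanently",   # sha-256 digest -> label
}

def sha256_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def may_launch(path: str) -> bool:
    digest = sha256_of(path)
    if digest in WHITELIST:
        return True    # known program: runs without bothering me
    # unknown program (e.g. every new component an update spawns):
    # ask for a one-time authorization instead of adding it to the list
    answer = input(f"allow {path} ({digest[:12]}...) to run this once? [y/N] ")
    return answer.strip().lower() == "y"

every executable that isn't on the list means one more prompt, which is fine when an update involves one or two new binaries and miserable when it involves dozens run over and over again.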

microsoft, for reasons that i can't begin to fathom, are clearly cobbling the update procedure together from as many bits and pieces as they can. i know this because of the number of program executions i need to permit (it's a lot) and how big of an interruption it is. on my older, slower system that i only power on on the weekends i started permitting various parts of windows update on saturday. i don't really have the time (or patience) to sit around clicking every couple of minutes for hours on end, so the confirmation dialog only gets clicked when i have time to go back and check on its progress (and i sit there for a little bit, clicking here and there until i have to leave again). as of tuesday morning the update was still going. it took until monday just to get to the point of saying "there are updates ready to install". i can only hope that by the time this post is published the update will finally be finished and i can power the machine down.

some people might consider this a failure of application whitelisting, but i don't.  microsoft has designed their update procedure to involve far, far too many invocations of far, far too many separate executables. they need to stop expecting that they can run whatever system components they want, whenever they want, however many times they want with impunity. it's wasteful of resources and it's wasteful of my time.

Thursday, June 17, 2010

facebook's developer verification isn't that bad an idea

earlier this month ryan naraine penned a post critical of facebook's latest effort to crack down on rogue facebook applications.

facebook plans to force app developers to verify themselves either by submitting a phone number or credit card information - essentially to establish a less anonymous identity.

ryan naraine contends:
While this is clearly a step in the right direction, this won’t stop rogue apps from wreaking havoc on the social network.
and he suggests:
Instead of these minor roadblocks, Facebook needs to implement some sort of code signing or code inspection process for every app that’s submitted to its platform.
while he is right that developer verification won't stop rogue apps, what he apparently fails to realize is that NOTHING will really stop the rogue apps, short of closing the application platform to the outside world entirely. preventing web-based malware is comparable to preventing traditional malware that runs on the desktop - there is no panacea, no magic bullet that will make the problem go away.

code signing, for example, can also be bypassed by the serious criminals. don't believe me? we've already seen examples of malware getting digitally signed by gatekeepers of the sort ryan naraine likely has in mind, and the reason is easily deduced - digital signing protects against malicious alteration of legitimate content, but it has no intrinsic strength against malicious parties getting their own content signed. the people who do the signing aren't qualified to determine the safety of the code, and that's not what a signature establishes anyways, it establishes the authenticity of the code.
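
to make that last point concrete, here's a minimal sketch using python's 'cryptography' package (a hypothetical gatekeeper, not facebook's actual process): the verification step only ever proves who signed the bytes and that they haven't been altered since - it never asks whether the bytes are malicious.

# minimal sketch: a signature establishes origin and integrity, not safety
# (hypothetical gatekeeper; requires the third-party 'cryptography' package)
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# the gatekeeper issues a signing key to a registered developer...
developer_key = Ed25519PrivateKey.generate()
developer_public_key = developer_key.public_key()

# ...and the developer signs whatever code they please, including this:
rogue_app = b"harvest_profile_data(); spam_friends_list();"
signature = developer_key.sign(rogue_app)

# verification succeeds because the code is authentic and unaltered -
# at no point did anything here examine what the code actually does
developer_public_key.verify(signature, rogue_app)   # raises InvalidSignature only on tampering
print("signature valid: authenticity established, safety never examined")

swap in whatever signing scheme you like; the property being checked doesn't change.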

likewise code inspection is not going to work because facebook just doesn't have the expertise necessary to determine the safety of the code running on its platform. even if it developed that expertise, it wouldn't scale, and scalability is important - especially when you consider that the malicious facebook developers appear to be taking a page out of the traditional malware creators' playbook and going for volume as a means of compensating for the disabling of individual apps.

verifying or certifying the developer instead of the code means that for a malicious developer to continue using a volume-based strategy he'd need to establish a fake identity for each copy of his rogue app. while facebook hasn't set the bar for this very high, it is higher than it used to be, and that makes the strategy more expensive for the attacker. additionally, with this framework facebook could easily make things harder or more expensive in the future without changing all that much besides the verification process itself.

it seems clear to me that developer verification will do some good. there will always be room for improvement, regardless of what approach one takes, so let's not pretend like there's a perfect solution out there and facebook is making a mistake by not choosing it - there isn't and they aren't. we'll see how the bad guys adapt to this countermeasure soon enough and then we can start dreaming up new countermeasures for their countermeasures. like the war against desktop malware, web-based malware is going to be another game of cat and mouse / measure-countermeasure - that much was inevitable.

Wednesday, June 16, 2010

mark zuckerberg, privacy, and default behaviour

a while back graham cluley had some choice words about how facebook founder mark zuckerberg has shaped the privacy paradigm at the social networking site.
Mark Zuckerberg, the founder of Facebook, says that most people want to share.
If he really really believes that, then why doesn't he give those users the ability to "opt-in" to share their personal data rather than force people to "opt-out"? After all, if he's right then that business model would work just fine.
now graham used to make software, once upon a time, and i wonder how long it's been since he's done so.

zuckerberg's stated belief that most people want to share is completely compatible with an opt-out privacy paradigm. don't make things too difficult or too much work. if most people want X then give them X by default, don't make them jump through hoops to get it. even if X is sharing.

that being said, i personally would prefer opt-in over opt-out, and if you look at zuckerberg's own facebook profile you might be forgiven for thinking he'd prefer that too - his stated belief that most people want to share appears to be hypocrisy because he certainly does not share very much. granted, his date of birth and hometown are up there, but if you're looking for anything truly personal the best you'll get is a list of "like"s (including himself, so i guess he's sharing a bit of narcissism). does zuckerberg think he's somehow different from the average person? does he not believe in eating his own dog food? no one wants to share willy-nilly, everyone is at least somewhat selective about what they share and with whom. facebook's privacy defaults should be designed with that in mind.

Tuesday, June 15, 2010

privacy NOT versus security

i've been growing increasingly perturbed by the notion that there is some tension or conflict between privacy and security and i want to set the record straight.

there is no such conflict. my privacy is completely compatible with my security. there is no tension between them, no conflict. ultimately they have the same goal - protecting the things i think need protecting.

often when people talk about privacy vs security they're not talking about the purely personal perspective, though. they're talking about other organizations that are expected to help protect an individual's privacy. here there can be a conflict, but not because of some tension between privacy and security, rather because the organization's interests are not aligned with those of the individual. they never are, they aren't supposed to be, it's not reasonable to expect them to be. even amongst individuals alone, my interests, values, and priorities are different from your interests, values, and priorities - by chance we might happen to agree on what needs protecting on a general level but when it comes to the finer details there will always be disagreement.

when we hand over our information to an organization (or when it's handed over for us) we expect that organization to act as our partner in protecting that information - and to the extent the law requires them to do so they usually do. but the organization's interests, values, and priorities are not the same as our interests, values, and priorities.

that is where the true tension exists - not between privacy and security, but between the interests of different parties. whether those parties are two individuals, an individual and a company, or an individual and a nation - any conflict exists between those parties (because they have different needs), not between privacy and security. security of the whole vs. privacy of the individual is a conflict between entities, not strategies.

Monday, June 14, 2010

the security user conversion problem

i'm going to start out by saying that i believe in the effectiveness of user education in making users better able to protect themselves. i have to - i'm a product of self-directed user education - not believing in user education would be the same as not believing in myself.

and what's not to believe in? over 20 years of computing with only a single partial compromise (malware got in but was effectively neutered due to my precautions and environment). that's a better track record than a lot of people who work in the security industry, and i don't work in that industry. that doesn't make me a security expert, mind you (in fact, i refuse to accept that title), but simply what i like to call a security user (a user of security, its concepts, its techniques, etc).

i don't know what specific security goals other proponents of user education have in mind. i've never asked any of them and perhaps i should have. mine is pretty simple, though. it seems to me that other people would be a lot better off, or at least a lot more secure ("better off" might be too open ended), if they were more like me. i know that seems rather egocentric but i was a teenager when i arrived at that conclusion so a certain amount of egocentricity is not surprising, and to be perfectly honest there hasn't been anything in the years since to change my mind.

so the question i have been grappling with since i was a teenager is 'how do i make others more like me?', which is to say how do i turn ordinary users into security users? it's a challenging problem and one that i've been working on for years. i've tried everything from providing strategies for people to follow (in the form of the anti-virus cookbook, originally written in the pre-windows days), to making information more available and easily found (through the anti-virus reference library), to simply trying to guide the way people think (which i use this blog for), to even trying a bit of memetic engineering (over at security memetics - and i use the term memetic engineering loosely). unfortunately those efforts haven't had the effect i'd been hoping for, so i can definitely see both sides of the user education efficacy debate - on the one hand i know it works (it worked on me), but on the other hand it doesn't seem to be working.

obviously something is missing but what? how is it that i became a security user and the people around me generally don't even ask for advice? therein, i think, lies the clue. i've already framed security as a broad class of strategies for satisfying one's need for safety. if the people around me felt their need for online safety wasn't being met then asking their friendly neighborhood security nut would be one of the easiest approaches to changing that. in the absence of that happening i'm left to conclude that they don't feel their need for safety is going unmet. the lack of adoption of security best practices could easily be due to this fact alone - people feel safe enough already and don't feel the need to take any added measures. their perceived needs are already being met.

does that mean in contrast that i became a security user because i didn't feel safe? that's certainly an easy conclusion to jump to. but what about now? i'm still learning, still evolving as a security user - am i doing that because i still feel unsafe? that doesn't ring true to me. i feel pretty safe and i think i've got most of my bases covered. if i look back at the beginning, at my beginning on this path, i have to go back pretty far. i've recounted before the story of how i got interested in malware when i was 14, but what i haven't discussed openly before is that my association with security (even computer security) predates the events in that story. i started teaching myself programming at the age of 10 and my first user input prompt was a password prompt. never mind the fact that it was a vic20 with a tape drive and at 10 i didn't have anything that needed to be protected, i obviously already had a pre-existing appreciation for security (and a rudimentary understanding of how to apply those concepts to computers). i can think of any number of early childhood experiences that could be responsible - all of them, admittedly, incidents after a fashion, but virtually everyone has encountered those sorts of incidents in their lives at one time or another without those incidents instilling in them an appreciation for security. more pointedly, people encounter computer security incidents now and still don't develop an appreciation for security.

that, i think, is an important point, because the basic premise of user awareness is to make the user aware of how unsafe they are - nothing should drive that point home better than an actual incident. by showing people that they are not actually safe you are creating (or revealing) a state where their needs are not being met, and the universal reaction to this is fear (and possibly anger if you're the one threatening their needs). inevitably it's the application of fear in order to drive change, and personally i find the concept of playing on people's fears distasteful. i also suspect that it is an exercise in futility in the presence of security vendor marketing types who have a long and successful history of dispelling fears as a means of selling product.

beyond that, i don't really think of myself as being afraid, so using fear on others doesn't really mesh with the idea of making people more like me. before i bore the remaining 3 readers to death i'll try and get to the point. i was taught at a very early age the value of arguing as a learning tool. it taught me the importance of looking up facts and figures in order to support or disprove my own hypotheses, but more importantly it taught me to question and not believe everything i heard or read. it taught me to be skeptical. it taught me doubt. one of the things i've observed over the years is that others don't regard arguments in quite as positive a light as i do - and they also don't seem as quick to form doubts, to question or challenge those who supposedly know more. that's a shame because skepticism is the foundation of critical thinking, it is the cornerstone of the advancement of human knowledge. if we believed everything we were told we'd still be living in caves and using stone tools.

and that, i think, is the missing ingredient in making people more like me - not fear that their needs aren't being met, but skepticism about whether X, Y, or Z can really make them as safe as the box says. skepticism about whether what their local smart guy says is right. even skepticism about whether security experts have it right. security marketing may be good at dispelling fears, but when it comes to doubts (especially reasonable ones) it's an entirely different ball game - and once people start doubting the easy answers, those answers won't be able to distract people from the search for what will really satisfy their needs for safety. every time you use fear to drive change you're just feeding the marketing machine more fuel to turn that change you hoped for into mindless consumerism. we need to sow the seeds of reasonable doubt, to foster skepticism and train people to question and challenge more - not just so that they'll become more secure but so that they'll become fundamentally better at critical thinking.