Friday, November 19, 2010

security: it's almost like it isn't there

one of the ideas i continue to encounter over and over again throughout the years is the idea of liking a particular security product because it seems like it's not even there. it's amazing where one can find that idea being expressed. panda security's own luis corrons said the following about his wife's impression of panda's product:
My wife’s computer also have it, and she loves it, mainly because she doesn’t realize that it is installed :)

liking a security product because it seems like it's not even there strikes me as suggestive that the person in question likes to ignore security or not be bothered by security concerns. for most people this is going to be a recipe for eventual disaster. luis' wife, however, has luis on hand to take care of any malware incidents, so i guess for her it's ok. it's an interesting and probably effective strategy - well played, mrs. corrons, well played.

most people can't marry an anti-malware expert, however, so placing value in a product's ability to shut up is the wrong way to think about things for most of us. don't get me wrong, if a security tool is too 'chatty' then that certainly poses a usability problem, but the quest for complete transparency is a symptom of mismatched expectations.

the predominant expectation among consumers is that if they install the 'right' product or combination of products then they can forget about all those nasty threats because they'll be protected. that's just not true, though, and it's never, ever going to be true.

people will actively defend this line of thinking, however; often they say they just want to do X and don't want security getting in the way. imagine if i said i just wanted to get to mcdonald's and didn't want traffic safety to get in the way - would that sound reasonable? not so much, i imagine. part of the reason for that is that most of us realize that following certain procedures on the road actually does keep us safer than we would otherwise be; another part is that we also recognize that when others don't follow those procedures they put us and everyone else at risk, not just themselves.

what if i were to tell you the same principles apply in computer security? there are procedures you can follow that allow you to reach your goal in a reasonably secure way (whether that goal is getting work done or enjoying online entertainment or whatever else you use your computer for). not only that, but by not following those procedures, by ignoring security, one actually does put other computer users at risk as well. i'm not just talking about other people who use the same computer, either. back in the days of viruses, when a virus infected a computer that computer joined the set of computers from which that virus could further its spread. essentially it enlarged the platform from which the virus could attack still other systems. today, in the age of the botnet, the same principle applies. when a machine becomes compromised it gets added to the attack platform and assists in attacks on other systems, whether those attacks are simply sending out spam or sending out more malware or performing distributed denial of service attacks. by pretending security isn't a concern, a user puts not only themselves but all other computer users at risk as well.

now, likening secure computing practices to safe driving does not mean i'm trying to argue in favour of requiring users to have a license to operate a computer (though there are those who suggest that). the fact is that day to day life is full of situations where you have to take precautions to increase your safety. just crossing the street calls for the precaution of looking both ways first. even toasters (which i bring up because some people literally think computers should be as simple to use as toasters) have safety precautions you need to follow - unplug the thing before you try to retrieve that piece of toast or bagel that's stuck inside.

i often criticize the security industry for perpetuating the myth of install-and-forget security, but the consumer shouldn't be thought of as blameless. people need to wake up and take responsibility for their own safety and security online, as well as being good online citizens and not putting others at undue risk. seriously, folks, computing without the need for taking active precautions is pure fantasy and it's time you started living in the real world. if you don't take responsibility for keeping yourself safe and secure, you won't be safe and secure - period.

Monday, November 15, 2010

another #sectorca has come and gone

[this is 2 weeks late now but better late than never]

i attended the sector conference for the 3rd year in a row this year, and as with previous years i've chosen to write about the experience for the benefit of those who couldn't make it, or maybe have never gone and are looking for a reason to go in the future (by which i mean more of a reason than just because i said so). it just so happens i took quite a few notes this year (pen&paper style - i'm still not taking a computing device to a hacker conference - come on) so i've got plenty of material (perhaps too much) to draw from for this post.

so jumping right into things - before the first talk even started i took a look at the materials that attendees were given. i have to agree with tyler reguly's twitter comment that there were an awful lot of pamphlets, and by extension an awful lot of dead trees. i'll tell you right now where my copies are almost certainly going to wind up: the garbage (or perhaps the blue box). no offense intended, but a big stack of papers i never asked for and have no interest in feels a lot like junk mail (the physical kind, not the electronic variety). i also noticed that the event programme seemed to be about twice as thick as last year's, and the entire back half of it seemed to be advertisements. also there was apparently no puzzle/challenge/whatever like there has been in previous years' programmes.

the first keynote was "The Problem with Privacy is Security", given by Tracy Ann Kosa. in spite of my general disagreement with framing things as though there were a conflict between privacy and security - i stick by my previous position that the real conflict is between competing interests/concerns of disparate parties, usually the individual and the organization - i found the exploration of what organizations' interests are (personal information has become a commodity) and how they use security to protect them (at least so far as the integrity, usability, and reliability of the data is concerned) to be quite interesting. there were a number of other interesting things brought up in the walk-talk (for lack of a better term to describe a talk where the speaker wanders off the stage and amongst the room soon after starting) as well. one of the most salient, i think, is that we protect personal information (or try to) after we collect it rather than proactively designing our systems to collect as little personal information as possible in the first place.

i also liked the fact that the topic of privacy is being taken more seriously, at least as a topic for discussion, by the sector conference organizers. if this trend continues then perhaps one day i'll be able to engage the vendors on the expo floor without the need to give away my personal information, including the non-obvious way whereby they get my name and contact info when they scan my RFID badge. just because i'm mildly curious in the here and now doesn't mean i care to hear anything more from a company in the future.

following the keynote i attended the "Malware Freakshow 2010" talk given by Nicholas Percoco and Jibran Ilyas. from my perspective it wasn't quite as good as their previous malware freakshow last year. that's not to say that it wasn't a great talk, mind you. it's just that last year's talk had a concept that was truly new to me, and that doesn't happen all that often, so it's kind of hard to top that - for me. for other people i imagine the memory parsing malware might be a new idea, though i suspect most of us have heard of keyloggers and packet sniffers. there were still some great stories, though, which is primarily what this sort of talk is about: the most interesting cases they've found in the field while investigating malware incidents. the data center that was literally in a barn is a good lesson to always visit your hosting company's data center in person to see what you're really getting for your money.

next up was Eldon Sprickerhoff's "By The Time You've Finished Reading This Sentence, 'You're Infected'". at the moment i don't know if this has been done in previous years, but i came to realize this was a sponsored talk. it was only half as long as the normal technical talks, just 30 minutes. i found myself agreeing with some of the things he said, such as whitelisting being complementary to other techniques. other things, not so much. for example, i think he underestimates the value of botnets designed for sending spam (that spam can direct to drive-by download pages that install more malware, effectively growing the size of the attacker's launchpad - that's always valuable). i also don't agree with the notion that whitelists don't have false positives. i suppose it depends on how you define false positive in that context - false acceptance and false rejection are probably more intuitive terms when it comes to whitelists - but whitelists can certainly falsely accept programs (such as malware that has been erroneously whitelisted) and falsely reject programs (such as self-altering programs).
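to make the false accept / false reject distinction concrete, here's a minimal sketch of a hypothetical hash-based whitelist. everything here (the names, the hashing scheme, the toy "programs") is my own illustration, not anything from the talk:

```python
import hashlib

# hypothetical whitelist: the set of sha-256 digests of approved programs
whitelist = set()

def approve(program_bytes: bytes) -> None:
    """add a program's hash to the whitelist (the vetting step)."""
    whitelist.add(hashlib.sha256(program_bytes).hexdigest())

def is_allowed(program_bytes: bytes) -> bool:
    """only programs whose hash is on the list may run."""
    return hashlib.sha256(program_bytes).hexdigest() in whitelist

# false acceptance: malware that slipped through vetting is happily allowed
malware = b"evil payload"
approve(malware)                 # the vetting mistake
assert is_allowed(malware)       # falsely accepted

# false rejection: a legitimate self-altering program no longer matches its hash
legit_v1 = b"legit program v1"
approve(legit_v1)
legit_after_update = b"legit program v1 + self-modification"
assert not is_allowed(legit_after_update)  # falsely rejected
```

the point of the sketch is just that the whitelist's error modes depend entirely on the quality of the vetting and on programs staying byte-identical, which self-altering software by definition doesn't.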

following that was the lunch keynote given by FBI agent Steve Kelly about "Today's Face of Organized Cyber Crime: A Paradigm for Evaluating Threat". first of all i liked hearing about how an FBI agent prefers when he can bypass airport security - i hope that sort of thing filters up to the people who can actually effect change, but i won't hold my breath. one of the take-aways from this talk was that cybercrime organizations aren't necessarily organized, at least not in the sense one thinks of when one thinks of organized crime. i wonder, though, if using the mob as your traditional crime model doesn't bias your thinking in a certain direction; more to the point, perhaps cybercrime doesn't really deserve to be called organized. the non-hierarchical network of people, each with their own criminal specialization, could simply be considered an example of criminal collaboration. even in traditional non-organized crime there are plenty of examples of collaboration between criminals with different areas of expertise. for example, a thief specializing in high-end merchandise isn't necessarily going to move that merchandise himself after he's stolen it - instead he might well use the services of a fence, and perhaps he even goes back to the same fence over and over again. that kind of continuous criminal collaboration would look very much like the model the speaker presented for an online criminal enterprise, but it certainly wouldn't fit what we would normally think of as organized crime.

another point of interest to me was that the FBI didn't want to treat cybercrime as just 'computer facilitated crime', even though as near as i can tell that's exactly what it is. i understand the reasons, though. when you've broken up your law enforcement efforts by type of crime (ie. one department for theft, another for fraud, etc), treating cybercrime as just computer facilitated crime puts the onus on each enforcement department to develop its own expertise in dealing with the computer facilitated variant of the crime it focuses on. computer facilitation, however, changes the nature of how the crime operates and how to combat it so profoundly that spreading any expertise your organization can acquire so thinly across multiple departments just doesn't make practical sense. it's more logical to centralize that expertise into its own department, as the FBI has apparently done.

oh, and if you haven't guessed already, this speaker was much more interesting than the last law enforcement representative i recall giving a keynote.

after lunch (and the keynote that went with it) nature had to take its course, and this is one place where i really wish things could be better. there's something like a thousand attendees at sector, most of them male, and we all get to share 4 urinals and 4 stalls in the washroom. no surprise, then, that the lineup stretched right out the washroom doors. really, though, when there's a line that long for the men's washroom, and when your security rockstar speaker has to beg people to let him jump to the head of the line so he can get to the talk he has to give in 4 minutes - that's when you know the washroom facilities are just not good enough. also, some of you security guys really need more fibre in your diet. there weren't any lineups for the sinks, now that i think of it, but i don't want to think about why.

there were some interesting things going on in the schedule for the talks that followed the lunch keynote on the first day. the talk that i really wanted to see, "Language Theoretic Security: An Introduction" by Len Sassaman and Meredith Patterson, was removed from the schedule entirely. on top of that, my second choice, Deviant Ollam's "Four Types of Locks", swapped times with Chris Hoff's "Cloudinomicon" - which is the talk i ultimately wound up seeing and had actually planned on seeing in its original time-slot. much to everyone's surprise, i'm sure, Hoff says you really can have security in the cloud - you just have to be prepared to make profound, fundamental changes to pretty much everything, or start over from scratch. there were some other really good observations that Hoff shared too, like the fact that companies don't make money by doing the right thing - they don't solve long term problems. while looking over my notes i also see something about survivability being about resist, recognize, and recover - perhaps i was paraphrasing him in my notes, but in hindsight that sounds an awful lot like the old PDR triad (prevent, detect, recover) from malware defense.

the next talk i attended was Mohammad Akif's titled "Microsoft's Cloud Security Strategy". like Eldon Sprickerhoff's talk before lunch, this was one of the new half-hour sponsor talks and it was during this one that it really struck home that these half-hour sponsor talks kinda suck. i don't know if it's because of the time constraint (you can't get that much detail into 30 minutes) or if it's because of the fact that they were sponsored talks, but i really didn't get much out of this talk. then again it might have nothing to do with any of that, but rather more to do with an apparent disconnect between my concept of strategy and what most of the rest of the security community seems to think strategy means.

after that last talk i was feeling pretty unfocused and i knew i wasn't going to get much out of the next talk unless it really, really interested me. that meant i had to give up on Marisa Fagan's SDL talk in favour of Brian Contos' "Dissecting the Modern Threatscape: Malicious Insiders, Industrialized Hacking, and Advanced Persistent Threats". the trick worked, it woke my brain right up. it was a very interesting talk and brian gave a very eloquent explanation of how industrial espionage causes harm (in case anyone thought copying was a victimless crime).

the final talk of the day, after being switched around with Hoff's talk, was Deviant Ollam's "The Four Types of Lock". i've seen one of his talks previously and i enjoyed learning about locks and lock picking, but this talk was even better because instead of just focusing on the weaknesses (and sometimes strengths) of various locks, this one was geared towards aiding the decision making process when it comes to the procurement and usage of locks. i knew beforehand that the talk would be interesting simply because it was obviously presenting a classification system for locks, and this one was focused on strength against lock picking techniques - probably the most useful basis for a classification system for both attackers and defenders. Deviant (what a great name) is also a great educator, as someone else at the conference mentioned.

after all the first day talks were done it was time for the reception at Joe Badali's. there i happened to meet David, Max, David (yes, 2 Daves at the table) and Kevin. Max was happy to see a developer (myself) and a business analyst (David #2) taking an interest in security. Kevin had a decidedly different take on things, though he seemed to agree with Max that developers taking an interest in security was a good thing. Kevin had some very ... strongly held opinions about what developers need to start doing and how we need to work differently because the amount of time that others get to test things is so limited. of course the reality is that the time developers get to do their part is pretty limited too, and we're subjected to many of the same (or at least analogous) kinds of interference from business units that i often see IT folks complaining about, even when direct access between the two groups is severed. i would love to have the luxury to be able to do everything the right way the first time, but one of the things i've learned that sticks out most prominently in my mind is that even my job as a developer can involve a significant amount of compromise - and that's a hard lesson to learn when you're an uncompromising s.o.b. like me.

after the reception was the speakers dinner, which is now open to non-speakers. this was my first time at the speakers dinner, but it wasn't really all that eventful - other than the look on the waitress' face when i said i'd like my steak blue. i had to settle for rare (or at least what passed for rare there).

the morning keynote of day two was Greg Hoglund's talk titled "Attribution for Intrusion Detection". now i've had some choice words about Greg Hoglund before, but not wanting to make a scene i decided this was one of those people i don't want to introduce myself to (he's not unique in that regard, of course - last year i was careful to keep my distance from rsnake). anyways, he had some interesting things to say during his talk. things like reverse engineering is dead (i'm sure that would go over real well at RECon) and malware analysis needs to be easy ("just show me the strings"). he also talked about making things harder for the bad guys, which is pretty rich considering he runs a site that helps the bad guys - and not only does he know it, he's commented on the fact.

that being said, there was some legitimately interesting stuff too, such as the not particularly controversial idea that an organization is in a better position to know about the targeted threats it faces than security vendors. the consequence of which being the need for organizations to develop their own (rudimentary) malware analysis capabilities. that's where "show me the strings" comes in.

finally, one thing that really struck me was during the Q&A at the end. someone asked him if there was anything that could be done in the hardware architecture to make things more secure and he said that if we could solve the problem of how to prevent data from being turned into code we'd be more secure against malware (or something to that effect). it's not often that security folks approach what cohen describes as the generality of interpretation (the ability to interpret anything as an instruction) - now if he can just realize that the ability to turn data into code is a requirement of general purpose computing he can shorten future answers to such questions to a succinct "no".
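cohen's "generality of interpretation" can be seen even at the level of an everyday interpreter: the same bytes are inert data or executable code depending purely on how the machine is told to treat them, and a general-purpose machine can't categorically refuse the second interpretation without ceasing to be general-purpose. a toy illustration (entirely my own, not something from the talk):

```python
# the same string, treated two different ways
payload = "21 * 2"

# treated as data: it's just text we can store, measure, and pass around
assert isinstance(payload, str)
assert len(payload) == 6

# treated as code: the interpreter turns that data into behaviour on request
code = compile(payload, "<data>", "eval")
assert eval(code) == 42
```

nothing about the string itself changed between the two uses; only the interpretation did, which is exactly why "prevent data from being turned into code" isn't something a general purpose computer can fully deliver.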

following day 2's morning keynote i attended the presentation "What's old is new again: An overview of mobile application security" by Mike Zusman and Zack Lanier. in recent years mobile security has received a bit of attention but most of the attention i see focuses on mobile malware. this talk was more about finding vulnerabilities in legitimate mobile applications, including those required for operating the devices. there was some great information about the various platforms and one of the things i came away with (and perhaps what the presenters were trying to impress upon the audience) was that this new wave of mobile developers appear to have not learned the lessons from the mistakes previous waves of developers (such as web developers or more conventional application developers) have made in the past. it's as if with each new category of computing platform comes a new set of developers rather than a re-allocation of existing developers, and as such they miss out on the benefits of maturation in existing development communities.

following the mobile app security talk and just before the lunch keynote was another half-hour period for sponsor talks. since i had soured on the notion of such talks i took the opportunity to wander around the expo floor. unfortunately, due to my apparent increasing intolerance for marketing bullshit (if such an increase is even possible - i have long said i don't trust marketing people as far as i can throw them and i look forward to one day finding out how far that is) i found i couldn't bring myself to stop at any of the booths manned by suits. that just left the hardware hacking village which had a home-brew 3D printer and some amazing products of that printer, and the lockpick village where i learned how incredibly weak car locks are. do not leave valuables in your car - we're talking about locks that are easily opened with blank keys and may potentially be pick-able with popsicle sticks. doesn't that fill you with confidence?

the second day's lunch keynote was given by none other than Mike Rothman of Security Incite fame and now a part of Securosis. his talk "Involuntary Case Studies in Data Security" was about data breaches, most if not all being well publicized ones. Mike's talk didn't draw me in quite as much as Greg Hoglund's keynote earlier that day. perhaps it was because it had a business rather than technical focus. that said, there were still a number of interesting points discussed during the talk. two that really got my attention were a) that there is still no known case of lost media resulting in fraud (even when encryption was absent), and b) companies don't notice their own breaches - they're always found by 3rd parties. the first one is actually a good thing, but it seems like a matter of luck to me; continued loss of media should eventually result in that media falling into the wrong hands and being used for fraud. the second one, where companies don't notice their own breaches, is obviously not good at all, but i don't know that there's any way to improve the situation since, as Mike also said, even if you know the past you're still doomed to repeat it because other people who you work with don't understand it and will drag you down with them.

the talk i went to following lunch was Garry Pejski's "Inside the Malware Industry". what i seemed to have missed when reading the description for this talk was that it was going to be a first-hand account from someone who was actually in that industry. that's right, Garry admitted that he was a malware writer! i took more notes at this talk than any other - obviously "know your enemy" had a lot to do with that but also there were a lot of details about the malware, along with Garry's observations about the security countermeasures, the trustworthiness of employers in that industry, and the legality of the business. unlike Hoglund, Garry was repentant about the role he played in cybercrime, although he isn't convinced that the software was illegal because it had an EULA. my suggestion to him is to consider carefully how completely the EULA disclosed the actions of the malware and also to consider sections 342 and 430 of the Criminal Code of Canada (since that does appear to have been the proper jurisdiction).

after the malware industry talk was another half-hour period for sponsor talks. also being held during these periods were so-called turbo talks - so called because two were held back-to-back in the half-hour slot, so each speaker only got 15 minutes. Nick Owen did a short and sweet presentation with a long name - "Securing Your Network With Open Source Technologies And Standard Protocols: Tips And Tricks". it came with the premise (supported by a study, apparently) that making better provisions for legitimate remote access results in fewer breaches. i think you can see where that's coming from - people are going to do something in order to work remotely, so it's better if you give them a secure way to do it. following that, and without any sort of scheduled break for setting things up (thus causing the second turbo talk to run late), was Julia Wolf's presentation "OMG-WTF-PDF", which literally could not have been given a better name. i have a new appreciation for PDF, although i think appreciation is the wrong word. the format is jaw-droppingly bad, and the fact that acrobat has 15 million lines of code (compared to NT4's 11 million) is amazing to me. it's amazing because i can't imagine how the programmers managed to put the drugs they were obviously taking down long enough to write so much code.

the final talk i attended (which, since the previous one ran late, i missed the beginning of) was Mike Kemp's "Into the Black: Explorations in DPRK". the title probably doesn't give away the fact that the talk is about the cyberwarfare capabilities of North Korea. that description doesn't really give away the fact that the talk is a debunking of the notion that North Korea has any cyberwarfare capabilities. Mike Kemp presented a hilarious juxtaposition between the purported North Korean superhackers and the internet black hole they live in. i don't want to give too much away, and i couldn't do it justice if i did.

this has been quite a long post (and it's taken me quite a while to finish it) but suffice it to say i feel i learned a great deal from sector this year (far more than i've mentioned here) and would recommend it to anyone interested in security. i'm not sure if i'm going to go again next year, however. i've gone 3 years in a row now, and as security still isn't technically my job it kind of feels like self-indulgence to go to these things - and my tolerance for self-indulgence has limits. my employers have been quite generous in affording me this indulgence (and paying for it, no less), but i can't keep taking advantage of that generosity indefinitely.