Monday, February 20, 2012

to patch or not to patch: an edge case

i find myself in a rather odd predicament today. i've been using an older computer (we'll call it one of my secondary computers since it gets very little use compared to the one i'm writing this with right now) and i got a pop-up notification that i was running out of space on drive C:.

now i want to put this in context; this computer sees very little use. mostly it gets turned on, has some files transferred to it or from it, and then switched off. i can't remember the last time i actually installed anything on it (for that matter, since i've switched over to using portable software, i can't recall the last time i installed anything on my primary system either) so let's say it's been a really, really long time since i touched the C: drive at all. mostly it's the larger secondary physical disk that gets used.

so you can imagine my surprise when the notification about running low on space popped up. was there something malicious going on? had the system been compromised? no, it was in the process of applying system updates. patches had actually eaten up the majority of my free space - the WINDOWS directory was taking up over 7 gigs of my 10 gig drive. i'm actually in the position where i have to uninstall software so that the patching will succeed.

now, this is an XP system so one might reasonably suggest that i upgrade to the latest version of windows so that i can avoid having all those patches on my system. unfortunately, this system is so old, i doubt it will meet the system requirements of anything newer than XP.

one might also, entirely reasonably, suggest upgrading the hard disk to something larger. storage is cheap, after all. it's a little difficult to justify upgrading the drive just to accommodate microsoft's attempts to fix their earlier mistakes, though. it's certainly not like i'm going to get any additional benefit from greater space on a drive i never make use of.

one could even go so far as to suggest upgrading all the things so that not only would i be able to move to the latest version of windows, i could have more space and a snappier system that is more amenable to being used day to day. but i already have a computer that's more amenable to being used, so really everything that was wrong with the idea of upgrading the drive is also wrong with this plan, in spades.

it's times like this that make one question things we normally take for granted, like why does patching take so much space? is the fixed binary that much larger than the one with the error in it? no, that doesn't appear to be what's going on. it appears that windows keeps a bunch of stuff around so that you can uninstall the patch if you want to. does anyone ever actually do that? there may be a way to reclaim the space those uninstall files take up, but it's not obvious just by looking at the system, and right now simply letting the updates happen the way an ordinary user would is actually reducing the utility of the system.
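for the curious: XP keeps its patch rollback data in hidden folders under the WINDOWS directory with names like $NtUninstallKB######$. here's a little sketch (just an illustration of the idea, with a made-up function name, not a tested cleanup tool - don't go deleting things based on it) of totalling up how much space those rollback folders eat:

```python
import os

def uninstall_backup_bytes(windir):
    """sum the size of XP's $NtUninstall...$ patch-rollback folders.

    windir is a path like r'C:\WINDOWS'. the folder naming follows
    XP's known convention; everything else here is an illustrative
    assumption, not a real maintenance utility.
    """
    total = 0
    for name in os.listdir(windir):
        # rollback folders look like $NtUninstallKB123456$
        if name.startswith('$NtUninstall') and name.endswith('$'):
            folder = os.path.join(windir, name)
            for root, _dirs, files in os.walk(folder):
                for fname in files:
                    total += os.path.getsize(os.path.join(root, fname))
    return total
```

run against a well-patched XP system, a tally like this can easily reach into the gigabytes - which is exactly how a 10 gig C: drive ends up full without anyone installing a thing.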

thankfully the utility that's been lost wasn't really needed anymore. but what about next time? support for XP is ending, but it's not over yet, there are still more patches coming. i'm going to be facing the prospect of no longer getting patches anyway, so i might as well get used to it early - and since the system is little more than a network attached storage device that spends most of its time powered off, i can't really see the harm.

in security, we normally think of applying patches as a no-brainer. it may present some logistical hurdles in the enterprise, but it still needs to get done. sometimes, though, there are cases where it just doesn't pay off. no practice is so universally beneficial that it should be mindlessly applied 100% of the time.

Sunday, February 12, 2012

is the iphone really malware free?

friday morning mikko hypponen posted a tweet about the folks behind flexispy changing the look of their site, and i took the opportunity to pose a question to him about iphone malware. you see, flexispy is (or was) a piece of mobile malware that f-secure posted about some 6 years ago. not only that, but there's a version of the software for the iphone, so i found mikko's repeated statement that there was no malware for the iphone to be a little strange in light of the fact that both he and his company have been aware of software that seems to contradict that claim for quite some time.

the resulting discussion with both mikko and his colleague sean sullivan led in 2 separate directions, so let's look at them in turn. first mikko responded with the following:
@imaguid No malware for iPhones. If you jailbreak your phone: all bets are off. Flexispy runs on jailbroken only.
now to me, this gets to one of the hearts of the matter. when people say there's no malware for the iphone, they're only talking about non-jailbroken phones. the pertinent difference between a normal iphone and a jailbroken iphone is that normal iphones can only install apps from the app store. the app store is a so-called walled garden where all the apps go through a screening process to keep out undesirable programs.

so what people really mean when they say no malware for the iphone is that there's no malware in the app store. this is an important distinction, because the iphone ecosystem (and by extension, the threat landscape) extends beyond the app store. when chris di bona attempted to downplay the threat malware posed to android devices by pointing to google's efforts to keep their android marketplace clean, a number of folks were quick to point out that the android ecosystem extended beyond google's android marketplace, so it seems strange that people would forget the same line of reasoning applies to the iphone as well.

one other thing (well, the only other thing, really) that mikko said was:
@imaguid ...and to top it all: we couldn't do anything about iPhone malware anyway, as Apple won't allow Antivirus products to iPhone.
and you know what? why should they allow them when there's apparently "No malware for iPhones"? whether or not there is malware for the iphone, apple doesn't want people to think there is. there is this (rather old) idea that computers can be as easy to use as an appliance (like a toaster). this idea is actually very appealing. it promises computers that just work, computers that don't get malware, computers that are easy and safe and worry free. that promise is part of the secret sauce behind apple's marketing, but if they allowed AV products in then it would dispel the illusion of the appliance computer and apple's products would lose their lustre. it's very convenient, then, that AV vendors are willing to be complicit in apple's marketing by repeating the claim that there's "No malware for iPhones".

but such unqualified claims are, as mikko has revealed, not technically true. it's not that there's no malware for iphones, it's that there's no malware in the iphone app store.

but wait, is that really true? is there no malware in the app store at all? i'm not sure that's true when we've recently been made aware of apps in the app store that collect and send personal information to a remote server without the user's knowledge or consent. but it's about time i turned my attention towards the much more verbose and nuanced discussion that sean sullivan and i had on the subject. perhaps he can shed light on why these personal info stealing apps shouldn't be considered malware. while mikko didn't question the classification of flexispy as malware, sean informed me that f-secure no longer calls it malware.
@imaguid @mikko But they then added an installation interface, and we have since categorized it as riskware.
that's right - in spite of the fact that it is designed and marketed as a tool for spying on other people, it is not classified as spyware or malware because it was given an installation interface - meaning that the attacker has to have physical control of the phone for at least as long as it takes to install an app. now, on the desktop this might be a meaningful mitigating factor, but on mobile devices where physical access is so much easier to achieve? come on...

why exactly that stops it from being malware in general or spyware in particular in the context of mobile device security i still can't fathom, but sean offered up two things by way of explanation. one being a concern over being sued... by malware vendors. this rationale is something i heard from dr. solomon years and years ago, but i have to admit i had hoped that the industry had become less spineless in the interim. i guess that was too much to hope for. google may stand up to the government on behalf of its users (perhaps not always, and perhaps it doesn't always succeed, but it has tried), but apparently anti-malware vendors only stand up for their users when there's zero risk they'll be challenged.

the other thing he offered was the following definition of spyware from google:
Software that self-installs on a computer, enabling information to be gathered covertly about a person's Internet use, passwords, etc.
apparently it's not enough that the software spies on you in order for it to be called spyware, it has to "self-install" as well. now i'm sure i must be missing something, because this definition seems to exclude anything where the victim is socially engineered into installing the software (it's hard to call it self-installing if the victim is the one installing it). it also seems to exclude anything that utilizes the particular trojan horse case where the software actually does perform the function it claims to, so the payload is additional functionality instead of strictly misrepresented functionality. a game that also steals passwords, a text editor that also sniffs network traffic, webcam software that just happens to send the video stream to a second undisclosed location in addition to the intended recipient - all of these are examples of software that ought to be called spyware but which the victim actually knowingly installs (because the undesirable functionality is unreported) and thus fails to meet the "self-install" criteria. this is precisely the type of situation users of the photo sharing iphone app called path faced.

now, sean also pointed me towards the anti-spyware coalition's risk model description document. i had hoped it would help me to learn more about this "self-install" concept that sean assured me was part of an industry agreed upon standard definition. things didn't turn out that way, since the term "self-install" doesn't appear in that document, but the topics of installation and distribution do figure prominently in the contexts of both risk factors and consent factors. unfortunately this document from 2007 appears once again to be geared to desktop computing rather than mobile computing. that's probably not too surprising considering it's 5 years old now, but it does highlight the age old problem of letting context into the classification process. mobile devices are easier to gain illicit physical access to, as well as being shared more freely (and more frequently) in social circumstances by their owners. the issue of consent at the point of install has far less significance as a risk mitigation for mobile devices. furthermore, the issue of consent at the point of install pretty clearly drops the ball in the case of trojans because it's not necessarily fully informed consent.

as the risk model description document demonstrates, somewhere along the line the industry gave up on basing its classification system on functional definitions. sean insists that this is a "stricter process" but i think it's more correct to say that it utilizes more criteria than a functional definition system would. utilizing more criteria doesn't always lead to a stricter process because not all criteria are created equal and, at least in the case of the risk model description document, some of those criteria are used to create exceptions (which are generally not the hallmark of a strict process).

one of the last things sean wondered is how could the AV industry possibly use my (supposedly) broader definition(s) and not be accused of FUD. now, aside from the fact that the industry is already accused of FUD (and worse) pretty much regardless of what they do, i think it's important to spell out one of the key differences between a functional definition and the kind of definitions that sean sees in use. definitions that include contextual evaluation are judgements, they engender choice and leave room for agendas. a functional definition has no judgement, it is purely descriptive of the functional capabilities of what is being classified. you can no more be blamed for saying software that spies is spyware than you can for saying water is wet or the sky is blue. there's no silver bullet to make accusations go away, but if you take judgement out of the equation it should render those accusations baseless.
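to make the difference concrete, here's a toy sketch of the two approaches (the field names and labels are entirely made up for illustration - this is nobody's real detection engine). a functional classifier looks only at what the software can do; a contextual one layers judgement calls on top, and those judgement calls are where exceptions creep in:

```python
def classify_functional(sample):
    """purely descriptive: capabilities alone decide the label."""
    if 'covert_info_gathering' in sample['capabilities']:
        return 'spyware'
    return 'clean'

def classify_contextual(sample):
    """same capabilities, but contextual criteria create exceptions."""
    if 'covert_info_gathering' in sample['capabilities']:
        # here an installation interface acts as a mitigating factor,
        # downgrading the verdict - a judgement call, not a description
        if sample.get('has_install_ui'):
            return 'riskware'
        return 'spyware'
    return 'clean'

# a flexispy-like sample: it spies covertly, but ships an installer
sample = {'capabilities': ['covert_info_gathering'],
          'has_install_ui': True}

print(classify_functional(sample))   # the functional answer: spyware
print(classify_contextual(sample))   # the contextual answer: riskware
```

same sample, two different labels - and only the second one involves a choice that could be argued with (or influenced by an agenda).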

so why is all of this important? because it appears that we've somehow stumbled upon a way in which malware can be classified as "riskware" instead of malware. nobody hears about the riskware classification, nobody cares. they hear "No malware for iPhones" and they shut the rest out because that's all they needed to know (or at least according to traditional notions of malware that should have been all they needed to know). classifying malware as something other than malware seems to be what's enabling people to make the "No malware for iPhones" claim, like some kind of terminological shell game. "No malware for iPhones" makes people think the devices are safe and worry free, but there are risks, and not just for those who jailbreak. "No malware for iPhones" is creating a false sense of security and with the revelations that have been made about apple's abject failure to lock down a particular type of personal information and the near ubiquitous exploitation of that failure by app developers, it seems like the stuff of snake-oil.

i tend to think that when people face risks they want to know about them rather than be told there's nothing to worry about, and i tend to think that when those risks come in the form of software that acts against the user's interests, informing the user is the AV industry's job. some people don't want that to happen, they want their own interests to take precedence. if the AV industry allows that to happen through inaction (or worse, facilitates it) then they don't deserve the reputation they have for protecting the user. the industry may not be able to put AV software on iphones yet, but they can certainly do a better job of raising awareness of the risks than going around telling people there's "No malware for iPhones". maybe when public awareness is raised apple will change their ways.
image from secmeme.com