the resulting discussion with both mikko and his colleague sean sullivan led in two separate directions, so let's look at them in turn. first mikko responded with the following:
@imaguid No malware for iPhones. If you jailbreak your phone: all bets are off. Flexispy runs on jailbroken only.

now to me, this gets to one of the hearts of the matter. when people say there's no malware for the iphone, they're only talking about non-jailbroken phones. the pertinent difference between a normal iphone and a jailbroken iphone is that normal iphones can only install apps from the app store. the app store is a so-called walled garden where all the apps go through a screening process to keep out undesirable programs.
so what people really mean when they say there's no malware for the iphone is that there's no malware in the app store. this is an important distinction, because the iphone ecosystem (and by extension, the threat landscape) extends beyond the app store. when chris di bona attempted to downplay the threat malware posed to android devices by pointing to google's efforts to keep their android marketplace clean, a number of folks were quick to point out that the android ecosystem extends beyond google's android marketplace, so it seems strange that people would forget the same line of reasoning applies to the iphone as well.
one other thing (well, the only other thing, really) that mikko said was:
@imaguid ...and to top it all: we couldn't do anything about iPhone malware anyway, as Apple won't allow Antivirus products to iPhone.

and you know what? why should they allow them when there's apparently "No malware for iPhones"? whether or not there is malware for the iphone, apple doesn't want people to think there is. there is an old idea that computers can be as easy to use as an appliance (like a toaster). this idea is very appealing: it promises computers that just work, computers that don't get malware, computers that are easy and safe and worry-free. that promise is part of the secret sauce behind apple's marketing, but if apple allowed AV products in, it would dispel the illusion of the appliance computer and apple's products would lose their lustre. it's very convenient, then, that AV vendors are willing to be complicit in apple's marketing by repeating the claim that there's "No malware for iPhones".
but such unqualified claims are, as mikko has revealed, not technically true. it's not that there's no malware for iphones, it's that there's no malware in the iphone app store.
but wait, is that really true? is there no malware in the app store at all? i'm not sure that's true when we've recently been made aware of apps in the app store that collect and send personal information to a remote server without the user's knowledge or consent. but it's about time i turned my attention towards the much more verbose and nuanced discussion that sean sullivan and i had on the subject. perhaps he can shed light on why these personal info stealing apps shouldn't be considered malware. while mikko didn't question the classification of flexispy as malware, sean informed me that f-secure no longer calls it malware.
@imaguid @mikko But they then added an installation interface, and we have since categorized it as riskware.

that's right - in spite of the fact that it is designed and marketed as a tool for spying on other people, it is not classified as spyware or malware because it was given an installation interface - meaning that the attacker has to have physical control of the phone for at least as long as it takes to install an app. now, on the desktop that might be a meaningful mitigating factor, but on mobile devices, where physical access is so much easier to achieve? come on...
why exactly that stops it from being malware in general, or spyware in particular, in the context of mobile device security i still can't fathom, but sean offered two things by way of explanation. one was a concern over being sued... by malware vendors. this rationale is something i heard from dr. solomon years and years ago, but i have to admit i had hoped the industry had become less spineless in the interim. i guess that was too much to hope for. google may stand up to the government on behalf of its users (perhaps not always, and perhaps it doesn't always succeed, but it has tried), but apparently anti-malware vendors only stand up for their users when there's zero risk they'll be challenged.
the other thing he offered was the following definition of spyware from google:
Software that self-installs on a computer, enabling information to be gathered covertly about a person's Internet use, passwords, etc.

apparently it's not enough that the software spies on you for it to be called spyware - it has to "self-install" as well. now i'm sure i must be missing something, because this definition seems to exclude anything where the victim is socially engineered into installing the software (it's hard to call it self-installing if the victim is the one installing it). it also seems to exclude the particular trojan horse case where the software actually does perform the function it claims to, so the payload is additional functionality rather than strictly misrepresented functionality. a game that also steals passwords, a text editor that also sniffs network traffic, webcam software that just happens to send the video stream to a second undisclosed location in addition to the intended recipient - all of these are examples of software that ought to be called spyware but which the victim knowingly installs (because the undesirable functionality is undisclosed) and which thus fails the "self-install" criterion. this is precisely the type of situation users of the photo sharing iphone app called path faced.
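to make that gap concrete, here's a minimal, purely illustrative python sketch (the App record, its field names and the example apps are all hypothetical - this is not any vendor's actual classification logic) contrasting the quoted "self-install" definition with a purely functional one:

```python
from dataclasses import dataclass, field

@dataclass
class App:
    name: str
    self_installs: bool                 # installs without any user action?
    covert_behaviours: set = field(default_factory=set)

def is_spyware_by_quoted_definition(app: App) -> bool:
    # the quoted definition requires BOTH covert data gathering
    # AND self-installation
    return app.self_installs and bool(app.covert_behaviours)

def is_spyware_functionally(app: App) -> bool:
    # a functional definition: spying behaviour alone is enough
    return bool(app.covert_behaviours)

# a trojanized game the victim knowingly installs
game = App("fun_game", self_installs=False,
           covert_behaviours={"steals passwords"})

# a drive-by installer that also spies - both definitions agree here
drive_by = App("drive_by", self_installs=True,
               covert_behaviours={"logs keystrokes"})

print(is_spyware_by_quoted_definition(game))  # False: fails "self-install"
print(is_spyware_functionally(game))          # True: it spies, so it's spyware
print(is_spyware_by_quoted_definition(drive_by))  # True
```

the trojanized game is exactly the path-style case: the victim does the installing, so the "self-install" test waves it through while the functional test flags it.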
now, sean also pointed me towards the anti-spyware coalition's risk model description document. i had hoped it would help me learn more about this "self-install" concept that sean assured me was part of an industry agreed-upon standard definition. things didn't turn out that way, since the term "self-install" doesn't appear in that document, but the topics of installation and distribution do figure prominently in the contexts of both risk factors and consent factors. unfortunately this document from 2007 appears once again to be geared to desktop computing rather than mobile computing. that's probably not too surprising considering it's 5 years old now, but it does highlight the age-old problem of letting context into the classification process. mobile devices are easier to gain illicit physical access to, and are shared more freely (and more frequently) in social circumstances by their owners. consent at the point of install carries far less significance as a risk mitigation for mobile devices. furthermore, consent at the point of install pretty clearly drops the ball in the case of trojans, because it's not necessarily fully informed consent.
as the risk model description document demonstrates, somewhere along the line the industry gave up on basing its classification system on functional definitions. sean insists that this is a "stricter process", but i think it's more correct to say that it utilizes more criteria than a functional definition system would. utilizing more criteria doesn't always lead to a stricter process, because not all criteria are created equal and, at least in the case of the risk model description document, some of those criteria are used to create exceptions (which are generally not the hallmark of a strict process).
one of the last things sean wondered was how the AV industry could possibly use my (supposedly) broader definition(s) and not be accused of FUD. now, aside from the fact that the industry is already accused of FUD (and worse) pretty much regardless of what it does, i think it's important to spell out one of the key differences between a functional definition and the kind of definitions sean sees in use. definitions that include contextual evaluation are judgements; they engender choice and leave room for agendas. a functional definition involves no judgement - it is purely descriptive of the functional capabilities of the thing being classified. you can no more be blamed for saying software that spies is spyware than you can for saying water is wet or the sky is blue. there's no silver bullet to make accusations go away, but if you take judgement out of the equation it should render those accusations baseless.
so why is all of this important? because it appears that we've somehow stumbled upon a way in which malware can be classified as "riskware" instead of malware. nobody hears about the riskware classification; nobody cares. they hear "No malware for iPhones" and they shut the rest out, because that's all they needed to know (or at least, according to traditional notions of malware, that should have been all they needed to know). classifying malware as something other than malware seems to be what's enabling people to make the "No malware for iPhones" claim - like some kind of terminological shell game.

"No malware for iPhones" makes people think the devices are safe and worry-free, but there are risks, and not just for those who jailbreak. "No malware for iPhones" is creating a false sense of security, and with the revelations that have been made about apple's abject failure to lock down a particular type of personal information, and the near-ubiquitous exploitation of that failure by app developers, it seems like the stuff of snake-oil.
i tend to think that when people face risks they want to know about them rather than be told there's nothing to worry about, and i tend to think that when those risks come in the form of software that acts against the user's interests, informing the user is the AV industry's job. some people don't want that to happen; they want their own interests to take precedence. if the AV industry allows that to happen through inaction (or worse, facilitates it) then they don't deserve the reputation they have for protecting the user. the industry may not be able to put AV software on iphones yet, but they can certainly do a better job of raising awareness of the risks than going around telling people there's "No malware for iPhones". maybe when public awareness is raised apple will change their ways.
|image from secmeme.com|