Monday, July 30, 2012

the folly of offensive cyberwarfare

i often feel like i can't speak freely about cyberwarfare (due almost entirely to my principles about not helping or giving ideas to those who make things worse, be they criminals or warmongers), but it's hard to deny the importance of the subject, and frankly when i read what others have written i can't help but think they haven't really thought things through very well.

when it comes to the development and use of digital weapons there are a couple of key points whose implications need to be understood and kept in mind. the first of these is the problem of attribution. the difficulty in attributing the source of a computer attack is both tactically advantageous, and strategically constraining. the advantages should be obvious - you can attack an opponent without the opponent knowing who is responsible for the attack (unless you screw up and reveal yourself). the problems begin, however, when you consider that the opposite is also true - one or more of your opponents can attack you without you being able to tell who it was.

consider what that means. if you can't tell who is attacking you, how can you possibly retaliate? imagine you're blindfolded, your ears are plugged, you're handed a gun, and stuck in a room with other people who may or may not also have blindfolds, earplugs, and guns. if someone starts shooting at you, how can you realistically return fire to defend yourself without knowing where to shoot? without the ability to target your opponent you cannot retaliate, you cannot end him before he ends you. further, when the threat of retaliation becomes empty like this, deterrence no longer works. as a result, so-called cyberweapons have no defensive value.

in the absence of attribution, a conflict must consist entirely of first strikes. there is no retaliation, there is no deterrence, there is no scaring an enemy off by showing what you can do, there is no point to visibly stockpiling armaments. that is significantly different from most conventional models of warfare. this is one of the reasons why cyberwarfare must only ever accompany traditional warfare - only then can combatants avoid firing blindly in the dark.

another important aspect of digital weapons to keep in mind is the fact that they're digital. they're code, bits and bytes inside a computer. what is the one thing computers are exceptionally good at doing with those bits and bytes? copying them. imagine a world where it's expensive to develop guns and tanks and bombs from scratch, but it costs virtually nothing to copy them. that is the world of cyberwarfare, and that is a world that actually does not favour the attacker, per se, but rather one that favours the forager (one of the things sun tzu teaches is to forage on the enemy) because s/he gets the most benefit (a sophisticated digital weapon) for the least cost.

when weapons cost a lot to develop from scratch but very little to copy, what conditions would make their development and use make sense? if you could eliminate the possibility of copying and re-use, or if the weapon assured you a decisive victory over your opponent, then it wouldn't matter that it falls into your opponent's hands the moment you use it. unfortunately, in the real world a nation has many opponents. they cannot all be fought at once, and so a decisive victory against all whose hands such a weapon may fall into is not possible.

what's more, not all of those opponents are necessarily other nation states. the low cost of copying weapons means that the barrier to entry on this battlefield is lowered and more mundane opponents like terrorists or even sophisticated criminals can join the fray. as you can well imagine, those kinds of opponents are far less disciplined and restrained than a nation state would be.

our best example of a digital weapon thus far is stuxnet. it's believed to have cost millions of dollars and many man-years of effort to develop, and now anyone who wants a copy can download it for free from the internet. i would be remiss if i failed to point out that by now stuxnet is pretty well neutered (since the windows vulnerabilities it exploited have been patched and most anti-malware will detect its presence) and it would actually take a fair bit of time and money to replace the neutered bits so it could be re-used; but there was a time before that was true, when stuxnet was already in many people's hands and could have been re-used at a much lower cost. as strange as it may sound, the malware's discovery and subsequent neutering actually served to mitigate the potential for its re-use. its creators are lucky it happened before the malware could be re-used against them, their allies, or other interests they might have. that might not be the case next time.

it's a peculiar irony that the people most capable of developing digital weaponry (the technologically advanced and dependent) are the same people who have the most to lose if such weaponry is used against them. this should make it obvious that defense, not offense, is where one's money and effort would be better spent. just so i'm not that guy who makes overly general, hand-wavy suggestions, here are some ideas that are more specific than just "you should do defense":
  • fault tolerant designs
    • redundancy is already something we know how to do, but we don't always do it well (as the 2003 blackout clearly demonstrated). the internet is said to be so fault tolerant that if part of it goes down the rest will just route around it. there are many paths to the same destination. obviously that's a property we want for power, communications, water, etc. it's something we should be designing for and, unfortunately, because it costs money it's something we need to pay for.
    • ease of recovery is something we perhaps don't think quite as much about. how easy is it to replace physical equipment that no longer operates as intended? how easy is it to overwrite logical systems from backups? how many minutes, hours, or days does it take? aiming to minimize that time also serves to minimize the impact of anything unfortunate happening to the system in question.
  • system hardening
    • vulnerability research and patching is something that already enjoys a certain measure of success in consumer and enterprise environments. if a nation wants to protect its critical infrastructure then perhaps more money and energy should be poured into researching vulnerabilities in that critical infrastructure.
    • eliminating or rethinking external connections (including both network connections as well as removable media) basically stands in direct opposition to the trend of hooking more and more of our most important systems up to one of the most dangerous networks on the planet (the internet). as with most things, the business incentives that are driving the current trend need to be accounted for. the cost-saving benefits of remote connections are understood, but there are other ways of achieving that goal without resorting to the internet - that's simply the cheapest/easiest option.
    • whitelisting of code and possibly even data on critical infrastructure systems, because quite frankly why should new unknown material be introduced to these systems? it may make sense to occasionally and in a very controlled way apply fixes or make changes corresponding to changes in the industrial processes those systems are a part of, but in general those machines should be unchanging and that should probably be enforced. as a corollary, eliminating dual use is probably a good idea too. there's no reason you should be writing your TPS report on a machine that can control whether the lights stay on. 
  • early warning detection
  • evasion
    • disinformation can be useful in a couple of ways. it can raise the cost of successfully performing an attack by tricking the attacker into doing useless things, and it can also trick the attacker into doing something that sets off an alarm (i.e. they walk into a trap).
    • decoy systems that look and act for all intents and purposes just like the real ones can reduce the impact and success of attacks, especially if they have the same warning sensors the production systems do, by turning the problem of attacking the right system into a game of chance for the attacker. holding out baits for the attacker to reveal their presence and/or intentions can certainly confer advantages on a defender.
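to make the redundancy point a little more concrete, here's a toy sketch (in python, with a made-up topology) of the "route around it" property: a ring network stays connected between any two nodes even after any single other node fails, while a straight line does not. the node names and graph are purely illustrative.

```python
from collections import deque

# hypothetical example topology: a five-node ring, where every node
# has two neighbours, so there are always two paths between any pair
RING = {
    "a": {"b", "e"},
    "b": {"a", "c"},
    "c": {"b", "d"},
    "d": {"c", "e"},
    "e": {"d", "a"},
}

def reachable(graph, start, goal, failed=frozenset()):
    """breadth-first search from start to goal, skipping failed nodes."""
    if start in failed or goal in failed:
        return False
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in graph[node] - seen:
            if nxt not in failed:
                seen.add(nxt)
                queue.append(nxt)
    return False

def survives_any_single_failure(graph, start, goal):
    """can traffic still get from start to goal after any one other node fails?"""
    others = set(graph) - {start, goal}
    return all(reachable(graph, start, goal, failed={f}) for f in others)
```

a chain topology ("a"-"b"-"c") fails this test the moment "b" goes down, which is exactly the single point of failure that redundant designs (and the money spent on them) are meant to eliminate.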
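the whitelisting idea above can be sketched very simply: before anything runs on (or is copied to) a critical system, its cryptographic hash is checked against a fixed list of known-good digests, and anything unknown is refused. this is a minimal illustration, not a hardened implementation - the allowlist contents and file paths here are hypothetical, and in practice the enforcement would live in the operating system, not a script.

```python
import hashlib

# hypothetical allowlist: sha-256 digests of the only content
# permitted on the control system (this digest is of the bytes b"foo\n")
ALLOWED_HASHES = {
    "b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c",
}

def sha256_of(path):
    """compute the sha-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_allowed(path):
    """return True only if the file's digest is on the allowlist."""
    return sha256_of(path) in ALLOWED_HASHES
```

the key property is the default-deny posture: new, unknown material simply doesn't match anything on the list, so it never gets introduced to the system in the first place.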
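and the decoy idea reduces to something equally simple at its core: a listener that no legitimate user has any reason to touch, so that any connection at all is an alarm. the sketch below (in python, localhost only, arbitrary port) shows just that tripwire property; a real decoy would of course also have to look and act like a production system to be convincing.

```python
import datetime
import socket

def open_decoy(port=0):
    """bind a decoy listening socket; port 0 lets the os pick a free port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(5)
    return srv

def watch(srv, max_events=1):
    """accept connections, logging each one as an alarm event."""
    events = []
    while len(events) < max_events:
        conn, addr = srv.accept()
        stamp = datetime.datetime.now().isoformat()
        # nothing legitimate ever connects here, so every hit is suspect
        print("ALARM: connection from %s at %s" % (addr[0], stamp))
        events.append((stamp, addr[0]))
        conn.close()
    srv.close()
    return events
```

deploy a handful of these alongside the real systems and the attacker's problem of picking the right target becomes, as described above, a game of chance - with every wrong guess ringing a bell.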

i've made a few veiled (and not so veiled) references to sun tzu. while some people may argue that "the art of war" is over-played and not particularly relevant to information security, when it comes to warfare of any kind i think it's very relevant:
Sun Tzu said: The good fighters of old first put themselves beyond the possibility of defeat, and then waited for an opportunity of defeating the enemy.
that is to say, of course, that we need to take up a defensible position first before we start attacking. by most accounts (including president obama's) we aren't there yet.