Friday, May 27, 2011

the whitelister's dilemma

you remember marcus ranum's 6 dumbest ideas in computer security? #2 on that list was enumerating badness (aka blacklisting), which he believed should be replaced with enumerating goodness (aka whitelisting).

ignoring the fact that his underlying assumptions about the relative sizes of the malware and legitimate software populations were incredibly wrong*, there's a much more fundamental problem with turfing blacklisting in favour of whitelisting:
the only meaningful criterion we have for deciding something is good or safe is that we haven't found anything bad in it yet.
oh sure, you could assume that a system is currently malware-free and start your whitelisting regimen from that (potentially pre-pwned) state. you could assume that software direct from the vendor is safe to add to a whitelist too (because microsoft never accidentally distributed infected materials, right?). you could even assume that things that are digitally signed are safe (it's not like stuxnet was digitally signed or anything).

of course, we know what happens when you assume. the reality is that even if we do adopt whitelisting we have to continue enumerating badness for the purposes of maintaining the whitelist. whitelisting stands on the shoulders of blacklisting - it has to; our only other criteria are assumptions that have all been proven false in practice.
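to make that dependency concrete, here's a minimal sketch (in python, with hypothetical names - real products obviously use full scanners and signature databases, not a set of hashes) of a whitelist admission check. notice that the only way it can say "safe to whitelist" is by consulting a blacklist first:

```python
import hashlib

# hypothetical blacklist of known-bad sha256 hashes; a stand-in for a
# real scanner's signature database (here it contains the hash of an
# empty file, purely as a placeholder "known bad" entry)
KNOWN_BAD = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}

whitelist: set = set()

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def admit_to_whitelist(data: bytes) -> bool:
    """add a file to the whitelist only if a blacklist scan finds nothing.

    the admission decision still depends on enumerating badness -
    "no badness found yet" is the only test available.
    """
    digest = sha256(data)
    if digest in KNOWN_BAD:
        return False  # blacklist hit: refuse to whitelist
    whitelist.add(digest)  # "clean so far" - an assumption, not a proof
    return True
```

the point of the sketch is the shape of the decision, not the implementation: whatever the admission test is, it reduces to "did the blacklist (or scanner, or analyst) find anything bad?", which is exactly enumerating badness.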

as such, whitelisting can never replace blacklisting, it can only ever complement it.

[* according to figures from whitelisting vendor bit9 that i mentioned here, and frankly the idea of a malicious few coders out-producing the benign many seemed silly anyways]