application whitelisting, the practice of keeping a list of known good applications (a whitelist, as opposed to a blacklist) and only allowing those applications to run, is not a new idea... it's certainly a good idea, but not a new one... nick fitzgerald, for example, has been calling for the development and deployment of just this type of technology for years now... i myself use the application launch control functionality (a sort of whitelist) in kerio personal firewall - frankly, that part interests me more than the actual firewall...
today, however, there was a post on the nod32 news blog about whitelisting that i think underlines the need to delve into the strengths and weaknesses of the technology...
at the most fundamental level an application whitelist needs to know 2 things: what sorts of things are programs, and which of those programs are known to be good...
determining what is a program and what isn't is not as simple as it sounds - in fact, in the general case it's an intractable problem (unless someone solved the halting problem without telling me)... that's not to say we can't make that determination accurately most of the time, but we're basically limited to detecting known program types - new types of script programs and the like are always going to come along and always going to be missed at first because they're too new... at any rate, this level of detection is always hard-coded into the whitelist system by the developers - there's no real way to add new program type detection on-the-fly on the end user's machine...
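to make that concrete, here's a minimal sketch (in python) of what hard-coded program-type detection might look like... the extension list and magic-byte signatures below are illustrative examples only, not what any real product ships with...

    # a minimal sketch of hard-coded program-type detection... the
    # extensions and signatures here are illustrative, not exhaustive
    import os

    PROGRAM_EXTENSIONS = {'.exe', '.com', '.bat', '.vbs', '.js', '.scr'}
    MAGIC_SIGNATURES = (
        b'MZ',       # windows executable header
        b'\x7fELF',  # unix ELF executable header
        b'#!',       # script with an interpreter line
    )

    def looks_like_program(path):
        # matches *known* program types only - a brand new script
        # type would slip right past this check
        if os.path.splitext(path)[1].lower() in PROGRAM_EXTENSIONS:
            return True
        with open(path, 'rb') as f:
            header = f.read(4)
        return any(header.startswith(sig) for sig in MAGIC_SIGNATURES)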
where things really get interesting is when we're trying to identify the known good programs... there are 2 main paradigms we can follow: either the 'goodness' is determined by the end user and is specific to his/her machine only, or the 'goodness' is determined by some central body somewhere that decides what's good for everyone... it's also possible to combine these two but let's look at them individually first...
in decentralized whitelist management (where the end user decides what's good) the list of known good things is relatively small and can be updated whenever the user needs to, but there is a serious problem with trying to enable end users to accurately determine the trustworthiness of new software... most people just don't have the expertise or the inclination to perform the research necessary to find out if something should be allowed to run on their machine... frankly, if they were any good at making those kinds of decisions, email worms (and just about every other type of malware out there) would never have become as big a problem as they are today...
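here's what that looks like in the simplest possible terms - a minimal sketch where the user is the sole authority... the prompt-the-user step is exactly the weak point described above...

    # a minimal sketch of decentralized whitelist management... the
    # decision rests entirely with the end user
    import hashlib

    local_whitelist = set()  # hashes of programs the user has approved

    def file_hash(path):
        with open(path, 'rb') as f:
            return hashlib.sha256(f.read()).hexdigest()

    def may_run(path):
        h = file_hash(path)
        if h in local_whitelist:
            return True
        # the user is the sole authority here - the quality of this
        # decision depends entirely on his/her expertise
        if input('allow unknown program %s? [y/N] ' % path).lower() == 'y':
            local_whitelist.add(h)
            return True
        return False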
in centralized whitelist management (where a central body decides what's good) it's possible to have lots of expertise on hand to determine whether a given program is good or not, but the list of known good things needs to be huge in order to cover everything that everyone might use... one of the arguments against known virus scanning and in favour of whitelisting is that the set of known bad things (a known virus scanner essentially implements a blacklist) is very large and ever-growing - but a centralized whitelist has this same problem in spades: determining if something is good is just as hard as determining if something is bad, there are far more good programs out there than there are bad ones, and the list grows even faster... in addition, the central organization will undoubtedly have finite resources to work with, so adding updated programs (a number of programs out there do tend to have regular security updates) to the global whitelist may not happen in as timely a fashion as we might like... finally, it puts the central organization in a position where it could exercise undue influence over which legitimate applications get to run on end users' machines - presence on the whitelist would effectively be a certification, with many of the same undesirable properties as a system where a certifying authority digitally signs the known good programs so that people know which ones they can trust...
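the mechanics of the lookup itself are simple enough - it's the care and feeding of the central list that's hard... a minimal sketch, assuming a hypothetical lookup service (the URL is made up) that answers 200 for known good hashes...

    # a minimal sketch of centralized whitelist management... the
    # lookup endpoint here is hypothetical
    import hashlib
    import urllib.error
    import urllib.request

    GLOBAL_WHITELIST_URL = 'https://whitelist.example.com/lookup/'  # hypothetical

    def file_hash(path):
        with open(path, 'rb') as f:
            return hashlib.sha256(f.read()).hexdigest()

    def centrally_approved(path):
        # note that any update to a program changes its hash, so the
        # central body has to re-vet and re-list every new version
        try:
            with urllib.request.urlopen(GLOBAL_WHITELIST_URL + file_hash(path)) as resp:
                return resp.status == 200
        except urllib.error.URLError:
            return False  # not listed (or unreachable) - which doesn't mean it's bad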
combining these two into a hierarchical whitelist management system (where the global whitelist helps the end user determine the trustworthiness of programs most of the time) would seem to give the best of both worlds, but that's not necessarily the case... giving the end user the power to add a program to his/her local whitelist gives him/her the power to make the same mistakes s/he'd be making with a fully decentralized whitelist system - since the global whitelist isn't always going to be completely up to date, a piece of malware need only advertise itself as the latest security update to some widely deployed program (part of the operating system, for example) in order to get the user to add it to the local whitelist... also, giving the end user the power to add programs to his/her local whitelist doesn't obviate the need to make the global whitelist as thorough and up-to-date as possible, nor does it make that task any easier or less time-consuming/resource-intensive... further, the absence of a program from the global whitelist could still communicate a lack of trustworthiness to end users and bias them against a particular legitimate application...
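to see where the hole is, consider a minimal sketch of the hierarchical scheme (self-contained, with a stub standing in for the central lookup sketched earlier)... everything the global list misses lands on the user's desk, where all the old mistakes are still available...

    import hashlib

    local_whitelist = set()

    def file_hash(path):
        with open(path, 'rb') as f:
            return hashlib.sha256(f.read()).hexdigest()

    def centrally_approved(digest):
        return False  # stub for the central lookup - imagine a stale list

    def may_run(path):
        h = file_hash(path)
        if centrally_approved(h) or h in local_whitelist:
            return True
        # here's the hole: a program missing from a stale global list
        # (say, malware posing as the latest "security update") falls
        # through to the user, who can make all the same old mistakes
        if input('%s is not on the global whitelist - allow anyway? [y/N] ' % path).lower() == 'y':
            local_whitelist.add(h)
            return True
        return False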
even if you could solve all these problems, it's still possible to make good programs do bad things... and i'm not just talking about buffer overflows and other software exploits - the whitelist can't know that you want Format.com to run sometimes but not others... it can't know that Regedit.com should import the *.reg file with your personal configuration preferences in it but not the *.reg file that disables the whitelist on the next reboot and causes the download and execution of some piece of malware... it can't know that your SQL server should run the query that selects everything from your customer table but not the one that self-replicates all through that same table... it also does nothing to stop malware that exists only in memory, nor malware that executes outside the scope of the operating system (like bootsector malware)...
all that said, it should be fairly obvious that if you only let known good programs execute, then the bad programs (known and unknown alike) generally won't be able to run on your system... with a whitelist in place monitoring the attempted launching of applications, you'll also get alerted when something you don't know about tries to run... finally, with a relatively small and static set of applications allowed to run, the security of the system is more easily analyzed and modelled...
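for completeness, here's a crude sketch of that monitor-and-alert behaviour... it assumes the third-party psutil package and just polls the process list (a real product would hook program launches before they happen), and the paths in the allowed set are hypothetical...

    import time
    import psutil  # third-party; a real product hooks launches in the kernel

    ALLOWED = {'/usr/bin/bash', '/usr/bin/vim'}  # hypothetical approved programs

    seen = {p.pid for p in psutil.process_iter()}
    while True:
        for proc in psutil.process_iter(['pid', 'exe']):
            if proc.info['pid'] in seen:
                continue
            seen.add(proc.info['pid'])
            if proc.info['exe'] not in ALLOWED:
                print('ALERT: unknown program tried to run: %s' % proc.info['exe'])
                try:
                    proc.kill()  # block it (after the fact, in this crude version)
                except (psutil.NoSuchProcess, psutil.AccessDenied):
                    pass
        time.sleep(1)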
application whitelisting, like integrity checking/change detection before it, may be technically superior in theory, but in practice it is no panacea... it's not a magical cure-all, and it doesn't eliminate the need for conventional anti-virus/anti-malware scanning... like all anti-malware technology, it works best in conjunction with complementary technologies and techniques...