marcus ranum has finally found an execution control app (in other words application whitelisting software, the likes of which i've written about before) that he's happy with and apparently thinks could be the final nail in the coffin of anti-virus software - read it here...
yes, marcus is one of those people who thinks anti-virus is dead, or if it isn't it should be... he's taken shots at it before under the guise of the stupidity of enumerating bad things instead of good things (never mind the fact that there are far more good things in the world than bad) and in the above article refers to it as a "default permit" technology... clearly, marcus likes to think in firewall terms...
despite that, calling anti-virus a "default permit" type of technology is a rather gross over-simplification... anti-virus software doesn't permit things, it deals with what it knows and what it knows is bad things - when it encounters one of those things it alerts you to the fact so that you can try to protect yourself from it and/or avoid inadvertently doing something dumb... what you do with everything else is entirely up to you, it's not anti-virus software's business to get involved in that... simply put, anti-virus software is a type of blacklist technology...
now, it shouldn't take a rocket scientist to figure out that blacklists and whitelists actually complement each other, especially here... yes, only allowing known good applications to run is more secure than simply filtering out the known bad ones, but how do you determine which ones to allow? predefined blacklists and whitelists provided by people with expertise allow us to shrink the scope of this problem by reducing the number of things we'd need to determine the safety of unaided - but marcus would have us throw out one (or both, considering his reaction to prevx) sources of anti-malware intelligence and figure that sort of thing out for ourselves...
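to make the complementary relationship concrete, here's a minimal sketch of an execution-control decision that consults both kinds of list... the hash sets are hypothetical placeholders and real products are obviously far more involved than a pair of lookups, but the shape of the logic is the point...

```python
import hashlib

# hypothetical feeds - in practice the blacklist would come from an
# anti-virus vendor and the whitelist from a known-good software inventory
KNOWN_BAD = {"<sha256 of a known bad sample>"}
KNOWN_GOOD = {"<sha256 of a known good program>"}

def classify(path):
    """hash an executable and check it against both lists."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest in KNOWN_BAD:
        return "deny"     # the blacklist catches the known bad
    if digest in KNOWN_GOOD:
        return "allow"    # the whitelist vouches for the known good
    return "unknown"      # the residue the user must judge unaided
```

each list shrinks the "unknown" pile the user is left to research on their own - throw either one away and that pile just gets bigger...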
maybe i'm wrong, but i think most people use computers to get useful work done, not to waste their time researching whether executable X is good or bad when that information is often already available in downloadable form... most people aren't very good at that anyways (if they were, the malware problem would never have gotten as big as it has) and marcus is clearly no exception, since he needed software to tell him that windows probably wouldn't operate properly if everything under the windows directory were denied the ability to execute...
when deciding what's ok to run the user needs help, especially with windows... microsoft has put a lot of magic into windows, by which i mean there's a lot going on behind the scenes that they make absolutely no attempt to be transparent about... without expert knowledge it's hard to know which of the hundreds of executables that get included with windows need to be allowed to run in order for windows to operate properly...
marcus has apparently learned his lesson about the windows directory, but clearly not about the importance of making use of intelligence that has already been gathered... also not about knowing the enemy, and not about knowing oneself - by which i mean he appears not to have considered the weaknesses of application whitelisting technology... there are, and will always be, types of executable content that application whitelists won't be able to recognize (for example, new scripting languages won't be recognized by old whitelist software)... there are, and will probably always be, ways to get software that has already been authorized to do your dirty work for you (for example, internet exploder combined with some of those improperly labeled safe-for-scripting activex controls - or even buffer overflow exploits)... execution control software has a fairly limited notion of how execution can happen, and that notion doesn't actually reflect reality - it can handle most of the standard ways, but would it have saved you from a WMF or VML exploit? would it have prevented macro viruses from ever becoming a problem? updating a blacklist to catch these things is relatively straightforward since you're just telling it additional things to look for, but updating a whitelist to deal with them involves making additions to how it looks for things being executed...
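to see why that last part is harder, consider a toy model of an execution control hook... the names and structure here are invented for illustration, not taken from any real product...

```python
import hashlib

AUTHORIZED = {"<sha256 of each approved executable>"}  # the whitelist

def on_process_create(image_path):
    """naive execution control: called when the OS launches a new program."""
    with open(image_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in AUTHORIZED  # allow only known-good program images

# what a hook like this never sees:
#   - a macro running inside an already-authorized word processor
#   - a script handed to an already-authorized interpreter
#   - shellcode injected into an authorized process by a buffer overflow
# catching those means changing how execution is *detected*, not just
# adding entries to the list
```

the list is easy to grow; the hook is not...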
whitelisting is not the great stand-alone solution marcus thinks it is... it's a great complement to an anti-virus application, though, and an excellent technique to apply provided you can deal with figuring out what's safe to authorize...
15 comments:
You could make an argument that tools like tripwire are a form of 'application whitelisting', yet systems with tripwire still get attacked successfully.
thanks, rob, that's a good example to know about...
Hi Kurt,
We are doing something in the MLS/TOS space that converts the inside network environment into a user-centric, default-deny environment, so we tend to think along the lines of Marcus, but not fully, as we do not throw out the AV defenses either. There is another important difference though. We enforce security policies at the kernel level, so we are not just talking about application whitelisting.
What this does is create a core layer that forms trust channels and acts holistically to ensure that security policies are enforced, or machines are not allowed to execute. For example, rather than blocking malware, we prevent an owned machine (say, one running a keylogger) from sending its payload home.
Any thoughts on this approach?
@rob lewis:
off the top of my head, no i don't have any thoughts on it... i'd need to know more...
as i think about it more though:
if i understand your description, you're talking about a whitelist that, instead of listing what applications can execute, lists what kernel level behaviours are allowed and in what contexts...
my first impression is that such a system could be very difficult to manage... i would assume there are either some very good machine learning facilities to deal with arbitrary programs, or some of the basic things we take for granted in computing (like the ability to run arbitrary programs) won't hold for such a system...
my next impression is that computers can only accurately determine context in a distinct subset of the possible circumstances... that could allow the whitelist to be gamed in the sense that something that is allowed can still be abused... nothing is perfect, after all...
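to pin down what i think is being described, here's a toy model of a behaviour whitelist... the subjects, operations, and objects are all invented for illustration - a real kernel-level policy would presumably be far finer grained than this...

```python
# toy behaviour whitelist: (subject, operation, object) tuples, default deny
POLICY = {
    ("httpd",   "read",    "/var/www"),
    ("httpd",   "connect", "port:80"),
    ("backupd", "read",    "/home"),
}

def authorized(subject, operation, obj):
    """is X allowed to do Y to Z? anything unlisted is refused."""
    return (subject, operation, obj) in POLICY

# the gaming problem in concrete terms: httpd is allowed to connect on
# port 80, so a compromised httpd can still exfiltrate data over port 80 -
# the allowed behaviour itself gets abused, and the policy can't tell
```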
at any rate, it sounds very different from anything i've encountered before...
I think you have basically got it, but it is not difficult to manage. This is an innovative product and I can guarantee you have not encountered anything like it before, because that is the comment everyone makes who sees it.
I wish I was able to fill you in more, but as I am not technical (I'm on the biz dev side), I am the wrong person to explain the nitty gritty. I read blogs like yours to help understand how to position ourselves amongst the status quo. The things I read on your site are new terrain for me, and your knowledge seems very extensive. Would be happy to pass on links if you tell me how. (rob at googgun dot com)
@rob lewis:
well, actually rob, you've already posted a link to your organization twice when you included your web page in the identity info for posting... i took a look and i think i figured out which product you're referring to, though what is findable on the site seems a little short on detail...
if there's a white paper describing the technology in more detail that is available to the public then i don't have a problem with you providing a link or URL to that right in the comments...
Again I have to point out the fact that "corporate" and "home users" have very different needs. In a corporation where you have a limited number of applications, a whitelist policy is a very, very wise thing to use which can prevent 99% of current malware from running at very low cost.
Now in the case of home users, you need a blacklist/greylist approach because they want to be able to try out new pieces of software frequently.
Neither of the two approaches creates a 100% secure environment, and each of them makes sense in some cases and not others.
@cdman83:
i think you're falling into the same trap that marcus did... you're thinking of applications instead of programs...
when you count up the number of actual programs that have to go on the whitelist it still becomes a pretty big list, even in a corporate setting... people don't consider the fact that there are actually many executables that are part of the OS or part of their word processor or part of their [insert product here]... they think they're just dealing with a handful of things but they're not...
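as a quick way to see the scale of the problem, a few lines of python can count the executable files on a system... the extension list here is just a rough approximation of what counts as executable content on windows, so treat the result as a lower bound...

```python
import os

# file extensions that (roughly) mark executable content on windows
EXEC_EXTENSIONS = {".exe", ".dll", ".sys", ".ocx", ".cpl", ".scr", ".com"}

def count_executables(root):
    """walk a directory tree and count the executable files in it."""
    total = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in EXEC_EXTENSIONS:
                total += 1
    return total

print(count_executables(r"C:\Windows"))  # typically thousands, not a handful
```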
the fact that those programs will get changed/updated and need to be re-authorized over time is only exacerbated by unexpectedly large whitelists...
Hi Kurt,
Thanks for the go ahead. There is a fine line between discussing a technology and blog spam.
These links are on our site, but can be hard to find, unfortunately.
The theoretical basis for this approach can be found here:
http://www.googgun.com/pdf/gti_trustifier_design.pdf
The security model is introduced here:
http://www.googgun.com/pdf/gti_trustifier_security_model.pdf
These descriptions basically apply to a Linux host. How our appliance hooks into a mixed platform or MS environment, I can't tell you, being a non-techie, but it does. For stronger trust statements an agent may be used with MS boxes, but it is not always necessary, depending upon the protection profile. Just using Active Directory may be enough in many cases. A small MS network can possibly be protected from a single Trustified firewall.
Cheers,
Rob
Trustifier Pt 2.
Further explanation of Trustifier, Kurt. I hope you will have an opinion for me some time after you ponder it a bit.
According to its designer, Trustifier secures all systems and networks against all traditional outside threats, such as viruses, hackers, malicious code, spam and other nasties. Its specific purpose though, is to protect from insider threats, which is technically much harder to do.
To do so, Trustifier must secure in a mathematically complete, forensically defensible manner and protect its security functionality; Trustifier is able to defend its own integrity. This is the essence of trusted computing, as you must be able to trust the data if you cannot place 100% trust in your users.
Risk mitigation is only possible when the risk itself is measurable. Unless the security system that prevents security breaches is mathematically complete, there is no way to measure the probability of success or failure of its essential protection functions.
This explains the necessary reliance on estimates and probability in security metrics and discussions of ROI, and the lack of agreement on the methods used.
The theoretical paper I linked in my last post discusses that mathematical basis.
Cheers,
Rob
@rob lewis:
i've had a look at one of the papers (i get a timeout when trying to get the other one) and from what i can tell my simplified interpretation was basically right... the system concerns itself with authorization - is X allowed to do Y...
it's much more complicated, and much more fine-grained, than traditional application whitelisting but it shares the same fundamental shortcoming - it only cares about whether X is allowed to do Y, not whether X should be allowed to do Y...
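put in code terms (a contrived sketch, with made-up names), the distinction looks like this...

```python
# the system can answer "is X allowed to do Y?" mechanically
def is_allowed(policy, subject, action):
    return (subject, action) in policy

# but the policy itself embodies a human judgement the system cannot check
policy = set()
policy.add(("shiny_download.exe", "read_keystrokes"))  # a bad decision...

# ...enforced exactly as faithfully as a good one - "should" never enters in
print(is_allowed(policy, "shiny_download.exe", "read_keystrokes"))  # True
```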
it's ultimately people who are going to be authorizing things and that, more than the differences between trustifier and simple whitelists, is where the weakest link lies...
people trust the wrong things - trustifier will allow them to say exactly what they trust those things to do, but it won't help them make better decisions...
Are you saying that such a system should help make decisions, like some kind of AI? Perhaps that is down the road, but for now, settling for laying down boundaries for what is in the realm of the acceptable is better than what is out there now.
Or should the question be, "will it protect users from bad decisions"? Something is required to protect users from themselves. No education or policy will protect against vengeful users or those gone off the beam.
As you say, someone will be making data access decisions for other users, but in a commercial operation, I think they do have the right to protect corporate systems, customer data and intellectual property. A breach can ultimately lead to the demise of the whole commercial entity.
@rob lewis:
honestly, i don't think an AI would solve the problem... i don't think technology can completely solve the problem and i have my doubts that anything else can either...
bad decisions can often find their roots in the imperfect knowledge upon which those decisions are based... since we can only ever approach perfect knowledge, not actually achieve it, bad decisions will always be with us...
as for protecting people from bad decisions, rather than eliminating them, that implies making bad decisions free of consequence, which seems just as unlikely as eliminating bad decisions in the first place... especially since the computer cannot know which decisions are good and which are bad, and thus could not remove the consequences from just one of them even if it could remove consequences at all...
none of which is to say that we shouldn't try to eliminate bad decisions or their consequences - we should try because that's the only way to get closer, to improve... i'm sure trusted computing is a significant improvement for some folks, but like most things that depends on the particulars of one's situation...
Just to frame your thinking into my original question to you then, do you think there is value in a technology that prevents some malware from propagating itself/executing? Is it a bad decision to prevent such things?
Thanks Kurt, I have enjoyed this exchange of ideas. We are in Ottawa, if you were not aware. Drop us a line if you're heading our way (as I believe you are in TO).
the simple answer is anything that can prevent malware from gaining control has some value... it's definitely not bad to try and prevent that...
there's a cost, too, of course... there is with any technology (and i don't just mean the monetary cost)... it's up to the person in charge of a system to decide whether the perceived value of a technology is worth the cost...