bromium's tal klein took exception to this based on what he believed to be my misunderstanding of their technology, and suggested i go read their white paper, among other things. here's some friendly advice for all you vendors out there: when someone calls you out for snake oil and you tell them to go read your white paper because they don't know what they're talking about, you better make sure they don't find more snake oil in your white paper - especially not in the second paragraph. and i quote:
It defeats all attacks by design.

that's a rather bold claim, don't you think? it sort of suggests perfect security, doesn't it? but wait, there's more in the third paragraph:
It revolutionizes information protection, ensures compliance – even when users make mistakes, *eliminates remediation*, and empowers users – saving money and time, and keeping employees productive.

the emphasis above is mine. "eliminates remediation" or "eliminates the need for remediation" is something of a recurring theme in bromium's marketing materials. you can find it in their introductory video, and even hear a version of it from the mouth of simon crosby himself in their video on isolating and defeating attacks.
the only way you can eliminate remediation is if prevention never fails. but there is no such thing as a prevention technique that never fails. all preventative measures fail sometimes. if you believe otherwise then i've got a bridge to sell you (no, not really). perfect prevention is a pipe-dream. it's too good to be true, but people still want to believe, and so snake oil peddlers seize on it as a way to help them sell their wares.
so it would appear that things are actually worse than i had originally thought. not only is bromium letting 3rd-party-generated snake oil go unchallenged, they're actively peddling their own as well. now just to be clear, i'm not saying that vsentry isn't a good product; from what i've read it sounds quite clever. but even if you have the best product in the world, if you make it out to be better than it is (or worse still, make it out to be perfect) and foster a false sense of security in prospective customers, then you are peddling snake oil.
customers may opt to ignore the possibility of failure and the need to remediate an incident, but i wouldn't suggest it. to reiterate something from my previous post, it's an isolation-based technique. although they often like to gloss over the finer details, their isolation is not complete - they have rules (or policies if you prefer) governing what isolated code can access outside the isolated environment, as well as rules/policies for what can persist when the isolated code is done executing. this is necessary. you're probably familiar with the aphorism: if a tree falls in a forest and no one is around to hear it, does it make a sound? well:
if code runs in an environment that is completely isolated, does it do any useful work?

the answer is a resounding no. useful work (the processing of data to attain something that can itself be used by something else) has not occurred. all isolation-based techniques must allow for exceptions because of the nature of how work gets done. we divide labour, not just between multiple people, but also between multiple computers, multiple processes, multiple threads, and even multiple subroutines. we need exceptions to isolation - paths through which information can flow into and out of isolated environments - so that the work that gets done in isolation can be used as input for yet more work. this transcends the way the isolation is implemented; it is an inescapable part of the theory of isolating work.
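to make the point concrete, here's a minimal, hypothetical sketch (nothing to do with bromium's actual implementation - all names here are invented for illustration) of why isolation always needs exception paths to be useful:

```python
# a toy model of isolation: a task with no channel out of its sandbox
# can still compute, but its results are unreachable and therefore useless.

def run_fully_isolated(task, data):
    """execute a task with no way to export its result."""
    result = task(data)   # work happens inside the sandbox...
    return None           # ...but nothing is allowed to leave it

def run_with_exit_channel(task, data, allow_out):
    """same task, but a policy-gated path lets results flow out."""
    result = task(data)
    # the exception to isolation: a policy decides what may leave
    return result if allow_out(result) else None

double = lambda x: x * 2

print(run_fully_isolated(double, 21))                      # None - no useful output
print(run_with_exit_channel(double, 21, lambda r: True))   # 42 crosses the barrier
```

the moment you add that exit channel, the isolation is no longer complete - and that's the whole point of the paragraphs above.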
and that is a weakness of every isolation-based technique - the need to breach the containment it affords in order to get work done. someone or something has to decide if the exception being made is the right thing to do, if the data crossing the barrier is malicious or will be used maliciously. if a person is making the decision then it boils down to whether that person is good at deciding what's trustworthy or not. if a machine is making the decision then, by definition, it's a decidability problem and is subject to some of the same constraints as more familiar decidability problems (like detection - after all, determining if data is malicious is as undecidable as determining if code is malicious). in the case of vsentry, a computer is making the day-to-day decisions. the decisions are dictated by policies written by people, of course, but written long before the circumstances prompting the decision have occurred, so people aren't really making the decision so much as they're determining how the computer arrives at its decision. the policies are just variables in an algorithm. the decisions made by people involve what things vsentry will isolate (it only isolates untrusted tasks, not all tasks), but people deciding what to trust and what not to trust is basically the same thing that happens in a whitelisting deployment or when people think they're smart enough to go without anti-virus software, and we already know the ways in which that can go awry.
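the "policies are just variables in an algorithm" idea can be sketched in a few lines. this is purely illustrative (the policy names and rules are invented, not vsentry's): people write the rules ahead of time, but at runtime it's the machine that decides whether data may cross the boundary.

```python
# hypothetical policy-gated isolation boundary. humans author the policy
# constants long before any given decision; the machine applies them.

ALLOWED_DESTINATIONS = {"downloads"}      # policy: where untrusted output may persist
BLOCKED_EXTENSIONS = {".exe", ".scr"}     # policy: file types that may never leave

def may_persist(filename, destination):
    """the machine's decision, parameterized by human-written policy."""
    ext = filename[filename.rfind("."):].lower() if "." in filename else ""
    return destination in ALLOWED_DESTINATIONS and ext not in BLOCKED_EXTENSIONS

print(may_persist("report.pdf", "downloads"))   # True  - policy allows it
print(may_persist("update.exe", "downloads"))   # False - blocked type
print(may_persist("report.pdf", "system32"))    # False - blocked destination
```

notice that the policy can only express what its authors anticipated; whether a given file is actually malicious is exactly the undecidable question the paragraph above describes.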
vsentry may have achieved a perfect score in an evaluation performed by NSS Labs using malware and an operator wielding metasploit, but that doesn't mean it's perfect any more than receiving the VB100 award makes an anti-virus product perfect. they weren't able to find a way past vsentry's defenses because vsentry is still new and still novel. it will take time for people to figure out how to effectively attack it, but eventually they will. the folks at bromium need to tone down their claims and take these famous words to heart:
Don't be too proud of this technological terror you've constructed - Darth Vader