bromium's tal klein took exception to this based on what he believed to be my misunderstanding of their technology, and suggested i go read their white paper, among other things. here's some friendly advice for all you vendors out there: when someone calls you out for snake oil and you tell them to go read your white paper because they don't know what they're talking about, you better make sure they don't find more snake oil in your white paper - especially not in the second paragraph. and i quote:
It defeats all attacks by design.

that's a rather bold claim, don't you think? sort of suggests perfect security, doesn't it? but wait, there's more in the third paragraph:
It revolutionizes information protection, ensures compliance – even when users make mistakes, *eliminates remediation*, and empowers users – saving money and time, and keeping employees productive.

the emphasis above is mine. "eliminates remediation" or "eliminates the need for remediation" is something of a recurring theme in bromium's marketing materials. you can find it in their introductory video, and even hear a version of it from the mouth of simon crosby himself in their video on isolating and defeating attacks.
the only way you can eliminate remediation is if prevention never fails. but there is no such thing as a prevention technique that never fails. all preventative measures fail sometimes. if you believe otherwise then i've got a bridge to sell you (no, not really). perfect prevention is a pipe-dream. it's too good to be true, but people still want to believe, and so snake oil peddlers seize on it as a way to help them sell their wares.
so it would appear that things are actually worse than i had originally thought. not only is bromium letting 3rd-party-generated snake oil go unchallenged, they're actively peddling their own as well. now just to be clear, i'm not saying that vsentry isn't a good product - from what i've read it sounds quite clever - but even if you have the best product in the world, if you make it out to be better than it is (or worse still, make it out to be perfect) and foster a false sense of security in prospective customers, then you are peddling snake oil.
customers may opt to ignore the possibility of failure and the need to remediate an incident, but i wouldn't suggest it. to reiterate something from my previous post, it's an isolation-based technique. although they often like to gloss over the finer details, their isolation is not complete - they have rules (or policies if you prefer) governing what isolated code can access outside the isolated environment, as well as rules/policies for what can persist when the isolated code is done executing. this is necessary. you're probably familiar with the aphorism: if a tree falls in a forest and no one is around to hear it, does it make any sound? well:
if code runs in an environment that is completely isolated, does it do any useful work?

the answer is a resounding no. useful work (the processing of data to attain something that can itself be used by something else) has not occurred. all isolation-based techniques must allow for exceptions because of the nature of how work gets done. we divide labour, not just between multiple people, but also between multiple computers, multiple processes, multiple threads, and even multiple subroutines. we need exceptions to isolation, paths through which information can flow into and out of isolated environments, so that the work that gets done in isolation can be used as input for yet more work. this transcends the way the isolation is implemented; it is an inescapable part of the theory of isolating work.
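to make that concrete, here's a minimal toy sketch in python (my own code, nothing to do with vsentry's actual implementation): a task whose results can never leave its sandbox accomplishes nothing for anyone outside it, so every useful isolation scheme has to grant some path out:

```python
# a toy model of isolation (not bromium's implementation): work done in
# a sandbox is only useful if some exception path lets results escape.

def run_isolated(task, input_data, output_channel=None):
    """run a task in a notional sandbox; the result can only escape
    through an explicitly granted output channel."""
    result = task(input_data)          # the work happens inside isolation
    if output_channel is not None:
        output_channel.append(result)  # the deliberate breach of containment

shared_results = []                                          # a path out of isolation
run_isolated(lambda d: d.upper(), "report")                  # fully isolated: result is lost
run_isolated(lambda d: d.upper(), "report", shared_results)  # useful, but containment was breached
print(shared_results)                                        # ['REPORT']
```

the output_channel parameter is the whole ballgame - remove it and the sandbox is perfectly safe and perfectly useless.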
and that is a weakness of every isolation-based technique - the need to breach the containment it affords in order to get work done. someone or something has to decide if the exception being made is the right thing to do, if the data crossing the barrier is malicious or will be used maliciously. if a person is making the decision then it boils down to whether that person is good at deciding what's trustworthy or not. if a machine is making the decision then, by definition, it's a decidability problem and is subject to some of the same constraints as more familiar decidability problems (like detection - after all, determining if data is malicious is as undecidable as determining if code is malicious). in the case of vsentry, a computer is making the day-to-day decisions. the decisions are dictated by policies written by people, of course, but written long before the circumstances prompting the decision have occurred, so people aren't really making the decision so much as they're determining how the computer arrives at its decision. the policies are just variables in an algorithm. the decisions made by people involve what things vsentry will isolate (it only isolates untrusted tasks, not all tasks), but people deciding what to trust and what not to trust is basically the same thing that happens in a whitelisting deployment or when people think they're smart enough to go without anti-virus software, and we already know the ways in which that can go awry.
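to illustrate the "policies are just variables in an algorithm" point (with hypothetical rules i made up, not bromium's actual policies), here's what it looks like when people set the variables ahead of time and the machine makes the call:

```python
# a hypothetical policy check, purely illustrative: the rules are data
# written by people long in advance; at decision time the machine just
# applies them mechanically, and "is this malicious?" is a question the
# rules can only approximate.

PERSIST_ALLOWED = {".docx", ".xlsx", ".pdf"}      # decided by people, ahead of time
READABLE_PREFIXES = ("/home/user/documents",)

def may_persist(filename: str) -> bool:
    # mechanical rule application: a booby-trapped .docx satisfies this
    # check exactly as easily as a benign one (a false negative)
    return any(filename.endswith(ext) for ext in PERSIST_ALLOWED)

def may_read(path: str) -> bool:
    return path.startswith(READABLE_PREFIXES)

print(may_persist("quarterly_report.docx"))  # True - benign or malicious, the rule can't tell
print(may_read("/etc/shadow"))               # False - blocked by a pre-written rule
```

note that nothing in may_persist can distinguish a benign document from a malicious one with the same extension - that's the decidability problem hiding inside the policy.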
vsentry may have earned a perfect score in an evaluation performed by NSS Labs using malware and an operator utilizing metasploit, but that doesn't make it perfect any more than receiving the VB100 award makes an anti-virus product perfect. they weren't able to find a way past vsentry's defenses because vsentry is still new and still novel. it will take time for people to figure out how to effectively attack it, but eventually they will. the folks at bromium need to tone down their claims and take these famous words to heart:
Don't be too proud of this technological terror you've constructed - Darth Vader
10 comments:
It seems to me that regardless of what we do, you'll just scream back "snake oil".
I actually gave you a reading syllabus that included two whitepapers, a blog, and a Q&A. We actually quote Turing's Halting Problem as one of the primary catalysts for the creation of our technology.
In our white papers, we specifically say that nothing is impervious to all attacks. You are confusing two statements:
1. We block 100% of known malware.
2. We exponentially reduce the endpoint attack surface (by our math, approximately 10^4).
At no point do we ever claim we will block 100% of all malware, ever. I agree with you, that would be a preposterous statement.
My advice to you is to stop nitpicking and focus on the big picture. Our micro-virtualization is a huge step forward in secure systems architecture, and it directly integrates with existing operating systems and endpoints, thus immediately reducing the attack surface of any company which adopts it. That's a fact, not snake oil.
-Tal
"At no point do we ever claim we will block 100% of all malware, ever."
except you actually did. the malware set is a proper subset of the set of all attacks. by claiming that you defeat all attacks you are also saying that you defeat all malware.
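in set terms (my notation, just spelling out the step):

```latex
% M = the set of malware-based attacks, A = the set of all attacks
% M \subsetneq A, so a claim quantified over A necessarily covers M:
\bigl(\forall x \in A:\ \mathrm{defeated}(x)\bigr) \implies \bigl(\forall x \in M:\ \mathrm{defeated}(x)\bigr)
```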
you want me to "stop nitpicking and focus on the big picture" but that's not how i work. i'm detail oriented and i focus on the way things can fail. that is part of the security mindset. it is necessary in order to be able to plan for and hopefully mitigate those potential failures.
you folks certainly aren't facilitating that thought process so somebody has to do it, and if it means tarnishing the rosy image of your "big picture" then so be it. rose-coloured glasses don't keep things safe.
(for the record, i read what you gave me (and more). that's how i found what i found.)
Tal, when the security community came up with DEP, it was a huge step forward reducing attack surfaces until the malware folks overcame it. ASLR was another huge step forward to reduce attack surfaces until that was subverted too. I am sure this micro virtualization stuff is yet another iteration of the same.
The scary bit in Bromium's case here is that you depend on exception policies, and any policy-based security mechanism is too annoying and prevents useful work from getting done, causing the policies to be relaxed to the point where the attack surface goes back up again and you get attacked by malware.
@Anonymous:
i don't think i'd be so quick to draw conclusions about the user experience/security trade-off without actually using it first.
unless you have and you're speaking from experience - in which case, thanks for the info.
It's obvious that these conclusions are derived without a proper demonstration and understanding of the technology.
@Anonymous:
"It's obvious that these conclusions are derived without a proper demonstration and understanding of the technology."
the conclusions in the blog post don't depend on how the technology is implemented. it is about a) how bromium presents their technology, and b) the theoretical limitations of isolation-based techniques.
it doesn't matter how clever they are, they can't exceed theoretical limitations.
I'm in agreement with Tal. You are confused in how the technology works and are making inaccurate conclusions. There is no need to remediate a malware infection if it only exists in memory (a VM). It's akin to why EC2 maintains multi-tenancy.
@Anonymous:
maybe you should look more closely at Tal's comment. he agreed that it won't defeat all malware; therefore remediation WILL still be necessary at least some of the time.
Malware can do many, many things that could be leveraged from within a microVM. For example, you could spawn a DDoS, generate Bitcoin, or consume CPU - and that's just to start. The fact is, your technology is cool, but there's no silver bullet... I like the blog post, and it IS correct.
This is the product marketing style of Bill Gardner, snake oil salesman #1