so as a result of my previous post on the use of multiple scanners as a supposed form of defense in depth i was pointed towards this set of slides for a presentation by sergio alvarez and thierry zoller at n.runs:
http://www.nruns.com/ps/The_Death_of_AV_Defense_in_Depth-Revisiting_Anti-Virus_Software.pdf
the expectation was that i'd probably agree with its contents, and with some of them i do (e.g. some of those vulnerabilities are taking far too long to get fixed)... my blog wouldn't be very interesting if all i did was agree with people, though, so thankfully for the reader there are a number of things in the slides i didn't agree with...
the first thing is actually in the filename itself - did anyone catch that death of av reference in there? obviously this is qualified to be specific to a more narrow concept but the frame of reference is still fairly clear...
i'm not going to harp on that too much because that's just a taste of what's in store, actually... the main thrust of the presentation these slides are for seems to be that defense in depth as it's often implemented (i.e. with multiple scanners at multiple points in the network) is bad because of all the vulnerabilities in scanners that malware could exploit to do anything from simply bypassing the scanner to actually getting the scanner to execute arbitrary code...
this says to me that instead of recognizing that you can't build defense in depth using multiple instances of essentially the same control, the presenters would rather call the construct a defense in depth failure and blame the failure on the scanners and the people who make them (and make no mistake, there certainly is some room for blame there)... the fact is that it was never defense in depth in the first place and if you want to assign blame, start with the people who think it is defense in depth because they clearly don't understand the concept... in a physical security context, if my only defensive layer is a wall and i decide to add a second wall (and maybe even a third) i add no depth to the defense... an attack that can be stopped by a wall will be stopped by the first one and one that can't be stopped by the first wall probably won't be stopped by subsequent walls...
the slides also have some rather revealing bullet points, such as the one that lists "makes our networks and systems more secure" as a myth... this goes back to the large surface area of potential vulnerabilities; the argument can be made that using such software increases the total number of software vulnerabilities present on a system or in a network - however this is true for each and every piece of software one adds to a system... i've heard this argument used in support of the idea that one shouldn't use any security software and instead rely entirely on system hardening, least privileged usage, etc. but it's no more convincing now than it has been in the past... yes the total number of vulnerabilities increases but there's more to security than raw vulnerability counts... the fact is that although the raw vulnerability count may be higher, the real world risk of something getting through is much lower because of the use of scanners... there aren't legions of malware instances exploiting these scanner vulnerabilities, otherwise we'd have ourselves an optimal situation for malware...
another point, and one they repeat (so it must be important), is the paradox that the more you protect yourself the less protected you are... this follows rather directly from the previous point: using multiple scanners is bad because of all the vulnerabilities... the implication, however, is that if using more scanners makes you less secure then using fewer scanners should make you more secure and thus using no scanners would make you most secure... i don't know if that was the intended implication - i'm tempted to give the benefit of the doubt and suggest it wasn't - but the implication remains... again, there's more to security than what they're measuring - they're looking at one part of the change in overall security rather than the net change...
yet another point (well set of points, really) had to do with how av vendors handle vulnerability reports... as i said earlier, some of the vendors are taking far too long but some of the other things they complain about are actually quite reasonable in my eyes and i find myself in agreement with the av vendors... things like not divulging vulnerability details when there is neither a fix nor a work around ready yet (no point in giving out details that only the bad guys can actually act on), condensing bugs when rewriting an area of code (i develop software myself, it makes perfect sense to me that a bunch of related bugs with a single fix would be condensed into one report), fixing bugs silently (above all else, don't help the bad guys), and spamming vulnerability info in order to give credit to researchers (if you're in it for credit or other rewards you'll get no sympathy from me)...
finally, perhaps the most novice error in the whole presentation was the complaint that scanners shouldn't flag archive files as clean if they're unable to parse them... there is a rather large difference between "no viruses found" and "no viruses present"... scanners do not flag things as clean (at least not unless the vendor is being intellectually dishonest) because a scanner cannot know something is clean - all a scanner can do is flag things that aren't clean, and the absence of such flags cannot (and should not, if you know what you're talking about) be interpreted to mean a thing is clean... scanners tell you when they know for sure something is bad, not when they don't know for sure that something isn't bad... if you want the latter behaviour then you want something other than a scanner...
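to make the distinction concrete, here's a minimal sketch (in python) of what a scan verdict actually asserts - this is a hypothetical api, not modelled on any real product, but it shows why an unparseable archive deserves its own verdict rather than silence...

from enum import Enum

class ScanVerdict(Enum):
    THREAT_FOUND = "threat found"        # the scanner knows the object is bad
    NO_THREAT_FOUND = "no threat found"  # no known-bad patterns matched - NOT "clean"
    NOT_SCANNED = "not scanned"          # e.g. an unsupported or corrupt archive

def handle(verdict: ScanVerdict) -> str:
    # the only safe interpretations: NO_THREAT_FOUND is not a claim of
    # cleanliness, and NOT_SCANNED deserves a policy decision of its own
    if verdict is ScanVerdict.THREAT_FOUND:
        return "block"
    if verdict is ScanVerdict.NOT_SCANNED:
        return "quarantine, or hand off to a tool that can parse it"
    return "allow, but make no guarantee of cleanliness"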
all this being said, though, i did take away one important point from the slides: not only is using multiple scanners not defense in depth, using multiple scanners in the faux defense in depth setup that many people swear by comes with security risks that most would never have considered...
10 comments:
Not really related to your post (which I agree with); just wanted to add a rant of my own.
Regarding the problem that scanner vendors are slow at implementing vulnerability detection: well, the situation is pretty dreadful here - and it's not exactly our (the vendors') fault, either.
I recently decided to implement detection of all Office-related vulnerabilities I could. Mostly because nowadays I get more Office documents with exploits than with macro malware and needed a tool to sort it out properly.
The first thing I discovered is that scanners very rarely detect exploits generically. Instead, they detect specific things - samples, shellcode, payload, etc. (Our scanner is an exception - when it reports something as CVE-whatever, I guarantee you that it detects that particular exploit, no matter in what form it is present and used. My main problem is that it detects relatively few exploits.) So, scanners were useless for my purposes (sorting the incoming samples by exploit).
Next, I was pointed to a tool called "officecat", distributed from the Snort site, but not open source itself (Google for it). It is a dedicated stand-alone program for detecting exploits in Office documents - apparently, exactly what I needed. Problem is, the program sucks. It sucks as a user interface (it can scan only one document at a time) and it sucks as an exploit detector. I found that it had wrong (often too broad) algorithms for detecting exploits I knew how to detect, that it called exploits by the wrong CVE identifier, that it reported non-existent exploits, and even that it didn't have any code at all for detecting one of the exploits it claims to detect. :-)
Well, as that character from "The Fifth Element" said, if you want something done properly, you have to do it yourself. So, I started writing my own Office exploit detector.
Hoo, boy. Finding information on how to detect the exploits is a bitch. Oh, sure, there are a dozen or so entities that publish "advisories" - but these advisories are essentially content-free and are usually just copies of the Microsoft Security Bulletins. All these so-called "advisories" can be summarised by the phrase "A vulnerability has been found in this Office application, patch it". The rest is just useless crap. And why so many advisory-issuing entities?! And, of course, even the little bit of useful information is totally useless to me as somebody who has to implement detection of the vulnerability. The only useful info came from the full disclosure lists (not that I'm a fan of full disclosure) and even there the discoverer of the vulnerability usually had no clue where the vulnerability was or how to detect its presence; he just published a sample that would crash, e.g., Word, with no explanation.
Then I discovered that the CVE database is full of crap, too. I fetched all the Office-related vulnerabilities listed there and started hunting for information about them. Turns out, some of the CVE entries don't really exist - they are duplicates of others. Also, sometimes even those who had discovered the vulnerability had no clue where it was. E.g., with CVE-2007-0913, Symantec simply got a PowerPoint sample that was dropping malware using an unknown vulnerability. They reported the issue to CVE and Microsoft and that was that. They have no clue what the vulnerability consists of - only that Microsoft has fixed it.
Microsoft, OTOH, apparently has no clue about many of the vulnerabilities they have fixed. I mean, they have no documentation of what the vulnerability consists of and how to detect its presence. They are not interested in looking it up, either. Their standard reply is "this is a fixed issue; we can't afford to allocate resources for researching it; our resources are all tied up researching the present issues". :-(
So, is it any wonder that we're slow at implementing vulnerability detection? It would be helpful if we knew what to look for, you know?
Still, I have implemented detection of 63 Office-related vulnerabilities (using the CVE nomenclature) in my little utility (it's a Perl script, actually) and am now testing for false positives, before implementing these algorithms in F-PROT. In many other cases I have determined that the CVE issue doesn't really exist because it is covered by another one. Still, there are 27 Office-related CVE vulnerabilities that I have no idea how to detect. :-( I have samples of 10 of them but either insufficient description or no description at all. For the remaining 17, I have neither samples, nor description how to detect the vulnerability.
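For illustration, here is a minimal sketch of the kind of structural check involved - the same sort of field-level question I ask below about BOOLERR. My utility is a Perl script; this sketch is in Python, the record layout comes from the published BIFF8 documentation, and the bounds check is purely illustrative (a real detector keys on the exact corruption each exploit requires):

import struct

MAX_COL = 255        # BIFF8 worksheets allow columns 0..255
BOOLERR = 0x0205     # record id, per the published BIFF8 documentation

def scan_biff_stream(stream: bytes):
    """Walk BIFF records in an already-extracted Workbook stream
    (an .xls file is an OLE2 compound document, so the stream must be
    pulled out of the container first) and flag structurally impossible
    BOOLERR fields. Illustrative only."""
    pos, findings = 0, []
    while pos + 4 <= len(stream):
        rec_id, length = struct.unpack_from("<HH", stream, pos)
        data = stream[pos + 4 : pos + 4 + length]
        if rec_id == BOOLERR:
            if length != 8 or len(data) != 8:     # BOOLERR is fixed-size
                findings.append((pos, "BOOLERR: bad record length"))
            else:
                row, col, ixfe = struct.unpack_from("<HHH", data)
                if col > MAX_COL:                 # corrupted column index
                    findings.append((pos, "BOOLERR: column out of range"))
        pos += 4 + length
    return findings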
So, if you or any of your readers have any useful information about the following vulnerabilities, I'd appreciate it if you drop me a line at bontchev@complex.is:
CVE-1999-1474 - PowerPoint. No samples, no description
CVE-2000-0088 - Word. No samples, no description
CVE-2001-0005 - PowerPoint. @stake -> Symantec? No samples, no description
CVE-2001-0718 - Excel. Symantec? No samples, no description
CVE-2002-0616 - Excel. No samples, no description
CVE-2002-0617 - Excel. No samples, no description
CVE-2003-0664 - Word. No description
CVE-2004-0571 - Word. Insufficient description
CVE-2004-0963 - Word. Insufficient description
CVE-2005-0558 - Word. Insufficient description
CVE-2005-0564 - Word. Insufficient description
CVE-2006-0028 - Excel. BOOLERR? What's corrupted - row, col, ixfe? No samples, no description
CVE-2006-0029 - Excel. FORMULA? What's corrupted - row, col, ixfe, cce? No samples, no description
CVE-2006-0030 - Excel. No description
CVE-2006-0031 - Excel. No samples, no description
CVE-2006-0761 - Word. Blackberry? No samples, no description
CVE-2006-0935 - Word. No samples, no description
CVE-2006-2197 - Word. wvWare? No samples, no description
CVE-2006-2388 - Excel. No samples, no description
CVE-2006-3493 - Word. No description
CVE-2006-3650 - Word. No samples, no description
CVE-2006-4693 - Word. No samples, no description
CVE-2006-6120 - PowerPoint. koffice? No samples, no description
CVE-2007-0913 - PowerPoint. No description
CVE-2007-1117 - Publisher. No samples, no description
CVE-2007-1910 - Word. No description
CVE-2007-3490 - Excel. No description
i know you said this wasn't really related to my post, but just so everyone is clear - the vulnerability handling i was talking about and the vulnerability detection you're talking about are entirely different... i was referring to the length of time it takes av vendors to fix vulnerabilities in their own products (the better part of a year in some cases, apparently)...
vulnerability (or is it exploit) detection is a good thing to have too and i'm glad to hear somebody is working on it... i imagine you're actually talking about exploit detection, as (if i'm not mistaken) one can generally locate a known/patched vulnerability by comparing the affected binaries before and after the patch that fixes it is applied... exploit detection, on the other hand, really is aided by having samples...
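for illustration, here's a deliberately crude python sketch of that before/after comparison - real patch diffing is done at the function level with dedicated diffing tools, since raw byte offsets shift when code is recompiled, so treat this strictly as a sketch of the idea (differing file lengths beyond the common prefix are ignored)...

import hashlib

def chunk_hashes(path, size=4096):
    # hash fixed-size chunks; chunks whose hash changed localize the patch
    with open(path, "rb") as f:
        while True:
            chunk = f.read(size)
            if not chunk:
                break
            yield hashlib.sha256(chunk).hexdigest()

def changed_offsets(before, after, size=4096):
    # byte offsets where the pre-patch and post-patch binaries differ
    return [i * size
            for i, (a, b) in enumerate(zip(chunk_hashes(before, size),
                                           chunk_hashes(after, size)))
            if a != b]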
hopefully someone will be able to help but i'm not sure how many people will see it in the comments (another argument in favour of a vess blog, perhaps)...
Yes. The terms I use are:
Vulnerability. A software or configuration bug that allows a breach of security.
Exploit. Data, specially crafted to make use of an existing vulnerability.
Shellcode. A small piece of code that receives control when the specially crafted data of the exploit causes the vulnerable application to transfer control to places where it isn't supposed to. A shellcode doesn't have to be present but it's usually easier (for the attacker) to have one.
Payload. The main malicious code, which the shellcode usually transfers control to (possibly after decrypting it first). When the shellcode is absent, the exploit causes control to be transferred directly to the payload - but such cases are rare.
What I am trying to do is not detect the presence of vulnerabilities in applications. I am trying to detect the presence of exploits - i.e., data corruptions which would cause vulnerable applications to transfer control to where they shouldn't. The exploit could be present naturally - e.g., be the result of random corruption - in which case the vulnerable application would just crash. Or it could be malicious and cause some shellcode and eventually a payload to receive control. I am not trying to determine whether a malicious exploitation takes place - only whether the data corruption that defines the exploit is present.
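For contrast, here is a crude sketch of a shellcode-artifact heuristic - the kind of check I deliberately do not rely on, since an exploit without shellcode (or with a differently built sled) would sail right past it. The pattern and threshold here are purely illustrative:

NOP = 0x90  # the x86 no-op byte, classic sled filler

def looks_like_nop_sled(data: bytes, threshold: int = 64) -> bool:
    # flag a long run of NOPs - a well-known shellcode artifact, but one
    # that is simply absent when the exploit carries no shellcode at all;
    # detecting the defining corruption triggers in both cases
    run = longest = 0
    for byte in data:
        run = run + 1 if byte == NOP else 0
        longest = max(longest, run)
    return longest >= threshold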
Thanks for turning the attention of your readers to my quest - although I doubt very much that anybody will be able to shed some light on how to detect the exploits that I listed.
And a Vess blog is unlikely. A blog is essentially an electronic diary and I've never been a big fan of writing diaries. When I have something to say, I either say it as a comment to somebody's post, or write a paper about it and publish it in print.
Your points are well founded and some of them were actually commented on during the presentation itself. It's a shame you haven't seen/heard it, because a lot of the comments you made were actually discussed during the 2½ hours this talk took.
Let me comment on a few things that caught my attention:
There was never any reference that DiD is wrong; what is wrong is how it is being perceived and implemented. By whom?
By customers AS WELL as vendors. You see, the point here is not just that code is being added but also that this code runs with high privileges. Obviously, as you say, the more code you add the more potential for bugs you create - fair enough, you call this obvious - but the problem here is that we are not just talking more code, we are talking HUGE parsing engines running at high privileges. And by huge I mean it: it's not the normal attack surface you have when you parse 10 formats (Winword, Excel, etc.) - we are talking thousands of formats.
Then you fail to understand that the attack surface increases exponentially the more engines you add one after the other. Think about why this is not logarithmic; if you find the answer, you have found the root problem of the situation.
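A toy model of the compounding, assuming each engine independently has some probability p of containing an exploitable parser flaw reachable by a hostile file (the p = 0.05 below is an arbitrary number for illustration; every file is fed to every engine in the chain, so one flaw anywhere suffices):

def p_compromise(p: float, engines: int) -> float:
    # chance that at least one engine in the chain is exploitable
    return 1 - (1 - p) ** engines

for n in (1, 2, 4, 8):
    print(n, round(p_compromise(0.05, n), 3))
# prints 0.05, 0.098, 0.185, 0.337 - each added engine compounds the risk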
Furthermore you say:
“the fact is that although the raw vulnerability count may be higher, the real world risk of something getting through is much lower because of the use of scanners.”
I disagree; you mix exploits and attacks with malware. Yes, scanners find stuff - they are supposed to. This of course reduces a risk, granted. However, the whole presentation is about the point that you also need to take into account the **protection** of the AV engine. Why are antivirus engines running with SYSTEM privileges ON mail servers? Is there any reason to? This is not the customer's fault but rather the vendor's, who chose that design. This is not normal but plain ridiculous.
Furthermore you say:
“the implication, however, is that if using more scanners makes you less secure then using fewer scanners should make you more secure and thus using no scanners would make you most secure”
This is NOT the implication; actually, speaking strictly in terms of logic, what you did is wrong. The implication is: if you add more engines, you should mitigate the risk you add and try NOT to have huge parsing engines running with high privileges on your critical servers.
Furthermore you say :
“i find myself in agreement with the av vendors... things like not divulging vulnerability details when there is neither a fix nor a work around ready yet”
Well, you agree with something that was never said :D Sorry, but nowhere in the slides do we refer to unpatched bugs. Vendors that care about customers publish notes about security problems fixed by patches; it's a matter of trust.
Your argument about flagging as “clean” is totally correct; I have to revisit the logic and the wording there.
@thierry zoller:
"There was never any reference that DiD is wrong, what is wrong is how it is being perceived and being implemented."
that may be, but there was the implication that it was reasonable to actually call the construction in question defense in depth and then blame its failings on the technology... the fundamental problem is that what is being called defense in depth isn't defense in depth at all... nowhere in the slides is this hinted at; in fact they seem to operate under the premise that it's a valid (albeit dangerous) form of defense in depth...
"Then you fail to understand that attack surface increases exponentially the more engines you add after the other."
it's not that i fail to understand it, it's that i fail to recognize it as being as significant as you make it out to be... you make it sound like the sky is falling when it obviously isn't...
"Furthermore you say:
“the fact is that although the raw vulnerability count may be higher, the real world risk of something getting through is much lower because of the use of scanners.”
I disagree; you mix exploits and attacks with malware."
in order to mount an attack that compromises a system or network by exploiting one of the many vulnerabilities you're underscoring there must be an agent to carry out the attack... that basically means malware... that malware has to get to a point where it can exploit a vulnerability, and because there are so few pieces of malware that actually try to exploit the scanner vulnerabilities and because the attack surface doesn't balloon until after you start passing through some of the layers, the chances of people encountering an agent in the wild that can reach and exploit a vulnerable system component (whether that be in the anti-virus or elsewhere) are much lower than they would be without the scanners...
"Yes scanners find stuff, they are supposed too. This of course reduces a risk, granted. However the whole presentation is about the point that you also need to take into account the **protection** of the AV engine."
then shouldn't you be discussing how just using multiple scanners isn't defense in depth in the first place?
"Why are Antivirus Engines running with SYSTEM privileges ON mail servers? Is there any reason too?"
depends on what the scanner is supposed to scan... if it's supposed to scan the entire file system then there is indeed a reason to run with a high set of privileges (to ensure access to the entire file system)... a scanner that's only scanning a mail database, on the other hand, can conceivably use lesser privileges (so long as those privileges still give the scanning process access to the system resources it's trying to scan)...
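to sketch what those lesser privileges might look like on a posix-style system (the dedicated 'avscan' account below is hypothetical - it would need read access to the mail store and nothing else)...

import os, pwd

def drop_privileges(user="avscan"):
    # do whatever genuinely needs root first, then shed it for good
    info = pwd.getpwnam(user)
    os.setgroups([])        # drop supplementary groups
    os.setgid(info.pw_gid)  # group before user, or setuid would block setgid
    os.setuid(info.pw_uid)  # irreversible - the process can't regain root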
"Furthermore you say:
“the implication, however, is that if using more scanners makes you less secure then using fewer scanners should make you more secure and thus using no scanners would make you most secure”
This is NOT the implication; actually, speaking strictly in terms of logic, what you did is wrong."
i realize, but i also know that it's precisely the kind of logic error most people would make...
"The implication is , if you add more engines (you should): mitigate the risk you add, try NOT to have Huge parsing engines running with high privileged rights on your critical servers."
that must have been something that came through in the actual presentation because it didn't come through in the slides...
"Furthermore you say :
“i find myself in agreement with the av vendors... things like not divulging vulnerability details when there is neither a fix nor a work around ready yet”
Well, you agree with something that was never said :D Sorry, but nowhere in the slides do we refer to unpatched bugs."
i direct you to page 27 - vulnerabilities that cannot be worked around are ones that (among other things) are unpatched (since applying a patch is effectively a kind of work around)...
"Vendors that care about customers publish notes about security problems fixed by patches, it’s a matter of trust."
2 things:
a) vendors who do that are doing so because they believe someone else would publish it if they didn't and it's good marketing to come clean about your own problems - it has nothing to do with caring about customers...
b) anti-virus vendors are well aware of the fact that their customers do not look for such security advisories... regardless of how it happened, av vendors have found themselves in a situation where everyone expects complete automation and transparency from them and many will even complain bitterly at the least little notification from the software... the ideal of disclosing security issues is all well and good but the av field tends to look at disclosure from a practical rather than idealistic point of view - practically, such advisories would do little good for their customers, who mostly aren't expecting to have to deal with security problems in their security software, but lots of good for those ne'er-do-wells who are looking for ways to attack that security software...
and as i said before - above all else, don't help the bad guys...
@kurt said:
“i find myself in agreement with the av vendors... things like not divulging vulnerability details when there is neither a fix nor a work around ready yet”
@thierry said:
Well, you agree with something that was never said :D Sorry, but nowhere in the slides do we refer to unpatched bugs.
This is not completely correct Thierry:
http://research.pandasecurity.com/archive/Vulnerability-found-that-allows-for-_2200_disclosure-policy-bypass_2200_.aspx
it seems kind of ironic that in the process of complaining about vendor responses n.runs would violate their own disclosure policy...
it also makes their vendor response complaints ring just a little bit hollow...
hmmm... given the content and timing of this i'm beginning to wonder if i've been trolled... i wonder who else was contacted with a link to those slides...
I'm not concerned about malware exploiting these vulnerabilities in security software. For the moment, most malware trying to exploit parser vulnerabilities targets widely installed applications, like MS Office, Acrobat Reader, ...
I believe it's more likely that these vulnerabilities get exploited by penetration testers, i.e. the ones that you hired and the other ones ;-)
@didier stevens:
"I'm not concerned about malware exploiting these vulnerabilities in security software. For the moment, most malware trying to exploit parser vulnerabilities targets widely installed applications"
that seems to be the popular opinion on scanner vulnerabilities, though i'm sure that will change now that we have someone to popularize this avenue of attack...
hey, it worked for stealthkits, and the principals involved there are still getting a lot of play out of that...