anti-virus rants
devising a framework for thinking about malware and related issues such as viruses, spyware, worms, rootkits, drm, trojans, botnets, keyloggers, droppers, downloaders, rats, adware, spam, stealth, fud, snake oil, and hype...
by kurt wismer (http://www.blogger.com/profile/03810635947269551517)

the disclosure debate is the wrong argument (2019-08-03)

prompted by an exchange on twitter this morning i began spending mental resources on an old favourite that everyone loves (hates) to go over again and again - the disclosure debate.<br />
<br />
the disclosure debate is a debate over what the right answer is to the question "i found a problem, now what do i do with it?". it's a natural place to come from for many in the security community and the question surely needs an answer. but the disclosure debate tries to raise the answer into a principle and each side of the debate believes that their side is more principled, more virtuous than the other. you have to pick a side and in so doing you also pick a tribe and signal your allegiance.<br />
<br />
the disclosure debate has been raging in the security community for a long time and the two major sides right now are full disclosure which is generally favoured by security researchers, and responsible disclosure which is generally favoured by vendors. when you pick a side you're choosing to align with one of these groups.<br />
<br />
but is that really it? is it just a choice between researchers and vendors? are they the only ones with real skin in the game? hell no.<br />
<div>
<br /></div>
<div>
so then let's catalog the groups of people involved:</div>
<div>
<ol>
<li>vendors of vulnerable products/systems/services. these are the people who, when faced with discovery of a security <a href="https://anti-virus-rants.blogspot.com/2012/09/what-is-vulnerability.html">vulnerability</a> in their product, are expected to act quickly and create a fix for their product to eliminate the risk posed by that vulnerability</li>
<li>security researchers. these are the people who find the vulnerabilities. often a great deal of work goes into finding vulnerabilities and consequently researchers often want and even deserve some kind of reward, whether simply adding to their reputation which they can parlay into a better-paying job or a more direct financial reward like a bug bounty.</li>
<li>attackers. these are the people who would actually <a href="https://anti-virus-rants.blogspot.com/2006/05/what-is-exploit-code.html">exploit</a> the vulnerability to victimize people, whether for money, ideology, or a laundry list of other reasons</li>
<li>users of vulnerable products/systems/services, be they businesses, institutions, individuals, etc. these are the people who are actually affected by vulnerabilities and arguably should be the most important consideration</li>
</ol>
<div>
now attackers and users don't figure all that much in the disclosure debate. sure the ostensible reason we have the debate is because we say we want to help the users, and the reason for doing the vulnerability research and disclosing the results at all is supposed to be to thwart attackers, but when arguing over which side of the disclosure debate is best attackers and users are treated as constants, as monolithic homogeneous groups that always present the same issues in every case and that's just not an accurate reflection of reality. </div>
</div>
<div>
<br /></div>
<div>
not all attackers would be able to find the vulnerabilities on their own. those that can are just as much a minority as the security researchers themselves are. not all attackers are interested in any given vulnerability. those that target foreign states may be more interested in PLC vulnerabilities than VLC vulnerabilities. some may not have the connections or resources to exploit a particular vulnerability. some may specialize in attacks against a completely different kind of technology. some are simply not motivated enough to add a new exploit to their arsenal.</div>
<div>
<br /></div>
<div>
meanwhile, not all users pay attention to the channels where disclosures are communicated, in large part because most users can't understand or meaningfully act on the information in a disclosure. filtering out attack traffic, using IOCs, deploying mitigations, etc. are all the domain of a very select group. additionally not all users are aware of all patches when they become available, or are willing or able to apply them.</div>
<div>
<br /></div>
<div>
these are only a fraction of the variations among the two groups that can affect if/how they'll use a vulnerability and if/how they might be affected by the vulnerability. the reason we keep having the disclosure debate is because that debate has never adequately addressed these variables, because they have the potential to make the answer change depending on the vulnerability, which would mean there is no single right answer or right side in the debate. the debate focuses on principles to the exclusion of nuance, only paying lip service to the people who would use the vulnerabilities and the people those vulnerabilities would be used against. </div>
<div>
<br /></div>
<div>
those who find a problem in some kind of technology want an easy answer to the question of what they should do with it - that's why the disclosure debate exists, to try and arrive at an easy answer so people can just follow the one right principle for handling the situation and be done with it. it follows a trend that has become all too common in the technology sector of trying to abstract people out of the equation because people are messy and complicated. but people are part of the equation whether we like it or not so the argument shouldn't be about which principle is right, but rather whether our actions should be governed by principle or by considerations for people.</div>
<div>
<br /></div>
<div>
in the watchmen, the character rorschach was governed by principles to the exclusion of virtually everything else ("never compromise. not even in the face of armageddon"). defending truth and justice so stridently made him arguably the most heroic, in the comic book sense of the word. but the more realistic portrayal of individuals and circumstances and society as a whole in the watchmen highlighted how ill-fitting such unwavering adherence to principles can be in the real world.</div>
<div>
<br /></div>
<div>
we need to stop talking about the disclosure debate, not because it should be settled already, but because it can never be settled, because its parameters don't fit the real world. it's the wrong argument. if we really want to help protect one group of people from another group of people then we need to spend a lot more time thinking about those people each and every time the subject comes up. there are no shortcuts when it comes to doing the right thing.</div>
winblows update (2018-07-28)
<div>
<div>
<div>
<div>
<div>
<div>
<div>
so i'm using my computer thursday night when i get a notification that updates have been applied that require a reboot. ok, whatever, i've mostly come to terms with the fact that Windows 10 updates itself without asking me, at least it asks me when is a good time to reboot. that wasn't a good time so i said later.</div>
<div>
<br /></div>
well, later came in the wee hours of the morning when i was done working and ready to head to bed. i go to shut off the computer and the new options in the shutdown menu remind me that there was an update to take care of, so i choose the update and reboot option.<br />
<br /></div>
i chose that option rather than update and shut down because, from past experience, the update process has not actually completed by the time the system shuts down. there's a bunch of stuff it needs to do on the next restart and that takes up time i could be doing something else, so no shutdown just yet, just an update and reboot and i head off to take care of my evening oral hygiene routine thinking that the process would be done by the time i get back.<br />
<br /></div>
when i come back the computer is still in the process of rebooting? what the heck? oh no, i've seen this before (or at least i think i have). is the computer stuck in a reboot loop? powering off for a few moments usually breaks out of the loop, but this time i discover it wasn't a reboot loop at all. when i let it power up again i see a screen saying that it's applying updates and that <b>several</b> reboots will be required.<br />
<br /></div>
are you kidding me? this is not what i want to deal with in the wee hours of the morning when i still have to go to work the next day. this is not convenient, and frankly "several reboots" for an update is bullshit. i understand the need to perform a reboot during an update; files and other resources that need to be changed may be locked by running processes and rebooting eliminates that impediment, but several reboots? Microsoft has been at this update business for decades now, you'd think their little minions would have figured out how to coordinate their efforts so that each part of the update could make use of the same reboot, but no, apparently that kind of unified effort is beyond them and in fact they seem to be moving in the opposite direction where every bit and piece of their updates (and the operating system itself) is becoming more separate and isolated from the others.<br />
<br /></div>
so i did what i hate doing. i left the computer on completely unattended overnight so that hopefully by morning the update would be done. and it was, but that's not the end of the update related problems. you see, Microsoft's updates aren't just for security fixes. those are important, yes, and the fact that people were taking too long to apply them and leaving their systems to become part of massive <a href="http://anti-virus-rants.blogspot.com/2006/03/what-is-botnet.html">botnets</a> is part of the reason the user's control over updates was taken away from them. however, Microsoft has re-imagined how versioning of their operating system will work so those updates now also come with feature changes, which (due to the increasingly isolated approach units within Microsoft are taking nowadays) means new binaries with new behaviours.<br />
<br /></div>
how the hell is anyone supposed to develop a behavioural baseline for their system with this never ending parade of new binaries and new behaviours? this morning's culprit? BackgroundTransferHost.exe. what does it do? who the hell knows? not only does Microsoft give us less agency now that we can't control if/when updates occur, but there's also less transparency now too because the number of separate/isolated binaries they're introducing to the system has far, far outpaced anyone's efforts to document them.<br />
<br />
maybe BackgroundTransferHost.exe isn't even Microsoft's. maybe it's <a href="http://anti-virus-rants.blogspot.com/2006/01/what-is-malware.html">malware</a>. if i were going to make a downloader trojan, that sounds like just the sort of name i'd use - but what do i know, i'm not a <a href="http://anti-virus-rants.blogspot.com/2010/08/what-is-malware-writer.html">malware writer</a>. i suppose they expect me to trust it because it's signed, but that's not how that works. being signed (and passing the signature validation procedure) just means it hasn't been modified after getting signed, not that it's legitimate, not that it's safe, not that it's trustworthy. signing certificates get stolen. there's plenty of signed malware out there.<br />
<br /></div>
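the distinction between "unmodified since signing" and "trustworthy" can be illustrated with a toy sketch in python, using an HMAC as a stand-in for a real code-signing scheme (the key and file contents below are made up, and real authenticode verification involves certificates and trust chains this deliberately glosses over):

```python
import hashlib
import hmac

SIGNING_KEY = b"vendor-private-key"  # imagine this was stolen from a vendor


def sign(data: bytes) -> str:
    """produce a signature over the data with the signing key."""
    return hmac.new(SIGNING_KEY, data, hashlib.sha256).hexdigest()


def verify(data: bytes, signature: str) -> bool:
    # this proves only that 'data' is unchanged since it was signed
    # with this key - nothing about who signed it or why
    return hmac.compare_digest(sign(data), signature)


malware = b"contents of something pretending to be a background transfer host"
signature = sign(malware)  # the thief signs their malware with the stolen key

print(verify(malware, signature))                 # True: validation passes
print(verify(malware + b" tampered", signature))  # False: only tampering fails
```

the check catches post-signing tampering and nothing else; legitimacy, safety, and trustworthiness are all outside its scope, which is why stolen signing keys produce perfectly valid signatures on malware.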
<div>
oh, and the cherry on top is now VMware is non-functional.</div>
<div>
<br /></div>
what the actual f#$% Microsoft. stop making alternative security approaches so much harder than they have to be. i'm regretting moving on from Windows XP. at least there i could perform application and behavioural <a href="http://anti-virus-rants.blogspot.com/2008/02/what-is-whitelist.html">whitelisting</a> with relative ease.

thoughts on attack automation (2018-02-04)

axiom: there is no perfect security<br />
<br />
because there is no perfect security we can say with certainty that systems will never be perfectly secure. if we close one vulnerability there will always be another to take its place. we can spend an unending amount of money/time/effort on closing vulnerabilities and still remain vulnerable. in the process of doing this we would go bankrupt because no one, not even the largest companies in the world, has unlimited resources to spend on security.<br />
<br />
therefore, eliminating all the vulnerabilities is not a viable strategy. instead a promising alternative is to approach the problem of security from an economic standpoint. while we can't eliminate all the vulnerabilities, we can eliminate some, and if we eliminate the ones that are easiest to exploit then an attacker's job becomes harder and more expensive to carry out. if we make the attacker's job hard enough then the value/benefits they derive from succeeding in their attack (success will always be possible) would no longer cover the cost of launching that attack.<br />
<br />
attack automation doesn't make an attacker's job harder. quite the opposite in fact, it makes it easier. attack automation is carried out by attack tools. as a general rule, attack tools reduce the complexity of performing an attack. tools automate the fiddly bits to save an attacker time and effort but in so doing also save the attacker from needing to know how to do the fiddly bits themselves. this means that a larger population of attackers will become capable of carrying out a particular attack because the technical complexity of performing the attack is reduced. it also means there is a larger pool of targets to victimize because the lower cost of performing the attack makes attacking lower value targets economically viable.<br />
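the economics here can be made concrete with a toy sketch (the target names, payoffs, and costs below are all invented purely for illustration):

```python
# toy model: an attack is only worth launching when the expected payoff
# exceeds the cost of carrying it out.
def viable_targets(targets, attack_cost):
    """return the targets worth attacking at a given per-attack cost."""
    return [t for t in targets if t["payoff"] > attack_cost]


targets = [
    {"name": "bank", "payoff": 10_000},
    {"name": "small business", "payoff": 500},
    {"name": "home user", "payoff": 50},
]

manual_cost = 1_000   # hand-crafted attack: only high-value targets pay off
automated_cost = 10   # tooling amortizes the effort across every attack

print(len(viable_targets(targets, manual_cost)))     # 1 (just the bank)
print(len(viable_targets(targets, automated_cost)))  # 3 (everyone)
```

lowering the per-attack cost didn't make any single attack more powerful; it just moved the break-even line so that far more targets became worth attacking, which is exactly the effect attack automation has.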
<br />
additional automation to save the attacker time and effort when selecting targets and launching attacks is also possible, as <a href="https://arstechnica.com/information-technology/2018/02/threat-or-menace-autosploit-tool-sparks-fears-of-empowered-script-kiddies/">the recent release of autosploit has highlighted</a>. this lowers the cost of scaling up the attack so that a single attacker can attack a larger group of victims at a lower cost.<br />
<br />
the argument can be made that attackers are entirely capable of making these automated attack tools themselves so it doesn't matter if security researchers do it as well. however, when researchers make the automated attack tools, not only do attackers enjoy cost savings with respect to launching attacks, they also enjoy cost savings with respect to developing the tools to launch attacks.<br />
<br />
all of these cost savings for the attacker work against defenders. when the cost of performing an attack is reduced it means that attacks that didn't need to be defended against before (because they were too expensive to launch relative to their payoff) must now be defended against, and that increases the costs for defenders because it requires more to be done.<br />
<br />
these cost savings are also permanent. attacks don't get harder, they only ever get easier. offensive security researchers are permanently changing the economics of mounting various attacks in the attackers' favour in an effort to incentivize defenders to do what the researchers think should be done to chase after the zero-vuln goal (a goal which we already know to be unattainable) without regard to the economic realities those defenders face.<br />
<br />
enforcing their will, their vision of what security should be, on others is misguided and damaging. there is no one-size-fits-all approach to security, and where the fit is bad there will be undue burdens with respect to cost and/or unfortunate breaches that might legitimately have been avoided if attackers had not been given a helping hand.<br />
<br />
if attackers can build the tools that make their lives easier by themselves, then let them do it. make them pay the cost of doing it. stop subsidizing attackers in the name of security research. years ago the security research community embraced the idea that there should be no more free bugs - why then are the cybercriminals still getting bugs, and exploits, and frameworks, and more for free after all this time?<br />
<br />
thoughts on "In Search of Evidence-Based IT-Security" (2016-12-30)

<a href="https://twitter.com/csoghoian/status/814289810126962688">Christopher Soghoian</a> brought to my attention <a href="https://media.ccc.de/v/33c3-8169-in_search_of_evidence-based_it-security">a video of a talk by Hanno Böck</a> at the 33rd Chaos Communication Congress. in it Hanno puts forward the claim that IT security is largely science-free, so let's follow a staple of the scientific process - peer review.<br />
<br />
Hanno introduces himself as a journalist and hacker and says that he prefers to avoid the term "security researcher" and that he hopes the audience will see why. for those who are relatively well versed in the field of anti-malware it should definitely become obvious why he prefers to avoid that term and i'll return to this near the end.<br />
<br />
Hanno is a skeptic, and far from the only one; his talk ultimately expresses the same sentiments that are now common-place in the perennially misinformed information security community. the difference is that Hanno has found a novel way of expressing them, couched in scientific jargon and easily mistaken for insight. he spends altogether too long and dives too deeply into the medical analogy after which <a href="http://anti-virus-rants.blogspot.com/2006/01/what-is-virus.html">computer viruses</a> and by extension <a href="http://anti-virus-rants.blogspot.com/2008/02/what-is-anti-malware-anti-virus.html">anti-virus software</a> are named. the analogy has long been recognized as deeply imperfect and limited. that's why, in reality, there are relatively few references to this analogy in the anti-malware field other than "computer virus", "anti-virus", and "<a href="http://anti-virus-rants.blogspot.com/2006/01/what-is-infection.html">infection</a>" (all three of which date back virtually to the beginning of the field). his call towards the end of his talk for blinded or even double-blinded studies, aside from being prohibitively expensive to perform, seems to cling to this medical paradigm in spite of the fact that the subject of such experimentation (ie. the computer, since we're interested in whether AV can prevent computers from becoming compromised) cannot be psychologically influenced by knowledge of which (if any) anti-virus is being used.<br />
<br />
when he FINALLY leaves the topic of medical science to return to security products (about 14 minutes into his half hour talk) he harps on the absence of one very particular kind of experiment being performed on security products - what he calls a <a href="https://en.wikipedia.org/wiki/Randomized_controlled_trial">randomized controlled trial</a>. it turns out this is a hold-over from his preoccupation with medical science. when Hanno says that IT security is largely science-free it is the absence of this particular kind of scientific experiment that he is referring to, but that doesn't actually make it science-free because science has a variety of different ways to study and experiment on things that aren't people.<br />
<br />
there is in fact good scientific evidence for the efficacy of anti-virus software and it's provided by none other than <a href="http://blogs.microsoft.com/microsoftsecure/2016/06/14/whats-been-happening-in-the-threat-landscape-in-the-european-union/">Microsoft</a>:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikReBrYpFgCWUUYKldvISkl-fWDBhqnDE9UPIOkK8CON0SUuUwcUnE1Bp5SMxG4lJC1nZ5URPAITG2kfj5Ic9DMOTcC-uR9qK3OqJyX_WBh1KCF7B5Pty7Jlx3PkT4MGvXebHlXA/s1600/figure1.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikReBrYpFgCWUUYKldvISkl-fWDBhqnDE9UPIOkK8CON0SUuUwcUnE1Bp5SMxG4lJC1nZ5URPAITG2kfj5Ic9DMOTcC-uR9qK3OqJyX_WBh1KCF7B5Pty7Jlx3PkT4MGvXebHlXA/s1600/figure1.PNG" /></a></div>
<br />
now it's true that this data is from an observational study and that it only shows correlation rather than causation, but that's not the end of the world. observational studies <b>are still science</b>. showing correlation may not be definitive evidence but it's still strong evidence, especially considering the scope of the study (hundreds of millions of computers around the world out of a total estimated population of 1.25 billion windows PCs). in this particular case A may not be causing B but B definitely can't cause A and if anyone can think of a confounding variable that might be present on hundreds of millions of systems then maybe let Microsoft know so that they can try to account for it in the future.<br />
<br />
another source of scientific evidence (oft derided in information security circles because the results don't match experts' anecdata) are the independent testing labs like <a href="http://av-test.org/">av-test.org</a> or <a href="http://av-comparatives.org/">av-comparatives.org</a>. they eliminate the influence of confounding variables and so are capable of showing causation rather than just correlation. unfortunately Hanno believes their methodology is "extremely flawed". let's look at his complaints:<br />
<ul>
<li>"If a software detects a malware it does not mean it would've caused harm if undetected."</li>
<ul>
<li>this is trivially false. anyone who actually reads the testing methodology at av-comparatives (for example) can find right at the beginning a statement about first testing the <a href="http://anti-virus-rants.blogspot.com/2006/01/what-is-malware.html">malware</a> without the AV present and eliminating any that don't work in that scenario. therefore every sample that is detected by AV in their tests would have caused harm if it had gone undetected.</li>
</ul>
<li>"Alternatives to Antivirus software are not considered." (the talk gives "regular updates" and "application whitelisting" as examples)</li>
<ul>
<li>the example of "regular updates" is frankly a little bit bizarre given Hanno's earlier references to confounders. not controlling for this scenario would actually introduce a confounding variable and make it more difficult to show a causal relationship between the use of a particular AV and the prevention of malware incidents.</li>
<li>the example of "application whitelisting" underscores a serious problem in Hanno's understanding of what he's critiquing. <a href="http://anti-virus-rants.blogspot.com/2008/02/what-is-application-whitelisting.html">application whitelisting</a> isn't an alternative to AV, it's <b>a part of</b> AV. many products include this as a feature. Symantec's product, for example, has what they call a reputation engine which alerts when it encounters anything that doesn't have a known good reputation (which means new/unknown malware, traditionally the bane of <a href="http://anti-virus-rants.blogspot.com/2008/02/what-is-known-malware-scanning.html">known-malware scanning</a>, will get alerted on because it hasn't been seen before and thus no reputation, good or bad).</li>
</ul>
<li>"Antivirus software as a security risk is not considered."</li>
<ul>
<li>when malware <a href="http://anti-virus-rants.blogspot.com/2006/05/what-is-exploit-code.html">exploiting</a> <a href="http://anti-virus-rants.blogspot.com/2012/09/what-is-vulnerability.html">vulnerabilities</a> in anti-virus software is found in the wild then perhaps the test methodologies should be updated to include this possibility. until then, changing the methodology to account for malware that doesn't seem to exist outside a lab has no real benefit.</li>
</ul>
<li>"None of these tests are with real users."</li>
<ul>
<li>again, this would introduce a confounding variable. maybe the lack of malware incidents is because of something the user did rather than because of the AV. alternatively maybe the failure to stop malware incidents is because of something the user did rather than because of a failure of the AV. if you want to establish causation you have to control your variables (something our scientifically-minded speaker Hanno should know all too well). does the anti-virus prevent malware incidents? the tests say yes. can a user preempt or compromise that prevention? also yes. is there any prevention a user can't preempt or compromise? sadly (or perhaps thankfully) no. if you want a study that includes users and thus eliminates the ability to establish a causal link between AV use and prevention of malware incidents, see the study by Microsoft, but even with the inclusion of the users it still suggests AV prevents malware incidents.</li>
</ul>
</ul>
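the confounding described above can be made concrete with a toy simulation (all probabilities are invented): suppose AV had zero effect, but cautious users both installed AV more often and got infected less often. an uncontrolled comparison of the two groups would still make AV look protective:

```python
import random

random.seed(0)


def simulate(n=100_000):
    # has_av -> [infections, group size]
    stats = {True: [0, 0], False: [0, 0]}
    for _ in range(n):
        cautious = random.random() < 0.5
        # cautious users are far more likely to bother installing AV...
        has_av = random.random() < (0.9 if cautious else 0.3)
        # ...and infection here depends ONLY on behaviour, never on AV
        infected = random.random() < (0.01 if cautious else 0.10)
        stats[has_av][0] += infected
        stats[has_av][1] += 1
    return {k: v[0] / v[1] for k, v in stats.items()}


rates = simulate()
print(f"infection rate with AV:    {rates[True]:.3f}")   # roughly 0.03
print(f"infection rate without AV: {rates[False]:.3f}")  # roughly 0.09
```

the AV group looks safer even though AV does nothing at all in this model, which is exactly why the lab tests control their variables: removing the user from the experiment is what makes a causal claim about the AV itself possible.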
<div>
<br />
when Hanno addressed the paucity of scientific papers dealing with security i found myself confused. using Google Scholar to find the most cited scientific papers? surely he doesn't think the realm of security is so narrowly focused that he'll find what he's looking for that way. security is in fact incredibly broad, covering many different quasi-related domains, and looking at a handful of the most popular scientific papers across all of security is in no way representative of the corpus of available works related to any one particular field (like security software). perhaps i'm biased, having previously (in the very distant past) maintained <a href="http://members.tripod.com/~k_wismer/papers.htm">a reference library of papers related specifically to anti-virus</a>, but it doesn't seem like Hanno showed much evidence that he knew how to find evidence-based security. is it really that hard to add the term "malware" to his search query? could he not find a few and then use them as a seed in an algorithm that crawls backwards and forwards through scientific papers by citation? did he even bother to look at <a href="https://www.virusbulletin.com/">Virus Bulletin</a>? does he even know what that is?<br />
<br />
security isn't the only thing that is incredibly broad - so too is the practice and discipline of science itself. there are many different fields and each one does things in their own particular way. we do not perform randomized controlled trials on the cosmos. as a general rule we do not intervene in volcano formation. the work being done at the large hadron collider does not follow exactly the same methodologies that are used in medical science. are we to judge cosmology, volcanology, or particle physics poorly because of this? no of course not. a question you might well ask is what kind of science should logically be used when it comes to studying computer security and, while i suspect multiple scientific disciplines could be useful, the one that springs immediately to mind is computer science. does computer science look anything like medical science? as someone with a degree in computer science i can tell you the answer is emphatically <b>no</b>. we do many things in computer science but randomized controlled trials are not among them (because computers are not people). while Hanno may style himself as "scientifically minded" he doesn't seem to demonstrate an appreciation for the breadth of valid scientific research methodologies and one is left to wonder if he's familiar with any kind of science outside of medicine.<br />
<br />
when it comes right down to it, it's this apparent lack of familiarity with the subject matter he's talking about that i found most troubling about Hanno's talk. what is anti-virus software <b>really</b>? what is av testing methodology <b>really</b>? what does science <b>really</b> look like? where do you look for scientific research into malware and anti-malware? these all seem to be questions Hanno struggles with, which brings us back to the subject of why he likes to avoid the term "security researcher". if i had to venture a guess i'd say it's because he doesn't do research, even the basic research necessary to understand the subject matter. as such i would say avoiding the term "security researcher" is probably appropriate (for now).<br />
<br />
i'm not sure what one can say in a talk about a subject one hasn't done one's homework on, but hopefully that can improve in the future. Hanno referenced Tavis Ormandy during his talk (as people who criticize AV like to do). Tavis' work on AV also <a href="http://anti-virus-rants.blogspot.com/2011/08/tavis-ormandys-sophail-presentation.html">suffered from a lack of understanding in the beginning</a>, but he improved over time and, while he still has room for more improvement, now has arguably done some good work in finding vulnerabilities in AV and holding vendors accountable for the quality of their software. i'm certain Hanno can also improve. i know there are real criticisms to be made of AV software and the industry behind it, but they have to be informed, they have to come from a place of real knowledge and understanding. i look forward to Hanno reaching that place.</div>
highlights from #sector2016 (2016-10-21)

i haven't posted about the sector conference in a number of years, in spite of attending. let's break that trend. here are some highlights from this year's conference.<br />
<br />
<ul>
<li>edward snowden - he was surprisingly well prepared to talk about canadian policy and events</li>
<li>marketing gimmicks - hockey pucks are one thing, but give me a key and tell me that there's a lock that it <i><b>might</b></i> fit and you're darn right i'm going to go find out if it fits. i gather there was a prize i could have won if it did fit but i didn't pay much attention to that. a friend at work said he'd have bought their product on the strength of that gag alone.</li>
<li>ransomware - ransomware seemed to be the theme this year. i lost track of how many talks were about or mentioned ransomware. 2016 really does seem like the year of ransomware. i caught the tail end of a talk from someone at sophos where they described a feature for rolling back encrypted files using a proprietary backup mechanism. if other vendors aren't doing something like this you're leaving money on the table. (that's right, i know how you vendors think)</li>
<li>mikko hypponen - great perspective on what protecting computers has evolved into: protecting society because it now runs on computers</li>
<li>the security problems of an 11 year old and how to solve them - this talk was given by an actual 11 year old who could probably put some professionals to shame. this is the talk i most look forward to sharing with people at work when the videos become available.</li>
<li>mikko's "virus" floppy disk that he left behind - this isn't a highlight because someone from the AV industry was careless with infectious materials but rather because when it was found people wanted to find a computer they could stick it into and see what was there. you'd think the difficulty in finding the hardware necessary to read a 5 1/4" floppy would make such a disk a relatively safe prop to use, even if there was a virus on it. leave it to infosec pros to try and find ways around such barriers. now you know where shadow IT comes from, folks. by the way, don't tell mikko.</li>
</ul>
<div>
there were other good talks and keynotes, of course, but i'm not going to detail every talk i attended and every person i met. these are the things that really stood out to me and if you want to know more you should have gone yourself.</div>
<div>
<br /></div>
kurt wismerhttp://www.blogger.com/profile/03810635947269551517noreply@blogger.com0tag:blogger.com,1999:blog-7347279.post-57225304321378450102016-09-02T12:25:00.000-04:002016-09-02T12:25:10.940-04:00the anti-virus harm balance<a href="http://anti-virus-rants.blogspot.com/2008/02/what-is-anti-malware-anti-virus.html">anti-virus software</a>, like all software, has defects. sometimes those defects are functional and manifest in a failure to do something the software was supposed to do. at other times the defects manifest in the software doing something it was never supposed to do, which can have security implications so we classify them as <a href="http://anti-virus-rants.blogspot.com/2012/09/what-is-vulnerability.html">software vulnerabilities</a>. over the years the software vulnerabilities in anti-virus software have been gaining an increasing amount of attention from the security community and industry - so much so that these days there are people in those groups expressing the opinion that, due to the presence of those vulnerabilities, anti-virus software does more harm than good.<br />
<br />
the reasoning behind that opinion goes something like this: if anti-virus software has vulnerabilities then it can be attacked, so having anti-virus software installed increases the attack surface of the system and makes it more vulnerable. worse still, anti-virus software is everywhere, in part because of well funded marketing campaigns but also because in some situations it's mandated by law. add to that the old but still very popular opinion that anti-virus software isn't effective anymore and it starts looking like a perfect storm of badness waiting to rain on everyone's parade.<br />
<br />
there's a certain delicious irony in the idea that software intended to close avenues of attack actually opens them instead, but as appealing as that irony is, is it really true? certainly each vulnerability does open an avenue of attack, but is it really doing that <b>instead</b> <b>of</b> closing them or is it <b>as well as</b> closing them?<br />
<br />
if an anti-virus program stops a particular piece of <a href="http://anti-virus-rants.blogspot.com/2006/01/what-is-malware.html">malware</a>, it's hard to argue that it hasn't closed the avenue of attack that piece of malware represented. it's also hard to argue that anti-virus software doesn't stop any malware - i don't think anyone in the anti-AV camp would try to argue that because it's so demonstrably false (anyone with a malware collection can demonstrate anti-virus software stopping at least one piece of malware). indeed, the people who criticize anti-virus software usually complain not that the set of malware stopped by AV is too small but rather that it doesn't include the malware that matters most (the new stuff).<br />
<br />
so, since anti-virus does in fact close avenues of attack, that irony about opening avenues of attack instead of closing them isn't strictly true. but what about the idea that anti-virus software does more harm than good? well, for that to be true anti-virus software would have to open more avenues of attack than it closes. i don't know how many vulnerabilities any given anti-virus product has so i can't give an exact figure of how many avenues of attack are opened. i doubt anyone else can do so either (though i imagine there are some who could give statistical estimates based on the size of the code base). the other side of the coin, however, is one we have much better figures for. the number of pieces of malware that better known anti-virus programs stop (and therefore the number of avenues of attack closed) is in the millions if not tens of millions, and that number increases by thousands each day. can the number of vulnerabilities in anti-virus software really compare with that?<br />
<br />
it's said that windows has 50 million lines of code. if an anti-virus product were comparable (i suspect in reality it would have fewer lines of code) and if that anti-virus product only stops 5 million pieces of malware (i suspect the real number would be higher) then in order for that anti-virus product to do more harm than good it would need to have at least one vulnerability for every 10 lines of code. that would be ridiculously bad software considering such metrics are usually estimated per 1000 lines of code.<br />
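the arithmetic behind that comparison can be put in a small back-of-the-envelope sketch - every number in it is an illustrative assumption from the paragraph above, not measured data:

```python
# back-of-the-envelope harm balance for a hypothetical AV product
# (all figures are illustrative assumptions, not measurements)
lines_of_code = 50_000_000      # a generously windows-sized code base
malware_stopped = 5_000_000     # avenues of attack closed (likely low)

# defect density is normally quoted per 1000 lines of code; even a
# pessimistic 10 defects per KLOC yields far fewer vulnerabilities
# than avenues of attack closed
defects_per_kloc = 10
estimated_defects = lines_of_code // 1000 * defects_per_kloc  # 500,000

# for harm to exceed good, vulnerabilities would have to outnumber
# the malware stopped: one for every this-many lines of code
break_even = lines_of_code / malware_stopped  # 10.0 lines per vulnerability

print(estimated_defects, break_even)
```

even that pessimistic estimate falls an order of magnitude short of the one-vulnerability-per-10-lines the harm-over-good claim would require.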
<br />
now one might argue (in fact i'm sure many will) that those millions of pieces of malware that anti-virus software stops don't really represent actual avenues of attack because for the most part they aren't actually being used anymore. they've been abandoned. counting them as closed avenues of attack isn't realistic. the counter-argument to that, however, is to examine why they were abandoned in the first place. the reason is obvious, they were abandoned because anti-virus software was updated to stop them. the only reason why <a href="http://anti-virus-rants.blogspot.com/2010/08/what-is-malware-writer.html">malware writers</a> continue making new malware instead of resting on their laurels and using existing malware in perpetuity is because once anti-virus software can detect that existing malware it generally stops being a viable avenue of attack. so rather than the abandonment of that malware counting against anti-virus software's record of closing avenues of attack it's actually closer to being AV's figurative body count.<br />
<br />
there is still malware out there that anti-virus software hasn't yet stopped, and as that set is continually replenished it's unlikely that anti-virus software will stop all the malware. it has stopped an awful lot so far, however, so the next time someone says anti-virus software does more harm than good (due to its vulnerabilities) ask them for their figures on the number of vulnerabilities in anti-virus products and see how it compares with the number of things anti-virus software stops. i have a feeling you'll find those people are full of it.kurt wismerhttp://www.blogger.com/profile/03810635947269551517noreply@blogger.com0tag:blogger.com,1999:blog-7347279.post-5214373567468291482015-09-01T16:50:00.000-04:002015-09-01T17:10:14.191-04:00there's a quality problem in the anti-malware industryif you follow infosec news sources at all, by now you've probably heard about <a href="http://www.reuters.com/article/2015/08/14/us-kaspersky-rivals-idUSKCN0QJ1CR20150814">the claim made by an anonymous pair of ex-kaspersky employees that kaspersky labs weaponized false positives</a>.<br />
<br />
more specifically, the claim is that engineers at kaspersky labs were directed to reverse engineer competing products and use that knowledge to alter legitimate system files, inserting malicious-looking code so that the altered files would seem like files that should be detected while remaining similar enough to the originals that the competing product would also act on the legitimate files, causing problems for users of those competing products.<br />
<br />
i've heard this described as fake <a href="http://anti-virus-rants.blogspot.com/2006/01/what-is-malware.html">malware</a>, but for the life of me i can't see why it should be called fake. the altered files may not do anything malicious when executed, but they're clearly designed to exploit those competing products. furthermore, there is clearly a damaging payload. this isn't fake malware, it's real malware. it may launch its malicious payload in an unorthodox and admittedly indirect manner, but this is essentially an <a href="http://anti-virus-rants.blogspot.com/2006/05/what-is-exploit-code.html">exploit</a>.<br />
<br />
some consider the detection of these altered files to be false positives because the files don't actually do anything themselves, but since they have malicious intent and indirectly harmful consequences, i think the only real false positives in play here are the original system files that are being mistaken for these modified files.<br />
<br />
by all accounts, this type of attack on <a href="http://anti-virus-rants.blogspot.com/2008/02/what-is-anti-malware-anti-virus.html">anti-malware products</a> actually happened. what's new here is the claim that kaspersky labs was responsible at the direction of eugene kaspersky himself. there's a lot of room for doubt. the only data we have to go by so far, besides the historical fact of the attack's existence, is the word of anonymous sources (who potentially have an ax to grind) and some emails that, quite frankly, are easily forged. circumstantially there's also <a href="https://securelist.com/blog/opinions/30611/on-the-way-to-better-testing/">an experiment kaspersky participated in</a> around the same time frame that has similar earmarks to what is being claimed except for the part about tricking competing products into detecting legitimate files as malware.<br />
<br />
i don't expect we'll ever know for sure if kaspersky was behind the attacks. doubts have been expressed by members of the industry, but frankly i've seen too many things whitewashed or completely ignored (like <a href="http://anti-virus-rants.blogspot.com/2011/02/ethical-conflict-in-anti-malware-domain.html">partnerships with government malware writers</a>) to take their publicly expressed statements at face value. there are certainly vendors i'd have a harder time believing capable of this but there just doesn't seem to be sufficient evidence that the claims are true. the problem is that i can't imagine any kind of evidence the anonymous sources are likely to have that isn't easy to repudiate. had they taken a stand at the time (like someone with actual scruples would have done) they would have been able to put their names behind their claims - they may have lost their jobs but they surely would have been able to find employment with a different vendor because hiring a whistle-blower would have been good PR.<br />
<br />
however, as it stands now, the anonymous sources have to remain anonymous. if they're telling the truth then they are complicit in kaspersky's wrong-doing, and if they're lying they are throwing the entire industry under the bus for no good reason (because this claim fans the fires of that old conspiracy theory about AV vendors being the ones who write the <a href="http://anti-virus-rants.blogspot.com/2006/01/what-is-virus.html">viruses</a>). either way, to have this claim linked to their real identities now would make them radioactive in the industry. no one would touch them, and for good reason.<br />
<br />
long ago it used to be that the industry only employed the highest calibre of researchers. people who were beyond reproach. naturally, in order to grow, the industry has had to hire ever increasing numbers of people and old safeguards against undesirable individuals joining the ranks don't scale very well. increasingly people who aren't beyond reproach are being found amongst the industry's ranks and there appears to be no scenario where these two anonymous sources don't fall into that category. the inclusivity that the general security community embraces (and that the anti-malware industry is increasingly mimicking) has the consequence that blackhats are included. the anti-malware industry is going to have to either figure out if they're ok with that or figure out a way to avoid what the general security community could not.kurt wismerhttp://www.blogger.com/profile/03810635947269551517noreply@blogger.com5tag:blogger.com,1999:blog-7347279.post-35947925938574181552014-12-16T14:14:00.001-05:002014-12-16T14:14:44.828-05:00no malware defeats 90% of defensesyesterday, '<a href="http://anti-virus-rants.blogspot.com/2006/09/end-of-security-experts.html">security expert</a>' robert graham penned a blog post claiming that <a href="http://blog.erratasec.com/2014/12/all-malware-defeats-90-of-defenses.html">all malware defeats 90% of defenses</a> - a claim made in answer to the FBI's claim that the attack on sony would have been just as successful against 90% of other companies. as you might well imagine, however, robert graham was in error.<br />
<br />
the error isn't a straight-forward one, but it is one that most of the security industry makes. it's an error in framing.<br />
<br />
the security industry likes to frame the problem as automaton vs. automaton because that facilitates the comforting lie they tell their customers. businesses see security (not incorrectly) as something that costs them time and money and so they search for ways to cut those costs. the security industry, flush with skillful sales people, tells businesses what they want to hear: that they can cut costs and automate much of security, leaving only a handful of personnel left to operate a little like janitorial staff - cleaning up messes and keeping the automaton running smoothly. likewise, the security industry tells consumers what they want to hear as well: that they just need to install a product and that product will take care of security for them automatically.<br />
<br />
in security, however, your adversary isn't a thing, it's a person. <a href="http://anti-virus-rants.blogspot.com/2006/01/what-is-malware.html">malware</a> doesn't defeat defenses any more than a pick and tension wrench defeats the tumblers in a lock. malware is an object, not a subject. it may have some small measure of autonomy (some more so than others), but it doesn't defeat anything - it's not the agent in that kind of scenario, it's simply a proxy for an intelligent adversary.<br />
<br />
intelligent adversaries are notoriously good at outsmarting automatons. robert graham provided a wonderful example of that in his own post when he described creating brand new malware that went undetected by the <a href="http://anti-virus-rants.blogspot.com/2008/02/what-is-anti-malware-anti-virus.html">anti-malware software</a> being run by his targets. what he failed to do was take appropriate credit. it wasn't the malware that defeated those defenses, it was a person or persons with <a href="http://anti-virus-rants.blogspot.ca/2010/07/what-is-apt.html">APT</a> level skill (even if it didn't require quite that much skill to pull it off - he described it as easy, but easy is a relative term). the targets were compromised, not because they were using substandard defensive technology per se, but because they were relying on automatons to protect them against people.<br />
<br />
<b>in a battle of wits between an automaton and an intelligent adversary, the intelligent adversary has the advantage by definition</b>.<br />
<div>
<br /></div>
<div>
so long as the security industry continues to tell their customers what they want to hear instead of what they need to know, those customers are going to continue relying on a stupid box to fend off smart people. that is a recipe for failure no matter what technology is involved.</div>
kurt wismerhttp://www.blogger.com/profile/03810635947269551517noreply@blogger.com0tag:blogger.com,1999:blog-7347279.post-56564447597287849702014-11-24T12:00:00.000-05:002014-11-24T12:00:02.675-05:00stealing master passwords is just not that big a dealby now a lot of people have heard the news that <a href="http://arstechnica.com/security/2014/11/citadel-attackers-aim-to-steal-victims-master-passwords/">a new version of the citadel trojan steals the master password for password management software</a>. a LOT of electrons have gone into reporting this new development <a href="http://www.zdnet.com/citadel-malware-attacking-open-source-password-managers-7000036028/">over</a> and <a href="http://thehackernews.com/2014/11/new-citadel-trojan-targets-your.html">over</a> and <a href="http://threatpost.com/citadel-variant-targets-password-managers/109493">over</a> again, but it really doesn't seem like it's a big enough deal to warrant all this attention.<br />
<br />
while it may be novel to use keylogging to steal specific passwords for password management software, <a href="http://anti-virus-rants.blogspot.com/2006/04/what-is-password-stealer.html">password stealers</a> and <a href="http://anti-virus-rants.blogspot.com/2006/04/what-is-keylogger.html">keyloggers</a> are anything but new, and the biggest difference between having this new version of citadel on your system or a traditional keylogger is basically that citadel will be able to collect (and therefore compromise) all your passwords <b><i>faster</i></b> than a normal keylogger (all at once vs. piecemeal).<br />
<br />
that's really all there is to it. from the perspective of a potential victim, citadel isn't doing anything really new, it's just doing it more efficiently. password stealing <a href="http://anti-virus-rants.blogspot.com/2006/01/what-is-malware.html">malware</a> has been out there for a long time and password managers were never meant to combat that particular threat to password security. password managers are meant to facilitate the use of strong, unique passwords which in turn serves to mitigate the risk of compromises to remote systems - compromising the local system is an entirely different problem.<br />
<br />
at the end of the day, you can't operate securely with a compromised computer. even if you were to use 2 factor authentication (which could conceivably render password stealing moot) everything else you enter or access would be exposed to potential theft or manipulation if you're using a compromised computer.<br />
<br />
i realize it may seem awkward that a class of software security pros have been promoting for years in order to improve security is now being targeted by malware, but it's only awkward because such a needlessly big deal is being made out of it. password management software still mitigates the same risks associated with remote compromises that it always did, and you're as hosed as you ever were in the event of a local compromise. nothing has actually changed for the people trying to keep their things secure so stop acting like this development changes anything - it doesn't.kurt wismerhttp://www.blogger.com/profile/03810635947269551517noreply@blogger.com0tag:blogger.com,1999:blog-7347279.post-8596066141191063062014-09-17T11:00:00.000-04:002014-09-17T11:00:00.791-04:00the PayPal pot calling the Apple Pay kettle blackso if you haven't heard yet, <a href="http://appleinsider.com/articles/14/09/15/paypal-questions-apple-pay-security-in-new-ad-uses-icloud-celebrity-photo-debacle-as-ammunition">PayPal took out a full page ad in the New York Times</a> trying to drag Apple Pay's name through the mud based on Apple's unfortunate celebrity nude selfie leak. This despite the fact that PayPal happily hands out your email address to anyone you have a transaction with. In essence, PayPal has been leaking email addresses for years and not doing anything about it, so they shouldn't get to criticize others for leaking personal information.<br />
<br />
what's the big deal about email addresses? while it's true that we often have to give an email address to every site we transact with, we don't have to give them all the same address. in fact, giving each site a different email address happens to be a pretty good way to <a href="http://anti-virus-rants.blogspot.com/2006/12/how-to-avoid-email-spam.html">avoid spam</a>, but more importantly it's a good way to <a href="http://anti-virus-rants.blogspot.com/2006/12/how-to-recognize-phishing-emails-easy.html">avoid phishing emails</a>, and that's important where PayPal is concerned because PayPal is one of the most <a href="http://anti-virus-rants.blogspot.com/2006/11/what-is-phishing.html">phished</a> brands in existence.<br />
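as a sketch of that per-site address idea, here's what it might look like using "plus addressing" (assuming your mail provider supports it - many do, but not all); the mailbox names and domain here are made up for illustration:

```python
# sketch: derive a unique per-site alias via "plus addressing"
# (mailbox, domain, and site names below are invented examples)
def site_alias(mailbox: str, domain: str, site: str) -> str:
    return f"{mailbox}+{site}@{domain}"

# register a distinct address with each service...
paypal_addr = site_alias("kurt", "example.com", "paypal")
# ...then any mail claiming to be from that service but delivered to
# a different alias is immediately recognizable as phishing (or a leak)
def looks_phishy(claimed_site: str, delivered_to: str,
                 mailbox: str, domain: str) -> bool:
    return delivered_to != site_alias(mailbox, domain, claimed_site)

print(looks_phishy("paypal", "kurt+ebay@example.com",
                   "kurt", "example.com"))   # True - wrong alias
```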
<br />
unfortunately, because PayPal wants all parties in a transaction to be able to communicate with each other, they do the laziest, most brain-dead thing one can imagine to accomplish this: they hand out your <b>PayPal</b> email address to others, which is pretty much the worst email address to do that with. i have actually had to change the disposable email address i use with PayPal because they are apparently incapable of keeping that address out of the hands of spammers, phishers, and other email-based miscreants. furthermore, i also use their service <b>less</b> because i don't want to have to clean up after their <b>mess</b>.<br />
<br />
at some point i may have to start creating disposable PayPal accounts and use prepaid debit cards with them. certainly if i were trying to hide from massive spy agencies then that would be the way to go, but if i'm only concerned with mitigating email-borne threats i really shouldn't have to go to that much trouble. there are other, more intelligent things that PayPal could, and even should, be doing.<br />
<br />
<ul>
<li>they could share an email address of your choosing, rather than unconditionally sharing the one you registered with their service. that way you could provide the same address you probably already provided that other party when you created an account on their site. it shouldn't be too difficult for them to verify that address before sharing it with the other party since they already verify the one you register with.</li>
<li>they could offer their own private messaging service so that communication could be done through their servers (which would no doubt aid in conflict resolution).</li>
<li>they could provide a disposable email forwarding service such that the party you're interacting with gets a unique {something}@paypalmail.com address that forwards the mail on to the email address you registered on PayPal with, and once the transaction is completed to everyone's satisfaction the address is deactivated.</li>
</ul>
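the third proposal could be modeled roughly like this - the class, the method names, and the paypalmail.com domain are all hypothetical, just a sketch of how such a service might route and then deactivate disposable addresses:

```python
# sketch of a disposable email forwarding service; everything here
# (names, domain, API) is hypothetical, not an actual PayPal feature
import secrets

class ForwardingService:
    def __init__(self, domain: str = "paypalmail.com"):
        self.domain = domain
        self.routes = {}  # disposable address -> real address

    def create(self, real_address: str) -> str:
        # mint a unique {something}@paypalmail.com address
        disposable = f"{secrets.token_hex(8)}@{self.domain}"
        self.routes[disposable] = real_address
        return disposable

    def deliver(self, to: str):
        # look up the real destination; None once deactivated
        return self.routes.get(to)

    def deactivate(self, disposable: str):
        # called once the transaction is completed to everyone's satisfaction
        self.routes.pop(disposable, None)

svc = ForwardingService()
addr = svc.create("kurt@example.com")
print(svc.deliver(addr))       # forwards to the registered address
svc.deactivate(addr)
print(svc.deliver(addr))       # None - the address is dead
```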
<div>
they don't do anything like that, however. here's what you can do right now with the facilities PayPal makes available. it's a more painful and less intuitive process than anything proposed above, but it does work.</div>
<div>
<ol>
<li>before you choose to pay for something with PayPal, log into PayPal and add an email address (the one you want shared with the party you're doing a transaction with) to your profile. PayPal limits you to 8 addresses.</li>
<li>confirm the address by opening the confirmation link that was sent to that address</li>
<li>make that address the primary email address for your account</li>
<li>confirm the change in primary email address (if you have a card associated with your PayPal account, PayPal may ask you to enter the full card number)</li>
<li>at this point you can use PayPal to pay for something and the email address that will be shared with the other party is the one you just added to your PayPal account</li>
<li>once you've paid with PayPal you will probably want to log back into PayPal, change the primary email address back to what it originally was (and confirm the change once again) and then remove the address you added for the purposes of your purchase. the reason you'll likely want to do this is because PayPal sends emails to every address it has on record for you, and those duplicate emails will get old fast.</li>
</ol>
<div>
most people aren't even going to be aware that they can do this to keep their real PayPal email address a secret from 3rd parties. as a result all manner of email-borne threats can and eventually will wind up in what would otherwise have been a trusted email inbox. make no mistake, this isn't PayPal providing a way to keep that email address private, this is a way of manipulating PayPal's features to achieve that effect. there are too many unnecessary steps involved for this to be the intended use scenario.</div>
</div>
<div>
<br /></div>
<div>
as such, PayPal is leaking a valuable email address by default every time you pay for something. yes Apple's selfie SNAFU was embarrassing to people, and yes if Apple doesn't do something about that now that they're becoming a payment platform it could be not just embarrassing but financially costly for victims, but PayPal is already assisting in similarly costly outcomes right now (not to mention potential <a href="http://anti-virus-rants.blogspot.com/2006/01/what-is-malware.html">malware</a> outcomes) so they really have no right to be criticizing Apple. Apple, at least, is taking steps to correct their problems - what is PayPal doing?</div>
kurt wismerhttp://www.blogger.com/profile/03810635947269551517noreply@blogger.com0tag:blogger.com,1999:blog-7347279.post-27075065264342035672014-09-08T11:00:00.000-04:002014-09-08T11:00:00.452-04:00on the strength of authentication factorsi ran across a <a href="http://www.theregister.co.uk/2014/09/05/barclays_authentication_tech/">story</a> on friday about barclays bank rolling out biometric authentication for online banking and wound up starting a debate on twitter that i didn't have time for and couldn't easily fit into a series of tweets even if i did have time. essentially what it came down to was that i don't believe all authentication factors are equally strong and the statement that the barclays system was a "password replacement" raised a red flag for me.<br />
<div>
<br /></div>
<div>
the reason it raised a red flag for me is because single factor biometric authentication is something i've come across before, and not just in an article on the web or even as a user but as a builder. my first job out of university was with a biometric security company, and one of the biggest projects i had while working there was developing an authentication enhancement for windows logon. one of the requests made (and the one i fought the hardest against) was to allow logon with just the biometric. </div>
<div>
<br /></div>
<div>
here's the problem with this idea - since windows didn't have biometric capabilities built in, the only way to add single factor biometric authentication was to store more traditional authentication data that windows could accept (such as a password) and then pass that along to windows when the subject's biometric sample matched the registered biometric template. i should note that the article about barclays makes it clear they'll be doing the same thing since they say that barclays won't be storing customer biometric data on their servers. there will have to be a local biometric client that stores more traditional authentication data and passes it on when a biometric match is achieved. storing credentials is not exactly the safest thing in the world. it's not like you can just store a hash of the authentication data in this scenario because you have to be able to present the original, unmodified credentials to the authentication system.</div>
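to illustrate why a hash won't do, here's a deliberately toy sketch of what such a client has to be able to do - the XOR "encryption" is purely illustrative (not a real scheme), and the password and key are invented:

```python
# sketch: a local biometric client must hand the OS the ORIGINAL
# password, so it can only store the credential reversibly.
# XOR with a locally-kept key is used purely for illustration.
def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

stored_key = b"local-device-key"            # invented example key
vault = xor_bytes(b"hunter2", stored_key)   # "encrypted" at enrollment

def on_biometric_match() -> bytes:
    # recover the original credential and pass it to the OS logon;
    # a one-way hash could never be turned back into this
    return xor_bytes(vault, stored_key)

# the catch: anything the client can decrypt, an attacker with the
# same level of access to the machine can decrypt too
print(on_biometric_match())   # b'hunter2'
```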
<div>
<br /></div>
<div>
i balked at the idea of making windows less instead of more secure, but i acquiesced when the decision was made to keep the more secure 2 factor mode of operation (without traditional credential storage) in there as well, along with informing the users that biometric-only logon was less secure. </div>
<div>
<br /></div>
<div>
it's not just less secure because of the credential storage, though, and this is where the twitter debate on friday ventured into. in the course of that job i had the opportunity to examine multiple biometric systems, such as face, voice, iris, etc. and i came away with 2 realizations: 1) the only biometrics that users will ever accept are non-invasive ones (no one wants sensors stuck into them), and 2) that lack of invasiveness makes it relatively easy to steal biometric samples from users, often without them even knowing. fingerprints can be lifted from anything you touch. recordings of your voice can be made without your knowledge. photographs of faces are ubiquitous and a high enough resolution image will capture your iris pattern. </div>
<div>
<br /></div>
<div>
other authentication factors like passwords and tokens generally rely on restricting access to the authentication data, often through secrecy. when that secrecy is lost, such as when someone takes a photograph of a door key (which is a kind of token) it becomes relatively easy to reproduce the authenticator and gain access to what was being protected. biometrics, especially non-invasive ones, forgo this secrecy under the mistaken belief that reproducing the authenticator is difficult for biometrics. the reality, though, is that you don't have to reproduce a biometric sample, you only have to create an approximation that is good enough to fool the biometric sensor, which often isn't particularly difficult. optical sensors can be fooled with images, audio sensors can be fooled with recordings, the mythbusters once fooled a capacitance sensor by licking a photocopy of a fingerprint.</div>
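the "good enough to fool the sensor" point comes down to the fact that biometric matching is approximate - a threshold comparison, not an exact one. a toy sketch (the feature vectors and threshold below are invented for illustration):

```python
# sketch: biometric matching accepts any sample "close enough" to the
# enrolled template, so a forged approximation can clear the same
# threshold a noisy genuine sample does (all values are invented)
def similarity(a, b):
    # fraction of matching features between two samples
    return sum(x == y for x, y in zip(a, b)) / len(a)

TEMPLATE  = [1, 0, 1, 1, 0, 1, 0, 0]
THRESHOLD = 0.75   # sensors must tolerate noisy genuine samples

genuine = [1, 0, 1, 1, 0, 1, 0, 1]   # one noisy feature
forgery = [1, 0, 1, 1, 0, 0, 0, 1]   # photocopy-grade approximation

print(similarity(TEMPLATE, genuine) >= THRESHOLD)   # True
print(similarity(TEMPLATE, forgery) >= THRESHOLD)   # True - also accepted
```

lower the threshold and forgeries get in more easily; raise it and legitimate users get rejected - the trade-off can be tuned but never eliminated.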
<div>
<br /></div>
<div>
now hold on, i hear you say, isn't it also really easy to steal passwords? and isn't reproducing that authenticator the easiest of all? it's certainly true that in practice all kinds of things can affect how easy it is for an attacker to become illegitimately authenticated. for that reason i try to look at the upper bound of the strength of the various authentication factors. how strong is a system under ideal conditions, that is where everything goes right and legitimate parties don't make any mistakes.</div>
<div>
<br /></div>
<div>
for passwords, that ideal situation means that the user doesn't accidentally click on anything that would steal his/her password, doesn't get fooled by phishing sites, etc. in short, the attacker can't get the password from the user. it also means the attacker can't get passwords in transit (because that's been properly secured) or a password database from service provider because no vulnerability is found in their system and their employees are likewise careful to avoid making mistakes. under this ideal situation the attacker's only way to succeed in gaining illegitimate entry is to perform an online brute force attack (no, not a dictionary attack, because the user didn't make the mistake of using something from a dictionary) and they'd have to go slow because the ideal provider would have rate-limited failed logon attempts. now you might say this is unrealistic, people make mistakes, and that's true in practice in the aggregate, but it is possible for an individual to do everything right, and it is also possible for attackers to not be able to find any way to attack the provider in order to get the password database. this isn't how strong password protection always is, but rather the ideal we hope to achieve by making our systems secure and avoiding making mistakes, and sometimes in limited cases this is achieved.</div>
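the rate-limiting that ideal provider applies might look something like this sketch (the limits and names are invented for illustration, and a real implementation would also reset counters on a successful logon):

```python
# sketch: lock out further logon attempts after repeated failures,
# forcing an online brute force attack to go slowly
# (max_failures and lockout window are illustrative choices)
import time

class LoginLimiter:
    def __init__(self, max_failures: int = 5, lockout_seconds: float = 300.0):
        self.max_failures = max_failures
        self.lockout_seconds = lockout_seconds
        self.failures = {}  # user -> (failure count, last failure time)

    def allowed(self, user: str, now: float = None) -> bool:
        now = time.time() if now is None else now
        count, last = self.failures.get(user, (0, 0.0))
        if count < self.max_failures:
            return True
        return now - last >= self.lockout_seconds  # back-off expired?

    def record_failure(self, user: str, now: float = None):
        now = time.time() if now is None else now
        count, _ = self.failures.get(user, (0, 0.0))
        self.failures[user] = (count + 1, now)

limiter = LoginLimiter()
for _ in range(5):
    limiter.record_failure("alice", now=100.0)
print(limiter.allowed("alice", now=101.0))   # False - locked out
print(limiter.allowed("alice", now=500.0))   # True - back-off expired
```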
<div>
<br /></div>
<div>
for tokens, let's consider the ideal situation to be comparable to that for passwords but on top of that let's consider the strongest token possible (ie. not a door key). let's consider a token that produces one-time-passwords (without any vulnerabilities that would make those passwords easy to predict) so that even brute force attacks become much harder. on the surface this seems even stronger than passwords, but there's a chink in the armour and apple's recent icloud problems are a good example. tokens can be lost or stolen so there needs to be a way to recover from that problem. while our "ideal situation" precludes our user from losing their token, it does not preclude our service provider from providing users with a way to cope with the loss of their tokens. the strongest way to do this is to provide the user with pre-generated one-time-passwords ahead of time. this can work for an individual user who is careful and doesn't make any mistakes but as we've previously seen our "ideal situation" does not extend to the point of saying all users make no mistakes, so the pre-generated one-time-passwords are going to fail for reasons such as never being printed out and put in a safe place, or not being able to get to that safe place because the user is traveling, etc. what's a service provider to do then? so far, their best option might be to use traditional passwords as a fall back, and if they do then the token system becomes only as strong as passwords, because although our ideal user didn't lose their token, the provider can't really know that the user didn't lose it (or worse that it was stolen) and has to accept attempts to use the password fall back. while there is room for tokens to be stronger than passwords, the price is that only ideal users will be able to recover in the event of a lost token, and that price may be more than service providers are willing to accept.</div>
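for the curious, an event-based one-time-password token of the kind described can be sketched with the standard HOTP construction (RFC 4226); the shared secret below is the RFC's published test secret, standing in for a real provisioning flow:

```python
# sketch: HOTP (RFC 4226) - the one-time-password scheme behind many
# hardware tokens; each counter value yields a fresh 6-digit code
import hmac, hashlib, struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    msg = struct.pack(">Q", counter)                  # 8-byte counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# token and server share the secret and a counter; a captured code is
# useless for replay because the counter has already moved on
secret = b"12345678901234567890"        # RFC 4226 test secret
print(hotp(secret, 0))   # "755224" - RFC 4226 test vector
print(hotp(secret, 1))   # "287082"
```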
<div>
<br /></div>
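the recovery problem described above boils down to a fallback chain, and the chain is only as strong as the weakest method it accepts. a hypothetical sketch (the names and structure are mine, not any particular provider's logic):

```c
#include <assert.h>

/* hypothetical sketch of the fallback chain described above: accept a
   valid one-time password from the token, else a pre-generated backup
   code, else (if the provider allows it) a traditional password */
enum method { METHOD_NONE, METHOD_TOKEN_OTP, METHOD_BACKUP_CODE, METHOD_PASSWORD };

static enum method authenticate(int otp_valid, int backup_valid,
                                int password_valid, int allow_password_fallback)
{
    if (otp_valid)
        return METHOD_TOKEN_OTP;
    if (backup_valid)
        return METHOD_BACKUP_CODE;
    if (allow_password_fallback && password_valid)
        return METHOD_PASSWORD;  /* this branch drags the whole scheme
                                    down to password strength */
    return METHOD_NONE;
}
```

note that the attacker gets to pick which branch to attack - they don't need to beat the token if the password branch is open to them.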
<div>
for biometrics, we once again consider an ideal user who does nothing wrong, and an ideal service provider who likewise makes no mistakes. in spite of doing nothing wrong, the user's voice can still be recorded, their face can still be photographed (in most cultures, since facial covering is relatively rare), etc. - simply interacting with the world cannot qualify as doing something wrong or making a mistake. acquiring the information necessary to construct a counterfeit authenticator is easy compared to passwords and tokens because no effort is taken to conceal that information, and the cultural adjustments needed to change that are beyond what i think would be reasonable to expect. the difficulty in attacking a biometric authentication system boils down to the difficulty in fooling a sensor (or sometimes 2 sensors, as people have tried to strengthen fingerprint biometrics with so-called "liveness tests"), and that difficulty has been consistently overestimated in the past.</div>
<div>
<br /></div>
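part of why fooling a sensor is enough can be seen in how matching works: biometric comparison is a similarity score against a threshold, not an exact match the way a password check is. a hypothetical sketch (the threshold value is made up for illustration):

```c
#include <assert.h>

/* hypothetical similarity score out of 100; real systems tune this to
   trade false accepts against false rejects */
#define MATCH_THRESHOLD 80

static int biometric_accept(int similarity_score)
{
    /* there is no exact match, only "close enough" - so a photo or
       recording that scores above the threshold is accepted exactly
       as the genuine user would be */
    return similarity_score >= MATCH_THRESHOLD;
}
```

lowering the threshold makes the system more convenient (fewer false rejects) and simultaneously easier to counterfeit (more false accepts) - the two cannot be tuned independently.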
<div>
this is why i consider biometrics weaker than passwords - because even when everyone does everything right it's still fairly easy to fool the system. as such, when someone (especially a bank) provides people with an authentication system that <b>replaces</b> passwords with biometrics, i think that should raise an alarm. even at that prior job of mine it was conceded that that mode of operation was more about convenience than it was about security. convenience is a double-edged sword - it can make things easier for legitimate users and attackers alike if you aren't careful. using biometrics in a 2 factor authentication system may provide more security than any single factor authentication system can, but biometrics on its own? there's a reason some people have started saying that your biometric is your username, not your password. don't replace passwords with it (at least not without having someone present to guard against funny business - which isn't an option for online banking).</div>
<div>
<br /></div>
kurt wismerhttp://www.blogger.com/profile/03810635947269551517noreply@blogger.com0tag:blogger.com,1999:blog-7347279.post-56626571321587778612014-06-30T15:00:00.000-04:002014-06-30T15:00:03.186-04:00i wouldn't bet on itlast year cryptography professor matthew green made a bet with mikko hypponen that by the 15th of this month there would be a snowden doc released that showed that US <a href="http://anti-virus-rants.blogspot.ca/2008/02/what-is-anti-malware-anti-virus.html">AV</a> companies collaborated with the NSA. he has since accepted that <a href="https://twitter.com/matthew_d_green/status/478184963221647362">he lost the bet to mikko</a>, but should he have?<br />
<div>
<br /></div>
<div>
<a href="https://twitter.com/imaguid/status/478195333378633728">i mentioned to matthew</a> the case of <a href="http://anti-virus-rants.blogspot.ca/2013/10/what-would-avs-complicity-in-government.html">mcafee being in bed with government malware writing firm hbgary</a> and <a href="https://twitter.com/mikko/status/478198705301225472">mikko chimed in</a> that hbgary wasn't an AV company and being partners with them wasn't enough to win the bet. aside from the fact that this is the first time after all these years that i've seen a member of the AV industry publicly comment on the relationship between mcafee and hbgary (i guess managing matthew's perception of AV is more important than managing mine), something about mikko's response rang hollow.</div>
<div>
<br /></div>
<div>
one way to interpret the situation with hbgary is to view them as government contractors whom mcafee endorsed, advertised, and helped get their code onto the systems of mcafee's customers (hbgary makes a technology that integrates with mcafee's endpoint security product). that certainly would have given hbgary access to systems and organizations they might have had difficulty getting otherwise. i have no idea if that access was ever used in an offensive way, though, so this line of thought is a little iffy.</div>
<div>
<br /></div>
<div>
another way to interpret the situation is to directly contradict mikko and admit that hbgary is a member of the AV industry. after all, they make and sell technology that integrates into an endpoint security product. they may only be on the fringe of the industry, but what more do you have to do to be a member of the industry than make and sell technology for fighting <a href="http://anti-virus-rants.blogspot.com/2006/01/what-is-malware.html">malware</a>? the fact that they also made malware for the government makes them essentially a US AV company that collaborated with the government in one of the worst ways possible.</div>
<div>
<br /></div>
<div>
i feel like this should be enough to have won matthew green the bet, at least in spirit, but the letter of the bet was apparently that a snowden doc would reveal it and the revelation about mcafee and hbgary actually predates snowden's leaks by a number of years. </div>
<div>
<br /></div>
<div>
so, the question becomes are there any companies that happen to be members of the AV industry and also happen to have been fingered by a snowden leak? it turns out there was (at least) one. they were probably forgotten because they're not just an AV vendor, but AV vendor does happen to be one of the many hats that microsoft wears (plenty of <a href="http://anti-virus-rants.blogspot.ca/2006/09/end-of-security-experts.html">security experts</a> were even advising people to drop their paid-for AV in favour of microsoft's offering at one point in time), and <a href="http://www.theguardian.com/world/2013/jul/11/microsoft-nsa-collaboration-user-data">microsoft was most certainly fingered by snowden docs</a>. the instances where microsoft helped the government may not have involved their anti-malware department, but the fact remains that a company that is a member of the AV industry was revealed by snowden documents to have collaborated with the government.</div>
<div>
<br /></div>
<div>
i imagine mikko could find a way to argue this doesn't count either - i admit it's not air-tight - but given how close it meets both the spirit and (as i understand it) the letter of the bet, i think mikko should match <a href="https://twitter.com/mikko/status/478198954845536256">the sum he had matthew pay to the EFF</a> and pay it to an organization of matthew's choosing. i won't bet on that happening, though.</div>
kurt wismerhttp://www.blogger.com/profile/03810635947269551517noreply@blogger.com0tag:blogger.com,1999:blog-7347279.post-8859348735499662162014-06-14T15:39:00.000-04:002014-06-14T15:39:24.603-04:00confessions of a twitter worm victimas some of you may know, this past wednesday <a href="https://nakedsecurity.sophos.com/2014/06/11/twitter-jumps-to-block-xss-worm-in-tweetdeck/">someone released a self-retweeting worm on twitter</a> that exploited an XSS vulnerability in the popular twitter client tweetdeck. i happen to be a tweetdeck user and i got hit by the <a href="http://anti-virus-rants.blogspot.com/2006/01/what-is-worm.html">worm</a>, not once but twice. since i believe in owning up to my mistakes in order to serve as an example to others, i figured it was important for me to write this post.<br />
<br />
this isn't the first time i've had to do this. four years ago it was discovered that there had been a <a href="http://anti-virus-rants.blogspot.com/2006/03/what-is-rat.html">RAT</a> bundled with the software for a USB battery charger sold by the energizer battery company (it had gone undetected by the community for years) and <a href="http://anti-virus-rants.blogspot.com/2010/03/energizer-bunny-looks-more-like-rat.html">i wrote about my experience</a> then as well.<br />
<br />
this was the first time i got hit with something that could spread to others, and spread it did. i know this because i got email notifications from twitter when other people's tweetdeck clients automatically retweeted the tweet that my client automatically retweeted. that's actually one of the things i think i did right - i have twitter set up to send me notifications for as much of that kind of activity as i possibly can. the result is that i get what is essentially an activity log sent to my email in near real-time, and that alerted me to the problem within minutes of it occurring.<br />
<br />
that quick notification allowed me to undo the retweet before it propagated from my account again. that limited the extent to which i contributed to the spread of the worm. acting quickly to neutralize the threat in my twitter stream is another thing i believe i did right.<br />
<br />
unfortunately i also did a number of things wrong. for example, i knew about the XSS vulnerability before i encountered the worm, i saw excellent preventative advice and even retweeted it, but i failed to follow it exactly. the advice was to sign out of tweetdeck and then de-authorize the app in twitter. what i did instead was close the tweetdeck tab in my browser and de-authorize the app. i took a shortcut because i didn't believe anyone i followed would actually tweet anything malicious. i didn't anticipate that they might do so involuntarily - the possibility of something like the samy worm from years past never occurred to me. and so when news spread that the vulnerability had been fixed and that users needed to log out and back in again to apply the fix, i re-opened the tab, re-authorized the app (because that was the first prompt i was presented with) and then went hunting for the logout button. that's when i got the email notification that another user had retweeted one of my retweets.<br />
<br />
however, i did not see the alert popup that was supposed to indicate the worm had executed. i didn't realize it at the time, but that was important: it meant there was more going on than i knew. it meant that the worm had not executed in the client i was sitting in front of. what i had forgotten was that i had another tweetdeck client open on a computer at work, and when i re-authorized the app the worm executed on the work computer rather than my home computer. it wasn't until i was on a bus to see an old friend that the significance of what had (and had not) happened clicked, and then it was another several hours before i could get access to that work computer (where the alert popup was still patiently waiting for me) in order to log out and back into tweetdeck again, which i did without de-authorizing the app beforehand, so the un-retweeted tweet got re-retweeted.<br />
<br />
in short it was a comedy of errors.<br />
<br />
what i've taken away from this is a number of things:<br />
<br />
<ol>
<li>i am once again humbled by the clear demonstration that i am not perfect. while i certainly knew conceptually that i wasn't perfect, i have had a surprisingly good track record with <a href="http://anti-virus-rants.blogspot.com/2006/01/what-is-malware.html">malware</a>. having my ass handed to me made the appreciation of my imperfection much more visceral.</li>
<li>i've gained a better appreciation for the value of de-authorizing apps in twitter. to a certain extent it can seem kind of abstract but what it's actually doing is isolating a vulnerable component from the rest of the network not unlike pulling the network cable out of an infected computer did back when worms that enumerated network shares or sent mass emails were prevalent.</li>
<li>i've identified my failure to log out of things (not just tweetdeck but all sorts of sites) as a bad habit. it's pure laziness and it's not even rational laziness because there's almost no effort involved in logging in when you use a password manager. part of the reason i didn't post this sooner is because i wanted to see if breaking this habit was a reasonable expectation or whether saying i was going to improve was just wishful thinking. so far this improvement seems like an entirely reasonable expectation - i've had no problems logging out of things when i don't need the session open any longer.</li>
</ol>
<div>
at the end of the day, improvement is what sets an incident apart from a failure. the only real failure is a failure to learn from your mistakes and do better the next time. i'm not perfect (no one is) but each time i screw up i make sure i get better.</div>
kurt wismerhttp://www.blogger.com/profile/03810635947269551517noreply@blogger.com0tag:blogger.com,1999:blog-7347279.post-85852847335462673812014-05-06T12:30:00.000-04:002014-05-06T12:30:00.981-04:00symantec anti-virus is deadthere's a lot of digital ink getting spilled right now over symantec's brian dye saying that anti-virus is dead (<a href="http://www.zdnet.com/antivirus-is-dead-long-live-the-antivirus-7000029078/">one</a>, <a href="http://www.theregister.co.uk/2014/05/06/symantec_antivirus_is_dead_and_not_a_moneymaker/?utm_source=twitterfeed&utm_medium=twitter">two</a>, <a href="http://www.csoonline.com/article/2151440/malware-cybercrime/symantec-develops-new-business-strategy-says-av-is-dead.html#.U2g-VM_VCZE.twitter">three</a>, <a href="http://online.wsj.com/news/article_email/SB10001424052702303417104579542140235850578-lMyQjAxMTA0MDAwNTEwNDUyWj">four</a>, <a href="http://www.spgedwards.com/2014/05/anti-virus-keeps-dying.html">five</a>, and more to come i'm sure), but i don't see many people asking the tough question, which is "why should we believe symantec now"?<br />
<br />
looking back over <a href="http://anti-virus-rants.blogspot.com/search/label/symantec">my past posts about symantec</a> paints a pretty unappealing picture, and reveals what might be considered a pattern. virtually right from the beginning they named their consumer anti-virus product after a man who famously said <a href="http://anti-virus-rants.blogspot.com/2006/01/what-is-virus.html">computer viruses</a> were an urban legend. then, when they tried to reinvent themselves with their "security 2.0" campaign, they claimed the virus problem was solved. now, when it appears they're trying to reinvent themselves again, they're saying that anti-virus is dead. it seems that whenever their business plan calls for serious marketing, they latch on to messages that grab attention but whose reality is questionable at best.<br />
<br />
when the biggest <a href="http://anti-virus-rants.blogspot.com/2008/02/what-is-anti-malware-anti-virus.html">anti-virus</a> vendor starts saying anti-virus is dead, there's no way that isn't going to grab a lot of attention. it seems designed to hurt the very industry they're on top of, while they are (apparently) in the process of trying to distance themselves from it. i've noted in the past that the biggest players in the industry are hurt the least by the consequences of their bad acts. as market leaders they control perception not just of themselves but of the entire industry, so that even if a smaller player wanted to try to present a more reasonable and accurate view of things in order to better compete on technical merit rather than deceptive marketing manipulation, there's very little impact they could have. saying that anti-virus is dead while simultaneously trying to position themselves as something else is essentially a scorched earth tactic. it will hurt the entire anti-virus industry while drawing attention to the alternate industry they're trying to create/break into.<br />
<br />
when the biggest anti-virus vendor starts saying anti-virus is dead, there's also no way that shouldn't raise the hairs on the back of your neck. out of the blue symantec starts mimicking exactly the same message that enterprise level infosec people have been saying for years? am i the only one who thinks that sounds like it belongs in the <i>too good to be true</i> category? this is the same kind of technique a <a href="http://anti-virus-rants.blogspot.com/2010/08/what-is-malware-writer.html">malware writer</a> might use to trick you into trying out his/her handiwork. before you get any ideas about symantec using 'trojan marketing', though, it's also the same kind of technique AV marketers used when they told people just using AV would solve their security problems. <i>too good to be true</i> has been part of the AV marketing arsenal from the very beginning, it's just that this new one about AV being dead seems to be designed for a much more select class of dupe, i mean user. this is the same shit, it's just a different pile.<br />
<br />
it'll probably work, though. telling people what they want to hear is unfortunately quite effective. even smart people will fall for it, because despite being smart, those people still want to hear something that is far too simplistic to have anything in common with reality. when you look closely enough, the truth always seems to wind up being messy and complicated, not something that could fit in a sound-bite.<br />
<br />
this is the reason why i try to convince people to stop listening to marketing (and really, everything that comes out of a vendor is marketing to some degree). this is almost certainly nothing more than another in a long line of efforts to deceive and manipulate the market. if you must listen to something, listen to their actions. they aren't retiring their AV product, so how dead can AV really be?<br />
<br />
all that being said, i actually do welcome their shift in focus from purely prevention to now include more detection and recovery. it's about time AV vendors started getting serious about the last 2 parts of the PDR triad (prevention, detection, recovery). it doesn't have to be purely service-based detection though. years ago we had generic detection tools (such as integrity checkers) that end users could use themselves. symantec's focus on providing detection services instead of detection tools belies a philosophy of not trusting the users' competence, which in turn is consistent with their long history of failing to educate, elevate, and empower their users. maybe that kind of paternalism is appropriate for home users, but enterprise security operations? i thought we could expect enterprise level IT and infosec professionals to develop skills and expertise in these kinds of areas, so why is symantec choosing a path that takes these things out of advanced customers' hands?<br />
<br />
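a generic detection tool of the kind mentioned above can be sketched in miniature: an integrity checker just records a checksum per file at baseline time and flags any later change, detecting tampering without needing a signature for the specific malware. this toy uses a trivial hash for illustration - real tools used cryptographic hashes:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* hypothetical miniature integrity checker: checksum a file, compare to
   a stored baseline, and flag any change */
static uint32_t checksum_file(const char *path)
{
    FILE *f = fopen(path, "rb");
    uint32_t sum = 0;
    int c;
    if (f == NULL)
        return 0;
    while ((c = fgetc(f)) != EOF)
        sum = sum * 31 + (uint32_t)c;  /* toy hash - not collision resistant */
    fclose(f);
    return sum;
}

static int file_modified(const char *path, uint32_t baseline)
{
    return checksum_file(path) != baseline;
}
```

the point is that this kind of detection is generic and usable by the end user themselves - exactly the kind of tool a service-only philosophy takes out of their hands.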
as much as it seems like symantec is doing an about-face, they really haven't changed their tune. telling enterprises what they want to hear is just a ploy so that enterprises will get in bed with them (<a href="http://www.youtube.com/watch?v=96iyQNPzFN0">that's just what we call pillow talk, baby</a>). they still aren't giving their users any new power to affect their own security outcomes. so far they're just offering <a href="http://coolquotescollection.com/5036/words-nothing-but-sweet-sweet-words-that-turn-into-bitter-orange-wax-in-my-ears">words. nothing but sweet, sweet words that turn into bitter orange wax in your ears</a>.kurt wismerhttp://www.blogger.com/profile/03810635947269551517noreply@blogger.com0tag:blogger.com,1999:blog-7347279.post-49165665759585664312014-04-04T10:00:00.000-04:002014-04-04T10:00:00.550-04:00goto fail refactoredi wrote the lion's share of this a while ago but wasn't sure i wanted to publish yet another post about GOTO here since this isn't a programming blog. my mind was made up yesterday when i read <a href="http://blog.smartbear.com/development/goto-still-has-a-place-in-modern-programming-no-really/">this post by Steven J Vaughan-Nichols</a> where he quotes a number of technology personalities essentially giving bullshit excuses for why GOTO is OK to use. it's no wonder 2 separate crypto libraries (both making prodigious use of GOTO) suffered embarrassing and dangerous defects recently when programming thought leaders perpetuate myths about structured programming.<br />
<br />
i'm providing this as an object lesson in how to avoid the use of GOTO, especially in security-related code where a higher standard of quality is sorely needed. i'll be using Apple's Goto Fail bug as the example. here is the complete function where the fail was found, with the bug intact:<br />
<br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">static OSStatus SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams, uint8_t *signature, UInt16 signatureLen)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">{</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> OSStatus err;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> SSLBuffer hashOut, hashCtx, clientRandom, serverRandom;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> uint8_t hashes[SSL_SHA1_DIGEST_LEN + SSL_MD5_DIGEST_LEN];</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> SSLBuffer signedHashes;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> uint8_t *dataToSign;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> size_t dataToSignLen;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span>
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> signedHashes.data = 0;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> hashCtx.data = 0;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span>
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> clientRandom.data = ctx->clientRandom;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> clientRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> serverRandom.data = ctx->serverRandom;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> serverRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span>
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span>
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if(isRsa) {</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> /* skip this if signing with DSA */</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> dataToSign = hashes;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> dataToSignLen = SSL_SHA1_DIGEST_LEN + SSL_MD5_DIGEST_LEN;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> hashOut.data = hashes;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> hashOut.length = SSL_MD5_DIGEST_LEN;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = ReadyHash(&SSLHashMD5, &hashCtx)) != 0)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> goto fail;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = SSLHashMD5.update(&hashCtx, &clientRandom)) != 0)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> goto fail;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = SSLHashMD5.update(&hashCtx, &serverRandom)) != 0)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> goto fail;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = SSLHashMD5.update(&hashCtx, &signedParams)) != 0)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> goto fail;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = SSLHashMD5.final(&hashCtx, &hashOut)) != 0)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> goto fail;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> else {</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> /* DSA, ECDSA - just use the SHA1 hash */</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> dataToSign = &hashes[SSL_MD5_DIGEST_LEN];</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> dataToSignLen = SSL_SHA1_DIGEST_LEN;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span>
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> hashOut.data = hashes + SSL_MD5_DIGEST_LEN;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> hashOut.length = SSL_SHA1_DIGEST_LEN;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = SSLFreeBuffer(&hashCtx)) != 0)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> goto fail;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span>
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) != 0)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> goto fail;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = SSLHashSHA1.update(&hashCtx, &clientRandom)) != 0)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> goto fail;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> goto fail;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> goto fail;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> goto fail;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> goto fail;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span>
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> err = sslRawVerify(ctx,</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> ctx->peerPubKey,</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> dataToSign, /* plaintext */</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> dataToSignLen, /* plaintext length */</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> signature,</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> signatureLen);</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if(err) {</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> sslErrorLog("SSLDecodeSignedServerKeyExchange: sslRawVerify "</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> "returned %d\n", (int)err);</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> goto fail;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span>
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">fail:</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> SSLFreeBuffer(&signedHashes);</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> SSLFreeBuffer(&hashCtx);</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> return err;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span>
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">} </span><br />
<br />
one of the things you might notice is that all roads lead to "fail:", meaning "fail:" isn't really just for failures, it's for clean-up.<br />
<br />
another thing you might notice is that the final "goto fail;" doesn't actually bypass any code - it's completely redundant and if it weren't there the next thing to execute would still be the code after the "fail:" label.<br />
<br />
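the bug itself (the duplicated "goto fail;" in the SHA1 section above) can be reduced to a minimal demonstration: because there are no braces, the second goto executes unconditionally, the real verification step is skipped, and err still holds the success code from the last check. this toy function is mine, not apple's code:

```c
#include <assert.h>

/* minimal model of the goto-fail pattern: without braces the duplicated
   goto runs unconditionally, skipping the real verification step */
static int verify(int sig_ok)
{
    int err = 0;
    if (err != 0)
        goto fail;
        goto fail;              /* always taken - the indentation lies */
    err = sig_ok ? 0 : -1;      /* the actual check, never reached */
fail:
    return err;
}
```

both verify(1) and verify(0) return 0 here - a bad signature "verifies" - which is exactly why the structure of the code, not just its content, matters so much in security-critical paths.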
the first thing we're going to try is the most obvious approach to refactoring this function, to get rid of GOTO by making proper use of IF.<br />
<br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">static OSStatus SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams, uint8_t *signature, UInt16 signatureLen)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">{</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> OSStatus err;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> SSLBuffer hashOut, hashCtx, clientRandom, serverRandom;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> uint8_t hashes[SSL_SHA1_DIGEST_LEN + SSL_MD5_DIGEST_LEN];</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> SSLBuffer signedHashes;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> uint8_t *dataToSign;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> size_t dataToSignLen;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span>
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> signedHashes.data = 0;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> hashCtx.data = 0;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span>
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> clientRandom.data = ctx->clientRandom;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> clientRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> serverRandom.data = ctx->serverRandom;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> serverRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span>
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span>
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if(isRsa) {</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> /* skip this if signing with DSA */</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> dataToSign = hashes;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> dataToSignLen = SSL_SHA1_DIGEST_LEN + SSL_MD5_DIGEST_LEN;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> hashOut.data = hashes;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> hashOut.length = SSL_MD5_DIGEST_LEN;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = ReadyHash(&SSLHashMD5, &hashCtx)) == 0) { </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = SSLHashMD5.update(&hashCtx, &clientRandom)) == 0) { </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = SSLHashMD5.update(&hashCtx, &serverRandom)) == 0) { </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = SSLHashMD5.update(&hashCtx, &signedParams)) == 0) { </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = SSLHashMD5.final(&hashCtx, &hashOut)) == 0) { </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> hashOut.data = hashes + SSL_MD5_DIGEST_LEN;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> hashOut.length = SSL_SHA1_DIGEST_LEN;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = SSLFreeBuffer(&hashCtx)) == 0) { </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) == 0) { </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = SSLHashSHA1.update(&hashCtx, &clientRandom)) == 0) { </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) == 0) { </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) == 0) { </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) == 0) { </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> err = sslRawVerify(ctx,</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> ctx->peerPubKey,</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> dataToSign, /* plaintext */</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> dataToSignLen, /* plaintext length */</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> signature,</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> signatureLen);</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if(err) {</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> sslErrorLog("SSLDecodeSignedServerKeyExchange: sslRawVerify "</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> "returned %d\n", (int)err);</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> else {</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> /* DSA, ECDSA - just use the SHA1 hash */</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> dataToSign = &hashes[SSL_MD5_DIGEST_LEN];</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> dataToSignLen = SSL_SHA1_DIGEST_LEN;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> hashOut.data = hashes + SSL_MD5_DIGEST_LEN;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> hashOut.length = SSL_SHA1_DIGEST_LEN;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = SSLFreeBuffer(&hashCtx)) == 0) { </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) == 0) { </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = SSLHashSHA1.update(&hashCtx, &clientRandom)) == 0) { </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) == 0) { </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) == 0) { </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) == 0) { </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> err = sslRawVerify(ctx,</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> ctx->peerPubKey,</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> dataToSign, /* plaintext */</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> dataToSignLen, /* plaintext length */</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> signature,</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> signatureLen);</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if(err) {</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> sslErrorLog("SSLDecodeSignedServerKeyExchange: sslRawVerify "</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> "returned %d\n", (int)err);</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span>
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> SSLFreeBuffer(&signedHashes);</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> SSLFreeBuffer(&hashCtx);</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> return err;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span>
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">}</span><br />
<br />
as you can see, this version of the function is quite a bit longer, as well as being deeply nested. this is the kind of code that actually makes programmers think the use of GOTO isn't as bad as their teachers told them it was, because that deep nesting makes the function seem more complex and more difficult to read. on top of that, there is a considerable amount of duplicated code. neither of these things is appealing to programmers, because they make reading and maintaining the code more work.<br />
<br />
however, this is the most simple-minded and unimaginative way to refactor the original function. if we were to also tackle that complex pattern used in virtually all of the IF statements at the same time as getting rid of the GOTOs, we would instead get something like this:<br />
<br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">static OSStatus SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams, uint8_t *signature, UInt16 signatureLen)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">{</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> OSStatus err;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> SSLBuffer hashOut, hashCtx, clientRandom, serverRandom;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> uint8_t hashes[SSL_SHA1_DIGEST_LEN + SSL_MD5_DIGEST_LEN];</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> SSLBuffer signedHashes;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> uint8_t *dataToSign;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> size_t dataToSignLen;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span>
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> signedHashes.data = 0;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> hashCtx.data = 0;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> err = 0;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> clientRandom.data = ctx->clientRandom;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> clientRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> serverRandom.data = ctx->serverRandom;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> serverRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span>
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span>
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if(isRsa) {</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> /* skip this if signing with DSA */</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> dataToSign = hashes;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> dataToSignLen = SSL_SHA1_DIGEST_LEN + SSL_MD5_DIGEST_LEN;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> hashOut.data = hashes;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> hashOut.length = SSL_MD5_DIGEST_LEN;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> err = ReadyHash(&SSLHashMD5, &hashCtx);</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if (err == 0)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> err = SSLHashMD5.update(&hashCtx, &clientRandom);</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if (err == 0)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> err = SSLHashMD5.update(&hashCtx, &serverRandom);</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if (err == 0)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> err = SSLHashMD5.update(&hashCtx, &signedParams);</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if (err == 0)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> err = SSLHashMD5.final(&hashCtx, &hashOut);</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> else {</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> /* DSA, ECDSA - just use the SHA1 hash */</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> dataToSign = &hashes[SSL_MD5_DIGEST_LEN];</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> dataToSignLen = SSL_SHA1_DIGEST_LEN;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span>
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if(err == 0) {</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> hashOut.data = hashes + SSL_MD5_DIGEST_LEN;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> hashOut.length = SSL_SHA1_DIGEST_LEN;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> err = SSLFreeBuffer(&hashCtx);</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if (err == 0)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> err = ReadyHash(&SSLHashSHA1, &hashCtx);</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if (err == 0)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> err = SSLHashSHA1.update(&hashCtx, &clientRandom);</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if (err == 0)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> err = SSLHashSHA1.update(&hashCtx, &serverRandom);</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if (err == 0)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> err = SSLHashSHA1.update(&hashCtx, &signedParams);</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if (err == 0)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> err = SSLHashSHA1.final(&hashCtx, &hashOut);</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if (err == 0) {</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> err = sslRawVerify(ctx,</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> ctx->peerPubKey,</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> dataToSign, /* plaintext */</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> dataToSignLen, /* plaintext length */</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> signature,</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> signatureLen);</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> if(err) {</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> sslErrorLog("SSLDecodeSignedServerKeyExchange: sslRawVerify "</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> "returned %d\n", (int)err);</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> SSLFreeBuffer(&signedHashes);</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> SSLFreeBuffer(&hashCtx);</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"> return err;</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;"><br /></span>
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">} </span><br />
<br />
not only does this follow almost exactly the same format as the original function (thereby retaining its readability), it makes the condition checking simpler and easier to read, and it has virtually the same number of lines of code as the original.<br />
<br />
combining assignment and equality tests into a single-line IF statement was clearly intended to reduce the overall size of the source code, but it failed, and in the process it made the code more complex and difficult to read. the combined assignment/condition checking and the GOTO statements were complementary: each supported the other, and together they contributed to the complexity of the original function.<br />
<br />
this third version of the function, by contrast, has neither complex expressions nor the potential for complex control flow. the only real complaint one might make is that after an error occurs in one of the many steps in the function, the computer still needs to perform the "if (err == 0)" check numerous times. however, that is only true if the compiler can't optimize the code, and checking the same variable against the same constant value over and over again seems like exactly the kind of pattern a compiler's optimization routines could detect and do something about.<br />
<br />
complexity is the worst enemy of security, sloppiness begets complexity, and GOTO is a crutch for sloppy, undisciplined programmers - it is part of that sloppiness and contributes to that complexity, even when it's supposedly used the <i>right way</i>. what i did above isn't rocket science or magic. the same basic technique can be used in any case where GOTO is used to jump forward in the code (if you're using it to jump backward then god help you). the excuses people trot out for the continued use of GOTO not only make them sound like dumb-asses, they also lead lesser programmers to try to follow their lead and do much worse at it. it is never used as sparingly as the gurus think it should be, and even <a href="http://www.cprogramming.com/tutorial/goto.html">their own examples occasionally contain redundant invocations of it</a>, thoughtlessly applied.<br />
<br />
if you actually work at adhering to structured programming rather than abandoning it the moment the going gets tough, you will eventually learn ways to make it just as easy as unstructured programming; you'll be a better programmer for having done so, and your programs will be less complex, easier to validate, and ultimately more secure.<br />
<br />
<b>the case against GOTO in security</b> (2014-03-11)<br />
<br />
i could have made this longer but i have a feeling it might be more powerful in this form.<br />
<br />
<b>there is no programming construct that offers more freedom and flexibility than GOTO. consequently, no programming construct carries with it a greater potential for <i>complexity</i>. </b><br />
<b><br /></b>
<b>since "<a href="https://www.schneier.com/crypto-gram-0003.html#8">complexity is the worst enemy of security</a>", </b><b>GOTO should be considered harmful to security.</b><br />
<br />
i'm surprised more people haven't made this connection, or that it hasn't seen more mainstream attention. whatever else you may think of GOTO in regular software, in security-related software this has to be an added consideration. the traditional taboos against GOTO that Larry Seltzer identified <a href="http://www.drdobbs.com/architecture-and-design/is-goto-still-considered-harmful/240166595">may not be entirely rational</a>, but i tend to think the security taboo against complexity is.<br />
<br />
<br /><b>goto fail, do not pass go, do not collect your next paycheck</b> (2014-02-25)<br />
<br />
by now you've probably heard about the rather widely reported SSL bug that Apple quietly dropped on friday afternoon, <a href="https://www.imperialviolet.org/2014/02/22/applebug.html">teasing security researchers into finding out what was up</a>. if not, the gist of it is that the C code Apple used for verifying signatures in SSL had what appears to have been a copy&paste error that broke the security and allowed people to read your supposedly secure traffic. quite literally, there were 2 lines that said "goto fail;" when there should have been only one. now i'm not about to make a big deal about copy&paste errors, because those can legitimately happen to anyone, but i am going to make a big deal about the content of that copy&paste error.<br />
<br />
the overall lack of acknowledgement (and in some cases denial) that the use of <b><i>goto</i></b> represents a deeper problem in Apple's application security is itself suggestive of a failure to recognize a fundamental principle: in software, quality is the foundation upon which security must stand.<br />
<br />
<b><i>goto</i></b> is representative of the kind of spaghetti code we had before the introduction of <a href="http://en.wikipedia.org/wiki/Structured_programming">structured programming</a> approximately 50 years ago. no, that's not a typo. <b><i>goto</i></b> has been falling out of favour for about half a century and when i saw how much it was used in Apple's code it raised a red flag for me. every programmer i broached the subject with similarly found it concerning - including my boss who admitted he hasn't coded in C in over 30 years. he wondered, as do i, if the programmer responsible for the code in question still has a job at Apple.<br />
<br />
you may think i'm rehashing a decades-old debate, and perhaps i am - i wouldn't know, i never read any of that stuff - <a href="http://www.u.arizona.edu/~rubinson/copyright_violations/Go_To_Considered_Harmful.html">Edsger Dijkstra's letter "Go To Statement Considered Harmful"</a> was published about 7 years before i was born. what i'm not doing, however, is mindlessly repeating dogma. we're interested today in application security and, as we have already covered, that requires software quality. structured programming produces software that is higher quality, easier to read, easier to model, easier to audit, and easier to prove than unstructured programming.<br />
<br />
why is this so? to answer that we need to think about what "structured programming" means. it is the nature of structure (all structure) to serve as a kind of constraint. your bones, for example, provide structure for your body and in so doing limit the way your body can move and bend. the support pillars for a bridge provide structure for that bridge and limit the extent to which the bridge can move and bend (yes, they bend and flex, but if the structure is doing its job they only flex a little). likewise, code that follows the structured programming paradigm is constrained such that program control flows in a more limited number of ways. reducing the number of possibilities for program control flow makes it easier to predict (almost by definition) how a block of code's control will flow with just a quick glance. fewer possibilities mean it's easier to know what to expect. i'm sure you've seen the same effect with the written word. just like it's easier to read and understand sentences made with familiar words and phrases and following a few familiar construction patterns, the same is true for reading and understanding code as well. it's just another language, after all.<br />
<br />
that reduction of possibilities also reduces the complexity of the code, which makes building a mental model of an arbitrary block of code easier. the constructs that make structured programming what it is lend themselves more naturally to abstraction since random lines of code within them are unlikely to cause program control to jump to random other places in the code. making it easier to build a mental model of the code makes it easier to formally prove the code's correctness because it's easier to describe the algorithm you're trying to prove. less formally, greater ease in building accurate mental models of the code means that it's easier to anticipate outcomes, especially unwanted ones that you want to eliminate, before they happen because it becomes easier to run through those possibilities in your head.<br />
<br />
finally, both the greater ease of reading/understanding and the greater ease of modeling benefit efforts of others to review or audit the code. they're really only doing the same thing the programmer him/herself would have done by reading and understanding the code, creating a mental model of it, and trying to anticipate unwanted outcomes.<br />
<br />
i work as a programmer professionally in a small company with not a lot of resources. we fly by the seat of our pants in some ways, but if someone asked me to review code that relied as much on <b><i>goto</i></b> as <a href="http://opensource.apple.com/source/Security/Security-55471/libsecurity_ssl/lib/sslKeyExchange.c">this source file from Apple</a>, i wouldn't accept it. i'm surprised that code like that is able to survive for so long in a company with as many resources as Apple has. it makes me wonder about the programming culture within the company and it reminds me of <a href="http://youtu.be/b0w36GAyZIA?t=44m46s">a talk Jacob Appelbaum gave not too long ago</a> where he accused them of writing shitty software. sure code reviews and more rigorous testing might have found the copy&paste error that sparked this off, but those processes don't add quality, they subtract problems. it's still a garbage-in/garbage-out sort of scenario so there's only so much they can do to affect the quality of Apple's software. quality has to go into those filters before you can get quality out.<br />
<br />
i've often heard it said that regular programmers typically don't understand the nuances involved in writing secure code, especially when it comes to crypto, and having seen programmers more senior than myself flub crypto code i can certainly agree with that sentiment. that being said, i think it's probably also true that regular security people typically don't understand the nuances involved in writing quality code. since quality is a prerequisite for security, it's just as important for a programmer responsible for security-related code to have mastered the coding practices and techniques that lead to quality software as it is for them to understand secure software development.<br />
<br />
i'll concede that it may well be possible for a master programmer to produce high quality, highly readable code that relies as heavily on <b><i>goto</i></b> as Apple's programmers appear to; but, as <a href="http://www.canonical.org/~kragen/tao-of-programming.html#book3">"The Tao Of Programming"</a> satirically points out, you are not a master programmer, almost none of you are, so stop pretending and learn the lessons they've been trying to teach for the past 50 years.<br />
<br />
(and now i'll go read Dijkstra's letter. maybe this is a rehash, but that wouldn't make it wrong even if it were)<br />
<br />
(updated to fix the spelling of Jacob Appelbaum's name. thanks to Martijn Grooten)<br />
<br />
<b>AV complicity explained</b> (2013-11-02)<br />
<br />
earlier this week i wrote <a href="http://anti-virus-rants.blogspot.com/2013/10/what-would-avs-complicity-in-government.html">a post about the idea of the AV industry being somehow complicit in the government spying</a> that has been all over the news for months. some people seemed to really 'get it' while others, for various reasons, did not; so i thought i'd try to be a little more clear about my thoughts on the subject.<br />
<br />
<a href="https://www.bof.nl/2013/10/25/experts-call-upon-the-vendors-of-antivirus-software-for-transparency/">the question</a> that the EFF et al have put to the <a href="http://anti-virus-rants.blogspot.com/2008/02/what-is-anti-malware-anti-virus.html">AV</a> industry (one that was, incidentally, already <a href="http://news.cnet.com/Security-firms-on-police-spyware%2C-in-their-own-words/2100-7348_3-6196990.html">asked and answered</a> some years ago) is a little banal, a little pedestrian, a little sterile. real life is messy and complicated, and things don't always fit into neat little boxes. i wanted to try to get people to think outside the box with respect to complicity - what it means, what it would look like, and so on - but i think some people have a hard time letting go of the straightforward question that has been put forward, so let's start by talking about that.<br />
<br />
has the NSA (or other organization) asked members of the AV industry to look the other way and has the AV industry (or parts thereof) agreed to that request? almost certainly the NSA has not made such a request, for at least a few reasons:<br />
<br />
<ol>
<li>telling people about your super-secret <a href="http://anti-virus-rants.blogspot.com/2006/01/what-is-malware.html">malware</a> is just plain bad OpSec. if you want to keep something secret, the last thing you want to do is tell dozens of armies of reverse engineers to look the other way.</li>
<li>too many of the companies that make up the AV industry are based out of foreign countries and so are in no way answerable to the NSA or any other single intelligence organization.</li>
<li>there's quite literally no need. there are already well established techniques for making malware that AV software doesn't currently detect. commercial malware writers have been honing this craft for years and it seems ridiculous to suggest that a well-funded intelligence agency would be any less capable.</li>
</ol>
<br />
<br />
now while it seems comical that such a request would be made, to suggest that the AV industry would agree to such a request would probably best be described as <i>insulting</i>. whatever you might think of the AV industry, there are quite a few highly principled individuals working in it who would flat out refuse, in all likelihood regardless of what their employer decided (in the hypothetical case that the pointy-haired bosses in AV aren't quite as principled).<br />
<br />
now please feel free to enjoy a sigh of relief over the fact that i don't think the AV industry has secretly agreed to get into bed with the NSA and help them spy on people.<br />
<br />
done? good, because now we're going to take a deeper look at the nature of complicity and the rest of this post is probably not going to be nearly as pleasant.<br />
<br />
here's one of the very first things <a href="http://en.wikipedia.org/wiki/Complicity">wikipedia has to say about complicity</a>:<br />
<blockquote class="tr_bq">
An individual is complicit in a crime if he/she is aware of its occurrence and has the ability to report the crime, but fails to do so. As such, the individual effectively allows criminals to carry out a crime despite possibly being able to stop them, either directly or by contacting the authorities, thus making the individual a de facto accessory to the crime rather than an innocent bystander.</blockquote>
<br />
in the case of government spying we may or may not be talking about a crime. the government says they broke no law and observers speculate that that may be because they've subverted the law (much like they subverted encryption algorithms). so let's consider a version of this that relates to ethical and/or moral wrong-doing instead of legal wrong-doing:<br />
<blockquote class="tr_bq">
an individual is complicit in wrong-doing if he/she is aware of its occurrence and has the ability to alert relevant parties but fails to do so. as such, the individual effectively allows immoral or unethical people to carry out their wrong-doing despite possibly being able to stop them either directly or by alerting others who can, thus making the individual a de facto accessory to the wrong-doing rather than an innocent bystander.</blockquote>
<br />
in this context, could the AV industry be complicit with government spying? perhaps not directly, not in the sense that they saw what the government was doing and failed to alert people to that wrong-doing. however, what about a different wrong-doing by a different entity but still related to the government spying?<br />
<br />
<a href="http://arstechnica.com/tech-policy/2011/02/black-ops-how-hbgary-wrote-backdoors-and-rootkits-for-the-government/">hbgary wrote spyware for the government</a>. this became public knowledge in the beginning of 2011. by providing the government with tools to perpetrate spying they become accessories to that spying.<br />
<br />
hbgary was and is a partner of mcafee. now what is the nature of this partnership? hbgary is an integration partner. they make technology that integrates into mcafee's endpoint security product to extend its functionality. mcafee does marketing/advertising for this technology and by extension for hbgary, giving them exposure, lending them credibility, and generally helping them make money. that money is almost certainly re-invested into research and development of hbgary's products, which include governmental malware that's used for spying on people/organizations. there are mcafee customers out there right now whose security suite includes components that were written by known <a href="http://anti-virus-rants.blogspot.com/2010/08/what-is-malware-writer.html">malware writers</a> and endorsed by mcafee (although they make sure to weasel out of responsibility for anything going wrong with those components with some fine print). mcafee didn't break off the partnership when hbgary's status as an accessory to government spying became known, and since they didn't break off the partnership you can probably make a safe bet that they didn't warn those customers that part of their security suite was made by people aiding the government in spying either. even if we ignore the fact that mcafee aids a business that writes malware for the government, mcafee's failure to raise the alarm about the possible compromising nature of any content provided by hbgary makes them accessories to hbgary's wrong-doing. by breaking ties with hbgary and warning the public about what hbgary was up to they could have had a serious impact on hbgary's cash flow and hurt their ability to win contracts and/or execute on their more offensive espionage-assisting projects. they didn't do any of that and that makes them complicit in the sense discussed a few paragraphs earlier.<br />
<br />
the rest of the AV industry may not be directly aiding hbgary's business but, like mcafee, they have failed to raise any alarm about hbgary. they could have done much the same as mcafee by warning the public, with the added bonus that they would have hurt one of the biggest competitors in their own industry while they were at it and that would have benefited all of them (except mcafee, of course). again, failing to act to help prevent wrong-doing makes them a de facto accessory to that wrong-doing. the AV industry as a whole is complicit in the sense discussed earlier.<br />
<br />
of course, the AV industry isn't alone in being accessories to an accessory to government spying, and that brings up a consideration that should not be overlooked because there is a larger context here. historically, the culture of the AV industry has been one that values being very selective in things like who to trust, who to accept into certain groups, etc. add to that a very narrowly defined mission statement (to fight <a href="http://anti-virus-rants.blogspot.com/2006/01/what-is-virus.html">viruses</a> and other malware) and it's little wonder that the ethical boundaries that developed in the early days were so dead-set against hiring, paying, or doing anything else that might assist malware writers or possibly promote malware writing. heck, i knew one member who wouldn't even engage <a href="http://anti-virus-rants.blogspot.com/2006/07/what-is-virus-writer.html">virus writers</a> in conversation, and another who said he was wary of hiring anyone who already knew about viruses just in case they came by that knowledge through unsavoury means. aiding malware writers, turning a blind eye to their activities, etc. are things that normally would have violated AV's early ethical boundaries.<br />
<br />
by contrast, the broader security industry is highly inclusive and has long viewed the AV industry's selectivity as unfair elitism. that inclusivity means that the security industry isn't actually just one homogeneous group. there are many groups, from cryptographers to security operations personnel to vulnerability researchers to penetration testers, etc. each one has its own distinct mission statement and its own code of ethics. what do you think you get from a highly inclusive melting pot of security disciplines? well, in order for them to tolerate each other, one necessary outcome is a very relaxed ethical 'soup'. many quarters openly embrace the more offensive security-related disciplines such as malware creation. in order for AV to integrate into this broader security community (and they have been, gradually, over time), AV has to loosen its own ethical restrictions and be more accepting.<br />
<br />
so while the AV industry failed to raise the alarm about hbgary, the broader security industry failed as well. the difference is that ethics in the security industry don't necessarily require raising an alarm over what was going on. hbgary is a respected company in security industry circles and its founder greg hoglund is a respected researcher whose proclivity for creating malware has been known for a long, long time. as far as the security industry is concerned, hbgary's activities don't necessarily qualify as ethical wrong-doing. there will probably be those who think they do, but in general the ethical soup will be permissive enough to allow it, and without being able to call something "wrong-doing" there can be no complicity. this is where AV is going as it continues to integrate into the broader security community. in fact it may be there already. maybe that's the reason they didn't raise the alarm - because they've become ethically compromised, not as a result of a request from some intelligence organization, but as a result of trying to fit in and be something other than what they used to be.<br />
<br />
in the final analysis, if you were hoping for a yes or no answer to the question of whether AV is in any way complicit in the spying that the government has been doing (specifically, the spying done using malware), i'm afraid you're going to be disappointed. it depends. based on AV's earlier ethics the answer would probably be yes. based on the security community's ethics the answer may well be no. where is the AV industry now? somewhere between what they were and what the broader security community is. ethical relativity is unfortunately a significant complicating factor. then again, i'm an uncompromising bastard, so i say "yes" (after all, i did grow up with those old-school ethics).<br />
<br />
what would AV's complicity in government spying look like? (2013-10-29)<br />
as you may well have heard, the EFF and a bunch of security experts have written <a href="https://www.bof.nl/2013/10/25/experts-call-upon-the-vendors-of-antivirus-software-for-transparency/">an open letter to the AV industry</a> asking about any possible involvement by them in the mass spying scandal that has been in the headlines for much of this year. at first i thought this was old news for <a href="http://anti-virus-rants.blogspot.com/2008/02/what-is-anti-malware-anti-virus.html">AV</a>, since the issue of government <a href="http://anti-virus-rants.blogspot.com/2006/02/what-is-trojan.html">trojans</a> has actually been around a lot longer than the current spying revelations. i thought these people had simply failed to do their homework but, as time passed, the wheels began to turn and i started thinking differently. now i think the question we should all be asking ourselves is, what would AV's complicity look like?<br />
<br />
some background, first. the subject of government trojans has been around for over a decade. <a href="http://en.wikipedia.org/wiki/Magic_Lantern_%28software%29">magic lantern</a>, for example, dates back to 2001 (or at least public awareness of it does). so it should come as little surprise that the question of whether the AV industry looks the other way has come up before. in 2007 <a href="http://news.cnet.com/Security-firms-on-police-spyware%2C-in-their-own-words/2100-7348_3-6196990.html">cnet ran a story</a> where 13 different vendors were asked about this very thing. they all more or less denied being a party to such shenanigans, but i suggest you read the article and pay careful attention to the answers.<br />
<br />
now earlier this year one of the first controversial spying revelations concerned a program called <a href="http://en.wikipedia.org/wiki/PRISM_%28surveillance_program%29">PRISM</a> which a whole bunch of well known, big name internet companies (including google, microsoft, yahoo, facebook, etc) were apparently involved with. the companies all denied it of course, and it turns out they may be legally required to do so.<br />
<br />
that adds an interesting wrinkle to the question now being put towards the AV industry: would they be allowed to admit to any complicity that might be going on? they say actions speak louder than words, so maybe we should look for something other than the carefully crafted assurances of multi-million dollar corporations. maybe what we should be looking for is the same thing that alerted us to the mass spying in the first place - a leak. maybe then we can get a glimpse of their actions.<br />
<br />
back in early 2011 a rather spectacular breach occurred. security firm hbgary was breached by some members of anonymous, and one of the things that leaked out was the fact that <a href="http://arstechnica.com/tech-policy/2011/02/black-ops-how-hbgary-wrote-backdoors-and-rootkits-for-the-government/">hbgary wrote malware for the government</a>. in fact, it doesn't take much imagination to suppose that this would be the very type of <a href="http://anti-virus-rants.blogspot.com/2006/01/what-is-malware.html">malware</a> the EFF et al are concerned the AV industry may have been asked to ignore.<br />
<br />
it's unknown whether any AV vendor actually did field such a request. i have my doubts since traditional commercial <a href="http://anti-virus-rants.blogspot.com/2010/08/what-is-malware-writer.html">malware writers</a> seem to be perfectly capable of creating undetected malware without making such requests. that being said, one fact that became rather suspicious in light of the revelations about hbgary was <a href="http://www.mcafee.com/apps/partners/partnerlisting.aspx?mFilter=SelectMarketSegment&sFilter=SelectServiceLevel&cFilter=true&hdnLastAZSelection=All&region=us">the fact that they were partners with mcafee</a>, one of the biggest AV vendors around and certainly one of the best known names in AV. i wrote about <a href="http://anti-virus-rants.blogspot.com/2011/02/ethical-conflict-in-anti-malware-domain.html">this apparent ethical conflict</a> back in february of 2011, and then again in march of 2011 to note <a href="http://anti-virus-rants.blogspot.com/2011/03/covenant-is-broken.html">the tremendous non-reaction from the industry</a>. i even went so far as to create <a href="http://anti-malware-community-watch.blogspot.com/">a blog specifically for keeping an eye on the industry</a> (though as an outsider myself there was little i could do on my own).<br />
<br />
the EFF and others want to know if the AV industry has been complicit in the government's spying. well, one AV vendor was <a href="http://news.cnet.com/Security-firms-on-police-spyware%2C-in-their-own-words---page-4/2100-7348_3-6196990-4.html">notably evasive</a> when asked by cnet in 2007 about their handling of governmental trojans/police <a href="http://anti-virus-rants.blogspot.com/2006/03/what-is-spyware.html">spyware</a>. that same AV vendor was and still is partnered with a company that wrote government malware (in all likelihood for the very purpose in question). furthermore, in the intervening years, nothing has come of it. no other vendor has said anything or done anything to call attention to or raise awareness of this partnership. even after the mass surveillance controversy started earlier this year, not a one bothered to raise the alarm and suggest that mcafee might at least in principle be compromised by that partnership, even though they certainly could have benefited from disrupting mcafee's market share. no one thought they could profit from it? no one thought it was their duty to warn people of a potential problem? to raise concerns that the protection mcafee's customers receive may suffer in some way because of their close ties with government malware writers? to give voice to the doubts this partnership creates, even after publicly wringing their hands over how wrong the government's own actions were?<br />
<br />
AV vendors may or may not have been asked to turn a blind eye to government malware - we may never know, and it's impossible to prove a negative. but they've done a heck of a job turning a blind eye to the people who make government malware and to those in their own ranks who got in bed with government malware writers. i asked at the beginning what AV complicity would look like and i think when it comes to those whose job it is to raise an alarm, complicity would probably have to look like silence (and something about silence makes me sick).<br />
<br />
(2013-10-29 13:21 - updated to change the open letter link to point to the blog post that includes the list of intended recipients as well as a link to the letter itself)<br />
<br />
my experiences at #sectorca in 2013 (2013-10-16)<br />
well, another year, another sector conference. i <b>almost</b> got another of my colleagues at work to go too (an actual security operations sort of guy at that) but in the end it didn't happen. i'm going to have to see if there's anything more i can do to make it happen next year. in fact, i'm pretty sure some of the folks at work would have preferred if i hadn't gone either (just so much to do) but it was already paid for, so...<br />
<br />
the first thing that struck me this year (aside from the great big gaping hole where the street around union station used to be) was that the staff at the metro toronto convention center could accurately guess where i was trying to go just by looking at me. i guess that must mean i look like i belong with the crowd of other sector attendees, even if i've never really felt like i do (what with not being an information security professional and all).<br />
<br />
the second thing that struck me was the badge redesign. more space was dedicated to the QR code than to the human readable name. almost as if my interactions with machines are more important than my interactions with people.<br />
<br />
the first keynote of day one was "how the west was pwned" by g. mark hardy. i suppose it was a kind of cyberwar talk (that's certainly how it was introduced), but really focused more on economic/industrial espionage, theft of trade secrets and intellectual property and that sort of thing. there were some interesting bits of trivia, like china's cyber warrior contingent having a comparable number of people to the entire united states marine corps. also an interesting observation about the global form of government (that being the system that governs us on a global scope rather than simply within our own nations) being anarchy. i'd never thought of it that way before, but there really isn't anyone governing over how nations interact with each other or how people interact with foreign nations.<br />
<br />
the first normal talk of day one that i attended was a so-called APT building talk. specifically it was "exploiting the zero'th hour: developing your advanced persistent threat to pwn the network" given by solomon sonya and nick kulesza. i kinda knew going in that this wasn't going to be the best quality APT talk just by the title. they clearly believe APT is simply a kind of advanced malware rather than realizing that APT is people. i can't say references to "the internet cloud" improved my opinion any. add to that the fact that anyone who took an undergrad systems programming course would have recognized most of the concepts they were talking about and i was pretty "meh" about the talk. the rest of the audience, however, was clearly very impressed based on the applause. all but one, that is. he called them out on their amateurish malware (about the only part of the APT acronym they got right was persistent, and even that is debatable). he also called them out on their release of malware (i swear he wasn't me, even though it probably seems like something i would do) that really wouldn't help anyone defend but certainly would help arm the shallower end of the attacker gene pool. i quite agreed with his objections, but the applause again from the rest of the audience when one of the speakers said he could sleep quite well at night made it clear who the community was siding with here.<br />
<br />
that all left a bad taste in my mouth so i decided to skip the next round of talks. that wasn't a difficult decision to make since the entire time-slot was filled with sponsored talks which i've long found to be a disappointment. so instead i took the time to look around and see what and who i could see.<br />
<br />
i happened to luck out and stumble across chris hoff. i'm not entirely sure he remembered/recognized me but that doesn't come as a huge surprise since i'm not the most memorable person in the world and my appearance has changed significantly since the days when he did remember/recognize me. also, and perhaps more to the point, someone like chris has got to get approached by so many people that there'd be no way he could remember them all. that's part of being a "security rock star". anyway, we chatted briefly and he asked me if i was a speaker or listener. i'm definitely not a speaker and i told him i've sorta been down the speaking path before and it didn't work out so well (part of being on a panel involves speaking, right?). he shared an anecdote of his own which frankly put my bad experience to shame. still, if i went to the effort to develop that skill, what would i do a talk about? "everything you know about anti-virus is wrong"? i expect that would go over about as well as a lead balloon. my specialty is in something that has little or no respect in the information security community, so even if i did by some miracle make it past the CFP stage, i can't imagine there'd be much of a turn-out.<br />
<br />
after that i saw a familiar face i never would have expected. an old colleague from work, joel campbell, who i gather now works at trustwave and was manning their booth on the expo floor. we chatted a bit about work of course, but also about security conferences like sector and how they compare with some of the ones in the states. sector is apparently small, which rationally i knew since i did once attend RSA, but i guess with little else to compare it to in more recent times, sector seems big to me.<br />
<br />
the lunch keynote given by gene kim about DevOps interested me in an 'i know someone who'd probably be interested in this' sort of way. i can't wait for the video to become available so i can share it with some of my higher-ups in the dev department at work (we do have an ops guy sort of embedded with us devs, i wonder what DevOps would say about that). there was also a very interesting observation about human nature; apparently when we break promises we compensate by making more promises that are even bolder and less likely to be kept. i think i've seen that play out on more than one occasion.<br />
<br />
after lunch i attended kelly lum's talk ".net reversing: the framework, the myth, the legend", which was pretty good despite the original recipe bugs that kept her distracted at the beginning. i actually saw a .net hacking talk last year as well (i'm a .net developer, it stands to reason i'd be interested in knowing how people can attack my work) but this one spent less time talking about all the various gadgets you could use to attack .net programs and more time talking about the format such that one could possibly use it as a starting point for creating one's own .net reverse engineering tools. that'll certainly be filed away for future reference.<br />
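the idea that the format itself is the starting point is easy to demonstrate: a .net assembly is just a PE file whose optional header has a non-zero CLR runtime header entry (data directory 14 in the PE/COFF spec). here's a minimal sketch of that check (in python rather than .net, purely illustrative - a real tool would use a proper parser rather than this kind of hand-rolled offset-chasing):<br />

```python
import struct

def is_dotnet_assembly(data: bytes) -> bool:
    """Heuristically detect a .net assembly by checking whether the PE
    optional header's CLR runtime header entry (data directory 14) is
    non-zero."""
    # DOS header: 'MZ' magic, e_lfanew (offset of PE signature) at 0x3c
    if len(data) < 0x40 or data[:2] != b"MZ":
        return False
    pe = struct.unpack_from("<I", data, 0x3C)[0]
    if data[pe:pe + 4] != b"PE\0\0":
        return False
    # optional header follows the 4-byte signature and 20-byte COFF header
    opt = pe + 4 + 20
    magic = struct.unpack_from("<H", data, opt)[0]
    # data directories start 96 bytes into the optional header for PE32,
    # 112 bytes in for PE32+ (64-bit)
    dirs = opt + (96 if magic == 0x10B else 112)
    if len(data) < dirs + 15 * 8:
        return False
    clr_rva, clr_size = struct.unpack_from("<II", data, dirs + 14 * 8)
    return clr_rva != 0 and clr_size != 0
```

everything more serious (walking the metadata tables, decompiling the IL) builds on the same kind of format knowledge, which is kind of the point kelly was making.<br />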
<br />
following that i attended leigh honeywell's talk "threat modeling 101", only it wasn't really a talk. this was one of the more inventive uses of the time-slots speakers are given, as she actually had us break up into groups to play a card game called elevation of privilege. it's quite an interesting approach to teaching people to think about various types of attacks and i've already talked about the game at work and shared some links. hopefully i can get some of my coworkers to play.<br />
<br />
for the last talk of day 1 i attended "return of the half schwartz fail panel" with james arlen, mike rothman, dave lewis, and ben shapiro. this was apparently a follow-up of a previous fail panel that i never saw, but that didn't seem to matter because this one didn't seem to reference it at all. i didn't find it particularly cohesive, i guess because the only common theme it was designed to have running throughout was failure, but one interesting thing i took away was the notion of venture altruism. it's a different way of looking at things than i'm used to as i tend to frame things more as 'noblesse oblige', but it certainly appears as though quite a few people really do have their hearts in the right place in that they're trying to make the world a better place in their own particular, security-centric way.<br />
<br />
i decided to opt out of the reception afterwards. i felt guilty about it because i know i really ought to have gone but the truth is that in all the times i've gone before i've never really felt comfortable among all those strangers in a purely social environment. plus there was last year's (and possibly other years as well, but definitely last year) shenanigans where your badge would get scanned in order for you to get drink tickets, and then the company doing the scanning would send you email as though you had actually shown interest in them and visited their booth. i know the conference is an important tool for generating leads for sales, but over drink tickets? really? i suppose if they're paying for the drinks then it's hard to argue against them getting your contact info in return, but at least when facebook asks you to trade your privacy for some reward you have some kind of idea that that's what's going on. it made participating in the reception feel like bad OpSec; and you know, if you add enough disincentives together you're eventually going to inhibit behaviour.<br />
<br />
the day 2 morning keynote was another panel, and if i'd gotten the impression from the fail panel that panels lacked cohesion, this one dispelled it. "crossing the line; career building in the IT security industry" with brian bourne, leigh honeywell, gord taylor, james arlen, and bruce cowper as moderator focused very strongly on the issue of crossing legal, ethical, and moral lines and whether that was necessary to get ahead and be taken seriously in security. i came into the keynote thinking it would be more about career building (which hasn't been that interesting to me in the past since i'm perfectly happy <b>not</b> being in InfoSec) but the focus on the law, ethics, and morals is much more interesting to me as the frequent mentions of ethics on this blog could probably attest to. i was pleased to see both leigh and gord take the position that crossing those lines is not necessary and holding themselves up as examples. james was careful to point out that those lines are not set in stone (they're "rubber" as he put it, though he also made a point that that doesn't mean they aren't well defined), and there's certainly a point there, at least with the relevance of the law, as there are some really poorly written laws as well as some badly abused laws (as the prosecution of aaron swartz certainly highlights). of course as the amateurish malware distributors from day 1 demonstrated, crossing ethical and moral lines is still widely accepted and embraced in the information security community. one might want to draw a comparison between that and the lock pick village, which teaches people how to breach physical security, but the lock picking at least has a dual use (beyond simple education) in that it allows you to regain access to things that you have a legal right to but would otherwise be unable to access because you lost a key, for example. 
the AV community was historically much more stringent about not crossing those lines, and much closer to having (or at least implicitly obeying) a kind of hippocratic oath; and having literally grown up with that influence i'm certainly in favour of it, though when leigh mentioned the hippocratic oath it did not seem that well received. james pointed out that ISC^2 has a rule against consorting with hackers and yet gives credits for attending hacker conferences - which to me makes them seem either hypocritical or toothless. i could probably write an entire post about this topic alone, or rather another one, since the post i wrote years ago is kind of begging for a follow-up.<br />
<br />
the first regular talk i attended the second day was schuyler towne's "how they get in and how they get caught", which turned out to be a lock picking forensics talk (in the security fundamentals track, no less). after having seen a number of talks about lock picking over the years, seeing one on detecting that lock picking has occurred rounded things out really nicely. the information density for the talk was high, there was even a guy in front of me taking picture after picture of the diagrams being shown on the screen, but schuyler is really passionate about the subject matter and did a good job of keeping the audience's interest in spite of all the details and photos of lock parts under high magnification.<br />
<br />
after that talk i finally relented and attended one of the sponsored talks, specifically "the threat landscape" by ross barrett and ryan poppa of rapid7. i suppose it's only fitting that a vendor would hand out buzzword bingo sheets. certainly it's good that they acknowledge that as vendors they're expected to throw out a lot of buzzwords. but i think it kind of backfired for the talk because rather than paying attention to what they were saying i found myself paying attention to what buzzwords i could cross off my sheet. buzzword bingo is a funny joke, but if you make it real i think you wind up sabotaging your talk. on the other hand, perhaps that acts as a proxy for actual engagement of the audience, so that people will come away feeling better about the talk than they otherwise might have.<br />
<br />
the lunch keynote by marc saltzman was really more entertainment than information. flying cars? robots? virtual reality? ok. lunch was good, though.<br />
<br />
after lunch i attended an application security talk given by gillis jones. this one wasn't in the schedule so i can't look up the actual name of the talk. it replaced james arlen's "the message and the messenger" which i've already seen on youtube. i guess whenever they say app sec they must be talking about web application security, because i can't say i've seen much in the way of winform application security talks (unless .net reversing counts). i'm not a web guy, i don't do web application development (yet) so i sometimes find myself out of my depth, but (perhaps because it was in the security fundamentals track) gillis approached the topic in a way that would help beginners understand, and i certainly feel like i have a better handle on some of the topics he covered. in fact, i started trying to find XSS vulnerabilities at work the very next day.<br />
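the reflected XSS bugs i went looking for come down to one thing: user-controlled input echoed into a page without escaping. a toy sketch of the difference (hypothetical function names, python's stdlib <i>html</i> module standing in for whatever templating layer is actually in use):<br />

```python
import html

def render_greeting_unsafe(name: str) -> str:
    # vulnerable: user input is concatenated straight into markup,
    # so a name like "<script>..." becomes live script in the page
    return "<p>hello, " + name + "</p>"

def render_greeting_safe(name: str) -> str:
    # escaped: <, >, & and quotes become entities, so an injected
    # tag renders as harmless text instead of executing
    return "<p>hello, " + html.escape(name, quote=True) + "</p>"
```

decent template engines do this escaping by default; the bugs tend to show up wherever output gets built by raw string concatenation like the first function.<br />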
<br />
for the final talk of the conference i attended todd dow's "cryptogeddon" which was a walk-through of a cyber wargame exercise. it had a very classroom-like approach to working through a set of clues in order to gain access to an enemy's resources. that format works well, i think, and i can see why educators would want to use todd's materials for their classes.<br />
<br />
and that was pretty much my experience of sector 2013. it's taken me several days to write this up - certainly enough time for me to come down with the infamous "con-flu", but i never do. i'm not certain, but i have a feeling that my less social nature makes me less likely to contract it somehow. i don't shake as many hands, or collect as many cards, or stand face to coughing/sniffling/sneezing face with as many people as some of the more gregarious attendees do.kurt wismerhttp://www.blogger.com/profile/03810635947269551517noreply@blogger.com0tag:blogger.com,1999:blog-7347279.post-670950050592180752013-08-07T20:17:00.000-04:002013-08-07T20:17:26.945-04:00if google gives you security advice, get a second opinion<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="http://www.secmeme.com/2013/08/aint-nobody-got-time-for-master.html" style="margin-left: auto; margin-right: auto;"><img border="0" height="312" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCNIKv37uD9avUYgxuvl-nFDaK6GG_l2lyjvCa7QGv_4-J3IE4BooM5Rm_AOsssD5lJjBZSRkcKIKBNPPfbig0xf7xyJPqk8KR2qH5fCGQdM6FavWWW_P4NAgLabiJs428AziYcA/s320/chromepasswordstorage.jpg" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">originally posted on <a href="http://www.secmeme.com/2013/08/aint-nobody-got-time-for-master.html">secmeme</a></td></tr>
</tbody></table>
remember back when google's chrome browser was shiny and new and their vaunted <a href="http://anti-virus-rants.blogspot.com/2008/02/what-is-sandbox.html">sandboxing</a> technology <a href="http://anti-virus-rants.blogspot.ca/2008/09/chrome-plated-security.html">didn't actually place plug-ins in a sandbox</a> even though the plug-ins, with their existing body of <a href="http://anti-virus-rants.blogspot.ca/2012/09/what-is-vulnerability.html">vulnerabilities</a> and research, would have been the most likely vector of attack for a brand new browser? seems like kind of a glaring oversight, right?<br />
<br />
and who could forget google's <a href="http://news.cnet.com/8301-30685_3-57327424-264/googler-android-antivirus-software-is-scareware-from-charlatans/">chris dibona ranting about android not needing anti-malware and sellers of such products being scammers and charlatans</a>? of course now google themselves are hard at work trying to stem the tide of android <a href="http://anti-virus-rants.blogspot.com/2006/01/what-is-malware.html">malware</a> with things like bouncer, <a href="http://www.techrepublic.com/blog/it-security/-google-play-androids-bouncer-can-be-pwned/">but that's far from perfect</a>.<br />
<br />
heck, even google's infamous tavis ormandy had to take a second stab at executing his sophail vendetta* because his <a href="http://anti-virus-rants.blogspot.ca/2011/08/tavis-ormandys-sophail-presentation.html">first attempt was so laughably bad</a>.<br />
[* i refer to it as a vendetta because a) it followed then sophos representative <a href="http://nakedsecurity.sophos.com/2010/06/15/tavis-ormandy-pleased-website-exploits-microsoft-zeroday/">graham cluley publicly chewing tavis ormandy</a> out for what has since become official google policy (disclosing vulnerabilities after a ridiculously short period of time), and b) the entire sophail effort from start to finish spanned <b>years</b>.]<br />
<br />
now comes news that google's chrome browser <a href="http://blog.elliottkember.com/chromes-insane-password-security-strategy">doesn't require the user to enter a master password</a> before displaying saved passwords? and not only that but it also comes with a condescending head of chrome security, justin schuh, defending the design by claiming that master passwords breed a false sense of security by making people think it's safe to share their computer with others or leave them unlocked and unsupervised. he repeatedly falls back on the trope of "once the bad guy got access to your account the game was lost". nevermind the fact that most people will assume it's protected regardless of what chrome does because that's how most browsers have behaved for years (so not protecting the passwords is even worse than protecting them partially), nor the fact that attackers are also capable of bypassing the user account protection chrome is abdicating password security responsibility to. no protection is perfect, but that doesn't mean we throw out the imperfect ones or we'll eventually be left with none at all.<br />
<br />
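the pattern a master password is supposed to implement is simple enough to sketch: the saved passwords are encrypted under a key derived (slowly) from the master password, so an attacker who copies the password database still has to guess the master password before the entries are readable. a minimal illustration in python follows - note that the HMAC-based keystream here is just an illustrative stand-in for a real authenticated cipher like AES-GCM, and none of this is what any particular browser actually ships:<br />
<br />

```python
import hashlib, hmac, os

def derive_key(master_password: bytes, salt: bytes) -> bytes:
    # deliberately slow key derivation so brute-forcing the master
    # password from a stolen database is expensive
    return hashlib.pbkdf2_hmac("sha256", master_password, salt, 200_000)

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # HMAC-counter keystream - a toy stand-in for a real cipher
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"), "sha256").digest()
        counter += 1
    return out[:length]

def encrypt(master_password: bytes, plaintext: bytes):
    salt, nonce = os.urandom(16), os.urandom(16)
    key = derive_key(master_password, salt)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return salt, nonce, ct

def decrypt(master_password: bytes, salt: bytes, nonce: bytes, ct: bytes) -> bytes:
    # the wrong master password derives the wrong key and yields garbage,
    # not the saved password
    key = derive_key(master_password, salt)
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))
```

<br />
the point isn't the particular cipher, it's that the stored secrets are unreadable without something only the user knows - which is precisely the property chrome's design throws away.<br />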
it's almost enough to make you think google never gets anything in security right the first time. but wait - it's not like password storage is an innovative new concept. there's been an established pattern around for years that they could have simply followed. it's not even like they could claim to not be aware of it when other browsers follow that pattern. frankly, if the folks at google really think they know password storage security better than everyone that came before them, from a UK software developer to mozilla engineers to <a href="http://www.schneier.com/blog/archives/2005/06/password_safe.html">bruce freaking schneier</a>, then i respectfully suggest that they pull their heads out of their asses and get with the program. if they were really concerned about a false sense of security then maybe they shouldn't be storing passwords in the first place, after all it's not unheard of for a browser to be tricked into revealing the contents of its password store to a remote attacker when visiting a specially crafted malicious webpage.kurt wismerhttp://www.blogger.com/profile/03810635947269551517noreply@blogger.com0tag:blogger.com,1999:blog-7347279.post-89606998416699883872013-06-13T12:30:00.000-04:002013-06-13T12:30:00.567-04:00expert misuse of the term "virus"<span style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13px; line-height: 18px;">in the past, the argument many <a href="http://anti-virus-rants.blogspot.com/2006/05/what-is-expert.html">experts</a> have used against putting in the effort to actually correct people's misconceptions about what a <a href="http://anti-virus-rants.blogspot.com/2006/01/what-is-virus.html">computer virus</a> is and what it does and how it's different from other forms of <a href="http://anti-virus-rants.blogspot.com/2006/01/what-is-malware.html">malware</a> has centered on the idea that it doesn't really matter to a victim what kind of malware they have. 
when they have malware on their computer, all they care about is getting rid of it. </span><span style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13px; line-height: 18px;">i can understand this school of thought, even though i've never agreed with it. however, the world has changed. </span><br />
<span style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13px; line-height: 18px;"><br /></span>
<span style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13px; line-height: 18px;">this is a post stuxnet world and defenders/victims aren't the only ones we need to consider anymore. we now have people barking orders to create digital weapons, and the details of those orders matter. if they say make me a virus then someone will make them a virus even if they didn't understand what they were asking for - and there's evidence to suggest they don't. </span><br style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13px; line-height: 18px;" /><br style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13px; line-height: 18px;" /><span style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13px; line-height: 18px;">stuxnet is generally believed to have been a targeted, covert operation, but it used a noisy, untargetable* type of malware - a computer virus (or more specifically a <a href="http://anti-virus-rants.blogspot.ca/2006/01/what-is-worm.html">worm</a>). 
there were and are better ways to achieve the ends we believe the creators of stuxnet were after, so one can only assume that high level decisions were made in ignorance of the differences between malware types and the consequences those differences would have on that type of operation.</span><br style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13px; line-height: 18px;" /><br style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13px; line-height: 18px;" /><span style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13px; line-height: 18px;">the question, then, of whether to put in the effort to educate people about the proper use of the term virus can no longer be answered by looking exclusively at the victims who want their computers to be clean. it is now also necessary to consider aggressors who want to use malware as a weapon to serve national interests. as misguided as such behaviour is, we have to accept that it's happened and will continue to happen, and actually knowing the differences between malware types may mean the difference between a surgically precise operation, or one with a lot of collateral damage. </span><br />
<span style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13px; line-height: 18px;"><br /></span>
<span style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13px; line-height: 18px;">this isn't to say that i think we should start helping aggressors create and/or launch their digital weapons. i still don't believe in helping the bad guys, and even if i believed such nationalistic aggressors weren't bad guys (which i don't), i don't believe there's any way to help them without also helping those who are much more unambiguously bad. what i am saying, however, is that this particular form of ignorance that experts have been too lazy to address can cause real harm and ignoring it means ignoring the opportunity to reduce the unintended harm such people will cause.</span><br />
<span style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13px; line-height: 18px;"><br /></span>
<span style="background-color: white; color: #404040; font-family: Roboto, arial, sans-serif; font-size: 13px; line-height: 18px;">(*many believe stuxnet was highly targeted, but there's a distinction to be made. while its destructive payload was highly targeted to a very specific environment, its self-replication was not - it spread far beyond its intended target)</span>kurt wismerhttp://www.blogger.com/profile/03810635947269551517noreply@blogger.com0tag:blogger.com,1999:blog-7347279.post-23082674316645641652013-05-27T16:00:00.000-04:002013-05-27T16:00:00.575-04:00more on bromium and snake oilin my <a href="http://anti-virus-rants.blogspot.com/2013/05/no-bromium-will-not-kill-all-malware.html">previous post about bromium</a> i looked at claims that a technology reporter made about their technology (that it would kill all <a href="http://anti-virus-rants.blogspot.com/2006/01/what-is-malware.html">malware</a> forever), noting that it was for all intents and purposes <a href="http://anti-virus-rants.blogspot.com/2006/04/what-is-snake-oil.html">snake oil</a>, and suggesting that the folks at bromium were doing the public a disservice by failing to dispel the false sense of security that that sort of reporting/opinion generates.<br />
<br />
bromium's tal klein took exception to this based on what he believed to be my misunderstanding of their technology, and suggested i go read their white paper, among other things. here's some friendly advice for all you vendors out there: when someone calls you out for snake oil and you tell them to go read your white paper because they don't know what they're talking about, you better make sure they don't find more snake oil in your <a href="http://webcache.googleusercontent.com/search?q=cache:lm-kPgBJD3YJ:www.bromium.com/misc/Bromium_vSentry_WP.pdf+&cd=2&hl=en&ct=clnk&gl=ca">white paper</a> - especially not in the second paragraph. and i quote:<br />
<blockquote class="tr_bq">
It defeats all attacks by design.</blockquote>
that's a rather bold claim, don't you think? sort of suggests perfect security, doesn't it? but wait, there's more in the third paragraph:<br />
<blockquote class="tr_bq">
It revolutionizes information protection, ensures compliance – even when users make mistakes, <b>eliminates remediation</b>, and empowers users – saving money and time, and keeping employees productive.</blockquote>
the emphasis above is mine. "eliminates remediation" or "eliminates the need for remediation" is something of a recurring theme in bromium's marketing materials. you can find it in their <a href="http://www.bromium.com/product/introducing-vsentry.html" rel="nofollow">introductory video</a>, and even hear a version of it from the mouth of simon crosby himself in their video on <a href="http://www.bromium.com/product/isolate-defeat-attacks.html" rel="nofollow">isolating and defeating attacks</a>.<br />
<br />
the only way you can eliminate remediation is if prevention never fails. but there is no such thing as a prevention technique that never fails. <a href="http://anti-virus-rants.blogspot.ca/2007/03/theres-more-to-security-than-just.html">all preventative measures fail</a> sometimes. if you believe otherwise then i've got a bridge to sell you (no, not really). perfect prevention is a pipe-dream. it's too good to be true, but people still want to believe and so snake oil peddlers seize on it as a way to help them sell their wares.<br />
<br />
so it would appear that things are actually worse than i had originally thought. not only is bromium letting 3rd party generated snake oil go unchallenged, they're actively peddling their own as well. now just to be clear, i'm not saying that vsentry isn't a good product, from what i've read it sounds quite clever, but - even if you have the best product in the world, if you make it out to be better than it is (or worse still, make it out to be perfect) and foster a false sense of security in prospective customers, then you are peddling <b>snake oil</b>.<br />
<br />
customers may opt to ignore the possibility of failure and the need to remediate an incident, but i wouldn't suggest it. to re-iterate something from my previous post, it's an isolation-based technique. although they often like to gloss over the finer details, their isolation is not complete - they have rules (or policies if you prefer) governing what isolated code can access outside the isolated environment, as well as rules/policies for what can persist when the isolated code is done executing. this is necessary. you're probably familiar with the aphorism about if a tree falls in a forest and no one is around to hear it, does it make any sound? well:<br />
<blockquote class="tr_bq">
if code runs in an environment that is completely isolated, does it do any useful work? </blockquote>
the answer is a resounding no. useful work (the processing of data to attain something that can itself be used by something else) has not occurred. all isolation-based techniques must allow for exceptions because of the nature of how work gets done. we divide labour, not just between multiple people, but also between multiple computers, multiple processes, multiple threads, and even multiple subroutines. we need exceptions to isolation, paths through which information can flow into and out of isolated environments, so that the work that gets done in isolation can be used as input for yet more work. this transcends the way the isolation is implemented, it is an inescapable part of the theory of isolating work.<br />
<br />
and that is a weakness of every isolation-based technique - the need to breach the containment it affords in order to get work done. someone or something has to decide if the exception being made is the right thing to do, if the data crossing the barrier is malicious or will be used maliciously. if a person is making the decision then it boils down to whether that person is good at deciding what's trustworthy or not. if a machine is making the decision then, by definition, it's a decidability problem and is subject to some of the same constraints as more familiar decidability problems (like detection - after all, determining if data is malicious is as undecidable as determining if code is malicious). in the case of vsentry, a computer is making the day to day decisions. the decisions are dictated by policies written by people, of course, but written long before the circumstances prompting the decision have occurred, so people aren't really making the decision so much as they're determining how the computer arrives at its decision. the policies are just variables in an algorithm. the decisions made by people involve what things vsentry will isolate (it only isolates untrusted tasks, not all tasks), but people deciding what to trust and what not to trust is basically the same thing that happens in a <a href="http://anti-virus-rants.blogspot.com/2008/02/what-is-application-whitelisting.html">whitelisting</a> deployment or when people think they're smart enough to go without <a href="http://anti-virus-rants.blogspot.com/2008/02/what-is-anti-malware-anti-virus.html">anti-virus software</a>, and we already know the ways in which that can go awry.<br />
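to make that concrete, here's a toy sketch in python of what rule-based exceptions to isolation look like. the resource names and rules are invented purely for illustration - this is not bromium's actual policy engine or policy format. notice that all the human judgment is frozen into the rule table ahead of time; at run time the machine just mechanically applies it:<br />
<br />

```python
# toy model of policy-driven isolation exceptions. the resources and
# rules below are hypothetical examples, not anything vsentry ships with.
POLICIES = [
    # (resource prefix, operations allowed across the isolation boundary)
    ("clipboard", {"read"}),
    ("downloads/", {"read", "write"}),
]

def allowed(resource: str, operation: str) -> bool:
    """decide, long after the policy was written, whether data may cross."""
    for prefix, ops in POLICIES:
        if resource.startswith(prefix) and operation in ops:
            return True
    return False  # default-deny: anything unlisted stays inside the sandbox

# note that the policy can't tell whether the data crossing the boundary
# is malicious - it only knows someone once decided this path/operation
# combination was trustworthy.
```

<br />
whether the file an isolated task writes into the allowed location is actually malicious is exactly the undecidable question described above; the rule table only encodes a trust decision somebody made in advance.<br />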
<br />
vsentry may have <a href="http://www.reuters.com/article/2013/02/20/ca-bromium-idUSnBw8bh7JHa+106+BSW20130220">scored a perfect score in an evaluation performed by NSS Labs</a> using malware and an operator utilizing metasploit, but that doesn't mean it's perfect anymore than receiving the VB100 award makes an anti-virus product perfect. they weren't able to find a way past vsentry's defenses because vsentry is still new and still novel. it will take time for people to figure out how to effectively attack it, but eventually they will. the folks at bromium need to tone down their claims and take these famous words to heart:<br />
<blockquote class="tr_bq">
<b>Don't be too proud of this technological terror you've constructed - Darth Vader</b></blockquote>
kurt wismerhttp://www.blogger.com/profile/03810635947269551517noreply@blogger.com10tag:blogger.com,1999:blog-7347279.post-32082013842201890182013-05-20T16:00:00.000-04:002013-05-20T16:00:03.430-04:00no, bromium will not kill all malware foreverover the weekend a discussion broke out on twitter (as discussions are wont to do) about <a href="http://www.zdnet.com/bromium-a-virtualization-technology-to-kill-all-malware-forever-7000015382/">a somewhat overly optimistic article</a> concerning the new <a href="http://anti-virus-rants.blogspot.com/2008/02/what-is-anti-malware-anti-virus.html">anti-malware</a> apple of the security community's eye: bromium.<br />
<br />
the primary tactic that bromium uses (or at least the primary one that people focus on) is isolation/<a href="http://anti-virus-rants.blogspot.ca/2008/02/what-is-sandbox.html">sandboxing</a>. bromium's vsentry product uses virtualization on a per-process basis to isolate every process from the system and from each other. that level of granularity for isolation is a lot higher than most sandboxing efforts can give you. while there are certainly benefits to that granularity, there are also drawbacks.<br />
<br />
perfect isolation is actually not desirable: we want and even need to be able to use the results of one process inside another one. the more sandboxes you have, the harder this is to manage. the folks at bromium have opted to address this issue using rule-based systems to decide what something in a sandbox can access as well as what to do with any changes that are left when the sandboxed process is finished. rules which, in all likelihood, the administrator can modify to suit their needs.<br />
<br />
now, while the article in question is reasonably good at explaining what bromium's vsentry does, the author (jason perlow) takes the arguably naive view that this sandboxing technique can stop all possible <a href="http://anti-virus-rants.blogspot.com/2006/01/what-is-malware.html">malware</a> (as evidenced by the article's headline: "Bromium: A virtualization technology to kill all malware, forever"). the reality, however, is that <a href="http://anti-virus-rants.blogspot.ca/2006/12/what-virtualization-can-and-cannot-do.html">there are limits to what sandboxing can do</a>, and as clever as the folks at bromium are, they aren't clever enough to deliver on the promise that headline makes.<br />
<br />
that's a problem, because people are going to read that headline, see nothing in the article to actually contradict it, and believe that it's actually true. have we seen claims like that before? sure we have - saying it can kill all malware forever is not intrinsically different from claiming 100% protection. it's classic <a href="http://anti-virus-rants.blogspot.com/2006/04/what-is-snake-oil.html">snake oil</a>, only in this case it's not the vendor that's spreading it (as far as we know - we don't know exactly what the folks at bromium may have said to mr. perlow, only that <a href="https://twitter.com/VirtualTal/status/335881219424264193">they say the headline is his words, not theirs</a>).<br />
<br />
i suppose that should mean there's no problem, right? the vendor's hands are clean, after all. the snake oil is being spread by a third party. <a href="http://twitter.com/VirtualTal/status/335935195599499264">the vendor isn't doing anything about it in this case</a> or <a href="http://www.businessinsider.com/bromium-could-end-computer-viruses-forever-2012-9">previous cases that have arisen</a> because, let's face it, they benefit from it. it's good for bromium's business if people think vsentry is better than it actually is, at least in the short term. in the long term, the kinds of mismatched expectations that creates are the same kind that the AV industry struggles with daily.<br />
<br />
it is bromium's responsibility to control how their products are perceived, and by failing to take action they are giving tacit approval to the snake oil being spread on their behalf. their hands are not actually clean, they are dirty through negligence. however, <a href="http://anti-virus-rants.blogspot.com/2013/05/know-your-enemy-security-vendors.html">i didn't really expect any better of them</a> (though <a href="https://twitter.com/imaguid/status/335976578897035264">i did give them an opportunity to surprise me</a>) and you probably shouldn't either. tread carefully - caveat emptor.kurt wismerhttp://www.blogger.com/profile/03810635947269551517noreply@blogger.com4tag:blogger.com,1999:blog-7347279.post-11955433614207823722013-05-20T15:30:00.000-04:002013-05-20T15:30:01.040-04:00know your enemy: security vendorsjust to be clear, i'm not suggesting that vendors are waging some kind of war against their own customers - they aren't (usually) that kind of enemy. but by the same token, vendors are not your friends either. when it comes to laying out strategies for protecting yourself and your stuff, it's important to know what category to place the various players involved, and vendors are best thought of as adversaries.
<br />
<br />
to better explain what i mean, imagine you're sitting around a table with your friends playing the classic board game monopoly. although these people really are your friends, in the context of the game, their goal is to win at everyone else's expense. in serving their own interests, they act in ways that don't serve yours and in fact may sometimes be in direct opposition to your interests. in this way it can be said that you and your friends have competing interests.<br />
<br />
the customer and the vendor are generally not competing with each other in the conventional sense, but their interests are not aligned and in some cases the interests do compete. you as a customer have an interest in keeping your computers, intellectual property, banking credentials, etc. safe and secure. vendors also have an interest in that to a certain extent, but protecting you and your stuff is not a vendor's highest priority.<br />
<br />
vendors are companies. as such their highest priority is the bottom line. without the bottom line, the company ceases to be. companies don't just start up out of thin air, they need money; which means they have investors and those investors expect a good return on their investment, or else it's not a good investment and they might not invest anymore in the future, or maybe even pull out their stake in the company. companies also have operating expenses. they need to pay to keep the lights on and the machines running, and they need to pay their employees who themselves have expenses (families they need to feed and put roofs over their heads). therefore the company has to make profit its priority. the way vendors make money is by vending - they sell a product and the more product they sell the more money they make.<br />
<br />
in theory if the product is good then they'll sell more of it, but it doesn't need to be good enough to stop all the threats to you or your stuff - vendors aren't competing with the bad guys, they're competing with each other, so they only need to be better than other vendors. what's more, since technical '<i>goodness</i>' is difficult for customers to accurately quantify, the vendor only needs their product to be <b>perceived</b> to be good. technical quality is still required up to a point, of course, because you can't fool all the people all the time. but, since your buying decisions as a customer are based on perception, and that perception can be altered/manipulated more cheaply through marketing than through technological advancement, companies engage in this kind of shortcut to help them maintain or even advance their market position.<br />
<br />
how does this compete with your interests as a defender of yourself and stuff? well, in a few different ways, actually:<br />
<br />
<ol>
<li>by conventional falsehood, they make their product out to be better than it is and so draw you away from something that may actually suit your needs better (example: look at any vendor that's ever claimed to be able to take care of all/100% of any kind of threat)</li>
<li>by omission, they make solving your security problems seem easier than they really are because nobody wants to make the customer swallow a bitter pill about how much work is really involved in staying safe, especially when their competitors aren't doing it (example: how many vendors will tell you about what you need to do when their product doesn't work? how many will even talk about that scenario?)</li>
<li>by framing the issue, they make the customer think about the customer's security issues in the vendor's terms, thereby favouring the vendor's proposed 'solution' rather than formulating strategies to meet the customers own unique, individual needs (example: a number of <a href="http://anti-virus-rants.blogspot.com/2008/02/what-is-anti-malware-anti-virus.html">anti-malware</a> vendors used to provide generic detective controls in the form of integrity checkers, but those seem to be mostly gone now and vendors instead talk about technologies based on having varying degrees and types of knowledge about threats, while '<i>generic detection</i>' (of a different sort) has become a glossed over, value added feature of their scanners)</li>
</ol>
<div>
all of these work against your interests in protecting yourself and your stuff. they work against you finding the best tool for your job, or figuring out everything you need to do, or even knowing there's more to it than just using the vendor's product.</div>
<div>
<br /></div>
<div>
before you get the wrong idea, i don't want you to think this is a condemnation of the people who work for vendors. individually, many of them may well be much closer to being your friend and being on your side than the company they work for as a whole is. their interests are never perfectly aligned with yours, of course. you won't see them sacrificing their own interests (their families, their money, their jobs) for your benefit, and you wouldn't really expect them to, would you? some of them (a scant few when you consider the total number that security vendors employ) will sacrifice some of their time and energy to help people (whether their company's customers or no) learn about the threats that are out there and thus be better armed against those threats. just because someone works for a vendor doesn't mean their character is a reflection of the character of the corporate entity that employs them. yes, companies are run by people but it's their collective behaviour that makes the character of the company. the phrase "<i>none of us are as cruel as all of us</i>" doesn't just apply to anonymous, nor does it just apply to cruelty. </div>
<div>
<br /></div>
<div>
i also don't want you to think this is a condemnation of vendor companies either. remember, they're not exactly enemies in the conventional sense, but rather adversaries. as much as i tend to refer to them as bad actors, or irresponsible, or any number of other judgmental labels, i can't really see how they could work any other way. the judgments are really just a way of highlighting the divergence of interests between the vendor and the customer. there is some variation in the degree to which they do the things that they do, of course. smaller companies are more easily influenced by noble ideals, in part because of size and in part because they have less at stake and so can afford to be more '<i>innovative</i>' in how they operate. it doesn't always work that way, and it doesn't mean their bottom line isn't still the bottom line, but some take a more scenic route to their goals.</div>
<div>
<br /></div>
<div>
that being said, the fact remains that vendors' interests do not align with those of their customers (i.e. you). that means it's important to take what they say with a grain of salt and to evaluate whether the things they say or do or produce are really of actual benefit to you. pick over what they have to offer, take what you can use and throw away the rest. in essence, forage on the enemy.</div>
kurt wismerhttp://www.blogger.com/profile/03810635947269551517noreply@blogger.com0