Friday, April 04, 2014

goto fail refactored

i wrote the lion's share of this a while ago but wasn't sure i wanted to publish yet another post about GOTO here since this isn't a programming blog. my mind was made up yesterday when i read this post by Steven J Vaughan-Nichols where he quotes a number of technology personalities essentially giving bullshit excuses for why GOTO is OK to use. it's no wonder 2 separate crypto libraries (both making prodigious use of GOTO) suffered embarrassing and dangerous defects recently when programming thought leaders perpetuate myths about structured programming.

i'm providing this as an object lesson in how to avoid the use of GOTO, especially in security-related code where a higher standard of quality is sorely needed. i'll be using Apple's Goto Fail bug as the example. here is the complete function where the fail was found, with the bug intact:

static OSStatus SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams, uint8_t *signature, UInt16 signatureLen)
{
    OSStatus        err;
    SSLBuffer       hashOut, hashCtx, clientRandom, serverRandom;
    uint8_t         hashes[SSL_SHA1_DIGEST_LEN + SSL_MD5_DIGEST_LEN];
    SSLBuffer       signedHashes;
    uint8_t         *dataToSign;
    size_t          dataToSignLen;

    signedHashes.data = 0;
    hashCtx.data = 0;

    clientRandom.data = ctx->clientRandom;
    clientRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;
    serverRandom.data = ctx->serverRandom;
    serverRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;


    if(isRsa) {
        /* skip this if signing with DSA */
        dataToSign = hashes;
        dataToSignLen = SSL_SHA1_DIGEST_LEN + SSL_MD5_DIGEST_LEN;
        hashOut.data = hashes;
        hashOut.length = SSL_MD5_DIGEST_LEN;
        
        if ((err = ReadyHash(&SSLHashMD5, &hashCtx)) != 0)
            goto fail;
        if ((err = SSLHashMD5.update(&hashCtx, &clientRandom)) != 0)
            goto fail;
        if ((err = SSLHashMD5.update(&hashCtx, &serverRandom)) != 0)
            goto fail;
        if ((err = SSLHashMD5.update(&hashCtx, &signedParams)) != 0)
            goto fail;
        if ((err = SSLHashMD5.final(&hashCtx, &hashOut)) != 0)
            goto fail;
    }
    else {
        /* DSA, ECDSA - just use the SHA1 hash */
        dataToSign = &hashes[SSL_MD5_DIGEST_LEN];
        dataToSignLen = SSL_SHA1_DIGEST_LEN;
    }

    hashOut.data = hashes + SSL_MD5_DIGEST_LEN;
    hashOut.length = SSL_SHA1_DIGEST_LEN;
    if ((err = SSLFreeBuffer(&hashCtx)) != 0)
        goto fail;

    if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &clientRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;

    err = sslRawVerify(ctx,
                       ctx->peerPubKey,
                       dataToSign,                /* plaintext */
                       dataToSignLen,            /* plaintext length */
                       signature,
                       signatureLen);
    if(err) {
        sslErrorLog("SSLDecodeSignedServerKeyExchange: sslRawVerify "
                    "returned %d\n", (int)err);
        goto fail;
    }

fail:
    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;
}



one of the things you might notice is that all roads lead to "fail:", meaning "fail:" isn't really just for failures, it's for clean-up.

another thing you might notice is that the final "goto fail;" doesn't actually bypass any code - it's completely redundant and if it weren't there the next thing to execute would still be the code after the "fail:" label.

the first thing we're going to try is the most obvious approach to refactoring this function, to get rid of GOTO by making proper use of IF.

static OSStatus SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams, uint8_t *signature, UInt16 signatureLen)
{
    OSStatus        err;
    SSLBuffer       hashOut, hashCtx, clientRandom, serverRandom;
    uint8_t         hashes[SSL_SHA1_DIGEST_LEN + SSL_MD5_DIGEST_LEN];
    SSLBuffer       signedHashes;
    uint8_t         *dataToSign;
    size_t          dataToSignLen;

    signedHashes.data = 0;
    hashCtx.data = 0;

    clientRandom.data = ctx->clientRandom;
    clientRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;
    serverRandom.data = ctx->serverRandom;
    serverRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;


    if(isRsa) {
        /* skip this if signing with DSA */
        dataToSign = hashes;
        dataToSignLen = SSL_SHA1_DIGEST_LEN + SSL_MD5_DIGEST_LEN;
        hashOut.data = hashes;
        hashOut.length = SSL_MD5_DIGEST_LEN;
        
        if ((err = ReadyHash(&SSLHashMD5, &hashCtx)) == 0) {    
            if ((err = SSLHashMD5.update(&hashCtx, &clientRandom)) == 0) {    
                if ((err = SSLHashMD5.update(&hashCtx, &serverRandom)) == 0) {    
                    if ((err = SSLHashMD5.update(&hashCtx, &signedParams)) == 0) {    
                        if ((err = SSLHashMD5.final(&hashCtx, &hashOut)) == 0) {    
                            hashOut.data = hashes + SSL_MD5_DIGEST_LEN;
                            hashOut.length = SSL_SHA1_DIGEST_LEN;
                            if ((err = SSLFreeBuffer(&hashCtx)) == 0) {    
                                if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) == 0) {    
                                    if ((err = SSLHashSHA1.update(&hashCtx, &clientRandom)) == 0) {    
                                        if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) == 0) {    
                                            if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) == 0) {    
                                                if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) == 0)    {    
                                                    err = sslRawVerify(ctx,
                                                                       ctx->peerPubKey,
                                                                       dataToSign,                /* plaintext */
                                                                       dataToSignLen,            /* plaintext length */
                                                                       signature,
                                                                       signatureLen);
                                                    if(err) {
                                                        sslErrorLog("SSLDecodeSignedServerKeyExchange: sslRawVerify "
                                                                    "returned %d\n", (int)err);
                                                    }
                                                }
                                            }
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
    else {
        /* DSA, ECDSA - just use the SHA1 hash */
        dataToSign = &hashes[SSL_MD5_DIGEST_LEN];
        dataToSignLen = SSL_SHA1_DIGEST_LEN;
        hashOut.data = hashes + SSL_MD5_DIGEST_LEN;
        hashOut.length = SSL_SHA1_DIGEST_LEN;
        if ((err = SSLFreeBuffer(&hashCtx)) == 0) {    
            if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) == 0) {    
                if ((err = SSLHashSHA1.update(&hashCtx, &clientRandom)) == 0) {    
                    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) == 0) {    
                        if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) == 0) {    
                            if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) == 0) {    
                                err = sslRawVerify(ctx,
                                                   ctx->peerPubKey,
                                                   dataToSign,                /* plaintext */
                                                   dataToSignLen,            /* plaintext length */
                                                   signature,
                                                   signatureLen);
                                if(err) {
                                    sslErrorLog("SSLDecodeSignedServerKeyExchange: sslRawVerify "
                                                "returned %d\n", (int)err);
                                }
                            }
                        }
                    }
                }
            }
        }
    }

    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;

}

as you can see this version of the function is quite a bit longer as well as being deeply nested. this is the kind of code that actually makes programmers think the use of GOTO isn't as bad as their teachers told them it was, because that deep nesting makes the function seem more complex and more difficult to read. on top of which there is a considerable amount of duplicated code. neither of these things is appealing to programmers because they make reading and maintaining the code more work.

however, this is the most simple-minded and unimaginative way to refactor the original function. if we were to also tackle that complex pattern used in virtually all of the IF statements at the same time as getting rid of the GOTOs, we would instead get something like this:

static OSStatus SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams, uint8_t *signature, UInt16 signatureLen)
{
    OSStatus        err;
    SSLBuffer       hashOut, hashCtx, clientRandom, serverRandom;
    uint8_t         hashes[SSL_SHA1_DIGEST_LEN + SSL_MD5_DIGEST_LEN];
    SSLBuffer       signedHashes;
    uint8_t         *dataToSign;
    size_t          dataToSignLen;

    signedHashes.data = 0;
    hashCtx.data = 0;
    err = 0;
    
    clientRandom.data = ctx->clientRandom;
    clientRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;
    serverRandom.data = ctx->serverRandom;
    serverRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;


    if(isRsa) {
        /* skip this if signing with DSA */
        dataToSign = hashes;
        dataToSignLen = SSL_SHA1_DIGEST_LEN + SSL_MD5_DIGEST_LEN;
        hashOut.data = hashes;
        hashOut.length = SSL_MD5_DIGEST_LEN;
        
        err = ReadyHash(&SSLHashMD5, &hashCtx);
        if (err == 0)
            err = SSLHashMD5.update(&hashCtx, &clientRandom);
        if (err == 0)
            err = SSLHashMD5.update(&hashCtx, &serverRandom);
        if (err == 0)
            err = SSLHashMD5.update(&hashCtx, &signedParams);
        if (err == 0)
            err = SSLHashMD5.final(&hashCtx, &hashOut);
    }
    else {
        /* DSA, ECDSA - just use the SHA1 hash */
        dataToSign = &hashes[SSL_MD5_DIGEST_LEN];
        dataToSignLen = SSL_SHA1_DIGEST_LEN;
    }

    if(err == 0) {
        hashOut.data = hashes + SSL_MD5_DIGEST_LEN;
        hashOut.length = SSL_SHA1_DIGEST_LEN;
        err = SSLFreeBuffer(&hashCtx);
    }
    if (err == 0)
        err = ReadyHash(&SSLHashSHA1, &hashCtx);
    if (err == 0)
        err = SSLHashSHA1.update(&hashCtx, &clientRandom);
    if (err == 0)
        err = SSLHashSHA1.update(&hashCtx, &serverRandom);
    if (err == 0)
        err = SSLHashSHA1.update(&hashCtx, &signedParams);
    if (err == 0)
        err = SSLHashSHA1.final(&hashCtx, &hashOut);
    if (err == 0) {
        err = sslRawVerify(ctx,
                       ctx->peerPubKey,
                       dataToSign,                /* plaintext */
                       dataToSignLen,            /* plaintext length */
                       signature,
                       signatureLen);
        if(err) {
            sslErrorLog("SSLDecodeSignedServerKeyExchange: sslRawVerify "
                        "returned %d\n", (int)err);
        }
    }
    
    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;
}



not only does this follow almost exactly the same format as the original function (thereby retaining its readability), it also makes the condition checking simpler and easier to read, and it has virtually the same number of lines of code as the original.

combining assignment and equality tests into a single-line IF statement was clearly intended to reduce the overall size of the source code, but it failed at that, and in the process it made the code more complex and difficult to read. the combined assignment/condition checking and the GOTO statements were complementary: they supported each other and jointly contributed to the complexity of the original function.

this third version of the function, by contrast, has neither complex expressions nor the potential for complex control flow. the only real complaint one might make is that after an error occurs in one of the many steps in the function, the computer still needs to perform the "if (err == 0)" check numerous times. however, that is only true if the compiler can't optimize that code, and checking the same variable against the same constant value over and over again seems like exactly the kind of pattern a compiler's optimization routines might be able to detect and do something about.

complexity is the worst enemy of security, sloppiness begets complexity, and GOTO is a crutch for sloppy, undisciplined programmers - it is part of that sloppiness and contributes to that complexity, even when it's supposedly used the right way. what i did above isn't rocket science or magic. the same basic technique can be used in any case where GOTO is used to jump forward in the code (if you're using it to jump backward then god help you). the excuses people trot out for the continued use of GOTO not only make them sound like dumb-asses, they also lead lesser programmers to try to follow their lead and do much worse at it. it is never used as sparingly as the gurus think it should be, and even their own examples occasionally contain redundant invocations of it, thoughtlessly applied.
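to make that recipe concrete, here's a minimal sketch of the transformation applied to the typical forward-jumping GOTO pattern. the function and step names are made up purely for illustration - this isn't anybody's real code:

/* hypothetical steps and clean-up - stand-ins for whatever real work needs doing */
int step_one(void);
int step_two(void);
int step_three(void);
void cleanup(void);

/* before: forward jumps to a shared clean-up label */
int do_work_with_goto(void)
{
    int err;

    if ((err = step_one()) != 0)
        goto fail;
    if ((err = step_two()) != 0)
        goto fail;
    if ((err = step_three()) != 0)
        goto fail;

fail:
    cleanup();
    return err;
}

/* after: each step is guarded by the error status, and the clean-up still always runs */
int do_work_structured(void)
{
    int err;

    err = step_one();
    if (err == 0)
        err = step_two();
    if (err == 0)
        err = step_three();

    cleanup();
    return err;
}

the control flow is identical in both versions; the only thing lost is the jump.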

if you actually work at adhering to structured programming rather than abandoning it the moment the going gets tough, you will eventually learn ways to make it just as easy as unstructured programming, you'll be a better programmer for having done so, and your programs will be less complex, easier to validate, and ultimately more secure.

Tuesday, March 11, 2014

the case against GOTO in security

i could have made this longer but i have a feeling it might be more powerful in this form.

there is no programming construct that offers more freedom and flexibility than GOTO. consequently, no programming construct carries with it a greater potential for complexity.

since "complexity is the worst enemy of security", it follows that GOTO should be considered harmful to, and an enemy of, security.

i'm surprised more people haven't made this connection, or that it hasn't seen more mainstream attention. whatever else you may think of GOTO in regular software, in security-related software this has to be an added consideration. the traditional taboos against GOTO that Larry Seltzer identified may not be entirely rational, but i tend to think the security taboo against complexity is.


Tuesday, February 25, 2014

goto fail, do not pass go, do not collect your next paycheck

by now you've probably heard about the rather widely reported SSL bug that Apple quietly dropped on friday afternoon, teasing security researchers into finding out what was up. if not, the gist of it is that the C code Apple used for verifying signatures used in SSL had what appears to have been a copy&paste error that broke the security and allowed people to read your supposedly secure traffic. literally there were 2 lines that said "goto fail;" when there should only have been one. now i'm not about to make a big deal about copy&paste errors because that can legitimately happen to anyone, but i am going to make a big deal about the content of that copy&paste error.
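for reference, this is the heart of it (the same lines appear in the full listing in the refactoring post above). the indentation makes the second "goto fail;" look like it belongs to the IF, but it doesn't - it executes unconditionally:

    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;    /* always executes; err is still 0 at this point, so the final
                         hash step and sslRawVerify are skipped and the function
                         reports success without ever checking the signature */
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;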

the overall lack of acknowledgement (and in some cases denial) that the use of goto represents a deeper problem in Apple's application security is itself suggestive of a failure to recognize a fundamental principle: in software, quality is the foundation upon which security must stand.

goto is representative of the kind of spaghetti code we had before the introduction of structured programming approximately 50 years ago. no, that's not a typo. goto has been falling out of favour for about half a century and when i saw how much it was used in Apple's code it raised a red flag for me. every programmer i broached the subject with similarly found it concerning - including my boss who admitted he hasn't coded in C in over 30 years. he wondered, as do i, if the programmer responsible for the code in question still has a job at Apple.

you may think me rehashing a decades-old debate, and perhaps i am - i wouldn't know, i never read any of that stuff - Edsger Dijkstra's letter "Go To Statement Considered Harmful" was published about 7 years before i was born. what i'm not doing, however, is mindlessly repeating dogma. we're interested today in application security and, as we have already covered, that requires software quality. structured programming produces software that is higher quality, easier to read, easier to model, easier to audit, easier to prove, etc. than unstructured programming.

why is this so? to answer that we need to think about what "structured programming" means. it is the nature of structure (all structure) to serve as a kind of constraint. your bones, for example, provide structure for your body and in so doing limit the way your body can move and bend. the support pillars for a bridge provide structure for that bridge and limit the extent to which the bridge can move and bend (yes, they bend and flex, but if the structure is doing its job they only flex a little). likewise, code that follows the structured programming paradigm is constrained such that program control flows in a more limited number of ways. reducing the number of possibilities for program control flow makes it easier to predict (almost by definition) how a block of code's control will flow with just a quick glance. fewer possibilities mean it's easier to know what to expect. i'm sure you've seen the same effect with the written word. just like it's easier to read and understand sentences made with familiar words and phrases and following a few familiar construction patterns, the same is true for reading and understanding code as well. it's just another language, after all.

that reduction of possibilities also reduces the complexity of the code, which makes building a mental model of an arbitrary block of code easier. the constructs that make structured programming what it is lend themselves more naturally to abstraction since random lines of code within them are unlikely to cause program control to jump to random other places in the code. making it easier to build a mental model of the code makes it easier to formally prove the code's correctness because it's easier to describe the algorithm you're trying to prove. less formally, greater ease in building accurate mental models of the code means that it's easier to anticipate outcomes, especially unwanted ones that you want to eliminate, before they happen because it becomes easier to run through those possibilities in your head.

finally, both the greater ease of reading/understanding and the greater ease of modeling benefit efforts of others to review or audit the code. they're really only doing the same thing the programmer him/herself would have done by reading and understanding the code, creating a mental model of it, and trying to anticipate unwanted outcomes.

i work as a programmer professionally in a small company with not a lot of resources. we fly by the seat of our pants in some ways, but if someone asked me to review code that relied as much on goto as this source file from Apple, i wouldn't accept it. i'm surprised that code like that is able to survive for so long in a company with as many resources as Apple has. it makes me wonder about the programming culture within the company and it reminds me of a talk Jacob Appelbaum gave not too long ago where he accused them of writing shitty software. sure code reviews and more rigorous testing might have found the copy&paste error that sparked this off, but those processes don't add quality, they subtract problems. it's still a garbage-in/garbage-out sort of scenario so there's only so much they can do to affect the quality of Apple's software. quality has to go into those filters before you can get quality out.

i've often heard it said that regular programmers typically don't understand the nuances involved in writing secure code, especially when it comes to crypto, and having seen programmers more senior than myself flub crypto code i can certainly agree with that sentiment. that being said, i think it's probably also true that regular security people typically don't understand the nuances involved in writing quality code. since quality is a prerequisite for security, it's just as important for a programmer responsible for security-related code to have mastered the coding practices and techniques that lead to quality software as it is for them to understand secure software development.

i'll concede that it may well be possible for a master programmer to produce high quality, highly readable code that relies as heavily on goto as Apple's programmers appear to; but, as "The Tao Of Programming" satirically points out, you are not a master programmer, almost none of you are, so stop pretending and learn the lessons they've been trying to teach for the past 50 years.

(and now i'll go read Dijkstra's letter. maybe this is a rehash, but even if it is, that wouldn't make it wrong)

(updated to fix the spelling of Jacob Appelbaum's name. thanks to Martijn Grooten)

Saturday, November 02, 2013

AV complicity explained

earlier this week i wrote a post about the idea of the AV industry being somehow complicit in the government spying that has been all over the news for months. some people seemed to really 'get it' while others, for various reasons, did not; so i thought i'd try to be a little more clear about my thoughts on the subject.

the question that the EFF et al have put towards the AV industry (besides having already been asked and answered some years ago) is a little banal, a little pedestrian, a little sterile. real life is messy and complicated and things don't always fit into neat little boxes. i wanted to try to get people to think outside the box with respect to complicity, what it means, what it would look like, etc. but i think some people have a hard time letting go of the straightforward question of complicity that has been put forward so let's start by talking about that.

has the NSA (or other organization) asked members of the AV industry to look the other way and has the AV industry (or parts thereof) agreed to that request? almost certainly the NSA has not made such a request, for at least a few reasons:

  1. telling people about your super-secret malware is just plain bad OpSec. if you want to keep something secret, the last thing you want to do is tell dozens of armies of reverse engineers to look the other way.
  2. too many of the companies that make up the AV industry are based out of foreign countries and so are in no way answerable to the NSA or any other single intelligence organization.
  3. there's quite literally no need. there are already well established techniques for making malware that AV software doesn't currently detect. commercial malware writers have been honing this craft for years and it seems ridiculous to suggest that a well-funded intelligence agency would be any less capable.


now while it seems comical that such a request would be made, to suggest that the AV industry would agree to such a request would probably best be described as insulting. whatever you might think of the AV industry, there are quite a few highly principled individuals working in it who would flat out refuse, in all likelihood regardless of what their employer decided (in the hypothetical case that the pointy-haired bosses in AV aren't quite as principled).

now please feel free to enjoy a sigh of relief over the fact that i don't think the AV industry has secretly agreed to get into bed with the NSA and help them spy on people.

done? good, because now we're going to take a deeper look at the nature of complicity and the rest of this post is probably not going to be nearly as pleasant.

here's one of the very first things wikipedia has to say about complicity:
An individual is complicit in a crime if he/she is aware of its occurrence and has the ability to report the crime, but fails to do so. As such, the individual effectively allows criminals to carry out a crime despite possibly being able to stop them, either directly or by contacting the authorities, thus making the individual a de facto accessory to the crime rather than an innocent bystander.

in the case of government spying we may or may not be talking about a crime. the government says they broke no law and observers speculate that that may be because they've subverted the law (much like they subverted encryption algorithms). so let's consider a version of this that relates to ethical and/or moral wrong-doing instead of legal wrong-doing:
an individual is complicit in wrong-doing if he/she is aware of its occurrence and has the ability to alert relevant parties but fails to do so. as such, the individual effectively allows immoral or unethical people to carry out their wrong-doing despite possibly being able to stop them either directly or by alerting others who can, thus making the individual a de facto accessory to the wrong-doing rather than an innocent bystander.

in this context, could the AV industry be complicit with government spying? perhaps not directly, not in the sense that they saw what the government was doing and failed to alert people to that wrong-doing. however, what about a different wrong-doing by a different entity but still related to the government spying?

hbgary wrote spyware for the government. this became public knowledge in the beginning of 2011. by providing the government with tools to perpetrate spying they become accessories to that spying.

hbgary was and is a partner of mcafee. now what is the nature of this partnership? hbgary is an integration partner. they make technology that integrates into mcafee's endpoint security product to extend its functionality. mcafee does marketing/advertising for this technology and by extension for hbgary, giving them exposure, lending them credibility, and generally helping them make money. that money is almost certainly re-invested into research and development of hbgary's products, which include governmental malware that's used for spying on people/organizations. there are mcafee customers out there right now whose security suite includes components that were written by known malware writers and endorsed by mcafee (although they make sure to weasel out of responsibility for anything going wrong with those components with some fine print). mcafee didn't break off the partnership when hbgary's status as an accessory to government spying became known, and since they didn't break off the partnership you can probably make a safe bet that they didn't warn those customers that part of their security suite was made by people aiding the government in spying either. even if we ignore the fact that mcafee aids a business that writes malware for the government, mcafee's failure to raise the alarm about the possible compromising nature of any content provided by hbgary makes them accessories to hbgary's wrong-doing. by breaking ties with hbgary and warning the public about what hbgary was up to they could have had a serious impact on hbgary's cash flow and hurt their ability to win contracts and/or execute on their more offensive espionage-assisting projects. they didn't do any of that and that makes them complicit in the sense discussed a few paragraphs earlier.

the rest of the AV industry may not be directly aiding hbgary's business but, like mcafee, they have failed to raise any alarm about hbgary. they could have done much the same as mcafee by warning the public, with the added bonus that they would have hurt one of the biggest competitors in their own industry while they were at it and that would have benefited all of them (except mcafee, of course). again, failing to act to help prevent wrong-doing makes them a de facto accessory to that wrong-doing. the AV industry as a whole is complicit in the sense discussed earlier.

of course, the AV industry isn't alone in being accessories to an accessory to government spying, and that brings up a consideration that should not be overlooked because there is a larger context here. historically, the culture of the AV industry has been one that values being very selective in things like who to trust, who to accept into certain groups, etc. add to that a very narrowly defined mission statement (to fight viruses and other malware) and it's little wonder that the ethical boundaries that developed in the early days were so dead-set against hiring, paying, or doing anything else that might assist malware writers or possibly promote malware writing. heck, i knew one member who wouldn't even engage virus writers in conversation, and another who said he was wary of hiring anyone who already knew about viruses just in case they came by that knowledge through unsavoury means. aiding malware writers, turning a blind eye to their activities, etc. are things that normally would have violated AV's early ethical boundaries.

by contrast, the broader security industry is highly inclusive and has long viewed the AV industry's selectivity as unfair elitism. that inclusivity means that the security industry isn't actually just one homogeneous group. there are many groups, from cryptographers to security operations personnel to vulnerability researchers to penetration testers, etc. each one has its own distinct mission statement and its own code of ethics. what do you think you get from a highly inclusive melting pot of security disciplines? well, in order for them to tolerate each other, one necessary outcome is a very relaxed ethical 'soup'. many quarters openly embrace the more offensive security-related disciplines such as malware creation. in order for AV to integrate into this broader security community (and they have been, gradually, over time), AV has to loosen its own ethical restrictions and be more accepting.

so while the AV industry failed to raise the alarm about hbgary, the broader security industry failed as well. the difference is that ethics in the security industry don't necessarily require raising an alarm over what was going on. hbgary is a respected company in security industry circles and its founder greg hoglund is a respected researcher whose proclivity for creating malware has been known for a long, long time. as far as the security industry is concerned, hbgary's activities don't necessarily qualify as ethical wrong-doing. there will probably be those who think it does, but in general the ethical soup will be permissive enough to allow it, and without being able to call something "wrong-doing" there can be no complicity. this is where AV is going as it continues to integrate into the broader security community. in fact it may be there already. maybe that's the reason they didn't raise the alarm - because they've become ethically compromised, not as a result of a request from some intelligence organization, but as a result of trying to fit in and be something other than what they used to be.

in the final analysis, if you were hoping for a yes or no answer to the question of whether AV is in any way complicit in the spying that the government has been doing (specifically, the spying done using malware), i'm afraid you're going to be disappointed. it depends. based on AV's earlier ethics the answer would probably be yes. based on the security community's ethics the answer may well be no. where is the AV industry now? somewhere between what they were and what the broader security community is. ethical relativity is unfortunately a significant complicating factor. then again, i'm an uncompromising bastard, so i say "yes" (after all, i did grow up with those old-school ethics).

Tuesday, October 29, 2013

what would AV's complicity in government spying look like?

as you may well have heard, the EFF and a bunch of security experts have written an open letter to the AV industry asking about any possible involvement by them in the mass spying scandal that has been in the headlines for much of this year. at first i thought this was old news for AV, since the issue of government trojans has actually been around a lot longer than the current spying revelations. i thought these people had simply failed to do their homework but, as time passed, the wheels began to turn and i started thinking differently. now i think the question we should all be asking ourselves is, what would AV's complicity look like?

some background, first. the subject of government trojans has been around for over a decade. magic lantern, for example, dates back to 2001 (or at least public awareness of it does). so it should come as little surprise that the question of whether the AV industry looks the other way has come up before. in 2007 cnet ran a story where 13 different vendors were asked about this very thing. they all more or less denied being a party to such shenanigans, but i suggest you read the article and pay careful attention to the answers.

now earlier this year one of the first controversial spying revelations to come about was about a program called PRISM which a whole bunch of well known, big name internet companies (including google, microsoft, yahoo, facebook, etc) were apparently involved with. the companies all denied it of course, and it turns out they may be legally required to do so.

that adds an interesting wrinkle to the question now being put towards the AV industry; would they be allowed to admit to any complicity that might be going on? they say actions speak louder than words, so maybe we should look for something other than the carefully crafted assurances of multi-million dollar corporations. maybe what we should be looking for is the same thing that alerted us to the mass spying in the first place - a leak. maybe then we can get a glimpse of their actions.

back in early 2011 a rather spectacular breach occurred. security firm hbgary was breached by some members of anonymous, and one of the things that leaked out was the fact that hbgary wrote malware for the government. in fact, it doesn't take much imagination to suppose that this would be the very type of malware the EFF et al are concerned the AV industry may have been asked to ignore.

it's unknown whether any AV vendor actually did field such a request. i have my doubts since traditional commercial malware writers seem to be perfectly capable of creating undetected malware without making such requests. that being said, one fact that became rather suspicious in light of the revelations about hbgary was the fact that they were partners with mcafee, one of the biggest AV vendors around and certainly one of the best known names in AV. i wrote about this apparent ethical conflict back in february of 2011, and then again in march of 2011 to note the tremendous non-reaction from the industry. i even went so far as to create a blog specifically for keeping an eye on the industry (though as an outsider myself there was little i could do on my own).

the EFF and others want to know if the AV industry has been complicit in the government's spying. well, one AV vendor was notably evasive when asked by cnet in 2007 about their handling of governmental trojans/police spyware. that same AV vendor was and still is partnered with a company that wrote government malware (in all likelihood for the very purpose in question). furthermore, in the intervening years, nothing has come of it. no other vendor has said anything or done anything to call attention to or raise awareness of this partnership. even after the mass surveillance controversy started earlier this year, not a one bothered to raise the alarm and suggest that mcafee might at least in principle be compromised by that partnership, even though they certainly could have benefited from disrupting mcafee's market share. no one thought they could profit from it? no one thought it was their duty to warn people of a potential problem? to raise concerns that the protection mcafee's customers receive may suffer in some way because of their close ties with government malware writers? to give voice to the doubts this partnership creates, even after publicly wringing their hands over how wrong the government's own actions were?

AV vendors may or may not have been asked to turn a blind eye to government malware - we may never know, and it's impossible to prove a negative. but they've done a heck of a job turning a blind eye to the people who make government malware and to those in their own ranks who got in bed with government malware writers. i asked at the beginning what AV complicity would look like and i think when it comes to those whose job it is to raise an alarm, complicity would probably have to look like silence (and something about silence makes me sick).

(2013-10-29 13:21 - updated to change the open letter link to point to the blog post that includes the list of intended recipients as well as a link to the letter itself)

Wednesday, October 16, 2013

my experiences at #sectorca in 2013

well, another year, another sector conference. i almost got another of my colleagues at work to go too (an actual security operations sort of guy at that) but in the end it didn't happen. i'm going to have to see if there's anything more i can do to make it happen next year. in fact, i'm pretty sure some of the folks at work would have preferred if i hadn't gone either (just so much to do) but it was already paid for, so...

the first thing that struck me this year (aside from the great big gaping hole where the street around union station used to be) was that the staff at the metro toronto convention center could accurately guess where i was trying to go just by looking at me. i guess that must mean i look like i belong with the crowd of other sector attendees, even if i've never really felt like i do (what with not being an information security professional and all).

the second thing that struck me was the badge redesign. more space was dedicated to the QR code than to the human readable name. almost as if my interactions with machines are more important than my interactions with people.

the first keynote of day one was "how the west was pwned" by g. mark hardy. i suppose it was a kind of cyberwar talk (that's certainly how it was introduced), but really focused more on economic/industrial espionage, theft of trade secrets and intellectual property and that sort of thing. there were some interesting bits of trivia, like china's cyber warrior contingent having a comparable number of people to the entire united states marine corps. also an interesting observation about the global form of government (that being the system that governs us on a global scope rather than simply within our own nations) being anarchy. i'd never thought of it that way before, but there really isn't anyone governing how nations interact with each other or how people interact with foreign nations.

the first normal talk of day one that i attended was a so-called APT building talk. specifically it was "exploiting the zero'th hour: developing your advanced persistent threat to pwn the network" given by solomon sonya and nick kulesza. i kinda knew going in that this wasn't going to be the best quality APT talk just by the title. they clearly believe APT is simply a kind of advanced malware rather than realizing that APT is people. i can't say references to "the internet cloud" improved my opinion any. add to that the fact that anyone who took an undergrad systems programming course would have recognized most of the concepts they were talking about and i was pretty "meh" about the talk. the rest of the audience, however, was clearly very impressed based on the applause. all but one, that is. he called them out on their amateurish malware (about the only part of the APT acronym they got right was persistent, and even that is debatable). he also called them out on their releasing of malware (i swear he wasn't me, even though it probably seems like something i would do) that really wouldn't help anyone defend but certainly would help arm the shallower end of the attacker gene pool. i quite agreed with his opposition, but the applause again from the rest of the audience when one of the speakers said he could sleep quite well at night made it clear who the community was siding with here.

that all left a bad taste in my mouth so i decided to skip the next round of talks. that wasn't a difficult decision to make since the entire time-slot was filled with sponsored talks which i've long found to be a disappointment. so instead i took the time to look around and see what and who i could see.

i happened to luck out and stumble across chris hoff. i'm not entirely sure he remembered/recognized me but that doesn't come as a huge surprise since i'm not the most memorable person in the world and my appearance has changed significantly since the days when he did remember/recognize me. also, and perhaps more to the point, someone like chris has got to get approached by so many people that there'd be no way he could remember them all. that's part of being a "security rock star". anyway, we chatted briefly and he asked me if i was a speaker or listener. i'm definitely not a speaker and i told him i've sorta been down the speaking path before and it didn't work out so well (part of being on a panel involves speaking, right?). he shared an anecdote of his own which frankly put my bad experience to shame. still, if i went to the effort to develop that skill, what would i do a talk about? "everything you know about anti-virus is wrong"? i expect that would go over about as well as a lead balloon. my specialty is in something that has little or no respect in the information security community, so even if i did by some miracle make it past the CFP stage, i can't imagine there'd be much of a turn-out.

after that i saw a familiar face i never would have expected. an old colleague from work, joel campbell, who i gather now works at trustwave and was manning their booth on the expo floor. we chatted a bit about work of course, but also about security conferences like sector and how they compare with some of the ones in the states. sector is apparently small, which rationally i knew since i did once attend RSA, but i guess with little else to compare it to in more recent times, sector seems big to me.

the lunch keynote given by gene kim about DevOps interested me in a "i know someone who'd probably be interested in this" sort of way. i can't wait for the video to become available so i can share it with some of my higher-ups in the dev department at work (we do have an ops guy sort of embedded with us devs, i wonder what DevOps would say about that). there was also a very interesting observation about human nature; apparently when we break promises we compensate by making more promises that are even bolder and less likely to be kept. i think i've seen that play out on more than one occasion.

after lunch i attended kelly lum's talk ".net reversing: the framework, the myth, the legend", which was pretty good despite the original recipe bugs that kept her distracted at the beginning. i actually saw a .net hacking talk last year as well (i'm a .net developer, it stands to reason i'd be interested in knowing how people can attack my work) but this one spent less time talking about all the various gadgets you could use to attack .net programs and more time talking about the format such that one could possibly use it as a starting point for creating one's own .net reverse engineering tools. that'll certainly be filed away for future reference.

following that i attended leigh honeywell's talk "threat modeling 101", only it wasn't really a talk. this was one of the more inventive uses of the time-slots speakers are given, as she actually had us break up into groups to play a card game called elevation of privilege. it's quite an interesting approach to teaching people to think about various types of attacks and i've already talked about the game at work and shared some links. hopefully i can get some of my coworkers to play.

for the last talk of day 1 i attended "return of the half schwartz fail panel" with james arlen, mike rothman, dave lewis, and ben shapiro. this was apparently a follow-up of a previous fail panel that i never saw, but that didn't seem to matter because this one didn't seem to reference it at all. i didn't find it particularly cohesive, i guess because the only common theme it was designed to have running throughout was failure, but one interesting thing i took away was the notion of venture altruism. it's a different way of looking at things than i'm used to as i tend to frame things more as 'noblesse oblige', but it certainly appears as though quite a few people really do have their hearts in the right place in that they're trying to make the world a better place in their own particular, security-centric way.

i decided to opt out of the reception afterwards. i felt guilty about it because i know i really ought to have gone but the truth is that in all the times i've gone before i've never really felt comfortable among all those strangers in a purely social environment. plus there was last year's (and possibly other years as well, but definitely last year) shenanigans where your badge would get scanned in order for you to get drink tickets, and then the company doing the scanning would send you email as though you had actually shown interest in them and visited their booth. i know the conference is an important tool for generating leads for sales, but over drink tickets? really? i suppose if they're paying for the drinks then it's hard to argue against them getting your contact info in return, but at least when facebook asks you to trade your privacy for some reward you have some kind of idea that that's what's going on. it made participating in the reception feel like bad OpSec; and you know, if you add enough disincentives together you're eventually going to inhibit behaviour.

the day 2 morning keynote was another panel, and if i'd gotten the impression from the fail panel that panels lacked cohesion, this one dispelled it. "crossing the line; career building in the IT security industry" with brian bourne, leigh honeywell, gord taylor, james arlen, and bruce cowper as moderator focused very strongly on the issue of crossing legal, ethical, and moral lines and whether that was necessary to get ahead and be taken seriously in security. i came into the keynote thinking it would be more about career building (which hasn't been that interesting to me in the past since i'm perfectly happy not being in InfoSec) but the focus on the law, ethics, and morals is much more interesting to me as the frequent mentions of ethics on this blog could probably attest to. i was pleased to see both leigh and gord take the position that crossing those lines is not necessary and holding themselves up as examples. james was careful to point out that those lines are not set in stone (they're "rubber" as he put it, though he also made a point that that doesn't mean they aren't well defined), and certainly there's a point there, at least with the relevancy of the law, as there are some really poorly written laws as well as some badly abused laws (as the prosecution of aaron swartz certainly highlights). of course as the amateurish malware distributors from day 1 demonstrated, crossing ethical and moral lines is still widely accepted and embraced in the information security community. one might want to draw a comparison between that and the lock pick village which teaches people how to breach physical security, but the lock picking at least has a dual use (beyond simple education) in that it allows you to regain access to things that you have a legal right to but would otherwise be unable to access because you lost a key, for example. the AV community was historically much more stringent about not crossing those lines, and much closer to having (or at least implicitly obeying) a kind of hippocratic oath; and having literally grown up with that influence i'm certainly in favour of it, though when leigh mentioned the hippocratic oath it did not seem that well received. james pointed out that ISC^2 has a rule against consorting with hackers and yet gives credits for attending hacker conferences - which to me just makes them seem like they're either hypocrites or toothless. i could probably write an entire post about this topic alone, or rather another entire post about this topic since i already did once years ago that's kind of begging for a follow-up.

the first regular talk i attended the second day was schuyler towne's "how they get in and how they get caught", which turned out to be a lock picking forensics talk (in the security fundamentals track, no less). after having seen a number of talks about lock picking over the years, seeing one on detecting that lock picking has occurred rounded things out really nicely. the information density for the talk was high, there was even a guy in front of me taking picture after picture of the diagrams being shown on the screen, but schuyler is really passionate about the subject matter and did a good job of keeping the audience's interest in spite of all the details and photos of lock parts under high magnification.

after that talk i finally relented and attended one of the sponsored talks, specifically "the threat landscape" by ross barrett and ryan poppa of rapid7. i suppose it's only fitting that a vendor would hand out buzzword bingo sheets. certainly it's good that they acknowledge that as vendors they're expected to throw out a lot of buzzwords. but i think it kind of backfired for the talk because rather than paying attention to what they were saying i found myself paying attention to what buzzwords i could cross off my sheet. buzzword bingo is a funny joke, but if you make it real i think you wind up sabotaging your talk. on the other hand, perhaps that acts as a proxy for actual engagement of the audience, so that people will come away feeling better about the talk than they otherwise might have.

the lunch keynote by marc saltzman was really more entertainment than information. flying cars? robots? virtual reality? ok. lunch was good, though.

after lunch i attended an application security talk given by gillis jones. this one wasn't in the schedule so i can't look up the actual name of the talk. it replaced james arlen's "the message and the messenger" which i've already seen on youtube. i guess whenever they say app sec they must be talking about web application security, because i can't say i've seen much in the way of winform application security talks (unless .net reversing counts). i'm not a web guy, i don't do web application development (yet) so i sometimes find myself out of my depth, but (perhaps because it was in the security fundamentals track) gillis approached the topic in a way that would help beginners understand, and i certainly feel like i have a better handle on some of the topics he covered. in fact, i started trying to find XSS vulnerabilities at work the very next day.

for the final talk of the conference i attended todd dow's "cryptogeddon" which was a walk-through of a cyber wargame exercise. it had a very class-room like approach to working through a set of clues in order to gain access to an enemy's resources. that format works well, i think, and i can see why educators would want to use todd's materials for their classes.

and that was pretty much my experience of sector 2013. it's taken me several days to write this up - certainly enough time for me to come down with the infamous "con-flu", but i never do. i'm not certain, but i have a feeling that my less social nature makes me less likely to contract it somehow. i don't shake as many hands, or collect as many cards, or stand face to coughing/sniffling/sneezing face with as many people as some of the more gregarious attendees do.

Wednesday, August 07, 2013

if google gives you security advice, get a second opinion

originally posted on secmeme
remember back when google's chrome browser was shiny and new and their vaunted sandboxing technology didn't actually place plug-ins in a sandbox even though the plug-ins, with their existing body of vulnerabilities and research, would have been the most likely vector of attack for a brand new browser? seems like kind of a glaring oversight, right?

and who could forget google's chris dibona ranting about android not needing anti-malware and sellers of such products being scammers and charlatans? of course now google themselves are hard at work trying to stem the tide of android malware with things like bouncer, but that's far from perfect.

heck, even google's infamous tavis ormandy had to take a second stab at executing his sophail vendetta* because his first attempt was so laughably bad.
[* i refer to it as a vendetta because a) it followed then sophos representative graham cluley publicly chewing tavis ormandy out for what has since become official google policy (disclosing vulnerabilities after a ridiculously short period of time), and b) the entire sophail effort from start to finish spanned years.]

now comes news that google's chrome browser doesn't require the user to enter a master password before displaying saved passwords. and not only that, but it also comes with a condescending head of chrome security, justin schuh, defending the design by claiming that master passwords breed a false sense of security by making people think it's safe to share their computer with others or leave it unlocked and unsupervised. he repeatedly falls back on the trope of "once the bad guy got access to your account the game was lost". never mind the fact that most people will assume it's protected regardless of what chrome does because that's how most browsers have behaved for years (so not protecting the passwords is even worse than protecting them partially), nor the fact that attackers are also capable of bypassing the user account protection chrome is abdicating password security responsibility to. no protection is perfect, but that doesn't mean we throw out the imperfect ones or we'll eventually be left with none at all.

it's almost enough to make you think google never gets anything in security right the first time. but wait - it's not like password storage is an innovative new concept. there's been an established pattern around for years that they could have simply followed. it's not even like they could claim to not be aware of it when other browsers follow that pattern. frankly, if the folks at google really think they know password storage security better than everyone that came before them, from a UK software developer to mozilla engineers to bruce freaking schneier, then i respectfully suggest that they pull their heads out of their asses and get with the program. if they were really concerned about a false sense of security then maybe they shouldn't be storing passwords in the first place, after all it's not unheard of for a browser to be tricked into revealing the contents of its password store to a remote attacker when visiting a specially crafted malicious webpage.
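and just to be clear about what that established pattern looks like: the usual approach is to derive an encryption key from the master password and encrypt the saved-password store with it, so nothing can be displayed (or exported) without the master password being entered first. here's a minimal sketch of the key-derivation half using OpenSSL's PBKDF2 - the function name, parameters, and iteration count are made up for illustration; this isn't how any particular browser actually implements it:

#include <openssl/evp.h>
#include <string.h>

/* hypothetical sketch: derive a 256-bit store-encryption key from the user's
   master password. the saved-password store would then be encrypted with this
   key, so the passwords can't be shown without the master password. */
int derive_store_key(const char *master_password,
                     const unsigned char *salt, int salt_len,
                     unsigned char key_out[32])
{
    /* 100000 iterations is an illustrative figure, not a recommendation */
    return PKCS5_PBKDF2_HMAC(master_password, (int)strlen(master_password),
                             salt, salt_len,
                             100000, EVP_sha256(),
                             32, key_out);
}

none of that is exotic; it's the same basic idea mozilla's master password and plenty of standalone password managers have used for years.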