Wednesday, September 17, 2014

the PayPal pot calling the Apple Pay kettle black

so if you haven't heard yet, PayPal took out a full page ad in the New York Times trying to drag Apple Pay's name through the mud based on Apple's unfortunate celebrity nude selfie leak. This despite the fact that PayPal happily hands out your email address to anyone you have a transaction with. In essence, PayPal has been leaking email addresses for years and not doing anything about it, so they shouldn't get to criticize others for leaking personal information.

what's the big deal about email addresses? while it's true that we often have to give an email address to every site we do a transaction on, we don't have to give them all the same address. in fact, giving each site a different email address happens to be a pretty good way to avoid spam, but more importantly it's a good way to avoid phishing emails, and that's important where PayPal is concerned because PayPal is one of the most phished brands in existence.

unfortunately, because PayPal wants all parties in a transaction to be able to communicate with each other, they do the laziest, most brain-dead thing one can imagine to accomplish this: they hand out your PayPal email address to others, which is pretty much the worst email address to do that with. i have actually had to change the disposable email address i use with PayPal because they are apparently incapable of keeping that address out of the hands of spammers, phishers, and other email-based miscreants. furthermore, i use their service less because i don't want to have to clean up after their mess.

at some point i may have to start creating disposable PayPal accounts and use prepaid debit cards with them. certainly if i were trying to hide from massive spy agencies then that would be the way to go, but if i'm only concerned with mitigating email-borne threats i really shouldn't have to go to that much trouble. there are other, more intelligent things that PayPal could, even should, be doing.

  • they could share an email address of your choosing, rather than unconditionally sharing the one you registered with their service. that way you could provide the same address you probably already provided to that other party when you created an account on their site. it shouldn't be too difficult for them to verify that address before sharing it with the other party since they already verify the one you register with.
  • they could offer their own private messaging service so that communication could be done through their servers (which would no doubt aid in conflict resolution).
  • they could provide a disposable email forwarding service such that the party you're interacting with gets a unique {something}@paypalmail.com address that forwards the mail on to the address you registered with PayPal, and once the transaction is completed to everyone's satisfaction the address is deactivated (a rough sketch of this idea follows the list).
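
to make that last idea concrete, here's a rough sketch of how little machinery a disposable forwarding scheme actually needs. everything in it - the struct, the function names, the @paypalmail.com domain - is hypothetical and purely illustrative; it describes nothing PayPal actually offers.

#include <stdio.h>
#include <string.h>
#include <stdbool.h>

typedef struct {
    char alias[64];      /* the {something}@paypalmail.com address handed to the other party */
    char real_addr[64];  /* the address registered with PayPal */
    bool active;         /* once the transaction closes, this gets turned off */
} forwarding_alias;

/* create a per-transaction alias that forwards to the registered address */
static forwarding_alias create_alias(const char *real_addr, unsigned txn_id)
{
    forwarding_alias fa;
    snprintf(fa.alias, sizeof fa.alias, "txn-%u@paypalmail.com", txn_id);
    snprintf(fa.real_addr, sizeof fa.real_addr, "%s", real_addr);
    fa.active = true;
    return fa;
}

/* the mail handler consults the table: forward while active, drop afterwards */
static bool should_forward(const forwarding_alias *fa, const char *rcpt)
{
    return fa->active && strcmp(fa->alias, rcpt) == 0;
}

/* called once the transaction is completed to everyone's satisfaction */
static void deactivate_alias(forwarding_alias *fa)
{
    fa->active = false;
}

int main(void)
{
    forwarding_alias fa = create_alias("me@example.com", 12345);
    printf("share %s with the other party; forward? %d\n", fa.alias, should_forward(&fa, fa.alias));
    deactivate_alias(&fa);
    printf("after deactivation, forward? %d\n", should_forward(&fa, fa.alias));
    return 0;
}
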
they don't do anything like that, however. here's what you can do right now with the facilities PayPal makes available. it's a more painful and less intuitive process than anything proposed above, but it does work.
  1. before you choose to pay for something with PayPal, log into PayPal and add an email address (the one you want shared with the party you're doing a transaction with) to your profile. PayPal limits you to 8 addresses.
  2. confirm the address by opening the confirmation link that was sent to that address.
  3. make that address the primary email address for your account.
  4. confirm the change in primary email address (if you have a card associated with your PayPal account, PayPal may ask you to enter the full card number).
  5. at this point you can use PayPal to pay for something, and the email address that will be shared with the other party is the one you just added to your PayPal account.
  6. once you've paid with PayPal you will probably want to log back into PayPal, change the primary email address back to what it originally was (and confirm the change once again), and then remove the address you added for the purposes of your purchase. the reason you'll likely want to do this is that PayPal sends emails to every address it has on record for you, and those duplicate emails will get old fast.
most people aren't even going to be aware that they can do this to keep their real PayPal email address a secret from 3rd parties. as a result all manner of email-borne threats can and eventually will wind up in what would otherwise have been a trusted email inbox. make no mistake, this isn't PayPal providing a way to keep that email address private, this is a way of manipulating PayPal's features to achieve that effect. there are too many unnecessary steps involved for this to be the intended use scenario.

as such, PayPal is leaking a valuable email address by default every time you pay for something. yes Apple's selfie SNAFU was embarrassing to people, and yes if Apple doesn't do something about that now that they're becoming a payment platform it could be not just embarrassing but financially costly for victims, but PayPal is already assisting in similarly costly outcomes right now (not to mention potential malware outcomes) so they really have no right to be criticizing Apple. Apple, at least, is taking steps to correct their problems - what is PayPal doing?

Monday, September 08, 2014

on the strength of authentication factors

i ran across a story on friday about barclays bank rolling out biometric authentication for online banking and wound up starting a debate on twitter that i didn't have time for and couldn't easily fit into a series of tweets even if i did have time. essentially what it came down to was that i don't believe all authentication factors are equally strong and the statement that the barclays system was a "password replacement" raised a red flag for me.

the reason it raised a red flag for me is because single factor biometric authentication is something i've come across before, and not just in an article on the web or even as a user but as a builder. my first job out of university was with a biometric security company, and one of the biggest projects i had while working there was developing an authentication enhancement for windows logon. one of the requests made (and the one i fought the hardest against) was to allow logon with just the biometric. 

here's the problem with this idea - since windows didn't have biometric capabilities built in, the only way to add single factor biometric authentication in was to store more traditional authentication data that windows could accept (such as a password) and then pass that along to windows when the subject's biometric sample matched the registered biometric template. i should note that the article about barclays makes it clear they'll be doing the same thing since they say that barclays won't be storing customer biometric data on their servers. there will have to be a local biometric client that stores more traditional authentication data and passes it on when a biometric match is achieved. storing credentials is not exactly the safest thing in the world. it's not like you can just store a hash of the authentication data in this scenario because you have to be able to present the original, unmodified credentials to the authentication system.
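
to illustrate the credential storage point, here's a toy, self-contained sketch of the difference between a verifier that only ever needs a one-way hash and a biometric wrapper that has to keep a recoverable credential so it can replay the original to the underlying logon. the "hash", the xor "encryption", and the match score are deliberately trivial stand-ins i made up for illustration - they aren't real crypto and they don't describe how any particular product works.

#include <stdio.h>
#include <stdbool.h>

/* a plain password verifier can store just a digest and compare digests */
static unsigned long toy_hash(const char *s)
{
    unsigned long h = 5381;
    while (*s) h = h * 33 + (unsigned char)*s++;
    return h;
}

/* the biometric wrapper, by contrast, must keep something it can turn
   back into the original password on demand */
static char stored_credential[64];   /* xor-"encrypted" copy of the password */
static const char key = 0x5a;

static void enrol_credential(const char *password)
{
    size_t i;
    for (i = 0; password[i] && i < sizeof stored_credential - 1; i++)
        stored_credential[i] = password[i] ^ key;
    stored_credential[i] = '\0';
}

static void recover_credential(char *out, size_t outlen)
{
    size_t i;
    for (i = 0; stored_credential[i] && i < outlen - 1; i++)
        out[i] = stored_credential[i] ^ key;   /* the original comes back */
    out[i] = '\0';
}

static bool biometric_matches(int sample_score)
{
    return sample_score > 80;   /* stand-in for template matching */
}

int main(void)
{
    /* password-only system: the password itself never needs to be kept */
    unsigned long digest = toy_hash("hunter2");
    printf("hash-only verify: %s\n", toy_hash("hunter2") == digest ? "ok" : "fail");

    /* biometric-unlocks-password system: the original must be recoverable */
    enrol_credential("hunter2");
    if (biometric_matches(93)) {
        char password[64];
        recover_credential(password, sizeof password);
        printf("replaying \"%s\" to the password-only logon\n", password);
    }
    return 0;
}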

i balked at the idea of making windows less instead of more secure, but i acquiesced when the decision was made to keep the more secure 2 factor mode of operation (without traditional credential storage) in there as well, along with informing the users that biometric-only logon was less secure. 

it's not just less secure because of the credential storage, though, and that's what the twitter debate on friday ventured into. in the course of that job i had the opportunity to examine multiple biometric systems, such as face, voice, iris, etc., and i came away with 2 realizations: 1) the only biometrics that users will ever accept are non-invasive ones (no one wants sensors stuck into them), and 2) that lack of invasiveness makes it relatively easy to steal biometric samples from users, often without them even knowing. fingerprints can be lifted from anything you touch. recordings of your voice can be made without your knowledge. photographs of faces are ubiquitous and a high enough resolution image will capture your iris pattern.

other authentication factors like passwords and tokens generally rely on restricting access to the authentication data, often through secrecy. when that secrecy is lost, such as when someone takes a photograph of a door key (which is a kind of token) it becomes relatively easy to reproduce the authenticator and gain access to what was being protected. biometrics, especially non-invasive ones, forgo this secrecy under the mistaken belief that reproducing the authenticator is difficult for biometrics. the reality, though, is that you don't have to reproduce a biometric sample, you only have to create an approximation that is good enough to fool the biometric sensor, which often isn't particularly difficult. optical sensors can be fooled with images, audio sensors can be fooled with recordings, the mythbusters once fooled a capacitance sensor by licking a photocopy of a fingerprint.

now hold on, i hear you say, isn't it also really easy to steal passwords? and isn't reproducing that authenticator the easiest of all? it's certainly true that in practice all kinds of things can affect how easy it is for an attacker to become illegitimately authenticated. for that reason i try to look at the upper bound of the strength of the various authentication factors: how strong is a system under ideal conditions, that is, where everything goes right and legitimate parties don't make any mistakes?

for passwords, that ideal situation means that the user doesn't accidentally click on anything that would steal his/her password, doesn't get fooled by phishing sites, etc. in short, the attacker can't get the password from the user. it also means the attacker can't get passwords in transit (because that's been properly secured) or get a password database from the service provider, because no vulnerability is found in their system and their employees are likewise careful to avoid making mistakes. under this ideal situation the attacker's only way to succeed in gaining illegitimate entry is to perform an online brute force attack (no, not a dictionary attack, because the user didn't make the mistake of using something from a dictionary), and they'd have to go slow because the ideal provider would have rate-limited failed logon attempts. now you might say this is unrealistic, people make mistakes, and that's true in practice in the aggregate, but it is possible for an individual to do everything right, and it is also possible for attackers to not be able to find any way to attack the provider in order to get the password database. this isn't how strong password protection always is, but rather the ideal we hope to achieve by making our systems secure and avoiding mistakes, and sometimes in limited cases this is achieved.

for tokens, let's consider the ideal situation to be comparable to that for passwords, but on top of that let's consider the strongest token possible (i.e. not a door key). let's consider a token that produces one-time-passwords (without any vulnerabilities that would make those passwords easy to predict) so that even brute force attacks become much harder. on the surface this seems even stronger than passwords, but there's a chink in the armour, and apple's recent icloud problems are a good example. tokens can be lost or stolen, so there needs to be a way to recover from that problem. while our "ideal situation" precludes our user from losing their token, it does not preclude our service provider from providing users with a way to cope with the loss of their tokens. the strongest way to do this is to provide the user with pre-generated one-time-passwords ahead of time. this can work for an individual user who is careful and doesn't make any mistakes, but as we've previously seen our "ideal situation" does not extend to the point of saying all users make no mistakes, so the pre-generated one-time-passwords are going to fail for reasons such as never being printed out and put in a safe place, or not being able to get to that safe place because the user is traveling, etc. what's a service provider to do then? so far, their best option might be to use traditional passwords as a fall-back, and if they do then the token system becomes only as strong as passwords, because although our ideal user didn't lose their token, the provider can't really know that the user didn't lose it (or worse, that it was stolen) and has to accept attempts to use the password fall-back. while there is room for tokens to be stronger than passwords, the price is that only ideal users will be able to recover in the event of a lost token, and that price may be more than service providers are willing to accept.
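
here's a bare-bones sketch of the pre-generated one-time-password fallback described above, just to show the essential property: each code is redeemable exactly once. the code format, the generation, and the storage are all made up for illustration - a real provider would generate the codes from a proper CSPRNG and store only digests of them.

#include <stdio.h>
#include <string.h>
#include <stdbool.h>

#define NUM_CODES 8

typedef struct {
    char code[12];
    bool used;   /* each code works exactly once */
} recovery_code;

static recovery_code codes[NUM_CODES];

static void generate_codes(unsigned seed)
{
    for (int i = 0; i < NUM_CODES; i++) {
        seed = seed * 1103515245u + 12345u;   /* stand-in for a cryptographically random value */
        snprintf(codes[i].code, sizeof codes[i].code, "%08x", seed);
        codes[i].used = false;
    }
}

/* accept a code only if it exists and hasn't already been burned */
static bool redeem_code(const char *attempt)
{
    for (int i = 0; i < NUM_CODES; i++) {
        if (!codes[i].used && strcmp(codes[i].code, attempt) == 0) {
            codes[i].used = true;
            return true;
        }
    }
    return false;
}

int main(void)
{
    generate_codes(42);
    printf("give the user: %s (and %d more, printed out and kept somewhere safe)\n",
           codes[0].code, NUM_CODES - 1);
    printf("first use: %s\n", redeem_code(codes[0].code) ? "accepted" : "rejected");
    printf("replay:    %s\n", redeem_code(codes[0].code) ? "accepted" : "rejected");
    return 0;
}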

for biometrics, we once again consider an ideal user who does nothing wrong, and an ideal service provider who likewise makes no mistakes. in spite of doing nothing wrong the user's voice can still be recorded, their face can still be photographed (in most cultures, since facial covering is relatively rare), etc. simply interacting with the world cannot qualify as doing something wrong or making a mistake. acquiring the information necessary to construct a counterfeit authenticator is easy compared to passwords and tokens because no effort is taken to conceal that information, and the cultural adjustments needed to change that are beyond what i think would be reasonable to expect. the difficulty in attacking a biometric authentication system boils down to the difficulty in fooling a sensor (or sometimes 2 sensors, as people have tried to strengthen fingerprint biometrics with so-called "liveness tests"), and that difficulty has been consistently overestimated in the past.

this is why i consider biometrics weaker than passwords - because even when everyone does everything right it's still fairly easy to fool the system. as such, when someone (especially a bank) provides people with an authentication system that replaces passwords with biometrics, i think that should raise an alarm. even at that prior job of mine it was conceded that that mode of operation was more about convenience than it was about security. convenience is a double-edged sword: it can make things easier for legitimate users and attackers alike if you aren't careful. using biometrics in a 2 factor authentication system may provide more security than any single factor authentication system can, but biometrics on its own? there's a reason some people have started saying that your biometric is your username, not your password. don't replace passwords with it (at least not without having someone present to guard against funny business - which isn't an option for online banking).

Monday, June 30, 2014

i wouldn't bet on it

last year cryptography professor matthew green made a bet with mikko hypponen that by the 15th of this month there would be a snowden doc released that showed that US AV companies collaborated with the NSA. he has since accepted that he lost the bet to mikko, but should he have?

i mentioned to matthew the case of mcafee being in bed with government malware writing firm hbgary and mikko chimed in that hbgary wasn't an AV company and being partners with them wasn't enough to win the bet. aside from the fact that this is the first time after all these years that i've seen a member of the AV industry publicly comment on the relationship between mcafee and hbgary (i guess managing matthew's perception of AV is more important than managing mine), something about mikko's response rang hollow.

one way to interpret the situation with hbgary is to view them as government contractors whom mcafee endorsed, advertised, and helped get their code onto the systems of mcafee's customers (hbgary makes a technology that integrates with mcafee's endpoint security product). that certainly would have given hbgary access to systems and organizations they might have had difficulty getting otherwise. i have no idea if that access was ever used in an offensive way, though, so this line of thought is a little iffy.

another way to interpret the situation is to directly contradict mikko and admit that hbgary is a member of the AV industry. after all, they make and sell technology that integrates into an endpoint security product. they may only be on the fringe of the industry, but what more do you have to do to be a member of the industry than make and sell technology for fighting malware? the fact that they also made malware for the government makes them essentially a US AV company that collaborated with the government in one of the worst ways possible.

i feel like this should be enough to have won matthew green the bet, at least in spirit, but the letter of the bet was apparently that a snowden doc would reveal it and the revelation about mcafee and hbgary actually predates snowden's leaks by a number of years. 

so, the question becomes are there any companies that happen to be members of the AV industry and also happen to have been fingered by a snowden leak? it turns out there was (at least) one. they were probably forgotten because they're not just an AV vendor, but AV vendor does happen to be one of the many hats that microsoft wears (plenty of security experts were even advising people to drop their paid-for AV in favour of microsoft's offering at one point in time), and microsoft was most certainly fingered by snowden docs. the instances where microsoft helped the government may not have involved their anti-malware department, but the fact remains that a company that is a member of the AV industry was revealed by snowden documents to have collaborated with the government.

i imagine mikko could find a way to argue this doesn't count either - i admit it's not air-tight - but given how closely it meets both the spirit and (as i understand it) the letter of the bet, i think mikko should match the sum he had matthew pay to the EFF and pay it to an organization of matthew's choosing. i won't bet on that happening, though.

Saturday, June 14, 2014

confessions of a twitter worm victim

as some of you may know, this past wednesday someone released a self-retweeting worm on twitter that exploited an XSS vulnerability in the popular twitter client tweetdeck. i happen to be a tweetdeck user and i got hit by the worm, not once but twice. since i believe in owning up to my mistakes in order to serve as an example to others, i figured it was important for me to write this post.

this isn't the first time i've had to do this. four years ago it was discovered that there had been a RAT bundled with the software for a USB battery charger sold by the energizer battery company (it had gone undetected by the community for years) and i wrote about my experience then as well.

this was the first time getting hit with something that could spread to others, and spread it did. i know this because i got email notifications from twitter when other people's tweetdeck clients automatically retweeted the tweet that my client automatically retweeted. that's actually one of the things i think i did right - i have twitter set up to send me notifications for as much of that kind of activity as i possibly can. the result is that i get what is essentially an activity log sent to my email in near real-time, and that alerted me to the problem within minutes of it occurring.

that quick notification allowed me to undo the retweet before it propagated from my account again. that limited the extent to which i contributed to the spread of the worm. acting quickly to neutralize the threat in my twitter stream is another thing i believe i did right.

unfortunately i also did a number of things wrong. for example, i knew about the XSS vulnerability before i encountered the worm, i saw excellent preventative advice and even retweeted it, but i failed to follow it exactly. the advice was to sign out of tweetdeck and then de-authorize the app in twitter. what i did instead was close the tweetdeck tab in my browser and de-authorize the app. i took a shortcut because i didn't believe anyone i followed would actually tweet anything malicious. i didn't anticipate that they might do so involuntarily - the possibility of something like the samy worm from years past never occurred to me. and so when news spread that the vulnerability had been fixed and that users needed to log out and back in again to apply the fix, i re-opened the tab, re-authorized the app (because that was the first prompt i was presented with) and then went hunting for the logout button. that's when i got the email notification that another user had retweeted one of my retweets.

however, i did not see the alert popup that was supposed to indicate the worm had executed. i didn't realize it at the time but that was important because it meant there was more going on than i realized. it meant that the worm had not executed in the client i was sitting in front of. what i had forgotten was that i had another tweetdeck client open on a computer at work, and when i re-authorized the app the worm executed on the work computer rather than my home computer. it wasn't until i was on a bus to see an old friend that the significance of what had (and had not) happened clicked, and then it was another several hours before i could get access to that work computer (where the alert popup was still patiently waiting for me) in order to log out of and back into tweetdeck again, which i did without de-authorizing the app beforehand, so the un-retweeted tweet got re-retweeted.

in short it was a comedy of errors.

what i've taken away from this is a number of things:

  1. i am once again humbled by the clear demonstration that i am not perfect. while i certainly knew conceptually that i wasn't perfect, i have had a surprisingly good track record with malware. having my ass handed to me made the appreciation of my imperfection much more visceral.
  2. i've gained a better appreciation for the value of de-authorizing apps in twitter. to a certain extent it can seem kind of abstract, but what it's actually doing is isolating a vulnerable component from the rest of the network, not unlike what pulling the network cable out of an infected computer did back when worms that enumerated network shares or sent mass emails were prevalent.
  3. i've identified my failure to log out of things (not just tweetdeck but all sorts of sites) as a bad habit. it's pure laziness and it's not even rational laziness because there's almost no effort involved in logging in when you use a password manager. part of the reason i didn't post this sooner is because i wanted to see if breaking this habit was a reasonable expectation or whether saying i was going to improve was just wishful thinking. so far this improvement seems like an entirely reasonable expectation - i've had no problems logging out of things when i don't need the session open any longer.
at the end of the day, improvement is what sets an incident apart from a failure. the only real failure is a failure to learn from your mistakes and do better the next time. i'm not perfect (no one is) but each time i screw up i make sure i get better.

Tuesday, May 06, 2014

symantec anti-virus is dead

there's a lot of digital ink getting spilled right now over symantec's brian dye saying that anti-virus is dead (one, two, three, four, five, and more to come i'm sure), but i don't see many people asking the tough question, which is "why should we believe symantec now"?

looking back over my past posts about symantec paints a pretty unappealing picture, and reveals what might be considered a pattern. virtually right from the beginning they named their consumer anti-virus product after a man who famously said computer viruses were an urban legend. then, when they tried to reinvent themselves with their "security 2.0" campaign, they claimed the virus problem was solved. now, when it appears they're trying to reinvent themselves again, they're saying that anti-virus is dead. it seems that whenever their business plan calls for serious marketing, they latch on to messages that grab attention but whose reality is questionable at best.

when the biggest anti-virus vendor starts saying anti-virus is dead, there's no way that isn't going to grab a lot of attention. it seems designed to hurt the very industry they're on top of, while they are (apparently) in the process of trying to distance themselves from it. i've noted in the past that the biggest players in the industry are hurt the least by the consequences of their bad acts. as market leaders they control perception not just of themselves but of the entire industry, so that even if a smaller player wanted to try to present a more reasonable and accurate view of things in order to better compete on technical merit rather than deceptive marketing manipulation, there's very little impact they could have. saying that anti-virus is dead while simultaneously trying to position themselves as something else is essentially a scorched earth tactic. it will hurt the entire anti-virus industry while drawing attention to the alternate industry they're trying to create/break into.

when the biggest anti-virus vendor starts saying anti-virus is dead, there's also no way that shouldn't raise the hairs on the back of your neck. out of the blue symantec starts mimicking exactly the same message that enterprise level infosec people have been saying for years? am i the only one who thinks that sounds like it belongs in the too good to be true category? this is the same kind of technique a malware writer might use to trick you into trying out his/her handiwork. before you get any ideas about symantec using 'trojan marketing', though, it's also the same kind of technique AV marketers used when they told people just using AV would solve their security problems. too good to be true has been part of the AV marketing arsenal from the very beginning, it's just that this new one about AV being dead seems to be designed for a much more select class of dupe, i mean user. this is the same shit, it's just a different pile.

it'll probably work, though. telling people what they want to hear is unfortunately quite effective. even smart people will fall for it, because despite being smart, those people still want to hear something that is far too simplistic to have anything in common with reality. when you look closely enough, the truth always seems to wind up being messy and complicated, not something that could fit in a sound-bite.

this is the reason why i try to convince people to stop listening to marketing (and really, everything that comes out of a vendor is marketing to some degree). this is almost certainly nothing more than another in a long line of efforts to deceive and manipulate the market. if you must listen to something, listen to their actions. they aren't retiring their AV product, so how dead can AV really be?

all that being said, i actually do welcome their shift in focus from purely prevention to now include more detection and recovery. it's about time AV vendors started getting serious about the last 2 parts of the PDR triad (prevention, detection, recovery). it doesn't have to be purely service-based detection though. years ago we had generic detection tools (such as integrity checkers) that end users could use themselves. symantec's focus on providing detection services instead of detection tools betrays a philosophy of not trusting the users' competence, which in turn is consistent with their long history of failing to educate, elevate, and empower their users. maybe that kind of paternalism is appropriate for home users, but enterprise security operations? i thought we could expect enterprise level IT and infosec professionals to develop skills and expertise in these kinds of areas, so why is symantec choosing a path that takes these things out of advanced customers' hands?
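
for anyone who never used one, here's a tiny sketch of the kind of generic integrity checking tool i'm talking about: record a checksum for a file, then later flag the file if the checksum no longer matches. the FNV-1a checksum and the command-line interface are choices i made up for illustration - a real tool would use a cryptographic hash and protect its baseline database.

#include <stdio.h>
#include <stdint.h>

/* checksum the entire contents of a file (FNV-1a, purely for illustration) */
static uint64_t fnv1a_file(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return 0;   /* a real tool would report the error distinctly */

    uint64_t h = 1469598103934665603ULL;
    int c;
    while ((c = fgetc(f)) != EOF) {
        h ^= (uint64_t)(unsigned char)c;
        h *= 1099511628211ULL;
    }
    fclose(f);
    return h;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <file> <expected-checksum-hex>\n", argv[0]);
        return 2;
    }

    unsigned long long expected = 0;
    sscanf(argv[2], "%llx", &expected);   /* the previously recorded baseline */

    uint64_t actual = fnv1a_file(argv[1]);
    if (actual == expected) {
        printf("%s: unchanged\n", argv[1]);
        return 0;
    }
    printf("%s: MODIFIED (expected %llx, got %llx)\n",
           argv[1], expected, (unsigned long long)actual);
    return 1;
}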

as much as it seems like symantec is doing an about-face, they really haven't changed their tune. telling enterprises what they want to hear is just a ploy so that enterprises will get in bed with them (that's just what we call pillow talk, baby). they still aren't giving their users any new power to affect their own security outcomes. so far they're just offering words. nothing but sweet, sweet words that turn into bitter orange wax in your ears.

Friday, April 04, 2014

goto fail refactored

i wrote the lion's share of this a while ago but wasn't sure i wanted to publish yet another post about GOTO here since this isn't a programming blog. my mind was made up yesterday when i read this post by Steven J Vaughan-Nichols where he quotes a number of technology personalities essentially giving bullshit excuses for why GOTO is OK to use. it's no wonder 2 separate crypto libraries (both making prodigious use of GOTO) suffered embarrassing and dangerous defects recently when programming thought leaders perpetuate myths about structured programming.

i'm providing this as an object lesson in how to avoid the use of GOTO, especially in security-related code where a higher standard of quality is sorely needed. i'll be using Apple's Goto Fail bug as the example. here is the complete function where the fail was found, with the bug intact:

static OSStatus SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams, uint8_t *signature, UInt16 signatureLen)
{
    OSStatus        err;
    SSLBuffer       hashOut, hashCtx, clientRandom, serverRandom;
    uint8_t         hashes[SSL_SHA1_DIGEST_LEN + SSL_MD5_DIGEST_LEN];
    SSLBuffer       signedHashes;
    uint8_t         *dataToSign;
    size_t          dataToSignLen;

    signedHashes.data = 0;
    hashCtx.data = 0;

    clientRandom.data = ctx->clientRandom;
    clientRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;
    serverRandom.data = ctx->serverRandom;
    serverRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;


    if(isRsa) {
        /* skip this if signing with DSA */
        dataToSign = hashes;
        dataToSignLen = SSL_SHA1_DIGEST_LEN + SSL_MD5_DIGEST_LEN;
        hashOut.data = hashes;
        hashOut.length = SSL_MD5_DIGEST_LEN;
        
        if ((err = ReadyHash(&SSLHashMD5, &hashCtx)) != 0)
            goto fail;
        if ((err = SSLHashMD5.update(&hashCtx, &clientRandom)) != 0)
            goto fail;
        if ((err = SSLHashMD5.update(&hashCtx, &serverRandom)) != 0)
            goto fail;
        if ((err = SSLHashMD5.update(&hashCtx, &signedParams)) != 0)
            goto fail;
        if ((err = SSLHashMD5.final(&hashCtx, &hashOut)) != 0)
            goto fail;
    }
    else {
        /* DSA, ECDSA - just use the SHA1 hash */
        dataToSign = &hashes[SSL_MD5_DIGEST_LEN];
        dataToSignLen = SSL_SHA1_DIGEST_LEN;
    }

    hashOut.data = hashes + SSL_MD5_DIGEST_LEN;
    hashOut.length = SSL_SHA1_DIGEST_LEN;
    if ((err = SSLFreeBuffer(&hashCtx)) != 0)
        goto fail;

    if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &clientRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;

    err = sslRawVerify(ctx,
                       ctx->peerPubKey,
                       dataToSign,                /* plaintext */
                       dataToSignLen,            /* plaintext length */
                       signature,
                       signatureLen);
    if(err) {
        sslErrorLog("SSLDecodeSignedServerKeyExchange: sslRawVerify "
                    "returned %d\n", (int)err);
        goto fail;
    }

fail:
    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;
}

one of the things you might notice is that all roads lead to "fail:", meaning "fail:" isn't really just for failures, it's for clean-up.

another thing you might notice is that the final "goto fail;" doesn't actually bypass any code - it's completely redundant and if it weren't there the next thing to execute would still be the code after the "fail:" label.

the first thing we're going to try is the most obvious approach to refactoring this function, to get rid of GOTO by making proper use of IF.

static OSStatus SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams, uint8_t *signature, UInt16 signatureLen)
{
    OSStatus        err;
    SSLBuffer       hashOut, hashCtx, clientRandom, serverRandom;
    uint8_t         hashes[SSL_SHA1_DIGEST_LEN + SSL_MD5_DIGEST_LEN];
    SSLBuffer       signedHashes;
    uint8_t         *dataToSign;
    size_t          dataToSignLen;

    signedHashes.data = 0;
    hashCtx.data = 0;

    clientRandom.data = ctx->clientRandom;
    clientRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;
    serverRandom.data = ctx->serverRandom;
    serverRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;


    if(isRsa) {
        /* skip this if signing with DSA */
        dataToSign = hashes;
        dataToSignLen = SSL_SHA1_DIGEST_LEN + SSL_MD5_DIGEST_LEN;
        hashOut.data = hashes;
        hashOut.length = SSL_MD5_DIGEST_LEN;
        
        if ((err = ReadyHash(&SSLHashMD5, &hashCtx)) == 0) {    
            if ((err = SSLHashMD5.update(&hashCtx, &clientRandom)) == 0) {    
                if ((err = SSLHashMD5.update(&hashCtx, &serverRandom)) == 0) {    
                    if ((err = SSLHashMD5.update(&hashCtx, &signedParams)) == 0) {    
                        if ((err = SSLHashMD5.final(&hashCtx, &hashOut)) == 0) {    
                            hashOut.data = hashes + SSL_MD5_DIGEST_LEN;
                            hashOut.length = SSL_SHA1_DIGEST_LEN;
                            if ((err = SSLFreeBuffer(&hashCtx)) == 0) {    
                                if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) == 0) {    
                                    if ((err = SSLHashSHA1.update(&hashCtx, &clientRandom)) == 0) {    
                                        if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) == 0) {    
                                            if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) == 0) {    
                                                if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) == 0)    {    
                                                    err = sslRawVerify(ctx,
                                                                       ctx->peerPubKey,
                                                                       dataToSign,                /* plaintext */
                                                                       dataToSignLen,            /* plaintext length */
                                                                       signature,
                                                                       signatureLen);
                                                    if(err) {
                                                        sslErrorLog("SSLDecodeSignedServerKeyExchange: sslRawVerify "
                                                                    "returned %d\n", (int)err);
                                                    }
                                                }
                                            }
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
    else {
        /* DSA, ECDSA - just use the SHA1 hash */
        dataToSign = &hashes[SSL_MD5_DIGEST_LEN];
        dataToSignLen = SSL_SHA1_DIGEST_LEN;
        hashOut.data = hashes + SSL_MD5_DIGEST_LEN;
        hashOut.length = SSL_SHA1_DIGEST_LEN;
        if ((err = SSLFreeBuffer(&hashCtx)) == 0) {    
            if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) == 0) {    
                if ((err = SSLHashSHA1.update(&hashCtx, &clientRandom)) == 0) {    
                    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) == 0) {    
                        if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) == 0) {    
                            if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) == 0) {    
                                err = sslRawVerify(ctx,
                                                   ctx->peerPubKey,
                                                   dataToSign,                /* plaintext */
                                                   dataToSignLen,            /* plaintext length */
                                                   signature,
                                                   signatureLen);
                                if(err) {
                                    sslErrorLog("SSLDecodeSignedServerKeyExchange: sslRawVerify "
                                                "returned %d\n", (int)err);
                                }
                            }
                        }
                    }
                }
            }
        }
    }

    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;

}

as you can see this version of the function is quite a bit longer as well as being deeply nested. this is the kind of code that actually makes programmers think the use of GOTO isn't as bad as their teachers told them it was, because that deep nesting makes the function seem more complex and more difficult to read. on top of that there is a considerable amount of duplicated code. neither of these things is appealing to programmers because both make reading and maintaining the code more work.

however, this is the most simple-minded and unimaginative way to refactor the original function. if we were to also tackle that complex pattern used in virtually all of the IF statements at the same time as getting rid of the GOTOs, we would instead get something like this:

static OSStatus SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams, uint8_t *signature, UInt16 signatureLen)
{
    OSStatus        err;
    SSLBuffer       hashOut, hashCtx, clientRandom, serverRandom;
    uint8_t         hashes[SSL_SHA1_DIGEST_LEN + SSL_MD5_DIGEST_LEN];
    SSLBuffer       signedHashes;
    uint8_t         *dataToSign;
    size_t          dataToSignLen;

    signedHashes.data = 0;
    hashCtx.data = 0;
    err = 0;
    
    clientRandom.data = ctx->clientRandom;
    clientRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;
    serverRandom.data = ctx->serverRandom;
    serverRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;


    if(isRsa) {
        /* skip this if signing with DSA */
        dataToSign = hashes;
        dataToSignLen = SSL_SHA1_DIGEST_LEN + SSL_MD5_DIGEST_LEN;
        hashOut.data = hashes;
        hashOut.length = SSL_MD5_DIGEST_LEN;
        
        err = ReadyHash(&SSLHashMD5, &hashCtx);
        if (err == 0)
            err = SSLHashMD5.update(&hashCtx, &clientRandom);
        if (err == 0)
            err = SSLHashMD5.update(&hashCtx, &serverRandom);
        if (err == 0)
            err = SSLHashMD5.update(&hashCtx, &signedParams);
        if (err == 0)
            err = SSLHashMD5.final(&hashCtx, &hashOut);
    }
    else {
        /* DSA, ECDSA - just use the SHA1 hash */
        dataToSign = &hashes[SSL_MD5_DIGEST_LEN];
        dataToSignLen = SSL_SHA1_DIGEST_LEN;
    }

    if(err == 0) {
        hashOut.data = hashes + SSL_MD5_DIGEST_LEN;
        hashOut.length = SSL_SHA1_DIGEST_LEN;
        err = SSLFreeBuffer(&hashCtx);
    }
    if (err == 0)
        err = ReadyHash(&SSLHashSHA1, &hashCtx);
    if (err == 0)
        err = SSLHashSHA1.update(&hashCtx, &clientRandom);
    if (err == 0)
        err = SSLHashSHA1.update(&hashCtx, &serverRandom);
    if (err == 0)
        err = SSLHashSHA1.update(&hashCtx, &signedParams);
    if (err == 0)
        err = SSLHashSHA1.final(&hashCtx, &hashOut);
    if (err == 0) {
        err = sslRawVerify(ctx,
                       ctx->peerPubKey,
                       dataToSign,                /* plaintext */
                       dataToSignLen,            /* plaintext length */
                       signature,
                       signatureLen);
        if(err) {
            sslErrorLog("SSLDecodeSignedServerKeyExchange: sslRawVerify "
                        "returned %d\n", (int)err);
        }
    }
    
    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;
}

not only does this follow almost exactly the same format as the original function (thereby retaining its readability), it makes the condition checking simpler and easier to read, and it has virtually the same number of lines of code as the original.

combining assignment and equivalence tests into a single line IF statement was clearly intended to reduce the overall size of the source code but it failed, and in the process it made the code more complex and difficult to read. the combined assignment/condition checking and GOTO statements were complementary to each other. they supported each other and jointly contributed to the complexity of the original function.

this third version of the function, by contrast, has neither complex expressions nor the potential for complex control flow. the only real complaint one might make is that after an error occurs in one of the many steps in the function, the computer still needs to perform the "if (err == 0)" check numerous times. however, that is only true if the compiler can't optimize that code, and checking the same variable against the same constant value over and over again seems like the kind of pattern a compiler's optimization routines might be able to detect and do something about.

complexity is the worst enemy of security, sloppiness begets complexity, and GOTO is a crutch for sloppy, undisciplined programmers - it is part of that sloppiness and contributes to that complexity, even when it's supposedly used the right way. what i did above isn't rocket science or magic. the same basic technique can be used in any case where GOTO is used to jump forward in the code (if you're using it to jump backward then god help you). the excuses people trot out for the continued use of GOTO not only make them sound like dumb-asses, they also lead to lesser programmers trying to follow their lead and doing much worse at it. it is never used as sparingly as the gurus think it should be, and even their own examples occasionally contain redundant invocations of it, thoughtlessly applied.
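
to show the technique generalizes beyond apple's function, here's the same transformation in miniature on a completely made-up example (the step functions and cleanup are hypothetical stubs): a forward GOTO used purely to reach the cleanup becomes a sequence of guarded steps with identical behaviour.

#include <stdio.h>

/* made-up steps and cleanup, just to keep the example compilable */
static int step_one(void)   { return 0; }
static int step_two(void)   { return 0; }
static int step_three(void) { return 0; }
static void cleanup(void)   { puts("cleanup runs either way"); }

/* before: a forward goto used purely to reach the cleanup on error */
static int do_work_goto(void)
{
    int err;
    if ((err = step_one()) != 0) goto done;
    if ((err = step_two()) != 0) goto done;
    err = step_three();
done:
    cleanup();
    return err;
}

/* after: the same control flow expressed with guarded steps and no jumps */
static int do_work_structured(void)
{
    int err = step_one();
    if (err == 0)
        err = step_two();
    if (err == 0)
        err = step_three();
    cleanup();
    return err;
}

int main(void)
{
    printf("goto version returned %d\n", do_work_goto());
    printf("structured version returned %d\n", do_work_structured());
    return 0;
}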

if you actually work at adhering to structured programming rather than abandoning it the moment the going gets tough, you will eventually learn ways to make it just as easy as unstructured programming, you'll be a better programmer for having done so, and your programs will be less complex, easier to validate, and ultimately more secure.

Tuesday, March 11, 2014

the case against GOTO in security

i could have made this longer but i have a feeling it might be more powerful in this form.

there is no programming construct that offers more freedom and flexibility than GOTO. consequently, no programming construct carries with it a greater potential for complexity.

since "complexity is the worst enemy of security", therefore GOTO should be considered harmful to and/or an enemy of security.

i'm surprised more people haven't made this connection, or that it hasn't seen more mainstream attention. whatever else you may think of GOTO in regular software, in security-related software this has to be an added consideration. the traditional taboos against GOTO that Larry Seltzer identified may not be entirely rational, but i tend to think the security taboo against complexity is.