Thursday, December 16, 2010

the transparency delusion

prompted by lenny zeltser's recent post on usability (which itself may be a response to my previous post), and with an actual usability study on 2 pieces of security software [PDF] (specifically 2 password managers) still fresh in my mind, i've decided to take another look at the issue of usability and, more importantly, transparency.

the usability study i referred to makes an excellent point about security only paying lip-service to usability, and i don't think they mean simply that the security software they studied required too many clicks to get through each function or that the menus were non-intuitive. the study was a wonderful object lesson in just how badly things can go wrong when transparency is taken too far - and why. in the case of the software in the study, transparency didn't just make the software harder to use, it actually led to compromised security.

the key problem of transparency is that it robs the user of important information necessary for the formulation and maintenance of a mental model of what's going on. as a result, the user invariably forms an incomplete/inaccurate mental model, which then leads them to make the wrong decisions when user decision-making is required (at some point a user decision is always required - you can minimize such decisions but you can never eliminate them). it also makes it harder to realize when and how the security software has failed to operate as expected (they all fail occasionally), robbing the user of the opportunity to react accordingly.

the usability study in question serves as an adequate example of how transparency can go wrong for password managers, but what about more conventional security software like firewalls or scanners? mr. zeltser used the example of a firewall that alerts the user whenever an application tries to connect to the internet. let's turn that around - what if the firewall were 'intelligent' in the way mr. zeltser is suggesting? what if it never alerted the user because all of the user's applications happened to be in some profile the firewall vendor cooked up to spare the user those so-called unnecessary prompts? and what if one day that firewall fails to load properly (i.e. windows thinks it's loaded but the process isn't really doing anything)? will the user know? will s/he be able to tell something is wrong? it seems pretty obvious that when something that never gave feedback on its operation suddenly stops operating, there will be no difference in what the user sees, and so s/he will think nothing is wrong.
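
to make that concrete, here's a rough sketch of the kind of minimal feedback i'm talking about (purely an illustration - the function names and the once-an-hour heartbeat are things i've invented for this post, not anything a real firewall vendor ships):

```python
import time

def filter_outbound(connection):
    # stand-in for whatever allow/deny logic the vendor actually ships
    return "allowed"

def run_firewall(get_next_connection, notify_user, heartbeat_secs=3600):
    last_heartbeat = 0.0
    while True:
        conn = get_next_connection()   # blocks until there's outbound traffic to judge
        verdict = filter_outbound(conn)
        # even when no decision needs the user's input, periodically surface
        # *some* evidence of operation (a tray message, a counter, a log line).
        # if that evidence stops appearing, the user has a chance to notice
        # that the firewall has silently stopped doing its job.
        now = time.time()
        if now - last_heartbeat > heartbeat_secs:
            notify_user("firewall active - most recent verdict: %s" % verdict)
            last_heartbeat = now
```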

how about a scanner? let's consider a transparent scanner that makes decisions for you. you never see any alerts from it because it's supposedly 'intelligent' and doesn't need input from you. what happens then is that you formulate an incorrect mental model, not just of what the scanner is doing (because you get no feedback from it to tell you), but also of how risky the internet is (because the scanner makes the decisions for you). you come to believe the internet is safe; you know it's safe because you have AV, but any specifics beyond that are a mystery to you because you're just an average user. one day you download something and attempt to run it, but nothing happens. you try again and again and nothing happens. then you realize that your AV may be interfering with the process, and since you've come to believe the internet is safe rather than risky, you decide your AV must be wrong to interfere, so you disable it and try again. congratulations, your incorrect mental model (fostered by a lack of feedback in the name of transparency) has resulted in your computer becoming infected.
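
to put the difference in code-shaped terms, here's another invented sketch (the names are mine, not any real scanner's) contrasting a scanner that blocks silently with one that tells you what it did and why:

```python
def handle_detection(path, quarantine, notify_user, transparent=False):
    quarantine(path)  # the file is neutralized either way
    if transparent:
        # silent mode: all the user ever sees is that "nothing happens"
        # when they try to run what they downloaded
        return
    # visible mode: the user learns that the scanner acted *and* why,
    # so "maybe my AV is interfering" doesn't harden into "my AV must be wrong"
    notify_user("blocked %s: flagged as malicious and moved to quarantine" % path)
```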

we shouldn't beat too hard on the average users here, though. i have to confess that even i have been a victim of the effects of transparency. a few years ago, when i was starting to experiment with application sandboxing for the first time, i tried a product called bufferzone. in fact, i tried it twice, and both times i failed to formulate an accurate mental model of how it was operating. bufferzone tried to meld the sandbox and the host system together so that the only clue you had that something was sandboxed was the red border around it - and not just around running processes, either; files on your desktop could have red added to their icons to indicate they were sandboxed. but since i was new to sandboxing at the time, i didn't appreciate what that really meant, and as a result, each time i removed bufferzone i was left with a broken firefox installation and had to reinstall.

when we talk about transparency in government, we're talking about being able to see what's going on. for some reason, however, when we talk about transparency in security software we're talking about not seeing anything at all - we're talking about invisibility. invisible operation can only be supported if we can make the software intelligent enough to make good security decisions on our behalf. lenny zeltser offers the church-turing thesis in support of this possibility, but i'd like to quote turing here:
"It was stated ... that 'a function is effectively calculable if its values can be found by some purely mechanical process.' We may take this literally, understanding that by a purely mechanical process one which could be carried out by a machine. The development ... leads to ... an identification of computability † with effective calculability" († is the footnote above, ibid).
security decisions necessarily involve a user's intent and expectations - neither of which can be found by 'purely mechanical processes', and therefore neither of which can be used by security software making decisions on our behalf. the decisions made by software must necessarily ignore what you and i were trying to do or expected to happen. that kind of decision-making isn't even sophisticated enough to handle pop-up blocking very well (sometimes i'm expecting/wanting to see the pop-up) so i fail to see how we can reasonably expect to abdicate our decision-making responsibilities to an automaton of that calibre.
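
just to illustrate that last point, here's roughly what a purely mechanical pop-up blocking decision looks like (again, invented names, not any real browser's logic) - notice that the user's intent simply isn't among the inputs it has to work with:

```python
def should_block_popup(opened_by_script, triggered_by_click, domain_on_allowlist):
    # every input here is mechanically observable; the one thing that actually
    # determines whether the pop-up is wanted - the user's intent - isn't
    # available to the program at all, which is why rules like these sometimes
    # block windows the user was deliberately trying to open
    if domain_on_allowlist:
        return False
    if triggered_by_click:
        return False              # a click suggests intent, but only suggests it
    return bool(opened_by_script) # everything else a script opens gets blocked
```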

transparency in security software is not a pro-usability goal; it's an agenda put forward by the lazy, who feel our usability needs would be better addressed if we could all be magically transported back to a world where we didn't have to use security software any more. designing things so that you don't actually have to use them doesn't make them more usable - it's just chasing a pipe-dream. true usability would be better served by facilitating the harmonization of mental models with actual function, and that requires (among other things) visibility, not transparency/invisibility.
