Posts Tagged ‘featured’

The Ego Dilemma

by Hang

I love meeting engaged people when I’m drunk because it allows me to ask my most drunkenly assholish question ever:

“So, are you guys going to sign a pre-nup?”

Roughly two thirds of the time, they give some version of an acceptable answer:

  • Yes
  • No because we have no assets
  • No because, while it minimizes the fallout from a divorce, we feel it increases the chance of one by starting the marriage off on the wrong footing, so we’d rather not risk it.

But about one third of the time, I get my absolutely most favorite answer of all, which is

  • No because we don’t believe it’s likely we’ll get divorced.

It’s my most favorite answer of all because, after many years of experience, I’ve found that it’s the best way to force people to actually grapple with the ego dilemma.

The ego dilemma goes something like this:

“So, why don’t you think you’re going to get a divorce? Nobody enters a marriage expecting a divorce, yet many of them do.”

“Well, sure, other people get divorces but we have X & Y and that makes our marriage special”

“Well, yeah, but there were plenty of people who thought they were also X & Y at the start of their marriage, and they eventually found out that didn’t help them much in the end”

“OK, but did those people have Z-which-is-so-uniquely-rare-only-we-have-it?”

“You’re right, they didn’t have Z, but when asked a similar line of questioning, they had the same reaction, except they put in their own Z*, unique only to their marriage. It didn’t help them much either”

“Look… we’re just SPECIAL, OK?”

It’s the “Look, we’re just SPECIAL” which is the hallmark of the ego dilemma. It might not ever be as blatantly obvious as that, but it’s always hidden in there somewhere.

The ego dilemma is the belief, against reasonable evidence, that there is something unique contained in your ego that challenges previous historical experience. In short, the ego dilemma would be a perfectly reasonable assumption if you lived in a movie where you were the main character but a deeply tricky one in the real world.

Other example ego dilemmas include believing you’re of significantly above average intelligence, setting aside your life so that you can “make it” as a famous actor/musician/sports star/writer, thinking you WILL get the girl with that desperately creepy romantic gesture or, if you’re coming here from Hacker News, assuming that your startup has a reasonable chance of success commensurate with the effort you’re putting into it.

The truly frustrating thing about the ego dilemma is that it tells you nothing of any value. Recognizing that you’re caught in an ego dilemma doesn’t mean that you’re wrong. You could, after all, be the next Mark Zuckerberg. Someone has to be, after all. But it’s also likely that you’re a clueless idiot who’s utterly convinced by your own fallacious arguments. We know this intellectually because we’ve all experienced the ego dilemma from the outside: you’re trying to convince someone that they’re just plain wrong but they keep on returning to what makes them SPECIAL. And if you’ve experienced it from the outside, it means someone has experienced it from the outside with you.

When confronted with the ego dilemma, there are two wrong reactions and one right reaction.

The first wrong reaction is to aggressively try and deflect yourself away from an ego dilemma: “Oh, yeah, I probably SUCK at programming but I just don’t know it yet”. STFU: that you can even conceive that you suck at programming is proof positive that you’re above average, and your sanctimonious faux-modest attitude isn’t fooling anyone, including yourself. Deep inside, you still think you’re an awesome programmer and so you still have an ego dilemma.

The second wrong reaction is to instantly assume the question is futile and throw your hands up in the air. “Who can ever KNOW if I’m smart or not?”. Obviously, you don’t live in a world where you believe that to be true. You still think and act like a person who believes they are smart.

Unfortunately, the right way to deal with the ego dilemma is tricky and complex and deserves an entire post of its own. It really involves revamping your entire belief structure into something deeply probabilistic, with a much finer and more nuanced representation of ignorance, which I promise to write about at a later date when I’ve fully processed what I’m actually doing.

But the absolutely most fascinating thing about the ego dilemma, and the reason why I so love torturing the almost-married, is that even if you fully agree with and accept the argument and logic behind the ego dilemma, even if you’re an otherwise intelligent and reasonable person who doesn’t commit the obvious errors against rationality, when confronted with an actual ego dilemma from the inside, knowledge of the ego dilemma helps you barely at all.

The ego dilemma is what I call an unthinkable thought: you can almost see it slip around people’s heads, evading capture. It’s so fascinating to me, watching otherwise intelligent people utterly unable and unwilling to grapple with the ego dilemma set in front of them.

Back to our married couple:

“So you understand what an ego dilemma is now?”

“Yes, it all seems very logical and well thought out”

“So you see how it applies to you signing a pre-nup?”

“Oh? No, that doesn’t count, our pre-nup is special”

“What? But saying it’s special is how you RECOGNIZE it’s an ego dilemma”

“It is… but this is a special exception to the ego dilemma because of…”

“ARGH”

Anything you think is either unoriginal, wrong or both

by Hang

I first discovered this obviously wrong truth when I was doing my honors thesis. Time and again, I would come up with a novel idea or a neat algorithmic trick. Some of them, I would discover, had already been invented 3, 5, sometimes 10 years before I came up with them. But there were ones I was absolutely sure nobody had published before, because I had scoured the literature and covered every approach. Well, all of those original ideas turned out to have some hidden, unforeseen flaw that rendered them either trivial or actively stupid. This led me to formulate the belief that “anything you think is either unoriginal, wrong or both”. Like all obviously wrong truths, it has the paradoxical property of being obviously wrong and also true.

The premise for the statement comes from the simple observation that good ideas survive and bad ideas die. This means there exists an entire class of awful ideas that people come up with time and again only to eventually discover their wrongness and then abandon them. Every person who discovers them believes themselves to be wholly original since nothing of the sort exists in the world and each of them is met with disappointment, sometimes after many years of sweat and toil. But because failures are almost invisible, they leave no warning signs to future generations that this is an awful idea that should be avoided*.

“Anything you think is either unoriginal, wrong or both” is an acknowledgment of your own stupidity. Your first instinct, when you come up with a new idea, should be to try and find out if anyone else has done it before. Your second instinct should be to try, again, to find out if anyone’s done it before. Your third, fourth and fifth instincts are to ask: how come everyone else figured out this was a dumb idea and I haven’t? If you’ve gotten this far and you still haven’t discovered anything useful, you should start feeling a little bit uneasy; it probably means you weren’t smart enough to discover how wrong you are.

If you have discovered the prior art or the fatal flaw, then breathe a small sigh of relief. Unoriginal ideas are GOOD, wrong ideas are GOOD. An unoriginal but right idea is still valuable to all the other people who’ve never heard of it, and chances are, if you’ve never heard of it, there will be a significant fraction of the population to which bringing this idea contributes value. Wrong ideas teach you more about the world than right ideas because they expose a discrepancy between your expectations and the world. The corrective force of wrong ideas is what allows you to deftly cut to the core of any issue and tease out just where assumptions are weak and likely to fail.

But if you’re lucky, over the course of your life, you’re going to stumble across many ideas which are both original and right, in which case it’s still better to treat them as unoriginal and wrong. Believing an idea is unoriginal and wrong makes that idea do more work. You attack it more fiercely and from more angles. You keep on asking people if the idea sounds familiar and you’re eager to seek feedback because you’re so damn curious to discover why it could be so wrong yet elude you for so long. In doing so, you dissociate the idea from your ego so that you can take criticism about it calmly and dispassionately. Eventually, that drive of curiosity will force you to action, just to finally prove how this idea is flawed. Treating an idea as unoriginal and wrong means that the only standard you’re willing to accept is success. This brings a clarity of purpose that cuts through the confusion when executing upon that idea. Other people may be willing to make excuses or caveats that salve their egos but, as far as you’re concerned, if an idea is not successful, it’s not right**.

“Anything you think is either unoriginal, wrong or both” is an idea that also applies to itself. I’ve been slowly chewing over this idea for almost four years now and it’s been frustrating to me that, so far, I haven’t been able to find someone else who’s expressed a similar sentiment, which, by its own logic, makes it wrong. I’m putting this out there to invite the embarrassment of someone pointing out the obvious source or the obvious flaw that I’ve managed to miss for so long. Please, tell me how I’m stupid; it would be a welcome relief.

*Some people, when first discovering this problem, come up with elaborate schemes of recording all of these common awful ideas so that future generations can avoid them. This, unfortunately, is a common awful idea.

** Not right and wrong are different concepts, in the same way that not being a millionaire is different from being homeless.

April 23 2009

Friennuendo

by Hang

Summary:

Friennuendo is an attempt to add social nuance and avoid social awkwardness when sending a friend request to a newly made acquaintance.

The Inspiration:

In real life social interaction, friendship is not formed through an explicit request. Rather, it occurs over a gradual period of increased bonding. The implicit nature of friendship allows either party to deescalate from friendship while maintaining face for both parties. Because each friendship gesture has plausible deniability, mutual ignorance is maintained.

In social network sites, because friendship requests are explicit, the potential for social awkwardness exists. People fear sending a friend request and having it rejected due to a misinterpretation of the closeness of the relationship, or receiving a friend request and then being forced to decide between explicitly rejecting or uncomfortably accepting the request.

Friennuendo is an experiment in whether some of the social nuance of real world friendship can be brought into the online world in a usable and effective manner.

The Concept:

Friennuendo allows for traditional unilateral requests but also introduces a concept of “mutual requests”. Mutual requests only trigger when both parties have made some move towards friendship and reveal no other information if this is not the case. This way, one person is able to send a mutual request and be secure in the knowledge that the other party will never discover this unless they also reciprocate interest.
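
As a rough illustration, here is a minimal Python sketch of how a mutual request trigger might work. The class and names are my own invention, not part of any actual Friennuendo implementation; the point is only that interest is recorded privately and revealed exclusively on reciprocation:

    class MutualRequests:
        def __init__(self):
            # Pairs of (sender, recipient); never exposed to either party.
            self._interested = set()

        def express_interest(self, sender, recipient):
            self._interested.add((sender, recipient))
            if (recipient, sender) in self._interested:
                # Both parties have moved towards friendship: reveal the match.
                return (sender, recipient)
            return None  # the recipient learns nothing at all

    requests = MutualRequests()
    assert requests.express_interest("alice", "bob") is None      # stays hidden
    assert requests.express_interest("bob", "alice") is not None  # now revealed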

The Design Space:

The first iteration of Friennuendo used the metaphor of a line and visibility. By default, the participant would start on the left side of the line and the person to be friended would be on the right side. Users can move in discrete steps across the line, and if one user moves to within the visibility range of the other user, a friend request is sent. The first iteration of Friennuendo also included the idea of private areas which only the owner could access. Private areas allow users to “hide” from unwanted friend requests by always remaining invisible.


First iteration of Friennuendo

Building this first iteration of Friennuendo illustrated a number of design dimensions which affect the ultimate social semantics. Tweaks in these dimensions allow the system to express different social nuances (a rough code sketch of the basic model follows the list):

  • Discrete vs Continuous: Movement can be either stepwise or continuous. A discrete model allows users to perform fine-grained reasoning about “if I move 2 squares and they move one then…” which would not be possible with a continuous model.
  • Number of steps: Altering the size of the strip affects the social nuance of what it means to move forward. At its simplest, requiring only a single move to be visible to the other person is mechanistically identical to the conventional “Add as friend” system. Adding more steps allows for expressing both a wider and different range of social nuances.
  • Visibility range: Moving within the visibility range reveals your interest but does not mean that friendship is automatic or guaranteed. How close someone needs to get before they are visible also has deep implications for what types of messages can be expressed.
  • Private Areas: Private areas provided a buffer against overly aggressive seeking of friend requests and also allowed a user to “hide” by moving their avatar backwards so they could never be visible.
  • Unequal Visibility: In the initial design, me seeing you always implied that you could see me, but it is also possible to create a model in which my visibility may be greater than yours.
  • Single step vs Multi step: The initial design was based on a draggable slider which allowed the user to move multiple steps at a time. This could potentially indicate to the user that, in order to become friends, they could skip over the notches and just drag the slider across. Implementing forward and backward buttons suggests to the user that he or she should only take one step at a time without actually limiting the user in his or her actions.
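
To make these dimensions concrete, here is a minimal sketch of the line-and-visibility model under one particular choice of parameters. The step count, visibility range and private-area size are illustrative assumptions, not values from the actual prototype:

    STEPS = 10       # discrete positions 0..9 along the line
    VISIBILITY = 3   # a user within this distance can be seen
    PRIVATE = 2      # each user's first two squares are private

    class FriennuendoLine:
        def __init__(self):
            # "a" starts at the left edge, "b" at the right edge.
            self.pos = {"a": 0, "b": STEPS - 1}

        def move(self, user, delta):
            # Discrete, bounded movement along the strip.
            self.pos[user] = max(0, min(STEPS - 1, self.pos[user] + delta))

        def hidden(self, user):
            # A user inside their own private area can never be seen.
            if user == "a":
                return self.pos["a"] < PRIVATE
            return self.pos["b"] > (STEPS - 1) - PRIVATE

        def request_sent(self):
            # A friend request fires once the users are within visibility
            # range of each other and neither is hiding.
            if self.hidden("a") or self.hidden("b"):
                return False
            return abs(self.pos["a"] - self.pos["b"]) <= VISIBILITY

    line = FriennuendoLine()
    line.move("a", 4)           # a steps out of their private area
    line.move("b", -3)          # b reciprocates
    print(line.request_sent())  # True: |4 - 6| <= 3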

Iterations:

I decided to explore the use of Friennuendo within an online speed dating platform. The results of some preliminary user testing indicated that the main difficulty with using the original implementation was that it did not suggest the right mental model to users. After going through several iterations, we created a prototype that had much better results in testing:


A jungle and grassland metaphor was helpful in conveying the idea of visibility but did not work thematically for our designs.

The first iteration of the new slider was conceived around jungles and grassland. Each user would have their own jungle, a portion of the playing field so thick with growth that the other user couldn’t look or step into it. This gave users a private safety area to retreat to. The area between the two jungles would be covered by grassland, with grass thick and high enough to obscure part of the playing field, but with enough visibility for the players to eventually find each other.

The Jungle and Grassland concept proved an excellent metaphor for the functionality of the sliders and the speed dating venue. However, the visual representation wasn’t considered appropriate for a dating site, a sentiment echoed by peers and users.


Another iteration that didn’t quite do what we wanted.

Another design focused on a horseshoe shaped path on which the visibility issue would be solved by adjusting the line of sight as users moved around the curve of the path. Eventually this design was deemed too abstract and too complex to create as a low-fidelity prototype.


Final paper-based prototype that we used for ad-hoc user testing

The eventual implementation preserved the isometric view from the jungle sketch but made it into a more neutral lawn. Private areas were abandoned in favor of letting users “escape” by moving off screen.


High fidelity model we used for formal user testing

This model was converted into a high fidelity mockup and eventually into a prototype:

A product pitch that we made to demonstrate how Friennuendo would work as a prototype.

Conclusion:

Friennuendo is potentially an interesting new form of social interaction which promises to allow for new ways of communication. However, for Friennuendo to be successful, it must be placed within the right context, possess the right mechanics to foster healthy rather than pathological social behavior and be presented in a way such that it’s intuitive and appealing.

April 20 2009

Intentional Unusability: Supporting deniability through unorthodox design

by Hang

This was originally published as a CHI Workshop proceeding in 2008.
original PDF

Introduction

Traditional HCI and interaction design has focused on usability. An application is usable if it is efficient, effective, easy to use, fun, or satisfies some other metric pertaining to the subjective experience of the user. One powerful tool in this approach, borrowed from psychology, is the mental model. Mental models are naïve cognitive schemas about how objects work and how one interacts with them, like “the progress bar measures how much time is remaining”. These mental models provide us with predictions and expectations about the results of an interaction, and usability is enhanced if the user’s mental model is a good fit with the actual behavior of the application.

This mental modeling approach is effective for single-user application interaction but needs to be augmented in the case of social software, because users not only have a mental model of the application, they also hold “social mental models” or “theories of mind” of the people they are interacting with. We model other people through these theories, like “John thinks he’s shy” or “Lisa likes John”. However, theories of mind differ from traditional mental models because minds are also capable of possessing theories of mind. This means such theories can be multilayered and recursive, like “John thinks I think he’s shy” or “Lisa thinks that John doesn’t know that I’m aware that Lisa likes John”.
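
As a rough illustration of how such recursive theories might be represented, here is a toy data structure. This is my own modeling choice, not one the paper prescribes, and negations like “doesn’t know” are omitted for simplicity:

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Belief:
        holder: str                    # who holds the belief
        content: Union[str, "Belief"]  # a proposition, or another belief

    # "John thinks he's shy" -- an ordinary, single-layer theory of mind.
    simple = Belief("John", "John is shy")

    # "John thinks I think he's shy" -- one layer of recursion.
    nested = Belief("John", Belief("me", "John is shy"))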

We construct and use these theories of mind to guide our social reasoning process, and they form a crucial part of how we decide how to act in social situations. When we interact via social software, the software modulates the range of interactions that are possible. The design of the software affects what theories of mind are constructed and, as a result, what users will choose to do. Thus, it becomes possible to use these theories of mind to construct a model of how user behavior will emerge from social software design, as well as how to influence and encourage certain group behaviors through this design.

Plausible deniability

Judging motivations forms an important part of our social reasoning process because motivations allow us to predict how people will react in future scenarios. Plausible deniability is the ability to hide the true motivations of our actions by providing others with a plausible alternate hypothesis or “convenient fiction” that can explain our behavior. Such motivation hiding acts as an incredibly powerful social tool by allowing us to mitigate potentially socially awkward situations (“Sorry I didn’t answer your call, my cell phone was on vibrate”) or giving us an advantage in social negotiations (playing hard to get in a relationship).

In order to support such plausible deniability in the cell phone example, the social situation has to be set up so that the following hold (a toy encoding of these conditions appears after the list):

  • I know “my cell phone is on vibrate” is a convenient fiction for me not answering.
  • I know I’ve told you that my cell phone was on vibrate.
  • I know that you can’t know for sure that my cell phone wasn’t on vibrate.
  • Therefore, you are forced to accept my convenient fiction.
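
As a toy encoding, the conditions amount to a simple conjunction. The predicate and its boolean inputs are hypothetical names of my own, since the paper states these conditions only in prose:

    def deniable(fiction_explains_behavior: bool,
                 fiction_communicated: bool,
                 observer_can_disprove_fiction: bool) -> bool:
        # The convenient fiction survives only if it explains the behavior,
        # the other party has heard it, and they cannot prove it false.
        return (fiction_explains_behavior
                and fiction_communicated
                and not observer_can_disprove_fiction)

    # "Sorry I didn't answer your call, my cell phone was on vibrate."
    print(deniable(True, True, False))  # True: you must accept my story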

It is this need to support convenient fictions that is often at odds with conventional HCI. Oftentimes, effective plausible deniability involves deliberately making software harder to use in order to enhance the ambiguity on which plausible deniability depends. This paper details several design mechanisms for supporting plausible deniability through deliberate usability degradation.

Omitting information:

Omitting information is the most direct approach to supporting plausible deniability by directly hiding the information necessary to determine motivation. For example, most email systems don’t tell you when an email you send has been read by the recipient. Although this information might be useful to the sender, it would also prevent the recipient from plausibly claiming “it must have got caught in the spam filter” when they would rather not have to bother replying to an email.

Error prone UIs:

Making user interfaces deliberately more error prone can allow users to plausibly claim they made an error when they actually did something intentionally. This can allow users to avoid appearing rude when attempting socially awkward tasks. For example, if a group event planning tool had a highly sophisticated, foolproof invitation system, it would be hard to plausibly claim that you accidentally forgot to invite somebody. Subtle UI tweaks that introduce room for error into the system would support such plausible deniability and allow users to “forget” to invite certain people to an event.

Default settings:

Default settings allow us to be ambiguous about whether we agree with the defaults of the system or whether we simply haven’t bothered to change them. For example, if the default on accepting a friend request on a social networking site is that the new friend can only see a limited part of your profile, then you could change the setting for most of your normal friends so that they can see all of your profile but keep it at the default for certain friends. Those friends who can only view the limited profile would not be able to tell if that was a deliberate decision or carelessness on your part. But such ambiguity can only be achieved if changing the setting is plausibly difficult. Thus, plausibility can be enhanced by deliberately making the setting harder to understand or placing it in a more obscure location, so that users can plausibly claim “Oh, I can’t be bothered changing that”.
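
A toy sketch of the inference an observer can draw, assuming a hypothetical profile-visibility setting rather than any real site’s API:

    DEFAULT = "limited"  # assumed site-wide default visibility

    def observer_inference(visibility_granted_to_me: str) -> str:
        # Matching the default is ambiguous: the owner may have chosen it
        # deliberately or simply never touched the setting.
        if visibility_granted_to_me == DEFAULT:
            return "ambiguous: deliberate choice or mere inertia"
        # Any deviation from the default signals deliberate effort.
        return "deliberate: the owner expended effort to change it"

    print(observer_inference("limited"))  # face-saving ambiguity
    print(observer_inference("full"))     # a costly, positive signal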

The nature of a default setting also changes the meaning of what changes from the default represent. Any change from the default indicates not only that you do not prefer the default, but that you dislike it to such an extent that you are willing to expend the effort to change that setting. If the default setting when adding friends was that they could see your full profile, then by setting someone as limited, you’re sending them the message “you’re so awkward/creepy/unpleasant that I was uncomfortable with you seeing all of my profile”. If, instead, the setting was limited by default, then the social message you send by setting someone as full is “you’re so cool and interesting and close to me that I made a special effort to give you more access to my profile”.

Perceived vs actual usability:

Plausible deniability doesn’t have to involve actual usability degradation. What is important is that you believe other people think it is difficult for you to use. This means that it could be possible to take advantage of perceptual biases to introduce perceived unusability without significantly degrading actual usability.

The effect of unusability

Social expression

Supporting plausible deniability also tends to make rude actions even ruder. Because a plausible, polite alternative is present, choosing not to use it sends the message that I want you to know that my motivations are indeed rude. This is not necessarily a bad thing in social software, as it allows users to express a larger gamut of social messages.

Plausibly denying plausible deniability

Designing for plausible deniability is only effective if users are unaware that this was your intention. Once users become aware of this, then such actions become much less credible. Thus, designers themselves need a plausible reason for their design decisions to make their software less usable.

If the initial design of the software is usable, then it is very hard to justify design decisions making it less usable. However, if it is hard to use to begin with, then designers can simply claim that improving that particular aspect of usability is not a priority. Designers can also claim their design decisions were motivated by other concerns, for example privacy or technical limitations, which can also limit usability. Finally, if all else fails, it’s always possible to pretend to be bad designers who are ignorant of the design flaws and who studiously avoid investigating them.

Conclusions:

Building social software is very different from building conventional software, and a new set of design principles and paradigms is needed for effective social software applications. Rather than focusing on usability, the most important aspect of social software is the facilitation of desirable group behaviors.

In this paper, we present a cognitive model called “theory of mind” that allows designers to predict user behavior based on a set of cognitive reasoning principles. We focus on the particular design problem of supporting plausible deniability in social software and show how software sometimes needs to be in direct violation of traditional notions of usability to effectively support such behavior. Supporting plausible deniability often involves deliberately making the system less transparent and more ambiguous through intentionally poor usability, but such design changes allow a wider range of social expression.

In the future, gathering empirical data on how theories of mind interact with software design and how users perceive others through the lens of social software behaviors would allow more accurate and powerful prediction models to be built and a better understanding of the design challenges uniquely facing social software.

Google’s lead visual designer quit due to a clash of cultures

by Hang

Douglas Bowman, Google’s lead visual designer, announced yesterday that he was leaving Google to join Twitter. At the root of it, Bowman’s decision to leave stems from a clash of cultures between the worlds of Interaction and Visual Design. The best way to understand this clash of cultures is to listen to the ghost stories each field tells the young’uns.

In Interaction Design, around the campfires at night, it’s common to hear a variant of this chilling tale:

I heard, there was this company once, where they, like, got these totally great designers to build this user interface for them and they were all excited about it being the best thing since sliced toast until they tried to watch some people use it in the real world and it, like, totally sucked. The things everyone thought were easy to use were completely confusing. Luckily, they went through several iterations of redesign and testing the thing until it became something users loved.

Interaction designers are actively trained to filter out expert opinion as a justification for design decisions. The expert, no matter how qualified and trained they are, is ultimately not the user and is ultimately ineffectual at predicting what the user is like. The only way that design decisions can be justified is through feedback from actual users. Uttering the words “I prefer…” as justification for a design decision is the quickest way to move yourself from the potentially-an-ally category to the dangerous-fool-who-must-be-neutralized category in the eyes of an interaction designer.

Over in the Visual Designer camp, a different ghost story is being passed round the campfire:

I heard, there was this company once who hired this, like, genius visual designer who built them this totally bold and brilliant design. But then, in an attempt to please everyone, the design was buried under so many focus groups and QA evaluations that the integrity of the design was destroyed and what was ultimately put up, like, totally sucked and ended up pleasing no one. Luckily, a more design friendly management was put into place and the original design was restored, which ended up creating the emotional bond with the users that saved the company.

Visual designers are trained to keep their artistic integrity in the face of pressure and to be the keepers of the secret knowledge against the tide of the aesthetically ignorant. Uttering the words “consensus seeking” as justification for a design decision is the quickest way for you to become a dangerous-fool-who-must-be-neutralized in the eyes of a visual designer.

You can see both of these dynamics play out in the Google saga. Douglas Bowman’s characterization of the design process at Google:

Yes, it’s true that a team at Google couldn’t decide between two blues, so they’re testing 41 shades between each blue to see which one performs better. I had a recent debate over whether a border should be 3, 4 or 5 pixels wide, and was asked to prove my case. I can’t operate in an environment like that. I’ve grown tired of debating such miniscule design decisions. There are more exciting design problems in this world to tackle.

The debate on border pixels dragged on because Bowman became a dangerous-fool-who-must-be-neutralized in the eyes of the interaction design team.

Similarly, on Marissa Mayer’s attempt to reach out towards the visual designers:

A designer, Jamie Divine, had picked out a blue that everyone on his team liked. But a product manager tested a different color with users and found they were more likely to click on the toolbar if it was painted a greener shade.

As trivial as color choices might seem, clicks are a key part of Google’s revenue stream, and anything that enhances clicks means more money. Mr. Divine’s team resisted the greener hue, so Ms. Mayer split the difference by choosing a shade halfway between those of the two camps.

…is so tin-eared it’s cringe-inducing. Like rich yuppies trying to connect with the less affluent by speaking the language of the “street”, Marissa reads the culture of visual design completely wrong, and her attempt at consensus and compromise ends up doing more harm than good.

The sad thing is, both of these viewpoints are perfectly justified and are the result of a counter-intuitive lesson learned. Both of these ghost stories are repeated precisely so the newbies in the field don’t end up making the same mistakes the pros once made. Unfortunately, this means that to each side, the views of the other side look like ignorance.

Look, I was like you once, and then I learned better. So I’m just going to sit here and wait for the other shoe to drop for you, Mmmkay? Do you want to hear a ghost story while we’re waiting?

So what you end up getting is a staring contest where each side is waiting for the other to finally blink. Unfortunately, in this case, Douglas Bowman blinked first, and both Douglas and Google were impoverished by it.

PS: In anticipation of the criticism that I have no business talking about visual design when the design of my own site sucks so much, I know, it’s being fixed, be patient.

Provably Unsolvable Security

by Hang

One interesting, unnoticed property of security is that it often contains provably unsolvable problems. Generally, we tend to split problems into those that have been solved and those which we don’t know if they can be solved. Nobody knows right now how to build a 100+ mpg internal combustion engine, but that’s because building a 100+ mpg engine is hard. We imagine that if we throw enough smart people and technology at a problem, it will inevitably be beaten down and solved, or we’ll reach a point where it’s not worth the effort to solve. Nobody imagines that building fuel efficient engines is impossible.

Translating that same thinking to security, we imagine security problems are a matter of effort. If only we were willing to expend enough resources, security problems could get solved. The TSA takes this approach to airline security: breaches, in this view, occur because there is a lack of political will, and if only we had enough regulations, screeners, X-ray backscatter machines and cameras, airport security would become a solved problem.

However, the fundamental flaw with airport security is that what makes a good “dangerous” is how you use it and not what it’s made out of, so it’s impossible to develop an effective screening process outside the context of use. A laptop battery is pretty much just an explosive which is designed not to explode (sometimes unsuccessfully). That planes aren’t being brought down every day by laptop explosions is not because the batteries can’t explode but because nobody wants them to explode. With all the technology you want, it’s impossible to look at a laptop battery sitting in a scanning machine and decide whether someone will want it to explode.

Convincing people that security problems can be provably unsolvable is the hardest step because, often, the actual proofs of unsolvability are fairly simple. Normally, we assume that an explanation of why something can’t be done is comprehensible only to experts, because it’s more accurately a proof of why it can’t be done yet, which requires you to understand what can be done now. As a result, we take explanations of infeasibility on a certain degree of faith and deferral to expert opinion; we use zero knowledge rather than first order proofs.

Security flips this around. Proving something secure is hard because it requires you to know all the ways it can be attacked, whereas proving something can never be secure is easy because it requires a simple application of first principles. This is an important consideration in policy debates because one common tactic for bamboozling your opponent is to force them into using first order proofs where zero knowledge proofs would have been more appropriate (the Intelligent Design movement uses this to great effect with its “teach the controversy” and “let the children decide” messages). This means that unless your opponent is aware of this curious inversion in the structure of a security debate, arguments about security can often seem seedy and underhanded because they so closely resemble debates in other, less reputable areas.

The result of all this is that security is one of those areas with a disproportionate amount of astoundingly bad, poorly thought-out policy, and a large part of this can be explained by the communication mismatch between security experts and managers, where “it can’t be done” means “it’s impossible to do” but is interpreted as “I don’t know how to do it and I’m too lazy to find out”.
