
Provably Unsolvable Security

by Hang

One interesting, unnoticed property of security is that it often contains provably unsolvable problems. Generally, we tend to split problems into those that have been solved and those we don’t yet know how to solve. Nobody knows right now how to build a 100+ mpg internal combustion engine, but that’s because building a 100+ mpg engine is hard. We imagine that if we throw enough smart people and technology at a problem, it will inevitably be beaten down and solved, or we’ll reach a point where it’s not worth the effort to solve. Nobody imagines that building fuel-efficient engines is impossible.

Translating that same thinking to security, we imagine security problems are a matter of effort: if only we were willing to expend enough resources, security problems could get solved. The TSA takes this approach to airline security. In this view, airline security breaches occur because of a lack of political will, and if we only had enough regulations, screeners, X-ray backscatter machines and cameras, airport security would become a solved problem.

However, the fundamental flaw in airport security is that what makes a good “dangerous” is how it is used, not what it is made of, and so it’s impossible to develop an effective screening process outside the context of use. A laptop battery is pretty much just an explosive which is designed not to explode (sometimes unsuccessfully). That planes aren’t being brought down every day by laptop explosions is not because the batteries can’t explode but because nobody wants them to explode. With all the technology you can imagine, it’s still impossible to look at a laptop battery sitting in a scanning machine and decide whether someone will want it to explode.

Convincing people that security can be provably unsolvable is the hardest step, because the actual proofs of unsolvability are often fairly simple. Normally, we assume that an explanation of why something can’t be done is comprehensible only to experts, because it is more accurately a proof of why it can’t be done yet, which requires you to understand what can be done now. As a result, we take explanations of infeasibility on a certain degree of faith and deferral to expert opinion; we use zero-knowledge rather than first-order proofs.

Security flips this around. Proving something secure is hard because it requires you to know all the ways it can be attacked, whereas proving something can never be secure is easy because it requires only a simple application of first principles. This is an important consideration in policy debates, because one common tactic for bamboozling your opponent is to force them into using first-order proofs where zero-knowledge proofs would have been more appropriate (the Intelligent Design movement uses this to great effect with its “teach the controversy” and “let the children decide” messages). This means that unless your opponent is aware of the curious inversion in the structure of a security debate, arguments about security can often seem seedy and underhanded because they so closely resemble debates in other, less reputable areas.

The result of all this is that security is one of those areas with a disproportionate amount of astoundingly bad, poorly thought out policy. A large part of this can be explained by the communication mismatch between security experts and managers, where “it can’t be done” means “it’s impossible to do” but is interpreted as “I don’t know how to do it and I’m too lazy to find out”.

Nov 12th (day 30): No Evil Geniuses

by Hang

Yesterday, I wrote about the mystery of why spam was so bad at being spam, and I claimed that it was a mystery that seemingly defied explanation. None of the possible answers I proposed was really satisfying. To answer this question, I think you have to look further afield and ask some other interesting questions: “Why has there not been a non-pathetic foreign terrorist attempt on US soil since 9/11?” and “Why have there been only a handful of truly crippling computer viruses in the last 10 years?”

Our first instinct is that such occurrences are rare because they are difficult. However, neither of these tasks is actually difficult. Two guys in a van managed to terrorize Washington DC for a month, and no amount of security precautions could have prevented them from doing so. The Sasser worm was written “by someone that could barely get the code working” and attacked a security flaw that had been noted and patched months earlier, and other worms haven’t been much more sophisticated. Such things are not trivial, but neither are they of such herculean difficulty as to explain their rarity. Just why exactly isn’t there a legion of evil geniuses routinely executing the downfall of society?

An evil genius is anyone who is both a genius and evil, where “evil” encompasses everything from trolling to keying someone’s car to pedophilia, and “genius” is anything which evokes some degree of “huh, why didn’t I think of that?” or “that’s clever”. As a rough approximation, we assume that the number of evil geniuses can be calculated by multiplying the proportion of people who are geniuses by the proportion of people who are evil. But what I’ve noticed through looking at a huge range of diverse social systems is that evil geniuses exist at a stunningly lower frequency than this naive calculation would have us believe. The number of evil geniuses is so far off from the naive calculation that it indicates our model of the world with regard to evil geniuses is unsalvageable and needs to be replaced, not just tweaked.
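The naive calculation above can be made concrete. Here is a minimal sketch in Python; the base rates and population figure are made up purely for illustration, not taken from the post:

```python
# Naive model: treat "genius" and "evil" as independent traits, so the
# rate of evil geniuses is simply the product of the two base rates.
# All numbers below are hypothetical, chosen only to show the arithmetic.

def naive_evil_genius_rate(p_genius: float, p_evil: float) -> float:
    """Fraction of the population that is both, assuming independence."""
    return p_genius * p_evil

population = 300_000_000                   # rough US population
rate = naive_evil_genius_rate(0.01, 0.05)  # 1% geniuses, 5% evil (hypothetical)
print(round(population * rate))            # naive prediction: 150,000 people
```

The post’s claim is precisely that the observed number falls far below this product, which is what breaks the independence assumption baked into the model.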

Such a claim has radical implications for the design of social systems, since so much of our thinking about security, about design and about society is obsessed with preventing evil geniuses from wreaking havoc that we don’t even stop to notice that they aren’t.

Part of the reason we’re so obsessed with evil geniuses is that we think we know what they’re like: they’re just like us, except they actually do the evil things we merely think about. Bruce Schneier, one of the most widely read security experts in the world, writes:

Uncle Milton Industries has been selling ant farms to children since 1956. Some years ago, I remember opening one up with a friend. There were no actual ants included in the box. Instead, there was a card that you filled in with your address, and the company would mail you some ants. My friend expressed surprise that you could get ants sent to you in the mail.

I replied: “What’s really interesting is that these people will send a tube of live ants to anyone you tell them to.”

– The Security Mindset

“Why golly”, the man with the Security Mindset says, “I’ve found a great way to exploit this system. It’s lucky I’m a good person because all that is stopping me from executing this exploit for my personal gain is my innate goodness.”

It’s easy to imagine a person who is just like me except without my innate goodness. As a result, it’s easy to design a system with defenses against such a mythical attacker. What we completely fail to notice is that, most of the time, such an attacker simply does not materialize. But even though evil geniuses might not be a major problem, evil behavior most definitely is and it’s in our best interests to design a system which is resilient to pathological actions such as trolling, flaming and abuse.

Our naive view of the world mentally segments people into “good people” and “bad people”. Good people are people like us, and bad people are people like us, except without any morality. The work of Milgram and Zimbardo shows, though, that goodness is largely a property of circumstance, and the more correct way of thinking about the world is that most people are ordinary people and there are good situations and bad situations. If evil people are inherently evil, then it’s easy to imagine an evil genius. However, if evil is a product of the situation, then maybe the reason there are no evil geniuses is that no one gave them permission to be evil geniuses. The reason Milgram and Zimbardo managed to cause people to become evil was by relying on authority to signal that such actions were permissible. Genius, by definition, cannot be provided such social proof, because you’re doing something new and unexpected. Without such social proof, it’s very hard to create an evil situation and, as a result, evil genius is hard to come by.

Such a statement has radical implications for design: you can cause pathological behavior simply by putting in visible mechanisms to prevent pathological behavior. We look to social cues within a system to understand the acceptable bounds of behavior, and in certain cases one could reason that if the designer spent so much time building safeguards against certain behaviors into the system, such behavior must be prevalent and thus acceptable to experiment with. In some cases, rather than obsessing about the security of a system, the correct approach is to leave the system deliberately unsecured so that it does not even occur to people to test its security.

The “No Evil Geniuses” hypothesis is a radically different way to think about the world, and one I don’t think I can completely justify. At the same time, after having looked at all of these disparate cases for which there simply isn’t any other good explanation, it’s one I’ve been increasingly forced to accept. Whenever I’ve gone out on a hunt to spot a rich treasure trove of evil geniuses, I’ve never been able to find them. Maybe there’s a simpler, more coherent explanation for all of this, but until I find it, I’m going to call this the No Evil Geniuses Paradox.

Nov 11th (day 29): Bumblebees and Spam

by Hang

Bumblebee Labs is called Bumblebee Labs because of the following quote:

Aerodynamically, the bumblebee shouldn’t be able to fly. But the bee does not know this, so it goes on flying anyway – Antoine Magnan

A bumblebee is an occurrence which cannot be explained by our current theory and thus demands special attention. Bumblebees are the keys to uncovering areas where our understanding of the world drastically fails and to constructing a better theory of what is happening. But to even notice bumblebees, you have to be on the lookout for them. You have to make a commitment to noticing when your theory goes awry and be willing to dig for an answer.

I was reminded of bumblebees while reading about how researchers infiltrated the Storm botnet and discovered that the response rate to spam is 1 in 125 million (Slashdot, Original Paper). How is it that spam is still so awful in this day and age? Spam should be just like any other business: those who are incompetent at it go out of business and those who do it best thrive. Yet this does not explain why spam has such abysmal conversion rates and why spammers aren’t innovating and experimenting with better ways of spamming.
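To get a feel for what a 1-in-125-million response rate implies, here is a back-of-envelope profit calculation. Only the response rate comes from the study cited above; the message volume, cost, and revenue figures are entirely hypothetical:

```python
# Back-of-envelope spam economics under a flat conversion model.
# Only the response rate (1 in 125 million) comes from the cited study;
# every other number is a hypothetical illustration.

def campaign_profit(emails_sent: int, response_rate: float,
                    revenue_per_sale: float, cost_per_million: float) -> float:
    """Expected profit: sales revenue minus sending cost."""
    sales = emails_sent * response_rate
    revenue = sales * revenue_per_sale
    cost = (emails_sent / 1_000_000) * cost_per_million
    return revenue - cost

# 350 million messages at $100 per sale and $100 per million messages sent
print(campaign_profit(350_000_000, 1 / 125_000_000, 100.0, 100.0))
```

At these made-up numbers the campaign loses money, which is the point: at conversion rates this low, profitability hinges entirely on pushing the sending cost toward zero, which is exactly what botnets do.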

Spam seems like the perfect vehicle for a data-driven, analytic approach. Each email is constructed programmatically, websites are created in a largely automated fashion and the path from action to profit is easy to chart out. All the necessary ingredients for Spam 2.0 seem to have been around for the last 10 years, and yet spam is still universally awful.

How do we explain the quality of spam then? I can think of a couple of possible explanations, none of them satisfying:

  • The spam we are getting now has already been optimized and is the spam which maximizes conversion rates. If so, I would be very surprised, as this seems to violate almost everything we know about marketing.
  • Spam suffers from a supply problem, not a demand problem. Spammers only profit when there’s something to sell, and there are simply not enough people wanting to sell via spam to bother increasing the response rate. Andrew Chen writes about how your ad-supported Web 2.0 site is actually a B2B enterprise in disguise, and the same issues could be facing spammers. However, for spammers hawking V1agra, the potential supply seems limitless, so I’m going to discount this theory for now.
  • Quality is totally irrelevant to a spam campaign: high-quality spam and low-quality spam get close enough to the same response rate that it doesn’t matter. This might be true if you view spam not as an inducement but as a provider. The purpose of spam is not to convince you that you need a 12 inch h4rd C0ck; it’s to be there for those who have already decided a 12 inch h4rd C0ck is what would rock their world. If this is the case, it doesn’t matter what you put in the messages. However, this does not seem to account for Nigerian scam emails, which are very much set up as inducements.
  • Spam is an oligopoly and hard to break into. It might be that there really are only 3 or 4 actual spammers in the world and the market is hard to enter. If so, it could be that none of them have the necessary awareness or expertise to conduct a data-driven campaign. There does not seem to be any obvious structural element of spam, though, that would make this the case. Given how many Silicon Valley titans have been overthrown by entrepreneurs, spam doesn’t seem any different.
  • Spammers are all universally stupid. No one in spam is smart enough to take a data-driven approach. This may be true, but if so, it points to a gaping niche in the market which has been open for an extraordinarily long time. By all rights, an entrepreneur should have filled this space by now.

None of these explanations is wholly satisfying, and none of them just plain sounds right. There is one other explanation I have, though, which holds some tantalizing clues as to what the true answer might be. However, this explanation is so paradoxical, so shocking and so counter to our intuitive experience that all I can do today is lay the necessary groundwork to show how the problem of spam is a bumblebee that defies resolution by any of our conventional theories. If you have a better explanation for why spam is the way it is, post it in the comments. Otherwise, tune in tomorrow to see how the problem of spam can be explained by the fact that there are no evil geniuses.

Copyright ©2009 BumblebeeLabs