This is the third in a weekly series of posts on various aspects of social software design I find interesting; here is the full list. Each of these posts is written over the course of a few hours in a straight shot. Contents may be mildly idiosyncratic. To vote on what I should write about next, go to this Quora question.


And one day the wizards of LambdaMOO announced “We’ve gotten this system up and running, and all these interesting social effects are happening. Henceforth we wizards will only be involved in technological issues. We’re not going to get involved in any of that social stuff.”

And then, I think about 18 months later — I don’t remember the exact gap of time — they come back. The wizards come back, extremely cranky. And they say: “What we have learned from you whining users is that we can’t do what we said we would do. We cannot separate the technological aspects from the social aspects of running a virtual world.”

Clay Shirky – A Group Is Its Own Worst Enemy

Social software is deceptive because it looks like conventional software but does not behave like conventional software. You can take a piece of social software and it seems possible to analyze it in terms of feature set, user experience, traction and all the other conventional tools used to analyze software. But to do so fundamentally misses its essential nature. It is impossible to split social software into a technical system distinct from a social system and analyze each piece separately. Instead, every piece of social software is inherently a socio-technical system.

To illustrate with an example (borrowed from Latour), let us assume that we have a road in a quiet residential area where the main problem is that cars drive too fast down it. There are at least two possible ways of solving this problem: adding a speed bump or adding a “slow” sign at the start of the road.

Speed bumps provide an obvious physical mechanism that forces cars to slow down: driving too fast results in an uncomfortable jolt and possible damage to the car. If we were technical analysts, we could fully understand the purpose and mechanism of a speed bump through decomposition. But a “slow” sign has no intrinsic property of slowness about it. Through technical decomposition, we can see that the molecules of the “slow” sign barely interact with the molecules of the car. Instead, “slow” signs operate purely through the social mechanisms that society has set into place. I know that if I were to ignore a “slow” sign, there is the possibility of a policeman catching me, and this could lead to a large fine which would ruin my day (not to mention my social conditioning to be lawful regardless of circumstance). Both the speed bump and the “slow” sign achieve roughly the same goal but through two very different mechanisms.

Likewise, with all social software, only part of the mechanisms that ensure success are encoded in the technology platform. The rest are encoded in the social mechanisms of the community of users running it. Rather than analyze social software from the perspective of features and code, it is far more useful to analyze it in terms of which mechanisms are necessary for the software to succeed, and only after that to figure out the correct place to put them.

This makes social software a very different beast from conventional software because social software runs on humans in conjunction with machines. While machines can be manipulated by typing words into a text file and hitting compile, humans are much more finicky and dynamic (although, in the case of some game dynamics, almost as predictable and reliable). What this means is that every piece of social software has a huge chunk that is both of limited visibility and constantly in flux. What’s more, the same code base running on different communities produces intrinsically different pieces of social software, and lessons learnt from one community cannot be directly applied to any other. On top of that, while only the developers have the privilege of checking in source code, any particular user can affect the social norms of a community. Unless you start development with these realities baked into your understanding of the world from the very beginning, you cannot produce humane social software.

The most visible arena where social software fails is as communities scale. Small, tight-knit communities are capable of having a rich social layer, and good communities manage to practically design themselves with merely the benign neglect of the software creators. However, as communities grow, the social fabric becomes weaker and weaker and less capable of supporting sophisticated mechanisms. Unless technical solutions are put into place, the community degrades into an underwhelming mess.

Last week, I talked about the Evaporative Cooling Effect and how one way to mitigate it is through unequal reputational roles for different members. In a small community, it is possible to do this purely through the social layer. Participants are able to remember who has particularly good domain expertise, who displays generosity and kindness, and who is abrasive but knowledgeable. Rich mental models of reputation are formed and different members of the group are treated in different ways: abusive behavior leads to shunning and admirable behavior leads to respect. But there are intrinsic cognitive limits to how much reputational information we can hold and process (Dunbar’s number is commonly cited here, usually incorrectly). Once communities exceed this limit, providing reputational distinction through purely social norms becomes impossible. Instead, reputation must be augmented through technical means (action logs, karma, reviews, etc.).
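To make that concrete, here is a minimal sketch of what such technical augmentation might look like: an append-only action log that aggregates into a karma score. The class name, the action names and the weights are all hypothetical illustrations, not any particular site’s system.

```python
from collections import defaultdict

class KarmaLedger:
    """Remembers reputation signals so the community no longer has to."""

    def __init__(self):
        self.events = []                 # append-only action log
        self.scores = defaultdict(int)   # user -> aggregate karma

    def record(self, user, action, weight):
        # Each visible action (an upvoted answer, an upheld flag, a helpful
        # review) becomes a durable, machine-readable reputation signal.
        self.events.append((user, action, weight))
        self.scores[user] += weight

    def reputation(self, user):
        return self.scores[user]


ledger = KarmaLedger()
ledger.record("alice", "answer_upvoted", 10)
ledger.record("bob", "post_flagged", -2)
print(ledger.reputation("alice"))  # 10
```

The point is not the bookkeeping itself but that the bookkeeping now carries memory the social layer can no longer hold on its own.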

However, overdeveloped technical systems can often be a much bigger problem than underdeveloped ones. It’s a common failing of technologists to see software as a hammer that can drive in every social nail. Access control and privacy are a perfect example of this kind of thinking.

Access control mechanisms are often developed under the assumption that no social layer exists whatsoever and that all access control must be done purely through the technical layer. While this leads to cleanly analyzable assumptions and formally verifiable proofs, it also leads to rigid and inflexible access control systems which do not map onto people’s actual work patterns at all. This, ironically, means that workers routinely bypass the technical access control mechanisms, emailing “confidential” files around and relying purely on social mechanisms to prevent unwarranted access.
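As a rough sketch of why such systems get routed around, consider an access-control check that can only answer yes or no from a static table (the file name, user names and helper are invented for illustration):

```python
# A deliberately rigid ACL: access is exactly what the table says, no more.
ACL = {
    "q3_budget.xlsx": {"alice", "bob"},   # hypothetical document and users
}

def can_read(user, document):
    return user in ACL.get(document, set())

# Carol is covering for Bob this week. The technical layer has no way to
# express "temporary, socially sanctioned access", so the check says no...
print(can_read("carol", "q3_budget.xlsx"))  # False
# ...and in practice Bob simply emails Carol the file, falling back on the
# social layer the ACL pretended did not exist.
```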

This same security thinking has been applied to the consumer social arena with even more absurd results. Technologists love to crow about how “privacy is dead” and that they now live their lives in a purely binary completely-in-public or not-on-the-internet mode. In reality, most of our sharing is done through media with rich social layers that we use to mediate our privacy. While celebrities and the occasional unlucky person thrust into the limelight end up having their private lives completely exposed, the average person still goes through life without any significant privacy violation because they manage to effectively modulate the social norms around privacy. Drunk party photos of them exist on Facebook but, as long as they take care not to friend their boss, none of their friends are enough of assholes to actively forward those pictures along. Facebook itself seems to fundamentally misunderstand this at the most basic level, and it shows in their byzantine privacy settings, which were an attempt to encode all privacy in a purely technological fashion. This is a topic worthy of a completely separate post, so I’m going to punt on the discussion for now.

The only effective way of building social software is to view code and policy as two sides of the same coin. To build a successful social system, you need to establish all the mechanisms required for a successful social design and then figure out how to keep those mechanisms in place, via either the technical or the social layer, regardless of how either of them morphs. This leads to a fundamentally different way of building compared to conventional software, and it is a large part of the reason so many technologists struggle to build compelling social experiences. Too often, people who analyze social software systems look only at the technical aspects, because those are the most visible, stable and generalizable, and completely ignore the morphing social contracts that are happening at the same time. But doing so leads to unbalanced design which either does not provide enough technology to support the social layer or ignores the power of the social layer and overcompensates with inflexible technology.

To be notified of the next Social Software Sunday piece as it’s posted, you can subscribe to the RSS feed, follow me on twitter or subscribe via email.