Posts Tagged ‘science’

July 2 2010

A scrappy way of reliable double blind taste testing

by Hang

Most amateur double blind tastings are horrible from a statistical perspective. They barely shed any insight into the truth at all and, what’s worse, they give a false sense of knowledge. Last night, I made the assertion that top shelf vodkas are indistinguishable from each other and that any perceived taste differences are purely psychological. This led me to organize a quick, impromptu blind tasting of 3 top shelf vodkas (Ketel One, Grey Goose & Ciroc) between myself and 4 other skeptical participants (in retrospect, we should have added a well vodka as a control, but we did try a well vodka after the blind tests and the difference was pretty apparent).

Our very helpful bartender marked the bottom of each glass with the vodka brand so that we could not see the labels, then we proceeded to taste & rate. Now, most amateur double blind studies I’ve seen rely on a single tasting followed by a ranking. This is somewhat fine in a large lab setting with a sufficient number of participants and samples but, in our circumstances, it would yield essentially zero statistical insight. The reason is pretty simple: among a sample of 3 vodkas, there are only 6 possible orderings. Thus, with 5 participants, it’s more likely than not that someone will get a “hit” purely by chance.
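
To see why, here’s a quick back-of-the-envelope check (a Python sketch, assuming each participant’s ranking under the null hypothesis is an independent, uniformly random pick from the 6 orderings):

```python
from itertools import permutations

vodkas = ["Ketel One", "Grey Goose", "Ciroc"]
orderings = list(permutations(vodkas))
print(len(orderings))           # 6 possible ways to rank 3 vodkas

p_hit = 1 / len(orderings)      # chance one taster nails the ranking by luck
n_tasters = 5

# Probability that at least one of the 5 tasters gets a "hit" purely by chance
p_at_least_one = 1 - (1 - p_hit) ** n_tasters
print(round(p_at_least_one, 3))  # ~0.598, i.e. more likely than not
```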

Instead, what we relied on was a double tasting procedure. Each person would sip & rank the vodkas, an independent 3rd party would then shuffle the order while we closed our eyes, and we would then sip & rank the vodkas again. What we were looking for was not whether you could correctly assign the brand to a vodka (which is relatively hard) but whether you could recognize a vodka you had just drunk (which is relatively easy). As it turns out, of the 5 participants, I was the only one who correctly determined how the vodkas had been shuffled.
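
Under the null hypothesis that a taster can’t actually tell the glasses apart, their claim about the shuffle is effectively a random guess, so the chance of matching it is still 1 in 6 per round. A small simulation sketch (assuming exactly that null model) confirms the figure:

```python
import random
from itertools import permutations

ORDERINGS = list(permutations(range(3)))  # the 6 ways 3 glasses can be arranged

def null_taster_matches():
    """One trial of the shuffle-and-re-rank test for a taster who genuinely
    cannot tell the glasses apart: their guess at how the glasses were
    shuffled is just a uniformly random permutation."""
    true_shuffle = random.choice(ORDERINGS)
    guess = random.choice(ORDERINGS)
    return guess == true_shuffle

trials = 100_000
hits = sum(null_taster_matches() for _ in range(trials))
print(hits / trials)  # hovers around 1/6 ~= 0.167
```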

Now, despite the fact that I was crooning all night about how I “won” the challenge, that is not the correct conclusion to draw from the data. What it demonstrated was that at least 4 of the 5 participants were unable to reliably distinguish top shelf vodkas, despite their certainty, before the results were revealed, that there were clear and distinct differences. This strongly suggests that the perceived differences were physiological and psychological in origin and not a result of the chemical qualities of the vodka. Additionally, it is unknown whether I could truly distinguish the difference. Remember, there are still only 6 possible answers, so it’s quite probable that I got them right purely by luck. A further shuffle & taste would have shed more insight into this hypothesis, but we were out of vodka at that point.
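
Putting rough numbers on that (a sketch treating each taster’s match as an independent 1-in-6 event under the null): exactly one hit out of five is about what luck alone would produce, and a second correct shuffle is what would have made luck an uncomfortable explanation.

```python
from math import comb

p = 1 / 6   # chance of matching one shuffle of 3 glasses purely by luck
n = 5       # tasters

# If nobody can really tell the vodkas apart, how likely is it that
# exactly one of the five tasters matches the shuffle anyway?
p_exactly_one = comb(n, 1) * p * (1 - p) ** (n - 1)
print(round(p_exactly_one, 3))   # ~0.402 -- the observed outcome is unremarkable

# Had the same taster matched a second, independent shuffle as well,
# luck would be a much weaker explanation:
print(round(p ** 2, 3))          # 1/36 ~= 0.028
```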

Most amateur double blind studies aren’t worth the blog post they’re written on because the authors have such a poor grasp of experimental design that the data is worthless. Amateur studies don’t have the resources of a professional study to collect enough data to support confident conclusions, so you need to scale back the expectations of the experiment to match the resources you have on hand. If you want to perform a double blind study with a small sample set or a small experimental group, you need to use a repeated tasting procedure rather than a single tasting procedure, or you run the risk of making assertions which are not statistically supported.
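
As a rough illustration of why repetition buys you power that a single ranking can’t (a sketch using the conventional 0.05 significance threshold and assuming independent shuffles; the helper function is mine, not a standard one):

```python
from math import factorial

def shuffles_needed(n_glasses, alpha=0.05):
    """How many consecutive correct shuffle identifications a single taster
    needs before the pure-luck explanation drops below alpha."""
    p_luck = 1 / factorial(n_glasses)   # chance of matching one shuffle by luck
    k, p = 1, p_luck
    while p >= alpha:
        k += 1
        p *= p_luck
    return k, round(p, 4)

print(shuffles_needed(3))  # (2, 0.0278): two correct shuffles of 3 glasses
print(shuffles_needed(4))  # (1, 0.0417): with 4 glasses, one shuffle suffices
```

Note that this is for a single taster picked out in advance; if any of several tasters could claim the win, you would also need to account for the extra chances of someone getting lucky.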

January 22 2009

Big Science and little science

by Hang

This is an idea I’ve been chewing on for a while: there seem to be two different modes of science which have a very hard time talking to each other because of their radically different approaches to problems. I’m going to call these two approaches big science and little science.

Big science is about wading into the thick of a big problem, working from a state of utter incomprehension, and being satisfied with chewing off whatever nugget of comprehension you can get a hold of. It takes hold of questions like “what is love?” and grapples with them in their full complexity. Big science is like parachuting into the middle of the jungle, setting up base camp and gradually establishing contact with all the other little camps around you.

Little science is all about carving off a well-defined, bounded area of study and solving it. It asks questions like “How does Paxil bind to the serotonin receptors in the brain?” Little science is all about building the foundation, a solid ground of work on which other work can be based. The little science approach to colonisation is to bring in the bulldozers and clear and settle all the land directly adjacent to the land already settled.

Big science and little science represent two fundamentally different ways of trying to understand the world, and the approach of one can look bafflingly unscientific to the other. I can feel that frustration when I talk about my work to someone who does little science. My research thesis basically boils down to “How does design influence group behaviour in social software?” but everything I talk about comes with the implicit caveat that it’s messy and there’s a lot more going on than what I’m modelling. I’m not seeking to completely understand human behaviour; even if my work increased predictive power by 1%, I would view that as a major triumph.

Our tools and understanding about social psychology and design are primitive. That’s no excuse for not trying though.
