*(Image: xkcd comic. Caption: "Note: They don’t cause acne.")*
Please take a moment to click through to xkcd’s awesome comic. Make sure you look at the mouseover text. Done? Good. Today I’ll explain why that conclusion about jellybeans is wrong. Hang in there, because the next post is going to apply all of this to video games.
Developing scientific knowledge can be tricky.
Many, many things can go wrong with doing science and spreading the word about outcomes. Think back to the scientific method you learned about in 3rd grade. First, you think of a question like “What is the effect of electromagnetic radiation on flowers?” Let’s say you think that electromagnetic radiation might mutate flowers based on something you read on the Internet. You make a hypothesis (educated guess): “Electromagnetic radiation will mutate flowers.”
How do you test your hypothesis? In school you learned about experiments, and you probably learned it as something like “give half of your subjects [you chose marigolds in this case] the independent variable [let’s say you use gamma rays] and half nothing, and see what the outcome [dependent variable] is”.
Let’s say you have 200 yellow marigolds on your back porch. You make a gamma ray machine and zap half of them with gamma rays. You notice that in a couple of weeks, 4 of the zapped ones got a little weird-looking: maybe turned red or started to grow crookedly. What do you conclude?
If you didn’t know better, you’d say your hypothesis was supported–the gamma rays did have some kind of effect on the marigolds. So you would go to your science fair and proudly stand the weird-looking treated plant next to a non-treated plant with your results “Radiation mutates flowers” laser-printed on a nice colored-construction-paper background, maybe with a bar graph or two.
But guess what—you were wrong.
Here’s why:
Coincidence: According to the laws of probability, your results could have happened by chance alone. Sometimes things just happen randomly, and you have to make sure your outcome isn’t simply randomness in the universe at work. Scientists guard against this by setting a significance level for their analyses: the probability of incorrectly accepting a result as “true” when it’s really just chance. So if you set your significance level to, say, 5%, then even when nothing at all is happening, you would expect to see a “significant” result about 5% of the time just by chance. The little simulation below shows the idea.
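Here’s a minimal sketch of that idea in Python, with made-up numbers rather than real marigold data: we pretend the gamma rays do absolutely nothing, run the experiment thousands of times, test each run at the 5% level, and count how often we “find” an effect anyway.

```python
# Hypothetical numbers, not real data: simulate many experiments in which the
# gamma rays truly do NOTHING, test each one at a 5% significance level, and
# count how often a "significant" difference shows up anyway.
import random
from scipy.stats import fisher_exact

random.seed(42)

N_PER_GROUP = 100          # 100 zapped + 100 untreated marigolds, as in the post
BASE_MUTATION_RATE = 0.02  # assumed background chance a marigold goes weird on its own
ALPHA = 0.05               # the significance level from the paragraph above
N_EXPERIMENTS = 10_000

false_positives = 0
for _ in range(N_EXPERIMENTS):
    # Both groups mutate at the same rate: the gamma rays have zero real effect.
    zapped = sum(random.random() < BASE_MUTATION_RATE for _ in range(N_PER_GROUP))
    control = sum(random.random() < BASE_MUTATION_RATE for _ in range(N_PER_GROUP))
    table = [[zapped, N_PER_GROUP - zapped],
             [control, N_PER_GROUP - control]]
    _, p_value = fisher_exact(table)   # two-sided test of "did the groups differ?"
    if p_value < ALPHA:
        false_positives += 1

print(f"'Significant' results with no real effect: "
      f"{false_positives / N_EXPERIMENTS:.1%}")
```

(Fisher’s exact test is a bit conservative with small counts, so the number that prints will usually sit at or just under 5%. That 5% is exactly the error rate you agreed to live with when you picked your significance level.)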
Misclassification: Not to mention, you said you were looking at electromagnetic radiation, but what you used was a certain type of electromagnetic radiation: gamma rays. You lumped your conclusions into the category of “radiation” when what really happened was that gamma rays led to the changes. You also had the plants in the sun, another form of electromagnetic radiation, which helped the plants to grow.
I’m leaving out one other big error mentioned in the project’s title. First one to figure it out and post in comments gets their name mentioned in my next post.
Why is this a problem for understanding “the truth”?
The way you describe your results, and what happens after you share them, shapes scientific knowledge.
Problem A: Say you knew about statistical significance. You split your plants up and zapped them, and only 3 of them changed. “Hmmm,” you said to yourself, “that’s not significant. Maybe I did something wrong.” So you did the experiment again. This time you did get 6 mutated marigolds (more than 5% of the marigolds changed). “Awesome,” you said, “it’s significant!” So you fix up your fancy poster with your color photos and bullet points and head to the science fair.
You just committed the bias of selective outcome reporting. You did the experiment twice, but only reported the one with the positive outcome. Obviously this is a huge problem: the outcome you did report shows evidence that something is happening, while your negative result never gets presented to the world as evidence that nothing is happening.
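To see why this matters, here’s another rough sketch using the same made-up setup as before, still assuming the gamma rays do nothing: if every non-significant experiment gets quietly redone and only a significant run makes it onto the poster, the chance of reporting a false positive roughly doubles.

```python
# Rough sketch of selective outcome reporting: the gamma rays still do NOTHING,
# but any non-significant experiment is redone once, and only a significant run
# is reported. Watch the reported false-positive rate climb past the 5% level.
import random
from scipy.stats import fisher_exact

random.seed(7)

N_PER_GROUP = 100          # hypothetical group sizes
BASE_MUTATION_RATE = 0.02  # assumed background rate of weird-looking marigolds
ALPHA = 0.05
N_SCIENCE_FAIRS = 10_000

def null_experiment_p_value() -> float:
    """One experiment where the treatment truly has no effect; return its p-value."""
    zapped = sum(random.random() < BASE_MUTATION_RATE for _ in range(N_PER_GROUP))
    control = sum(random.random() < BASE_MUTATION_RATE for _ in range(N_PER_GROUP))
    _, p = fisher_exact([[zapped, N_PER_GROUP - zapped],
                         [control, N_PER_GROUP - control]])
    return p

reported_false_positives = 0
for _ in range(N_SCIENCE_FAIRS):
    p = null_experiment_p_value()
    if p >= ALPHA:                      # "Maybe I did something wrong" -> do it again
        p = null_experiment_p_value()
    if p < ALPHA:                       # only the significant run goes on the poster
        reported_false_positives += 1

print(f"Posters showing a 'real' effect that isn't there: "
      f"{reported_false_positives / N_SCIENCE_FAIRS:.1%}")
```

With one quiet do-over, the reported false-positive rate is roughly double the 5% you thought you were accepting (the exact number depends on the test and the counts), and it keeps climbing the more times you re-run until something “works.”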
Problem B: At the science fair, your unknowing crowd sees your poster and concludes that you really did show scientifically that radiation causes flowers to mutate. You’ve just told the public that radiation really does cause flowers to mutate. Now everyone is going to be looking out for flowers, legislators will develop special radiation protections for them, flower advocacy groups will try to outlaw radiation, etc.
Public dissemination: This could be good or bad. Usually, scientific results go through a peer review process—other scientists decide whether your study was done well enough to represent the truth. In this case you sidestepped that (because you’re in the 3rd grade) and shared results that were incorrect, giving the public the wrong idea, leading to concern about something that isn’t really there, and causing a huge waste of time, money, and mental energy.
Problem C: Here’s another potential outcome of your experiment: Your marigolds don’t change at all—your whole experiment bombs. You write up your negative results anyway with the pretty lettering and the graphs, but when you bring it in, your teacher looks at it and says, “Ya know, that’s really not that interesting so…here’s an A and let’s not even bother.”
This represents publication bias: You were ready to share your results, but your teacher wasn’t interested because your results were negative.
Problem D: A world-famous scientist on the effects of radiation stops by your poster, sees your results, and tallies them up in her journal article that summarizes studies about the effects of radiation. Your study tipped the balance in favor of radiation causing flowers to mutate, so she ends up saying that “a majority of studies show that radiation hurts flowers”.
This is a huge problem for evidence development: Because you disseminated your results, your misleading outcomes are now part of the body of scientific knowledge. If you disseminated incorrect or incomplete results, the synthesis of that evidence by the world-famous scientist is flat-out wrong. Thanks to you, scientists and policymakers can hold this incorrect assumption up as the truth when they make decisions about what to do with that knowledge.
Next time: Violence, video games, and evidence development.
By the way, please feel free to comment, question or criticize. It’s all a part of science!