As scientists, we have to be perfectionists. Getting at the truth is what we do. In public health science, we do it to ensure population health, so we can’t screw it up. But as I’ve learned through years of experience, perfectionism can be a tragic flaw, one that paralyzes progress. Acting when science is “good enough”—rather than perfect—is vital to protecting public health. What constitutes “good enough,” though, should depend on the potential level of global harm: the standard for “good enough” evidence should be much higher for conditions with a smaller population impact. Acting on questionable evidence is not warranted unless the global risk is very high, and that is the conclusion of our evaluation of the quality of evidence for gaming disorder, which came out in PLOS ONE yesterday. I hope this post starts to put into context what kind of “good enough” evidence is needed to drive policy around behavioral addictions, so that we can get the most population health benefit.
This search for “good enough” evidence has been my driving force since I became interested in public mental health. Over the last year, we’ve all learned how truly limited scientific evidence can be, and we’ve learned about the pitfalls of relying on early evidence to drive public health decision making. In public health, we sometimes need to use the precautionary principle (similar to the idea “better safe than sorry”) to take action when the evidence isn’t certain. (Note that the linked website was chosen because it is very readable and shows the contexts in which the precautionary principle is generally used.) This means weighing what evidence we have for or against the benefits of a certain intervention (say, wearing masks or face covers) against the possibility of serious harm if no action is taken. The important thing to keep in mind is that the goal of population science is to provide the most benefit at a population level.
We saw the precautionary principle in use in the World Health Organization’s (WHO) initial reluctance to support universal face coverings (non-medical masks). Early in the pandemic, WHO suggested that universal face coverings would not have more benefits than risks because people would not be able to use them properly. This position was based on years of research on the use of personal protective equipment (PPE): training is required for PPE to be useful; otherwise, contamination could spread disease rather than prevent it. For example, touching the front of the mask when removing it could increase the possibility of contracting or spreading the virus, negating any benefit from preventing the spray of droplets by the wearer. As more evidence came in, WHO guidance changed to make clear that decision-makers should take a risk-based approach to deciding whether to recommend or mandate universal face coverings. Now WHO recommends that face coverings be worn by everyone whenever they are in public, as a way to reduce the widespread transmission of the SARS-CoV-2 coronavirus.
The point of these changes is that WHO decided that, at a population level, more lives would be saved if faces were covered, and that the risk from face covering use (e.g., contamination from touching the mask) didn’t outweigh the benefits. As evidence changed, guidance was updated.
This is a huge public health problem; as of this writing we have over 43.5 million cases worldwide and over 1 million deaths.

Making these policy decisions can mean saving hundreds of thousands to millions of lives, keeping people out of the hospital, and making sure our global society doesn’t go to hell in a handbasket through crumbling infrastructure. Policy decisions may need to be revised as more evidence comes in, but when we conduct careful research and translate it to intervention and policy quickly, we are using our resources to avert death and mitigate global disaster.
Now let’s dial that down to a public health problem with a much lower impact on global health. In our paper, we look at how reliable the evidence is from systematic reviews on gaming disorder (GD) that also examine depression or anxiety. Of the seven reviews included, none meet reliability criteria. All show selective outcome reporting: they do not state clearly in advance which outcomes they will include, and all but one end up reporting only findings of links between GD and depression or anxiety, while omitting null findings.
This was a years-long passion project for me and my co-authors, a completely unfunded project that involved hours and hours of data collection, analysis, write-up, and revision. Late in the game we had to scale it down from a systematic review of reviews to a summary of reviews because we simply did not have the resources for duplicate data extraction from the 196 studies included in the 7 reviews. This made the limitations of our paper significant enough that it was not up to systematic review standards; we make that clear, because understanding and describing limitations is part of the scientific game. In our conclusions, we discuss the implications for policy, because clarifying how your science can be used to make decisions is a vital part of clinical and public health research.
The precautionary principle has been cited as the reason for including a diagnosis of gaming disorder in WHO’s ICD-11. Yet the paper linked in the previous sentence, which calls for using the precautionary principle, makes the same error we saw in our summary of reviews: it conflates gaming disorder with other conditions. It uses clinical evidence for “Internet-related disorders including GD”; it cites a publication that discusses “online addiction [translated]” rather than gaming disorder; and it then provides further support for its arguments in the form of unpublished data.
In my recent lecture Preventing behavioral addictions: A population approach, I go into further detail about why the level of evidence for gaming disorder as a behavioral addiction does not warrant using the precautionary principle to protect population health. Think of the difference between what we know about the harms of gaming disorder and the harms of other global public health threats, e.g., COVID-19. It’s clear to me that the evidence is “good enough” to warrant consideration of behavioral addictions in general as disorders that would benefit from being named, included in research funding opportunities, and covered by insurance. But if we allow evidence for online addiction or Internet addiction to stand in as evidence for gaming disorder, that’s not good enough. And if we allow selective outcome reporting in reviews that are used to support decision-making and policy, that’s really not good enough.
It would be great to see what a well-thought-out, comprehensive systematic review of technology addictions looks like. It would be very revealing to clarify the trajectories, burden of disease, and harms associated with clinical-level addiction to games, online gambling, social media, online shopping, etc., using appropriately rigorous methods, and to see how these compare to other public health concerns. That is what is necessary to get “good enough” evidence for these relatively new phenomena. Only then will we be in a good position to weigh the tradeoffs involved in promoting health around technology and reducing harms at the population level.