
Research transparency in public health and clinical research: unicorns and rainbows?

I crashed the Psychiatric Epidemiology Journal Club meeting on Friday, even though I’m no longer a fellow[1], because the journal article talked about selective reporting in studies of diagnostic tests[2], which is highly relevant to my interests in Internet gaming disorder.  In my naïve scientific fantasy, all health researchers strive to discover the truth and then present this truth to the public. I was excited to see that the article had some recommendations for transparency in reporting—I don’t usually see this emphasized in health and epidemiology research, and it would make a big difference in how decisions are being made about disorders related to video gaming.

Aside from what I learned from the Cochrane Collaboration about excellent meta-analysis methods, I learned about next-level transparency from my colleagues in psychology. Although Cochrane showed me how to do a good meta-analysis, assess bias in studies, etc., I had never even heard of registering analysis plans until I reviewed a study by Andrew Przybylski and colleagues that used this very strict approach to show that what they reported matched their original hypotheses.[3] (Note that in Biostatistics we were taught to write our code first and then stick to it as a way to avoid data dredging, but we weren’t taught to register it.) Registering your analysis plan prevents bad practices like data dredging, hypothesizing after the results are known (HARKing[4]), and selective outcome reporting (cherry-picking results).

For example, if you were a researcher studying video game addiction in children over a period of three years, you could collect data on a bunch of things like time spent gaming, grades in school, sleep quality, whether the kids have depression, ADHD, or anxiety at each time point, and a measure of problematic gaming (PG)/game addiction. If you wanted to data dredge, you could just run a bunch of analyses to see which variables seem to cause PG (by being associated with it before it develops) and which variables PG might cause (by showing up only after PG develops). Suppose you find that depression and anxiety come before PG—they might be the reasons that PG develops—but they are also associated with it afterward. If you’re really interested in the idea that PG causes problems but doesn’t result from them, you could decide to report only the depression and anxiety that happen after PG (cherry-picking results) and never even mention that you also measured the same damn thing before PG. You could then HARK by adding this to your writeup: “We hypothesize that PG will lead to mental health problems like depression and anxiety.”
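If you like to think in code, here is a minimal sketch (Python, with made-up variable names and simulated noise, not anyone’s real data) of why dredging is so seductive: test twenty pure-noise measures against a problematic-gaming score and, at the usual p < 0.05 threshold, roughly one will look “significant” by chance alone. A pre-registered plan commits you to the single test you actually hypothesized before seeing the data.

```python
# Toy illustration of data dredging vs. a pre-registered test.
# Everything here is simulated noise; the variable names are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 500  # pretend we followed 500 kids for three years

# Twenty candidate "predictors" (sleep, grades, mood items, etc.) -- all pure noise
predictors = {f"measure_{i}": rng.normal(size=n) for i in range(20)}
pg_score = rng.normal(size=n)  # problematic-gaming score, also pure noise

# Dredging: test everything, keep whatever crosses p < 0.05
dredged_hits = []
for name, x in predictors.items():
    _, p = stats.pearsonr(x, pg_score)
    if p < 0.05:
        dredged_hits.append(name)
print(f"Dredging 'found' {len(dredged_hits)} associations: {dredged_hits}")

# Pre-registered: one hypothesis, chosen before looking at the data
r, p = stats.pearsonr(predictors["measure_0"], pg_score)
print(f"Pre-registered test: r = {r:.3f}, p = {p:.3f}")
```

With twenty null tests you expect about one false positive on average; writing that one up as if it had been the hypothesis all along is HARKing in miniature.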

HARKing is a big debate in psychology and the social sciences right now, so I’m surprised that it still seems to be a niche/specialty concern in public health and clinical medicine, aside from Cochrane-level meta-analyses. What struck me as most surprising was that one of my PET Fellow colleagues mentioned that the changes to NIH’s definition of a clinical trial meant his team would have to register all of their measurements and analyses, which he implied was very different from the way things are currently done.

Epidemiology is not his primary field, so I can understand how the emphasis on theory and causation might be different there; perhaps hypothesis testing is not as important. For me, learning to give a strong rationale and explain the theory behind my research was a huge challenge. I drew so many causal diagrams (and threw so many away) that I probably could have wallpapered my living room with them. I would have preferred to just “allow a story to emerge from the data”—it sounds so romantic. I ended up preferring latent class analysis, which does allow a story to emerge from the data but still requires theory and hypothesis testing (e.g., see my recent paper with Kardefelt-Winther[5]). And once I learned about reporting, open science, etc., I realized I had to keep doing it—once you go open, you never go back.
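For the curious, here is a rough sketch of the latent-class idea. I’m using scikit-learn’s GaussianMixture as a stand-in (true LCA models categorical indicators and this is only an analogue, with simulated data and hypothetical labels), but the logic is the same: the classes “emerge” from the data, while theory still dictates which indicators go into the model and how you compare candidate solutions.

```python
# Mixture-model analogue of latent class analysis on simulated data.
# All indicators and group labels here are hypothetical.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Two hidden groups of adolescents (e.g., "engaged gamers" vs. "gamers with
# life problems") generating five symptom/problem indicators
group = rng.integers(0, 2, size=1000)
indicators = rng.normal(loc=group[:, None] * 1.5, scale=1.0, size=(1000, 5))

# Theory still constrains the model: the indicators are pre-specified, and a
# small set of candidate class counts is compared by BIC
for k in range(1, 5):
    model = GaussianMixture(n_components=k, random_state=0).fit(indicators)
    print(f"{k} classes: BIC = {model.bic(indicators):.1f}")
```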

Transparency should be vital to health research—hiding results or being unclear could kill people. I got into public health partially because of how freaked out I was when I learned that the suicide of a clinical trial volunteer who had been given duloxetine was never reported. Since that time, I’ve learned how many ways there are for science to be transparent, and how powerful it can be to be less than transparent. Whether it’s who funds or influences studies or how outcomes or biases are reported, there’s a lot of potential for futzing around with the data in a way that obscures the truth and benefits personal, academic, or industry agendas. It’s impossible to be perfectly unbiased, but even seemingly overwhelming sources of bias (like the tobacco industry funding tobacco control research) may be manageable if you try hard enough with mitigators like transparency and independent reviews of findings.[6]

I haven’t read NIH’s whole 2014 guidance on what constitutes a clinical trial, but I did read their case studies. Now, anything that involves an intervention (an action taken for the purpose of improving a health outcome) is considered a clinical trial. Testing a drug or vaccine? Pretty clearly a clinical trial. Want to see if 20 minutes of exercise a day helps chronic pain? Clinical trial. But what about making some people watch scary videos and then measuring their brain function with an MRI? That seems like a psych experiment, right? Well, now it’s apparently a clinical trial. Even though it’s not designed to recruit patients to test an intervention that could plausibly improve their health, it has to be registered as a clinical trial alongside all the studies people might want to join to improve their health and/or contribute to science,[7] just because it measures health-related outcomes.

I agree with the social and behavioral scientists (and my colleague) that that’s kind of ridiculous. People use clinicaltrials.gov to look for research studies; it’s designed to ensure that clinical trialists don’t screw around with people’s lives and health by not reporting the outcomes (or side effects) they measure. I can see the need for government-funded studies—or any test of a medicine, biologic, device, or psychosocial or behavioral intervention designed as therapy—to have that requirement. That’s just basic human subjects protection. It’s encouraging to see that step toward transparency, but redefining “clinical trial” this broadly seems like a step backwards.


To me, the solution is simple—don’t clutter up clinicaltrials.gov with experiments like that, but do require registration and other ways to address transparency. Andy’s paper and our group’s subsequent work to develop an open science definition of behavioral addiction introduced me to the wonders of the Open Science Framework, which we used in our recent study of problematic gaming. I think efforts like that are going to be vital for moving research forward in a clear way and not wasting taxpayers’ money.


**************************************************************************

[1] But I was told “PET for lyfe” is the motto and not to worry

[2] Levis, B., Benedetti, A., Levis, A. W., Ioannidis, J. P. A., Shrier, I., Cuijpers, P., … Thombs, B. D. (2017). Selective Cutoff Reporting in Studies of Diagnostic Test Accuracy: A Comparison of Conventional and Individual-Patient-Data Meta-Analyses of the Patient Health Questionnaire-9 Depression Screening Tool. American Journal of Epidemiology, 185(10), 954–964. https://doi.org/10.1093/aje/kww191

[3] Przybylski, A. K., & Weinstein, N. (2017). A Large-Scale Test of the Goldilocks Hypothesis: Quantifying the Relations Between Digital-Screen Use and the Mental Well-Being of Adolescents. Psychological Science, 28(2), 204–215. https://doi.org/10.1177/0956797616678438

[4] Murphy, K. R., & Aguinis, H. (2017). HARKing: How Badly Can Cherry-Picking and Question Trolling Produce Bias in Published Results? Journal of Business and Psychology, 1–17. https://doi.org/10.1007/s10869-017-9524-7

[5] Colder Carras, M., & Kardefelt-Winther, D. (2018). When addiction symptoms and life problems diverge: a latent class analysis of problematic gaming in a representative multinational sample of European adolescents. European Child & Adolescent Psychiatry. https://doi.org/10.1007/s00787-018-1108-1

[6] Cohen, J. E., Zeller, M., Eissenberg, T., Parascandola, M., O’Keefe, R., Planinac, L., & Leischow, S. (2009). Criteria for evaluating tobacco control research funding programs and their application to models that include financial support from the tobacco industry. Tobacco Control, 18(3), 228–234. https://doi.org/10.1136/tc.2008.027623

[7] Perhaps I will write a blog post later about the ethical problem of therapeutic misconception, which can lead potential subjects not to understand that they may actually get a placebo rather than an active treatment.