Here at ParentData, we refer a lot to “panic headlines.” If you’re a parent just trying to follow the science, to do what’s best for your kid, sometimes it feels like you’re being absolutely and nonsensically bombarded with the wrong things to do. And when it all feels like too much, I like to call Dr. Bapu Jena.
Bapu is an economist, like me, but unlike me, he’s also a medical doctor. And he loves natural experiments, meaning experiments he can’t control, only observe: people existing in naturally occurring circumstances.
If you’ve read Freakonomics, you’re familiar with this kind of research. Today I wanted to talk about how research can be abused once it makes its way into the world — how we get to that splashy headline that warns you that there’s lead in your Cheerios, or that you have to throw out all your black plastic cooking utensils, or that screens give your kids autism. How do we get there? How does the research that informs it get published? It’s not the researchers’ aim to make us worry; they’re just curious data nerds. So why does it feel like they’re colluding with the newspapers to scare the crap out of parents?
That’s what Bapu and I discuss: what the researchers are actually aiming to do, and then what happens to that research when it arrives in your headlines to freak you out. We get into the complicated relationship between causality and correlation, the academic and popular incentives to publish these kinds of headlines, and who decides what research is worth sharing with the world.
This is, on its face, a conversation about research, but really it’s about reassurance: there are a lot of reasons behind publishing a story about lead in Cheerios that have nothing to do with you, or with how dangerous Cheerios actually are, or with whether you’re a good parent who cares about the health and well-being of your kids. You are and you do. Don’t throw out your Cheerios. But also, if you’re curious, listen to the journey that headline took to land in your lap.
Here are three highlights from the conversation:
Why is randomization so important in research?
In the real world, though, you don’t have that randomization for the most part, and so you have people who drink a lot of coffee and people who don’t. And I would explain to them that people who drink a lot of coffee are different than people who don’t. And because of that, you don’t know if it’s the coffee that’s leading to this health outcome or everything else that is different in those people who drink a lot of coffee.
Why are so many published results correlation and not causation?
And then the other problem is I think that the incentives aren’t there to do something more. It takes more work to do something clever and creative to solve a problem in a more rigorous way. It takes more effort and more thinking and more creativity. And if the incentives aren’t there to do that, then you’re not going to do that. And I think in general, medical journals don’t require that as a standard. You write a lot of economics papers; economics papers do require that as a standard. You would not typically see those kinds of papers published in the best economics journals, whereas almost every week in the best medical journals, you’ll see a paper like that come out.
How do studies that inspire panic headlines impact parents negatively?
Full transcript
This transcript was automatically generated and may contain small errors.
The canonical example of this problem is new study shows that coffee generates higher longevity, or new study shows that coffee makes you die sooner, or new… A lot of this stuff happens in nutrition, and it’s either specific or it can be very general. New study shows that people who consume more ultra-processed foods die sooner. And these are places where often when people send them to me, then I say, “It’s correlation and not causation.” But can you give me your answer? I’m your patient. I come and I say, “I saw this new study that says blah, blah, blah about coffee.” How do you help me understand what is the problem with that? What are some of your concerns about that study?
In the real world though, you don’t have that randomization, for the most part, and so you have people who drink a lot of coffee and people who don’t. And I would explain to them that people who drink a lot of coffee are different than people who don’t. And because of that, you don’t know if it’s the coffee that’s leading to this health outcome or everything else that is different in those people who drink a lot of coffee. And I think most people would understand that idea if told in that way.
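To make that confounding problem concrete, here is a minimal simulated sketch. This is ours, not from the episode; the variable names and every number in it are invented purely for illustration. It builds a made-up world where coffee does nothing at all, yet a naive comparison of coffee drinkers and non-drinkers still shows a longevity “benefit,” and only randomized assignment makes the gap disappear.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical confounder: overall health-consciousness. In most observational
# studies this isn't measured, but here it drives both the exposure and the outcome.
health_conscious = rng.normal(size=n)

# In this made-up world, health-conscious people are more likely to drink coffee...
drinks_coffee = rng.random(n) < 1 / (1 + np.exp(-health_conscious))

# ...and they live longer, but coffee itself adds exactly zero years.
lifespan = 78 + 3 * health_conscious + 0 * drinks_coffee + rng.normal(scale=5, size=n)

# The naive comparison "finds" a longevity benefit for coffee drinkers anyway.
naive_gap = lifespan[drinks_coffee].mean() - lifespan[~drinks_coffee].mean()
print(f"Observed gap, coffee vs. no coffee: {naive_gap:+.1f} years (true effect: 0)")

# Randomizing who "drinks coffee" breaks the link to health-consciousness,
# and the same comparison comes out near zero.
assigned = rng.random(n) < 0.5
rand_gap = lifespan[assigned].mean() - lifespan[~assigned].mean()
print(f"Gap under randomized assignment:    {rand_gap:+.1f} years")
```

The point of the toy example is exactly what Bapu describes: the coffee drinkers differ from the non-drinkers in ways that matter for the outcome, so the raw comparison mixes the effect of coffee with the effect of everything else those people do.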
The question I find more puzzling is: why are there so many papers that do this? Papers written by people who haven’t just watched a 45-second Instagram video or had one conversation with you, but who have spent literally many, many years taking classes on these exact topics and then writing papers. And these people are professors; they have a lot of education, they’re smart. And I just fundamentally don’t understand why so much of this literature keeps happening, and whether the people who are writing it think that they are uncovering something causal or not. Do you have any insight?
I would have no problem if people would do the kinds of studies that you’re describing, which are purely correlational studies, no experiment serving as the backbone behind the idea, and say, “Look, here’s this interesting observation. We don’t know what to make of it. It could be causal for X, Y, Z reason.” There’s a channel by which this thing, in this case dandelions, could have an effect on some health outcome, and it would work by this mechanism. This flower affects this protein in the body; this protein in the body affects these cells; these cells affect the development of dementia; something like that. Totally fine with that, because then a reader could say, “All right, well, I want to investigate this or not investigate this any further.” But that’s not the way the studies come out. The way the studies come out is, “Dandelion greens are strongly associated with dementia, and therefore you should eat fewer (or more) of these dandelion greens.”
The question then is, to your question, why do you see this? And among educated researchers, I think it’s two things probably. One is I think that they might at their core believe this to be true. And this is something I’ve been thinking a lot about now, which is how do your beliefs impact the way that you do science, the way that you do research? Certainly if you have a belief that something is right or correct, true, you might do research in a way that validates that and you might interpret the findings in a way that validates that. I think that’s one problem. And I think that was something that we maybe saw a lot of actually during the COVID-19 pandemic.
And then the other problem is I think that the incentives aren’t there to do something more. It takes more work to do something clever and creative to solve a problem in a more rigorous way. It takes more effort and more thinking and more creativity. And if the incentives aren’t there to do that, then you’re not going to do that. And I think in general, medical journals don’t require that as a standard. You write a lot of economics papers; economics papers do require that as a standard. You would not typically see those kinds of papers published in the best economics journals, whereas almost every week in the best medical journals, you’ll see a paper like that come out.
And I get the impression when I read these papers that people think that the kinds of empirical adjustments they do are sufficient. Just to back up, for people who don’t spend their days in papers about coffee and longevity: it’s very common in research like this for researchers to talk about adjusting for variables. They say, “In the data, I see not only whether you eat dandelion greens and how long you live, or whether you have dementia” (let’s keep with our dementia example), “but I also see whether you went to high school, and some categories of income, and your sex and your age and maybe your race, and maybe I see more,” whatever it is. And they adjust for those. They use a statistical method to effectively try to match people with similar values of those variables. It’s not usually quite that technique, but something where I basically say, “I’m going to try to hold constant these other features so I can be closer to being causal.”
And my view, informed by a lot of the research I’ve done, is that most of the time that’s completely insufficient. In fact, there are tons of things we don’t see about people that are very important for their choices of, say, what to eat. The full extent of somebody’s interest in their health is not summarized by two categories of education and three categories of income; these are just really insufficient. But I wonder how much of it is that I just have a particular, I think informed but maybe unusual, view about just how important unobservables are, and that most of the public health profession has a different view about how important unobservables are. How much do you think it’s that?
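Here is a rough illustration of that point about unobservables. Again, this is a toy sketch we’re adding, not an analysis from the episode; it stands in for “adjusting for variables” with a plain least-squares regression, and every variable and coefficient is invented. Adjusting for a coarse observed proxy (an education indicator) only removes the part of the confounding that the proxy happens to capture; the part driven by the unmeasured trait remains, so the “effect” of the exposure never gets close to its true value of zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Unobserved trait: how invested someone is in their health.
health_interest = rng.normal(size=n)

# Observed proxy: a coarse education indicator, only loosely tied to the trait.
college = (health_interest + rng.normal(scale=2, size=n)) > 0

# Exposure and outcome are both driven by the unobserved trait;
# the true causal effect of eating dandelion greens is zero.
eats_greens = (health_interest + rng.normal(size=n)) > 0
dementia_risk = 10 - 2 * health_interest + 0 * eats_greens + rng.normal(size=n)

def ols_effect(adjust_for_college: bool) -> float:
    """Least-squares coefficient on eats_greens, optionally adjusting for the proxy."""
    cols = [np.ones(n), eats_greens.astype(float)]
    if adjust_for_college:
        cols.append(college.astype(float))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, dementia_risk, rcond=None)
    return beta[1]

print(f"Unadjusted 'effect' of greens:   {ols_effect(False):+.2f}")
print(f"Adjusted for education proxy:    {ols_effect(True):+.2f}")
print("True causal effect:              +0.00")
```

In this toy world the adjusted estimate moves a little toward zero but stays clearly negative, which is the shape of Emily’s worry: a couple of coarse categories simply do not summarize someone’s interest in their health.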
And the only way I can rationalize this is to say that I think everybody knows that these studies that we’re talking about just can’t be true. The only way this makes sense is they just can’t be true, and so you hold them to a different standard. Even though the language is there, which is close to causal, there are these disclaimers at the end about how the study is not causal, or how you can’t interpret anything about the causal relationships here based on the study design. But I think that they have an understanding of the issues; they’re just applied differently. And the question might be, why are they applied differently?
Ultimately, the job of the journals is to create scientific information that shapes the scientific record. That’s certainly one goal, I think. But the other is to make sure people read their journal, to feel like they’re relevant. That’s a core part of their business model. If they’re not publishing things that people want to read, that’s a problem. And one of the things that people have an appetite for, that there’s a demand for, is this kind of science. Now, why that is, I don’t know; why there’s a demand for this low-quality science, I don’t know. I certainly understand why there’s an interest in knowing whether or not coffee has an effect on your life, or all these things that we do all the time.
The fact that they’re more comfortable saying to a journalist, “People should eat less ultra-processed foods,” is because they’re coming in with the belief that that is right, and that this paper reinforces something they really already thought was true. It’s almost as if they’re saying, “Well, this particular paper didn’t really add that much to my knowledge. I was already very sure that was true, and so I’m comfortable saying that it should change people’s behavior, not so much because of this particular piece of evidence that we’ve produced, but just because I generally think that that’s a thing people should do, and this is an opportunity to express that.”
I think that there are ways to show that a researcher’s views about the world… And I won’t even call it ideology, because there’s no ideology about coffee and longevity. It’s more “I think it could be true, and there’s an incentive for me to publish this, so I’m going to do it.” But thinking about how big a role incentives play (whether it’s career incentives, financial incentives, or ideological incentives) in the questions that people ask and the way they study them and the answers that they find, I think, is an untapped area for research but could be really interesting.
And now, fast-forward a couple of years, we have this paper which shows that the diagnosis of ADHD goes up on Halloween. We look at millions of visits to pediatricians on Halloween versus all the surrounding days. We definitely account for the possibility that parents who take their kid to the doctor on Halloween might be different, and we show that we don’t think that’s what’s going on. And we show that the diagnosis rate of ADHD is higher. What we think is going on is that there are some children for whom the provider was already considering ADHD as a possible diagnosis but never pulled the trigger on it. But on that day, the behavior of the child is different for obvious reasons. They’re excited about Halloween, so they’re just less attentive, they’re not answering questions, and so that diagnosis gets made. And it’s not a huge effect, but it’s clearly measurable. You can see it in the data.
And we write this paper up, and we don’t know whether this is underdiagnosis or overdiagnosis. It could be overdiagnosis: something as arbitrary as Halloween leads you to make a diagnosis that you otherwise wouldn’t have made. But it could also be underdiagnosis. This could be what we call a stress test. Like you do a cardiac stress test to look for heart disease, this could be an ADHD stress test: if that child responds in such a way on Halloween that you now think about the diagnosis of ADHD more firmly, that might be indicative of something that needs to be addressed in that child. That could be underdiagnosis.
We write this paper, and we send it to a lot of places: medical journals, all good places. On the methods, most of the reviews actually don’t have anything to say. But from a number of places we’ve gotten feedback, both from the editors and sometimes from reviewers, about how this finding is highly stigmatizing to the disease. And we’ve had an extraordinarily difficult time getting it published. Now we’re going a different route. But my pushback has been, well, look, we’re not saying this is overdiagnosis or that these are frivolous diagnoses. In fact, if this is underdiagnosis, what does it mean to a parent that the only way their child would have gotten diagnosed with ADHD, a condition their child might actually have, is if by chance they happened to go in on Halloween? That arbitrariness in the diagnosis, I think, should be concerning to people. It might be inevitable, but it should certainly be something that flags your attention. And we’ve had a really difficult time getting this thing published because of, for lack of better words, what I think are the optics. That’s the feedback that we’re getting.
The hippocampus has also been implicated in Alzheimer’s disease. Now, for the study that’s coming out (it’s coming out in the British Medical Journal in a couple of weeks), we use this data that you’re probably familiar with, where death certificates in the United States have recently been linked to occupation. We’re looking at these data and asking what kinds of questions we can answer. And I think about this study in London taxi cab drivers from a long time ago, and I’m like, wow, I wonder: if we look at taxi cab drivers in the United States, what do rates of Alzheimer’s dementia as a cause of death look like compared to every other occupation?
And so we look at something like 450 occupations. Taxi cab drivers and ambulance drivers (not EMTs, but ambulance drivers, who are literally doing the same thing: they have to drive all sorts of different places, they don’t know where they’re going, they have to memorize these roads) have the two lowest rates of Alzheimer’s-related dementia… sorry, the two lowest rates of Alzheimer’s-related dementia contributing to mortality, of all occupations. And we show that they’re not lower in terms of other forms of dementia that are not thought to be related to the hippocampus, or what we call Alzheimer’s disease. And so it’s a very strange finding. And of course we’re not going to come out and say, “You should become a taxi cab driver.” We’re not going to come out and say, “You should stop using Google Maps so that you can…”
And so I think that there is space for correlational studies if they’re, A, answering interesting questions, looking for relationships that haven’t already been published on 50 times in the last two years, and B, not claiming more than what they can claim, saying, “Look, here’s what it makes us think about. Here’s how we should look at this further.” I think those kinds of things are useful.
CREDITS
And just for fun, I’d encourage you to go to our website and search for panic headlines. The articles that come up run the gamut: whether you should worry about lead in your tampons, or white noise, or breast pump bacteria, or microplastics. And after digging into the data, the answer is almost always no. Cut yourself some slack at parentdata.org.
There are a lot of ways you can help people find out about us. Leave a rating or a review on Apple Podcasts. Text your friend about something you learned from this episode. Debate your mother-in-law about the merits of something parents do now that is totally different from what she did. Post a story to your Instagram, debunking a panic headline of your own. Just remember to mention the podcast, too. Right, Penelope?