Understanding Panic Headlines

How studies that influence your parenting choices get published

Emily Oster

16 minute read

Here at ParentData, we refer a lot to “panic headlines.” If you’re a parent just trying to follow the science, to do what’s best for your kid, sometimes it feels like you’re being absolutely and nonsensically bombarded with the wrong things to do. And when it all feels like too much, I like to call Dr. Bapu Jena. 

Bapu is an economist, like me, but unlike me, he’s also a medical doctor. And he loves natural experiments, which means observing experiments he can’t control: people existing in naturally occurring circumstances.

If you’ve read Freakonomics, you’re familiar with this kind of research. Today I wanted to talk about how research can be abused once it makes its way into the world — how we get to that splashy headline warning you that there’s lead in your Cheerios, or that you have to throw out all your black plastic cooking utensils, or that screens give your kids autism. How do we get there? How does the research that informs it get published? It’s not the researchers’ aim to make us worry; they’re just curious data nerds. So why does it feel like they’re colluding with the newspapers to scare the crap out of parents?

That’s what Bapu and I discuss: what the researchers are actually aiming to do, and then what happens to that research when it arrives in your headlines to freak you out. We get into the complicated relationship between causality and correlation, the academic and popular incentives to publish these kinds of headlines, and who decides what research is worth sharing with the world.

This is on its face a conversation about research, but really it’s about reassurance: there are a lot of reasons behind publishing a story about lead in Cheerios that have nothing to do with you, or how dangerous Cheerios actually are, or whether you’re a good parent who cares about the health and well-being of your kids. You are and you do. Don’t throw out your Cheerios. But also, if you’re curious, listen to the journey that headline took to land in your lap.

Here are three highlights from the conversation:

Why is randomization so important in research?

Emily Oster:

So much of the research that people hear about in the world is just correlation and not causation. A lot of this stuff happens in nutrition, and it’s either specific or it can be very general. “New study shows that people who consume more ultra-processed foods die sooner.” And these are places where often, when people send them to me, I say, “It’s correlation and not causation.” But can you give me your answer? Say I’m your patient. I come and I say, “I saw this new study that says blah, blah, blah about coffee.” How do you help me understand what is the problem with that? What are some of your concerns about that study?

Bapu Jena:

Well, typically if someone were to ask me that kind of question, I’d say, all right, well, if their goal is to understand whether taking this medication, taking this supplement, drinking coffee is going to actually improve the quality or length of their life, how would you study it? And I’d say to them, “The only way to really study this is to do a randomized trial where you take a bunch of people and you randomize some of them to getting the product.” In this case, it could be coffee, it could be medication. And you take an otherwise similar group of people and you randomize them to not getting that. And then you study differences in whatever outcome you care about. It could be blood pressure, it could be weight, could be how long you live. And if you see something there, then you can say, “It’s because of the thing that you did,” in this case, drinking coffee.

In the real world, though, you don’t have that randomization for the most part, and so you have people who drink a lot of coffee and people who don’t. And I would explain to them that people who drink a lot of coffee are different than people who don’t. And because of that, you don’t know if it’s the coffee that’s leading to this health outcome or everything else that is different in those people who drink a lot of coffee.

Why are so many published results correlation and not causation?

Bapu Jena:

Among educated researchers, I think it’s two things probably. One is I think that they might at their core believe this to be true. And this is something I’ve been thinking a lot about now, which is how do your beliefs impact the way that you do science, the way that you do research? Certainly if you have a belief that something is right or correct, true, you might do research in a way that validates that and you might interpret the findings in a way that validates that. I think that’s one problem. And I think that was something that we maybe saw a lot of, actually, during the COVID-19 pandemic.

And then the other problem is I think that the incentives aren’t there to do something more. It takes more work to do something clever and creative to solve a problem in a more rigorous way. It takes more effort and more thinking and more creativity. And if the incentives aren’t there to do that, then you’re not going to do that. And I think in general, medical journals don’t require that as a standard. You write a lot of economics papers; economics papers do require that as a standard. You would not typically see those kinds of papers published in the best economics journals, whereas almost every week in the best medical journals, you’ll see a paper like that come out.

Emily Oster:

Yeah, I think, for me, that is an answer, which is almost certainly right, which is my incentive as a researcher is to publish my paper in the best, in JAMA. And if JAMA is happy to see a correlation, then that’s a much easier paper to produce than something else, and so maybe I do that. That, of course, pushes the question down the line to why don’t the editors of JAMA understand the difference between correlation and causality? A question I ask myself frequently.

How do studies that inspire panic headlines impact parents negatively?

Emily Oster:

So much of what I spend my time doing is dialing down people’s panic around these kinds of headlines, which come up all the time in parenting, because if you had to pick a space in the world to write panicky headlines, parenting would be it because that’s really what people like to click. And I guess just not to put too fine a point on it, but I think actually in the parenting space, it can be quite bad in the sense of, we talk about this — you and I talk about this — like “Oh, of course nobody thinks that this is true, and researchers know it’s not, but here’s why. Here’s this interesting academic thing.” But I think for many people who click on these headlines who see them, actually it does make them feel really bad and it does make them worry about behaviors which they’re already taking. I let my kid watch a screen, and is it my fault that they have ADHD? It’s such a common question, and so it’s not free. This is not just an academic debate once it gets out into the world.

Bapu Jena:

Your statement makes me think of a research question, which is I think the journals and the people who write these articles probably view these articles as being costless, meaning that there’s no cost imposed on society from generating this kind of knowledge. But what if you did a study where you showed that if there is a paper that comes out that shows some purported link between autism or ADHD or something where we don’t have a good, firm understanding of what’s causing it and it links it to a very common behavior or a common thing that people might do, I wonder if in the weeks or months following that you see increased diagnoses of anxiety or depression, or increased prescription fills, among mothers of young children who have that condition versus mothers of young children who do not have that condition, or something like that. You’d have to find something that was really splashy that got people like, “Oh my goodness, could this possibly be true?” But if your hypothesis is right, which I totally think it is, you see studies like this and you think, oh, what did I do wrong? It’s a paralyzing feeling, I think, for a parent.

Full transcript

This transcript was automatically generated and may contain small errors.

Emily Oster:

Bapu Jena, thanks for joining me.

Bapu Jena:

Thank you for having me.

Emily Oster:

You’ve actually been on the ParentData podcast before, but for people who weren’t listening two and a half years ago or whenever I had you, could you introduce yourself?

Bapu Jena:

First of all, it feels like yesterday, but that’s a long time.

Emily Oster:

Right. It just feels like such a treat. You’re always in my head.

Bapu Jena:

Exactly. Yeah. My name is Bapu Jena. I’m an economist, a physician, and a professor at Harvard Medical School. I have a book called Random Acts of Medicine with Chris Worsham and a podcast called Freakonomics, M.D.

Emily Oster:

I think it’s important to emphasize that although you start with being an economist, you are, unlike me, also an actual doctor.

Bapu Jena:

That’s true. I’m a doctor-doctor. That’s-

Emily Oster:

Yeah, you see patients.

Bapu Jena:

I do. I was actually just on service for a couple weeks at Mass General in Boston.

Emily Oster:

In the emergency room, or where?

Bapu Jena:

No, I work in the general medical floor. I spend about two weeks there in a stretch, and so I just finished up yesterday.

Emily Oster:

Do you feel like you get a lot of ideas for your economics work from your doctoring?

Bapu Jena:

Yeah, I do. Yeah, there’s all sorts of random things that you see in the hospital, things that patients will say or problems that you’ll encounter, which I think certainly have given me ideas in the past. A couple of my ideas came from things I saw in the hospital.

Emily Oster:

I’m just always curious about that. You seem like a person who gets many of your research ideas from just walking around in the world or things that happened in your own life.

Bapu Jena:

Yeah, and ChatGPT feels quite good at this. I don’t know if you’ve tried it for coming up with ideas, but it’s not bad.

Emily Oster:

Where you just write in, “I need a research idea”? “I’m a crazy doctor who likes natural experiments. Give me a research idea”?

Bapu Jena:

Yeah. For example, I had a dinner with some students at Harvard the other night, and we were talking about ChatGPT. And so I said to [inaudible 00:06:57], “Suppose I want to study whether or not playing football as a youth impacts your long-term health.” And so I asked ChatGPT, “That’s what I want to know. Give me some natural experiments to try to figure this out.” And ChatGPT came up with a couple of good ideas. It says, “All right, well, let’s look at whether or not Pop Warner programs come into certain counties at certain times. Let’s look at age cutoffs. Let’s look at changes in policies towards contact versus no contact football.” All very plausible ideas. I don’t think anybody has studied this question before, and so it was remarkable that it was able to come up with ideas that all made a lot of sense. None of them I would pursue, but not bad, not bad.

Emily Oster:

I feel like my use of ChatGPT is more limited. Yesterday, I had it make me an image of Elf on the Shelf, which was actually very good. Okay, but the purpose of this call is not to advertise the products of OpenAI, which is not a sponsor, but to talk about a topic that I think you and I both care a lot about, which is why so much of the research that people hear about in the world is just correlation and not causation. I talk about this a lot, and I guess that I want to set the stage for the problem that we’re talking about, but then I actually want to talk about why this happens, which for me is the more interesting question.

The canonical example of this problem is “New study shows that coffee generates higher longevity,” or “New study shows that coffee makes you die sooner,” or new… A lot of this stuff happens in nutrition, and it’s either specific or it can be very general. “New study shows that people who consume more ultra-processed foods die sooner.” And these are places where often, when people send them to me, I say, “It’s correlation and not causation.” But can you give me your answer? I’m your patient. I come and I say, “I saw this new study that says blah, blah, blah about coffee.” How do you help me understand what is the problem with that? What are some of your concerns about that study?

Bapu Jena:

Well, typically if someone were to ask me that kind of question, I’d say, all right, well, if their goal is to understand whether taking this medication, taking this supplement, drinking coffee is going to actually improve the quality or length of their life, how would you study it? And I’d say to them, “The only way to really study this is to do a randomized trial where you take a bunch of people and you randomize some of them to getting the product.” In this case, it could be coffee, it could be medication. And you take an otherwise similar group of people and you randomize them to not getting that. And then you study differences in whatever outcome you care about. It could be blood pressure, it could be weight, could be how long you live. And if you see something there, then you can say, “It’s because of the thing that you did,” in this case, drinking coffee.

In the real world though, you don’t have that randomization, for the most part, and so you have people who drink a lot of coffee and people who don’t. And I would explain to them that people who drink a lot of coffee are different than people who don’t. And because of that, you don’t know if it’s the coffee that’s leading to this health outcome or everything else that is different in those people who drink a lot of coffee. And I think most people would understand that idea if told in that way.
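
To make that concrete, here is a minimal simulation sketch of the coffee example (ours, not from the episode; every number is invented). A hidden health-consciousness trait drives both who drinks coffee and how long people live, so the observational comparison shows a gap even though coffee, by construction, does nothing; randomizing the coffee makes the gap vanish.

```python
# Toy simulation (invented numbers): confounding vs. randomization.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden trait that drives BOTH coffee drinking and lifespan.
health_conscious = rng.normal(size=n)

# Observational world: the hidden trait influences who drinks coffee.
drinks_coffee = (health_conscious + rng.normal(size=n)) > 0

# Lifespan depends on the hidden trait only; coffee's true effect is zero.
lifespan = 80 + 2.0 * health_conscious + rng.normal(size=n)

obs_gap = lifespan[drinks_coffee].mean() - lifespan[~drinks_coffee].mean()
print(f"Observational gap: {obs_gap:+.2f} years")  # about +2.3, pure confounding

# Randomized world: a coin flip assigns coffee, severing the link
# between the hidden trait and the exposure.
assigned = rng.random(n) > 0.5
rct_gap = lifespan[assigned].mean() - lifespan[~assigned].mean()
print(f"Randomized gap:    {rct_gap:+.2f} years")  # about 0, the true effect
```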

Emily Oster:

Yeah, the other thing I will sometimes tell people is to think about this choice and think about whether it seems like people are making it randomly. People will say there’s a relationship between having a family dinner, a home-cooked family meal every night, and outcomes for kids and school outcomes. Once you accept that the best way to do this would be randomization, you would ask: comparing groups who do it and who do not, how close is that to randomization? How much randomness do you think there is in this choice? And with many of these choices, like what I eat or whether I have family dinner or whatever it is, it doesn’t really seem very random. It doesn’t seem like there’s a very important component of randomization there; the world is not doing a good job randomizing for you.

Bapu Jena:

Yeah. And another way to put it, and we have a technical term that we use in our studies called falsification tests, I would say to them, “All right, suppose that I showed you that people who drive expensive cars live longer. Would you say to me that buying an expensive car will make you live longer?” And most people would say, “No, absolutely not.” And I’d say, “Well, what if it’s the case that people who drink a lot of coffee drive fancier cars or something like that?” Then they would get it: “Oh, all right, okay, I get it.” There are other factors that are going to be correlated with some outcome but could not plausibly be so in a causal way.
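
The fancy-car check can be sketched the same way (again a toy, with invented numbers): an exposure that plausibly cannot cause the outcome still “passes” the naive comparison, which is the tell that confounding, not causation, is doing the work.

```python
# Toy falsification test: fancy cars "extend life" through a hidden trait.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

wealth_and_health = rng.normal(size=n)  # hidden: wealthier people are also healthier

drives_fancy_car = (wealth_and_health + rng.normal(size=n)) > 0
lifespan = 80 + 2.0 * wealth_and_health + rng.normal(size=n)  # the car plays no role

gap = lifespan[drives_fancy_car].mean() - lifespan[~drives_fancy_car].mean()
print(f"'Effect' of a fancy car: {gap:+.2f} years")  # clearly nonzero, clearly not causal
```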

Emily Oster:

My favorite thing I ever did about this was a time when I took every food in the National Health and Nutrition Examination Survey and I correlated it with weight. And I showed that, for example, dandelion greens are associated with a lower weight, but iceberg lettuce is associated with having a higher weight. And chemical-based sugar substitutes are associated with higher weight, but plant-based sugar substitutes are associated with lower weight, which for me was such an illustration of, well, who’s eating iceberg lettuce? What kind of idiot is eating dandelion greens? It’s my dad. And he’s also doing every other thing. And I think that kind of thing can make this quite vivid when you’re just like, “Well, actually, what do you think is going on?” And as soon as you point that out to someone, the fancy car is a good example, it’s like, “Oh yeah, okay.” This is not that complicated a point to make to people, I find.
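
That exercise has a simple skeleton (the file and column names below are hypothetical, not the real NHANES variable codes): correlate every food with weight, and confounded “effects” fall out in both directions.

```python
# Sketch of a food-by-food correlation scan (hypothetical column names).
import pandas as pd
from scipy import stats

df = pd.read_csv("nhanes_diet.csv").dropna()  # hypothetical extract, one row per person
food_cols = [c for c in df.columns if c.startswith("food_")]

rows = []
for col in food_cols:
    r, p = stats.pearsonr(df[col], df["weight_kg"])
    rows.append({"food": col, "corr": r, "p": p})

results = pd.DataFrame(rows).sort_values("corr")
print(results.head())  # "protective" foods: dandelion-greens territory
print(results.tail())  # "harmful" foods: iceberg-lettuce territory
# None of this is causal: each correlation mostly measures WHO eats
# the food, not what the food does.
```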

Bapu Jena:

Yeah, yeah. But just to be fair, there is pretty high quality evidence about dandelion greens and dementia.

Emily Oster:

There is not.

Bapu Jena:

That’s not true. There’s not. Can I just ask a clarifying question? When you say dandelion greens, are those the same thing as those fancy little microgreens that you get at nice restaurants?

Emily Oster:

No, oh my God, microgreens. No. A dandelion green is literally a part of a dandelion. You don’t eat the flower, but you eat the greens. Microgreens are a delicious treat-

Bapu Jena:

Yeah. Interesting. Right.

Emily Oster:

… associated with higher rates of IQ or… No.

Bapu Jena:

I’ll say the following, though. The astute listener will know that my name is an Indian name, and so it’s not uncommon for Indian people to take flowers and fry them in some flour, like a dough. It comes out delicious. I don’t think it’s healthy, though.

Emily Oster:

Right. The flowers might be good for you, but the frying somewhat negates it. It’s actually that people are frying their iceberg lettuce. That’s the issue.

Bapu Jena:

Exactly, exactly.

Emily Oster:

A thing I find puzzling about this space is I feel like if you explained this, and I have to my 13-year-old, this concept that correlation is not causation and that you would really want to worry quite a lot about the other things that might be driving these relationships, I feel like that is a very intuitive idea. And not that everybody sees it immediately, but that it isn’t very hard for people to understand. Even in a short conversation, you can get people to see it. I spend a lot of time on Instagram, in short-form Instagram videos, explaining this, and I think some of the time it hits.

The question I find more puzzling is why are there so many papers which do this? Which are written by people who haven’t just watched a 45-second Instagram video or had one conversation with you but who have spent literally many, many years taking classes on these exact topics and then writing papers. And these people are professors; they have a lot of education, they’re smart. And I just fundamentally don’t understand why so much of this literature keeps happening and whether the people who are writing it think that they are uncovering something causal or not. Do you have any insight?

Bapu Jena:

Yeah, I have answers. I don’t know if they’re insightful. But let me just say the following. First of all, at some point later today, I’m going to tell you about a paper that we have coming out in probably a couple of weeks. It’ll be December. I don’t know when this will air. But it will fall into this category of a very weird correlation. And the way that we would describe it, and the way that we do describe this finding (which I know I’m making sound very “wow, what’s he going to drop on me?” for a moment), is that it’s just a hypothesis.

I would have no problem if people would do the kinds of studies that you’re describing, which are purely correlational studies, no experiment serving as the backbone behind the idea, and saying, “Look, here’s this interesting observation. We don’t know what to make of it. It could be causal for X, Y, Z reason.” There’s a channel by which this thing, in this case dandelions, could have an effect on some health outcome, and it would work by this mechanism. This flower affects this protein in the body; this protein in the body affects these cells; these cells affect the development of dementia; something like that. Totally fine with that, because then a reader could say, “All right, well, I want to investigate this or not investigate this any further.” But that’s not the way the studies come out. The way the studies come out is: dandelion greens are strongly associated with dementia, and therefore you should eat fewer or more of these dandelion greens.

The question then is, to your question, why do you see this? And among educated researchers, I think it’s two things probably. One is I think that they might at their core believe this to be true. And this is something I’ve been thinking a lot about now, which is how do your beliefs impact the way that you do science, the way that you do research? Certainly if you have a belief that something is right or correct, true, you might do research in a way that validates that and you might interpret the findings in a way that validates that. I think that’s one problem. And I think that was something that we maybe saw a lot of actually during the COVID-19 pandemic.

And then the other problem is I think that the incentives aren’t there to do something more. It takes more work to do something clever and creative to solve a problem in a more rigorous way. It takes more effort and more thinking and more creativity. And if the incentives aren’t there to do that, then you’re not going to do that. And I think in general, medical journals don’t require that as a standard. You write a lot of economics papers; economics papers do require that as a standard. You would not typically see those kinds of papers published in the best economics journals, whereas almost every week in the best medical journals, you’ll see a paper like that come out.

Emily Oster:

Yeah, I think, for me, that is an answer, which is almost certainly right, which is my incentive as a researcher is to publish my paper in the best, in JAMA. And if JAMA is happy to see a correlation, then that’s a much easier paper to produce than something else, and so maybe I do that. That, of course, pushes the question down the line to why don’t the editors of JAMA understand the difference between correlation and causality? A question I ask myself frequently.

And I get the impression when I read these papers that people think that the kinds of empirical adjustments they do are sufficient. When you read these papers… Just to back up, for people who don’t spend their days in papers about coffee and longevity, it’s very common in research like this for researchers to talk about adjusting for variables. They say, “In the data, I see not only do you eat dandelion greens and how long do you live or do you have dementia?” Let’s say. Let’s keep with our dementia thing. “Not only do I see your dandelion greens and your dementia, but I also see whether you went to high school and some categories of income and your sex and your age and maybe your race, and maybe I see more,” whatever it is. And they adjust for those. They use a statistical method to effectively try to match people with similar values of those variables. It’s not usually quite that technique, but something where I basically say, “I’m going to try to hold constant these other features so I can be closer to being causal.”

And my view informed by a lot of the research I’ve done is that most of the time that’s completely insufficient. And that, in fact, there’s tons of things we don’t see about people which are very important for their choices of, say, what to eat that are not summarized… A full component of somebody’s interest in their health is not summarized by two categories of education and three categories of income and that these are just really insufficient. But I wonder how much of it is that I just have a particular, I think, informed but maybe unusual view about just how important unobservables are and that most of the public health profession has a different view about how important unobservables are. How much do you think it’s that?
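
That worry about unobservables can be simulated too (a toy sketch with invented numbers): adjusting for everything the researcher sees still leaves the estimate far from the true effect of zero, because the trait that actually matters never appears in the data.

```python
# Toy sketch: regression adjustment can't fix an unobserved confounder.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 50_000

education = rng.normal(size=n)        # observed covariate
health_interest = rng.normal(size=n)  # UNOBSERVED confounder

# Both traits drive the exposure, and both drive the outcome.
eats_greens = education + health_interest + rng.normal(size=n)
outcome = 1.0 * education + 2.0 * health_interest + rng.normal(size=n)
# True causal effect of greens on the outcome: zero by construction.

# The "adjusted" model controls for education, the only thing we observe.
X = sm.add_constant(np.column_stack([eats_greens, education]))
fit = sm.OLS(outcome, X).fit()
print(f"Adjusted 'effect' of greens: {fit.params[1]:+.2f}")  # about +1.0, not 0
```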

Bapu Jena:

I think that’s part of it, but I’ll tell you, as someone who spent most of my life writing the kinds of studies where we’re trying to find experiments and publishing them mostly in medical journals, there’s a lot of papers that I’ve sent to journals where they come back to me, and the paper gets rejected, and they’ll explain why they think this is not causal. And I’m always thinking to myself, wow, this is definitely causal. It may not be important, but it’s definitely causal. And we’ve gone to great lengths to show that there’s a causal relationship here. It’s not as if the editors don’t have an understanding of these ideas, which is always puzzling to me because they reject… And it’s not just me, it’s other people who I work with who are submitting papers that are using these causal designs and they’ll get rejected. And one of the reasons they’ll get rejected is because they’ll say, “We’re not sure this is causal because of X, Y, and Z.” But then another paper, which is not at all causal, will go through.

And the only way I can rationalize this is to say that I think everybody knows that these studies that we’re talking about just can’t be true. The only way this makes sense is they just can’t be true, and so you hold them to a different standard. Even though the language is there, which is close to causal, there’s these disclaimers at the end about how the study is not causal or you can’t interpret anything about the causal relationships here based on the study design. But I think that they have an understanding of the issues, but they’re applied differently. And the question might be why are they applied differently?

Emily Oster:

First of all, that’s an incredibly cynical view, although perhaps true. But do you think the reason they’re applied differently is because you, as a journal editor, know that if you publish… Not that your work isn’t important, but some of the things that you work on I think are not… Some of the things you work on are very headline friendly, but some of the things you work on are not very headline friendly. And so you publish a Bapu paper, you know that it’s causal, but maybe The New York Times doesn’t care. You publish something that says ultra-processed foods cause dementia or dandelion greens don’t cause dementia or whatever, The New York Times Well section, they’re calling, they’re calling you. How much do you think that matters?

Bapu Jena:

I think it matters. And as you’re saying that my mind is racing, how could you start to study that? I wonder if there are any shocks to media interest in these topics or something like that where you could really show that when The New York Times or something like that becomes interested in a topic that you start to see more papers being published in the top journals on that or something to that form. Because I do think that’s part of it that the journals are responding to.

Ultimately, the job of the journals is to create scientific information that shapes the scientific record. That’s certainly one goal, I think. But the other is to make sure people read their journal, to feel like they’re being relevant. That’s a core part of their business model. If they’re not publishing things that people want to read, that’s a problem. And one of the things that people have an appetite for, there’s a demand for, is this kind of science. Now, why that is, I don’t know, or why there’s a demand for this low-quality science, I don’t know. I certainly understand why there’s an interest in understanding whether or not coffee has an effect on your life, or all these things that we do all the time.

Emily Oster:

Yeah, I think people… Part of the reason for the demand is if you, as the… Now we’re one step below this, but The New York Times would like people to click on their things and read them, and something that says, “Coffee has this effect,” or, “Plastics have…” Whatever it is, something that causes people to worry about a topic that is relevant for their lives, that is a great thing to get people to click on. That’s prime territory for clicks.

Bapu Jena:

Yeah, there is a question of what… I don’t know that the right word here is research integrity, but I will sometimes see things where people will lambast a journalist for talking about a study that’s clearly not causal. And my view is why are we picking on the journalists? The journalist wasn’t trained to do this. We should be picking on the journal itself, the journal editors or even the researchers who decided to do that. They really should know better, and yet they’re doing that. And I think that, again, we can always quibble about what’s an important study or a creative study, I would have no problem debating that, but I think that most people can agree that certain studies are just not likely to be right or you’d have no idea whether they’re right or wrong. And I think people would agree with that.

Emily Oster:

I find that very cynical. And I think maybe a different way to say it is I think it goes back to this thing you said at the beginning about people believing something to be true. If I believe it to be true for whatever reason, based on other things I know about the world, maybe from better studies, maybe something… If I really have a core belief that eating more ultra-processed foods is bad for you, which I think is a belief that many people have and may be a reasonable belief, and then I publish a study in which there’s a correlation between the share of your food that’s ultra-processed and your health outcome, of which there are many studies, even though with almost all of those studies, really, it’s very hard to know how much of that effect is the food and how much is all of the other differences. And really, maybe you would dig into people and they would say, “Well, yeah, there’s probably some other confounds.”

The fact that they’re more comfortable saying to a journalist, “People should eat less ultra-processed foods,” is because they’re coming in with the belief that that is right and that this paper reinforces something which they really already thought was true. And even if you said, “Well, this particular paper didn’t really add that much to my knowledge, I was already very sure that was true, and so I’m comfortable saying that it should change people’s behavior, not so much because of this particular piece of evidence that we’ve produced, but just because I generally think that that’s a thing people should do, and this is an opportunity to express that.”

Bapu Jena:

Yeah, I totally agree. It’s such a hard question to study. The way I’ve thought about this is: are there particular topics where you could reasonably show that a researcher would have a particular view in mind, and see whether or not their studies align with those views? And the example that I sometimes give my undergraduate students is, “Think about evidence on masking.” There’s not a ton of studies, and certainly not a ton of high quality studies, on this. But if you were to take individuals who published research that showed that masks are beneficial for the prevention of COVID-19 and you look at researchers who found the opposite, if we looked at those researchers on social media prior to ever publishing any studies on this topic, would those who were in the first category be more likely to be wearing masks, to be tweeting about masks, things like that?

I think that there are ways to show that a researcher’s views about the world… And I won’t even call it ideology, because there’s no ideology about coffee and longevity. It’s: I think it could be true, there’s an incentive for me to publish this, so I’m going to do it. But thinking about the incentives, whether it’s career incentives, financial incentives, ideological incentives, and how much of a role incentives play in the questions that people ask and the way they study them and the answers that they find, I think is an untapped area for research but could be really interesting.

Emily Oster:

Yeah. And I think there’s so many pieces of the incentives. There’s my personal incentives; there’s how aligned am I with an existing set of beliefs that other people have. I think there is some work on how much more difficult it is to publish something that goes against some existing scientific paradigm. Which of course, in some ways it should be, because if everyone says the sky is blue and you write a paper that says the sky is green, probably we shouldn’t publish your paper. But that kind of thing, in settings in which it’s less clear, can result in basically stasis in a field until something surprising happens.

Bapu Jena:

Yeah. You know what conversation I’ve never had? I’ve never asked a journal editor why they publish these studies. I would love to know the answer to that question.

Emily Oster:

I would love to know the answer to that question too. I’ve never managed to find someone to discuss it with me. The other thing that’s very interesting that’s probably too in the weeds for here, but an aspect that’s totally… In economics, the peer review process is very, very different than in medicine. In economics, the peer review process takes an extremely long time, and there are many people who have… It’s too extreme in some ways in my view, but you’ll have five different people writing you three pages about how terrible your paper is, and every single line is the worst; revise and resubmit. And it’s just a lot of commenting and a lot of people reading things in great detail. The referee process in medicine is much faster, but it also is much more about here is this paper. Do I like it or not more or less as is? And then that allows for less space for an expert reviewer to fix something.

Bapu Jena:

Yeah. Yeah. We have this paper. And I’ll tell you the findings. It’s not the-

Emily Oster:

Yeah. Are you going to tell me the finding of the paper now?

Bapu Jena:

Oh no, I want to tell you another one because it does illustrate some of the things we’re talking about.

Emily Oster:

Then I want to hear about this. Yeah.

Bapu Jena:

But then I do want to tell you about the other. Yeah. A couple years ago, we had this paper in the New England Journal of Medicine that showed that kids who are born in August have higher rates of ADHD diagnosis than kids who were born in September. And this other economist had shown this as well, but the idea was that kids who were born in August are often the youngest kids in their class, and so when they behave differently, that’s perceived to be reflective of ADHD as opposed to the recognition that these kids are just younger for their grade. And that all stems from the idea that ADHD is a subjective diagnosis. You don’t see the same thing for asthma. You don’t see the same thing for diabetes because those are more objectively determined.
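
The core of that comparison fits in a few lines (field names here are hypothetical; the published study does far more). The asthma check at the end is the falsification test: an objective diagnosis should show no August-September gap.

```python
# Sketch of the relative-age comparison (hypothetical field names).
import pandas as pd

kids = pd.read_csv("claims_cohort.csv")  # hypothetical: one row per child

# With a September 1 school cutoff, August-born kids are the youngest in class.
aug = kids.loc[kids["birth_month"] == 8, "adhd_dx"].mean()
sep = kids.loc[kids["birth_month"] == 9, "adhd_dx"].mean()
print(f"ADHD rate, August-born:    {aug:.4f}")
print(f"ADHD rate, September-born: {sep:.4f}")

# Falsification: asthma is objectively diagnosed, so no gap is expected.
aug_a = kids.loc[kids["birth_month"] == 8, "asthma_dx"].mean()
sep_a = kids.loc[kids["birth_month"] == 9, "asthma_dx"].mean()
print(f"Asthma gap (should be ~0): {aug_a - sep_a:+.4f}")
```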

And now fast-forward a couple years later, we have this paper now which shows that the diagnosis of ADHD goes up on Halloween. And so we look at millions of visits to pediatricians on Halloween versus all the surrounding days. We definitely account for the possibility that parents who take their kid to the doctor on Halloween might be different. And we show that we don’t think that that’s what’s going on. And we show that the diagnosis rate of ADHD is higher. And what we think is going on is that there are some children where the provider was thinking about this as a possibility of a diagnosis, ADHD, but they never pulled the trigger on the diagnosis. But on that day, the behavior of the child is different for obvious reasons. They’re excited about Halloween, so they’re just less attentive, they’re not answering questions, and so that diagnosis gets made. And it’s not a huge effect, but it’s clearly measurable. You can see it in the data.
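
The Halloween analysis has a similar skeleton (hypothetical field names again; the real paper also has to rule out that a different mix of families visits the doctor on Halloween):

```python
# Sketch: ADHD diagnosis rate on Halloween vs. nearby days (hypothetical fields).
import pandas as pd

visits = pd.read_csv("pediatric_visits.csv", parse_dates=["visit_date"])

month_day = visits["visit_date"].dt.strftime("%m-%d")
is_halloween = month_day == "10-31"
window = month_day.between("10-24", "11-07")  # roughly a week on either side

halloween_rate = visits.loc[is_halloween, "adhd_dx"].mean()
baseline_rate = visits.loc[window & ~is_halloween, "adhd_dx"].mean()
print(f"ADHD dx rate on Halloween:   {halloween_rate:.4f}")
print(f"ADHD dx rate on nearby days: {baseline_rate:.4f}")
```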

And we write this paper up, and we don’t know whether this is underdiagnosis or overdiagnosis. It could be overdiagnosis: something as arbitrary as Halloween leads you to make a diagnosis that you otherwise wouldn’t have made. But it could also be underdiagnosis. This could be what we call a stress test. Like you do a cardiac stress test to look for heart disease, this could be an ADHD stress test, where if that child responds in such a way on Halloween that you now think about the diagnosis of ADHD more firmly, that might be indicative of something that needs to be addressed in that child. That could be underdiagnosis.

We write this paper, and we send it to a lot of places: medical journals, all good places. And most of the reviews, actually, generally, on the methods, they don’t have anything to say. But from a number of places we’ve gotten feedback, both from the editors and sometimes from reviewers, about how this finding is highly stigmatizing to the disease. And we’ve had an extraordinarily difficult time getting it published. Now we’re going a different route. But my pushback has been, well, look, we’re not saying this is overdiagnosis or these are frivolous diagnoses. In fact, if this is underdiagnosis, what does it mean to a parent that the only way their child would’ve gotten diagnosed with ADHD, a condition that their child might have, is if by chance they happened to go on Halloween, and otherwise it wouldn’t have been diagnosed? That arbitrariness in the diagnosis I think should be concerning to people. It might be inevitable, but it should certainly be something that flags your attention. And we’ve had a really difficult time getting this thing published because of, for lack of better words, what I think are the optics. That’s the feedback that we’re getting.

Emily Oster:

That’s very frustrating, because of course, if you have the very high-minded view we’re all supposed to have of science, which is we’re going to learn things and we’re going to get data, and then we’re going to use it to make decisions and we’re going to put truth out in the world… To say, “Well, here’s the thing: I believe it to be true, but I feel that people shouldn’t hear it because it will make them feel bad or it will…” There are a variety of reasons that… That must be very frustrating.

Bapu Jena:

It is, but we keep marching on. Tell me when you’re ready for this other random finding I have. I’ll-

Emily Oster:

Yes, I’m ready. Okay, wait, I have one other thing. I want to close this out, and then I want to hear your random finding. This sums up this entire conversation. I once asked my undergraduate class. We had gone through one of these papers that I was complaining about, and then I asked them this question we have here, which is: whose fault is this? And there was a kid who thought about it for a while, and then he basically was like, “Well, I think it’s my fault.” And he said, “I clicked on that headline, and that’s why they put up the headline, and that’s why the person wrote the story, and that’s why the journal took the article, and that’s why the researcher wrote the article so the journal would take it. And the journal took it so the guy would write about it. And the person wrote about it because they knew it would get a good headline. And they put the headline on so I would click on it, and so this is my fault.” And I was like, “Yeah, I guess it is. Thank you for the lesson.” That’s why 20-year-olds are smart.

Bapu Jena:

Yeah, that’s almost like reader shaming. I don’t think it’s his fault. I don’t think it’s his fault at all.

Emily Oster:

And I think just to close it, because this is a parenting podcast: I think that so much of what I spend my time doing is dialing down people’s panic around these kinds of headlines, which come up all the time in parenting, because if you had to pick a space in the world to write panicky headlines, parenting would be it, because that’s really what people like to click. And I guess just not to put too fine a point on it, but I think actually in the parenting space, it can be quite bad in the sense of, we talk about this, you and I talk about this, like, “Oh, of course nobody thinks that this is true, and researchers know it’s not, but here’s why. Here’s this interesting academic thing.” But I think for many people who click on these headlines, who see them, actually, it does make them feel really bad and it does make them worry about behaviors which they’re already taking. I let my kid watch a screen, and is it my fault that they have ADHD? It’s such a common question, and so it’s not free. This is not just an academic debate once it gets out into the world.

Bapu Jena:

Your statement makes me think of a research question, which is I think the journals and the people who write these articles probably view these articles as being costless, meaning that there’s no cost imposed on society from generating this kind of knowledge. But what if you did a study where you showed that if there is a paper that comes out that shows some purported link between autism or ADHD or something where we don’t have a good, firm understanding of what’s causing it and it links it to a very common behavior or a common thing that people might do, I wonder if, in the weeks or months following, you see increased diagnoses of anxiety or depression, or increased prescription fills, among mothers of young children who have that condition versus mothers of young children who do not have that condition, or something like that. You’d have to find something that was really splashy that got people like, “Oh my goodness, could this possibly be true?” But if your hypothesis is right, which I totally think it is, you see studies like this and you think, oh, what did I do wrong?

Emily Oster:

What’d I do wrong?

Bapu Jena:

You didn’t do anything wrong. Yeah, what’d I do wrong? It’s a paralyzing feeling, I think, for a parent. It would be great to do that to show that these studies have impact. Maybe coffee and heart attacks is not a good one because no one’s really going to feel bad, but there are probably other examples where you might.

Emily Oster:

Like Tylenol and autism. And it’s a bunch of these things where I hear about them a lot.

Bapu Jena:

Or vaccine-related things. Yeah. What is the causal effect of vaccines on various different… or of the COVID-19 vaccine on various different outcomes? If your child had some outcome and you ascribed it to a vaccine, your view of vaccines might be different. Your views of the federal government might be different after that, for reasons that were totally unfounded and would never have entered your mind had that kind of study not been published.

Emily Oster:

Okay, to close, can you please tell us your results? I’m dying to hear.

Bapu Jena:

Wow. Now I feel bad because you’re going to see there’s a link between what I literally just said and what I’m about to say, but that’s life. All right, let me just frame it. There was a study, I don’t know, maybe 10 years ago, it was a long time ago, which looked at taxi cab drivers in London. And it showed that a particular part of the brain called the hippocampus is enlarged in an enhancing way in these drivers. And the reason why is because they had to memorize these streets. They’re driving around. There was actually a test that they had to take called The Knowledge. And the idea was that because the hippocampus is involved in spatial processing and spatial recognition, that that part of the brain becomes stronger, for lack of better words. Just like if you exercise certain parts of your body, they’ll become stronger. Same intuition.

The hippocampus has also been implicated in Alzheimer’s disease. Now, the study that’s coming out, and it’s coming out in the British Medical Journal in a couple of weeks, we use this data that you’re probably familiar with where death certificates in the United States have recently been linked to occupation. We’re looking at these data and what kind of questions can we answer? And I think about this study in London taxi cab drivers a long time ago, and I’m like, wow, I wonder if we look at taxi cab drivers in the United States, what do rates of Alzheimer’s dementia look like in taxi cab drivers as a cause of death compared to every other occupation?

And so we look at something like 450 occupations, and taxi cab drivers and ambulance drivers, not EMTs, but ambulance drivers who are just literally doing the same thing. They have to drive all sorts of different places. They don’t know where they’re going. They have to memorize these roads. They have the two lowest rates of Alzheimer’s-related dementia… Sorry, Alzheimer’s-related dementia contributing to mortality, of all other occupations. And we show that they’re not lower in terms of other forms of dementia that are not thought to be related to the hippocampus, or what we call Alzheimer’s disease. And so it’s a very strange finding. And of course we’re not going to come out and say, “You should become a taxi cab driver.” We’re not going to come out and say, “You should stop using Google Maps so that you can-”
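
The occupation comparison, stripped to its skeleton (hypothetical field names for the linked death-certificate data):

```python
# Sketch: Alzheimer's as a contributing cause of death, by usual occupation.
import pandas as pd

deaths = pd.read_csv("death_certificates.csv")  # hypothetical extract

rates = (
    deaths.groupby("occupation")["alzheimers_contributing"]  # 0/1 flag per death
    .mean()
    .sort_values()
)
print(rates.head(5))  # per the paper, taxi and ambulance drivers sit at the bottom
```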

Emily Oster:

I think the headline here is “Using Google Maps Causes Dementia.” That’s the headline I would go with.

Bapu Jena:

Yeah. And I’ve already shorted stocks for Google, or Alphabet, whatever it’s called, in anticipation of this. But literally, one of the last lines says we view this as just generating a hypothesis. It’s an idea. Think more about how you might study this in a more causal way. We’re not arguing that this is causal, we’re not using causal language, we’re just saying it’s an idea, a really strange idea.

And so I think that there is space for correlational studies if they’re, A, answering or looking for interesting relationships that haven’t previously been studied and haven’t already been published on 50 times in the last two years, and B, not claiming more than what they’re claiming, and saying, “Look, here’s what it makes us think about. Here’s how we should look at this further.” I think those kinds of things are useful.

Emily Oster:

Yeah, and I think from this, there’s a million quite interesting studies one could think about doing that would dial more into causality. The obvious thing that comes to mind for me is randomly… Basically, fMRI. Put people in an fMRI machine and then have them do two months in which they do some training on different kinds of maps, and link this to some of these research questions around: can we encourage people to stay undemented by pushing their brain in various ways?

Bapu Jena:

Yeah. Yeah, looking at that and then maybe doing memory assessments of these individuals. There’s all sorts of ways that you could do this. And to be honest with you, probably none of them would work, but that’s totally fine, but at least it’s a different idea than what we see.

Emily Oster:

I love it. Thank you, Bapu. I really love having you.

Bapu Jena:

I love being here. Thank you. It feels like just yesterday.


CREDITS

Emily Oster:

ParentData is produced by Tamar Avishai with support from the ParentData team and PRX. If you have thoughts on this episode, please join the conversation on my Instagram, @ProfEmilyOster. If you want to support the show, become a subscriber to the ParentData Newsletter at ParentData.org, where I write weekly posts on everything to do with parents and data to help you make better, more informed parenting decisions.

And just for fun, I’d encourage you to go to our website and just search for panic headlines. The articles that come up run the gamut from whether you should worry about lead in your tampons or white noise or breast pump bacteria or microplastics. And after digging into the data, the answer is almost always no. Cut yourself some slack at parentdata.org. 

There are a lot of ways you can help people find out about us. Leave a rating or a review on Apple Podcasts. Text your friend about something you learned from this episode. Debate your mother-in-law about the merits of something parents do now that is totally different from what she did. Post a story to your Instagram debunking a panic headline of your own. Just remember to mention the podcast, too. Right, Penelope?

Penelope:

Right, mom.

Emily Oster:

We’ll see you next time.
