Hello, welcome to a special episode of the ParentData Podcast. This is not me reading one of my newsletters, but rather the audio from a talk that I gave at the South by Southwest EDU conference called The Power of Data and Its Limits. It is a talk all about the importance of data and the key mistakes that we make when we try to analyze data and when we try to understand it. The first slide, spoiler alert, says data is important but poorly understood. And the rest of the talk is all about that insight and trying to help you understand it a little bit better. So I hope that you get a chance to listen to this and that you enjoy it. Thanks so much.

Emily Oster:

All right, thank you. I am so excited to be here to do something in person with people. So that is just an incredible thrill. And thank you, Greg. That was a very nice introduction. I’ll tell you a little bit about who I am. So my main job, my professional identity, is I’m a professor of economics at Brown University, and I also have another part of my life where I do a lot of writing about pregnancy and parenting and COVID, and I write a newsletter and I just do a lot of things in that space. And in some ways when I look at the things I’m doing, they seem sort of disconnected, but actually the piece that brings them all together, the piece that goes across all parts of this, is the importance of data and the importance of being able to use data thoughtfully and helpfully to answer questions.

And the recognition that that usage of data is effectively the same, whether your question is: should I have an epidural with my pregnancy? Or your question is: how do people respond to a diabetes diagnosis in terms of their food consumption? Or the question is: how should we think about the value of vaccines in the COVID pandemic? That knowledge of data and understanding of data crosses all of those areas. And once you recognize that, or to the extent that I recognize that, it makes me think that sort of a key to helping people navigate many of these spaces is understanding data, understanding what it can tell us, and in some cases what it can’t tell us.

And despite that, I find that in general, people’s understanding of data is very poor. So I sometimes say data is important, but nobody really gets it, or people don’t always understand it as well as they could. And you might say, “Why does this matter?” And one answer is just that I’m obsessed with data, so of course I think it matters, because it’s the thing that I do and we all think the things that we do are very important. So fair enough. But let me tell you why I believe that; there are really two reasons. So one is that I think that actually, for a lot of people, we would make better decisions in our own lives if we had a better understanding of what we can learn from evidence. And so that’s a sort of personal pitch to, like, okay, invest in this so you can make choices that you are happier with.

I think there is also a very key element, and this is in some ways where the kind of education part comes in here. I think if, as a society, we had a better understanding of some of these issues, it might be easier to make our societal decisions in a better or more informed way. And so in particular during COVID, this became in some ways very salient to me. I felt like a lot of the failures, or some of the failures at least, that we had in the US kind of related to not having the data, the information we needed to make good decisions. And I kept having the feeling: if only people understood how important data is, we would have better data, there would be more pressure to get better information, and that would let us make decisions better. Sometimes when I say that, people are like, that’s a really naive thing to say; you’re a person who loves data.

Okay, fair enough. Maybe it’s a little naive, but that’s my pitch. So in light of that idea, today I really want to go through four big data lessons, and I’ll talk a little bit about applications to COVID, but not exclusively. And really these are the kinds of questions where I think it’s possible to understand why they’re important; it’s not that hard to communicate that. And if we understood all of these better, we would be able to communicate data better, to process data better, and to make better choices. So I’m going to talk about the question of where data comes from. I’m going to talk about the question of statistical power. I’m going to talk about the idea that correlation and causation are not linked, and how we can kind of see that. Then I’m going to talk a little bit about Bayes’ Rule. And in all of these cases I’m going to kind of come at this from the angle of: how do I teach this to people?

How would you convey this to people in a way that’s not like, here’s the math of what random sampling is, but sort of intuitively how do we get to the understanding of these sort of key issues? All right, so let’s start with where data comes from, and I’m reminding you that at the end we’ll do Q and A. And so if you have your little app thing, you should ask questions and then I will try to answer them later and I’m happy to answer questions about more or less anything that you want. So let’s get started. So where does data come from?

So I want to start with this statement. This is a statement I pulled off some CDC website, some academic paper or something: CDC data indicates that 42.5% of adult Americans are obese. So the CDC has a lot of statements like this, and it’s on a website on which they report out some information about the health of the American population. So the question is, where would you get this? How do we know this piece of information? And in some sense the answer to that seems obvious. We must somehow know it from weighing people, presumably people in America, and presumably adults in America based on this sentence. So we weigh them, and we must also measure their height; that’s how you get to a definition of obesity. And we calculate some number. Okay, so that’s roughly right.

And the question is, well, who are you weighing? And sometimes I’ll ask my students that and they’ll be like, “Well, I guess you weigh everybody, because it says adult Americans.” And I say, “Well, did you get weighed? Did they come weigh you?”

“No. No, nobody weighed me.”

“Okay, well, it can’t be everybody then; at least you personally were not weighed.” And in fact, the answer to this question, where does a number like this come from, is in this particular case a CDC survey called the National Health and Nutrition Examination Survey, which the CDC runs and which covers about 11,000 people every year. So they weigh the 11,000 people, they calculate their BMI, they calculate the share of them that are obese, and then they kind of convert that into a number to be representative of the whole US population.

So in some sense this makes a lot of sense, and statistically, of course, you can’t weigh everybody; that would be practically impossible. And so you’ve got to have some system for using a smaller sample of people to extend up to everybody. But the key is how you pick that sample. If you want to weigh 11,000 people and make a statement like this that has some type of validity, you need to be pretty careful about how you pick those people. So for example, one way we could pick those people is we could take everybody in a particular location. We could go to Missouri or New York, or we could go to… you only need 11,000 people. You could go to, I don’t know, a Knicks game or a Nets game, or you just go to some stadium and just weigh the people. So convenient. Actually NHANES is an enormous amount of work; they have all these, I don’t know, mobile clinics and stuff. Why not just show up at some sports event with 11,000 people? You weigh them and you’re done. That’s it. Very fast, cheap. And of course the problem with that is, well, that’s not going to tell you about everybody in America. That’s going to tell you about the people who went to the Nets game on that particular day. And I don’t know how those people compare to the overall population.

You could have sort of different approaches. You could weigh everybody who’s at the gym. Or what if you only weighed people who showed up to be part of your study? You said, well, come be in my study, which is about weighing people; and what kind of people show up for your study about weighing people? Not a random sample of people. Or, and this is a little closer to maybe what we actually do, you could only include people who picked up the phone when you called. All of these are ways in which our sampling would be non-random. And you can see, particularly with some of those first couple of examples, why that kind of approach, why just doing something convenient, is actually not going to lead you to the right answer. And it turns out that when you want to use a small sample to speak about a large population, which is a very, very key part of using data and statistical methods, you must think about sampling people randomly.
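To make that concrete, here is a tiny simulation of what happens when you weigh a convenience sample instead of a random one. The numbers are made up for illustration (the 42.5% share echoes the CDC statement above, but the gym-attendance rates are pure assumptions):

```python
import random

random.seed(42)

# Hypothetical population; the gym-attendance probabilities below are
# invented assumptions, not real data.
population = []
for _ in range(200_000):
    obese = random.random() < 0.425          # true obesity share ~42.5%
    # Assume gym-goers are less likely to be obese
    goes_to_gym = random.random() < (0.15 if obese else 0.35)
    population.append((obese, goes_to_gym))

def obesity_share(sample):
    return sum(obese for obese, _ in sample) / len(sample)

# Random sample of 11,000, which is what NHANES tries to approximate
random_sample = random.sample(population, 11_000)

# Convenience sample: the first 11,000 gym-goers we run into
gym_sample = [p for p in population if p[1]][:11_000]

print(f"random sample estimate: {obesity_share(random_sample):.3f}")  # close to 0.425
print(f"gym sample estimate:    {obesity_share(gym_sample):.3f}")     # well below 0.425
```

Both samples have exactly 11,000 people, but only the random one recovers the population number; sample size does not fix non-random sampling.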

You must have some way to get a sample that is representative of the population about whom you wish to speak. Now, maybe that seems very straightforward, but I would argue that during COVID we totally fell down, particularly in the US, on that approach. So let’s think about what I thought of as a very, very basic question that we had about COVID, which is: what is the case rate in a given state at a given time? And this turned out to be tremendously policy relevant, right? Still, many, many policies about what you should do or not do, or who can go to this thing or that thing, are based on this case rate number. It’s a number like: what’s the number of people who have COVID out of every 100,000 people? If it’s above this, then that. So that was a very core number, and there are two ways that we think about getting it.

The two ways we thought about measuring this: one is this case rate, the number of cases detected per 100,000 people. And that’s something we often report. We don’t have a random sample for doing that. This was a point that former President Trump made early in the pandemic: that if you test more people, you will find more cases. And that has been weaponized as an argument, but it is in fact true that the places that do more testing end up finding more cases, because that’s the way that works. And so if you measure cases in that way, you’re going to get one number, which is sort of non-random and depends on the amount of testing. The other way we measure COVID rates is with something called the positivity rate. That’s the share of the tests that show up positive. If we don’t have a random sample, then that number is also kind of weird, because that’s just the people who come.

And so if you’re not doing very many tests, the only people who show up may be people who almost certainly have it, because you’re selecting that sample. In a funny way, in a world with a random sample, where you were doing this with a random sample, these numbers would line up. They wouldn’t be literally the same number, but they would capture exactly the same thing. If you had your sample of a million people and you were testing them every week, you would get the same conclusion whether you measured it with these case rates or positivity rates. But in practice, because there was so much variation in how much we were monitoring the pandemic across states and time, we ended up with some stuff that looked pretty weird. Here is a graph that I made at some point of the relationship between the positivity rate, which is here on the vertical axis.

This is a share of tests that are positive. And on the horizontal axis is this measure of this daily case rate. Now again, if we had a random sample, there’d be a 45-degree line, it would just be on the line because they would exactly line up with each other. In fact, you get things, and I sort of highlighted a couple of things here. So you get things like in South Dakota in November and in Arizona in January, I think these are 2020 and 2021, the case rate’s the same. So these are both places with a case rate of 130 cases detected per 100,000. But the positivity rate, the share of tests that are showing up positive is 50% in South Dakota at that time and about 15% in Arizona. And so what that kind of tells you is it must be the case that actually South Dakota has tremendously more COVID at this time than Arizona does, but they must be doing much less testing because half the tests they do are showing up positive.
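You can back out the implied testing volume from those two figures directly, since positivity is just cases detected divided by tests performed. A quick sketch with the South Dakota and Arizona numbers:

```python
# positivity = cases / tests, so: tests per 100k = (cases per 100k) / positivity
def implied_tests_per_100k(case_rate_per_100k, positivity):
    return case_rate_per_100k / positivity

# Both states show 130 detected cases per 100,000 people
sd_tests = implied_tests_per_100k(130, 0.50)  # South Dakota, ~Nov 2020
az_tests = implied_tests_per_100k(130, 0.15)  # Arizona, ~Jan 2021

print(round(sd_tests))  # 260 tests per 100k
print(round(az_tests))  # 867 tests per 100k
```

Same reported case rate, but Arizona was testing more than three times as much, which is exactly why the identical case rates must be hiding very different amounts of actual COVID.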

We have seen sort of similar things all over: in some sense this case rate thing that we think of as this special measure is actually representing totally different things in different locations, because we are not randomly sampling people. And this problem continues to be true, and in fact I think will in some ways be more true as we go forward; as we do less testing and less monitoring of the pandemic, we’re going to keep coming up against these things. Okay? So that’s my first lesson, which is that data comes from the world, but we need to randomly sample. All right, second lesson: statistical power. And this is really a lesson about the ways in which data is potentially limited. All right? So I want to start with the idea here of a randomized trial.

This is probably sort of familiar to you guys by this point in the pandemic; everyone kind of knows what a randomized trial is. But we’re imagining we’re having a randomized trial, which is a study in which we have a group of people and we randomly do one thing to one set of them, and another thing to the other set. And the particular randomized trial I want to imagine is that you have your high school class and you randomly pick half of them to be forced to be on the cross-country running team for the semester. Okay? That’s your treatment. And so we’re going to divide the kids through some kind of random process, and half of them will have to be on the cross-country team. And the first outcome we’re going to study is their ability to complete a 5K, that’s about three miles, in under 35 minutes.

Okay? And so then the question we’re going to ask is: how many people do we need to have? How many kids do we need to include in our study in order to figure out if our treatment is having a statistically significant effect? So the key to thinking about this question is thinking about how large our effect size is. In a case like this, probably if you took a bunch of high school kids and you made them be on the cross-country team, a lot of them could end up running a 5K within 35 minutes. That’s not that fast for a high school kid. So my guess is, if you sort of reflect on this, the effect is probably very large. The difference between the share of kids who can achieve this after a semester of cross-country and the share who can achieve it without that, that’s probably a pretty big effect.

And what that means is that you actually probably don’t need that many kids. Probably if you had a couple hundred kids in these two groups, you would be statistically powered, as we’d say formally, to figure out whether there was an effect here. And the larger your expected treatment effect, the smaller the sample size that you need. But then we think about another version of this trial in which it’s the same treatment, but now what we want to study is the impact on people’s blood pressure at the age of 50. So we’re going to have these kids, we’re going to make them be on the cross-country team, and we’re going to wait and wait and wait and wait. And then when they’re 50, we’re going to find them. You’re not going to be able to find them, but let’s imagine you could.

Wait, wait, wait, we’re going to find them, and then we’re going to measure their blood pressure. And the question is: how many people would you need to include here to be confident in whatever effect you’re going to find? And the answer again depends on how big the effect is, but in this case, this effect is going to be really small. Maybe it’s the case that running for one semester in high school makes your blood pressure lower at age 50. It could be. But a lot of other stuff happens between the age of 17 and 50 that also affects your health. And so any effect size here is going to be teeny tiny, so small. And that means if you want to be able to detect that tiny, tiny effect, you need, I don’t know, millions of people; you’ve got to run this experiment on every 17-year-old in America if you actually want to be able to figure out statistically whether there is an effect here.
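That intuition can be sketched with the standard sample-size approximation for comparing two proportions (two-sided α = 0.05, 80% power). The specific proportions below are invented purely to contrast a big effect with a tiny one:

```python
from math import ceil

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per arm to detect a difference between
    two proportions p1 and p2 (two-sided alpha=0.05, 80% power)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Big effect: suppose 50% of untreated kids vs 80% of the cross-country
# kids can finish the 5K in time (made-up numbers)
n_big = n_per_group(0.50, 0.80)

# Tiny effect: suppose the treatment nudges some age-50 outcome
# from 50% to 51% (also made up)
n_small = n_per_group(0.50, 0.51)

print(n_big)    # a few dozen kids per group
print(n_small)  # tens of thousands per group
```

Shrinking the effect from 30 percentage points to 1 blows the required sample up by roughly a factor of a thousand, which is why long-run, small-effect questions are so hard to answer.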

Why is this important? Well, I think that understanding this gives us a sense of where we can expect confidence in the data that we see and where it’s going to be very much harder. So the example in the COVID space that I think is really, really important to think about is the case of vaccines. Vaccines are something where we have good randomized trial data; that’s how we learn about the efficacy of any kind of vaccine or medical treatment, typically. When we look at older adults, when we look at people in their 60s, 70s, 80s, you actually don’t need an enormously large sample size to figure out the impact of COVID vaccines on hospitalizations or deaths, because that is a somewhat more common outcome in that group. So when we look out in the world at data about our vaccines, and how confident we can be about their efficacy in that group, and how tightly we can draw our confidence around the numbers, we actually can draw it really tightly, because that is an outcome that is fairly common in those groups.

When we have turned to something like: well, what kind of efficacy are we going to see in kids under five? One of the things that we’ve run into there is that this is a group that just has a really low risk of serious illness or mortality from COVID. That’s extremely lucky on many dimensions, but it makes the power really challenging. And so it means when people are looking to those studies and saying, “Okay, I want to be confident that we are able to detect an impact on hospitalization or an impact on serious illness,” because that effect size is so small, you need really, really big samples to detect it. And because it’s hard to get those very big samples, we are sometimes up against the limits of what we conceivably would be able to say with these kinds of data. And a related point: when we are trying to detect very rare complications of anything, but in this case of vaccines, the rarer the complication, the larger the sample you need to detect it. And I think that’s sometimes missed in some of these conversations. Okay. So that’s the second lesson.

Okay, so the third point I want to make is in some ways my favorite, and it is that correlation is not causation. So a lot of my academic research is about diet: about how we understand the value of particular diet choices and how people make choices about diet. I don’t know, I’m just very interested in diet. And one of the key questions that we ask, both as a society when we’re developing dietary guidelines and as individuals, is just: what is a good food? What is a healthy food? How should I be eating? What is the right way to diet? There’s a lot of literature on this, a lot of papers written about the question of what is the best diet. And there is a very common data approach across all of these studies.

So this approach goes like this. You interview people about their diet and you ask them what they ate. And it turns out there’s a very active question about how you can figure out what people eat. So one thing is you could have them write down everything they eat, but that’s actually really hard, because once you ask people to write down what they eat, then they eat differently, or they lie, or both. Once you’re keeping a food diary, you’re like, “I probably shouldn’t have so much ice cream today,” or whatever it is. And so people aren’t very good at that. So instead what we use is dietary recall. You say, okay, think back to last Tuesday and list all the things you ate last Tuesday. But people are also not very good at that; they forget a lot of the things they ate, or they don’t want to tell you.

So when you do studies like this, you understand that your data is going to be sort of incomplete. And so generally they show up with things like, I don’t know, the average person in a study will eat 1,600, 1,700 calories a day, which of course is substantially lower than the average American’s caloric consumption. But at any rate, you ask people about their diet, and you’re kind of hoping that what they tell you is a sort of approximation of the truth. And then you also weigh them, and you collect typically some other health metrics about them. And then these papers will look at the relationship in the data between the diet choices and people’s weight or health. So you ask a very simple question: among the people who eat this kind of food, are those people healthier or less healthy, more or less overweight, whatever your outcome is.

And typically then, when you are going to report out the results of this kind of study, you will look at whether foods are correlated with higher BMI, and if they are, you’ll say they’re unhealthy. So you’ll have something like: people who eat too many Reese’s Peanut Butter Cups, that’s bad for you, because it correlates with a higher weight. So again, a very, very common set of papers. In thinking about this, at some point I ran a set of very simple regressions associating individual foods with weight. And here is kind of what you get in this standard approach. So one thing is there are a lot of different kinds of lettuce that people eat. There are many different lettuce options. And one of the things you find is that eating iceberg and romaine lettuce tends to be associated with a higher BMI.

They tend to be associated with higher weight, whereas if you choose to eat arugula or dandelion greens, that will lower your BMI. Also, you have some different options for fat. People who eat margarine tend to have a higher BMI. People who eat butter, it’s kind of neutral. But if you eat a lot of olive oil, that’s going to lower your BMI. Also sugar substitutes, and I want to be clear, sugar substitutes are items which contain no calories. So these are zero-calorie items; that’s actually also true of lettuce. With sugar substitutes, what we find is that people who report consuming chemical-based sugar substitutes, that’s like Splenda, tend to have a higher BMI, whereas people who consume plant-based sugar substitutes, like stevia, have a lower BMI. If you look at all of this together, it’s like: these things cannot actually be causal. It cannot actually be the case that iceberg lettuce makes you fat. Iceberg lettuce is just water, okay? It’s just a crunchy water item, it’s not a thing.

And by the same token, margarine and olive oil are kind of similar, and sugar substitutes are literally all things with no calories. And so, and this is why I put these up here, it kind of can’t be that these are really relationships we’d want to conclude are causal. What’s really going on is that the other characteristics of the individuals who consume these different foods are different in other ways. So in this case, for example, when you look at what kind of individuals are consuming a lot of arugula and dandelion greens, it tends to be people with more education, it tends to be people who are higher income, it tends to be people who are doing all kinds of other things that are positive for their health, exercise or other kinds of dietary choices.

And separating out arugula specifically from generally having a healthy diet, or exercising, or being a person with access to medical care: these things are basically almost impossible to do in the data. One of the things that I work on, the last pre-pandemic paper I wrote before I started working only on COVID, was about the way in which these kinds of biases can arise through information sharing. So one of the things that you see is that when we start telling people that something is healthy, the people who adopt that behavior tend to also be the people who are doing all of the other healthy things. And so the relationships that we see in the data get more and more biased as a result of the messages that we are sending. If you look back at the 1980s and you look at the relationship between sugar and obesity, it’s actually pretty flat.

And part of that is because this current moment of sugar as the devil was not the case in the 1980s, when the complete breakfast was a bowl of Frosted Flakes and a juice and jam on toast. When I was a child, the complete breakfast had a tremendous amount of sugar in it. Now the complete breakfast is like two egg whites and an avocado toast. And so in this early period, when sugar was not the enemy, you did not see much of a relationship between sugar and weight. In the current moment, you see a tremendously large relationship between sugar and weight. And simultaneously you see that the relationship between education and sugar consumption has changed a lot, that the relationship between cigarette use and sugar consumption has changed a lot, that the relationship between exercise and sugar consumption has changed a lot. As we told people “don’t eat sugar,” the people who heeded that were the people who were also doing all these other things. And that makes these kinds of relationships even more challenging to pick up in the data, because they are effectively a moving target.
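The arugula pattern above is classic confounding, and you can reproduce it with a toy simulation. Here a hypothetical “health-consciousness” trait (entirely invented) drives both arugula eating and lower BMI, while arugula itself is given zero causal effect; the correlation shows up anyway:

```python
import random

random.seed(0)

eaters, non_eaters = [], []
for _ in range(50_000):
    # Hypothetical confounder: overall health-consciousness, 0 to 1
    healthy = random.random()
    # More health-conscious people are more likely to eat arugula
    eats_arugula = random.random() < healthy
    # BMI depends on health-consciousness plus noise; arugula
    # deliberately has NO causal effect in this simulation
    bmi = 30 - 6 * healthy + random.gauss(0, 2)
    (eaters if eats_arugula else non_eaters).append(bmi)

mean_eaters = sum(eaters) / len(eaters)
mean_non = sum(non_eaters) / len(non_eaters)
print(f"arugula eaters: {mean_eaters:.1f}")
print(f"non-eaters:     {mean_non:.1f}")
# Eaters come out roughly 2 BMI points lighter, even though
# arugula does nothing here.
```

A naive regression of BMI on arugula in this data would “find” a sizable benefit that is entirely the confounder at work.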

I was explaining this research to one of my graduate students the other day, and he was like, “Are you telling me that we don’t know anything about what we should eat?” And I was like, “Yeah, basically. Yeah, I mean, pretty much. That’s what it is. Don’t smoke cigarettes.” That’s what we got. All right, so this of course comes up in COVID, and it comes up most, I think, in how we think about the relationship between COVID and this whole suite of what people call non-pharmaceutical interventions: a whole suite of things that we do that aren’t vaccines or drugs but are behaviors; lockdowns, masks, school closures, no indoor gatherings, all the kinds of societal restrictions. Part of what’s been very challenging about thinking about which of those we should turn on and off, and when we should turn them on and off, is that it’s very, very difficult in the data to separate these things from each other, or from any other characteristics of the population. So an example, not to pick on masks, but a good example of this: there’s a paper from the CDC from a couple of weeks ago in which they correlated mask use with COVID rates. And they argued that people who used masks were much less likely to have COVID.

That was sort of true in some correlational sense, in the same way that iceberg lettuce is correlated with being overweight. What was less clear was whether you could make a causal claim there. In particular, in that case, most of the people who used masks were also being tested for their job, whereas most of the people who did not use masks were being tested because they were sick. So there was this very obvious confound, which was just: what are the characteristics of these people? There’s a bunch of people who need to do surveillance testing for their job, and a bunch of people who are coming to your testing site because they’re ill.

And those things were very highly correlated with mask use. Again, in a case like that, it’s not that this would tell you masks don’t work, or do work; it’s just that there’s no information conveyed, which is in some sense the same point as with the foods. It’s not that dandelion greens aren’t good for you; they could be good for you. And if you enjoy eating a stringy, hemp-like substance with a little garlic, that’s fantastic, enjoy yourself. But it isn’t obvious from the data that that’s true. And I think the same is true of some of what we see about these non-pharmaceutical interventions. And because that’s hard to understand, one of the things that’s happened, and it’s not the only reason, but because our data on this is poor and because we have struggled to really get to anything factual, is that the sort of data hole has been filled with politics.

And in a world in which you don’t know what the answer is, it’s very easy for people to cherry-pick pieces of flawed evidence on both sides and say, “This shows that I’m right.” Because if there’s one thing people like, it is to be right. So I think this is a place where, if we had had a better approach, a different approach, we might have been able to understand this better. There you go. Okay.

So the last thing I want to talk about before we answer questions, and you guys should be putting your questions in the thing, is Bayes’ Rule. And here I’m going to make you answer a question. Okay, so look, let’s say there’s a disease that affects one in 10,000 people, and we have a really, really good test for this disease. This is a test that detects 99.9% of cases of the disease, and it has a 2% false positive rate. So to put that in your mind: almost every time someone has the disease, we catch it, and then there’s this sort of small chance of a false positive. And now imagine we’re testing somebody and their test is positive. So what is the chance that they have the disease? Somebody yell a number. Yell louder. 98%, okay. 98% is a very common answer. I’ll take some other answers.

Speaker 2:


Emily Oster:

50%, good answer. Also very common. Okay, so we got a good range, between 98% and 0.0001%. And when I do this in class, I usually get about that range. At the extremes, people will say 100% or zero. Usually we get the whole range. But okay, so the question is, what is the answer? And the answer is one half of 1%. So think about why this is true. It’s actually, I think, possible to intuitively get to the point where you get this; let me walk through it.

So we’ve got our 10,000 people. Let’s assume, to make this a little simpler, that the test is perfect, perfect in the sense of detecting every single case. So rather than 99.9, it’s 100% effective at detecting cases. So we get 10,000 people and one of them has the disease. So the test picks up that person for sure. So for sure we find out that that person has the disease. Now there’s this 2% false positive rate. Now the number seems small, but there’s 10,000 people here. And so 2% of that 10,000 people is 200 people. So when you run this test, one person you’re going to pick up who’s like a true positive and you’re going to pick up 200 false positives. And so now you have 201 positive test results and only one of those people has the disease. It’s about a half of 1%.
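That walkthrough is exactly Bayes’ Rule, and it is a two-line calculation. A minimal sketch using the talk’s numbers (one-in-10,000 prevalence, 99.9% detection, 2% false positive rate):

```python
def p_disease_given_positive(prevalence, sensitivity, false_positive_rate):
    """Bayes' Rule: P(disease | positive test)."""
    true_positives = sensitivity * prevalence
    false_positives = false_positive_rate * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

p = p_disease_given_positive(1 / 10_000, 0.999, 0.02)
print(f"{p:.4f}")  # 0.0050, i.e. about half of 1%
```

With the simplified perfect-detection version above, you get 1 true positive out of 201 positive tests, 1/201 ≈ 0.4975%, the same “about half of 1%” answer.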

The intuition here is really, really challenging, I think. Once you work it through, it’s easy to see the numbers, but it’s very hard to see this intuitively. And the intuition, the thing that’s driving this, is that this disease is rare, just uncommon. So even with a very limited false positive rate, most people don’t have it. When you show up, it’s very unlikely that you do. So even though the test provides a tremendous amount of information, it started out being super unlikely, and now it’s quite a bit more likely. The difference between one in 10,000 and one in 200 is very, very large, but it is still really unlikely, because we started at this very, very unlikely place.

This comes up in various ways all the time. And the example outside of the COVID context that came up very recently was that the New York Times published a very big exposé, effectively, on prenatal testing, which was about how there are many new prenatal tests for unusual conditions, for rare microdeletions or other potential genetic conditions that are really, really, really uncommon. And there are now much better ways to test for those conditions. But the thing is that those conditions are really, really, really uncommon and the tests are not perfect. And so the New York Times story was about the fact that many people test positive on these tests and turn out not to have the conditions, just like in this example. But of course that’s true; that is exactly how the tests are designed. So that is completely, 100% expected.

That is how this should work. If you have a condition that’s very rare and you have a test with some false positive rate, you will learn a lot from the test, but you won’t learn everything. And because the condition is so rare, it still could be really, really likely that actually everything is fine. And what was sort of interesting about the exposé, for me, was not the pitch of the article, which was that these prenatal tests are terrible and they’re ruining people’s lives, but what it revealed: that nobody understands Bayes’ Rule. Because if you had just understood Bayes’ Rule, you wouldn’t have been surprised by any of this. Instead, what was happening was that people and their doctors were reacting to this information in the sort of 98% way, or at least in the 50% way, rather than in the sort of closer to 0.0001% way, because of this misunderstanding of what we can really learn from these kinds of tests. And it has real implications, because one of the implications is that people say, “Well, actually I don’t want that test. I don’t want that test because it’s not perfect.” But that test provides a lot of information. We just need to know how to process the information, and we’re running the risk of throwing away pieces of information like that because we are not able to engage with them effectively.

There’s a broader point around Bayes’ Rule that I think is worth making as my final point before we do questions, which is that the key input is what we technically call our priors. So in this example, your prior is the risk before you took the test. There’s some prior chance that you had the disease; here, it’s one in 10,000. And every time we come into some new information, we come into it with some priors. We come into learning that piece of information with an expectation about how likely it is that this information is true, or what did I think about this relationship before I saw this new piece of information? Or at least that is how we should do it. It should be the case that when we see a new study about something, we file away in the back of our mind all of the things that came before.

And we incorporate this new piece of information in the context of all of those things. And what that should mean is: if every other thing you have seen in your entire life points you in one direction, and then there’s one piece of evidence that comes in that points in another direction, unless that piece of evidence is wildly compelling and a billion times better than all this other stuff, it shouldn’t make you totally flip-flop from everything else over to something new. It should move your prior, it should maybe move you in some direction, but we should be influenced by what has come before. I think that there is often so much of a kind of newness value, a what’s-the-latest-thing, I-want-to-do-the-latest-thing pull, that we can sometimes forget that the latest thing might just be a false positive. We shouldn’t only listen to the newest thing.
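The "move your prior, don’t flip-flop" idea is just Bayes’ Rule applied repeatedly. Here is a minimal sketch; the 0.90 starting belief and the 1/3 likelihood ratio are illustrative numbers, not figures from the talk:

```python
def update(prior, likelihood_ratio):
    """Bayes' Rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# A strong prior built up from many past studies that a claim is true.
belief = 0.90

# One new study points the other way (evidence three times more likely
# if the claim is false than if it is true).
belief = update(belief, 1 / 3)

print(round(belief, 2))  # prints 0.75: the belief moves down, but doesn't flip
```

A single contrary study moves the belief from 90% to 75%, which is the point of the passage above: new evidence should shift your prior, not erase everything that came before it.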

And this, again, is something that I think drove, in some ways, a lot of the lurching changes that we saw during many aspects of COVID: a new piece of information would come out and we would be like, okay, now we’re all in on that. So here’s one new study from Liechtenstein that says the following, and now we’re all going to do things totally differently, until the next new study comes out the next day. As opposed to saying, “Look, we’re gathering together pieces of information from a lot of places, and we’re trying to treat this as a moving target, incorporate new pieces of information as they arrive, and update what we know.” Instead of that happening, we went from no masks to all masks to no masks to all masks, or from vaccines are 100% protective in all situations, like a titanium shield, to, well, they don’t work at all, and actually they work really well.

So we went back and forth in ways that are not helpful for public health messaging or for people’s understanding of what’s going on. So I think in some ways the big message that I would like to send here is this idea that if we understood the underpinnings of data better, we would be better at carrying that understanding out into varying situations. We would be better at the sort of situational fluency of understanding the information that comes in, being able to process it for what it is, and seeing where it is limited, where it is good, where we can learn from it. And I think that when we teach, we don’t do enough data. Other people push for this in different ways than I do, but when I think about what high school kids are learning, I kind of wish they did more data and maybe a little less trigonometry, or at least that we could have both, because these kinds of things come up in how we understand our world. And people would benefit from a deeper knowledge of this. It doesn’t have to be infinitely deep.

It’s not infinitely complicated. I don’t think that we need everybody to know exactly what the mathematical formulation of Bayes’ Rule is, or exactly how to calculate the statistical power you’d need in a randomized trial given the effect size and the standard deviation or whatever it is. We don’t need to know that. But what we do need to know are these intuitive messages, and that’s something that can be conveyed, and conveyed in ways that are real for people, that people can see. And I think that’s the key to getting there: having people see the ways that this is relevant to the stuff that they’re reading or seeing every day, out in the world.

Thanks for listening. If you like what you heard, subscribe to ParentData in your favorite podcast app, and rate and review the show in Apple Podcasts. You can subscribe to the whole newsletter for free at www.parentdata.org. Talk to you soon.