Look at any newspaper, magazine, or website in a given week and you are likely to encounter some coverage of new studies on diet. Fat might be good for you this week, and coffee is bad. Or maybe, fat is bad, and coffee is good. If you are a connoisseur of such articles—say, someone like me who would like to make “evidence-based choices” about health—the ping-ponging of studies and coverage will not have escaped your notice. In fact, you can also find articles—in this publication, among others—pointing out what is already pretty apparent: It’s hard, perhaps even impossible, to think all of these studies are simultaneously correct.
It is easy, especially as someone who is on the research side of things most of the time, to fault the media for sensational coverage of individual studies that fails to consider the broader context. And certainly there is a healthy dose of that all around us (for example, why write a headline like “Do Tomatoes Cause Heart Attacks?” when the answer is “no”?). But I don’t think this is the main problem, and at the very least, it’s not the only one. Instead, I would argue the main problem is that the studies that underlie this reporting are themselves subject to significant bias.
Nowhere is this more true than in studies of diet. Most studies of diet work in a similar way: Researchers take a population of people, ask them questions about their diet (ideally detailed and well-designed ones), and then relate their reported diet choices to outcomes like weight or cardiovascular health. But these studies have an obvious problem, namely, that people do not choose their diet randomly. When you look at one particular food in the data and try to understand its impact, it’s impossible to zero in on the impact of just that food—you’re also seeing the impact of all of the other features that go into determining what food you eat.
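This confounding problem can be made concrete with a toy simulation. The sketch below is purely illustrative (the variable names and numbers are invented, not drawn from any real study): a hidden trait like health-consciousness drives both whether someone eats a particular food and how healthy they are, while the food itself has zero causal effect. A naive comparison of eaters versus non-eaters still turns up a difference.

```python
import random

random.seed(0)

# Hypothetical setup: "health_conscious" is the hidden confounder. The food
# has NO causal effect on the outcome in this simulation.
n = 100_000
eats_food = []
outcome = []
for _ in range(n):
    health_conscious = random.random() < 0.5
    # Health-conscious people are much more likely to eat the food...
    eats = random.random() < (0.8 if health_conscious else 0.2)
    # ...and have better health outcomes for reasons unrelated to the food.
    score = 70 + (10 if health_conscious else 0) + random.gauss(0, 5)
    eats_food.append(eats)
    outcome.append(score)

n_eaters = sum(eats_food)
mean_eaters = sum(o for e, o in zip(eats_food, outcome) if e) / n_eaters
mean_non = sum(o for e, o in zip(eats_food, outcome) if not e) / (n - n_eaters)

# Eaters look meaningfully healthier, despite the food doing nothing.
print(round(mean_eaters - mean_non, 1))
```

A study run on this simulated population would conclude the food improves health, when all it is really measuring is who chooses to eat it. That is the bias at the heart of observational diet research.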