There are a few topics that come up in panic headlines all the time: “forever chemicals,” “EMF radiation,” microplastics. And today’s topic: food dyes. In all of these cases, the question is more or less the same. Are these dangerous? And if so, how dangerous? Is it worth upending our lives to avoid them? Is avoidance more of a “nice to do if possible”? Or is this just worth ignoring completely?
Today I’m going to dive into the topic of food dyes, which are back in the news. In particular, I’m going to talk through this paper, which is a report out of the Office of Environmental Health Hazard Assessment in California, published last year. This is a review article focusing on clinical trials of food dyes and looking at impacts on hyperactivity.
I want to issue a few caveats before we get going here.
First: As far as I know, there is no reason that food dyes need to be used. These products add no flavor or nutrients to food. They literally just make the food a color. Having said that, this article, from 12 years ago, when we had our last big freakout over food dyes, makes the case that coloring food improves our enjoyment of it. This is presumably true, although maybe humans are adaptable. Translucent gummy worms might be something we could get used to. This is all to say: I can see a case for reducing the use of these dyes, on the principle that they add nothing.
Second: The literature on food dyes is very large, and I’ll focus here on one part of it. The relationship with child behavior is one that comes up frequently, especially in parenting circles, but this isn’t comprehensive.
Third: The reason I am focusing on this particular report is because the report, in turn, is focused on clinical trials — generally, randomized trials that look at intake and behavior. There are plenty of observational studies that link consumption of foods with a lot of food dye with behavior, but these are hard to interpret given the other differences in these foods and the correlation with other features of kids and families. As you’ll see below, I’ve got some complaints about the methods in these randomized trials too, but at least we’re moving toward causality.
With that! Let’s go.
Big question: Is there a link between red dyes and hyperactivity?
This report (again, link here) focuses on the possible relationship between synthetic food dyes and hyperactivity/attention issues in children. Although the discussion you see most often online is about red dye, this report focuses on seven dyes — covering most of the food dyes that people are exposed to. This includes Blue #1, Blue #2, Green #3, Red #3, Red #40, Yellow #5, and Yellow #6.
What’s in the new paper?
This paper is a “review article,” meaning the authors are reviewing existing findings and making an attempt to summarize them. That is distinct, I will note, from a “meta-analysis,” in which the authors try to formally combine the estimates from multiple papers.
To do their review, the authors scour the medical literature, looking for papers that adhere to the following six criteria:
- Human study
- Clinical trial design
- Participants were given a known quantity of synthetic food dyes or a diet low in or without synthetic food dyes
- A neurobehavioral outcome related to hyperactivity or inattention was assessed
- The majority of participants were children 19 years of age or younger
- The effects of an active ingredient or elimination diet were compared with those of a placebo
In summary: The authors look for randomized studies that included children, involved some dietary manipulation along with a control group, and evaluated some measure of activity or attention.
Using these criteria, they identify 25 studies and then summarize the results in a giant table (Table 2, if you want to peruse it for yourself).
What do they find?
Of the 25 studies included, in 13 of them, there is a significant association between food dye and behavior (the other 12 did not find a significant association). A notable feature is that more recently published studies (after 1990) were more likely to have significant results. An important note: Nearly all of these studies are done on children with some evidence of hyperactivity, if not necessarily diagnosed ADHD.
Based on the studies they summarize, the authors argue that there is evidence to support a negative association between food dyes and behavior.
Let’s go a little deeper
Looking only at the data that is presented in this paper, it seems a stretch to go all in on their findings. In particular: only about half of the studies here show a significant effect. This suggests, at a minimum, that there might be some heterogeneity in the data and impacts.
The best way to go deeper in a study like this is to look at the individual papers.
One of the largest and most highly cited of the studies included is this paper from 2004, published in Archives of Disease in Childhood. I want to go through this carefully because it illustrates a lot of what is hard and confusing here.
In this paper, the authors recruited about 400 children with hyperactivity, of whom 277 completed the study. They first put all the children on an elimination diet, which took out any foods with artificial coloring or sodium benzoate. Then they assigned the children to either “treatment” or “placebo” for a week. The treatment group added a fruit juice with artificial coloring; the placebo group added a fruit juice without this product. The test was blind: people did not know which juice they were getting. Behavior was evaluated by parents, and also weekly in a clinic by a research group.
After the week of treatment, there was a weeklong break, and then kids switched between treatment and control. This means that for each child, there are four periods: an initial “avoidance” period; an “active challenge” period (where they got the juice with coloring); a washout period; and a “placebo challenge” period (where they got the placebo). In half of the children, the placebo challenge came first; in the other half, the active challenge came first.
Putting this together, what the authors can calculate is a change in behavior over three phases: (1) the period where parents knowingly eliminate dyes from the diet (“the avoidance period”); (2) the period where children are provided the juice drink with food dye (“the active challenge period”); and, (3) the period where children are provided the juice drink without food dye (“the placebo challenge period”).
The graph below shows the changes during these periods as evaluated by the objective researcher measures and by parents. Positive values represent positive changes: reductions in hyperactivity.
In the first bars, representing the objective evaluation, there are no changes in any period. These measures didn’t find evidence of changed hyperactivity either during the initial elimination or during either the active or placebo challenge periods.
When we look at parent reports, they are much more responsive. Parents perceived improved behavior during the elimination period, and worsened behavior during both the active and placebo challenge periods. The reported worsening is somewhat larger in the active period, but the active-versus-placebo gap is small compared with the change both challenge periods show relative to no change at all.
Both the placebo change and the change during the avoidance period suggest the presence of a strong placebo effect: when parents think their child is (or might be) getting food dyes, they report more hyperactivity. But it doesn’t seem to matter very much if they are actually getting the dyes.
In their calculations, the authors point to the higher change in parental reports during the active challenge to argue that food dyes had an impact on behavior. This paper is included as one of the examples of a statistically significant and large finding. And yet: a look at the graph leaves me with questions about that conclusion.
This paper gets at an issue that is rife in this literature: parent perceptions of changes in behavior (especially after dietary changes, which are not blinded) are often much stronger than evaluations by teachers or external observers. This is a problem. Even if your trial is randomized, if people know what arm they are in — if it is not blinded — it’s easy for the results to be driven by something other than true effects.
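To make the rater-bias problem concrete, here is a minimal simulation sketch (hypothetical numbers, not data from the study): if unblinded parents report a bit more hyperactivity whenever they believe their child might be getting dye, both challenge arms look worse than baseline even when the dye has zero true effect.

```python
# Hypothetical sketch of unblinded-rater bias; all numbers are made up.
import random

random.seed(0)

def simulate_trial(n_children=300, true_effect=0.0, rater_bias=0.5):
    """Return mean reported hyperactivity change in active vs. placebo weeks.

    true_effect: real behavioral change caused by the dye (0 = none).
    rater_bias:  extra hyperactivity an unblinded parent reports whenever
                 they suspect dye. Since parents cannot tell the two
                 challenge weeks apart, the bias hits BOTH arms.
    """
    active, placebo = [], []
    for _ in range(n_children):
        active.append(true_effect + rater_bias + random.gauss(0, 1))
        placebo.append(0.0 + rater_bias + random.gauss(0, 1))
    return sum(active) / n_children, sum(placebo) / n_children

a, p = simulate_trial()
print(f"active: {a:.2f}, placebo: {p:.2f}")
```

Both arms come out elevated by roughly the bias term, while the active-minus-placebo difference hovers near zero — which is qualitatively the pattern in the parent reports above.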
A second study in the “large and statistically significant” category is this 2007 paper in The Lancet. This paper has a similar design — it’s a randomized trial that is blinded and uses juice as the food dye delivery vehicle (it’s by the same authors as the other study). The authors test two different drinks (with varying coloring in them) in two age groups. In their main analysis, they find a marginally significant impact of Mix A on younger children and a marginally significant impact of Mix B on older children. They find more significant impacts if they limit to the children who drank more of the juice, but that’s a post-hoc test, which is not ideal from a method standpoint.
I am tempted to go through all 23 of the other studies, many of which have five or six participants, but I don’t think that would be productive. The overall picture in these studies is a sense of inconsistent, small, variable impacts. This observation is echoed in a formal meta-analysis (here) from 2015, which effectively concludes that the literature is really difficult to summarize. The mix of measurement methods, different food dyes, different treatments, lack of blinding, etc. creates a hot mess.
I will add on top of this the issue of publication bias. In most cases, it’s easier to publish a significant effect than an insignificant one. If we see 25 publications and 13 are significant, it may well be the case that there are more insignificant ones in the background.
When a literature is confusing, summarizing it is also confusing.
The paper we started with concludes that we need better protections and to remove food dyes; California is considering some versions of this in response.
My take is somewhat different. The paper identifies 25 studies. In half of them, there is no impact estimated. In the other half, most estimated impacts are very small. And even in the studies where there are significant and large impacts estimated, in a number of the cases (like the one I discussed extensively above) the results actually don’t show up consistently across measurements. Adding the issue of publication bias to this, I find the argument for a meaningful effect here implausible. (I will note this is also the conclusion an FDA panel came to in 2011.)
Bottom line: People in general do not need food dye, and I would not be sorry if we had less of it around, or relied less on chemical approaches to coloring. It would be okay with me if Flamin’ Hot Doritos were a slightly less vibrant red. But avoiding food dyes should not occupy significant brain space.