Several weeks ago, I published an interview with Jacqueline Nesi on how to think about social media with kids. We covered a wide range of what we know about social media’s effects on our kids, why it’s hard to figure out, and how parents might navigate these waters.
Today I wanted to follow up with a slightly deeper dive into one new research paper, about college students, social media, and mental health. This age group skews somewhat older than the children of most of my newsletter readers, but the paper is (in my view) among the most compelling causal looks at the impact of social media on kids. It’s also a good example of getting causal estimates out of non-randomized data. That part’s for you data nerds out there.
The paper in question can be seen and downloaded here. The title, which also tells you the topic, is “Social Media and Mental Health.” The fundamental question of the paper is to what extent social media exposure is detrimental to mental health. The authors are focused on a population of college students.
It is worth pausing to be clear about why this is a hard question to answer. Imagine the most direct approach: collect data on college students, including information on their use of social media and their mental health. Evaluate the relationship between the two variables. You’d run immediately into a problem of statistical confounding. The kids who use social media are different in other ways from those who do not. In addition, they may be using social media because they are depressed or anxious, not the other way around. For both of these reasons, a simple correlation isn’t going to help answer this question.
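For the data nerds, here is a tiny simulated example of the problem (the numbers are made up, not from any real data). If underlying distress pushes students both toward heavier social media use and toward worse measured mental health, the two will be correlated even when social media has no effect at all.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical setup: underlying distress drives BOTH heavier social media
# use and worse measured mental health, while social media itself has zero
# effect in this simulation.
distress = rng.normal(size=n)                           # unobserved confounder
social_media_use = 0.5 * distress + rng.normal(size=n)  # self-selection into use
poor_mental_health = distress + rng.normal(size=n)      # no causal effect of use

# The naive correlation is clearly positive anyway (roughly 0.3).
print(np.corrcoef(social_media_use, poor_mental_health)[0, 1])
```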
Fundamentally, to get a causal answer here we need to exploit some random variation. This idea seems simple, but it underlies virtually all approaches to learning causal effects from observational data. If there is no randomness in the treatment — in this case, in the exposure to social media — we will not be able to estimate causal effects.
One way to get randomness is, of course, to explicitly randomize. And, indeed, there are papers that have tried to randomly encourage people to take social media breaks. This paper takes a different approach: it tries to exploit randomness in when people were initially exposed to a form of social media. Imagine that, due to some kind of randomness, social media was accessible to some college students sooner than others. If you know when students got access, and the timing varied, you can look at their mental health before and after, and compare across places with different timing.
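To make that concrete, here is a toy version of the calculation, again with invented numbers: take the change in a “poor mental health” index at campuses that got access early, and subtract the change over the same period at campuses that had not yet gotten access. That difference-in-differences is the estimate of the effect of access.

```python
import pandas as pd

# Toy difference-in-differences calculation; all numbers are invented.
# "after" is a period in which the early campuses have access and the
# late campuses do not yet.
df = pd.DataFrame({
    "group":   ["early", "early", "late", "late"],
    "period":  ["before", "after", "before", "after"],
    "poor_mh": [0.10, 0.22, 0.11, 0.13],
})

means = df.pivot(index="group", columns="period", values="poor_mh")
change_early = means.loc["early", "after"] - means.loc["early", "before"]  # 0.12
change_late  = means.loc["late",  "after"] - means.loc["late",  "before"]  # 0.02
print(change_early - change_late)  # difference-in-differences estimate, about 0.10
```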
The idea is simple. The devil is in the details.
In this paper, the authors exploit the timing of the Facebook rollout, between 2004 and 2006. Facebook was created at Harvard in 2004, and in 2006 was made fully public. Between these dates, it was rolled out in a staggered way across college campuses. When Facebook came to a campus, students jumped on it immediately. The authors combine information on when Facebook arrived at different campuses with survey data on student mental health. The idea is, as above, to look at whether mental health changes after a college gets access to Facebook.
A key issue with this empirical approach (which, in broad strokes, is a very common approach in economics) is the fact that Facebook didn’t decide the timing of its rollout at random. It initially targeted more-selective schools (the authors of the paper point to the desire to maintain a “flavor of exclusivity” as a reason for this).
That lack of randomness would be a significant problem if the plan here were to compare mental health between early and late adopters at a single point in time. You’d worry that the groups were different in terms of mental health, even putting aside the role of Facebook. However: the authors have access to a long time series of data. This allows them to do several things. First, they can look at changes over time rather than just levels. And second, they can compare the early to the late adopters on their pre-period levels of mental health. It isn’t obvious whether students at more-selective colleges have better or worse mental health overall (in fact, there are limited differences in the early period). If the groups look similar before — and especially if they are trending similarly before — we’ll be more convinced that changes after could plausibly be attributed to Facebook.
The primary results in this paper can be seen in an event study graph; I talked through how these work in this earlier post if you want a deeper dive. The basic idea, though, is to take each college that got access to Facebook and mark the first semester of access as time 0. Note that this “time 0” will be different for different colleges, depending on when their access happened. There is a “time 1” (the semester after), “time 2” (two semesters after), “time –1” (one semester before), and so on. The authors then compare their measure of poor mental health over time around this “time 0.” (Because each college adopted at a different time, this approach allows them to adjust for overall time trends and isolate the impact of the Facebook introduction.)
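For those who want to see the machinery, below is a rough sketch of this kind of event-study regression, run on simulated data with invented numbers. It is not the paper’s exact specification; it just shows the general structure, with dummies for each semester relative to “time 0” plus campus and semester fixed effects, and the semester just before access left out as the reference point.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated panel, all numbers invented: 40 campuses observed for 12
# semesters, with Facebook arriving at a campus-specific semester.
adoption = {c: int(rng.integers(4, 9)) for c in range(40)}

rows = []
for campus, arrives in adoption.items():
    campus_level = rng.normal()                      # campus fixed effect
    for t in range(12):
        event_time = t - arrives                     # semesters since access
        effect = 0.1 if event_time >= 0 else 0.0     # assumed true effect
        rows.append({
            "campus": campus,
            "semester": t,
            "event_time": max(-4, min(event_time, 4)),  # bin the endpoints
            "poor_mh": campus_level + 0.02 * t + effect + rng.normal(scale=0.5),
        })
df = pd.DataFrame(rows)

# Event-study regression: campus and semester fixed effects, plus a dummy
# for each event time, with the semester just before access (-1) omitted
# as the reference period.
fit = smf.ols(
    "poor_mh ~ C(event_time, Treatment(reference=-1)) + C(campus) + C(semester)",
    data=df,
).fit()
print(fit.params.filter(like="event_time"))
```

In a sketch like this, the pre-period coefficients sitting near zero is the “no trend beforehand” check, and the post-period coefficients are the estimated effects of access.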
The figure below, which I pulled from the paper, shows the impact of Facebook introduction on mental health. It’s not good! The authors find a significant increase in students’ “poor mental health” index after Facebook comes in. Notably, they do not see a trend beforehand. The worsening in mental health persists for more or less as far as they look (five semesters).
These effects are in “standard deviations,” which makes them a bit difficult to interpret. The authors provide some benchmarks. For example: these effects are about 20% as large as the effect of losing your job (estimated in other studies).
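As a purely illustrative calculation (these numbers are invented, not the paper’s): if losing a job moved the index by, say, 0.4 standard deviations, an effect 20% as large would be about 0.08 standard deviations.

```python
# Purely illustrative, not the paper's numbers: translating the
# "20% as large as losing your job" benchmark into standard deviation units.
job_loss_effect_sd = 0.40   # hypothetical effect of job loss on the index
facebook_effect_sd = 0.20 * job_loss_effect_sd
print(round(facebook_effect_sd, 2))  # 0.08 standard deviations
```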
The authors also evaluate the impact across various individual metrics that make up their overall index. I included these below. Facebook access seems to worsen mental health on more or less every dimension measured.
This paper contains an enormous number of robustness checks, in which the authors employ various econometric techniques to show that their results aren’t likely to be driven by non-causal explanations. Suffice it to say, I think they are quite convincing on the overall finding that the introduction of Facebook worsened mental health.
The paper is less able to be convincing on the mechanisms for why this happens. The authors argue that it’s a result of unfavorable social comparisons, and they provide some support for this: the groups they think are most likely to be susceptible to those comparisons (e.g. students who live off campus; those who are not in sororities or fraternities) appear to be more affected. This seems plausible, but I think even the authors would say this evidence isn’t as compelling as the headline result.
Bottom line: This paper shows convincing evidence that Facebook access at colleges worsened the mental health of students. What this means for how we should think about social media now, what it means about (say) whether your 12-year-old should get a phone — these questions remain at least somewhat open. It makes clear, though, that these are questions we should be trying hard to answer.