You’re Bombarded With Parenting ‘Studies’ — Here’s How To Read Them
Another day, another fear-mongering study about how you’re permanently doing short and long … and medium-term damage to your kid. (Listen, nobody’s hands are clean here.) It’s easy to lose sight of the facts when the information being presented pertains to the thing you care most about in the world. Your kid. So, since you shouldn’t live every day in fear of the next child development study, why not learn to understand how these things work?
Emily Willingham, Ph.D., is not only an outstanding science writer (just check out that h-index) for publications like Forbes, Slate, and Discover, but she’s also a parent who reads a lot of parenting studies. In The Informed Parent, a book she co-authored with Tara Haelle, she scienced the shit out of all those wonderfully divisive parenting topics, from SIDS to circumcision and bed sharing to Bisphenol-A.
Here’s how you can separate the truth from those reports that just have a whiff of truthiness.
How To Look At Evidence Like A Scientist
“In my mind there’s a hierarchy in studies about how much the evidence should affect my thinking,” says Willingham. “Start with a case study, which is usually when just one person generates an idea. Then you go up and observe a bunch of people who all have something in common and draw some conclusions. But that isn’t very compelling, either.”
Willingham says that the top studies use randomized controlled trials, in which big populations experience one variable or another. Researchers then compare the outcomes between those two groups of people. “If [the evidence] is there, what does it do? If it’s not, what happens?” Essentially, you want to compare apples to a ton of other apples.
Don’t Get Confounded
There’s a lot of specious reasoning out there. Like that rock that keeps away tigers. “Data has this way of making things look related when they’re not,” says Willingham. “We see science as it relates to human beings, but it’s really a mathematical association. We want to see a pattern and make connections across patterns; we get deceived. You also want to look for words like ‘adjusted for confounders.’”
What’s a confounding variable? A confounder is a third factor that makes it seem like there’s an association between the two things being studied when there isn’t one. It’s one of the reasons a subject like bed sharing is so polarizing. A study might show infant deaths occur among both sharers and non-sharers. But Willingham says that those non-sharers “are doing it out of desperation or by accident. The message is so strong not to bed share, they end up making it more dangerous by co-sleeping on a sofa or a chair.” And researchers aren’t asking if they were sleeping in a La-Z-Boy with their 3-month-old. “Whenever you see a study and it says things are related, it’s very unlikely the researchers established cause. It’s a red flag, and it doesn’t mean that it’s true,” she says.
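To see how a confounder works, here’s a minimal sketch (not from the book, just a hypothetical simulation): a hidden third factor drives both the “exposure” and the “outcome,” so the two look related even though neither causes the other. Subtracting out the confounder — what a study means by “adjusted for confounders” — makes the association vanish.

```python
import random

random.seed(0)

# Hypothetical setup: confounder Z influences both X (exposure) and
# Y (outcome). X has no effect on Y at all.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]  # exposure, driven by Z
y = [zi + random.gauss(0, 1) for zi in z]  # outcome, also driven by Z

def corr(a, b):
    """Pearson correlation of two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

raw = corr(x, y)  # X and Y look clearly "related"

# Adjust for the confounder. In this toy simulation we know Z's exact
# effect, so we can simply subtract it and correlate what's left over.
rx = [xi - zi for xi, zi in zip(x, z)]
ry = [yi - zi for yi, zi in zip(y, z)]
adjusted = corr(rx, ry)  # the association essentially disappears
```

Real studies can’t just subtract the confounder — they estimate its effect with regression or matching — but the logic is the same: the raw correlation is real math and still tells you nothing about cause.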
Why Size Matters
“Sample size is how many tests you’re running. [Researchers] do pre-analysis to figure out how many people they’d need to include. What they should be doing is figuring out how many data points they need. The more data points the better, and the more reliable the results will be,” says Willingham.
One of the most famous examples is Andrew Wakefield’s study that linked the MMR vaccine to autism. There were plenty of issues with the study (it violated basic research ethics rules and was funded by lawyers suing vaccine makers on behalf of families), but there was also the fact that the entire research group was just 12 kids.
Willingham says that if you’re looking at an observational study, take it with a huge grain of scientific salt. If you’re looking at a randomized trial of thousands, you can take a smaller pinch. “When you look at studies in the news media, you can’t control or create sufficient population sizes. You have a lot of studies that don’t have a placebo control. You should at least match studies for age and sex.”
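Why do 12 kids tell you so much less than 1,200? A quick hypothetical sketch (my numbers, not the article’s): rerun the same imaginary study thousands of times at each size and watch how much the estimated rate bounces around.

```python
import random

random.seed(42)

TRUE_RATE = 0.30  # assumed "true" rate of some outcome in the population

def spread(sample_size, trials=2000):
    """Std. deviation of the estimated rate across repeated studies."""
    estimates = []
    for _ in range(trials):
        hits = sum(random.random() < TRUE_RATE for _ in range(sample_size))
        estimates.append(hits / sample_size)
    mean = sum(estimates) / len(estimates)
    return (sum((e - mean) ** 2 for e in estimates) / len(estimates)) ** 0.5

small = spread(12)    # a Wakefield-sized group
large = spread(1200)  # a group 100 times bigger
```

With 12 subjects, the estimate swings by more than 10 percentage points from study to study; with 1,200, the swing shrinks by a factor of 10. A tiny study can land almost anywhere and still get published.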
The Problem With Self-Reporting
You can’t remember what you had for dinner 3 nights ago (it was probably chicken — it’s always goddamn chicken) so how do you expect those test subjects in that caffeine and pregnancy study to remember how much coffee they were drinking the past 9 months? Self-reported studies rely on memory, which, in human beings, is pretty fallible. Even if the study is done well, the accuracy is limited by participants’ own recall bias, or the way they wished things went down.
Self-reporting also falls prey to cognitive bias. It’s now generally accepted that not all screens are created equal, but why did science ease up on its screen-time recommendations? Mostly through parents filling out surveys, some more guiltily than others. If a scientist asks you to take an inventory of how much your kid is glued to the screen, who isn’t going to lowball that answer?
How Emotion Influences Science
“I practice agnosticism when it comes to data,” says Willingham, “and I like to follow where data goes.” Of course, as anyone who has spent some time on an anti-vaxxer blog or attended a Trump rally knows, data isn’t the decider when it comes to what people believe.
“The big persuaders are the strong nodes in their social network that have influence over what people share and talk about. It’s the on-the-ground social network, rather than ‘Hey, look at all this great data! Don’t you believe us now?’ I think a lot of people who consider themselves science communicators grapple with how to present things that attract and inform accurately. It’s always a struggle.”