A Disease of Scienceyness
In February 2015, the cartoonist Scott Adams (creator of Dilbert) published a blog post titled “Science’s Biggest Fail.” In it, Adams complains that “It is hard to trust science” because “science” has made so many false claims over the years. “If you kick me in the balls for 20 years,” he writes, “how do you expect me to close my eyes and trust you?”
Who, exactly, does Adams think has been kicking him in the balls for 20 years?
Scientists themselves? Science teachers? Pop-science journalists? He downplays the roles of all these parties in his article, yet his focus returns again and again to “science” as the villain that’s wronged him by feeding him misleading claims.
Around the middle of the article, Adams finally asks a question that shows where his problem really originates. It’s a problem many of us share: “How is a common citizen supposed to know when science is ‘done’ and when it is halfway to done — which is the same as being wrong?”
How indeed? In the scientific journal papers I read, I rarely (if ever) encounter a scientist who claims anything like “this topic is now closed.” For one thing, such bald-faced egotism would be career suicide in the scientific community — and for another, a claim like that would fly in the face of the whole spirit of scientific work, which is founded on the ongoing aim of (as Adams himself writes), “being more right over time and fixing what it got wrong.”
If scientists themselves aren’t making claims like this, then where are they coming from? Who’s selling people like Adams the ideas that all carbs are inherently dangerous, that everyone should drink 10 liters of water a day, and all manner of other unproven nonsense, as if they were scientifically proven?
The root of the problem
Twenty years ago, we could’ve just blamed pop-science journalists and left it at that. And while overblown science headlines are still a major aspect of the problem, many of your friends and relatives — and most likely, even you — are now implicated in this onslaught of misinformation.
The worst part is, the vast majority of these people genuinely believe they’re performing a public service by resharing inaccurate “science” stories. But in truth, these people are doing a disservice not only to the people who read their feeds, but also to the very same hard-working scientists they believe they’re helping.
Back in 2005, Stephen Colbert coined the word “truthiness” to describe an intuitive feeling that an idea just “feels right,” even if there’s no concrete evidence for that idea. So when I talk about inaccurate, overblown sciencey headlines that get shared and reshared by people who have no idea whether they’re actually true or not, I’m going to use the word “scienceyness.”
The trait that distinguishes scienceyness from actual science is that it’s got nothing to do with the scientific method at all.
Here’s the crux of this whole thing:
Sciencey headlines are pre-packaged cultural tokens that can be shared and reshared without any investment in analysis or critical thought — as if they were sports scores or fashion photos or poetry quotes — to reinforce one’s aesthetic self-identification as a “science lover.” One’s actual interest doesn’t have to extend beyond the headline itself.
And that, right there, is the difference between a love of science, and a love of scienceyness.
The big difference
Let me get one thing out of the way right now: I would love nothing more than for hard-nosed scientific research to become a pop-culture phenomenon.
If our schools were packed to the rafters with hard-working young biologists and astronomers, and if our celebrities and science shows taught viewers that repeatable findings and source-checking are sexy as hell, I’d be thrilled to belong to that community, and I’d do everything in my power to help it keep growing.
But as we all know, scienceyness fandom looks nothing like this at all.
“So what?” you might ask. “At least people are supporting science on some level.” Well, unfortunately, it’s not that simple. The argument in favor of scienceyness fandom hinges on the assumption that uncritical hyping of sciencey content does scientific progress more good than harm. But is that really the case? Let’s take a look at some examples.
A history of harm
In 2013, the European Union announced the €1.2 billion Human Brain Project with a press campaign claiming that the project would eventually produce a computer simulation of a whole human brain. Bloggers and Facebook posters got whipped into such a frenzy that they shared the story, and re-shared it, and discussed it at length, as if the project’s final goal was already within sight.
A year later, hundreds of researchers were threatening to boycott the project, its funding got cut, and the entire concept of big neuroscience projects took a big blow in the public eye. Many researchers outright refused to join the project, pointing out that its goals were hugely ambitious and also pretty vague. The project lost a huge amount of credibility — and a huge amount of grant funding — because people got overly excited about a story they hadn’t bothered to check out.
And that’s just one example. A very similar thing happened to artificial intelligence research in 1973, when the mathematician James Lighthill published a scathing report, commissioned by the British Science Research Council, claiming that A.I. research wasn’t living up to expectations. Which expectations were those, exactly? As it turned out, mostly expectations created by overexcited media advocates — in other words, the exact same kind of fluff that Facebook loves today.
Scienceyness blurs the distinction between accurate sources and inaccurate ones. It creates inflated expectations that actually damage the reputation of scientific fields over the long term.
I’m not just talking in the abstract here — as I explained just above, this has actually happened, repeatedly. Real people have lost their jobs because of irresponsible scienceyness promotion. Entire projects have lost funding, and have had to pack up important work, because laypeople unknowingly spread false information about what they were working on.
In short, when you share science headlines without taking a few minutes to check what they actually mean, you are putting the reputations and livelihoods of your favorite scientists in danger.
Is scienceyness better than nothing?
One of the main counter-arguments I hear from scienceyness advocates is, “Not everyone has the time or expertise to read original journal papers; isn’t it better that we support from the sidelines than not at all?”
As far as the first half of that sentence goes, they’re absolutely right — even PhDs can’t analyze papers outside their own disciplines in any truly expert way. In the scientific community, experts often take each other’s word. But that doesn’t mean they take just anyone’s word.
Here’s an analogy for you: If you’re a science advocate, you probably don’t take religious texts at face value — even if billions of other people do. So why would you take a headline from an unknown blogger at face value? Why would you share it with all your Facebook friends without checking its backstory first?
I’m not saying you have to read the original journal paper. I’m not saying you have to understand all the technical jargon. I’m saying that you need to take ten goddamn seconds to run a Google search and determine whether the story is legitimate, untrue, or just plain blown out of all proportion to the actual evidence.
Here’s the litmus test: Before you click that “Share” button, ask yourself why you’re sharing the story.
Is it because you’re genuinely interested in the project? Or is it just because you think the headline and the picture look cool?
There’s nothing inherently wrong with the second motivation — as long as it’s backed up by the first one.