How do I know it’s OK? Swimming through the science communications minefield
Peanut butter for non-allergic babies may reduce later allergies: http://t.co/6ZWc1ItSnD #BehindTheHeadlines pic.twitter.com/WOJzabsYWS
— NHS Choices (@NHSChoices) February 25, 2015
How do I know it’s OK? This often goes through my mind when I’m writing about science. The mere fact that I’m a scientist doesn’t give me authority to write about science: my own research field was unbelievably narrow, and my PhD represents only a tiny fraction of even that field. As with all PhDs, for a short time, yes, I was the world expert in my tiny piece of the science kingdom, but no, that still does not make me an authority on science in general.
What it does mean is that 99.99% of the science writing I do needs research. It was relatively easy researching for my PhD; I knew the subject matter and had unlimited academic access to primary research journals. Nowadays my access is more limited, since most peer-reviewed publications are behind a paywall. I rely on open-access information readily available on the Internet and the journals that my college library holds (yes, lifelong learning—that’s me!). In addition, since the topic is usually outside my area of expertise, I am often hunting through resources outside my science comfort zone.
Which brings me to my major dilemma: how do I know if what I’m reading is reliable? How can I tell if the science is good enough to share on Talk Science to Me’s social media channels? Is the experimental design robust? Are the inferences supported? Does the news come from a genuine source? Am I propagating rubbish?
Should You Be Filling Your Tires With Mayonnaise Instead Of Air?
— Clickbait Headlines (@clickybait) December 28, 2014
There are a lot of clickbait-worthy health and science headlines floating around out there, easily spread in just a few clicks. Everyone wants to know the secret to curing cancer or prolonging life, or whether it’s all just down to bad luck—such news is viral. But why?
In December, researchers published an investigation into the source of clickbait: are scientists themselves promising the Moon in research papers? Are overzealous academic public relations departments writing up fantastical press releases? Or are journalists themselves to blame, rushed for deadlines and churning out eye-catching headlines?
Their conclusion? Compared with the primary sources, more than a third of university-issued press releases made exaggerated claims about the science they reported: they offered advice more explicit than the findings supported, implied causation from correlational studies, and inferred that results from animal studies applied to humans. And with the hype come the clickbait headlines and the social media whirlwind. (Note: it’s not always the press release that’s at fault…)
Longer sleep linked to stroke – see research and reporting here: http://t.co/ap5LPNcrX9 #BehindTheHeadlines pic.twitter.com/tQfCS9H3jS
— NHS Choices (@NHSChoices) February 26, 2015
In a perfect world, anyone creating or sharing stories about science would go back to the primary source and investigate for themselves. Using tips and tools like Carl Sagan’s Baloney Detection Kit (discussed here by Maria Popova on Brain Pickings), or referring to some of the tools mentioned in earlier posts here and here, also works.
For me, the answer is to read more, read critically, and read with an eye to quality in science writing. Checking in with trusted sources, throwing out the occasional “What does this mean?” on social media, and reading as much commentary as I can find slowly builds confidence. But it does take more time than most of us allow before RT-ing a juicy tweet.
Hmmm—maybe I should be tuning in to the UK’s NHS Behind the Headlines Twitter account for some truly excellent takedowns of clickbait headlines before I retweet.