Another study tests the wrong approach to social media in medical publishing
Here’s the main problem with a new study published in the Journal of the American Heart Association: its authors measured the wrong thing with the wrong method.
In their new paper the researchers randomized new studies appearing in Circulation to receive social promotion on Twitter and Facebook or no special promotion. They excluded papers for which Circulation issued a press release. The bottom line: there was no significant difference in the primary endpoint, the number of page views at 30 days. These results were consistent with an earlier study published in Circulation in 2014 by the same authors.
This study strikes me as a good illustration of the streetlight effect. You probably know the old joke:
A policeman sees a drunk man searching for something under a streetlight and asks what the drunk has lost. He says he lost his keys and they both look under the streetlight together. After a few minutes the policeman asks if he is sure he lost them here, and the drunk replies, no, and that he lost them in the park. The policeman asks why he is searching here, and the drunk replies, “this is where the light is.”
The problem with both the new study and the old study is that the researchers were trying to achieve the wrong thing (increasing page views) using the wrong methods (mindlessly promoting random studies).
It’s easy to understand why the authors chose their methods. The number of page views is an objective, easily measured endpoint. It’s like the streetlight in the joke. And randomization of studies eliminates bias. So the paper has all the characteristics of a well-performed scientific study. Unfortunately, like many other similar “scientific” studies it doesn’t really lead to useful or important new information.
Some extremely important limitations were pointed out by Lee Aase, director of the Mayo Clinic’s Center for Social Media, in response to the earlier study by the same authors. He pointed out that the study wasn’t really a “social media” vs. “no social media” test, since Circulation has social media icons at the bottom of every article online. “Removing those buttons from below the ‘no promotion’ posts would have been a fairer test of social networking’s contribution. Facilitating sharing via these icons is itself a social strategy (and, I would submit, a good one.).”
Aase also astutely observed that the investigators handicapped themselves by eliminating papers that received press releases. The included studies, he writes, “are, in the judgment of professional staff, less newsworthy… Therefore, the studied articles were almost by definition the least interesting.”
Beyond these valid points about the weaknesses of the study’s methodology, I think the basic premise of the study is mistaken. The underlying assumption is that increasing page views is inherently valuable and a worthwhile objective. But does that goal stand up to scrutiny? The goal shouldn’t be to use social media as a microphone in order to gain the largest number of page views. Rather, a more appropriate goal is to reach the right audience and to develop a long-term relationship with that audience.
For a few important papers of general interest, increasing page views may well be appropriate. But this is not true for the vast majority of the thousands of papers published each year in a field like cardiology. And for a not insignificant number of papers, unfortunately, fewer page views may be the desirable outcome, though of course few editors are likely to acknowledge this truth.
Let’s be clear: social media can’t, and shouldn’t, be used to fix the fact that most studies are mediocre and not worthy of more attention. Using social media to promote most stories is just adding fuel to the death spiral of over-publication of bad or unnecessary or completely obvious science.
As I wrote in response to the earlier study, the antiquated view of science and medicine expressed by this study relegates social media to the role of amplifying the megaphone that editors already have. In their view, it is their role as traditional academic leaders to tell people in their profession what to read and what to think about what they read. In other words, this is just another example of the top-down model in which the editors sit at the top.
This is the wrong way to think about social media. Social media needs to focus on the relationship with the user. Purely promotional tweets can’t help build a community. They offer no opportunity or invitation to interact or respond. They take the “social” out of “social media” and leave most of their audience unresponsive at best, cynical and disaffected at worst.
I’ve said it before: social media in medicine is really tough. Trying to do it right is a bit like being the social director on a cruise for people with Asperger’s. But here’s the twist: it’s easy to be the social director on a cruise for sorority sisters and fraternity brothers, but you’re not really going to bring anything to the party that they won’t bring themselves. By contrast, those Asperger’s cruisers, just like many doctors, really need help making good use of social media. But it’s not easy to produce useful, interesting, and engaging content for them.
I just want to note that page views were about 10% higher in the social media group. It’s possible that social media increases page views by a small amount.
I don’t disagree with your overall point that page views (which have been associated with citations) are not the only or main reason to use social media.
Yep, I’m sure it had a small effect.
Great analogy, Larry. I hadn’t seen the updated study; I think you’ve analyzed the issues well. I also appreciate you calling out my observations on the previous study. It’s interesting that at least this time the study involved some boosting of posts on Facebook and multiple tweets.