A new study provides insight into how science journalists evaluate psychological research

A recent study sheds light on how science journalists evaluate psychological research, offering insight into the factors that matter most to them in determining credibility and newsworthiness. The study, published in Advances in Methods and Practices in Psychological Science, found that one factor, sample size, significantly outweighed the others in influencing how journalists evaluated research.

Science journalists play a key role in translating complex scientific findings to the general public. But how do they decide which studies to report and which to ignore? To better understand this process, the researchers conducted a survey of science journalists examining the factors that influence their judgments of trustworthiness and newsworthiness of research findings.

“I’m a metascientist, and during my PhD I wanted to explore different parts of how science is done and communicated, including looking at the stakeholders in this process who don’t always get much attention,” explained study author Julia Bottesini, an independent researcher. “Science journalists play a really important role in the science communication ecosystem, and I wanted to better understand what is important to them and how they make decisions about what scientific findings to report.”

Bottesini and her colleagues recruited a diverse group of 181 science journalists, mostly women (76.8%), from a variety of news organizations spanning online, print, audio, and video outlets. Their educational backgrounds varied, with some holding degrees in journalism and others having studied the natural or social sciences at the undergraduate or graduate level.

To examine the factors that influence science journalists’ evaluations of research, the study presented participants with eight fictitious psychological research vignettes to read and rate. These vignettes were strategically designed to manipulate four key variables: sample size, sample representativeness, p-values (a statistical measure), and university prestige.

The sample size could be either small (50-89 participants) or large (500-1,000 participants). The sample could be either a convenience sample (e.g., local volunteers) or a more representative sample of the US (e.g., people drawn from a nationwide sample). The p-value could be either relatively high (between .05 and .03) or low (between .005 and .0001). The university could be either more prestigious (e.g., “Yale University”) or less prestigious (e.g., “East Carolina University”).
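For readers who want to picture the design: crossing these four two-level variables yields 16 possible combinations, and each participant in the study read eight such vignettes. The short Python sketch below is illustrative only, not taken from the study’s materials, and simply enumerates that factorial space (the study’s exact counterbalancing scheme is not described here).

```python
# Illustrative sketch (not the authors' materials or code): enumerating the
# 2 x 2 x 2 x 2 factorial space implied by the four manipulated variables.
from itertools import product

sample_size = ["small (50-89 participants)", "large (500-1,000 participants)"]
sample_type = ["convenience sample", "more representative US sample"]
p_value = ["high (.05 to .03)", "low (.005 to .0001)"]
prestige = ["more prestigious university", "less prestigious university"]

conditions = list(product(sample_size, sample_type, p_value, prestige))
print(len(conditions))  # 16 possible vignette conditions
for condition in conditions[:3]:  # preview a few combinations
    print(condition)
```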

Participants were also asked three open-ended questions about how they typically evaluate research findings, how they evaluated the findings presented to them, and whether they held any assumptions about the manipulated variables.

Among the four manipulated variables, sample size had the largest effect on journalists’ ratings. Studies with larger sample sizes were consistently perceived as more trustworthy and newsworthy. This finding is consistent with scientific reasoning: larger sample sizes generally provide more reliable evidence.
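As a back-of-the-envelope illustration of this point (not an analysis from the study itself), the standard error of a sample mean shrinks with the square root of the sample size:

$$\mathrm{SE}(\bar{x}) = \frac{\sigma}{\sqrt{n}}$$

Holding the population standard deviation $\sigma$ constant, moving from 50 to 500 participants reduces this uncertainty by a factor of about $\sqrt{10} \approx 3.2$.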

Surprisingly, Bottesini and her colleagues found that the representativeness of the sample had minimal influence on the journalists’ judgments. Whether a study used a representative or a convenience sample did not significantly affect their perceptions of its trustworthiness or newsworthiness.

The exact p-value of a study’s findings also had a limited impact on journalists’ evaluations. Results with p-values close to the generally accepted significance threshold of .05 were treated similarly to results with highly significant p-values. However, in their open-ended responses, many journalists cited the presence of statistical significance as an important factor in judging the reliability of a study.

Contrary to expectations, the prestige of the institution where the research was conducted did not significantly affect science journalists’ perceptions of credibility or newsworthiness. This finding challenges the assumption that prestigious institutions automatically attract more attention from journalists.

“I definitely thought that findings coming from prestigious universities would have at least some influence on how newsworthy and/or credible they are perceived to be, and that was not the case at all in this study, which is good news,” Bottesini told PsyPost. “The qualitative responses suggest that other prestige factors may play a role, such as the prestige of the journal in which the discovery was published.”

For example, when assessing the credibility of a scientific discovery, one science journalist responded that a key factor was “the journal itself where the findings were published and its impact factor.”

Participants’ open-ended responses also indicated that a variety of other factors, including plausibility, exaggeration of findings, conflicts of interest, and outside expert opinions, play a role in journalists’ assessments of research findings. Many journalists also said they view experimental research as more trustworthy than correlational research.

“Science journalists (at least those in our study) already use a wide range of strategies to verify the scientific information they convey to the public, which is great,” Bottesini said. “If nothing else, I hope this study can serve as a starting point for others to create training materials and tools to help science journalists be even more effective in their work.”

Although this study provides valuable insight into how science journalists evaluate research, it has limitations. For example, the findings are specific to a particular group of science journalists and may not generalize to all professionals in the field. Additionally, the study focused on psychological research, and the results may differ for other scientific fields.

Future research could delve into the factors that influence science journalists’ decisions, including the influence of researcher reputation, journal reputation, or the public importance of the topic. Additionally, expanding the study to include a more diverse group of science journalists and examining how their evaluations align with public perceptions of research may offer a more holistic understanding of science communication.

“I would say that this study is full of caveats that reflect how complex this topic is,” Bottesini said. “A study can only scratch the surface in terms of understanding it, and that’s what I feel our study did. But there are many questions that still need to be addressed, and I hope that this study can serve as a starting point for other researchers to explore this topic.”

The study “How do science journalists evaluate psychological research?” was authored by Julia G. Bottesini, Christie Aschwanden, Mijke Rhemtulla, and Simine Vazire.
