ABOVE: STRANGE coauthor Mike Webster often works with species of shoaling fish that he captures using traps. To keep from biasing his samples, Webster uses different types of traps and nets to be sure he isn’t only capturing the boldest individuals.

The world’s oldest behavioral biology journal, Ethology, announced on January 4 that it will be adopting a new experimental design and data reporting framework called STRANGE in an effort to address biases in animal behavior and cognition research. Moving forward, authors submitting manuscripts to the journal will need to evaluate their study animals for possible biases—factors such as genetics, personality differences, or prior experiences in research—and discuss how those facets can influence the study’s findings.

“Everybody knows that there are certain sampling biases that can affect the reproducibility and the generalizability of research findings in animal behavior, but quite often these are not declared,”...

Roughly a decade ago, the field of human psychology grappled with the recognition that most studies favored Western, educated, industrialized, rich, and democratic (WEIRD) populations. Individuals living in these societies make up roughly 80 percent of research participants, but account for only 12 percent of the world’s population, according to the American Psychological Association. The acknowledgement of a bias toward WEIRD subjects spurred a broader effort to diversify study participants and prompted new discussions around how scientists report, reproduce, and generalize their findings.

STRANGE, in combination with other existing frameworks, is now looking to replicate that success in the fields of animal behavior and animal cognition by asking researchers to consider the sources and implications of bias in their studies. 

See “UK Group Tackles Reproducibility in Research”

Rutz and his St. Andrews colleague Michael Webster first introduced STRANGE in a Nature commentary in June 2020, laying out several possible sources of bias that could influence how animals behave during experiments, including their social background (S); trappability and self-selection (T); rearing history (R); acclimation and habituation (A); natural changes in responsiveness (N); genetic makeup (G); and experience (E). 

Wild-caught animals, for example, are known to behave differently from individuals of the same species that are bred and raised in a laboratory. And traps used to capture animals in the field may bias a sample toward those individuals bold enough to approach them. Among animals raised in captivity, prior exposure to research tasks can change how they respond to subsequent experiments.

In future manuscripts submitted to Ethology, researchers will be asked to add a short section to their methods or a supplementary table detailing the qualitative and quantitative “STRANGEness” of their study cohort as well as a brief section in the discussion that provides appropriate context for their findings vis-à-vis potential biases in experimental design. The conclusions drawn from any single study, according to STRANGE guidance, should be closely tied to the population of animals included in the research and not extrapolated to other populations or taxa. 

Animal behavior, like many scientific fields, suffers from a “reproducibility crisis” that makes it difficult to assess how reliable or universal findings are. In addition, animal studies sometimes generalize their findings from only a few individuals. By including more detail, STRANGE could make it easier to replicate experiments, says Webster. “Undeclared STRANGE effects may go some way to explaining why some experiments replicate and others do not.”

In addition to Ethology, Rutz says, two other journals—a niche animal behavior publication and a larger, interdisciplinary journal—are currently modifying their submission guidelines to incorporate STRANGE, although both declined to be named ahead of their formal announcements. 

Compatibility with ARRIVE and PREPARE

STRANGE is not the first such framework designed to increase transparency around how experiments are designed and how their results are shared in the literature. 

Nathalie Percie du Sert, a researcher at the National Centre for the Replacement, Refinement, and Reduction of Animals in Research who studies experimental design and reporting, first realized the need for new guidelines while completing a systematic review of the ferret model system she had used during her PhD. As she analyzed the literature, she tried applying the same quality metrics used to assess human clinical trials. Studies that were not randomized or blinded, for example, would normally have been excluded from her review. Among animal studies, however, “if I’d kept those same rules, I would have had no studies to include in my systematic review,” she says. “It was that bad.”

Working alongside her colleagues, Percie du Sert helped to develop ARRIVE, a set of reporting guidelines adopted by more than 1,000 journals and promoted by several funding agencies since it was introduced in 2010. ARRIVE includes a checklist of 10 “essential” items researchers should include to ensure that their studies are reported with enough detail, including information about the species, strain, substrain, sex, weight, and age of each animal. 

STRANGE, Percie du Sert tells The Scientist, is “fully compatible” with ARRIVE, and goes a step beyond in addressing more granular concerns that are unique to the field of animal behavior. “It’s a really interesting framework to evaluate the suitability of the animals used, and STRANGE is not just about the reporting, it can be used at the design stage as well to assess whether the animals are actually appropriate for the objective of your experiments.” 

See “Fixing the Flaws in Animal Research”

Together with PREPARE, another set of guidelines that is applied during experimental design, the three frameworks span the continuum of scientific research, from conceptualization to data reporting. “STRANGE plugs straight into PREPARE, PREPARE calls out to ARRIVE,” Rutz says, adding that these frameworks are already changing how scientists engage with bias in their work. Both PREPARE and ARRIVE have been endorsed by the Association for the Study of Animal Behaviour and the Animal Behavior Society.

Benjamin Farrar, a PhD student in comparative psychology at the University of Cambridge who published a commentary about STRANGE in Learning and Behavior, says that the addition of yet another framework risks turning the practice of considering bias into a “box-checking exercise” when submitting manuscripts. “It seems to have a lot of redundancy with what ARRIVE is trying to achieve, and ARRIVE seems to have it in a much more comprehensive and thoughtful way,” Farrar says. “STRANGE is a really positive step forward, but in its current form, it doesn’t quite achieve the strong solution to sampling biases that it wants to be.”

Farrar points to how the assessment of bias in human research has changed in the wake of WEIRD. It’s not just bias inherent in the subjects, he says, but also bias in the researchers, in the types of tests used to study cognition or behavior, and in the facilities where studies take place. Human psychologists have also started using more-robust statistical tools, such as models that account for even unmeasured sources of variation in their data.

While a handful of papers that voluntarily adopted STRANGE have already been published in journals such as Current Biology and Movement Ecology, it’s too early to say how effective the framework will be at addressing reproducibility in animal research. “I do think that there’s going to be bias at some level no matter what we do,” Webster tells The Scientist. “The best thing we can do is inform ourselves of what those biases may be. We thought it was high time to really highlight this problem and bring these different issues together and propose a solution.”

See “Potential Causes of Irreproducibility Revealed”
