Psychology is in Crisis: And Here’s How to Fix It – The Policy and Internet Blog, 23 March 2017. https://ensr.oii.ox.ac.uk/psychology-is-in-crisis-and-heres-how-to-fix-it/
“Psychology emergency” by atomicity (Flickr).

Concerns have been raised about the integrity of the empirical foundation of psychological science: low statistical power, publication bias (i.e. an aversion to reporting statistically nonsignificant or “null” results), poor availability of data, high rates of statistical reporting errors (meaning that the reported data may not support the conclusions drawn), and the blurring of boundaries between exploratory work (which generates new theory or alternative explanations) and confirmatory work (which tests existing theory). It seems that in psychology and communication, as in other fields of social science, much of what we think we know may rest on a tenuous empirical foundation.
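To make the first of these concerns concrete, the sketch below is a purely hypothetical illustration (it assumes the statsmodels Python package and an arbitrary effect size of d = 0.3, and is not an analysis of any study discussed here) of how low statistical power arises from modest sample sizes, and how many participants a well-powered design would actually need.

```python
# Illustration of "low statistical power": the chance of detecting a true,
# small-to-medium effect (Cohen's d = 0.3) with 50 participants per group.
# All values are hypothetical, chosen only to illustrate the concept.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

power = analysis.solve_power(effect_size=0.3, nobs1=50, alpha=0.05)
print(f"Power with n = 50 per group: {power:.2f}")   # about 0.32, far below the usual 0.8 target

# Sample size per group needed to reach 80% power for the same effect
n_needed = analysis.solve_power(effect_size=0.3, power=0.8, alpha=0.05)
print(f"Participants needed per group for 80% power: {n_needed:.0f}")  # about 175
```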

However, a number of open science initiatives have recently succeeded in raising awareness of the benefits of open science and in encouraging public sharing of datasets. These are discussed by Malte Elson (Ruhr University Bochum) and the OII’s Andrew Przybylski in their special issue editorial: “The Science of Technology and Human Behavior: Standards, Old and New”, published in the Journal of Media Psychology. What makes this issue special is not the topic, but the scientific approach to hypothesis testing: the articles are explicitly confirmatory, that is, intended to test existing theory.

All five studies are registered reports, meaning they were reviewed in two stages: first, the theoretical background, hypotheses, methods, and analysis plans of a study were peer-reviewed before any data were collected, and the studies received an “in-principle” acceptance before the researchers proceeded to conduct them. In a second step, the soundness of the analyses and discussion was reviewed, and the publication decision was not contingent on the outcome of the study: i.e. there was no bias against reporting null results. The authors made all materials, data, and analysis scripts available on the Open Science Framework (OSF), and the papers were checked using the freely available R package statcheck (see also: www.statcheck.io).
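statcheck itself is an R package that parses APA-formatted results from manuscripts and recomputes the reported statistics; purely as an illustration of the kind of consistency check it automates, here is a minimal Python sketch (using scipy, with hypothetical reported values) that recomputes the p-value implied by a reported t-test and flags a mismatch with the p-value the authors reported.

```python
# Illustrative sketch only: statcheck is an R package (see statcheck.io) that
# extracts APA-formatted results from text; this snippet just shows the core
# idea of recomputing a p-value from a reported test statistic.
from scipy import stats

def check_t_report(t_value, df, reported_p, decimals=2, two_tailed=True):
    """Recompute the p-value for a reported t(df) statistic and compare it
    with the reported p, allowing for rounding to `decimals` places."""
    p = stats.t.sf(abs(t_value), df)   # upper-tail probability
    if two_tailed:
        p *= 2
    consistent = abs(p - reported_p) < 0.5 * 10 ** (-decimals)
    return p, consistent

# Hypothetical reported result: t(28) = 2.10, p = .04
recomputed, ok = check_t_report(t_value=2.10, df=28, reported_p=0.04)
print(f"recomputed p = {recomputed:.4f}, consistent with reported p: {ok}")
```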

All additional (non-preregistered) analyses are explicitly labelled as exploratory. This makes it easier to see what the researchers expected based on the relevant literature, and what they eventually found in their studies. It also gives readers a clearer picture of the research process, and of which elements of the studies only emerged after the reviews and data collection were complete. The issue provides a clear example of how exploratory and confirmatory studies can coexist — and how science can thrive as a result. The articles published in this issue will hopefully serve as an inspiration and model for other media researchers, and encourage scientists studying media to preregister their designs and share their data and materials openly.

Media research — whether concerning the Internet, video games, or film — speaks directly to everyday life in the modern world. It shapes how the public perceives media effects, and how professional groups and governmental bodies make policies and recommendations. Findings disseminated to caregivers, practitioners, and educators should therefore rest on a sufficiently rigorous empirical foundation. And indeed, the promise of building an empirically based understanding of how we use, shape, and are shaped by technology is an alluring one. If adopted by media psychology researchers, this approach could support rigorous testing and development of promising theories, and the retirement of theories that do not reliably account for observed data.

The authors close by noting their firm belief that incremental steps taken towards scientific transparency and empirical rigor (with changes to publishing practices to promote open, reproducible, high-quality research) will help us realize this potential.

We caught up with the editors to find out more about preregistration of studies:

Ed.: So this “crisis” in psychology (including, for example, a lack of reproducibility of certain reported results) — is it unique to psychology, or does it extend more generally to other (social) sciences? And how much will is there in the community to do something about it?

Andy: Absolutely not. There is strong evidence in most social and medical sciences that computational reproducibility (i.e. re-running the code / data) and replicability (i.e. re-running the study) are much lower than we might expect. In psychology and medical research there is a lot of passion and expertise focused on improving the evidence base. We’re cautiously optimistic that researchers in allied fields such as computational social science will follow suit.

Malte: It’s important to understand that a failure to successfully replicate a previous finding is not a problem for scientific progress. Quite the contrary: it tells us that previously held assumptions or predictions must be revisited. The number of replications in psychology’s flagship journals is still not overwhelming, but the research community has begun to value and incentivize this type of research.

Ed.: It’s really impressive not just what you’ve done with this special issue (and the intentions behind it), but also that the editor gave you free rein to do so — and to investigate and report on the state of that journal’s previously published articles, including re-running stats, in what you describe as an act of “sincere self-reflection” on the part of the journal. How much acceptance is there in the field (psychology, and beyond) that things need to be shaken up?

Malte: I think it is uncontroversial to say that, as psychologists adapt their research practices (by preregistering their hypotheses, conducting high-powered replications, and sharing their data and materials), the reliability and quality of the scientific evidence they produce increases. However, we need to be careful not to devalue the research generated before these changes. But that is exactly what science can help with: meta-scientific analyses of already published research, be it in our editorial or elsewhere, provide guidance on how it may (or may not) inform future studies on technology and human behavior.

Andy: We owe a lot to the editor-in-chief Nicole Krämer and the experts who reviewed submissions to the special issue. This hard work has helped us and the authors deliver a strong set of studies on the effects of technology on behaviour. We are proud to say that registered reports are now a permanent submission track at the Journal of Media Psychology and 35 other journals. We hope this can help set an example for other areas of quantitative social science which may not yet realise they face the same serious challenges.

Ed.: It’s incredibly annoying to encounter papers in review where the problem is clearly that the study should have been designed differently from the start. The authors won’t start over, of course, so you’re just left with a weak paper that they will be desperate to offload somewhere, but that really shouldn’t be published: i.e. a massive waste of everyone’s time. What structural changes are needed to mainstream pre-registration as a process, i.e. for the design to be reviewed first, before any data are collected or analysed? And what will a tipping point towards preregistration look like, assuming it comes?

Andy: We agree that this experience is aggravating for us as researchers invested in both the basic and applied aspects of science. We think this might come down to a carrot and stick approach. For quantitative science, pre-registration and replication could be made a requirement for articles to be considered in the Research Excellence Framework (REF) and as part of UK and EU research council funding. For example, the Wellcome Trust now provides an open access, open science portal for researchers supported by its funding (carrots). In terms of sticks, it may be the case that policy makers and the general public will become more sophisticated over time and simply will not value work that is not transparently conducted and shared.

Ed.: How aware / concerned are publishers and funding bodies of this crisis in confidence in psychology as a scientific endeavour? Will they just follow the lead of others (e.g. groups like the Center for Open Science), or are they taking a leadership role themselves in finding a way forward?

Malte: Funding bodies are arguably another source of particularly tasty carrots. It is in their vital interest that funded research is relevant and conducted rigorously, but also that it is sustainable. They depend on reliable groundwork on which to base new research projects. Without it, funding becomes, essentially, a gambling operation. Some organizations are quicker than others to take a lead, such as the Netherlands Organisation for Scientific Research (NWO), which has launched a Replication Studies pilot programme. I’m optimistic we will see similar efforts elsewhere.

Andy: We are deeply concerned that the general public will see that science and scientists are missing a golden opportunity to correct ourselves. Like scientists, funding bodies are adaptive, and we (and others) speak directly to them about these challenges to the medical and social sciences. The public and research councils invest substantial resources in science, and it is our responsibility to do our best and to deliver the best science we can. Initiatives like the Center for Open Science are key to this because they help scientists build tools to pool our resources and develop innovative methods for strengthening our work.

Ed.: I assume the end goal of this movement is to embed it in the structure of science as it is done? i.e. for good journals and major funding bodies to make pre-registration of studies a requirement, and for a clear distinction to be drawn between exploratory and confirmatory studies? Out of curiosity, what does (to pick a random journal) Nature make of all this? And the scientific press? Is there much awareness of preregistration as a method?

Malte: Conceptually, preregistration is just another word for how the scientific method is already taught: hypotheses are derived from theory, and data are collected to test them. Predict, verify, replicate. Matching this concept with a formal procedure at some organizational level (such as funding bodies or journals) seems only logical. Thanks to scientists like Chris Chambers, who is promoting the Registered Reports format, we can be confident that the number of journals offering this track will keep growing.

Andy: We’re excited to say that some of the mega-journals and some science journalists are on board. Nature Human Behaviour now offers registered reports as a submission track, and a number of science journalists including Ed Yong (@edyong209), Tom Chivers (@TomChivers), Neuroskeptic (@Neuro_Skeptic), and Jesse Singal (@jessesingal) are leading the way with critical and on-point work that highlights the risks associated with the replication crisis and opportunities to improve reproducibility.

Ed.: Finally: what would you suggest to someone wanting to make sure they do a good study, but who is not sure where to begin with all this: what are the main things they should read and consider?

Andy: That’s a good question; the web is a great place to start. To learn more about registered reports and why they are important see this, and to learn about their place in robust science see this. To see how you can challenge yourself to do a pre-registered study and earn $1,000 see this, and to do a deep dive into open scientific practice see this.

Malte: Yeah, what Andy said. Also, I would thoroughly recommend joining social networks (Twitter, or the two sister groups Psychological Methods and PsychMAP on Facebook) where these issues are the subject of lively discussion.

Ed.: Anyway… congratulations to you both, the issue authors, and the journal’s editor-in-chief, on having done a wonderful thing!

Malte: Thank you! We hope the research reports in this issue will serve as an inspiration and model for other psychologists.

Andy: Many thanks, we are doing our best to make the social sciences better and more reliable.

Read the full editorial: Elson, M. and Przybylski, A. (2017) The Science of Technology and Human Behavior: Standards, Old and New. Journal of Media Psychology. DOI: 10.1027/1864-1105/a000212


Malte Elson and Andrew Przybylski were talking to blog editor David Sutcliffe.
