Psychology is in Crisis: And Here’s How to Fix It
https://ensr.oii.ox.ac.uk/psychology-is-in-crisis-and-heres-how-to-fix-it/ (23 March 2017)
“Psychology emergency” by atomicity (Flickr).

Concerns have been raised about the integrity of the empirical foundation of psychological science: low statistical power, publication bias (i.e. an aversion to reporting statistically nonsignificant or “null” results), poor availability of data, high rates of statistical reporting errors (meaning that the data may not support the conclusions), and the blurring of boundaries between exploratory work (which creates new theory or develops alternative explanations) and confirmatory work (which tests existing theory). It seems that in psychology and communication, as in other fields of social science, much of what we think we know may rest on a tenuous empirical foundation.
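To make the first of these concerns concrete, here is a minimal sketch of the power arithmetic, assuming a two-sample t-test and a hypothetical small effect; the effect size and sample sizes are illustrative, not taken from any of the studies discussed here:

```python
# A minimal sketch of a power calculation, assuming a two-sample t-test
# and a hypothetical small standardized effect (Cohen's d = 0.3).
# The numbers are illustrative, not taken from any study discussed here.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Participants needed per group for 80% power at alpha = .05:
n_needed = analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"n per group for 80% power: {n_needed:.0f}")  # ~175

# Power actually achieved by a typical small study (n = 30 per group):
achieved = analysis.solve_power(effect_size=0.3, alpha=0.05, nobs1=30)
print(f"power with n = 30 per group: {achieved:.2f}")  # ~0.20
```

With a small true effect, a study of 30 participants per group has roughly a one-in-five chance of detecting it; the sample needed for conventional 80% power is nearly six times larger.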

However, a number of open science initiatives have been successful recently in raising awareness of the benefits of open science and encouraging public sharing of datasets. These are discussed by Malte Elson (Ruhr University Bochum) and the OII’s Andrew Przybylski in their special issue editorial: “The Science of Technology and Human Behavior: Standards, Old and New”, published in the Journal of Media Psychology. What makes this issue special is not the topic, but the scientific approach to hypothesis testing: the articles are explicitly confirmatory, that is, intended to test existing theory.

All five studies are registered reports, meaning they were reviewed in two stages: first, the theoretical background, hypotheses, methods, and analysis plans of a study were peer-reviewed before the data were collected. The studies received an “in-principle” acceptance before the researchers proceeded to conduct them. The soundness of the analyses and the discussion sections was reviewed in a second step, and the publication decision was not contingent on the outcome of the study: i.e. there was no bias against reporting null results. The authors made all materials, data, and analysis scripts available on the Open Science Framework (OSF), and the papers were checked using the freely available R package statcheck (see also: www.statcheck.io).
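statcheck works by recomputing the p-values implied by the test statistics a paper reports. The snippet below is a hedged Python re-implementation of that core idea for a single t-test; statcheck itself is an R package, and its actual parsing and decision rules are more thorough:

```python
# A minimal sketch of the consistency check statcheck automates:
# recompute the two-tailed p-value implied by a reported t statistic
# and degrees of freedom, and compare it to the p-value as reported
# (rounded to the same number of decimals). Illustrative only.
from scipy import stats

def t_test_consistent(t_value: float, df: int, reported_p: float,
                      decimals: int = 2) -> bool:
    """True if the reported p matches the p implied by t and df."""
    implied_p = 2 * stats.t.sf(abs(t_value), df)
    return round(implied_p, decimals) == round(reported_p, decimals)

# "t(28) = 2.20, p = .04" is consistent (implied p ≈ .036 rounds to .04)...
print(t_test_consistent(2.20, 28, 0.04))  # True
# ...but "t(28) = 2.20, p = .07" would be flagged as a reporting error.
print(t_test_consistent(2.20, 28, 0.07))  # False
```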

All additional (non-preregistered) analyses are explicitly labelled as exploratory. This makes it easier to see and understand what the researchers were expecting based on knowledge of the relevant literature, and what they eventually found in their studies. It also allows readers to build a clearer picture of the research process, and of the elements of the studies that emerged only after the reviews and the data collection were complete. The issue provides a clear example of how exploratory and confirmatory studies can coexist, and how science can thrive as a result. The articles published in this issue will hopefully serve as an inspiration and model for other media researchers, and encourage scientists studying media to preregister their designs and share their data and materials openly.

Media research (whether concerning the Internet, video games, or film) speaks directly to everyday life in the modern world. It affects how the public forms its perceptions of media effects, and how professional groups and governmental bodies make policies and recommendations. Empirical findings disseminated to caregivers, practitioners, and educators should therefore rest on a sufficiently rigorous empirical foundation. And indeed, the promise of building an empirically-based understanding of how we use, shape, and are shaped by technology is an alluring one. If adopted by media psychology researchers, this approach could support rigorous testing and development of promising theories, and the retirement of theories that do not reliably account for observed data.

The authors close by noting their firm belief that incremental steps taken towards scientific transparency and empirical rigor, with changes to publishing practices to promote open, reproducible, high-quality research, will help us realize this potential.

We caught up with the editors to find out more about preregistration of studies:

Ed.: Is this “crisis” in psychology (including, for example, the lack of reproducibility of certain reported results) unique to psychology, or does it extend more generally to other (social) sciences? And how much will is there in the community to do something about it?

Andy: Absolutely not. There is strong evidence in most social and medical sciences that computational reproducibility (i.e. re-running the code / data) and replicability (i.e. re-running the study) are much lower than we might expect. In psychology and medical research there is a lot of passion and expertise focused on improving the evidence base. We’re cautiously optimistic that researchers in allied fields such as computational social science will follow suit.

Malte: It’s important to understand that a failure to successfully replicate a previous finding is not a problem for scientific progress. Quite the contrary: it tells us that previously held assumptions or predictions must be revisited. The number of replications in psychology’s flagship journals is still not overwhelming, but the research community has begun to value and incentivize this type of research.

Ed.: It’s really impressive not just what you’ve done with this special issue (and the intentions behind it), but also that the editor gave you free rein to do so, and to investigate and report on the state of that journal’s previously published articles, including re-running stats, in what you describe as an act of “sincere self-reflection” on the part of the journal. How much acceptance is there in the field (psychology, and beyond) that things need to be shaken up?

Malte: I think it is uncontroversial to say that, as psychologists adapt their research practices (by preregistering their hypotheses, conducting high-powered replications, and sharing their data and materials), the reliability and quality of the scientific evidence they produce increases. However, we need to be careful not to devalue the research generated before these changes. But that is exactly what science can help with: meta-scientific analyses of already published research, be it in our editorial or elsewhere, provide guidance on how it may (or may not) inform future studies on technology and human behavior.

Andy: We owe a lot to the editor-in-chief Nicole Krämer and the experts who reviewed submissions to the special issue. This hard work has helped us and the authors deliver a strong set of studies on technology effects on behaviour. We are proud to say that Registered Reports is now a permanent submission track at the Journal of Media Psychology and 35 other journals. We hope this can help set an example for other areas of quantitative social science which may not yet realise they face the same serious challenges.

Ed.: It’s incredibly annoying to encounter papers in review where the problem is clearly that the study should have been designed differently from the start. The authors won’t start over, of course, so you’re just left with a weak paper that the authors will be desperate to offload somewhere, but that really shouldn’t be published: i.e. a massive waste of everyone’s time. What structural changes are needed to mainstream pre-registration as a process, i.e. for design to be reviewed first, before any data is collected or analysed? And what will a tipping point towards preregistration look like, assuming it comes?

Andy: We agree that this experience is aggravating for researchers invested in both the basic and applied aspects of science. We think this might come down to a carrot-and-stick approach. For quantitative science, pre-registration and replication could be made a requirement for articles to be considered in the Research Excellence Framework (REF) and as part of UK and EU research council funding. For example, the Wellcome Trust now provides an open access, open science portal for researchers supported by its funding (carrots). In terms of sticks, it may be that policymakers and the general public will become more sophisticated over time and simply will not value work that is not transparently conducted and shared.

Ed.: How aware / concerned are publishers and funding bodies of this crisis in confidence in psychology as a scientific endeavour? Will they just follow the lead of others (e.g. groups like the Center for Open Science), or are they taking a leadership role themselves in finding a way forward?

Malte: Funding bodies are arguably another source of particularly tasty carrots. It is in their vital interest that funded research is relevant and rigorously conducted, but also that it is sustainable: they depend on reliable groundwork on which to base new research projects. Without it, funding becomes, essentially, a gambling operation. Some organizations are quicker than others to take a lead, such as the Netherlands Organisation for Scientific Research (NWO), which has launched a Replication Studies pilot programme. I’m optimistic we will see similar efforts elsewhere.

Andy: We are deeply concerned that the general public will see science and scientists missing a golden opportunity to correct ourselves. Like scientists, funding bodies are adaptive, and we (and others) speak directly to them about these challenges to the medical and social sciences. The public and research councils invest substantial resources in science, and it is our responsibility to do our best and to deliver the best science we can. Initiatives like the Center for Open Science are key to this because they help scientists build tools to pool our resources and develop innovative methods for strengthening our work.

Ed.: I assume the end goal of this movement is to embed it in the structure of science as-it-is-done? i.e. for good journals and major funding bodies to make pre-registration of studies a requirement, and for a clear distinction to be drawn between exploratory and confirmatory studies? Out of curiosity, what does (to pick a random journal) Nature make of all this? And the scientific press? Is there much awareness of preregistration as a method?

Malte: Conceptually, preregistration is just another word for how the scientific method is already taught: hypotheses are derived from theory, and data are collected to test them. Predict, verify, replicate. Formalizing this concept at some organizational level (such as funding bodies or journals) seems the natural next step. Thanks to scientists like Chris Chambers, who is promoting the Registered Reports format, we can be confident that the number of journals offering this track will keep growing.

Andy: We’re excited to say that some of the mega-journals and some science journalists are on board. Nature Human Behaviour now provides registered reports as a submission track, and a number of science journalists, including Ed Yong (@edyong209), Tom Chivers (@TomChivers), Neuroskeptic (@Neuro_Skeptic), and Jesse Singal (@jessesingal), are leading the way with critical and on-point work that highlights the risks associated with the replication crisis and the opportunities to improve reproducibility.

Ed.: Finally: what would you suggest to someone wanting to make sure they do a good study, but who is not sure where to begin with all this: what are the main things they should read and consider?

Andy: That’s a good question; the web is a great place to start. To learn more about registered reports and why they are important see this, and to learn about their place in robust science see this. To see how you can challenge yourself to do a pre-registered study and earn $1,000 see this, and to do a deep dive into open scientific practice see this.

Malte: Yeah, what Andy said. Also, I would thoroughly recommend joining social networks (Twitter, or the two sister groups Psychological Methods and PsychMAP on Facebook) where these issues are the subject of lively discussion.

Ed.: Anyway... congratulations to you both, the issue authors, and the journal’s editor-in-chief, on having done a wonderful thing!

Malte: Thank you! We hope the research reports in this issue will serve as an inspiration and model for other psychologists.

Andy: Many thanks, we are doing our best to make the social sciences better and more reliable.

Read the full editorial: Elson, M. and Przybylski, A. (2017) The Science of Technology and Human Behavior: Standards, Old and New. Journal of Media Psychology. DOI: 10.1027/1864-1105/a000212


Malte Elson and Andrew Przybylski were talking to blog editor David Sutcliffe.

Is internet gaming as addictive as gambling? (no, suggests a new study)
https://ensr.oii.ox.ac.uk/is-internet-gaming-as-addictive-as-gambling-no-suggests-a-new-study/ (4 November 2016)

New research by Andrew Przybylski (OII, Oxford University), Netta Weinstein (Cardiff University), and Kou Murayama (Reading University), published today in the American Journal of Psychiatry, suggests that very few of those who play internet-based video games have symptoms suggesting they may be addicted. The article also says that gaming, though popular, is unlikely to be as addictive as gambling. Two years ago the APA identified a critical need for good research to look into whether internet gamers run a risk of becoming addicted, and asked how such an addiction might be diagnosed properly. To the authors’ knowledge, these are the first findings from a large-scale project to produce robust evidence on the potential new problem of “internet gaming disorder”.

The authors surveyed 19,000 men and women from nationally representative samples from the UK, the United States, Canada and Germany, with over half saying they had played internet games recently. Out of the total sample, 1% of young adults (18-24 year olds) and 0.5% of the general population (aged 18 or older) reported symptoms linking play to possible addictive behaviour — less than half of recently reported rates for gambling.
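For a sense of the uncertainty around prevalence figures like these, here is a minimal sketch of a confidence interval calculation; the counts below are back-of-the-envelope numbers consistent with the reported percentages, not the study’s actual data:

```python
# A minimal sketch of a prevalence estimate with a 95% confidence
# interval. The counts are back-of-the-envelope figures consistent
# with the reported percentages, not the study's actual data.
from statsmodels.stats.proportion import proportion_confint

cases, sample = 95, 19000  # ~0.5% of the full adult sample
low, high = proportion_confint(cases, sample, alpha=0.05, method="wilson")
print(f"prevalence: {cases / sample:.2%} (95% CI {low:.2%} to {high:.2%})")
# prevalence: 0.50% (95% CI roughly 0.41% to 0.61%)
```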

They warn that researchers studying the potential “darker sides” of Internet-based games must be cautious. Extrapolating from their data, as many as a million American adults might meet the proposed DSM-5 criteria for addiction to online games — representing a large cohort of people struggling with what could be clinically dysregulated behavior. However, because the authors found no evidence supporting a clear link to clinical outcomes, they warn that more evidence for clinical and behavioral effects is needed before concluding that this is a legitimate candidate for inclusion in future revisions of the DSM. If adopted, Internet gaming disorder would vie for limited therapeutic resources with a range of serious psychiatric disorders.
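The extrapolation to “as many as a million American adults” is straightforward arithmetic; in the hedged sketch below, the population figure is a rough assumption of ours, not a number from the paper:

```python
# Back-of-the-envelope extrapolation behind the "million adults" figure.
# The population count is an approximation we assume here, not a value
# taken from the paper.
us_adults = 250_000_000  # approximate US adult population
prevalence = 0.005       # 0.5% reporting threshold symptoms
print(f"{us_adults * prevalence:,.0f} potentially affected")  # 1,250,000
```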

Read the full article: Andrew K. Przybylski, Netta Weinstein, Kou Murayama (2016) Internet Gaming Disorder: Investigating the Clinical Relevance of a New Phenomenon. American Journal of Psychiatry. Published online: November 04, 2016.

We caught up with Andy to explore the broader implications of the study:

Ed.: Is “gaming addiction” or “Internet addiction” really a thing? e.g. is it something dreamed up by politicians / media people, or is it something that has been discussed and reported by psychiatrists and GPs on the ground?

Andy: Although internet addiction started as a joke about the pathologizing of everyday behaviours, popular fears have put it on the map for policymakers and researchers. In other words, thinking about potential disorders linked to the internet, gaming, and technology has taken on a life of its own.

Ed.: Two years ago the APA identified “a critical need for good research to look into whether internet gamers run a risk of becoming addicted” and asked how such an addiction might be diagnosed properly (i.e. using a checklist of symptoms). What other work or discussion has come out of that call?

Andy: In recent years two groups of researchers have emerged: one arguing there is an international consensus about the potential disorder, based on the checklist; the second arguing that it is problematic to pathologize internet gaming. This second group says we don’t understand enough about gaming to know if it’s any different from other hobbies, like being a sports fan, and is concerned that the approach could lead other activities to be classified as pathological. Our study set out to test whether the checklist approach works: a rigorous response to the APA’s call for research, using the symptoms it proposed.

Ed.: Do fears (whether founded or not) of addiction overlap at all with fears of violent video games perhaps altering players’ behaviour? Or are they very clearly discussed and understood as very separate issues?

Andy: Although the fears do converge, the evidence does not. There is a general view that some people might be more liable to be influenced by the addictive or violent aspects of gaming but this remains an untested assumption. In both areas the quality of the evidence base needs critical improvement before the work is valuable for policymakers and mental health professionals.

Ed.: And what’s the broad landscape like in this area – i.e. who are the main players, stakeholders, and pressure points?

Andy: In addition to the American Psychiatric Association (DSM-5), the World Health Organisation is considering formalising Gaming Disorder as a potential mental health issue in the next revision of the International Classification of Diseases (ICD). There is a movement among researchers (myself included, based on this research) to urge caution about rushing to create a new behavioural addiction based on gaming for the ICD-11. It is likely that including gaming addiction will do more harm than good by confusing an already complex and underdeveloped research area.

Ed.: And lastly: asking the researcher – do we have enough data and analysis to be able to discuss this sensibly and scientifically? What would a “definitive answer” to this question look like to you — and is it achievable?

Andy: The most important thing to understand about this research area is that there is very little high quality evidence. Generally speaking there are two kinds of empirical studies in the social and clinical sciences: exploratory studies and confirmatory ones. Most of the evidence about gaming addiction to date is exploratory; that is, the analyses reported represent whatever ‘sticks to the wall’ after the data are collected. This isn’t good evidence for health policy.

Our studies represent the first confirmatory research on gaming addiction. We pre-registered how we were going to collect and analyse our data before we saw it. We collected large representative samples and tested a priori hypotheses. This makes a big difference to the kinds of inferences you can draw, and to the value of the work to policymakers. We hope our work represents the first of many studies on technology effects that put open data, open code, and pre-registered analysis plans at the centre of science in this area. Until the research field adopts these high standards we will not have accurate, definitive answers about Internet Gaming Disorder.
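The worry about exploratory, post hoc analysis can be made concrete with a short simulation; the numbers below (20 unplanned outcomes, no true effects anywhere) are purely illustrative assumptions, not a model of any particular study:

```python
# A minimal simulation of why unplanned, exploratory testing inflates
# false positives: with no true effects anywhere, a study that tests
# 20 outcomes still finds at least one p < .05 most of the time.
# All numbers here are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2017)
n_studies, n_outcomes, n_per_group = 1000, 20, 50

studies_with_a_hit = 0
for _ in range(n_studies):
    for _ in range(n_outcomes):
        control = rng.normal(size=n_per_group)    # no true effect
        treatment = rng.normal(size=n_per_group)  # no true effect
        if stats.ttest_ind(control, treatment).pvalue < 0.05:
            studies_with_a_hit += 1
            break  # one "finding" is enough to write up

share = studies_with_a_hit / n_studies
print(f"studies with >= 1 spurious finding: {share:.0%}")  # ~64%, i.e. 1 - .95**20
```

Even with no real effects at all, roughly two thirds of such studies would turn up at least one ‘significant’ finding to report, which is exactly why pre-specifying the analysis matters.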



Andy was talking to David Sutcliffe, Managing Editor of the Policy blog.
