The Policy and Internet Blog (https://ensr.oii.ox.ac.uk) – Understanding public policy online

Human Rights and Internet Technology: Six Considerations
https://ensr.oii.ox.ac.uk/human-rights-and-internet-technology-six-considerations/ – Tue, 17 Apr 2018

The Internet has drastically reshaped communication practices across the globe, touching many aspects of modern life. This increased reliance on Internet technology also impacts human rights. The United Nations Human Rights Council has reaffirmed many times (most recently in a 2016 resolution) that “the same rights that people have offline must also be protected online”.

However, only limited guidance is given by international human rights monitoring bodies and courts on how to apply human rights law to the design and use of Internet technology, especially when developed by non-state actors. And while the Internet can certainly facilitate the exercise and fulfilment of human rights, it is also conducive to human rights violations, with many Internet organizations and companies currently grappling with their responsibilities in this area.

To help understand how digital technology can support the exercise of human rights, we—Corinne Cath, Ben Zevenbergen, and Christiaan van Veen—organized a workshop at the 2017 Citizen Lab Summer Institute in Toronto, on ‘Coding Human Rights Law’. By bringing together academics, technologists, human rights experts, lawyers, government officials, and NGO employees, we hoped to pool experience and scope the field to:

1. Explore the relationship between connected technology and human rights;

2. Understand how this technology can support the exercise of human rights;

3. Identify current bottlenecks for integrating human rights considerations into Internet technology; and

4. List recommendations to provide guidance to the various stakeholders working on human-rights-strengthening technology.

In the workshop report “Coding Human Rights Law: Citizen Lab Summer Institute 2017 Workshop Report”, we give an overview of the discussion. We address multiple legal and technical concerns. We consider the legal issues arising from human rights law being state-centric, while most connected technologies are being developed by the private sector. We also discuss the applicability of current international human rights frameworks to debates about new technologies. We cover the technical issues that arise when trying to code for human rights, in particular when human rights considerations are integrated into the design and operationalization of Internet technology. We conclude by identifying some areas for further debate and reflection, six of which we list below:

Integrating Human Rights into Internet Technology Design: Six Considerations

Legal:

1. Further study is needed of how instruments of the existing human rights framework (like the UN Guiding Principles on Business and Human Rights) apply to Internet actors, including whether new legal instruments are needed at the national and international level to specify the human rights responsibilities of non-state actors.

2. More research is needed to analyse and rebuild the theories underpinning human rights, given that the premises and assumptions grounding them may have been affected by the transition to a digitally mediated society. Much has been done on the rights to privacy and free speech, but more analysis of the relevance of other human rights in this area is needed.

3. Human rights frameworks are best approached as a legal minimum baseline, while other frameworks, like data protection legislation or technology-specific regulation, give content to what is aimed for above and beyond this minimum threshold.

Technical:

1. Taking into account a wider range of international human rights would benefit the development of human-rights-oriented Internet technology. This means thinking beyond the right to privacy and freedom of expression to include, for example, the right to equality and non-discrimination, and the right to work.

2. Internet technologies, in general, must be developed with an eye to their potential negative impacts, and human rights impact assessments should be undertaken to understand those impacts. This requires knowledge of the inherent tensions that exist between different human rights, and it means technology developers must be precise and considerate about where in the Internet stack they want to have an impact.

3. Technology designers, funders, and implementers need to be aware of the context and culture within which a technology will be used, by involving the target end-users in the design process. For instance, it is important to ensure that human-rights-enabling technology does not price out certain populations from using it.

Internet technology can enable the exercise of human rights—if it is context-aware; recognises the inherent tensions between certain rights (for example, privacy and access to knowledge, or free speech and protection from abuse); is flexible yet specific, legally sound, and ethically just; is modest in its claims; and actively anticipates and mitigates potential risks.

With these considerations, we are entering uncharted waters. Unless states have incorporated human rights obligations directly into their national laws, there are few binding obligations on the private-sector actors pushing this technology forward. Likewise, there are few established methodologies for developing human-rights-enabling technology—meaning that we should be careful and considerate about how these technologies are developed.

Read the workshop report: Corinne Cath, Ben Zevenbergen, and Christiaan van Veen (2018) Coding Human Rights Law: Citizen Lab Summer Institute 2017 Workshop Report. Posted: 14 February, 2018.

Why we shouldn’t be pathologizing online gaming before the evidence is in
https://ensr.oii.ox.ac.uk/why-we-shouldnt-be-pathologizing-online-gaming-before-the-evidence-is-in/ – Tue, 10 Oct 2017

Internet-based video games are a ubiquitous form of recreation pursued by the majority of adults and young people. With sales eclipsing box office receipts, games are now an integral part of modern leisure. However, the American Psychiatric Association (APA) recently identified Internet Gaming Disorder (IGD) as a potential psychiatric condition and has called for research to investigate the potential disorder’s validity and its impacts on health and behaviour.

Research responding to this call for a better understanding of IGD is still at a formative stage, and there are active debates surrounding it. A growing literature suggests that excessive or problematic gaming may be related to poorer health, though findings in this area are mixed. Some argue for a theoretical framing akin to a substance abuse disorder (i.e. where gaming is considered to be inherently addictive), while others frame Internet-based gaming as a self-regulatory challenge for individuals.

In their article “A prospective study of the motivational and health dynamics of Internet Gaming Disorder”, Netta Weinstein, the OII’s Andrew Przybylski, and Kou Murayama address this gap in the literature by linking self-regulation and Internet Gaming Disorder research. Drawing on a representative sample of 5,777 American adults, they examine how problematic gaming emerges from a state of individual “dysregulation” and how it predicts health — finding no evidence directly linking IGD to health over time.

This negative finding indicates that IGD may not, in itself, be robustly associated with important clinical outcomes. As such, it may be premature to invest in management of IGD using the same kinds of approaches taken in response to substance-based addiction disorders. Further, the findings suggest that more high-quality evidence regarding clinical and behavioural effects is needed before concluding that IGD is a legitimate candidate for inclusion in future revisions of the Diagnostic and Statistical Manual of Mental Disorders.

We caught up with Andy to explore the implications of the study:

Ed: To ask a blunt question upfront: do you feel that Internet Gaming Disorder is a valid psychiatric condition (and that “games can cause problems”)? Or is it still too early to say?

Andy: No, it is not. It’s difficult to overstate how sceptical the public should be of researchers who claim, and communicate their research as if, Internet addiction, gaming addiction, or Internet gaming disorder (IGD) are recognized psychiatric disorders. The fact of the matter is that the American psychiatrists working on the most recent revision of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) highlighted problematic online play as a topic they were interested in learning more about. These concerns are set out in Section III of the DSM-5 (entitled “Emerging Measures and Models”). For those interested in this debate, see this position paper.

Ed: Internet gaming seems like quite a specific activity to worry about: how does it differ from things like offline video games, online gambling and casino games; or indeed the various “problematic” uses of the Internet that lead to people admitting themselves to digital detox camps?

Andy: In some ways computer games, and Internet ones in particular, are distinct from other activities. They are frequently updated to meet players’ expectations, and some business models, such as pay-to-play, are explicitly designed to get highly engaged players to spend real money on in-game advantages. Detox camps are very worrying to me as a scientist: they have no scientific basis, many of those who run them have financial conflicts of interest when they comment in the press, and there have been a number of deaths at these facilities.

Ed: You say there are two schools of thought: if IGD is indeed a valid condition, it should be framed either as an addiction (i.e. there is something inherently addictive about certain games), or as a self-regulatory challenge relating to an individual’s self-control. I guess intuitively it might involve a bit of both: online environments can be very persuasive, and some people are easily persuaded?

Andy: Indeed it could be. As researchers mainly interested in self-regulation, we’re most interested in gaming as one of many activities that can be successfully (or unsuccessfully) integrated into everyday life. Unfortunately we don’t know much for sure about whether there is something inherently addictive about games, because the research literature is based largely on inferences from correlational data, drawn from convenience samples, with post-hoc analyses. Because the evidence base is of such low quality, most of the published findings (i.e. correlations/factor analyses) supporting gaming addiction as a valid condition likely suffer from the Texas Sharpshooter Fallacy.

Ed: Did you examine the question of whether online games may trigger things like anxiety, depression, violence, isolation etc. — or whether these conditions (if pre-existing) might influence the development of IGD?

Andy: Well, our modelling focused on the links between Internet Gaming Disorder, health (mental, physical, and social), and motivational factors (feeling competent, choiceful, and a sense of belonging) examined at two time points six months apart. We found that those who had their motivational needs met at the start of the study were more likely to have higher levels of health six months later and were less likely to say they experienced some of the symptoms of Internet Gaming Disorder.

Though there was no direct link between Internet Gaming Disorder and health six months later, we performed an exploratory analysis (one we did not pre-register) and found an indirect link between Internet Gaming Disorder and health by way of motivational factors. In other words, Internet Gaming Disorder was linked to lower levels of feeling competent, choiceful, and connected, which were in turn linked to lower levels of health.

Ed: All games are different. How would a clinician identify if someone was genuinely “addicted” to a particular game — there would presumably have to be game-by-game ratings of their addictive potential (like there are with drugs). How would anyone find the time to do that? Or would diagnosis focus more on the individual’s behaviour, rather than what games they play? I suppose this goes back to the question of whether “some games are addictive” or whether just “some people have poor self-control”?

Andy: No one knows. In fact, the APA doesn’t define what “Internet Games” are. In our research we ask participants to define it for themselves: “Think about the Internet games you may play on Facebook (e.g. Farmville), Tablet/Smartphones (e.g. Candy Crush), or Computer/Consoles (e.g. Minecraft).” It’s very difficult to overstate how suboptimal this state of affairs is from a scientific perspective.

Ed: Is it odd that it was the APA’s Substance-Related Disorders Work Group that called for research into IGD? Are “Internet Games” unique in being classed alongside substances, or are there other information-based behaviours that fall under the group’s remit?

Andy: Yes, it’s very odd. Our research group is not privy to these discussions, but my understanding is that a range of behaviours and other technology-related activities, such as general Internet use, have been discussed.

Ed: A huge amount of money must be spent on developing and refining these games, i.e. to get people to spend as much time (and money) as possible playing them. Are academics (and clinicians) always going to be playing catch-up to industry?

Andy: I’m not sure that there is one answer to this. One useful way to think of online games is the example of a gym. Gyms are most profitable when many people pay for (and don’t cancel) their memberships while the owners maintain a small footprint. The world’s most successful gym might be a one-square-metre facility with seven billion members, none of whom ever visits. Many online games are like this: some costs scale nicely, but others are high, like servers, community management, upkeep, and power. There are many researchers studying the addictive potential of games, but because they constantly reinvent the wheel by creating duplicate survey instruments (there are literally dozens that are used only once or a couple of times), very little of real-world relevance is ever learned or transmitted to the public.

Ed: It can’t be trivial to admit another condition into the Diagnostic and Statistical Manual of Mental Disorders (DSM-5)? Presumably there must be firm (reproducible) evidence that it is a (persistent) problem for certain people, with a specific (identifiable) cause — given it could presumably be admitted in courts as a mitigating condition, and possibly also have implications for health insurance and health policy? What are the wider implications if it does end up being admitted to the DSM-5?

Andy: It is very serious stuff. Opening the door to pathologizing one of the world’s most popular recreational activities risks stigmatizing hundreds of millions of people and pushing already overstretched mental health systems past breaking point.

Ed: You note that your study followed a “pre-registered analysis plan” — what does that mean?

Andy: We’ve discussed the wider problems in social, psychological, and medical science before. But basically, preregistration and Registered Reports provide scientists with a way to record their hypotheses in advance of data collection. This improves the quality of the inferences researchers draw from experiments and large-scale social data science. In this study, as in our other work, we recorded our sampling plan, our analysis plan, and our materials before we collected our data.

Ed: And finally: what follow up studies are you planning?

Andy: We are now conducting a series of studies investigating problematic play in younger participants with a focus on child-caregiver dynamics.

Read the full article: Weinstein N, Przybylski AK, Murayama K. (2017) A prospective study of the motivational and health dynamics of Internet Gaming Disorder. PeerJ 5:e3838 https://doi.org/10.7717/peerj.3838

Additional peer-reviewed articles in this area by Andy include:

Przybylski, A.K. & Weinstein N. (2017). A Large-Scale Test of the Goldilocks Hypothesis: Quantifying the Relations Between Digital Screens and the Mental Well-Being of Adolescents. Psychological Science. DOI: 10.1177/0956797616678438.

Przybylski, A. K., Weinstein, N., & Murayama, K. (2016). Internet Gaming Disorder: Investigating the Clinical Relevance of a New Phenomenon. American Journal of Psychiatry. DOI: 10.1176/appi.ajp.2016.16020224.

Przybylski, A. K. (2016). Mischievous responding in Internet Gaming Disorder research. PeerJ, 4, e2401. https://doi.org/10.7717/peerj.2401

For more on the ongoing “crisis in psychology” and how pre-registration of studies might offer a solution, see this discussion with Andy and Malte Elson: Psychology is in crisis, and here’s how to fix it.

Andy Przybylski was talking to blog editor David Sutcliffe.

From private profit to public liabilities: how platform capitalism’s business model works for children
https://ensr.oii.ox.ac.uk/from-private-profit-to-public-liabilities-how-platform-capitalisms-business-model-works-for-children/ – Thu, 14 Sep 2017

Two concepts have recently emerged that invite us to rethink the relationship between children and digital technology: the “datafied child” (Lupton & Williamson, 2017) and children’s digital rights (Livingstone & Third, 2017). The concept of the datafied child highlights the amount of data being harvested about children during their daily lives, while the children’s rights agenda responds to the ethical and legal challenges the datafied child presents.

Children have never been afforded the full sovereignty of adulthood (Cunningham, 2009) but both these concepts suggest children have become the points of application for new forms of power that have emerged from the digitisation of society. The most dominant form of this power is called “platform capitalism” (Srnicek, 2016). As a result of platform capitalism’s success, there has never been a stronger association between data, young people’s private lives, their relationships with friends and family, their life at school, and the broader political economy. In this post I will define platform capitalism, outline why it has come to dominate children’s relationship to the internet and suggest two reasons in particular why this is problematic.

Children predominantly experience the Internet through platforms

‘At the most general level, platforms are digital infrastructures that enable two or more groups to interact. They therefore position themselves as intermediaries that bring together different users: customers, advertisers, service providers, producers, suppliers, and even physical objects’ (Srnicek 2016, p43). Examples of platform capitalism include the technology superpowers – Google, Apple, Facebook, and Amazon. There are, however, many relevant instances of platforms that children and young people use. These include platforms for socialising, platforms for audio-visual content, platforms that communicate with smart devices and toys, platforms for games and sports franchises, and platforms that provide services (including within the public sector) that children or their parents use.

Young people choose to use platforms for play, socialising and expressing their identity. Adults have also introduced platforms into children’s lives: for example Capita SIMS is a platform used by over 80% of schools in the UK for assessment and monitoring (over the coming months at the Oxford Internet Institute we will be studying such platforms, including SIMS, for The Oak Foundation). Platforms for personal use have been facilitated by the popularity of tablets and smartphones.

Amongst the young, there has been a sharp uptake in tablet and smartphone usage at the expense of PC or laptop use. Sixteen per cent of 3-4 year olds have their own tablet, with this incidence doubling for 5-7 year olds. By the age of 12, smartphone ownership begins to outstrip tablet ownership (Ofcom, 2016). In our research at the OII, even with low-income families included in the sample, 93% of teenagers owned a smartphone. This has brought forth the ‘appification’ of the web that Zittrain predicted in 2008. It means that children and young people predominantly experience the internet via platforms, which we can think of as controlled gateways to the open web.

Platforms exist to make money for investors

In public discourse some of these platforms are called social media. This term distracts us from the reason many of these publicly floated companies exist: to make money for their investors. It is only logical for all these companies to pursue the WeChat model that is becoming so popular in China. WeChat is a closed-circuit platform, in that it keeps all engagements with the internet, including shopping, betting, and video calls, within its corporate compound. This brings WeChat closer to a monopoly on data extraction.

Platforms have consolidated their success by buying out their competitors. Alphabet, Amazon, Apple, Facebook and Microsoft have made 436 acquisitions worth $131 billion over the last decade (Bloomberg, 2017). Alternatively, they simply mimic the features of their competitors. For example, after Facebook acquired Instagram, it introduced Stories, a feature copied from Snapchat, which lets users upload photos and videos as a ‘story’ that automatically expires after 24 hours.

The more data these companies capture that their competitors cannot, the more value they can extract from it and the better their business model works. It is unsurprising, therefore, that when we asked groups of teenagers during our research to draw a visual representation of what they thought the world wide web and the internet looked like, almost all of them just drew corporate logos (they also told us they had no idea that Facebook owns WhatsApp and Instagram, or that Google owns YouTube). Platform capitalism dominates and controls their digital experiences — but what provisions do these platforms make for children?

The General Data Protection Regulation (GDPR) (set to be implemented in all EU states, including the UK, in 2018) says that the collection of data about children below the age of 13 shall be lawful only if and to the extent that consent is given or authorised by the child’s parent or custodian. Because most platforms are American-owned, they tend instead to apply a piece of US federal legislation known as COPPA; the age of consent for using Snapchat, WhatsApp, Facebook, and Twitter, for example, is therefore set at 13. Yet the BBC found last year that 78% of children aged 10 to 12 had signed up to a platform, including Facebook, Instagram, Snapchat, and WhatsApp.

Platform capitalism offloads its responsibilities onto the user

Why is this a problem? Firstly, because platform capitalism offloads any responsibility onto problematically normative constructs of childhood, parenting, and parental relations. The owners of platforms assume children will always consult their parents before using their services, and that parents will read and understand their terms and conditions — which, research confirms, few users, children or adults, even look at.

Moreover, we found in our research that many parents don’t have the knowledge, expertise, or time to monitor what their children are doing online. Some parents, for instance, worked night shifts or had more than one job. We talked to children who regularly moved between homes and whose estranged parents didn’t communicate with each other to supervise their children online. We found that parents who are in financial difficulties, or affected by mental or physical illness, are often unable to keep on top of their children’s digital lives.

We also interviewed children who use strategies to manage their parents’ anxieties so they would be left alone. They would, for example, allow their parents to be their friends on Facebook, but do all their personal communication on other platforms their parents knew nothing about. Often, then, the most vulnerable children offline (children in care, for example) are the most vulnerable children online. My colleagues at the OII found that 9 out of 10 of the teenagers who are bullied online also face regular ‘traditional’ bullying. Helping these children requires extra investment from their families, as well as from teachers, charities, and social services. The burden also falls on schools to address the problems of fake news and extremism, such as Holocaust denial, that children can encounter on platforms.

This is typical of platform capitalism. It monetises what are called social graphs: the networks of users on its platforms, which it then makes available to advertisers. Social graphs are more than just nodes and edges representing our social lives: they embody often intimate or very sensitive data (which can often be de-anonymised by linking, matching, and combining digital profiles). When graphs become dysfunctional and manifest social problems such as abuse, doxxing, stalking, and grooming, local social systems and institutions — which are usually publicly funded — have to deal with the fall-out. These institutions are often either under-resourced and ill-equipped to solve such problems, or already overburdened.

Are platforms too powerful?

The second problem is the ecosystems of dependency that emerge, within which smaller companies or other corporations try to monetise their associations with successful platforms: they seek to get in on the monopolies of data extraction that the big platforms are creating. Many of these companies are not wealthy corporations and therefore don’t have the infrastructure or expertise to develop their own robust security measures. They can cut costs by neglecting security, or they subcontract services to yet more companies, which are then added to the network of data sharers.

Again, the platforms offload any responsibility onto the user. For example, WhatsApp tells its users: “Please note that when you use third-party services, their own terms and privacy policies will govern your use of those services”. These ecosystems are networks that are only as strong as their weakest link. There are many infamous examples that illustrate this, including the so-called ‘Snappening’, in which sexually explicit pictures harvested from Snapchat — a platform that is popular with teenagers — were released on to the open web. There is also a growing industry in fake apps that enable illegal data capture and fraud by leveraging the implicit trust users place in corporate walled gardens.

What can we do about these problems? Platform capitalism is restructuring labour markets and social relations in such a way that opting out is becoming an option available only to a privileged few. Moreover, we found that teenagers whose parents prohibited them from using social platforms often felt socially isolated and stigmatised. In the real world of messy social reality, platforms can’t continue to offload their responsibilities onto parents and schools.

We need some solutions fast because, by tacitly accepting the terms and conditions of platform capitalism – particularly when they tell us it is not responsible for the harms its business model can facilitate – we may now be passing an event horizon where these companies become too powerful, unaccountable, and distant from our local reality.

References

Hugh Cunningham (2009) Children and Childhood in Western Society Since 1500. Routledge.

Sonia Livingstone, Amanda Third (2017) Children and young people’s rights in the digital age: An emerging agenda. New Media and Society 19 (5).

Deborah Lupton, Ben Williamson (2017) The datafied child: The dataveillance of children and implications for their rights. New Media and Society 19 (5).

Nick Srnicek (2016) Platform Capitalism. Wiley.

Exploring the Darknet in Five Easy Questions
https://ensr.oii.ox.ac.uk/exploring-the-darknet-in-five-easy-questions/ – Tue, 12 Sep 2017

Many people are probably aware of something called “the darknet” (also sometimes called the “dark web”), or at least have a vague notion of what it might be. However, many probably don’t know much about the global flows of drugs, weapons, and other illicit items traded on darknet marketplaces like AlphaBay and Hansa, the two large marketplaces that were recently shut down by the FBI, DEA, and Dutch National Police.

We caught up with Martin Dittus, a data scientist working with Mark Graham and Joss Wright on the OII’s darknet mapping project, to find out some basics about darknet markets, and why they’re interesting to study.

Firstly: what actually is the darknet?

Martin: The darknet is simply a part of the Internet you access using anonymising technology, so you can visit websites without being easily observed. This allows you to provide (or access) services online that can’t be tracked easily by your ISP or law enforcement. There are actually many ways in which you can visit the darknet, and it’s not technically hard. The most popular anonymising technology is probably Tor. The Tor browser functions just like Chrome, Internet Explorer or Firefox: it’s a piece of software you install on your machine to then open websites. It might be a bit of a challenge to know which websites you can then visit (you won’t find them on Google), but there are darknet search engines, and community platforms that talk about it.

The term ‘darknet’ is perhaps a little bit misleading, in that a lot of these activities are not as hidden as you might think: it’s inconvenient to access, and it’s anonymising, but it’s not completely hidden from the public eye. Once you’re using Tor, you can see any information displayed on darknet websites, just like you would on the regular internet. It is also important to state that this anonymisation technology is entirely legal. I would personally even argue that such tools are important for democratic societies: in a time where technology allows pervasive surveillance by your government, ISP, or employer, it is important to have digital spaces where people can communicate freely.

And is this also true for the marketplaces you study on the darknet?

Martin: Definitely not! Darknet marketplaces are typically set up to trade illicit products and services, and as a result are considered criminal in most jurisdictions. These market platforms use darknet technology to provide a layer of anonymity for the participating vendors and buyers, on websites ranging from smaller single-vendor sites to large trading platforms. In our research, we are interested in the larger marketplaces; these are comparable to Amazon or eBay — platforms which allow many individuals to offer and access a variety of products and services.

The first darknet market platform to acquire some prominence and public reporting was the Silk Road — between 2011 and 2013, it attracted hundreds of millions of dollars’ worth of bitcoin-based transactions, before being shut down by the FBI. Since then, many new markets have been launched, shut down, and replaced by others. Despite the size of such markets, relatively little is known about the economic geographies of the illegal economic activities they host. This is what we are investigating at the Oxford Internet Institute.

And what do you mean by “economic geography”?

Martin: Economic geography tries to understand why certain economic activity happens in some places, but not others. In our case, we might ask where heroin dealers on darknet markets are geographically located, or where in the world illicit weapon dealers tend to offer their goods. We think this is an interesting question to ask for two reasons. First, because it connects to a wide range of societal concerns, including drug policy and public health. Observing these markets allows us to establish an evidence base to better understand a range of societal concerns, for example by tracing the global distribution of certain emergent practices. Second, it falls within our larger research interest of internet geography, where we try to understand the ways in which the internet is a localised medium, and not just a global one as is commonly assumed.

So how do you go about studying something that’s hidden?

Martin: While the strong anonymity on darknet markets makes it difficult to collect data about the geography of actual consumption, there is a large amount of data available about the offered goods and services themselves. These marketplaces are highly structured — just like Amazon there’s a catalogue of products, every product has a title, a price, and a vendor who you can contact if you have questions. Additionally, public customer reviews allow us to infer trading volumes for each product. All these things are made visible, because these markets seek to attract customers. This allows us to observe large-scale trading activity involving hundreds of thousands of products and services.
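As a rough sketch of how such public listing metadata can be turned into trade estimates (the field names and the one-sale-per-review assumption below are illustrative, not the study's actual method):

```python
from collections import Counter

# Hypothetical scraped listings: field names are illustrative,
# not the actual schema used in the OII study.
listings = [
    {"title": "Product A", "vendor": "v1", "price_usd": 40.0, "reviews": 120},
    {"title": "Product B", "vendor": "v2", "price_usd": 9.5, "reviews": 30},
    {"title": "Product C", "vendor": "v1", "price_usd": 75.0, "reviews": 8},
]

def estimated_revenue(listing):
    # Treat each public review as evidence of one completed sale --
    # a conservative lower bound on trading volume.
    return listing["reviews"] * listing["price_usd"]

# Aggregate inferred revenue per vendor.
volume_by_vendor = Counter()
for listing in listings:
    volume_by_vendor[listing["vendor"]] += estimated_revenue(listing)
```

Counting each review as one sale gives a lower bound, since not every buyer leaves a review.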

Almost paradoxically, these “hidden” dark markets allow us to make visible something that happens at a societal level that otherwise could be very hard to research. By comparison, studying the distribution of illicit street drugs would involve the painstaking investigative work of speaking to individuals and slowly trying to acquire the knowledge of what is on offer and what kind of trading activity takes place; on the darknet it’s all right there. There are of course caveats: for example, many markets allow hidden listings, which means we don’t know if we’re looking at all the activity. Also, some markets are more secretive than others. Our research is limited to platforms that are relatively open to the public.

Finally: will you be sharing some of the data you’re collecting?

Martin: This is definitely our intention! We have been scraping the largest marketplaces, and are now building a reusable dataset with geographic information at the country level. Initially, this will be used to support some of our own studies. We are currently mapping, visualizing, and analysing the data, building a fairly comprehensive picture of darknet market trades. It is also important for us to state that we’re not collecting detailed consumption profiles of participating individuals (not that we could). We are independent academic researchers, and work neither with law enforcement, nor with platform providers.

Primarily, we are interested in the activity as a large-scale global phenomenon, and for this purpose, it is sufficient to look at trading data in the aggregate. We’re interested in scenarios that might allow us to observe and think about particular societal concerns, and then measure the practices around those concerns in ways that are quite unusual, that otherwise would be very challenging. Ultimately, we would like to find ways of opening up the data to other researchers, and to the wider public. There are a number of practical questions attached to this, and the specific details are yet to be decided — so stay tuned!
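A minimal sketch of the kind of country-level aggregation described here, assuming hypothetical listing records with a self-declared "ships from" country:

```python
from collections import Counter

# Hypothetical listing records; "ships_from" is the self-declared
# origin country shown on a listing page, not verified ground truth.
records = [
    {"ships_from": "NL", "reviews": 50},
    {"ships_from": "NL", "reviews": 10},
    {"ships_from": "US", "reviews": 25},
    {"ships_from": "GB", "reviews": 15},
]

# Aggregate inferred trade counts by country...
trades_by_country = Counter()
for r in records:
    trades_by_country[r["ships_from"]] += r["reviews"]

# ...then express each country's share of observed activity.
total = sum(trades_by_country.values())
country_share = {c: n / total for c, n in trades_by_country.items()}
```

Working at this level of aggregation means no individual-level consumption profiles are ever held, consistent with the approach described above.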

Martin Dittus is a researcher and data scientist at the Oxford Internet Institute, where he studies the economic geography of darknet marketplaces. More: @dekstop

Follow the project here: https://www.oii.ox.ac.uk/research/projects/economic-geog-darknet/

Twitter: @OiiDarknet

 



Martin Dittus was talking to OII Managing Editor David Sutcliffe.

Internet Filtering: And Why It Doesn’t Really Help Protect Teens https://ensr.oii.ox.ac.uk/internet-filtering-and-why-it-doesnt-really-help-protect-teens/ Wed, 29 Mar 2017 08:25:06 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4035 Young British teens (aged 12–15) spend nearly 19 hours a week online, raising concerns for parents, educators, and politicians about the possible negative experiences they may have online. Schools and libraries have long used Internet-filtering technologies as a means of mitigating the risks adolescents face online, and major ISPs in Britain now filter new household connections by default.

However, a new article by Andrew Przybylski and Victoria Nash, “Internet Filtering Technology and Aversive Online Experiences in Adolescents”, published in the Journal of Pediatrics, finds equivocal to strong evidence that household-level Internet filtering does not reduce the chance of adolescents having recent aversive online experiences. The authors analysed data from 1030 in-home interviews conducted with early adolescents as part of Ofcom’s Children and Parents Media Use and Attitudes Report.

The Internet is now a central fixture of modern life, and the positives and negatives of online Internet use need to be balanced by caregivers. Internet filters have been adopted as a tool for limiting the negatives; however, evidence of their effectiveness is dubious. They are expensive to develop and maintain, and also carry significant informational costs: even sophisticated filters over-block, which is onerous for those seeking information about sexual health, relationships, or identity, and might have a disproportionate effect on vulnerable groups. Striking the right balance between protecting adolescents and respecting their rights to freedom of expression and information presents a formidable challenge.

In conducting their study to address this uncertainty, the authors found convincing evidence that Internet filters were not effective at shielding early adolescents from aversive experiences online. Given this finding, they propose that evidence derived from randomized controlled trials and preregistered research designs is needed to determine how far Internet-filtering technology supports or thwarts young people online. Only then will parents and policymakers be able to make an informed decision as to whether their widespread use justifies their costs.

We caught up with Andy and Vicki to discuss the implications of their study:

Ed.: Just this morning when working from home I tried to look up an article’s author and was blocked, Virgin Media presumably having decided he might be harmful to me. Where does this recent enthusiasm for default-filtering come from? Is it just that it’s a quick, uncomplicated (technological) fix, which I guess is what politicians / policy-people like?

Vicki: In many ways this is just a typical response to the sorts of moral panic which have long arisen around the possible risks of new technologies. We saw the same concerns arise with television in the 1960s, for example, and in that case the UK’s policy response was to introduce a ‘watershed’, a daily time after which content aimed at adults could be shown. I suppose I see filtering as fulfilling the same sort of policy gap, namely recognising that certain types of content can be legally available but should not be served up ‘in front of the children’.

Andy: My reading of the psychological and developmental literature suggests that filters provide a way of creating a safe walled space in schools, libraries, and homes for young people to use the internet. This of course does not mean that reading our article will be harmful!

Ed.: I suppose that children desperate to explore won’t be stopped by a filter; those who aren’t curious probably wouldn’t encounter much anyway — what is the profile of the “child” and the “harm-filtering” scenario envisaged by policy-makers? And is Internet filtering basically just aiming at the (easy) middle of the bell-curve?

Vicki: This is a really important point. Sociologists recognised many years ago that the whole concept of childhood is socially constructed, but we often forget about this when it comes to making policy. There’s a tendency for politicians, for example, either to describe children as inherently innocent and vulnerable, or to frame them as expert ‘digital natives’, yet there’s plenty of academic research which demonstrates the extent to which children’s experiences of the Internet vary by age, education, income and skill level.

This matters because it suggests a ‘one-size-fits-all’ approach may fail. In the context of this paper, we specifically wanted to check whether children with the technical know-how to get around filters experienced more negative experiences online than those who were less tech-savvy. This is often assumed to be true, but interestingly, our analysis suggests this factor makes very little difference.

Ed.: In all these discussions and policy decisions: is there a tacit assumption that these children are all growing up in a healthy, supportive (“normal”) environment — or is there a recognition that many children will be growing up in attention-poor (perhaps abusive) environments and that maybe one blanket technical “solution” won’t fit everyone? Is there also an irony that the best protected children will already be protected, and the least protected, probably won’t be?

Andy: Yes, this is an ironic and somewhat tragic dynamic. Unfortunately, because the evidence for filtering effectiveness is at such an early stage, it’s not possible to know which young people (if any) are more or less helped by filters. We need to know how effective filters are in general before moving on to see the young people for whom they are more or less helpful. We would also need to be able to explicitly define what would constitute an ‘attention-poor’ environment.

Vicki: From my perspective, this does always serve as a useful reminder that there’s a good reason why policy-makers turn to universalistic interventions, namely that this is likely to be the only way of making a difference for the hardest-to-reach children whose carers might never act voluntarily. But admirable motives are no replacement for efficacy, so as Andy notes, it would make more sense to find evidence that household internet filtering is effective, and second that it can be effective for this vulnerable group, before imposing default-on filters on all.

Ed.: With all this talk of potential “harm” to children posed by the Internet .. is there any sense of how much (specific) harm we’re talking about? And conversely .. any sense of the potential harms of over-blocking?

Vicki: No, you are right to see that the harms of Internet use are quite hard to pin down. These typically take the form of bullying, or self-harm horror stories related to Internet use. The problem is that it’s often easier to gauge how many children have been exposed to certain risky experiences (e.g. viewing pornography) than to ascertain whether or how they were harmed by this. Policy in this area often abides by what’s known as ‘the precautionary principle’.

This means that if you lack clear evidence of harm but have good reason to suspect public harm is likely, then the burden of proof is on those who would prove it is not likely. This means that policies aimed at protecting children in many contexts are often conservative, and rightly so. But it also means that it’s important to reconsider policies in the light of new evidence as it comes along. In this case we found that there is not as much evidence that Internet filters are effective at preventing exposure to negative experiences online as might be hoped.

Ed.: Stupid question: do these filters just filter “websites”, or do they filter social media posts as well? I would have thought young teens would be more likely to find or share stuff on social media (i.e. mobile) than “on a website”?

Andy: My understanding is that there are continually updated ‘lists’ of websites that contain certain kinds of content such as pornography, piracy, gambling, or drug use (see this list on Wikipedia, for example), as these categories vary by UK ISP.

Vicki: But it’s not quite true to say that household filtering packages don’t block social media. Some of the filtering options offered by the UK’s ‘Big 4’ ISPs enable parents and carers to block social media sites for ‘homework time’ for example. A bigger issue though, is that much of children’s Internet use now takes place outside the home. So, household-level filters can only go so far. And whilst schools and libraries usually filter content, public wifi or wifi in friends’ houses may not, and content can be easily exchanged directly between kids’ devices via Bluetooth or messaging apps.

Ed.: Do these blocked sites (like the webpage of that journal author I was trying to access) get notified that they have been blocked, and do they have a chance to appeal? Would a simple solution to over-blocking simply be to allow (e.g. sexual health, gender-issue, minority, etc.) sites to request that they be whitelisted, or apply for some “approved” certification?

Vicki: I don’t believe so. There are whitelisted sites, indeed that was a key outcome of an early inquiry into ‘over-blocking’ by the UK Children’s Council on Internet Safety. But in order for this to be a sufficient response, it would be necessary for all sites and apps that are subject to filtering to be notified, to allow for possible appeal. The Open Rights Group provide a tool that allows site owners to check the availability of their sites, but there is no official process for seeking whitelisting or appeal.

Ed.: And what about age verification as an alternative? (however that is achieved / validated), i.e. restricting content before it is indexed, rather than after?

Andy: To evaluate this we would need to conduct a randomised controlled trial where we tested how the application of age verification for different households, selected at random, would relate (or not) to young people encountering potentially aversive content online.

Vicki: But even if such a study could prove that age verification tools were effective in restricting access to underage Internet users, it’s not clear this would be a desirable scenario. It makes most sense for content that is illegal to access below a certain age, such as online gambling or pornography. But if content is age-gated without legal requirement, then it could prove a very restrictive tool, removing the possibility of parental discretion and failing to make any allowances for the sorts of differences in ability or maturity between children that I pointed out at the beginning.

Ed.: Similarly to the arguments over Google making content-blocking decisions (e.g. over the “right to forget”): are these filtering decisions left to the discretion of ISPs / the market / the software providers, or to some government dept / NGO? Who’s ultimately in charge of who sees what?

Vicki: Obviously when it comes to content that is illegal for children or adults to access then broad decisions about the delineation of what is illegal fall to governments and are then interpreted and applied by private companies. But when it comes to material that is not illegal, but just deemed harmful or undesirable, then ISPs and social media platforms are left to decide for themselves how to draw the boundaries and then how to apply their own policies. This increasing self-regulatory role for what Jonathan Zittrain has called ‘private sheriffs’ is often seen as a flexible and appropriate response, but it does bring reduced accountability and transparency.

Ed.: I guess it’s ironic with all this attention paid to children, that we now find ourselves in an information environment where maybe we should be filtering out (fake) content for adults as well (joke..). But seriously: with all these issues around content, is your instinct that we should be using technical fixes (filtering, removing from indexes, etc.) or trying to build reflexivity, literacy, resilience in users (i.e. coping strategies). Or both? Both are difficult.

Andy: It is as ironic as it is tragic. When I talk to parents (both Vicki and I are parents) I hear that they have been let down by the existing advice which often amounts to little more than ‘turn it off’. Their struggles have nuance (e.g. how do I know who is in my child’s WhatsApp groups? Is snapchat OK if they’re just using it amongst best friends?) and whilst general broad advice is heard, this more detailed information and support is hard for parents to find.

Vicki: I agree. But I think it’s inevitable that we’ll always need a combination of tools to deal with the incredible array of content that develops online. No technical tool will ever be 100% reliable in blocking content we don’t want to see, and we need to know how to deal with whatever gets through. That certainly means having a greater social and political focus on education but also a willingness to consider that building resilience may mean exposure to risk, which is hard for some groups to accept.

Every element of our strategy should be underpinned by whatever evidence is available. Ultimately, we also need to stop thinking about these problems as technology problems: fake news is as much a feature of increasing political extremism and alienation as online pornography is a feature of a heavily sexualised mainstream culture. And we can be certain: neither of these broader social trends will be resolved by simple efforts to block out what we don’t wish to see.

Read the full article: Przybylski, A. and Nash, V. (2017) Internet Filtering Technology and Aversive Online Experiences in Adolescents. Journal of Pediatrics. DOI: http://dx.doi.org/10.1016/j.jpeds.2017.01.063


Andy Przybylski and Vicki Nash were talking to blog editor David Sutcliffe.

Psychology is in Crisis: And Here’s How to Fix It https://ensr.oii.ox.ac.uk/psychology-is-in-crisis-and-heres-how-to-fix-it/ Thu, 23 Mar 2017 13:37:05 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4017
“Psychology emergency” by atomicity (Flickr).

Concerns have been raised about the integrity of the empirical foundation of psychological science, such as low statistical power, publication bias (i.e. an aversion to reporting statistically nonsignificant or “null” results), poor availability of data, the rate of statistical reporting errors (meaning that the data may not support the conclusions), and the blurring of boundaries between exploratory work (which creates new theory or develops alternative explanations) and confirmatory work (which tests existing theory). It seems that in psychology and communication, as in other fields of social science, much of what we think we know may be based on a tenuous empirical foundation.

However, a number of open science initiatives have been successful recently in raising awareness of the benefits of open science and encouraging public sharing of datasets. These are discussed by Malte Elson (Ruhr University Bochum) and the OII’s Andrew Przybylski in their special issue editorial: “The Science of Technology and Human Behavior: Standards, Old and New”, published in the Journal of Media Psychology. What makes this issue special is not the topic, but the scientific approach to hypothesis testing: the articles are explicitly confirmatory, that is, intended to test existing theory.

All five studies are registered reports, meaning they were reviewed in two stages: first, the theoretical background, hypotheses, methods, and analysis plans of a study were peer-reviewed before the data were collected. The studies received an “in-principle” acceptance before the researchers proceeded to conduct them. The soundness of the analyses and discussion section were reviewed in a second step, and the publication decision was not contingent on the outcome of the study: i.e. there was no bias against reporting null results. The authors made all materials, data, and analysis scripts available on the Open Science Framework (OSF), and the papers were checked using the freely available R package statcheck (see also: www.statcheck.io).
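statcheck itself is an R package, but the consistency check it automates can be illustrated in a few lines of Python: recompute the p-value implied by a reported test statistic and flag mismatches. This uses a normal approximation to the t-distribution (adequate for large samples), and the function names are ours, not statcheck’s:

```python
import math

def normal_sf(x):
    # Survival function of the standard normal, via the complementary
    # error function (stdlib only).
    return 0.5 * math.erfc(x / math.sqrt(2))

def check_reported_p(t, reported_p, tol=0.005):
    """Recompute the two-tailed p-value implied by a reported t
    statistic (normal approximation, adequate for large df) and flag
    whether it matches the reported p within a tolerance."""
    recomputed = 2 * normal_sf(abs(t))
    return recomputed, abs(recomputed - reported_p) <= tol

p, consistent = check_reported_p(2.0, 0.046)    # roughly matches
p2, consistent2 = check_reported_p(2.0, 0.01)   # does not match
```

The real package additionally parses APA-formatted results out of paper PDFs and uses the exact t, F, and chi-square distributions rather than this approximation.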

All additional (non-preregistered) analyses are explicitly labelled as exploratory. This makes it easier to see and understand what the researchers were expecting based on knowledge of the relevant literature, and what they eventually found in their studies. It also allows readers to build a clearer idea of the research process and the elements of the studies that came as inspiration after the reviews and the data collection were complete. The issue provides a clear example of how exploratory and confirmatory studies can coexist — and how science can thrive as a result. The articles published in this issue will hopefully serve as an inspiration and model for other media researchers, and encourage scientists studying media to preregister designs and share their data and materials openly.

Media research — whether concerning the Internet, video games, or film — speaks directly to everyday life in the modern world. It affects how the public forms their perceptions of media effects, and how professional groups and governmental bodies make policies and recommendations. Empirical findings disseminated to caregivers, practitioners, and educators should therefore be built on an empirical foundation with sufficient rigor. And indeed, the promise of building an empirically-based understanding of how we use, shape, and are shaped by technology is an alluring one. If adopted by media psychology researchers, this approach could support rigorous testing and development of promising theories, and retirement of theories that do not reliably account for observed data.

The authors close by noting their firm belief that incremental steps taken towards scientific transparency and empirical rigor — with changes to publishing practices to promote open, reproducible, high-quality research — will help us realize this potential.

We caught up with the editors to find out more about preregistration of studies:

Ed.: So this “crisis” in psychology (including, for example, a lack of reproducibility of certain reported results) — is it unique to psychology, or does it extend more generally to other (social) sciences? And how much will is there in the community to do something about it?

Andy: Absolutely not. There is strong evidence in most social and medical sciences that computational reproducibility (i.e. re-running the code / data) and replicability (i.e. re-running the study) are much lower than we might expect. In psychology and medical research there is a lot of passion and expertise focused on improving the evidence base. We’re cautiously optimistic that researchers in allied fields such as computational social science will follow suit.

Malte: It’s important to understand that a failure to successfully replicate a previous finding is not a problem for scientific progress. Quite the contrary: it tells us that previously held assumptions or predictions must be revisited. The number of replications in psychology’s flagship journals is still not overwhelming, but the research community has begun to value and incentivize this type of research.

Ed.: It’s really impressive not just what you’ve done with this special issue (and the intentions behind it), but also that the editor gave you free rein to do so — and also to investigate and report on the state of that journal’s previously published articles, including re-running stats, in what you describe as an act of “sincere self-reflection” on the part of the journal. How much acceptance is there in the field (psychology, and beyond) that things need to be shaken up?

Malte: I think it is uncontroversial to say that, as psychologists adapt their research practices (by preregistering their hypotheses, conducting high-powered replications, and sharing their data and materials), the reliability and quality of the scientific evidence they produce increases. However, we need to be careful not to depreciate the research generated before these changes. But that is exactly what science can help with: meta-scientific analyses of already published research, be it in our editorial or elsewhere, provide guidance on how it may (or may not) inform future studies on technology and human behavior.

Andy: We owe a lot to the editor-in-chief Nicole Krämer and the experts that reviewed submissions to the special issue. This hard work has helped us and the authors deliver a strong set of studies with respect to technology effects on behaviour. We are proud to say that registered reports is now a permanent submission track at the Journal of Media Psychology and 35 other journals. We hope this can help set an example for other areas of quantitative social science which may not yet realise they face the same serious challenges.

Ed.: It’s incredibly annoying to encounter papers in review where the problem is clearly that the study should have been designed differently from the start. The authors won’t start over, of course, so you’re just left with a weak paper that the authors will be desperate to offload somewhere, but that really shouldn’t be published: i.e. a massive waste of everyone’s time. What structural changes are needed to mainstream pre-registration as a process, i.e. for design to be reviewed first, before any data is collected or analysed? And what will a tipping point towards preregistration look like, assuming it comes?

Andy: We agree that this experience is aggravating for researchers invested in both the basic and applied aspects of science. We think this might come down to a carrot-and-stick approach. For quantitative science, pre-registration and replication could be a requirement for articles to be considered in the Research Excellence Framework (REF) and as part of UK and EU research council funding. For example, the Wellcome Trust now provides an open access open science portal for researchers supported by their funding (carrots). In terms of sticks, it may be the case that policy makers and the general public will become more sophisticated over time and simply will not value work that is not transparently conducted and shared.

Ed.: How aware / concerned are publishers and funding bodies of this crisis in confidence in psychology as a scientific endeavour? Will they just follow the lead of others (e.g. groups like the Center for Open Science), or are they taking a leadership role themselves in finding a way forward?

Malte: Funding bodies are arguably another source of particularly tasty carrots. It is in their vital interest that funded research is relevant and conducted rigorously, but also that it is sustainable. They depend on reliable groundwork to base new research projects on. Without it, funding becomes, essentially, a gambling operation. Some organizations are quicker than others to take a lead, such as The Netherlands Organisation for Scientific Research (NWO), which has launched a Replication Studies pilot programme. I’m optimistic we will see similar efforts elsewhere.

Andy: We are deeply concerned that the general public will see that science and scientists are missing a golden opportunity to correct itself and ourselves. Like scientists, funding bodies are adaptive, and we (and others) speak directly to them about these challenges to the medical and social sciences. The public and research councils invest substantial resources in science and it is our responsibility to do our best and to deliver the best science we can. Initiatives like the Center for Open Science are key to this because they help scientists build tools to pool our resources and develop innovative methods for strengthening our work.

Ed.: I assume the end goal of this movement is to embed it in the structure of science as-it-is-done? i.e. for good journals and major funding bodies to make pre-registration of studies a requirement, and for a clear distinction to be drawn between exploratory and confirmatory studies? Out of curiosity, what does (to pick a random journal) Nature make of all this? And the scientific press? Is there much awareness of preregistration as a method?

Malte: Conceptually, preregistration is just another word for how the scientific method is taught already: hypotheses are derived from theory, and data are collected to test them. Predict, verify, replicate. Matching this concept with a formal procedure at some organizational level (such as funding bodies or journals) seems only consequential. Thanks to scientists like Chris Chambers, who is promoting the Registered Reports format, we can be confident that the number of journals offering this track will keep growing.

Andy: We’re excited to say that parts of these mega-journals and some science journalists are on board. Nature Human Behaviour now provides registered reports as a submission track, and a number of science journalists including Ed Yong (@edyong209), Tom Chivers (@TomChivers), Neuroskeptic (@Neuro_Skeptic), and Jesse Singal (@jessesingal) are leading the way with critical and on-point work that highlights the risks associated with the replication crisis and opportunities to improve reproducibility.

Ed.: Finally: what would you suggest to someone wanting to make sure they do a good study, but who is not sure where to begin with all this: what are the main things they should read and consider?

Andy: That’s a good question; the web is a great place to start. To learn more about registered reports and why they are important see this, and to learn about their place in robust science see this. To see how you can challenge yourself to do a pre-registered study and earn $1,000 see this, and to do a deep dive into open scientific practice see this.

Malte: Yeah, what Andy said. Also, I would thoroughly recommend joining social networks (Twitter, or the two sister groups Psychological Methods and PsychMAP on Facebook) where these issues are the subject of lively discussion.

Ed.: Anyway .. congratulations to you both, the issue authors, and the journal’s editor-in-chief, on having done a wonderful thing!

Malte: Thank you! We hope the research reports in this issue will serve as an inspiration and model for other psychologists.

Andy: Many thanks, we are doing our best to make the social sciences better and more reliable.

Read the full editorial: Elson, M. and Przybylski, A. (2017) The Science of Technology and Human Behavior: Standards, Old and New. Journal of Media Psychology. DOI: 10.1027/1864-1105/a000212


Malte Elson and Andrew Przybylski were talking to blog editor David Sutcliffe.

What Impact is the Gig Economy Having on Development and Worker Livelihoods? https://ensr.oii.ox.ac.uk/what-impact-is-the-gig-economy-having-on-development-and-worker-livelihoods/ Mon, 20 Mar 2017 07:46:43 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3995
There are imbalances in the relationship between supply and demand of digital work, with the vast majority of buyers located in high-income countries (pictured). See the full article for details.

As David Harvey famously noted, workers are unavoidably place-based because “labor-power has to go home every night.” But the widespread use of the Internet has changed much of that. The confluence of rapidly spreading digital connectivity, skilled but under-employed workers, the existence of international markets for labour, and the ongoing search for new outsourcing destinations, has resulted in organisational, technological, and spatial fixes for virtual production networks of services and money. Clients, bosses, workers, and users of the end-products of work can all now be located in different corners of the planet.

A new article by Mark Graham, Isis Hjorth and Vili Lehdonvirta, “Digital labour and development: impacts of global digital labour platforms and the gig economy on worker livelihoods”, published in Transfer, discusses the implications of the spatial unfixing of work for workers in some of the world’s economic margins, and reflects on some of the key benefits and costs associated with these new digital regimes of work. Drawing on a multi-year study with digital workers in Sub-Saharan Africa and South-east Asia, it highlights four key concerns for workers: bargaining power, economic inclusion, intermediated value chains, and upgrading.

As ever more policy-makers, governments and organisations turn to the gig economy and digital labour as an economic development strategy to bring jobs to places that need them, it is important to understand how this might influence the livelihoods of workers. The authors show that although there are important and tangible benefits for a range of workers, there is also a range of risks and costs that could negatively affect the livelihoods of digital workers. They conclude with a discussion of four broad strategies (certification schemes, organising digital workers, regulatory strategies, and democratic control of online labour platforms) that could improve conditions and livelihoods for digital workers.

We caught up with the authors to explore the implications of the study:

Ed.: Shouldn’t increased digitisation of work also increase transparency (i.e. tracking, auditing, etc.) around this work? That is, shouldn’t digitisation largely be a good thing?

Mark: It depends. One of the goals of our research is to ask who actually wins and loses from the digitalisation of work. A good thing for one group (e.g. employers in the Global North) isn’t necessarily automatically a good thing for another group (e.g. workers in the Global South).

Ed.: You mention market-based strategies as one possible way to improve transparency around working conditions along value chains: do you mean something like a “Fairtrade” certification for digital work, i.e. creating a market for “fair work”?

Mark: Exactly. At the moment, we can make sure that the coffee we drink or the chocolate we eat is made ethically. But we have no idea if the digital services we use are. A ‘fair work’ certification system could change that.

Ed.: And what sorts of work are these people doing? Is it the sort of stuff that could be very easily replaced by advances in automation (natural language processing, pattern recognition etc.)? i.e. is it doubly precarious, not just in terms of labour conditions, but also in terms of the very existence of the work itself?

Mark: Yes, some of it is. Ironically, some of the paid work that is done is training algorithms to do work that used to be done by humans.

Ed.: You say that “digital workers have been unable to build any large-scale or effective digital labour movements” — is that because (unlike e.g. farm work which is spatially constrained), employers can very easily find someone else anywhere in the world who is willing to do it? Can you envisage the creation of any effective online labour movement?

Mark: A key part of the problem for workers here is the economic geography of this work. A worker in Kenya knows that they can be easily replaced by workers on the other side of the planet. The potential pool of workers willing to take any job is massive. For digital workers to have any sort of effective movement in this context means looking to what I call geographic bottlenecks in the system: places in which work isn’t solely in a global digital cloud. This can mean looking to things like organising and picketing the headquarters of firms, clusters of workers in particular places, or digital locations (the web presence of firms). I’m currently working on a new publication that deals with these issues in a bit more detail.

Ed.: Are there any parallels between the online gig work you have studied and ongoing issues with “gig work” services like Uber and Deliveroo (e.g. undercutting of traditional jobs, lack of contracts, precarity)?

Mark: A commonality in all of those cases is that platforms become intermediaries in between clients and workers. This means that rather than being employees, workers tend to be self-employed: a situation that offers workers freedom and flexibility, but also comes with significant risks to the worker (e.g. no wages if they fall ill).

Read the full article: Graham, M., Hjorth, I. and Lehdonvirta, V. (2017) Digital Labour and Development: Impacts of Global Digital Labour Platforms and the Gig Economy on Worker Livelihoods. Transfer. DOI: 10.1177/1024258916687250

Read the full report: Graham, M., Lehdonvirta, V., Wood, A., Barnard, H., Hjorth, I., Simon, D. P. (2017) The Risks and Rewards of Online Gig Work At The Global Margins [PDF]. Oxford: Oxford Internet Institute.

The article draws on findings from the research project “Microwork and Virtual Production Networks in Sub-Saharan Africa and South-east Asia”, funded by the International Development Research Centre (IDRC), grant number: 107384-001.


Mark Graham was talking to blog editor David Sutcliffe.

]]>
Five Pieces You Should Probably Read On: Reality, Augmented Reality and Ambient Fun https://ensr.oii.ox.ac.uk/five-pieces-you-should-probably-read-on-reality-augmented-reality-and-ambient-fun/ Fri, 03 Mar 2017 10:59:07 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3979

This is the third post in a series that will uncover great writing by faculty and students at the Oxford Internet Institute, things you should probably know, and things that deserve to be brought out for another viewing. This week: Reality, Augmented Reality and Ambient Fun!

The addictive gameplay of Pokémon GO has led to police departments warning players to be more careful about revealing their locations, players injuring themselves or even finding dead bodies, and the Holocaust Museum asking people to play elsewhere. Our environments are increasingly augmented with digital information: but how do we assert our rights over how and where this information is used? And should we be paying more attention to the design of persuasive technologies in increasingly attention-scarce environments? Or should we maybe just bin all our devices and pack ourselves off to digital detox camp?

 

1. James Williams: Bring Your Own Boundaries: Pokémon GO and the Challenge of Ambient Fun

23 July 2016 / 2500 words / 12 min / Gross misuses of the “Poké-” prefix: 6

“The slogan of the Pokémon franchise is ‘Gotta catch ‘em all!’ This phrase has always seemed to me an apt slogan for the digital era as a whole. It expresses an important element of the attitude we’re expected to have as we grapple with the Sisyphean boulder of information abundance using our woefully insufficient cognitive toolsets.”

Pokémon GO signals the first mainstream adoption of a type of game (always on, always with you) that requires you to ‘Bring Your Own Boundaries’, says James Williams. Regulation of such games falls on the user, presenting us with a unique opportunity to advance the conversation about the ethics of self-regulation and self-determination in environments of increasingly persuasive technology.

 

2. James Williams: Orwell, Huxley, Banksy

24 May 2014 / 1000 words / 5 min

“Orwell worried that what we fear could ultimately come to control us: the “boot stamping on a human face—forever.” Huxley, on the other hand, felt that what we love was more likely to control us — by seducing us and engineering our compliance from within — and was therefore more deserving of a wary eye. In the age of the Internet, this dichotomy is reflected in the interplay between information and attention.”

You could say that the core challenge of the Internet (when information overload leads to scarcity of attention) is that it optimizes more for our impulses than our intentions, says James Williams, who warns that we could significantly overemphasize informational challenges to the neglect of attentional ones. In Brave New World, the defenders of freedom had “failed to take into account man’s almost infinite appetite for distractions.” In the digital era, we are making the same mistake, says James: we need better principles and processes to help designers make products more respectful of users’ attention.

 

3. James Williams: Staying free in a world of persuasive technologies

29 July 2013 / 1500 words / 7 min

“The explosion of media and information has made it harder for people to be intentional or reflective about their goals and priorities in life. We’re living through a crisis of distraction. The convergence of all these trends suggests that we could increasingly live our lives in environments of high persuasive power. To me, the biggest ethical questions are those that concern individual freedom and autonomy. When, exactly, does a “nudge” become a “push”?”

Technologies are increasingly being designed to change the way we think and behave: the Internet is now part of the background of human experience, and rapid advances in analytics are enabling optimisation of technologies to reach greater levels of persuasiveness. The ethical questions raised aren’t new, says James Williams, but the environment in which we’re asking them makes them much more urgent to address.

 

4. Mark Graham, Joe Shaw: An Informational Right to the City? [The New Internationalist]

8 February 2017 / 1000 words / 5 min

“Contemporary cities are much more than bricks and mortar; streets and pipes. They are also their digital presences – abstract presences which can reproduce and change our material reality. If you accept this premise, then we need to ask important questions about what rights citizens have to not just public and private spaces, but also their digital equivalents.”

It’s time for the struggle for more egalitarian rights to the city to move beyond a focus on material spaces and into the realm of digital ones, say Mark Graham and Joe Shaw. And we can undermine and devalue the hold of large companies over urban information by changing our own behaviour, they say: by rejecting some technologies, by adopting alternative service providers, and by supporting initiatives to develop platforms that operate on a more transparent basis.

 

5. Theodora Sutton: Exploring the world of digital detoxing

2 March 2017 / 2000 words / 10 min

“The people who run Camp Grounded would tell you themselves that digital detoxing is not really about digital technology. That’s just the current scapegoat for all the alienating aspects of modern life. But at the same time I think it is a genuine conversation starter about our relationship with technology and how it’s designed.”

As our social interactions become increasingly entangled with the online world, some people are insisting on the benefits of disconnecting entirely from digital technology: getting back to so-called “real life”. In this piece, Theodora Sutton explores the digital detoxing community in the San Francisco Bay Area, getting behind the rhetoric of the digital detox to understand the views and values of those wanting to re-examine the role of technology in their lives.

 

The Authors

James Williams is an OII doctoral student. He studies the ethical design of persuasive technology. His research explores the complex boundary between persuasive power and human freedom in environments of high technological persuasion.

Mark Graham is the Professor of Internet Geography at the OII. His research focuses on Internet and information geographies, and the overlaps between ICTs and economic development.

Joe Shaw is an OII DPhil student and Research Assistant. His research is concerned with the geography of information, property market technologies (PropTech) and critical urbanism.

Theodora Sutton is an OII DPhil student. Her research in digital anthropology examines digital detoxing and the widespread cultural narrative that sees digital sociality as inherently ‘lesser’ or less ‘natural’ than previous forms of communication.

 

Coming up! .. The platform economy / Power and development / Internet past and future / Government / Labour rights / The disconnected / Ethics / Staying critical

]]>
Estimating the Local Geographies of Digital Inequality in Britain: London and the South East Show Highest Internet Use — But Why? https://ensr.oii.ox.ac.uk/estimating-the-local-geographies-of-digital-inequality-in-britain/ Wed, 01 Mar 2017 11:39:54 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3962 Despite the huge importance of the Internet in everyday life, we know surprisingly little about the geography of Internet use and participation at sub-national scales. A new article on Local Geographies of Digital Inequality by Grant Blank, Mark Graham, and Claudio Calvino published in Social Science Computer Review proposes a novel method to calculate the local geographies of Internet usage, employing Britain as an initial case study.

In the first attempt to estimate Internet use at a small-area level, they combine data from a sample survey, the 2013 Oxford Internet Survey (OxIS), with the 2011 UK census, employing small area estimation to estimate Internet use in small geographies in Britain. (Read the paper for more on this method, and discussion of why there has been little work on the geography of digital inequality.)

There are two major reasons to suspect that geographic differences in Internet use may be important: apparent regional differences and the urban-rural divide. The authors do indeed find a regional difference: the lowest Internet use is in the North East, followed by central Wales; the highest is in London and the South East. But interestingly, geographic differences become non-significant after controlling for demographic variables (age, education, income, etc.). That is, demographics matter more than simply where you live, in terms of the likelihood that you’re an Internet user.

Britain has one of the largest Internet economies in the developed world, and the Internet contributes an estimated 8.3 percent to Britain’s GDP. By reducing a range of geographic frictions and allowing access to new customers, markets and ideas it strongly supports domestic job and income growth. There are also personal benefits to Internet use. However, these advantages are denied to people who are not online, leading to a stream of research on the so-called digital divide.

We caught up with Grant Blank to discuss the policy implications of this marked disparity in (estimated) Internet use across Britain.

Ed.: The small-area estimation method you use combines the extreme breadth but shallowness of the national census, with the relative lack of breadth (2000 respondents) but extreme richness (550 variables) of the OxIS survey. Doing this allows you to estimate things like Internet use in fine-grained detail across all of Britain. Is this technique in standard use in government, to understand things like local demand for health services etc.? It seems pretty clever..

Grant: It is used by the government, but not extensively. It is complex and time-consuming to use well, and it requires considerable statistical skills. These have hampered its spread. It probably could be used more than it is — your example of local demand for health services is a good idea..
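The logic of the method can be illustrated with a toy “synthetic estimation” sketch, the simplest form of small area estimation: usage rates estimated from a survey for each demographic cell are weighted by each area’s census counts for those cells to produce an area-level estimate. All figures below are invented for illustration; they are not OxIS or census values, and the published model is considerably richer.

```python
# Toy synthetic estimation: combine survey-based usage rates with
# census population counts to estimate Internet use per small area.
# All numbers are hypothetical, for illustration only.

# Usage rates per demographic cell (age band x education), as a
# sample survey might estimate them.
survey_rates = {
    ("16-34", "degree"): 0.98,
    ("16-34", "no_degree"): 0.92,
    ("35-64", "degree"): 0.93,
    ("35-64", "no_degree"): 0.78,
    ("65+", "degree"): 0.70,
    ("65+", "no_degree"): 0.38,
}

# Census counts per cell for two hypothetical small areas.
census_counts = {
    "Area A": {("16-34", "degree"): 400, ("16-34", "no_degree"): 300,
               ("35-64", "degree"): 350, ("35-64", "no_degree"): 450,
               ("65+", "degree"): 100, ("65+", "no_degree"): 200},
    "Area B": {("16-34", "degree"): 150, ("16-34", "no_degree"): 250,
               ("35-64", "degree"): 200, ("35-64", "no_degree"): 500,
               ("65+", "degree"): 120, ("65+", "no_degree"): 480},
}

def estimate_use(area_counts, rates):
    """Population-weighted average of cell-level usage rates."""
    total = sum(area_counts.values())
    users = sum(n * rates[cell] for cell, n in area_counts.items())
    return users / total

for area, counts in census_counts.items():
    print(f"{area}: estimated {estimate_use(counts, survey_rates):.1%} online")
```

The sketch makes the intuition concrete: the younger, better-educated Area A gets a higher estimate than Area B purely because of its demographic mix, which is exactly why the demographic controls in the article absorb most of the apparent geographic variation.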

Ed.: You say this method works for Britain because OxIS collects information based on geographic area (rather than e.g. randomly by phone number) — so we can estimate things geographically for Britain that can’t be done for other countries in the World Internet Project (including the US, Canada, Sweden, Australia). What else will you be doing with the data, based on this happy fact?

Grant: We have used a straightforward measure of Internet use versus non-use as our dependent variable. Similar techniques could predict and map a variety of other variables. For example, we could take a more nuanced view of how people use the Internet. The patterns of mobile use versus fixed-line use may differ geographically and could be mapped. We could separate work-only users, teenagers using social media, or other subsets. Major Internet activities could be mapped, including such things as entertainment use, information gathering, commerce, and content production. In addition, the amount of use and the variety of uses could be mapped. All these are major issues and their geographic distribution has never been tracked.

Ed.: And what might you be able to do by integrating into this model another layer of geocoded (but perhaps not demographically rich or transparent) data, e.g. geolocated social media / Wikipedia activity (etc.)?

Grant: The strength of the data we have is that it is representative of the UK population. The other examples you mention, like Wikipedia activity or geolocated social media, are all done by smaller, self-selected groups of people, who are not at all representative. One possibility would be to show how and in what ways they are unrepresentative.

Ed.: If you say that Internet use actually correlates to the “usual” demographics, i.e. education, age, income — is there anything policy makers can realistically do with this information? i.e. other than hope that people go to school, never age, and get good jobs? What can policy-makers do with these findings?

Grant: The demographic characteristics are things that don’t change quickly. These results point to the limits of the government’s ability to move people online. They suggest that we will never see 100% of the UK population online. This raises the question: what are realistic expectations for online activity? I don’t know the answer to that, but it is an important question that is not easily addressed.

Ed.: You say that “The first law of the Internet is that everything is related to age”. When are we likely to have enough longitudinal data to understand whether this is simply because older people never had the chance to embed the Internet in their lives when they were younger, or whether it is indeed the case that older people inherently drop out. Will this age-effect eventually diminish or disappear?

Grant: You ask an important but unresolved question. In the language of the social sciences: is the decline in Internet use with age an age effect or a cohort effect? An age effect means that the Internet becomes less valuable as people age, so the decline in use with age simply reflects the Internet’s declining value; if this explanation is true, the age effect will persist into the indefinite future. A cohort effect implies that older people tend to use the Internet less because fewer of them learned to use the Internet in school or at work; as older cohorts are replaced by cohorts of active Internet users, use will no longer be associated with age, and the decline will eventually disappear. We can address this question using data from the Oxford Internet Survey, but it is not a small area estimation problem.

Read the full article: Blank, G., Graham, M., and Calvino, C. 2017. Local Geographies of Digital Inequality. Social Science Computer Review. DOI: 10.1177/0894439317693332.

This work was supported by the Economic and Social Research Council [grant ES/K00283X/1]. The data have been deposited in the UK Data Archive under the name “Geography of Digital Inequality”.


Grant Blank was speaking to blog editor David Sutcliffe.

]]>
Information Architecture meets the Philosophy of Information https://ensr.oii.ox.ac.uk/information-architecture-meets-the-philosophy-of-information/ Fri, 01 Jul 2016 08:41:57 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3832 On June 27 the Ethics and Philosophy of Information Cluster at the OII hosted a workshop to foster a dialogue between the discipline of Information Architecture (IA) and the Philosophy of Information (PI), and advance the practical and theoretical basis for how we conceptualise and shape the infosphere.

A core topic of concern is how we should develop better principles to understand design practices. This need surfaces when IA looks to other disciplines, such as linguistics, design thinking, new media studies and architecture, to develop the theoretical foundations that can back and/or inform its practice. Within the philosophy of information, the need to understand general principles of (conceptual or informational) design arises in relation to the question of how we develop and adopt the right level of abstraction (what Luciano Floridi calls the logic of design). This suggests a two-way interaction between PI and IA. On the one hand, PI can become part of the theoretical background that informs Information Architecture, as one of the disciplines from which it can borrow concepts and theories. On the other hand, the philosophy of information can benefit from the rich practice of IA and the growing body of critical reflection on how, within a particular context, access to online information is best designed.

Throughout the workshop, two themes emerged:

  1. The need for more integrated ways to reason about and describe (a) informational artefacts and infrastructures, (b) the design processes that lead to their creation, and (c) the requirements to which they should conform. This presupposes a convergence between the things we build (informational artefacts) and the conceptual apparatus we rely on (the levels of abstraction we adopt), which surfaces in IA as well as in PI. At the same time, it also calls for novel frameworks and linguistic abstractions. This need to reframe the ways that we observe informational phenomena could be discerned in several contributions to the workshop. It surfaced in the more theoretically oriented contributions of Andrew Hinton, Jason Hobbs & Terence Fenn, Dan Klyn, and Andrea Resmini, which, for instance, questioned the role of language and place or described frameworks to direct our thinking about designs and problems within their broader context, but it also played a role in the practical challenges described by Vicky Buser and Konstantin Weiss.
  2. The gap, and resulting need to negotiate, between human- and computer-oriented conceptual frameworks that are used to describe and manipulate reality. Whereas this theme was explicitly brought up in Luke Church’s comparison of how end-user programming, machine learning and interactive visualisation each address this problem, it quickly leads us back to some of the broader concerns in IA. It is, straightforwardly, an instance of how computer-oriented frameworks start to shape infrastructures that are meant to be manipulated by humans (i.e. when technical requirements dictate the shape of the user interface), but it indirectly also hints at the challenges associated with the design of cross-channel interactions and with understanding how information flows between different levels of abstraction.

A final concern cuts across both of these themes and deserves mention as well: the need for a language for critique.

This workshop was organised by David Peter Simon, Luciano Floridi and Patrick Allo, and was part of the “Logics of Visualisation” project.

Photograph of workshop participants by David Peter Simon.
]]>