Can “We the People” really help draft a national constitution? (sort of..) https://ensr.oii.ox.ac.uk/can-we-the-people-really-help-draft-a-national-constitution-sort-of/ Thu, 16 Aug 2018 14:26:18 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4687 As innovations like social media and open government initiatives have become an integral part of politics in the twenty-first century, there is increasing interest in the possibility of citizens directly participating in the drafting of legislation. Indeed, there is a clear trend of greater public participation in the process of constitution making, and with the growth of e-democracy tools, this trend is likely to continue. However, this view is certainly not universally held, and a number of recent studies have been much more skeptical about the value of public participation, questioning whether it has any real impact on the text of a constitution.

Following the banking crisis, and a groundswell of popular opposition to the existing political system in 2009, the people of Iceland embarked on a unique process of constitutional reform. The entire drafting process was opened to public input and scrutiny, and these efforts culminated in Iceland’s 2011 draft crowdsourced constitution: reputedly the world’s first. In his Policy & Internet article “When Does Public Participation Make a Difference? Evidence From Iceland’s Crowdsourced Constitution”, Alexander Hudson examines the impact that the Icelandic public had on the development of the draft constitution. He finds that almost 10 percent of the written proposals submitted generated a change in the draft text, particularly in the area of rights.

This remarkably high number is likely explained by the isolation of the drafters from both political parties and special interests, making them more reliant on and open to input from the public. However, although this would appear to be an example of successful public crowdsourcing, the new constitution was ultimately rejected by parliament. Iceland’s experiment with participatory drafting therefore demonstrates the possibility of successful online public engagement — but also the need to connect the masses with the political elites. It was the disconnect between these groups that triggered the initial protests and constitutional reform, but also that led to its ultimate failure.

We caught up with Alexander to discuss his findings.

Ed: We know from Wikipedia (and other studies) that group decisions are better, and crowds can be trusted. However, I guess (re US, UK) I also feel increasingly nervous about the idea of “the public” having a say over anything important and binding. How do we distribute power and consultation, while avoiding populist chaos?  

Alexander: That’s a large and important question, which I can probably answer only in part. One thing we need to be careful of is what kind of public we are talking about. In many cases, we view self-selection as a bad thing — it can’t be representative. However, in cases like Wikipedia, we see self-selected individuals with specialized knowledge and an uncommon level of interest collaborating. I would suggest that there is an important difference between the kind of decisions that are made by careful and informed participants in citizens’ juries, deliberative polls, or Wikipedia editing, and the oversimplified binary choices that we make in elections or referendums.

So, while there is research to suggest that large numbers of ordinary people can make better decisions, there are some conditions in terms of prior knowledge and careful consideration attached to that. I have high hopes for these more deliberative forms of public participation, but we are right to be cautious about referendums. The Icelandic constitutional reform process actually involved several forms of public participation, including two randomly selected deliberative fora, self-selected online participation, and a popular referendum with several questions.

Ed: A constitution is a very technical piece of text: how much could non-experts realistically contribute to its development — or was there also contribution from specialised interest groups? Presumably there was a team of lawyers and drafters managing the process? 

Alexander: All of these things were going on in Iceland’s drafting process. In my research here and on a few other constitution-making processes in other countries, I’ve been impressed by the ability of citizens to engage at a high level with fundamental questions about the nature of the state, constitutional rights, and legal theory. Assuming a reasonable level of literacy, people are fully capable of reading some literature on constitutional law and political philosophy, and writing very well-informed submissions that express what they would like to see in the constitutional text. A small, self-selected set of the public in many countries seeks to engage in spirited and for the most part respectful debate on these issues. In the Icelandic case, these debates have continued from 2009 to the present.

I would also add that public interest is not distributed uniformly across all the topics that constitutions cover. Members of the public show much more interest in discussing issues of human rights, and have more success in seeing proposals on that theme included in the draft constitution. Some NGOs were involved in submitting proposals to the Icelandic Constitutional Council, but interest groups do not appear to have been a major factor in the process. Unlike some constitution-making processes, the Icelandic Constitutional Council had a limited staff, and the drafters themselves were very engaged with the public on social media.

Ed: I guess Iceland is fairly small, but also unusually homogeneous. That helps, presumably, in creating a general consensus across a society? Or will party / political leaning always tend to trump any sense of common purpose and destiny, when defining the form and identity of the nation?

Alexander: You are certainly right that Iceland is unusual in these respects, and this raises important questions of what this is a case of, and how the findings here can inform us about what might happen in other contexts. I would not say that the Icelandic people reached any sort of broad, national-level consensus about how the constitution should change. During the early part of the drafting process, it seems that those who had strong disagreements with what was taking place absented themselves from the proceedings. They did turn up later to some extent (especially after the 2012 referendum), and sought to prevent this draft from becoming law.

Where the small size and homogeneous population really came into play in Iceland is through the level of knowledge that those who participated had of one another before entering into the constitution-making process. While this has been overemphasized in some discussions of Iceland, there are communities of shared interests where people all seem to know each other, or at least know of each other. This makes forming new societies, NGOs, or interest groups easier, and probably helped to launch the constitution-making project in the first place.

Ed: How many people were involved in the process — and how were bad suggestions rejected, discussed, or improved? I imagine there must have been divisive issues, that someone would have had to arbitrate? 

Alexander: The number of people who interacted with the process in some way, either by attending one of the public forums that took place early in the process, voting in the election for the Constitutional Council, or engaging with the process on social media, is certainly in the tens of thousands. In fact, one of the striking things about this case is that 522 people stood for election to the 25-member Constitutional Council that drafted the new constitution. So there was certainly a high level of interest in participating in this process.

My research here focused on the written proposals that were posted to the Constitutional Council’s website. 204 individuals participated in that more intensive way. As the members of the Constitutional Council tell it, they would read some of the comments on social media, and the formal submissions on their website during their committee meetings, and discuss amongst themselves which ideas should be carried forward into the draft. The vast majority of the submissions were well-informed, on topic, and conveyed a collegial tone. In this case at least, there was very little of the kind of abusive participation that we observe in some online networks. 

Ed: You say that despite the success in creating a crowdsourced constitution (that passed a public referendum), it was never ratified by parliament — why is that? And what lessons can we learn from this?

Alexander: Yes, this is one of the most interesting aspects of the whole thing for scholars, and certainly a source of some outrage for those Icelanders who are still active in trying to see this draft constitution become law. Some of this relates to the specifics of Iceland’s constitutional amendment process (which disincentivizes parliament from approving changes between elections), but I think that there are also a couple of broadly applicable things going on here. First, the constitution-making process arose as a response to the way that the Icelandic government was perceived to have failed in governing the financial system in the late 2000s. By the time a last-ditch attempt to bring the draft constitution up for a vote in parliament occurred right before the 2013 election, almost five years had passed since the crisis that began this whole saga, and the economic situation had begun to improve. So legislators were not feeling pressure to address those issues any more.

Second, since political parties were not active in the drafting process, too few members of parliament had a stake in the issue. If one of the larger parties had taken ownership of this draft constitution, we might have seen a different outcome. I think this is one of the most important lessons from this case: if the success of the project depends on action by elite political actors, they should be involved in the earlier stages of the process. For various reasons, the Icelanders chose to exclude professional politicians from the process, but that meant that the Constitutional Council had too few friends in parliament to ratify the draft.

Read the full article: Hudson, A. (2018) When Does Public Participation Make a Difference? Evidence From Iceland’s Crowdsourced Constitution. Policy & Internet 10 (2) 185-217. DOI: https://doi.org/10.1002/poi3.167

Alexander Hudson was talking to blog editor David Sutcliffe.

Bursting the bubbles of the Arab Spring: the brokers who bridge ideology on Twitter https://ensr.oii.ox.ac.uk/bursting-the-bubbles-of-the-arab-spring-the-brokers-who-bridge-ideology-on-twitter/ Fri, 27 Jul 2018 11:50:34 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4679 Online activism has become increasingly visible, with social media platforms being used to express protest and dissent from the Arab Spring to #MeToo. Scholarly interest in online activism has grown with its use, together with disagreement about its impact. Do social media really challenge traditional politics? Some claim that social media have had a profound and positive effect on modern protest — the speed of information sharing making online networks highly effective in building revolutionary movements. Others argue that this activity is merely symbolic: online activism has little or no impact, dilutes offline activism, and weakens social movements. Given online activity doesn’t involve the degree of risk, trust, or effort required on the ground, they argue that it can’t be considered to be “real” activism. In this view, the Arab Spring wasn’t simply a series of “Twitter revolutions”.

Despite much work on offline social movements and coalition building, few studies have used social network analysis to examine the influence of brokers among online activists (i.e. those who act as a bridge between different ideological groups), or their role in information diffusion across a network. In her Policy & Internet article “Brokerage Roles and Strategic Positions in Twitter Networks of the 2011 Egyptian Revolution”, Deena Abul-Fottouh tests whether the social movements theory of networks and coalition building — developed to explain brokerage roles in offline networks, between established parties and organisations — can also be used to explain what happens online.

Social movements theory suggests that actors who occupy an intermediary structural position between different ideological groups are more influential than those embedded only in their own faction. That is, the “bridging ties” that link across political ideologies have a greater impact on mobilization than the bonding ties within a faction. Indeed, examining the Egyptian revolution and ensuing crisis, Deena finds that these online brokers were more evident during the first phase of movement solidarity between liberals, Islamists, and socialists than in the period of schism and crisis (2011-2014) that followed the initial protests. However, she also finds that the online brokers didn’t match the brokers on the ground: they played different roles, complementing rather than mirroring each other in advancing the revolutionary movement.
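
For readers less familiar with the network terminology, the sketch below shows how a broker can be picked out in a toy Twitter-style network: betweenness centrality captures how often a user sits on the paths connecting others, while counting ties to other ideological groups separates bridging from bonding ties. This is only an illustration of the general idea, not Deena’s actual analysis pipeline; the network, group labels, and node names are invented.

```python
# Illustrative sketch only (not the article's method): identify potential brokers
# in a toy Twitter-style network by combining betweenness centrality with a count
# of bridging ties (ties to users from a different ideological group).
import networkx as nx

# Hypothetical ties among activists; the group labels are assumed for illustration.
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "F"), ("B", "E")]
group = {"A": "liberal", "B": "liberal", "C": "Islamist",
         "D": "Islamist", "E": "socialist", "F": "socialist"}

G = nx.Graph(edges)
betweenness = nx.betweenness_centrality(G)  # how often a node lies on shortest paths

for node in G.nodes():
    # Bridging ties cross ideological lines; bonding ties stay within the group.
    bridging = sum(1 for nbr in G.neighbors(node) if group[nbr] != group[node])
    bonding = G.degree(node) - bridging
    print(f"{node}: betweenness={betweenness[node]:.2f}, "
          f"bridging={bridging}, bonding={bonding}")
```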

We caught up with Deena to discuss her findings:

Ed: Firstly: is the “Arab Spring” a useful term? Does it help to think of the events that took place across parts of the Middle East and North Africa under this umbrella term — which I suppose implies some common cause or mechanism?

Deena: Well, I believe it’s useful to an extent. It helps describe some positive common features that existed in the region, such as dissatisfaction with the existing regimes, a dissatisfaction that was transformed from the domain of advocacy to the domain of high-risk activism, a common feeling among the people that they can make a difference, even though it did not last long, and the evidence that there are young people in the region who are willing to sacrifice for their freedom. On the other hand, structural forces in the region such as the power of deep states and the forces of counter-revolution were capable of halting this Arab Spring before it burgeoned or bore fruit, so maybe the term “Spring” is no longer relevant.

Ed: Revolutions have been happening for centuries, i.e. they obviously don’t need Twitter or Facebook to happen. How significant do you think social media were in this case, either in sparking or sustaining the protests? And how useful are these new social media data as a means to examine the mechanisms of protest?

Deena: Social media platforms have proven to be useful in facilitating protests, for example by sharing information quickly and widely across borders. People in Egypt and other places in the region were influenced by Tunisia, and protest tactics were shared online. In other words, social media platforms definitely facilitate diffusion of protests. They are also hubs to create a common identity and culture among activists, which is crucial for the success of social movements. I also believe that social media present activists with various ways to circumvent policing of activism (e.g. using pseudonyms to hide the identity of the activists, sharing information about places to avoid in times of protests, forming closed groups where activists have high privacy to discuss non-public matters, etc.).

However, social media ties are weak ties. These platforms are not necessarily efficient in building the trust needed to bond social movements, especially in times of schism and at the level of high-risk activism. That is why, as I discuss in my article, we can see that the type of brokerage that is formed online is brokerage that is built on weak ties, not necessarily the same as offline brokerage that usually requires high trust.

Ed: It’s interesting that you could detect bridging between groups. Given schism seems to be fairly standard in society (cf. filter bubbles etc.) .. has enough attention been paid to this process of temporary shifting alignments, to advance a common cause? And are these incidental, or intentional acts of brokerage?

Deena: I believe further studies need to be made on the concepts of solidarity, schism and brokerage within social movements both online and offline. Little attention has been given to how movements come together or break apart online. The Egyptian revolution is a rich case to study these concepts as the many changes that happened in the path of the revolution in its first five years and the intervention of different forces have led to multiple shifts of alliances that deserve study. Acts of brokerage do not necessarily have to be intentional. In social movements studies, researchers have studied incidental acts that could eventually lead to formation of alliances, such as considering co-members of various social movement organizations as brokers between these organizations.

I believe that the same happens online. Brokerage could start with incidental acts such as activists following each other on Twitter for example, which could develop into stronger ties through mentioning each other. This could also build up to coordinating activities online and offline. In the case of the Egyptian revolution, many activists who met in protests on the ground were also friends online. The same happened in Moldova where activists coordinated tactics online and met on the ground. Thus, incidental acts that start with following each other online could develop into intentional coordinated activism offline. I believe further qualitative interviews need to be conducted with activists to study how they coordinate between online and offline activism, as there are certain mechanisms that cannot be observed through just studying the public profiles of activists or their structural networks.

Ed: The “Arab Spring” has had a mixed outcome across the region — and is also now perhaps a bit forgotten in the West. There have been various network studies of the 2011 protests: but what about the time between visible protests .. isn’t that in a way more important? What would a social network study of the current situation in Egypt look like, do you think?

Deena: Yes, the in-between times of waves of protests are as important to study as the waves themselves as they reveal a lot about what could happen, and we usually study them retroactively after the big shocks happen. A social network of the current situation in Egypt would probably include many “isolates” and tiny “components”, to use social network analysis terms. This started showing in 2014 as the effects of schism in the movement. I believe this became aggravated over time as the military coup d’état got a stronger grip over the country, suppressing all opposition. Many activists are either detained or have left the country. A quick look at their online profiles does not reveal strong communication between them. Yet this is only what the public profiles show. One of the levers that social media platforms offer is the ability to create private or “closed” groups online.

I believe these groups might include rich data about activists’ communication. However, it is very difficult, almost impossible to study these groups, unless you are a member or they give you permission. In other words, there might be some sort of communication occurring between activists but at a level that researchers unfortunately cannot access. I think we might call it the “underground of online activism”, which I believe is potentially a very rich area of study.

Ed: A standard criticism of “Twitter network studies” is that they aren’t very rich — they may show who’s following whom, but not necessarily why, or with what effect. Have there been any larger, more detailed studies of the Arab Spring that take in all sides: networks, politics, ethnography, history — both online and offline?

Deena: To my knowledge, there haven’t been studies that have included all these aspects together. Yet there are many studies that covered each of them separately, especially the politics, ethnography, and history of the Arab Spring (see for example: Egypt’s Tahrir Revolution 2013, edited by D. Tschirgi, W. Kazziha and S. F. McMahon). Similarly, very few studies have tried to compare the online and offline repertoires (see for example: Weber, Garimella and Batayneh 2013, Abul-Fottouh and Fetner 2018). In my doctoral dissertation (2018 from McMaster University), I tried to include many of these elements.

Read the full article: Abul-Fottouh, D. (2018) Brokerage Roles and Strategic Positions in Twitter Networks of the 2011 Egyptian Revolution. Policy & Internet 10: 218-240. doi:10.1002/poi3.169

Deena Abul-Fottouh was talking to blog editor David Sutcliffe.

Call for Papers: Government, Industry, Civil Society Responses to Online Extremism https://ensr.oii.ox.ac.uk/call-for-papers-responses-to-online-extremism/ Mon, 02 Jul 2018 12:52:21 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4666 We are calling for articles for a Special Issue of the journal Policy & Internet on “Online Extremism: Government, Private Sector, and Civil Society Responses”, edited by Jonathan Bright and Bharath Ganesh, to be published in 2019. The submission deadline is October 30, 2018.

Issue Outline

Governments, the private sector, and civil society are beginning to work together to challenge extremist exploitation of digital communications. Both Islamist and right-wing extremists use websites, blogs, social media, encrypted messaging, and filesharing websites to spread narratives and propaganda, influence mainstream public spheres, recruit members, and advise audiences on undertaking attacks.

Across the world, public-private partnerships have emerged to counter this problem. For example, the Global Internet Forum to Counter Terrorism (GIFCT), founded by a group of major technology companies, maintains a “shared hash database” that provides “digital fingerprints” of ISIS visual content to help platforms quickly take down content. In another case, the UK government funded ASI Data Science to build a tool to accurately detect jihadist content. Elsewhere, Jigsaw (a Google-owned company) has developed techniques to use content recommendations on YouTube to “redirect” viewers of extremist content to content that might challenge their views.
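
To make the first of these examples concrete, the sketch below shows the basic lookup logic behind a shared hash database. It is a deliberate simplification: production systems rely on perceptual hashes that survive re-encoding, cropping, and compression, whereas a plain cryptographic hash is used here only to keep the illustration short; the database contents and function names are hypothetical.

```python
# Minimal sketch of hash-based matching, the logic behind a "shared hash database".
# Real deployments use perceptual hashes robust to re-encoding; a plain SHA-256
# digest is used here purely to illustrate the lookup step.
import hashlib


def fingerprint(content: bytes) -> str:
    """Return a hex digest serving as the content's 'digital fingerprint'."""
    return hashlib.sha256(content).hexdigest()


# Hypothetical database of fingerprints shared across participating platforms.
shared_hash_db = {fingerprint(b"bytes of known extremist video ...")}


def should_flag(upload: bytes) -> bool:
    """Flag an upload for review/takedown if its fingerprint is in the database."""
    return fingerprint(upload) in shared_hash_db


print(should_flag(b"bytes of known extremist video ..."))  # True
print(should_flag(b"unrelated holiday video"))             # False
```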

While these are important and admirable efforts, their impact and effectiveness are unclear. The purpose of this special issue is to map and evaluate emerging public-private partnerships, technologies, and responses to online extremism. There are three main areas of concern that the issue will address:

(1) the changing role of content moderation, including taking down content and user accounts, as well as the use of AI techniques to assist;

(2) the increasing focus on “counter-narrative” campaigns and strategic communication; and

(3) the inclusion of global civil society in this agenda.

This mapping will contribute to understanding how power is distributed across these actors, the ways in which technology is expected to address the problem, and the design of the measures currently being undertaken.

Topics of Interest

Papers exploring one or more of the following areas are invited for consideration:

Content moderation

  • Efficacy of user and content takedown (and effects it has on extremist audiences);
  • Navigating the politics of freedom of speech in light of the proliferation of hateful and extreme speech online;
  • Development of content and community guidelines on social media platforms;
  • Effect of government policy, recent inquiries, and civil society on content moderation practices by the private sector (e.g. recent laws in Germany, Parliamentary inquiries in the UK);
  • Role and efficacy of Artificial Intelligence (AI) and machine learning in countering extremism.

Counter-narrative Campaigns and Strategic Communication

  • Effectiveness of counter-narrative campaigns in dissuading potential extremists;
  • Formal and informal approaches to counter narratives;
  • Emerging governmental or parastatal bodies to produce and disseminate counter-narratives;
  • Involvement of media and third sector in counter-narrative programming;
  • Research on counter-narrative practitioners;
  • Use of technology in supporting counter-narrative production and dissemination.

Inclusion of Global Civil Society

  • Concentration of decision making power between government, private sector, and civil society actors;
  • Diversity of global civil society actors involved in informing content moderation and counter-narrative campaigns;
  • Extent to which inclusion of diverse civil society/third sector actors improves content moderation and counter-narrative campaigns;
  • Challenges and opportunities faced by global civil society in informing agendas to respond to online extremism.

Submitting your Paper

We encourage interested scholars to submit 6,000 to 8,000 word papers that address one or more of the issues raised in the call. Submissions should be made through Policy & Internet’s manuscript submission system. Interested authors are encouraged to contact Jonathan Bright (jonathan.bright@oii.ox.ac.uk) and Bharath Ganesh (bharath.ganesh@oii.ox.ac.uk) to check the suitability of their paper.

Special Issue Schedule

The special issue will proceed according to the following timeline:

Paper submission: 30 October 2018

First round of reviews: January 2019

Revisions received: March 2019

Final review and decision: May 2019

Publication (estimated): December 2019

The special issue as a whole will be published at some time in late 2019, though individual papers will be published online in EarlyView as soon as they are accepted.

In a world of “connective action” — what makes an influential Twitter user? https://ensr.oii.ox.ac.uk/in-a-world-of-connective-action-what-makes-an-influential-twitter-user/ Sun, 10 Jun 2018 08:07:45 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4183 A significant part of political deliberation now takes place on online forums and social networking sites, leading to the idea that collective action might be evolving into “connective action”. The new level of connectivity (particularly of social media) raises important questions about its role in the political process. But understanding important phenomena, such as social influence, social forces, and digital divides, requires analysis of very large social systems, which traditionally has been a challenging task in the social sciences.

In their Policy & Internet article “Understanding Popularity, Reputation, and Social Influence in the Twitter Society”, David Garcia, Pavlin Mavrodiev, Daniele Casati, and Frank Schweitzer examine popularity, reputation, and social influence on Twitter using network information on more than 40 million users. They integrate measurements of popularity, reputation, and social influence to evaluate what keeps users active, what makes them more popular, and what determines their influence in the network.

Popularity in the Twitter social network is often quantified as the number of followers of a user. That implies that it doesn’t matter why some user follows you, or how important she is: your popularity only measures the size of your audience. Reputation, on the other hand, is a more complicated concept associated with centrality. Being followed by a highly reputed user has a stronger effect on one’s reputation than being followed by someone with low reputation. Thus, the simple number of followers does not capture the recursive nature of reputation.
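
The distinction can be made concrete on a toy follower graph. In the sketch below, popularity is simply in-degree (follower count), while reputation is computed with PageRank, used here purely as a familiar example of a recursive centrality measure; it is an assumption for illustration, not necessarily the exact metric used in the article.

```python
# Popularity vs. reputation on a toy follower graph. Popularity = in-degree
# (follower count); reputation = a recursive centrality. PageRank is used only
# as a familiar illustration of recursiveness, not as the article's exact metric.
import networkx as nx

# An edge u -> v means "u follows v", so reputation flows from follower to followed.
G = nx.DiGraph([("a", "celebrity"), ("b", "celebrity"), ("c", "celebrity"),
                ("celebrity", "expert"), ("d", "expert")])

popularity = dict(G.in_degree())  # raw follower counts
reputation = nx.pagerank(G)       # recursive: who your followers are matters

for user in G.nodes():
    print(f"{user}: followers={popularity[user]}, reputation={reputation[user]:.3f}")
```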

In their article, the authors examine the differing effects of popularity and reputation on the process of social influence. They find that there is a range of values in which the risk of a user becoming inactive grows with popularity and reputation. Popularity in Twitter resembles a proportional growth process that is faster in its strongly connected component, and that can be accelerated by reputation when users are already popular. They find that social influence on Twitter is mainly related to popularity rather than reputation, but that this growth of influence with popularity is sublinear. In sum, global network metrics are better predictors of inactivity and social influence, calling for analyses that go beyond local metrics like the number of followers.
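
A sublinear relationship of this kind is typically checked by fitting a power law on log-log axes and testing whether the estimated exponent falls below one. The sketch below uses synthetic data with an assumed exponent of 0.7 purely to show the method; it does not reproduce the article’s data or results.

```python
# Checking for sublinear scaling (influence ~ popularity^alpha with alpha < 1)
# by regressing log(influence) on log(popularity). Synthetic data only; the
# exponent of 0.7 is an arbitrary assumption for illustration.
import numpy as np

rng = np.random.default_rng(0)
popularity = rng.integers(1, 100_000, size=5_000).astype(float)
influence = popularity ** 0.7 * rng.lognormal(0.0, 0.3, size=popularity.size)

alpha, intercept = np.polyfit(np.log(popularity), np.log(influence), deg=1)
print(f"estimated scaling exponent: {alpha:.2f}")  # below 1 indicates sublinear growth
```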

We caught up with the authors to discuss their findings:

Ed.: Twitter is a convenient data source for political scientists, but they tend to get criticised for relying on something that represents only a tiny facet of political activity. But Twitter is presumably very useful as a way of uncovering more fundamental / generic patterns of networked human interaction?

David: Twitter as a data source to study human behaviour is both powerful and limited. Powerful because it allows us to quantify and analyze human behaviour at scales and resolutions that are simply impossible to reach with traditional methods, such as experiments or surveys. But also limited because not every aspect of human behaviour is captured by Twitter and using its data comes with significant methodological challenges, for example regarding sampling biases or platform changes. Our article is an example of an analysis of general patterns of popularity and influence that are captured by spreading information in Twitter, which only make sense beyond the limitations of Twitter when we frame the results with respect to theories that link our work to previous and future scientific knowledge in the social sciences.

Ed.: How often do theoretical models (i.e. describing the behaviour of a network in theory) get linked up with empirical studies (i.e. of a network like Twitter in practice) but also with qualitative studies of actual Twitter users? And is Twitter interesting enough in itself for anyone to attempt to develop an overall theoretico-empirico-qualitative theory about it?

David: The link between theoretical models and large-scale data analyses of social media is less frequent than we all wish. But the gap between disciplines seems to be narrowing in recent years, with more social scientists using online data sources and computer scientists engaging more closely with theories and previous results in the social sciences. What seems to be quite undeveloped is an interface with qualitative methods, especially with large-scale analyses like ours.

Qualitative methods can provide what data science cannot: questions about important and relevant phenomena that can then be explained within a wider theory if validated against data. While this seems to me a fertile ground for interdisciplinary research, I doubt that Twitter in particular should be the paragon of such a combination of approaches. I advocate for starting research from the aspect of human behaviour that is the subject of study, and not from a particularly popular social media platform that happens to be used a lot today, but might not be the standard tomorrow.

Ed.: I guess I’ve seen a lot of Twitter networks in my time, but not much in the way of directed networks, i.e. showing direction of flow of content (i.e. influence, basically) — or much in the way of a time element (i.e. turning static snapshots into dynamic networks). Is that fair, or am I missing something? I imagine it would be fun to see how (e.g.) fake news or political memes propagate through a network?

David: While Twitter provides amazing volumes of data, its programming interface is notorious for the absence of two key sources: the date when follower links are created and the precise path of retweets. The reason for the general picture of snapshots over time is that researchers cannot fully trace back the history of a follower network; they can only monitor it at a certain frequency to overcome the fact that links do not have a date attached.

The picture of information flows is generally missing because, when looking up a retweet, we can see the original tweet that is being retweeted, but not whether the retweet is of a friend’s retweet. This way, without special access to Twitter data or alternative sources, all information flows look like stars around the original tweet, rather than propagation trees through a social network that would allow the precise analysis of fake news or memes.
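
The sketch below illustrates the limitation David describes, using hypothetical retweet records: because the standard data report each retweet only against the original tweet, the observable cascade is a star centred on the source, even when the content actually travelled friend-to-friend as a tree.

```python
# The observable retweet cascade is a star around the original tweet, even if the
# content actually spread friend-to-friend as a tree. Retweet records are invented.
import networkx as nx

# What the data expose: (retweeting user, author of the original tweet).
retweets = [("u1", "source"), ("u2", "source"), ("u3", "source"), ("u4", "source")]
observed = nx.DiGraph(retweets)

# How the content may really have travelled, if friend-to-friend paths were visible.
true_path = nx.DiGraph([("u1", "source"), ("u2", "u1"), ("u3", "u1"), ("u4", "u3")])

print(observed.in_degree("source"))             # 4: every retweet points at the source
print(nx.is_arborescence(true_path.reverse()))  # True: the real cascade is a tree
```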

Ed.: Given all the work on Twitter, how well-placed do you think social scientists would be to advise a political campaign on “how to create an influential network” beyond just the obvious (Tweet well and often, and maybe hire a load of bots). i.e. are there any “general rules” about communication structure that would be practically useful to campaigning organisations?

David: When we talk about influence on Twitter, we usually talk about rather superficial behaviour, such as retweeting content or clicking on a link. This should not be mistaken for a more substantial kind of influence, the kind that makes people change their opinion or go to vote. Evaluating the real impact of Twitter influence is a bottleneck for how much social scientists can advise a political campaign. I would say that rather than providing general rules that can be applied everywhere, social scientists and computer scientists can be much more useful when advising, tracking, and optimizing individual campaigns that take into account the details and idiosyncrasies of the people that might be influenced by the campaign.

Ed.: Random question: but where did “computational social science” emerge from – is it actually quite dependent on Twitter (and Wikipedia?), or are there other commonly-used datasets? And are computational social science, “big data analytics”, and (social) data science basically describing the same thing?

David: Tracing back the meaning and influence of “computational social science” could take a whole book! My impression is that the concept started a few decades ago as a spin on “sociophysics”, where the term “computational” was used as in “computational model”, emphasizing a focus on social science away from toy model applications from physics. Then the influential Science article by David Lazer and colleagues in 2009 defined the term as the application of digital trace datasets to test theories from the social sciences, leaving computational modelling entirely outside the frame. In that case, “computational” was used more as it is used in “computational biology”, to refer to social science with increased power and speed thanks to computer-based technologies. Later it seems to have converged back into a combination of both the modelling and the data analysis trends, as in the “Manifesto of computational social science” by Rosaria Conte and colleagues in 2012, inspired by the fact that we need computational modelling techniques from complexity science to understand what we observe in the data.

The Twitter and Wikipedia dependence of the field is just a path dependency due to the ease and open access to those datasets, and a key turning point in the field is to be able to generalize beyond those “model organisms”, as Zeynep Tufekci calls them. One can observe these fads in the latest computer science conferences, with the rising ones being Reddit and Github, or when looking at earlier research that heavily used product reviews and blog datasets. Computational social science seems to be maturing as a field, making sense of those datasets and not just telling cool data-driven stories about one website or another. Perhaps we are beyond the peak of inflated expectations of the hype curve and the best part is yet to come.

With respect to big data and social data science, it is easy to get lost in the field of buzzwords. Big data analytics only deals with the technologies necessary to process large volumes of data, which could come from any source including social networks but also telescopes, seismographs, and any kind of sensor. These kinds of techniques are only sometimes necessary in computational social science, but are far from the core topics of the field.

Social data science is closer, but puts a stronger emphasis on problem-solving rather than testing theories from the social sciences. When using “data science” we usually try to emphasize a predictive or explorative aspect, rather than the confirmatory or generative approach of computational social science. The emphasis on theory and modelling of computational social science is the key difference here, linking back to my earlier comment about the role of computational modelling and complexity science in the field.

Ed.: Finally, how successful do you think computational social scientists will be in identifying any underlying “social patterns” — i.e. would you agree that the Internet is a “Hadron Collider” for social science? Or is society fundamentally too chaotic and unpredictable?

David: As web scientists like to highlight, the Web (not the Internet, which is the technical infrastructure connecting computers) is the largest socio-technical artifact ever produced by humanity. Rather than a Hadron Collider, which is a tool to make experiments, I would say that the Web can be the Hubble telescope of social science: it lets us observe human behaviour at an amazing scale and resolution, capturing not only big data but also fast, long, deep, mixed, and weird data that we never imagined before.

While I doubt that we will be able to predict society in some sort of “psychohistory” manner, I think that the Web can help us to understand much more about ourselves, including our incentives, our feelings, and our health. That can be useful knowledge to make decisions in the future and to build a better world without the need to predict everything.

Read the full article: Garcia, D., Mavrodiev, P., Casati, D., and Schweitzer, F. (2017) Understanding Popularity, Reputation, and Social Influence in the Twitter Society. Policy & Internet 9 (3) doi:10.1002/poi3.151

David Garcia was talking to blog editor David Sutcliffe.

How can we encourage participation in online political deliberation? https://ensr.oii.ox.ac.uk/how-can-we-encourage-participation-in-online-political-deliberation/ Fri, 01 Jun 2018 14:54:48 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4186 Political parties have been criticized for failing to link citizen preferences to political decision-making. But in an attempt to enhance policy representation, many political parties have established online platforms to allow discussion of policy issues and proposals, and to open up their decision-making processes. The Internet — and particularly the social web — seems to provide an obvious opportunity to strengthen intra-party democracy and mobilize passive party members. However, these mobilizing capacities are limited, and in most instances, participation has been low.

In their Policy & Internet article “Does the Internet Encourage Political Participation? Use of an Online Platform by Members of a German Political Party,” Katharina Gerl, Stefan Marschall, and Nadja Wilker examine the German Greens’ online collaboration platform to ask why only some party members and supporters use it. The platform aims to improve the inclusion of party supporters and members in the party’s opinion-formation and decision-making process, but it has failed to reach inactive members. Instead, it is those who are already active in the party who use the online platform. It also seems that classical resources such as education and employment status do not (directly) explain differences in participation; instead, participation is motivated by process-related and ideological incentives.

We caught up with the authors to discuss their findings:

Ed.: You say “When it comes to explaining political online participation within parties, we face a conceptual and empirical void” .. can you explain briefly what the offline models are, and why they don’t work for the Internet age?

Katharina / Stefan / Nadja: According to Verba et al. (1995) the reasons for political non-participation can be boiled down to three factors: (1) citizens do not want to participate, (2) they cannot, (3) nobody asked them to. Speaking model-wise we can distinguish three perspectives: Citizens need certain resources like education, information, time and civic skills to participate (resource model and civic voluntarism model). The social psychological model looks at the role of attitudes and political interest that are supposed to increase participation. In addition to resources and attitudes, the general incentives model analyses how motives, costs and benefits influence participation.

These models can be applied to online participation as well, but findings for the online context indicate that the mechanisms do not always work as they do in the offline context. For example, age plays out differently for online participation. Generally, the models have to be specified for each participation context. This especially applies to the online context, as forms of online participation sometimes demand different resources, skills or motivational factors. Therefore, we have to adapt and supplement the models with additional online factors like internet skills and internet sophistication.

Ed.: What’s the value to a political party of involving its members in policy discussion? (i.e. why go through the bother?)

Katharina / Stefan / Nadja: Broadly speaking, there are normative and rational reasons for that. At least for the German parties, intra-party democracy plays a crucial role. The involvement of members in policy discussion can serve as a means to strengthen the integration and legitimation power of a party. Additionally, the involvement of members can have a mobilizing effect for the party on the ground. This can positively influence the linkage between the party in central office, the party on the ground, and the societal base. Furthermore, member participation can be a way to react to dissatisfaction within a party.

Ed.: Are there any examples of successful “public deliberation” — i.e. is this maybe just a problem of getting disparate voices to usefully engage online, rather than a failure of political parties per se?

Katharina / Stefan / Nadja: This is definitely not unique to political parties. The problems we observe regarding online public deliberation in political parties also apply to other online participation platforms: political participation and especially public deliberation require time and effort for participants, so they will only be willing to engage if they feel they benefit from it. But the benefits of participation may remain unclear as public deliberation – by parties or other initiators – often takes place without a clear goal or a real say in decision-making for the participants. Initiators of public deliberation often fail to integrate processes of public deliberation into formal and meaningful decision-making procedures. This leads to disappointment for potential participants who might have different expectations concerning their role and scope of influence. There is a risk of a vicious circle and disappointed expectations on both sides.

Ed.: Based on your findings, what would you suggest that the Greens do in order to increase participation by their members on their platform?

Katharina / Stefan / Nadja: Our study shows that the members of the Greens are generally willing to participate online and appreciate this opportunity. However, the survey also revealed that the most important incentive for them is to have an influence on the party’s decision-making. We would suggest that the Greens create an actual cause for participation, meaning to set clear goals and to integrate it into specific and relevant decisions. Participation should not be an end in itself!

Ed.: How far do political parties try to harness deliberation where it happens in the wild e.g. on social media, rather than trying to get people to use bespoke party channels? Or might social media users see this as takeover by the very “establishment politics” they might have abandoned, or be reacting against?

Katharina / Stefan / Nadja: Parties do not constrain their online activities to their own official platforms and channels but also try to develop strategies for influencing discourses in the wild. However, this works much better and has much more authenticity as well as credibility if it isn’t parties as abstract organizations but rather individual politicians such as members of parliament who engage in person on social media, for example by using Twitter.

Ed.: How far have political scientists understood the reasons behind the so-called “crisis of democracy”, and how to address it? And even if academics came up with “the answer” — what is the process for getting academic work and knowledge put into practice by political parties?

Katharina / Stefan / Nadja: The alleged “crisis of democracy” is primarily seen as a crisis of representation in which the gap between political elites and the citizens has widened drastically in recent years, giving room to populist movements and parties in many democracies. Our impression is that, facing the rise of populism in many countries, politicians have become more and more attentive towards discussions and findings in political science which have been addressing the linkage problems for years. But perhaps this is like shutting the stable door after the horse has bolted.

Read the full article: Gerl, K., Marschall, S., and Wilker, N. (2016) Does the Internet Encourage Political Participation? Use of an Online Platform by Members of a German Political Party. Policy & Internet doi:10.1002/poi3.149

Katharina Gerl, Stefan Marschall, and Nadja Wilker were talking to blog editor David Sutcliffe.

Making crowdsourcing work as a space for democratic deliberation https://ensr.oii.ox.ac.uk/making-crowdsourcing-work-as-a-space-for-democratic-deliberation/ Sat, 26 May 2018 12:44:39 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4245 There are many instances of crowdsourcing in both local and national governance across the world, as governments implement crowdsourcing as part of their open government practices aimed at fostering civic engagement and knowledge discovery for policies. But three questions arise. First, is crowdsourcing conducive to deliberation among citizens, or is it essentially just a consulting mechanism for information gathering? Second, if it is conducive to deliberation, what kind of deliberation is it? (And is it democratic?) Third, how representative are the online deliberative exchanges of the wishes and priorities of the larger population?

In their Policy & Internet article “Crowdsourced Deliberation: The Case of the Law on Off-Road Traffic in Finland”, Tanja Aitamurto and Hélène Landemore examine a partially crowdsourced reform of the Finnish off-road traffic law. The aim of the process was to search for knowledge and ideas from the crowd, enhance people’s understanding of the law, and to increase the perception of the policy’s legitimacy. The participants could propose ideas on the platform, vote others’ ideas up or down, and comment.

The authors find that despite the lack of explicit incentives for deliberation in the crowdsourced process, crowdsourcing indeed functioned as a space for democratic deliberation; that is, an exchange of arguments among participants characterized by a degree of freedom, equality, and inclusiveness. An important finding, in particular, is that despite the lack of statistical representativeness among the participants, the deliberative exchanges reflected a diversity of viewpoints and opinions, tempering to a degree the worry about the bias likely introduced by the self-selected nature of citizen participation.

They introduce the term “crowdsourced deliberation” to mean the deliberation that happens (intentionally or unintentionally) in crowdsourcing, even when the primary aim is to gather knowledge rather than to generate deliberation. In their assessment, crowdsourcing in the Finnish experiment was conducive to some degree of democratic deliberation, even though, strikingly, the process was not designed for it.

We caught up with the authors to discuss their findings:

Ed.: There’s a lot of discussion currently about “filter bubbles” (and indeed fake news) damaging public deliberation. Do you think collaborative crowdsourced efforts (that include things like Wikipedia) help at all more generally, or .. are we all damned to our individual echo chambers?

Tanja and Hélène: Deliberation, whether taking place within a crowdsourced policymaking process or in another context, has a positive impact on society, when the participants exchange knowledge and arguments. While all deliberative processes are, to a certain extent, their own microcosms, there is typically at least some cross-cutting exposure of opinions and perspectives among the crowd. The more diverse the participant crowd is and the larger the number of participants, the more likely there is diversity also in the opinions, preventing strictly siloed echo chambers.

Moreover, it all comes down to design and incentives in the end. In our crowdsourcing platform we did not particularly try to attract a cross-cutting section of the population, so there was a risk of having only a relatively homogeneous population self-selecting into the process, which is what happened to a degree, demographically at least (over 90% of our participants were educated male professionals). In terms of ideas though, the pool was much more diverse than the demography would have suggested, and techniques we used (like clustering) helped maintain the visibility (to the researchers) of the minority views.
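
For readers curious about the mechanics, the sketch below shows one common way of clustering short free-text proposals so that small clusters, and hence minority viewpoints, remain visible to researchers. TF-IDF with k-means is an assumption chosen for illustration, not necessarily the pipeline used in the Finnish study, and the example proposals are invented.

```python
# One common way to cluster free-text proposals so that small clusters (minority
# viewpoints) stay visible. TF-IDF + k-means is an illustrative choice, not
# necessarily the authors' pipeline; the proposals below are invented.
from collections import Counter

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

proposals = [
    "Allow snowmobiles on marked routes only",
    "Marked snowmobile routes should be extended",
    "Ban off-road driving in protected nature areas",
    "Protect nature areas from all motorised traffic",
    "Require permits for off-road traffic on private land",
]

X = TfidfVectorizer(stop_words="english").fit_transform(proposals)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

print(Counter(labels))  # small clusters flag less common viewpoints worth inspecting
for text, label in zip(proposals, labels):
    print(label, text)
```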

That said, if what you are after is maximal openness and cross-cutting exposure, nothing beats random selection, like the one used in mini-publics of all kinds, from citizens’ juries to deliberative polls to citizens’ assemblies… That’s what Facebook and Twitter should use in order to break the filter bubbles in which people lock themselves: algorithms that randomize the content of our newsfeed and expose us to a vast range of opinions, rather than algorithms that maximize similarity with what we already like.

But for us the goal was different and so our design was different. Our goal was to gather knowledge and ideas, and for this self-selection (the sort also at play in Wikipedia) is better than random selection: whereas with random selection you shut the door on most people, in a crowdsourcing platform you just leave the door open to anyone who can self-identify as having a relevant form of knowledge and has the motivation to participate. The remarkable thing in our case is that even though we didn’t design the process for democratic deliberation, it occurred anyway, between the cracks of the design so to speak.

Ed.: I suppose crowdsourcing won’t work unless there is useful cooperation: do you think these successful relationships self-select on a platform, or do things perhaps work precisely because people may NOT be discussing other, divisive things (like immigration) when working together on something apparently unrelated, like an off-road law?

Tanja and Hélène: There is a varying degree of collaboration in crowdsourcing. In crowdsourced policymaking, the crowd does not typically collaborate on drafting the law (unlike the crowd in Wikipedia writing), but rather responds to prompts from the crowdsourcer, in this case the government. In this type of crowdsourcing, which was the case in the crowdsourced off-road traffic law reform, the crowd members don’t need to collaborate with each other in order for the process to achieve its goal of finding new knowledge. The crowd can, of course, decide not to collaborate with the government and not answer the prompts, or start sabotaging the process.

The degree and success of collaboration will depend on the design and the goals of your experiment. In our case, crowdsourcing might have worked even without collaboration because our goal was to gather knowledge and information, which can be done by harvesting the contributions of the individual members of the crowd without them interacting with each other. But if what you are after is co-creation or deliberation, then yes you need to create the background conditions and incentives for cooperation.

Cooperation may require bracketing some sensitive topics or else learning to disagree in respectful ways. Deliberation, and more broadly cooperation, are social skills — human technologies you might say — that we still don’t know how to use very well. This comes in part from the fact that our school systems do not teach those skills, focused as they are on promoting individual rather than collaborative success and creating an eco-system of zero-sum competition between students, when in the real world there is almost nothing you can do all by yourself and we would be much better off nurturing collaborative skills and the art or technology of deliberation.

Ed.: Have there been any other examples in Finland — i.e. is crowdsourcing (and deliberation) something that is seen as useful and successful by the government?

Tanja and Hélène: Yes, there have been several crowdsourced policymaking processes in Finland. One is a crowdsourced Limited Liability Housing Company Law reform, organized by the Ministry of Justice in the Finnish government. We examined the quality of deliberation in that case, and the findings show that the quality of deliberation, as measured by the Discourse Quality Index, was pretty good.

Read the full article: Aitamurto, T. and Landemore, H. (2016) Crowdsourced Deliberation: The Case of the Law on Off-Road Traffic in Finland. Policy & Internet 8 (2) doi:10.1002/poi3.115.


Tanja Aitamurto and Hélène Landemore were talking to blog editor David Sutcliffe.

Habermas by design: designing public deliberation into online platforms https://ensr.oii.ox.ac.uk/habermas-by-design-designing-public-deliberation-into-online-platforms/ Thu, 03 May 2018 13:59:07 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4673 Advocates of deliberative democracy have always hoped that the Internet would provide the means for an improved public sphere. But what particular platform features should we look to, to promote deliberative debate online? In their Policy & Internet article “Design Matters! An Empirical Analysis of Online Deliberation on Different News Platforms”, Katharina Esau, Dennis Friess, and Christiane Eilders show how differences in the design of various news platforms result in significant variation in the quality of deliberation, measured as rationality, reciprocity, respect, and constructiveness.

The empirical findings of their comparative analysis across three types of news platforms broadly support the assumption that platform design affects the level of deliberative quality of user comments. Deliberation was most likely to be found in news fora, which are of course specifically designed to initiate user discussions. News websites showed a lower level of deliberative quality, with Facebook coming last in terms of meeting deliberative design criteria and sustaining deliberation. However, while Facebook performed poorly in terms of overall level of deliberative quality, it did promote a high degree of general engagement among users.

The study’s findings suggest that deliberative discourse in the virtual public sphere of the Internet is indeed possible, which is good news for advocates of deliberative theory. However, this will only be possible by carefully considering how platforms function, and how they are designed. Some may argue that the “power of design” (shaped by organizers like media companies) contradicts the basic idea of open debate amongst equals, where the only necessary force is Habermas’s “forceless force of the better argument”. These advocates of an utterly free virtual public sphere may be disappointed, given it’s clear that deliberation is only likely to emerge if the platform is designed in a particular way.

We caught up with the authors to discuss their findings:

Ed: Just briefly: what design features did you find helped support public deliberation, i.e. reasoned, reciprocal, respectful, constructive discussion?

Katharina / Dennis / Christiane: There are several design features which are known to influence online deliberation. However, in this study we particularly focus on moderation, asynchronous discussion, clear topic definition, and the availability of information, which we have found to have a positive influence on the quality of online deliberation.

Ed.: I associate “Internet as a deliberative space” with Habermas, but have never read him: what’s the short version of what he thinks about “the public sphere” — and how the Internet might support this?

Katharina / Dennis / Christiane: Well, Habermas describes the public sphere as a space where free and equal people discuss topics of public import in a specific way. The respectful exchange of rational reasons is crucial in this normative ideal. Due to its open architecture, the Internet has often been presented as providing the infrastructure for large scale deliberation processes. However, Habermas himself is very skeptical as to whether online spaces support his ideas on deliberation. Ironically, he is one of the most influential authors in online deliberation scholarship.

Ed.: What do advocates of the Internet as a “deliberation space” hope for — simply that people will feel part of a social space / community if they can like things or comment on them (and see similar viewpoints); or that it will result in actual rational debate, and people changing their minds to “better” viewpoints, whatever they may be? I can personally see a value for the former, but I can’t imagine the latter ever working, i.e. given people basically don’t change?

Katharina / Dennis / Christiane: We think that both hopes are present in the current debate, and we partly agree with your perception that changing minds seems to be difficult. But we may also be facing some methodological or empirical issues here, because changing minds is not an easy thing to measure. We know from other studies that deliberation can indeed cause changes of opinion. However, most of this probably takes place within the individual’s mind. Robert E. Goodin has called this process “deliberation within”, and this is not accessible through content analysis. People do not articulate “Oh, thanks for this argument, I have changed my mind”, but they probably take something away from online discussions which makes them more open-minded.

Ed.: Does Wikipedia provide an example where strangers have (oddly!) come together to create something of genuine value — but maybe only because they’re actually making a specific public good? Is the basic problem of the idea of the “Internet supporting public discourse” that this is just too aimless an activity, with no obvious individual or collective benefit?

Katharina / Dennis / Christiane: We think Wikipedia is a very particular case. However, we can learn from this case that the collective goal plays a very important role for the quality of contributions. We know from empirical research that if people have the intention of contributing to something meaningful, discussion quality is significantly higher than in online spaces without that desire to have an impact.

Ed.: I wonder: isn’t Twitter the place where “deliberation” now takes place? How does it fit into, or inform, the deliberation literature, which I am assuming has largely focused on things like discussion fora?

Katharina / Dennis / Christiane: This depends on the definition of the term “deliberation”. We would argue that the limitation to 280 characters is probably not the best design feature for meaningful deliberation. However, we may have to think about deliberation in less complex contexts in order to reach more people; but this is a polarizing debate.

Ed.: You say that “outsourcing discussions to social networking sites such as Facebook is not advisable due to the low level of deliberative quality compared to other news platforms”. Facebook has now decided that instead of “connecting the world” it’s going to “bring people closer together” — what would you recommend that they do to support this, in terms of the design of the interactive (or deliberative) features of the platform?

Katharina / Dennis / Christiane: This is a difficult one! We think that the quality of deliberation on Facebook would strongly benefit from moderators, who should be more present on the platform to structure the discussions. By this we do not mean only professional moderators but also participative forms of moderation, which could be encouraged by mechanisms that support such behaviour.

Read the full article: Katharina Esau, Dennis Friess, and Christiane Eilders (2017) Design Matters! An Empirical Analysis of Online Deliberation on Different News Platforms. Policy & Internet 9 (3) 321-342.

Katharina (@kathaesa), Dennis, and Christiane were talking to blog editor David Sutcliffe.

Could Counterfactuals Explain Algorithmic Decisions Without Opening the Black Box? https://ensr.oii.ox.ac.uk/could-counterfactuals-explain-algorithmic-decisions-without-opening-the-black-box/ Mon, 15 Jan 2018 10:37:21 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4465 The EU General Data Protection Regulation (GDPR) has sparked much discussion about the “right to explanation” for the algorithm-supported decisions made about us in our everyday lives. While there’s an obvious need for transparency in the automated decisions that are increasingly being made in areas like policing, education, healthcare and recruitment, explaining how these complex algorithmic decision-making systems arrive at any particular decision is a technically challenging problem—to put it mildly.

In their article “Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR” which is forthcoming in the Harvard Journal of Law & Technology, Sandra Wachter, Brent Mittelstadt, and Chris Russell present the concept of “unconditional counterfactual explanations” as a novel type of explanation of automated decisions that could address many of these challenges. Counterfactual explanations describe the minimum conditions that would have led to an alternative decision (e.g. a bank loan being approved), without the need to describe the full logic of the algorithm.

Relying on counterfactual explanations as a means to help us act rather than merely to understand could help us gauge the scope and impact of automated decisions in our lives. They might also help bridge the gap between the interests of data subjects and data controllers, which might otherwise be a barrier to a legally binding right to explanation.

We caught up with the authors to explore the role of algorithms in our everyday lives, and how a “right to explanation” for decisions might be achievable in practice:

Ed: There’s a lot of discussion about algorithmic “black boxes” — where decisions are made about us, using data and algorithms about which we (and perhaps the operator) have no direct understanding. How prevalent are these systems?

Sandra: Basically, every decision that can be made by a human can now be made by an algorithm. Which can be a good thing. Algorithms (when we talk about artificial intelligence) are very good at spotting patterns and correlations that even experienced humans might miss, for example in predicting disease. They are also very cost efficient—they don’t get tired, and they don’t need holidays. This could help to cut costs, for example in healthcare.

Algorithms are also certainly more consistent than humans in making decisions. We have the famous example of judges varying the severity of their judgements depending on whether or not they’ve had lunch. That wouldn’t happen with an algorithm. That’s not to say algorithms are always going to make better decisions: but they do make more consistent ones. If the decision is bad, it’ll be distributed equally, but still be bad. Of course, in a certain way humans are also black boxes—we don’t understand what humans do either. But you can at least try to understand an algorithm: it can’t lie, for example.

Brent: In principle, any sector involving human decision-making could be prone to decision-making by algorithms. In practice, we already see algorithmic systems either making automated decisions or producing recommendations for human decision-makers in online search, advertising, shopping, medicine, criminal justice, etc. The information you consume online, the products you are recommended when shopping, the friends and contacts you are encouraged to engage with, even assessments of your likelihood to commit a crime in the immediate and long-term future—all of these tasks can currently be affected by algorithmic decision-making.

Ed: I can see that algorithmic decision-making could be faster and better than human decisions in many situations. Are there downsides?

Sandra: Simple algorithms that follow a basic decision tree (with parameters decided by people) can be easily understood. But we’re now also using much more complex systems like neural nets that act in a very unpredictable way, and that’s the problem. The system is also starting to become autonomous, rather than being under the full control of the operator. You will see the output, but not necessarily why it got there. This also happens with humans, of course: I could be told by a recruiter that my failure to land a job had nothing to do with my gender (even if it did); an algorithm, however, would not intentionally lie. But of course the algorithm might be biased against me if it’s trained on biased data—thereby reproducing the biases of our world.

We have seen that the COMPAS algorithm used by US judges to calculate the probability of re-offending when making sentencing and parole decisions is a major source of discrimination. Data provenance is massively important, and probably one of the reasons why we have biased decisions. We don’t necessarily know where the data comes from, and whether it’s accurate, complete, biased, etc. We need to have lots of standards in place to ensure that the data set is unbiased. Only then can the algorithm produce nondiscriminatory results.

A more fundamental problem with predictions is that you might never know what would have happened—as you’re just dealing with probabilities; with correlations in a population, rather than with causalities. Another problem is that algorithms might produce correct decisions, but not necessarily fair ones. We’ve been wrestling with the concept of fairness for centuries, without consensus. But lack of fairness is certainly something the system won’t correct itself—that’s something that society must correct.

Brent: The biases and inequalities that exist in the real world and in real people can easily be transferred to algorithmic systems. Humans training learning systems can inadvertently or purposefully embed biases into the model, for example through labelling content as ‘offensive’ or ‘inoffensive’ based on personal taste. Once learned, these biases can spread at scale, exacerbating existing inequalities. Eliminating these biases can be very difficult, hence we currently see much research done on the measurement of fairness or detection of discrimination in algorithmic systems.

These systems can also be very difficult—if not impossible—to understand, for experts as well as the general public. We might traditionally expect to be able to question the reasoning of a human decision-maker, even if imperfectly, but the rationale of many complex algorithmic systems can be highly inaccessible to people affected by their decisions. These potential risks aren’t necessarily reasons to forego algorithmic decision-making altogether; rather, they can be seen as potential effects to be mitigated through other means (e.g. a loan programme weighted towards historically disadvantaged communities), or at least to be weighed against the potential benefits when choosing whether or not to adopt a system.

Ed: So it sounds like many algorithmic decisions could be too complex to “explain” to someone, even if a right to explanation became law. But you propose “counterfactual explanations” as an alternative— i.e. explaining to the subject what would have to change (e.g. about a job application) for a different decision to be arrived at. How does this simplify things?

Brent: So rather than trying to explain the entire rationale of a highly complex decision-making process, counterfactuals allow us to provide simple statements about what would have needed to be different about an individual’s situation to get a different, preferred outcome. You basically work from the outcome: you say “I am here; what is the minimum I need to do to get there?” By providing simple statements that are generally meaningful, and that reveal a small bit of the rationale of a decision, the individual has grounds to change their situation or contest the decision, regardless of their technical expertise. Understanding even a bit of how a decision is made is better than being told “sorry, you wouldn’t understand”—at least in terms of fostering trust in the system.

Sandra: And the nice thing about counterfactuals is that they work with highly complex systems, like neural nets. They don’t explain why something happened, but they explain what happened. And three things people might want to know are:

(1) What happened: why did I not get the loan (or get refused parole, etc.)?

(2) Information so I can contest the decision if I think it’s inaccurate or unfair.

(3) Even if the decision was accurate and fair, tell me what I can do to improve my chances in the future.

Machine learning and neural nets make use of so much information that individuals have really no oversight of what they’re processing, so it’s much easier to give someone an explanation of the key variables that affected the decision. With the counterfactual idea of a “close possible world” you give an indication of the minimal changes required to get what you actually want.
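To illustrate the “close possible world” idea, here is a minimal sketch that searches for the smallest change to a toy loan applicant’s features that flips a classifier’s decision. It is not the paper’s implementation (the authors frame counterfactual search as an optimisation problem); the model, features, and brute-force grid are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy loan model: features are [income_k, debt_k]; the rule and data are invented.
rng = np.random.default_rng(0)
X = rng.uniform([10, 0], [80, 40], size=(500, 2))
y = (X[:, 0] - 1.5 * X[:, 1] > 25).astype(int)        # 1 = loan approved
model = LogisticRegression().fit(X, y)

def nearest_counterfactual(x, model, step=1.0, max_change=30.0):
    """Smallest L1 change to x (searched on a coarse grid) that flips the decision."""
    original = model.predict([x])[0]
    deltas = np.arange(-max_change, max_change + step, step)
    grid = np.array([[di, dd] for di in deltas for dd in deltas])
    candidates = x + grid
    flipped = model.predict(candidates) != original
    if not flipped.any():
        return None
    distances = np.abs(grid).sum(axis=1)
    return candidates[flipped][np.argmin(distances[flipped])]

applicant = np.array([30.0, 10.0])               # refused under the toy model
print(nearest_counterfactual(applicant, model))  # e.g. approved with income raised to ~£40k
```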

Ed: So would a series of counterfactuals (e.g. “over 18” “no prior convictions” “no debt”) essentially define a space within which a certain decision is likely to be reached? This decision space could presumably be graphed quite easily, to help people understand what factors will likely be important in reaching a decision?

Brent: This would only work for highly simplistic, linear models, which are not normally the type that confound human capacities for understanding. The complex systems that we refer to as ‘black boxes’ are highly dimensional and involve a multitude of (probabilistic) dependencies between variables that can’t be graphed simply. It may be the case that if I were aged between 35-40 with an income of £30,000, I would not get a loan. But, I could be told that if I had an income of £35,000, I would have gotten the loan. I may then assume that an income over £35,000 guarantees me a loan in the future. But, it may turn out that I would be refused a loan with an income above £40,000 because of a change in tax bracket. Non-linear relationships of this type can make it misleading to graph decision spaces. For simple linear models, such a graph may be a very good idea, but not for black box systems; they could, in fact, be highly misleading.

Chris: As Brent says, we’re concerned with understanding complicated algorithms that don’t just use hard cut-offs based on binary features. To use your example, maybe a little bit of debt is acceptable, but it would increase your risk of default slightly, so the amount of money you need to earn would go up. Or maybe certain past convictions also only increase your risk of defaulting slightly, and can be compensated for with a higher salary. It’s not at all obvious how you could graph these complicated interdependencies over many variables together. This is why we settled on counterfactuals as a way to give people a direct and easy-to-understand path to move from the decision they got now to a more favourable one at a later date.

Ed: But could a counterfactual approach just end up kicking the can down the road, if we know “how” a particular decision was reached, but not “why” the algorithm was weighted in such a way to produce that decision?

Brent: It depends what we mean by “why”. If this is “why” in the sense of, why was the system designed this way, to consider this type of data for this task, then we should be asking these questions while these systems are designed and deployed. Counterfactuals address decisions that have already been made, but still can reveal uncomfortable knowledge about a system’s design and functionality. So it can certainly inform “why” questions.

Sandra: Just to echo Brent, we don’t want to imply that asking the “why” is unimportant—I think it’s very important, and interpretability as a field has to be pursued, particularly if we’re using algorithms in highly sensitive areas. Even if we have the “what”, the “why” question is still necessary to ensure the safety of those systems.

Chris: And anyone who’s talked to a three-year old knows there is an endless stream of “Why” questions that can be asked. But already, counterfactuals provide a major step forward in answering why, compared to previous approaches that were concerned with providing approximate descriptions of how algorithms make decisions—but not the “why” or the external facts leading to that decision. I think when judging the strength of an explanation, you also have to look at questions like “How easy is this to understand?” and “How does this help the person I’m explaining things to?” For me, counterfactuals are a more immediately useful explanation, than something which explains where the weights came from. Even if you did know, what could you do with that information?

Ed: I guess the question of algorithmic decision making in society involves a hugely complex intersection of industry, research, and policy making? Are we in control of things?

Sandra: Artificial intelligence (and the technology supporting it) is an area where many sectors are now trying to work together, including in the crucial areas of fairness, transparency and accountability of algorithmic decision-making. I feel at the moment we see a very multi-stakeholder approach, and I hope that continues in the future. We can see for example that industry is very concerned with it—the Partnership on AI is addressing these topics and trying to come up with a set of industry guidelines, recognising the responsibilities inherent in producing these systems. There are also lots of data scientists (e.g. at the OII and Turing Institute) working on these questions. Policy-makers around the world (e.g. UK, EU, US, China) are preparing their countries for the AI future, so it’s on everybody’s mind at the moment. It’s an extremely important topic.

Law and ethics obviously have an important role to play. The opacity and unpredictability of AI, and its potentially discriminatory nature, require that we think about the legal and ethical implications very early on. That starts with educating the coding community, and ensuring diversity. At the same time, it’s important to have an interdisciplinary approach. At the moment we’re focusing a bit too much on the STEM subjects; there’s a lot of funding going to those areas (which makes sense, obviously), but the social sciences are currently a bit neglected despite the major role they play in recognising things like discrimination and bias, which you might not recognise from just looking at code.

Brent: Yes—and we’ll need much greater interaction and collaboration between these sectors to stay ‘in control’ of things, so to speak. Policy always has a tendency to lag behind technological developments; the challenge here is to stay close enough to the curve to prevent major issues from arising. The potential for algorithms to transform society is massive, so ensuring a quicker and more reflexive relationship between these sectors than normal is absolutely critical.

Read the full article: Sandra Wachter, Brent Mittelstadt, Chris Russell (2018) Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR. Harvard Journal of Law & Technology (Forthcoming).

This work was supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1.


Sandra Wachter, Brent Mittelstadt and Chris Russell were talking to blog editor David Sutcliffe.

Why we shouldn’t be pathologizing online gaming before the evidence is in https://ensr.oii.ox.ac.uk/why-we-shouldnt-be-pathologizing-online-gaming-before-the-evidence-is-in/ Tue, 10 Oct 2017 09:25:02 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4446 Internet-based video games are a ubiquitous form of recreation pursued by the majority of adults and young people. With sales eclipsing box office receipts, games are now an integral part of modern leisure. However, the American Psychiatric Association (APA) recently identified Internet Gaming Disorder (IGD) as a potential psychiatric condition and has called for research to investigate the potential disorder’s validity and its impacts on health and behaviour.

Research responding to this call for a better understanding of IGD is still at a formative stage, and there are active debates surrounding it. There is a growing literature that suggests there is a basis to expect that excessive or problematic gaming may be related to lower health, though findings in this area are mixed. Some argue for a theoretical framing akin to a substance abuse disorder (i.e. where gaming is considered to be inherently addictive), while others frame Internet-based gaming as a self-regulatory challenge for individuals.

In their article “A prospective study of the motivational and health dynamics of Internet Gaming Disorder“, Netta Weinstein, the OII’s Andrew Przybylski, and Kou Murayama address this gap in the literature by linking self-regulation and Internet Gaming Disorder research. Drawing on a representative sample of 5,777 American adults they examine how problematic gaming emerges from a state of individual “dysregulation” and how it predicts health — finding no evidence directly linking IGD to health over time.

This negative finding indicates that IGD may not, in itself, be robustly associated with important clinical outcomes. As such, it may be premature to invest in management of IGD using the same kinds of approaches taken in response to substance-based addiction disorders. Further, the findings suggest that more high-quality evidence regarding clinical and behavioural effects is needed before concluding that IGD is a legitimate candidate for inclusion in future revisions of the Diagnostic and Statistical Manual of Mental Disorders.

We caught up with Andy to explore the implications of the study:

Ed: To ask a blunt question upfront: do you feel that Internet Gaming Disorder is a valid psychiatric condition (and that “games can cause problems”)? Or is it still too early to say?

Andy: No, it is not. It’s difficult to overstate how sceptical the public should be of researchers who claim, and communicate their research, as if Internet addiction, gaming addiction, or Internet gaming disorder (IGD) were recognized psychiatric disorders. The fact of the matter is that American psychiatrists working on the most recent revision of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) highlighted that problematic online play was a topic they were interested in learning more about. These concerns are highlighted in Section III of the DSM-5 (entitled “Emerging Measures and Models”). For those interested in this debate see this position paper.

Ed: Internet gaming seems like quite a specific activity to worry about: how does it differ from things like offline video games, online gambling and casino games; or indeed the various “problematic” uses of the Internet that lead to people admitting themselves to digital detox camps?

Andy: In some ways computer games, and Internet ones, are distinct from other activities. They are frequently updated to meet players’ expectations, and some business models, such as pay-to-play, are explicitly designed to target highly engaged players and get them to spend real money on in-game advantages. Detox camps are very worrying to me as a scientist because they have no scientific basis, many of those who run them have financial conflicts of interest when they comment in the press, and there have been a number of deaths at these facilities.

Ed: You say there are two schools of thought: that if IGD is indeed a valid condition, that it should be framed as an addiction, i.e. that there’s something inherently addictive about certain games. Alternatively, that it should be framed as a self-regulatory challenge, relating to an individual’s self-control. I guess intuitively it might involve a bit of both: online environments can be very persuasive, and some people are easily persuaded?

Andy: Indeed it could be. As researchers mainly interested in self-regulation we’re most interested in gaming as one of many activities that can be successfully (or unsuccessfully) integrated into everyday life. Unfortunately we don’t know much for sure about whether there is something inherently addictive about games, because the research literature is largely based on inferences drawn from correlational data, from convenience samples, with post-hoc analyses. Because the evidence base is of such low quality, most of the published findings (i.e. correlations/factor analyses) regarding gaming addiction supporting it as a valid condition likely suffer from the Texas Sharpshooter Fallacy.

Ed: Did you examine the question of whether online games may trigger things like anxiety, depression, violence, isolation etc. — or whether these conditions (if pre-existing) might influence the development of IGD?

Andy: Well, our modelling focused on the links between Internet Gaming Disorder, health (mental, physical, and social), and motivational factors (feeling competent, choiceful, and a sense of belonging) examined at two time points six months apart. We found that those who had their motivational needs met at the start of the study were more likely to have higher levels of health six months later and were less likely to say they experienced some of the symptoms of Internet Gaming Disorder.

Though there was no direct link between Internet Gaming Disorder and health six months later, we performed an exploratory analysis (one we did not pre-register) and found an indirect link between Internet Gaming Disorder and health by way of motivational factors. In other words, Internet Gaming Disorder was linked to lower levels of feeling competent, choiceful, and connected, which was in turn linked to lower levels of health.
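For readers curious what such an indirect-effect (mediation) test looks like, here is a minimal regression-based sketch on synthetic data. It is not the authors’ model or code; the variable names and effect sizes are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5777                                   # sample size reported for the study
igd = rng.normal(size=n)                   # IGD symptom score at time 1 (standardised)
needs = -0.3 * igd + rng.normal(size=n)    # need satisfaction, partly explained by IGD
health = 0.4 * needs + rng.normal(size=n)  # health at time 2; no direct IGD effect built in

def ols(y, X):
    return sm.OLS(y, sm.add_constant(X)).fit()

a = ols(needs, igd).params[1]                    # path: IGD -> need satisfaction
m = ols(health, np.column_stack([needs, igd]))   # needs + IGD -> health
b, direct = m.params[1], m.params[2]
print(f"indirect effect (a*b): {a * b:.3f}, direct effect: {direct:.3f}")
```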

Ed: All games are different. How would a clinician identify if someone was genuinely “addicted” to a particular game — there would presumably have to be game-by-game ratings of their addictive potential (like there are with drugs). How would anyone find the time to do that? Or would diagnosis focus more on the individual’s behaviour, rather than what games they play? I suppose this goes back to the question of whether “some games are addictive” or whether just “some people have poor self-control”?

Andy: No one knows. In fact, the APA doesn’t define what “Internet Games” are. In our research we ask participants to define it for themselves by “Think[ing] about the Internet games you may play on Facebook (e.g. Farmville), Tablet/Smartphones (e.g. Candy Crush), or Computer/Consoles (e.g. Minecraft).” It’s very difficult to overstate how suboptimal this state of affairs is from a scientific perspective.

Ed: Is it odd that it was the APA’s Substance-Related Disorders Work Group that has called for research into IGD? Are “Internet Games” unique in being classed as a substance, or are there other information based-behaviours that fall under the group’s remit?

Andy: Yes it’s very odd. Our research group is not privy to these discussions but my understanding is that a range of behaviours and other technology-related activities, such as general Internet use have been discussed.

Ed: A huge amount of money must be spent on developing and refining these games, i.e. to get people to spend as much time (and money) as possible playing them. Are academics (and clinicians) always going to be playing catch-up to industry?

Andy: I’m not sure that there is one answer to this. One useful way to think of online games is using the example of a gym. Gyms are most profitable when many people are paying for (and don’t cancel) their memberships but owners can still maintain a small footprint. The world’s most successful gym might be a one square meter facility with seven billion members where no one ever goes. Many online games are like this: some costs scale nicely, but others are high, like servers, community management, upkeep, and power. There are many studying the addictive potential of games, but because they constantly reinvent the wheel by creating duplicate survey instruments (there are literally dozens that are only used once or a couple of times), very little of real-world relevance is ever learned or transmitted to the public.

Ed: It can’t be trivial to admit another condition into the Diagnostic and Statistical Manual of Mental Disorders (DSM-5)? Presumably there must be firm (reproducible) evidence that it is a (persistent) problem for certain people, with a specific (identifiable) cause — given it could presumably be admitted in courts as a mitigating condition, and possibly also have implications for health insurance and health policy? What are the wider implications if it does end up being admitted to the DSM-5?

Andy: It is very serious stuff. Opening the door to pathologizing one of the world’s most popular recreational activities risks stigmatizing hundreds of millions of people and shifting resources in already overstretched mental health systems, pushing them past the breaking point.

Ed: You note that your study followed a “pre-registered analysis plan” — what does that mean?

Andy: We’ve discussed the wider problems in social, psychological, and medical science before. But basically, preregistration and Registered Reports provide scientists with a way to record their hypotheses in advance of data collection. This improves the quality of the inferences researchers draw from experiments and large-scale social data science. In this study, and also in our other work, we recorded our sampling plan, our analysis plan, and our materials before we collected our data.

Ed: And finally: what follow up studies are you planning?

Andy: We are now conducting a series of studies investigating problematic play in younger participants with a focus on child-caregiver dynamics.

Read the full article: Weinstein N, Przybylski AK, Murayama K. (2017) A prospective study of the motivational and health dynamics of Internet Gaming Disorder. PeerJ 5:e3838 https://doi.org/10.7717/peerj.3838

Additional peer-reviewed articles in this area by Andy include:

Przybylski, A.K. & Weinstein N. (2017). A Large-Scale Test of the Goldilocks Hypothesis: Quantifying the Relations Between Digital Screens and the Mental Well-Being of Adolescents. Psychological Science. DOI: 10.1177/0956797616678438.

Przybylski, A. K., Weinstein, N., & Murayama, K. (2016). Internet Gaming Disorder: Investigating the Clinical Relevance of a New Phenomenon. American Journal of Psychiatry. DOI: 10.1176/appi.ajp.2016.16020224.

Przybylski, A. K. (2016). Mischievous responding in Internet Gaming Disorder research. PeerJ, 4, e2401. https://doi.org/10.7717/peerj.2401

For more on the ongoing “crisis in psychology” and how pre-registration of studies might offer a solution, see this discussion with Andy and Malte Elson: Psychology is in crisis, and here’s how to fix it.

Andy Przybylski was talking to blog editor David Sutcliffe.

Censorship or rumour management? How Weibo constructs “truth” around crisis events https://ensr.oii.ox.ac.uk/censorship-or-rumour-management-how-weibo-constructs-truth-around-crisis-events/ Tue, 03 Oct 2017 08:48:50 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4350 As social media become increasingly important as a source of news and information for citizens, there is a growing concern over the impacts of social media platforms on information quality — as evidenced by the furore over the impact of “fake news”. Driven in part by the apparently substantial impact of social media on the outcomes of Brexit and the US Presidential election, various attempts have been made to hold social media platforms to account for presiding over misinformation, with recent efforts to improve fact-checking.

There is a large and growing body of research examining rumour management on social media platforms. However, most of these studies treat it as a technical matter, and little attention has been paid to the social and political aspects of rumour. In their Policy & Internet article “How Social Media Construct ‘Truth’ Around Crisis Events: Weibo’s Rumor Management Strategies after the 2015 Tianjin Blasts“, Jing Zeng, Chung-hong Chan and King-wa Fu examine the content moderation strategies of Sina Weibo, China’s largest microblogging platform, in regulating discussion of rumours following the 2015 Tianjin blasts.

Studying rumour communication in relation to the manipulation of social media platforms is particularly important in the context of China. In China, Internet companies are licensed by the state, and their businesses must therefore be compliant with Chinese law and collaborate with the government in monitoring and censoring politically sensitive topics. Given most Chinese citizens rely heavily on Chinese social media services as alternative information sources or as grassroots “truth”, the anti-rumour policies have raised widespread concern over the implications for China’s online sphere. As there is virtually no transparency in rumour management on Chinese social media, it is an important task for researchers to investigate how Internet platforms engage with rumour content and any associated impact on public discussion.

We caught up with the authors to discuss their findings:

Ed.: “Fake news” is currently a very hot issue, with Twitter and Facebook both exploring mechanisms to try to combat it. On the flip-side we have state-sponsored propaganda now suddenly very visible (e.g. Russia), in an attempt to reduce trust, destabilise institutions, and inject rumour into the public sphere. What is the difference between rumour, propaganda and fake news; and how do they play out online in China?

Jing / Chung-hong / King-wa: The definition of rumour is very fuzzy, and it is very common to see ‘rumour’ being used interchangeably with other related concepts. Our study drew the definition of rumour from the fields of sociology and social psychology, wherein this concept has been most thoroughly articulated.

Rumour is a form of unverified information circulated in uncertain circumstances. The major difference between rumour and propaganda lies in their functions. Rumour sharing is a social practice of sense-making, therefore it functions to help people make meaning of an uncertain situation. In contrast, the concept of propaganda is more political. Propaganda is a form of information strategically used to mobilise political support for a political force.

Fake news is a new buzz word and works closely with another buzz term – post-truth. There is no established and widely accepted definition of fake news, and its true meaning(s) should be understood with respect to specific contexts. For example, Donald Trump’s use of “fake news” in his tweets aims to attack a few media outlets who have reported unfavourable stories about him, whereas ungrounded and speculative “fake news” is created and widely circulated on the public’s social media. If we simply understand fake news as a form of fabricated news, I would argue that fake news can operate as rumour, propaganda, or both.

It is worth pointing out that, in the Chinese contexts, rumour may not always be fake and propaganda is not necessarily bad. As pointed out by different scholars, rumour functions as a social protest against the authoritarian state’s information control. And in the Chinese language, the Mandarin term Xuanchuan (‘propaganda’) does not always have the same negative connotation as does its English counterpart.

Ed.: You mention previous research finding that the “Chinese government’s propaganda and censorship policies were mainly used by the authoritarian regime to prevent collective action and to maintain social stability” — is that what you found as well? i.e. that criticism of the Government is tolerated, but not organised protest?

Jing / Chung-hong / King-wa: This study examined rumour communication around the 2015 Tianjin blasts, therefore our analyses did not directly address Weibo users’ attempts to organise protest. However, regarding the Chinese government’s response to Weibo users’ criticism of its handling of the crisis, our study suggested that some criticisms of the government were tolerated. For example, the messages about local government officials’ mishandling of the crisis were not heavily censored. Instead, what we found seems to confirm that social stability is of paramount importance for the ruling regime and thus online censorship was used as a means to maintain social stability. It explains Weibo’s decision to silence the discussions on the assault of a CNN reporter, the chaotic aftermath of the blasts, and the local media’s reluctance to broadcast the blasts.

Ed.: What are people’s responses to obvious government attempts to censor or head-off online rumour, e.g. by deleting posts or issuing statements? And are people generally supportive of efforts to have a “clean, rumour-free Internet”, or cynical about the ultimate intentions or effects of censorship?

Jing / Chung-hong / King-wa: From our time series analysis, we found different responses from netizens depending on the topic, but we could not find a consistent pattern of a chilling effect. Basically, the Weibo rumour management strategies, either deleting posts or refuting posts, will usually stimulate more public interest. At least as shown in our data, netizens are not supportive of those censorship efforts and somehow end up posting more rumour messages as a counter-reaction.
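A minimal sketch of the kind of before/after comparison involved, using invented post timestamps and an assumed intervention date rather than the study’s data:

```python
import pandas as pd

# Timestamps of rumour-related posts (invented for illustration).
timestamps = pd.to_datetime([
    "2015-08-13 10:00", "2015-08-13 14:30", "2015-08-14 09:15",
    "2015-08-15 11:45", "2015-08-15 18:20", "2015-08-16 08:05",
])
intervention = pd.Timestamp("2015-08-15")                    # hypothetical deletion wave

daily = pd.Series(1, index=timestamps).resample("D").sum()   # posts per day
before = daily[daily.index < intervention].mean()
after = daily[daily.index >= intervention].mean()
print(f"mean posts/day before: {before:.1f}, after: {after:.1f}")
```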

Ed.: Is online rumour particularly a feature of contemporary Chinese society — or do you think that’s just a human thing (we’ve certainly seen lots of lying in the Brexit and Trump campaigns)? How might rumour relate more generally to levels of trust in institutions, and the presence of a strong, free press?

Jing / Chung-hong / King-wa: Online rumour is common in China, but it can be also pervasive in any country where use of digital technologies for communication is prevalent. Rumour sharing is a human thing, yes you can say that. But it is more accurate to say, it is a societally constructed thing. As mentioned earlier, rumour is a social practice of collective sense-making under uncertain circumstances.

Levels of public trust in governmental organisations and the media can directly impact rumour circulation, and rumour-debunking efforts. When there is a lack of public trust in official sources of information, it opens up room for rumour circulation. Likewise, when the authorities have low credibility, the official rumour debunking efforts can backfire, because the public may think the authorities are trying to hide something. This might explain what we observed in our study.

Ed.: I guess we live in interesting times; Theresa May now wants to control the Internet, Trump is attacking the very institution of the press, social media companies are under pressure to accept responsibility for the content they host. What can we learn from the Chinese case, of a very sophisticated system focused on social control and stability?

Jing / Chung-hong / King-wa: The most important implication of this study is that the most sophisticated rumour control mechanism can only be developed on a good understanding of the social roots of rumour. As our study shows, without solving the more fundamental social cause of rumour, rumour debunking efforts can backfire.


Read the full article: Jing Zeng, Chung-hong Chan and King-wa Fu (2017) How Social Media Construct ‘Truth’ Around Crisis Events: Weibo’s Rumor Management Strategies after the 2015 Tianjin Blasts. Policy & Internet 9 (3) 297-320. DOI: 10.1002/poi3.155

Jing Zeng, Chung-hong Chan and King-wa Fu were talking to blog editor David Sutcliffe.

Does Internet voting offer a solution to declining electoral turnout? https://ensr.oii.ox.ac.uk/does-internet-voting-offer-a-solution-to-declining-electoral-turnout/ Tue, 19 Sep 2017 09:27:57 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4379 E-voting has been discussed as one possible remedy for the continuing decline in turnout in Western democracies. In their Policy & Internet article “Could Internet Voting Halt Declining Electoral Turnout? New Evidence that e-Voting is Habit-forming”, Mihkel Solvak and Kristjan Vassil examine the degree to which e-voting is more habit-forming than paper voting. Their findings indicate that while e-voting doesn’t seem to raise turnout, it might at least arrest its continuing decline. And any technology capable of stabilizing turnout is worth exploring.

Using cross-sectional survey data from five e-enabled elections in Estonia — a country with a decade’s experience of nationwide remote Internet voting — the authors show e-voting to be strongly persistent among voters, with clear evidence of habit formation. While a technological fix probably won’t address the underlying reasons for low turnout, it could help stop further decline by making voting easier for those who are more likely to turn out. Arresting turnout decline by keeping those who participate participating might be one realistic goal that e-voting is able to achieve.
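A minimal sketch of how such persistence might show up in survey data, using an invented handful of respondents rather than the authors’ Estonian survey data:

```python
import pandas as pd

# One row per respondent who reported a voting mode in two consecutive elections.
voters = pd.DataFrame({
    "mode_t":  ["e-vote", "e-vote", "paper", "paper", "e-vote", "abstain"],
    "mode_t1": ["e-vote", "e-vote", "paper", "abstain", "e-vote", "paper"],
})

# Row-normalised transition matrix: P(mode at election t+1 | mode at election t).
transitions = pd.crosstab(voters["mode_t"], voters["mode_t1"], normalize="index")
print(transitions.round(2))
# A high e-vote -> e-vote probability relative to paper -> paper is the kind of
# persistence pattern consistent with habit formation.
```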

We caught up with the authors to discuss their findings:

Ed.: There seems to be a general trend of declining electoral turnouts worldwide. Is there any form of consensus (based on actual data) on why voting rates are falling?

Mihkel / Kristjan: A consensus in terms of a single major source of turnout decline that the data points to worldwide is clearly lacking. There is however more of an agreement as to why certain regions are experiencing a comparatively steeper decline. Disenchantment with democracy and an overall disappointment in politics is the number one reason usually listed when discussing lower and declining turnout levels in new democracies.

While the same issues are nowadays also listed for older established democracies, there is no hard comparative evidence for it. We do know that the level of interest in and engagement with politics has declined across the board in Western Europe when compared to the 1960-70s, but this doesn’t count as disenchantment, and the clear decline in turnout levels in established democracies started a couple of decades later, in the early 1990s.

Given that turnout levels are still widely different depending on the country, the overall worldwide decline is probably a combination of the addition of new democracies with low and more-rapidly declining turnout levels, and a plethora of country-specific reasons in older democracies that are experiencing a somewhat less steep decline in turnout.

Ed.: Is the worry about voting decline really about “falling representation” per se, or that it might be symptomatic of deeper problems with the political ecosystem, i.e. fewer people choosing politics as a career, less involvement in local politics, less civic engagement (etc.). In other words — is falling voting (per se) even the main problem?

Mihkel / Kristjan: We can only agree; it clearly is a symptom of deeper problems. Although high turnout is a good thing, low turnout is not necessarily a problem as people have the freedom not to participate and not to be interested in politics. It becomes a problem when low turnout leads to a lack of legitimacy of the representative body and consequently also of the whole process of representation. And as you rightly point out, real problems start much earlier and at a lower level than voting in parliamentary elections. The paradox is that the technology we have examined in our article — remote internet voting — clearly can’t address these fundamental problems.

Ed.: I’m assuming the Estonian voters were voting remotely online (rather than electronically in a booth), i.e. in their own time, at their convenience? Are you basically testing the effect of offering a more convenient voting format? (And finding that format to be habit-forming?).

Mihkel / Kristjan: Yes. One of the reasons we examined Internet voting from this angle was the apparent paradox of every third vote being cast online but also only a minute increase in turnout. A few other countries also experimenting with electronic voting have seen no tangible differences in turnout levels. The explanation is of course that it is a convenience voting method that makes voting simpler for people who are already quite likely to vote — now they simply use a more convenient option to do so. But what we noted in our article was a clearly higher share of electronic voters who turned out more consistently over different elections in comparison to voters voting on paper, and even when they didn’t show traits that usually correlate with electronic voting, like living further away from polling stations. So convenience did not seem to tell the whole story, even though it might have been one of the original reasons why electronic voting was picked up.

Ed.: Presumably with remote online voting, it’s possible to send targeted advertising to voters (via email and social media), with links to vote, i.e. making it more likely people will vote in the moment, in response to whatever issues happen to be salient at the time. How does online campaigning (and targeting) change once you introduce online voting?

Mihkel / Kristjan: Theoretically, parties should be able to lock voters in more easily by advertising links to the voting solution in their online campaigns; as in banners saying “vote for me and you can do it directly here (linked)”. In the Estonian case there is an informal agreement to refrain from doing that, however, in order to safeguard the neutrality of online voting. Trust in online voting is paramount, even more so than is the case with paper voting, so it probably is a good idea to try to ensure that people trust the online voting solution to be controlled by a neutral state agent tasked with conducting the elections, in order to avoid any possible associations between certain parties and the voting environment (which linking directly to the voting mechanism might cause to happen). That can never be 100% ensured though, so online campaigns coupled with online voting can make it harder for election authorities to convey the image of impartiality of their procedures.

As for voting in the moment, I don’t see online voting as substantially more susceptible to this than other voting modes — given that last-minute developments can influence voters voting on paper as well. I think the latest US and French presidential elections are a case in point. Some argue that the immediate developments and revelations in the Clinton email scandal investigation a couple of weeks before voting day turned the result. In the French case, however, the hacking and release of Macron’s campaign communications immediately before voting day didn’t play a role in the outcome. Voting in the moment will happen or not regardless of the voting mode being used.

Ed.: What do you think the barriers are to greater roll-out of online voting: presumably there are security worries, i.e. over election hacking and lack of a paper trail? (and maybe also worries about the possibility of coercive voting, if it doesn’t take place alone in a booth?)

Mihkel / Kristjan: The number one barrier to greater roll-out remains security worries about hacking. Given that people cannot observe electronic voting (i.e. how their vote arrives at the voting authorities) the role of trust becomes more central than for paper voting. And trust can be eroded easily by floating rumours even without technically compromising voting systems. The solution is to introduce verifiability into the system, akin to a physical ballot in the case of paper voting, but this makes online voting even more technologically complex.

A lot of research is being put into verifiable electronic voting systems to meet very strict security requirements. The funny thing is however that the fears holding back wider online voting are not really being raised for paper voting, even though they should. At a certain stage of the process all paper votes become bits of information in an information system as local polling stations enter or report them into computer systems that are used to aggregate the votes and determine the seat distribution. No election is fully paper based anymore.

Vote coercion of course cannot be ruled out, and it is by definition more likely when the voting authorities don’t exercise control over the immediate voting environment. I think countries that suffer from such problems shouldn’t introduce a system that might exacerbate them even more. But again, most countries allow for multiple modes that differ in the degree of neutrality and control exercised by the election authority. Absentee ballots and postal voting (which is very widespread in some countries, like Switzerland) are as vulnerable to voter coercion as is remote Internet voting. Online voting is simply one mode of voting — maintaining a healthy mix of voting modes is probably the best solution to ensure that elections are not compromised.

Ed.: I guess declining turnout is probably a problem that is too big and complex to be understood or “fixed” — but how would you go about addressing it, if asked to do so..?

Mihkel / Kristjan: We fully agree — the technology of online voting will not fix low turnout as it doesn’t address the underlying problem. It simply makes voting somewhat more convenient. But voting is not difficult in the first place — with weekend voting, postal voting and absentee ballots; just to name a few things that already ease participation.

There are technologies that have a revolutionary effect (i.e. that alter impact and that are truly innovatory) and then there are small technological fixes that provide for a simpler and more pleasurable existence. Online voting is not revolutionary; it does not give a new experience of participation, it is simply one slightly more convenient mode of voting and for that a very worthwhile thing. And I think this is the maximum that can be done and that is within our control when it comes to influencing turnout. Small incremental fixes to a large multifaceted problem.

Read the full article: Mihkel Solvak and Kristjan Vassil (2017) Could Internet Voting Halt Declining Electoral Turnout? New Evidence that e-Voting is Habit-forming. Policy & Internet. DOI: 10.1002/poi3.160
Mihkel Solvak and Kristjan Vassil were talking to blog editor David Sutcliffe.
Introducing Martin Dittus, Data Scientist and Darknet Researcher https://ensr.oii.ox.ac.uk/introducing-martin-dittus-data-scientist-and-darknet-researcher/ Wed, 13 Sep 2017 08:03:16 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4391 We’re sitting upstairs, hunched over a computer, and Martin is showing me the darknet. I guess I have as good an idea as most people what the darknet is, i.e. not much. We’re looking at the page of someone claiming to be in the UK who’s selling “locally produced” cannabis, and Martin is wondering if there’s any way of telling whether it’s blood cannabis. How would you go about determining this? Much of what is sold on these markets is illegal, and can lead to prosecution, as with any market for illegal products.

But we’re not buying anything, just looking. The stringent ethics process governing his research means he currently can’t even contact anyone on the marketplace.

[Read more: Exploring the Darknet in Five Easy Questions]

Martin Dittus is a Data Scientist at the Oxford Internet Institute, and I’ve come to his office to find out about the OII’s investigation (undertaken with Mark Graham and Joss Wright) of the economic geographies of illegal economic activities in anonymous Internet marketplaces, or more simply: “mapping the darknet”. Basically: what’s being sold, by whom, from where, to where, and what’s the overall value?

Between 2011 and 2013, the Silk Road marketplace attracted hundreds of millions of dollars worth of bitcoin-based transactions before being closed down by the FBI, but relatively little is known about the geography of this global trade. The darknet throws up lots of interesting research topics: around traffic in illegal wildlife products, the effect of healthcare policies on demand for illegal prescription drugs, whether law enforcement has (or can have) much of an impact, questions around the geographies of trade (e.g. sites of production and consumption), and the economics of these marketplaces — as well as the ethics of researching all this.

OII researchers tend to come from very different disciplinary backgrounds, and I’m always curious about what brings people here. A computer scientist by training, Martin first worked as a software developer for Last.fm, an online music community that built some of the first pieces of big data infrastructure, “because we had a lot of data and very little money.” In terms of the professional experience he says it showed him how far you can get by being passionate about your work — and the importance of resourcefulness; “that a good answer is not to say, ‘No, we can’t do that,’ but to say: ‘Well, we can’t do it this way, but here are three other ways we can do it instead.’”

Resourcefulness is certainly something you need when researching darknet marketplaces. Two very large marketplaces (AlphaBay and Hansa) were recently taken down by the FBI, DEA and Dutch National Police, part-way through Martin’s data collection. Having your source suddenly disappear is a worry for any long-term data scraping process. However in this case, it raises the opportunity of moving beyond a simple observational study to a quasi-experiment. The disruption allows researchers to observe what happens in the overall marketplace after the external intervention — does trade actually go down, or simply move elsewhere? How resilient are these marketplaces to interference by law enforcement?
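A minimal sketch of that kind of before/after comparison, with invented market names and listing counts rather than project data:

```python
import pandas as pd

# Weekly counts of active listings on markets that stayed online (numbers invented).
listings = pd.DataFrame({
    "market": ["market_A"] * 3 + ["market_B"] * 3,
    "week": [1, 2, 3, 1, 2, 3],
    "active_listings": [4200, 4300, 5900, 1100, 1150, 1800],
})
takedown_week = 3   # hypothetical week in which the large markets went offline

summary = (listings
           .assign(period=listings["week"].map(lambda w: "after" if w >= takedown_week else "before"))
           .groupby(["market", "period"])["active_listings"].mean()
           .unstack())
print(summary)   # growth on surviving markets would suggest displacement rather than reduction
```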

Having originally worked in industry for a few years, Martin completed a Master’s programme at UCL’s Centre for Advanced Spatial Analysis, which included training in cartography. The first time I climbed the three long flights of stairs to his office to say hello we quickly got talking about crisis mapping platforms, something he’d subsequently worked on during his PhD at UCL. He’s particularly interested in the historic context for the recent emergence of these platforms, where large numbers of people come together over a shared purpose: “Platforms like Wikipedia, for example, can have significant social and economic impact, while at the same time not necessarily being designed platforms. Wikipedia is something that kind of emerged, it’s the online encyclopaedia that somehow worked. For me that meant that there is great power in these platform models, but very little understanding of what they actually represent, or how to design them; even how to conceptualise them.”

“You can think of Wikipedia as a place for discourse, as a community platform, as an encyclopaedia, as an example of collective action. There are many theoretical ways to interpret it, and I think this makes it very powerful, but also very hard to understand what Wikipedia is; or indeed any large and complex online platform, like the darknet markets we’re looking at now. I think we’re at a moment in history where we have this new superpower that we don’t fully understand yet, so it’s a time to build knowledge.” Martin claims to have become “a PhD student by accident” while looking for a way to participate in this knowledge building: and found that doing a PhD was a great way to do so.

Whether discussing Wikipedia, crisis-mapping, the darknet, or indeed data infrastructures, it’s great to hear people talking about having to study things from many different angles — because that’s what the OII, as a multidisciplinary department, does in spades. It’s what we do. And Martin certainly agrees: “I feel incredibly privileged to be here. I have a technical background, but these are all intersectional, interdisciplinary, highly complex questions, and you need a cross-disciplinary perspective to look at them. I think we’re at a point where we’ve built a lot of the technological building blocks for online spaces, and what’s important now are the social questions around them: what does it mean, what are those capacities, what can we use them for, and how do they affect our societies?”

Social questions around darknet markets include the development of trust relationships between buyers and sellers (despite the explicit goal of law enforcement agencies to fundamentally undermine trust between them); identifying societal practices like consumption of recreational drugs, particularly when transplanted into a new online context; and the nature of market resilience, like when markets are taken down by law enforcement. “These are not, at core, technical questions,” Martin says. “Technology will play a role in answering them, but fundamentally these are much broader questions. What I think is unique about the OII is that it has a strong technical competence in its staff and research, but also a social, political, and economic science foundation that allows a very broad perspective on these matters. I think that’s absolutely unique.”

There were only a few points in our conversation where Martin grew awkward, a few topics he said he “would kind of dance around” rather than discuss on the record for a blog post. He was keen not to inadvertently provide a how-to guide for obtaining, say, fentanyl on the darknet; there are tricky unanswered questions of class (do these marketplaces allow a gentrification of illegal activities?) and of the whitewashing of the underlying violence and exploitation inherent in these activities (thinking again about blood cannabis); and other areas where there’s simply not yet enough research to make firm pronouncements.

But we’ll certainly touch on some of these areas as we document the progress of the project over the coming months, exploring some maps of the global market as they are released, and also diving into the ethics of researching the darknet; so stay tuned!

Until then, Martin Dittus can be found at:

Web: https://www.oii.ox.ac.uk/people/martin-dittus/
Email: martin.dittus@oii.ox.ac.uk
Twitter: @dekstop

Follow the darknet project at: https://www.oii.ox.ac.uk/research/projects/economic-geog-darknet/

Twitter: @OiiDarknet

Exploring the Darknet in Five Easy Questions https://ensr.oii.ox.ac.uk/exploring-the-darknet-in-five-easy-questions/ Tue, 12 Sep 2017 07:59:09 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4388 Many people are probably aware of something called “the darknet” (also sometimes called the “dark web”), or might have a vague notion of what it is. However, many probably don’t know much about the global flows of drugs, weapons, and other illicit items traded on darknet marketplaces like AlphaBay and Hansa, the two large marketplaces that were recently shut down by the FBI, DEA and Dutch National Police.

We caught up with Martin Dittus, a data scientist working with Mark Graham and Joss Wright on the OII’s darknet mapping project, to find out some basics about darknet markets, and why they’re interesting to study.

Firstly: what actually is the darknet?

Martin: The darknet is simply a part of the Internet you access using anonymising technology, so you can visit websites without being easily observed. This allows you to provide (or access) services online that can’t be tracked easily by your ISP or law enforcement. There are actually many ways in which you can visit the darknet, and it’s not technically hard. The most popular anonymising technology is probably Tor. The Tor browser functions just like Chrome, Internet Explorer or Firefox: it’s a piece of software you install on your machine to then open websites. It might be a bit of a challenge to know which websites you can then visit (you won’t find them on Google), but there are darknet search engines, and community platforms that talk about it.

The term ‘darknet’ is perhaps a little bit misleading, in that a lot of these activities are not as hidden as you might think: it’s inconvenient to access, and it’s anonymising, but it’s not completely hidden from the public eye. Once you’re using Tor, you can see any information displayed on darknet websites, just like you would on the regular internet. It is also important to state that this anonymisation technology is entirely legal. I would personally even argue that such tools are important for democratic societies: in a time where technology allows pervasive surveillance by your government, ISP, or employer, it is important to have digital spaces where people can communicate freely.

And is this also true for the marketplaces you study on the darknet?

Martin: Definitely not! Darknet marketplaces are typically set up to engage in the trading of illicit products and services, and as a result are considered criminal in most jurisdictions. These market platforms use darknet technology to provide a layer of anonymity for the participating vendors and buyers, on websites ranging from smaller single-vendor sites to large trading platforms. In our research, we are interested in the larger marketplaces; these are comparable to Amazon or eBay — platforms which allow many individuals to offer and access a variety of products and services.

The first darknet market platform to acquire some prominence and public reporting was the Silk Road — between 2011 and 2013, it attracted hundreds of millions of dollars’ worth of bitcoin-based transactions, before being shut down by the FBI. Since then, many new markets have been launched, shut down, and replaced by others… Despite the size of such markets, relatively little is known about the economic geographies of the illegal economic activities they host. This is what we are investigating at the Oxford Internet Institute.

And what do you mean by “economic geography”?

Martin: Economic geography tries to understand why certain economic activity happens in some places, but not others. In our case, we might ask where heroin dealers on darknet markets are geographically located, or where in the world illicit weapon dealers tend to offer their goods. We think this is an interesting question to ask for two reasons. First, because it connects to a wide range of societal concerns, including drug policy and public health. Observing these markets allows us to establish an evidence base to better understand a range of societal concerns, for example by tracing the global distribution of certain emergent practices. Second, it falls within our larger research interest of internet geography, where we try to understand the ways in which the internet is a localised medium, and not just a global one as is commonly assumed.

So how do you go about studying something that’s hidden?

Martin: While the strong anonymity on darknet markets makes it difficult to collect data about the geography of actual consumption, there is a large amount of data available about the offered goods and services themselves. These marketplaces are highly structured — just like Amazon, there’s a catalogue of products; every product has a title, a price, and a vendor who you can contact if you have questions. Additionally, public customer reviews allow us to infer trading volumes for each product. All these things are made visible, because these markets seek to attract customers. This allows us to observe large-scale trading activity involving hundreds of thousands of products and services.

Almost paradoxically, these “hidden” dark markets allow us to make visible something that happens at a societal level that otherwise could be very hard to research. By comparison, studying the distribution of illicit street drugs would involve the painstaking investigative work of speaking to individuals and slowly trying to acquire the knowledge of what is on offer and what kind of trading activity takes place; on the darknet it’s all right there. There are of course caveats: for example, many markets allow hidden listings, which means we don’t know if we’re looking at all the activity. Also, some markets are more secretive than others. Our research is limited to platforms that are relatively open to the public.
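To give a flavour of what working with such structured listing data might look like, here is a minimal sketch that aggregates scraped listings into country-level estimates, using public review counts as a rough proxy for completed trades. The field names, file layout, and the review-count heuristic are illustrative assumptions, not the project's actual pipeline.

```python
import pandas as pd

# listings.csv (assumed layout): one row per scraped listing, with the
# product title, price in USD, the vendor's stated shipping origin,
# a product category, and the number of public customer reviews.
listings = pd.read_csv("listings.csv")

# Treat each public review as evidence of (at least) one completed trade.
listings["est_revenue"] = listings["price_usd"] * listings["num_reviews"]

by_country = (
    listings.groupby(["ships_from", "category"])
            .agg(n_listings=("title", "count"),
                 est_trades=("num_reviews", "sum"),
                 est_revenue=("est_revenue", "sum"))
            .sort_values("est_revenue", ascending=False)
)
print(by_country.head(10))
```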

Finally: will you be sharing some of the data you’re collecting?

Martin: This is definitely our intention! We have been scraping the largest marketplaces, and are now building a reusable dataset with geographic information at the country level. Initially, this will be used to support some of our own studies. We are currently mapping, visualising, and analysing the data, building a fairly comprehensive picture of darknet market trades. It is also important for us to state that we’re not collecting detailed consumption profiles of participating individuals (not that we could). We are independent academic researchers, and work neither with law enforcement, nor with platform providers.

Primarily, we are interested in the activity as a large-scale global phenomenon, and for this purpose, it is sufficient to look at trading data in the aggregate. We’re interested in scenarios that might allow us to observe and think about particular societal concerns, and then measure the practices around those concerns in ways that are quite unusual, that otherwise would be very challenging. Ultimately, we would like to find ways of opening up the data to other researchers, and to the wider public. There are a number of practical questions attached to this, and the specific details are yet to be decided — so stay tuned!

Martin Dittus is a researcher and data scientist at the Oxford Internet Institute, where he studies the economic geography of darknet marketplaces. More: @dekstop

Follow the project here: https://www.oii.ox.ac.uk/research/projects/economic-geog-darknet/

Twitter: @OiiDarknet

 



Martin Dittus was talking to OII Managing Editor David Sutcliffe.

Digital platforms are governing systems — so it’s time we examined them in more detail https://ensr.oii.ox.ac.uk/digital-platforms-are-governing-systems-so-its-time-we-examined-them-in-more-detail/ Tue, 29 Aug 2017 09:49:29 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4346 Digital platforms are not just software-based media, they are governing systems that control, interact, and accumulate. As surfaces on which social action takes place, digital platforms mediate — and to a considerable extent, dictate — economic relationships and social action. By automating market exchanges they solidify relationships into material infrastructure, lend a degree of immutability and traceability to engagements, and render what previously would have been informal exchanges into much more formalized rules.

In his Policy & Internet article “Platform Logic: An Interdisciplinary Approach to the Platform-based Economy“, Jonas Andersson Schwarz argues that digital platforms enact a twofold logic of micro-level technocentric control and macro-level geopolitical domination, while supporting a range of generative outcomes between the two levels. Technology isn’t ‘neutral’, and what designers want may clash with what users want: so it’s important that we take a multi-perspective view of the role of digital platforms in contemporary society. For example, if we only consider the technical, we’ll notice modularity, compatibility, compliance, flexibility, mutual subsistence, and cross-subsidization. By contrast, if we consider ownership and organizational control, we’ll observe issues of consolidation, privatization, enclosure, financialization and protectionism.

When focusing on local interactions (e.g. with users), the digital nature of platforms is seen to strongly determine structure; essentially representing an absolute or totalitarian form of control. When we focus on geopolitical power arrangements in the “platform society”, patterns can be observed that are worryingly suggestive of market dominance, colonization, and consolidation. Concerns have been expressed that these (overwhelmingly US-based) platform giants are not only enacting hegemony, but are on a road to “usurpation through tech — a worry that these companies could grow so large and become so deeply entrenched in world economies that they could effectively make their own laws”.

We caught up with Jonas to discuss his findings:

Ed.: You say that there are lots of different ways of considering “platforms”: what (briefly) are some of these different approaches, and why should they be linked up a bit? Certainly the conference your paper was presented at (“IPP2016: The Platform Society”) seemed to have struck an incredibly rich seam in this topic, and I think showed the value of approaching an issue like digital platforms from multiple disciplinary angles.

Jonas: In my article I’ve chosen to exclusively theorize *digital* platforms, which of course narrows down the meaning of the concept, to begin with. There are different interpretations as to what actually constitutes a digital platform. There has to be an element of proprietary control over the surface on which interaction takes place, for example. Free software and open protocols, while ubiquitous digital tools, need not necessarily be considered platforms, whereas proprietary operating systems should.

Within contemporary media studies there is considerable divergence as to whether one should define so-called over-the-top streaming services as platforms or not. Netflix, for example: in a strict technical sense, it’s not a platform for self-publishing and sharing in the way that YouTube is—but, in an economic sense, Netflix definitely enacts a multi-sided market, which is one of the key components of what a platform does, economically speaking. Since platforms crystallize economic relationships into material infrastructure, conceptual conflation of this kind is unavoidable—different scholars tend to put different emphasis on different things.

Hence, when it comes to normative concerns, there are numerous approaches, ranging from largely apolitical computer science and design management studies, which brandish an optimistic view emphasizing blithe conceptions of innovation and generativity, to critical approaches in political economy, which emphasize things like market dominance and consolidation.

In my article, I try to relate to both of these schools of thought, by noting that they are each normative — albeit in vastly different ways — and by noting that not only do they each have a somewhat different focus, they actually bring different research objects to the table: usually, “efficacy” in purely technical interaction design is something altogether different from “efficacy” in matters of societal power relations, for example. While both notions can be said to be true, their respective validity might differ, depending on which matter of concern we are dealing with in each respective inquiry.

Ed.: You note in your article that platforms have a “twofold logic of micro-level technocentric control and macro-level geopolitical domination” .. which sounds quite a lot like what government does. Do you think “platform as government” is a useful way to think about this, i.e. are there any analogies?

Jonas: Sure, especially if we understand how platforms enact governance in really quite rigid forms. Platforms literally transform market relations into infrastructure. Compared to informal or spontaneous social structures, where there’s a lot of elasticity and ambiguity — put simply, giving-and-taking — automated digital infrastructure operates by unambiguous implementations of computer code. As Lawrence Lessig and others have argued, the perhaps most dangerous aspect of this is when digital infrastructures implement highly centralized modes of governance, often literally only having one point of command-and-control. The platform owner flicks a switch, and then certain listings and settings are allowed or disallowed, and so on…

This should worry any liberal, since it is a mode of governance that is totalitarian by nature; it runs counter to any democratic, liberal notion of spontaneous, emergent civic action. Funnily, a lot of Silicon Valley ideology appears to be indebted to theorists like Friedrich von Hayek, who observed a calculative rationality emerging out of heterogeneous, spontaneous market activity — but at the same time, Hayek’s call to arms was in itself a reaction to central planning of the very kind that I think digital platforms, when designed in too rigid a way, risk erecting.

Ed.: Is there a sense (in hindsight) that these platforms are basically the logical outcome of the ruthless pursuit of market efficiency, i.e. enabled by digital technologies? But is there also a danger that they could lock out equitable development and innovation if they become too powerful (e.g. leading to worries about market concentration and anti-trust issues)? At one point you ask: “Why is society collectively acquiescing to this development?” .. why do you think that is?

Jonas: The governance aspect above rests on a kind of managerialist fantasy of perfect calculative rationality that is conferred upon the platform as an allegedly neutral agent or intermediary; scholars like Frank Pasquale have begun to unravel some of the rather dodgy ideology underpinning this informational idealism, or “dataism,” as José van Dijck calls it. However, it’s important to note how much of this risk for overly rigid structures comes down to sheer design implementation; I truly believe there is scope for more democratically adaptive, benign platforms, but that can only be achieved either through real incentives at the design stage (e.g. Wikipedia, and the ways in which its core business idea involves quality control by design), or through ex-post regulation, forcing platform owners to consider certain societally desirable consequences.

Ed.: A lot of this discussion seems to be based on control. Is there a general theory of “control” — i.e. are these companies creating systems of user management and control that follow similar conceptual / theoretical lines, or just doing “what seems right” to them in their own particular contexts?

Jonas: Down the stack, there is always a binary logic of control at play in any digital infrastructure. Still, on a higher level in the stack, as more complexity is added, we should expect to see more non-linear, adaptive functionality that can handle complexity and context. And where computational logic falls short, we should demand tolerable degrees of human moderation, more than there is now, to be sure. Regulators are going this way when it comes to things like Facebook and hate speech, and I think there is considerable consumer demand for it, as when disputes arise on Airbnb and similar markets.

Ed.: What do you think are the main worries with the way things are going with these mega-platforms, i.e. the things that policy-makers should hopefully be concentrating on, and looking out for?

Jonas: Policymakers are beginning to realize the unexpected synergies that big data gives rise to. As The Economist recently pointed out, once you control portable smartphones, you’ll have instant geopositioning data on a massive scale — you’ll want to own and control map services because you’ll then also have data on car traffic in real time, which means you’d be likely to have the transportation market cornered, self-driving cars especially… If one takes an agnostic, heterodox view of companies like Alphabet, some of their far-flung projects actually begin to make sense, if synergy is taken into consideration. For automated systems, the more detailed the data becomes, the better the system will perform; vast pools of data get to act as protective moats.

One solution that The Economist suggests, and that has been championed for years by internet veteran Doc Searls, is to press for vastly increased transparency in terms of user data, so that individuals can improve their own sovereignty, control their relationships with platform companies, and thereby collectively demand that the companies in question disclose the value of this data — which would, by extension, improve signalling of the actual value of the company itself. If today’s platform companies are reluctant to do this, is that because it would perhaps reveal some of them to be less valuable than they are held out to be?

Another potentially useful, proactive measure that I describe in my article is the establishment of vital competitors or supplements to the services that so many of us have gotten used to having provided by platform giants. Instead of Facebook monopolizing identity management online, which sadly seems to have become the norm in some countries, look to the Scandinavian example of BankID, which is a platform service run by a regional bank consortium, offering a much safer and more nationally controllable identity management solution.

Alternative platform services like these could be built by private companies as well as state-funded ones; alongside privately owned consortia of this kind, it would be interesting to see innovation within the public service remit, exploring how that concept could be re-thought in an era of platform capitalism.


Read the full article: Jonas Andersson Schwarz (2017) Platform Logic: An Interdisciplinary Approach to the Platform-based Economy. Policy & Internet DOI: 10.1002/poi3.159.

Jonas Andersson Schwarz was talking to blog editor David Sutcliffe.

Open government policies are spreading across Europe — but what are the expected benefits? https://ensr.oii.ox.ac.uk/open-government-policies-are-spreading-across-europe-but-what-are-the-expected-benefits/ Mon, 17 Jul 2017 08:34:07 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4272 Open government policies are spreading across Europe, challenging previous models of the public sector, and defining new forms of relationship between government, citizens, and digital technologies. In their Policy & Internet article “Why Choose Open Government? Motivations for the Adoption of Open Government Policies in Four European Countries,” Emiliana De Blasio and Donatella Selva present a qualitative analysis of policy documents from France, Italy, Spain, and the UK, in order to map out the different meanings of open government, and how it is framed by different national governments.

As a policy agenda, open government can be thought of as involving four variables: transparency, participation, collaboration, and digital technologies in democratic processes. Although the variables are all interpreted in different ways, participation, collaboration, and digital technology provide the greatest challenge to government, given they imply a major restructuring of public administration, whereas transparency goals (i.e., the disclosure of open data and the provision of monitoring tools) do not. Indeed, transparency is mentioned in the earliest accounts of open government from the 1950s.

The authors show the emergence of competing models of open government in Europe, with transparency and digital technologies being the most prominent issues in open government, and participation and collaboration being less considered and implemented. The standard model of open government seems to stress innovation, openness, and occasionally public–private collaboration, but fails to achieve open decision making, with the policy-making process typically rooted in existing mechanisms. However, the authors also see the emergence of a policy framework within which democratic innovations can develop, a testament to the vibrancy of the relationship between citizens and the public administration in contemporary European democracies.

We caught up with the authors to discuss their findings:

Ed.: Would you say there are more similarities than differences between these countries’ approaches and expectations for open government? What were your main findings (briefly)?

Emiliana / Donatella: We can imagine the four European countries (France, Italy, Spain and the UK) as positioned on a continuum between a participatory frame and an economic/innovation frame: on one side, we could observe that French policies focus on open government in order to strengthen and innovate the tradition of débat public; at the other end, the roots of the UK’s open government are in cost-efficiency, accountability and transparency arguments. Between those two poles, Italian and Spanish policies situate open government in the context of a massive reform of the public sector, in order to reduce the administrative burden and to restore citizen trust in institutions. Two years after we wrote the article, we can observe that both in Italy and Spain something has changed, and participation has regained attention as a public policy issue.

Ed.: How much does policy around open data change according to who’s in power? (Obama and Trump clearly have very different ideas about the value of opening up government..). Or do civil services tend to smooth out any ideological differences around openness and transparency, even as parties enter and leave power?

Emiliana / Donatella: The case of open data is quite peculiar: it is one of the few policy issues directly addressed by the European Commission, and now by the transnational agreement on the G8 Open Data Charter, and for this reason we could say there is a homogenising trend. Moreover, opening up data is an ongoing process — started at least eight years ago — that will be too difficult for any new government to stop. As for openness and transparency in general, Cameron (and now May), Hollande, Monti (and then Renzi) and Rajoy’s governments all wrote policies with a strong emphasis on innovation and openness as the key to a better future.

In fact, we observed that at the national level, the rhetoric of innovation and openness is bipartisan, and not dependent on political orientation — although the concrete policy instruments and implementation strategies might differ. It is also for this reason that governments tend to remain in the “comfort zone” of transparency and public-private partnerships: they still evoke a change in the relationship between the public sector and civil society, but they don’t actually address this change.

Still, we should highlight that at the regional and local levels open data, transparency and participation policies are mostly promoted by liberal and/or left-leaning administrations.

Ed.: Your results for France (i.e. almost no mention of the digital economy, growth, or reform of public services) are basically the opposite of Macron’s (winning) platform of innovation and reform. Did Macron identify a problem in France; and might you expect a change as he takes control?

Emiliana / Donatella: Macron’s electoral programme is based on what he already did while in charge at the Ministry of Economy: he pursued a French digital agenda aiming to attract foreign investment, to create digital production hubs (the French Tech), and to innovate the whole economy. Interestingly, however, he did not frame those policies under the umbrella of open government, preferring to speak about “modernisation”. The importance given by Macron to innovation in the economy and public sector finds some antecedents in the policies we analysed: the issue of “modernisation” was prominent, and we expect it will be even more so now that he has gained the presidency.

Ed.: In your article you analyse policy documents, i.e. texts that set out hopes and intentions. But is there any sense of how much practical effect these have: particularly given how expensive it is to open up data? You note “the Spanish and Italian governments are especially focused on restoring trust in institutions, compensating for scandals, corruption, and a general distrust which is typical in Southern Europe” .. and yet the current Spanish government is still being rocked by corruption scandals.

Emiliana / Donatella: The efficacy of any kind of policy can vary depending on many factors — such as internal political context, international constraints, economic resources, and clarity of policy instruments. In addition, we should consider that at the national level, very few policies have an immediate consequence for citizens’ everyday lives. This is surely one of the worst problems of open government: on the one side, it is a policy agenda promoted in a top-down fashion — by international and/or national institutions; and on the other side, it fails to engage local communities in a purposeful dialogue. As such, open government policies appear to be self-reflective acts by governments, as paradoxical as this might be.

Ed.: Despite terrible, terrible things like the Trump administration’s apparent deletion of climate data, do you see a general trend towards increased datafication, accountability, and efficiency (perhaps even driven by industry, as well as NGOs)? Or are public administrations far too subject to political currents and individual whim?

Emiliana / Donatella: As we face turbulent times, it would be very risky to assert that tomorrow’s world will be more open than today’s. But even if we observe some interruptions, the principles of open democracy and open government have colonised public agendas: as we have tried to stress in our article, openness, participation, collaboration and innovation can have different meanings and degrees, but they have succeeded in acquiring the status of policy issues.

And as you rightly point out, the way towards accountability and openness is not a public sector’s prerogative any more: many actors from civil society and industry have already mobilised in order to influence government agendas, public opinion, and to inform citizens. As the first open government policies start to produce practical effects on people’s everyday lives, we might expect that public awareness will rise, and that no individual will be able to ignore it.

Ed.: And does the EU have any supra-national influence, in terms of promoting general principles of openness, transparency etc.? Or is it strictly left to individual countries to open up (if they want), and in whatever direction they like? I would have thought the EU would be the ideal force to promote rational technocratic things like open government?

Emiliana / Donatella: The EU has the power to stress some policy issues, and to let others be “forgotten”. The complex legislative procedures of the EU, together with transnational conflicts, produce policies with different degrees of enforcement. Generally speaking, some EU policies have a direct influence on national laws, whereas others don’t, leaving the decision of whether or not to act to national governments. In the case of open government, we see that the EU has been particularly influential in setting the Digital Agenda for 2020 and now the Sustainable Future Agenda for 2030; in both documents, Europe encourages Member States to dialogue and collaborate with private actors and civil society, in order to achieve some objectives of economic development.

At the moment, initiatives like the Open Government Partnership — which runs outside the EU competence and involves many European countries — are tying up governments in trans-national networks converging on a set of principles and methods. Because of that Partnership, for example, countries like Italy and Spain have experimented with the first national co-drafting procedures.

Read the full article: De Blasio, E. and Selva, D. (2016) Why Choose Open Government? Motivations for the Adoption of Open Government Policies in Four European Countries. Policy & Internet 8 (3). DOI: 10.1002/poi3.118.


Emiliana De Blasio and Donatella Selva were talking to blog editor David Sutcliffe.

Cyberbullying is far less prevalent than offline bullying, but still needs addressing https://ensr.oii.ox.ac.uk/cyberbullying-is-far-less-prevalent-than-offline-bullying-but-still-needs-addressing/ Wed, 12 Jul 2017 08:33:22 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4337 Bullying is a major public health problem, with systematic reviews supporting an association between adolescent bullying and poor mental wellbeing outcomes. In their Lancet article “Cyberbullying and adolescent well-being in England: a population-based cross sectional study”, Andrew Przybylski and Lucy Bowes report the largest study to date on the prevalence of traditional and cyberbullying, based on a nationally representative sample of 120,115 adolescents in England.

While nearly a third of the adolescent respondents reported experiencing significant bullying in the past few months, cyberbullying was much less common, with around five percent of respondents reporting recent significant experiences. Both traditional and cyberbullying were independently associated with lower mental well-being, but only the relation between traditional bullying and well-being was robust. This supports the view that cyberbullying is unlikely to provide a source for new victims, but rather presents an avenue for further victimisation of those already suffering from traditional forms of bullying.

This stands in stark contrast to media reports and the popular perception that young people are now more likely to be victims of cyberbullying than traditional forms. The results also suggest that interventions to address cyberbullying will only be effective if they also consider the dynamics of traditional forms of bullying, supporting the urgent need for evidence-based interventions that target *both* forms of bullying in adolescence. That said, as social media and Internet connectivity become an increasingly intrinsic part of modern childhood, initiatives fostering resilience in online and everyday contexts will be required.

We caught up with Andy and Lucy to discuss their findings:

Ed.: You say that given “the rise in the use of mobile and online technologies among young people, an up to date estimation of the current prevalence of cyberbullying in the UK is needed.” Having undertaken that—what are your initial thoughts on the results?

Andy: I think a really compelling thing we learned in this project is that researchers and policymakers have to think very carefully about what constitutes a meaningful degree of bullying or cyberbullying. Many of the studies and reports we reviewed were really loose on details here while a smaller core of work was precise and informative. When we started our study it was difficult to sort through the noise but we settled on a solid standard—at least two or three experiences of bullying in the past month—to base our prevalence numbers and statistical models on.

Lucy: One of the issues here is that studies often use different measures, so it is hard to compare like for like, but in general our study supports other recent studies indicating that relatively few adolescents report being cyberbullied only—one study by Dieter Wolke and colleagues that collected data between 2014 and 2015 found that whilst 29% of school students reported being bullied, only 1% of 11–16 year olds reported only cyberbullying. Whilst that study was only in a handful of schools in one part of England, the findings are strikingly similar to our own. In general then it seems that rates of cyberbullying are not increasing dramatically; though it is concerning that prevalence rates of both forms of bullying—particularly traditional bullying—have remained unacceptably high.

Ed.: Is there a policy distinction drawn between “bullying” (i.e. young people) and “harassment” (i.e. the rest of us, including in the workplace)—and also between “bullying” and “cyber-bullying”? These are all basically the same thing, aren’t they—why distinguish?

Lucy: I think this is a good point; people do refer to ‘bullying’ in the workplace as well. Bullying, at its core, is defined as intentional, repeated aggression targeted against a person who is less able to defend him or herself—for example, a younger or more vulnerable person. Cyberbullying has the additional definition of occurring only in an online format—but I agree that this is the same action or behaviour, just taking place in a different context. Whilst in practice bullying and harassment have very similar meanings and may be used interchangeably, harassment is unlawful under the Equality Act 2010, whilst bullying actually isn’t a legal term at all. However certain acts of bullying could be considered harassment and therefore be prosecuted. I think this really just reflects the fact that we often ‘carve up’ human behaviour and experience according to our different policies, practices and research fields—when in reality they are not so distinct.

Ed.: I suppose online bullying of young people might be more difficult to deal with, given it can occur under the radar, and in social spaces that might not easily admit adults (though conversely, leave actual evidence, if reported..). Why do you think there’s a moral panic about cyberbullying — is it just newspapers selling copy, or does it say something interesting about the Internet as a medium — a space that’s both very open and very closed? And does any of this hysteria affect actual policy?

Andy: I think our concern arises from the uncertainty and unfamiliarity people have about the possibilities the Internet provides. Because it is full of potential—for good and ill—and is always changing, wild claims about it capture our imagination and fears. That said, the panic absolutely does affect policy and parenting discussions in the UK. Statistics and figures coming from pressure groups and well-meaning charities do put the prevalence of cyberbullying at terrifying, and unrealistically high, levels. This certainly has affected the way parents see things. Policy makers tend to seize on the worse case scenario and interpret things through this lens. Unfortunately this can be a distraction when there are known health and behavioural challenges facing young people.

Lucy: For me, I think we do tend to panic and highlight the negative impacts of the online world—often at the expense of the many positive impacts. That said, there was—and remains—a worry that cyberbullying could have the potential to be more widespread, and to be more difficult to resolve. The perpetrator’s identity may be unknown, the bullying may follow the child home from school, and it may be persistent—in that it may be difficult to remove hurtful comments or photos from the Internet. It is reassuring that our findings, as well as others’, suggest that cyberbullying may not be associated with as great an impact on well-being as people have suggested.

Ed.: Obviously something as deeply complex and social as bullying requires a complex, multivalent response: but (that said), do you think there are any low-hanging interventions that might help address online bullying, like age verification, reporting tools, more information in online spaces about available help, more discussion of it as a problem (etc.)?

Andy: No easy ones. Understanding that cyber- and traditional bullying aren’t dissimilar, parental engagement and keeping lines of communication open are key. This means parents should learn about the technology their young people are using, and that kids should know they’re safe disclosing when something scary or distressing eventually happens.

Lucy: Bullying is certainly complex; school-based interventions that have been successful in reducing more traditional forms of bullying have tended to involve those students who are not directly involved but who act as ‘bystanders’—encouraging them to take a more active stance against bullying rather than remaining silent and implicitly suggesting that it is acceptable. Online equivalents are being developed, and greater education that discourages people (both children and adults) from sharing negative images or words, or encourages them to actively ‘dislike’ such negative posts, shows promise. I also think it’s important that targeted advice and support for those directly affected is provided.

Ed.: Who’s seen as the primary body responsible for dealing with bullying online: is it schools? NGOs? Or the platform owners who actually (if not-intentionally) host this abuse? And does this topic bump up against wider current concerns about (e.g.) the moral responsibilities of social media companies?

Andy: There is no single body that takes responsibility for this for young people. Some charities and government agencies, like the Child Exploitation and Online Protection command (CEOP), are doing great work. They provide a forum of information for parents, professionals, and kids, stratified by age, and easy-to-complete forms that young people or carers can use to get help. Most industry-based solutions require users to report and flag offensive content, and they’re pretty far behind the ball on this because we don’t know what works and what doesn’t. At present cyberbullying consultants occupy the space and the services they provide are of dubious empirical value. If industry and the government want to improve things on this front they need to make direct investments in supporting robust, open, basic scientific research into cyberbullying and trials of promising intervention approaches.

Lucy: There was an interesting discussion by the NSPCC about this recently, and it seems that people are very mixed in their opinions—some would also say parents play an important role, as well as Government. I think this reflects the fact that cyberbullying is a complex social issue. It is important that social media companies are aware, and work with government, NGOs and young people to safeguard against harm (as many are doing), but equally schools and parents play an important role in educating children about cyberbullying—how to stay safe, how to play an active role in reducing cyberbullying, and who to turn to if children are experiencing cyberbullying.

Ed.: You mention various limitations to the study; what further evidence do you think we need, in order to more completely understand this issue, and support good interventions?

Lucy: I think we need to know more about how to support children directly affected by bullying, and more work is needed in developing effective interventions for cyberbullying. There are some very good school-based interventions with a strong evidence base to suggest that they reduce the prevalence of at least traditional forms of bullying, but they are not being widely implemented in the UK, and this is a missed opportunity.

Andy: I agree—a focus on flashy cyberbullying headlines presents the real risk of distracting us from developing and implementing evidence-based interventions. The Internet cannot be turned off and there are no simple solutions.

Ed.: You say the UK is ranked 20th of 27 EU countries on the mental well-being index, and also note the link between well-being and productivity. Do you think there’s enough discussion and effort being put into well-being, generally? And is there even a general public understanding of what “well-being” encompasses?

Lucy: I think the public understanding of well-being is probably pretty close to the research definition—people have a good sense that this involves more than not having psychological difficulty for example, and that it refers to friendships, relationships, and doing well; one’s overall quality of life. Both research and policy are placing more of an emphasis on well-being—in part because large international studies have suggested that the UK may score particularly poorly on measures of well-being. This is very important if we are going to raise standards and improve people’s quality of life.


Read the full article: Andrew Przybylski and Lucy Bowes (2017) Cyberbullying and adolescent well-being in England: a population-based cross sectional study. The Lancet Child & Adolescent Health.

Andrew Przybylski is an experimental psychologist based at the Oxford Internet Institute. His research focuses on applying motivational theory to understand the universal aspects of video games and social media that draw people in, the role of game structure and content on human aggression, and the factors that lead to successful versus unsuccessful self-regulation of gaming contexts and social media use. @ShuhBillSkee

Lucy Bowes is a Leverhulme Early Career Research Fellow at Oxford’s Department of Experimental Psychology. Her research focuses on the impact of early life stress on psychological and behavioural development, integrating social epidemiology, developmental psychology and behavioural genetics to understand the complex genetic and environmental influences that promote resilience to victimization and early life stress. @DrLucyBowes

Andy Przybylski and Lucy Bowes were talking to the Oxford Internet Institute’s Managing Editor, David Sutcliffe.

Does Twitter now set the news agenda? https://ensr.oii.ox.ac.uk/does-twitter-now-set-the-news-agenda/ Mon, 10 Jul 2017 08:30:28 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4176 The information provided in the traditional media is of fundamental importance for the policy-making process, signalling which issues are gaining traction, which are falling out of favour, and introducing entirely new problems for the public to digest. But the monopoly of the traditional media as a vehicle for disseminating information about the policy agenda is being superseded by social media, with Twitter in particular used by politicians to influence traditional news content.

In their Policy & Internet article, “Politicians and the Policy Agenda: Does Use of Twitter by the U.S. Congress Direct New York Times Content?” Matthew A. Shapiro and Libby Hemphill examine the extent to which the traditional media is influenced by politicians’ Twitter posts. They draw on indexing theory, which states that media coverage and framing of key policy issues will tend to track elite debate. To understand why the newspaper covers an issue, and to predict daily New York Times content, coverage is modelled as a function of the previous day’s New York Times attention to each policy issue area, as well as the previous day’s Twitter posts about each policy issue area by Democrats and Republicans.

They ask to what extent the agenda-setting efforts of members of Congress are acknowledged by the traditional media; what advantages, if any, one party gains over the other, as measured by the traditional media’s increased attention; and whether there is any variance across different policy issue areas. They find that Twitter is a legitimate political communication vehicle for US officials, that journalists consider Twitter when crafting their coverage, and that Twitter-based announcements by members of Congress are a valid substitute for the traditional communiqué in journalism, particularly for issues related to immigration and marginalized groups, and issues related to the economy and health care.
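As a rough illustration of what such a day-lagged model can look like, the sketch below regresses today's New York Times coverage of one issue on yesterday's coverage and yesterday's congressional tweet counts. The variable names, the single issue area, and the OLS specification are assumptions for illustration; the authors' actual estimation strategy may differ.

```python
import pandas as pd
import statsmodels.formula.api as smf

# daily_counts.csv (assumed layout): one row per day, with counts of NYT
# stories and of congressional tweets mentioning a given policy issue.
df = pd.read_csv("daily_counts.csv", parse_dates=["date"]).sort_values("date")

# Lag the predictors by one day: today's coverage is explained by
# yesterday's coverage and yesterday's tweets from each party.
for col in ["nyt_immigration", "dem_tweets_immigration", "rep_tweets_immigration"]:
    df[col + "_lag1"] = df[col].shift(1)

model = smf.ols(
    "nyt_immigration ~ nyt_immigration_lag1"
    " + dem_tweets_immigration_lag1 + rep_tweets_immigration_lag1",
    data=df.dropna(),
).fit()
print(model.summary())
```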

We caught up with the authors to discuss their findings:

Ed.: Can you give a quick outline of media indexing theory? Does it basically say that the press reports whatever the elite are talking about? (i.e. that press coverage can be thought of as a simple index, which tracks the many conversations that make up elite debate).

Matthew: Indexing theory, in brief, states that the content of media reports reflects the degree to which elites – politicians and leaders in government in particular – are in agreement or disagreement. The greater the level of agreement or consensus among elites, the less news there is to report in terms of elite conflict. This is not to say that a consensus among elites is not newsworthy; indexing theory conveys how media reporting is a function of the multiple voices that exist when there is elite debate.

Ed.: You say Twitter seemed a valid measure of news indexing (i.e. coverage) for at least some topics. Could it be that the NYT isn’t following Twitter so much as Twitter (and the NYT) are both following something else, i.e. floor debates, releases, etc.?

Matthew: We can’t test whether the NYT is following Twitter rather than floor debates or press releases without collecting data on the latter. Perhaps, if the House and Senate Press Galleries are indexing the news based on House and Senate debates, and if Twitter posts by members of Congress reflect the House and Senate discussions, we could still argue that Twitter remains significant because there are no limits on the amount of discussion – i.e. the boundaries of the House and Senate floors no longer exist – and the media are increasingly reliant on politicians’ use of Twitter to communicate to the press. In any case, the existing research shows that journalists are increasingly relying on Twitter posts for updates from elites.

Ed.: I’m guessing that indexing theory only really works for non-partisan media that follow elite debates, like the NYT? Or does it also work for tabloids? And what about things like Breitbart (and its ilk) .. which I’m guessing appeals explicitly to a populist audience, rather than particularly caring what the elite are talking about?

Matthew: If a study similar to our was done to examine the indexing tendencies of tabloids, Breitbart, or a similar type of media source, the first step would be to determine what is being discussed regularly in these outlets. Assuming, for example, that there isn’t much discussion about marginalized groups in Breitbart, in the context of indexing theory it would not be relevant to examine the pool of congressional Twitter posts mentioning marginalized groups. Those posts are effectively off of Breitbart’s radar. But, generally, indexing theory breaks down if partisanship and bias drive the reporting.

Ed.: Is there any sense in which Trump’s “Twitter diplomacy” has overturned or rendered moot the recent literature on political uses of Twitter? We now have a case where a single (personal) Twitter account can upset the stock market — how does one theorise that?

Matthew: In terms of indexing theory, we could argue that Trump’s Twitter posts themselves generate a response from Democrats and Republicans in Congress and thus muddy the waters by conflating policy issues with other issues like his personality, ties to Russia, his fact-checking problems, etc. This is well beyond our focus in the article, but we speculate that Trump’s early-dawn use of Twitter is primarily for marketing, damage control, and deflection. There are really many different ways to study this phenomenon. One could, for example, examine the function of unfiltered news from politicians to the public and compare it with the news that is simultaneously reported in the media. We would also be interested in understanding why Trump and politicians like Trump frame their Twitter posts the way they do, what effect these posts have on their devoted followers as well as their fence-sitting followers, and how this mobilizes Congress both online (i.e. on Twitter) and when discussing and voting on policy options on the Senate and House floors. These areas of research would all build upon rather than render moot the extant literature on the political uses of Twitter.

Ed.: Following on: how does Indexing theory deal with Trump’s populism (i.e. avowedly anti-Washington position), hatred and contempt of the media, and apparent aim of bypassing the mainstream press wherever possible: even ditching the press pool and favouring populist outlets over the NYT in press gaggles. Or is the media bigger than the President .. will indexing theory survive Trump?

Matthew: Indexing theory will of course survive Trump. What we are witnessing in the media is an inability, however, to limit gapers’ block, in the sense that the media focus on the more inflammatory and controversial aspects of Trump’s Twitter posts – unfortunately on a daily basis – rather than reporting the policy implications. The media have to report what is news, and Presidential Twitter posts are now newsworthy, but we would argue that we are reaching a point where anything but the meat of the policy implications must be effectively filtered. Until we reach a point where the NYT ignores the inflammatory nature of Trump’s Twitter posts, it will be challenging to test indexing theory in the context of the policy agenda-setting process.

Ed.: There are recent examples (Brexit, Trump) of the media apparently getting things wrong because they were following the elites and not “the forgotten” (or deplorable) .. who then voted in droves. Is there any sense in the media industry that it needs to rethink things a bit — i.e. that maybe the elite is not always going to be in control of events, or even be an accurate bellwether?

Matthew: This question highlights an omission from our article, namely that indexing theory marginalizes the role of non-elite voices. We agree that the media could do a better job reporting on certain things; for instance, relying extensively on weather vanes of public opinion that do not account for inaccurate self-reporting (i.e. people not accurately representing themselves when being polled about their support for Trump, Brexit, etc.) or understanding why disenfranchised voters might opt to stay home on Election Day. When it comes to setting the policy agenda, which is the focus of our article, we stand by indexing theory given our assumption that the policy process itself is typically directed from those holding power. On that point, and regardless of whether it is normatively appropriate, elites are accurate bellwethers of the policy agenda.

Read the full article: Shapiro, M.A. and Hemphill, L. (2017) Politicians and the Policy Agenda: Does Use of Twitter by the U.S. Congress Direct New York Times Content? Policy & Internet 9 (1). DOI: 10.1002/poi3.120.


Matthew A. Shapiro and Libby Hemphill were talking to blog editor David Sutcliffe.

How policy makers can extract meaningful public opinion data from social media to inform their actions https://ensr.oii.ox.ac.uk/extracting-meaningful-public-opinion-data-from-social-media-to-inform-policy-makers/ Fri, 07 Jul 2017 09:48:53 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4325 The role of social media in fostering the transparency of governments and strengthening the interaction between citizens and public administrations has been widely studied. Scholars have highlighted how online citizen-government and citizen-citizen interactions favour debates on social and political matters, and positively affect citizens’ interest in political processes, like elections, policy agenda setting, and policy implementation.

However, while top-down social media communication between public administrations and citizens has been widely examined, the bottom-up side of this interaction has been largely overlooked. In their Policy & Internet article “The ‘Social Side’ of Public Policy: Monitoring Online Public Opinion and Its Mobilization During the Policy Cycle,” Andrea Ceron and Fedra Negri aim to bridge the gap between knowledge and practice, by examining how the information available on social media can support the actions of politicians and bureaucrats along the policy cycle.

Policymakers, particularly politicians, have always been interested in knowing citizens’ preferences, in measuring their satisfaction and in receiving feedback on their activities. Using the technique of Supervised Aggregated Sentiment Analysis, the authors show that meaningful information on public services, programmes, and policies can be extracted from the unsolicited comments posted by social media users, particularly those posted on Twitter. They use this technique to extract and analyse citizen opinion on two major public policies (on labour market reform and school reform) that drove the agenda of the Matteo Renzi cabinet in Italy between 2014 and 2015.

They show how online public opinion reacted to the different policy alternatives formulated and discussed during the adoption of the policies. They also demonstrate how social media analysis allows monitoring of the mobilization and de-mobilization processes of rival stakeholders in response to the various amendments adopted by the government, with results comparable to those of a survey and a public consultation that were undertaken by the government.
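For readers curious about the mechanics, the sketch below is a much-simplified classify-and-count stand-in for the supervised aggregated approach: human-coded tweets train a classifier whose predictions are then read only in the aggregate, as daily opinion shares. The file names, category labels, and classifier choice are assumptions; the authors' estimator works on aggregate category proportions directly rather than classifying individual tweets one by one.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Assumed inputs: a small set of tweets hand-coded into opinion categories
# ("support", "oppose", "off-topic"), and the full stream of policy tweets.
coded = pd.read_csv("hand_coded_tweets.csv")                  # columns: text, label
stream = pd.read_csv("all_tweets.csv", parse_dates=["date"])  # columns: date, text

# Train a simple text classifier on the human-coded examples.
clf = make_pipeline(TfidfVectorizer(min_df=2, ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(coded["text"], coded["label"])

# Only the aggregate matters: the daily share of supportive opinion among
# on-topic tweets, tracked over the course of the policy debate.
stream["label"] = clf.predict(stream["text"])
on_topic = stream[stream["label"] != "off-topic"]
daily_support = (on_topic.assign(is_support=on_topic["label"].eq("support"))
                         .groupby(on_topic["date"].dt.date)["is_support"].mean())
print(daily_support.tail())
```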

We caught up with the authors to discuss their findings:

Ed.: You say that this form of opinion monitoring and analysis is cheaper, faster and easier than (for example) representative surveys. That said, how commonly do governments harness this new form of opinion-monitoring (with the requirement for new data skills, as well as attitudes)? Do they recognise the value of it?

Andrea / Fedra: Governments are starting to pay attention to the world of social media. Just to give an idea, the Italian government has issued a call to jointly collect survey data together with the results of social media analysis, and these two types of data are provided in a common report. The report has not been publicly shared, suggesting that the cabinet considers such information highly valuable. VOICES from the blogs, a spin-off created by Stefano Iacus, Luigi Curini and Andrea Ceron (University of Milan), has been involved in this and, for sure, we can attest that in a couple of instances the government modified its actions in line with shifts in public opinion observed both through survey polls and sentiment analysis. This happened with the law on Civil Unions and with the abolition of the “voucher” (a flexible form of worker payment). So far these are just instances — although there are signs of enhanced responsiveness, particularly when online public opinion represents the core constituency of ruling parties, as the case of the school reform (discussed in the article) clearly indicates: teachers are in fact the core constituency of the Democratic Party.

Ed.: You mention that the natural language used by social media users evolves continuously and is sensitive to the discussed topic, resulting in error. The method you use involves scaling up a human-coded (=accurate) ontology. Could you discuss how this might work in practice? Presumably humans would need to code the terms of interest first, as it wouldn’t be able to pick up new issues (e.g. around a completely new word: say, “Bowling Green”?) automatically.

Andrea / Fedra: Gary King says that the best technology is human empowered. There are at least two great advantages in exploiting human coders. First, with our technique coders manage to get rid of noise better than any algorithm, as often a single word can be judged to be in-topic or out-of-topic only on the basis of the context and the rest of the sentence. Second, human coders can collect deeper information by mining the real opinions expressed in the online conversations. This sometimes allows them to detect, bottom-up, arguments that were completely ignored ex-ante by scholars or analysts.

Ed.: There has been a lot of debate in the UK around “false balance”, e.g. the BBC giving equal coverage to climate deniers (despite being a tiny, unrepresentative, and uninformed minority), in an attempt at “impartiality”: how do you get round issues of non-representativeness in social media, when tracking — and more importantly, acting on — opinion?

Andrea / Fedra: Nowadays social media are a non-representative sample of a country’s population. However, the idea of representativeness linked to the concept of “public opinion” dates back to the early days of polling. Today, by contrast, online conversations often represent an “activated public opinion” comprising stakeholders who express their voices in an attempt to build wider support around their views. In this regard, social media data are interesting precisely due to their non-representativeness. A tiny group can speak loudly and this voice can gain the support of an increasing number of people. If the activated public opinion acts as an “influencer”, this implies that social media analysis could anticipate trends and shifts in public opinion.

Ed.: As data becomes increasingly open and tractable (controlled by people like Google, Facebook, or monitored by e.g. GCHQ / NSA), and text-techniques become increasingly sophisticated: what is the extreme logical conclusion in terms of government being able to track opinion, say in 50 years, following the current trajectory? Or will the natural messiness of humans and language act as a natural upper limit on what is possible?

Andrea / Fedra: The purpose of scientific research, particularly applied research, is to improve our well-being and to make our lives easier. For sure there could be issues linked with the privacy of our data and, in a sci-fi scenario, governments and police would be able to read our minds — either to prevent crimes and terrorist attacks (as in the film Minority Report) or to detect, isolate and punish dissent. However, technology is not a standalone object and we should not forget that there are humans behind it. Whether these humans are governments, activists or common citizens can certainly make a difference. If governments try to misuse technology, they will certainly meet a reaction from citizens — which can be amplified precisely via this new technology.

Read the full article: Ceron, A. and Negri, F. (2016) The “Social Side” of Public Policy: Monitoring Online Public Opinion and Its Mobilization During the Policy Cycle. Policy & Internet 8 (2) DOI:10.1002/poi3.117


Andrea Ceron and Fedra Negri were talking to blog editor David Sutcliffe.

We should pay more attention to the role of gender in Islamist radicalization https://ensr.oii.ox.ac.uk/we-should-pay-more-attention-to-the-role-of-gender-in-islamist-radicalization/ Tue, 04 Jul 2017 08:54:39 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4249 One of the key current UK security issues is how to deal with British citizens returning from participation in ISIS in Syria and Iraq. Most of the hundreds fighting with ISIS were men and youths, but dozens of British women and girls also travelled to join Islamic State in Syria and Iraq. For some, online recruitment appeared to be an important part of their radicalization, and many took to the Internet to praise life in the new Caliphate once they arrived there. These cases raised concerns about female radicalization online, and put the issue of women, terrorism, and radicalization firmly on the policy agenda. This was not the first time such fears had been raised. In 2010, the university student Roshonara Choudhry stabbed her Member of Parliament, after watching YouTube videos of the radical cleric Anwar Al Awlaki. She is the first and only British woman so far convicted of a violent Islamist attack.

In her Policy & Internet article “The Case of Roshonara Choudhry: Implications for Theory on Online Radicalization, ISIS Women, and the Gendered Jihad”, Elizabeth Pearson explores how gender might have factored in Roshonara’s radicalization, in order to present an alternative to existing theoretical explanations. First, gender limitations in the physical world, by precluding a real-world engagement with Islamism on her own terms, might have pushed her to the Internet. There, a lack of religious knowledge made her particularly vulnerable to extremist ideology; a susceptibility only increased through Internet socialization and exposure to an active radical milieu. Finally, this gendered context might have created a dissonance between her online identity and her multiple “real” gendered identities, resulting in violence.

As yet, there is no adequately proven link between online material and violent acts. But given the current reliance of terrorism research on the online environment, and the reliance of policy on terrorism research, the relationship between the virtual and offline domains must be better understood. So too must the process of “radicalization” — which still lacks clarity, and relies on theorizing that is rife with assumptions. Whatever the challenges, understanding how men and women become violent radicals, and the differences there might be between them, has never been more important.

We caught up with Elizabeth to discuss her findings:

Ed.: You note “the Internet has become increasingly attractive to many women extremists in recent years” — do these extremist views tend to be found on (general) social media or on dedicated websites? Presumably these sites are discoverable via fairly basic search?

Elizabeth: Yes and no. Much content is easily found online. ISIS has been very good at ‘colonizing’ popular social media platforms with supporters, and in particular, Twitter was for a period the dominant site. It was ideal as it allowed ISIS fans to find one another, share material, and build networks and communities of support. In the past 18 months Twitter has made a concerted – and largely successful – effort to ‘take down’ or suspend accounts. This may simply have pushed support elsewhere. We know that Telegram is now an important channel for information, for example. Private groups, the dark web and hidden net resources exist alongside open source material on sites such as Facebook, familiar to everyone. Given the illegality of much of this content, there has been huge pressure on companies to respond. Still there is criticism from bodies such as the Home Affairs Select Committee that they are not responding quickly or efficiently enough.

Ed.: This case seemed to represent a collision not just of “violent jihadists vs the West” but also “Salafi-Jihadists vs women” (as well as “Western assumptions of Muslim assumptions of acceptable roles for women”) .. were these the main tensions at play here?

Elizabeth: One of the key aspects of Roshonara’s violence was that it was transgressive. Violent Jihadist groups tend towards conservatism regarding female roles. Although there is no theological reason why women should not participate in the defensive Jihad, they are not encouraged to do so. ISIS has worked hard in its propaganda to keep female roles domestic – yet ideologically so. Roshonara appears to have absorbed Al Awlaki’s messaging regarding the injustices faced by Muslims, but only acted when she saw a video by Azzam, a key scholar for Al Qaeda supporters, which she understood as justifying female violence. Hatred of Western foreign policy, and of her MP’s support for the intervention in Iraq, appeared to be the motivation for her attack; a belief that women could also fight is what prompted her to carry this out herself.

Ed.: Does this struggle tend to be seen as a political struggle about land and nationhood; or a supranational religious struggle — or both? (with the added complication of Isis conflating nation and religion..)

Elizabeth: Nobody yet understands exactly why people ‘radicalize’. It’s almost impossible to profile violent radicals beyond saying they tend to be mainly male – and as we know, that is not a hard and fast rule either. What we can say is that there are complex factors, and a variety of recurrent themes cited by violent actors, and found in propaganda and messaging. One narrative is about political struggle on behalf of Muslims, who face injustice, particularly from the West. ISIS has made this struggle about the domination of land and nationhood, a development of Al Qaeda’s message. Religion is also important to this. Despite different levels of knowledge of Islam, supporters of the violent Jihad share commitment to battle as justified in the Quran. They believe that Islam is the way, the only way, and they find in their faith an answer to global issues, and whatever is happening personally to them. It is not possible, in my view, to ignore the religious component declared in this struggle. But there are other factors too. That’s what makes this so difficult and complex.

Ed.: You say that Roshonara “did not follow the path of radicalization set out in theory”. How so? But also .. how important and grounded is this “theory” in the practice of counter-radicalization? And what do exceptions like Roshonara Choudhry signify?

Elizabeth: Theory — based on empirical evidence — suggests that violence is a male preserve. Violent Jihadist groups also generally restrict their violence to men, and men only. Theory also tells us that actors rarely carry out violence alone. Belonging is an important part of the violent Jihad and ‘entrance’ to violence is generally through people you know: friends, family, acquaintances. Even where we have seen young women, for example, travel to join ISIS, this has tended to be facilitated through friends, or online contacts, or family. Roshonara, as a female acting alone in this period before ISIS, is therefore something quite unusual. She signifies – through her somewhat unique case – just how transgressive female violence is, and just how unusual solitary action is. She also throws into question the role of the internet. The internet alone is not usually sufficient for radicalization; offline contacts matter. In her case there remain some questions about what other contacts may have influenced her violence.

I’m not entirely sure how joined up counter-radicalization practices and radicalization theory are. The Prevent strategy aside, there are many different approaches, in the UK alone. The most successful that I have seen are due to committed individuals who know the communities they are based in and are trusted by them. It is relationships that seem to count, above all else.

Ed.: Do you think her case is an interesting outlier (a “lone wolf” as people commented at the time), or do you think there’s a need for more attention to be paid to gender (and women) in this area, either as potential threats, or solutions?

Elizabeth: Roshonara is a young woman, still in jail for her crime. As I wrote this piece I thought of her as a student at King’s College London, as I am, and I found it therefore all the more affecting that she did what she did. There is a connection through that shared space. So it’s important for me to think of her in human terms, in terms of what her life was like, who her friends were, what her preoccupations were and how she managed, or did not manage, her academic success, her transition to a different identity from the one her parents came from. She is interesting to me because of this, and because she is an outlier. She is an outlier who reveals certain truths about what gender means in the violent Jihad. That means women, yes, but also men, ideas about masculinity, male and female roles. I don’t think we should think of young Muslim people as either ‘threats’ or ‘solutions’. These are not the only possibilities. We should think about society, and how gender works within it, and within particular communities within it.

Ed.: And is gender specifically “relevant” to consider when it comes to Islamic radicalization, or do you see similar gender dynamics across all forms of political and religious extremism?

Elizabeth: My current PhD research considers the relationship between the violent Jihad and the counter-Jihad – cumulative extremism. To me, gender matters in all study. It’s not really anything special or extra, it’s just a recognition that if you are looking at groups you need to take into account the different ways that men and women are affected. To me that seems quite basic, because otherwise you are not really seeing a whole picture. Conservative gender dynamics are certainly also at work in some nationalist groups. The protection of women, the function of women as representative of the honour or dishonour of a group or nation – these matter to groups and ideologies beyond the violent Jihad. However, the counter-Jihad is in other ways progressive, for example promoting narratives of protecting gay rights as well as women’s rights. So women for both need to be protected – but what they need to be protected from and how differs for each. What is important is that the role of women, and of gender, matters in consideration of any ‘extremism’, and indeed in politics more broadly.

Ed.: You’re currently doing research on Boko Haram — are you also looking at gender? And are there any commonalities with the British context you examined in this article?

Elizabeth: Boko Haram interests me because of the ways in which it has transgressed some of the most fundamental gender norms of the Jihad. Since 2014 they have carried out hundreds of suicide attacks using women and girls. This is highly unusual, and in fact unprecedented in terms of numbers. How this impacts on their relationship with the international Jihad and, since 2015, with ISIS, to whom their leader gave a pledge of allegiance, is something I have been thinking about.

There are many local aspects of the Nigerian conflict that do not translate – poverty, the terrain, oral traditions of preaching, human rights violations, Sharia in northern Nigerian states, forced recruitment.. In gender terms, however, the role of women, the honour/dishonour of women, and gender-based violence translate across contexts. In particular, women are frequently instrumentalized by movements for a greater cause. Perhaps the greatest similarity is the resistance to the imposition of Western norms, including gender norms, free-mixing between men and women, and gender equality. This is a recurrent theme for violent Jihadists and their supporters across geography. They wish to protect the way of life they understand in the Quran, as they believe this is the word of God, and the only true word, superseding all man-made law.

Read the full article: Pearson, E. (2016) The Case of Roshonara Choudhry: Implications for Theory on Online Radicalization, ISIS Women, and the Gendered Jihad. Policy & Internet 8 (1) doi:10.1002/poi3.101.


Elizabeth Pearson was talking to blog editor David Sutcliffe.

What are the barriers to big data analytics in local government? https://ensr.oii.ox.ac.uk/what-are-the-barriers-to-big-data-analytics-in-local-government/ Wed, 28 Jun 2017 08:11:58 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4208 The concept of Big Data has become very popular over the last decade, with many large technology companies successfully building their business models around its exploitation. The UK’s public sector has tried to follow suit, with local governments in particular trying to introduce new models of service delivery based on the routine extraction of information from their own big data. These attempts have been hailed as the beginning of a new era for the public sector, with some commentators suggesting that it could help local governments transition toward a model of service delivery where the quantity and quality of commissioned services is underpinned by data intelligence on users and their current and future needs.

In their Policy & Internet article “Data Intelligence for Local Government? Assessing the Benefits and Barriers to Use of Big Data in the Public Sector“, Fola Malomo and Vania Sena examine the extent to which local governments in the UK are indeed using intelligence from big data, in light of the structural barriers they face when trying to exploit it. Their analysis suggests that the ambitions around the development of big data capabilities in local government are not reflected in actual use. Indeed, these methods have mostly been employed to develop new digital channels for service delivery, and even if the financial benefits of these initiatives are documented, very little is known about the benefits generated by them for the local communities.

While this is slowly changing as councils start to develop their big data capability, the overall impression gained from even a cursory overview is that the full potential of big data is yet to be exploited.

We caught up with the authors to discuss their findings:

Ed.: So what actually is “the full potential” that local government is supposed to be aiming for? What exactly is the promise of “big data” in this context?

Fola / Vania: Local governments seek to improve service delivery, amongst other things. Big Data helps to increase the number of ways that local service providers can reach out to, and better the lives of, local inhabitants. In addition, the exploitation of Big Data allows them to better target the beneficiaries of their services and to emphasise early prevention, which may result in lower delivery costs. Commissioners in a council need to understand the drivers of demand for services across different departments and their connections: how the services are connected to each other and how changes in the provision of “upstream” services can affect the “downstream” provision. Many local governments have reams of data (both hard data and soft data) on local inhabitants and local businesses. Big Data can be used to improve services, increase quality of life and make doing business easier.

Ed.: I wonder: can the data available to a local authority even be considered to be “big data” — you mention that local government data tends to be complex, rather than “big and fast”, as in the industry understanding of “big data”. What sorts of data are we talking about?

Fola / Vania: Local governments hold data on individuals, companies, projects and other activities concerning the local community. Health data, including information on children and other at-risk individuals, forms a huge part of the data within local governments. We use the concept of the data-ecosystem to talk about Big Data within local governments. The data ecosystem consists of different types of data on different topics and units which may be used for different purposes.

Complexity within data is driven by the volume of data and the large number of data sources. One must consider the fact that public agencies address needs from communities that cross the administrative boundaries of a single administrative body. Also, the choice of data collection methodology and observation unit is driven by reporting requirements, which are influenced by central government. Lastly, data storage infrastructure may be designed to comply with reporting requirements rather than to link data across agencies; data is not necessarily produced to be merged. The data is not always “big and fast” but requires the use of advanced storage and analytic tools to get useful information that local areas benefit from.

Ed.: Do you think local governments will ever have the capacity (budget, skill) to truly exploit “big data”? What were the three structural barriers you particularly identified?

Fola / Vania: Without funding there is no chance that local governments can fully exploit big data. With funding, local governments can benefit from Big Data in a number of ways. The improved usage of Big Data usually requires collaboration between agents. The three main structural barriers to the fruitful exploitation of big data by local governments are: data access; ethical issues; and organisational changes. In addition, skills gaps and the need for investment in information technology have proved problematic.

Data access can be a problem if data exists in separate locations with little communication between the organisations housing it and no easy way to move the data from one place to another. The main advantage of big data technologies is their ability to merge different types of data, mine them for insights, and combine them to produce actionable intelligence. Nevertheless, while big data approaches to data exploitation assume that organisations can access all the data they need, this is not the case in the public sector. A uniform practice on what data can be shared locally has not yet emerged. Furthermore, there is no solution to the fact that data can span organisations that are not part of the public sector, which may therefore be unwilling to share data with public bodies.

De-identifying personal data is another key requirement to fulfil before personal data can be shared under the terms of the Data Protection Agreement. It is argued that this requirement is particularly relevant when trying to merge small data sets, as individuals can easily be re-identified once the data linkage is completed. As a result, the only option left to facilitate the linkage of data sets containing personal information is to create a secure environment where data can be safely de-identified and then matched. Safe havens and trusted third parties have been developed exactly for this purpose. Data warehouses, where data from local governments and from other parts of the public sector can be matched and linked, have been developed as an intermediate solution to the lack of infrastructure for matching sensitive data.
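
As a concrete illustration of the de-identification and matching step described above, here is a minimal sketch, assuming a trusted third party that holds a secret key: direct identifiers are replaced with keyed hashes so that two departmental extracts can be linked on the pseudonym alone. The field names and records are hypothetical, and a real safe haven would add many further safeguards.

```python
# A minimal sketch of pseudonymisation for record linkage inside a trusted
# environment: direct identifiers are replaced with a keyed hash so that two
# agencies' records can be matched without exchanging raw personal data.
# The secret key must be held only by the trusted third party; all field
# names and records here are hypothetical.
import hashlib
import hmac

SECRET_KEY = b"held-only-by-the-safe-haven"  # never shared with data providers

def pseudonym(national_id: str, date_of_birth: str) -> str:
    """Derive a stable pseudonym from identifying fields using HMAC-SHA256."""
    message = f"{national_id.strip().upper()}|{date_of_birth}".encode("utf-8")
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

# Two hypothetical departmental extracts
housing = [{"national_id": "AB123456C", "dob": "1980-02-11", "arrears": True}]
social_care = [{"national_id": "ab123456c", "dob": "1980-02-11", "visits": 4}]

# De-identify, then link on the pseudonym only
housing_index = {pseudonym(r["national_id"], r["dob"]): r for r in housing}
for record in social_care:
    key = pseudonym(record["national_id"], record["dob"])
    if key in housing_index:
        linked = {**housing_index[key], **record}
        del linked["national_id"]  # drop the direct identifier after linkage
        print(linked)
```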

Due to the personal nature of the data, ethical issues arise concerning how to use information about individuals and whether persons should be identifiable. There is a huge debate on the ethical challenges posed by the routine extraction of information from Big Data. The extraction and manipulation of personal information cannot be easily reconciled with what is perceived to be ethically acceptable in this area. Additional ethical issues relate to the re-use of output from specific predictive models for other purposes within the public sector. This issue is particularly relevant given the fact that most predictive analytics algorithms only provide an estimate of the risk of an event.

Data usage is related to culture, and organisational change can be a medium- to longer-term process. As long as key stakeholders in the organisation accept that insights from data will inform service delivery, big data technologies can be used as levers to introduce changes in the way services are provided. Unfortunately, it is commonly believed that the deployment of big data technologies simply implies a change in the way data are interrogated and interpreted, and therefore should not have any bearing on the way internal processes are organised.

In addition, data usage can involve investment in information technology and training. It is well known that investment in IT has been very uneven between the private and public sectors, and within the private sector as well: despite the growth in information and communications technology (ICT) budgets across the private sector, the banking and financial services industries spend 8 percent of their total operating expenditure on ICT, whereas among local authorities ICT spending makes up only 3-6% of the total budget. Furthermore, successful deployment of Big Data technologies needs to be accompanied by the development of internal skills that allow for the analysis and modelling of complex phenomena, which is essential to the development of a data-driven approach to decision making within local governments. However, local governments tend to lack these skills, and this skills gap may be exacerbated by the high turnover in the sector. All this, in addition to the sector’s fragmentation in terms of IT provision, reinforces the structural silos that prevent local authorities from sharing and exploiting their data.

Ed.: And do you think these big data techniques will just sort-of seep in to local government, or that there will need to be a proper step-change in terms of skills and attitudes?

Fola / Vania: The benefits of data-driven analysis are being increasingly accepted. Whilst the techniques used might seem to be steadily accepted by local governments, in order to make a real and lasting improvement public bodies should ideally have a big data strategy in place to determine how they will use the data they have available to them. Attitudes can take time to change and the provision of information can help people become more willing to use Big Data in their work.

Ed.: I suppose one solution might be for local councils to buy in the services of third-party specialist “big data for local government” providers, rather than trying to develop in-house capacity: do these providers exist? I imagine local government might have data that would be attractive to commercial companies, maybe as a profit-sharing data partnership?

Fola / Vania: The truth is that providers do exist, and they always charge local governments. What is underestimated is the role that data centres can play in this arena. The authors are members of the Economic and Social Research Council-funded Business and Local Government Data Research Centre for Smart Analytics. This centre helps local councils use their big data better by collating data and performing analysis that is of use to local councils. The centre also provides training to public officials, giving them tools to understand and use data better. The centre is a collaboration between the Universities of Essex, Kent, East Anglia and the London School of Economics. Academics work closely with public officials to come up with solutions to problems facing local areas. In addition, commercial companies are interested in working with local government data. Working with third-party organisations is a good way to ease into the process of using Big Data solutions without having to make huge changes to one’s organisation.

Ed.: Finally — is there anything that central Government can do (assuming it isn’t already 100% occupied with Brexit) to help local governments develop their data analytic capacity?

Fola / Vania: Central governments influence the environment in which local governments operate. While local councils make their own decisions over things such as how data is stored, central government can assist by removing some of the previously mentioned barriers to data usage. For example, government cuts are excessive and are making the sector very volatile, so financial help would be useful in this area. Moreover, data access and transfer are made easier with uniformity of data storage protocols. In addition, the public will have more confidence in providing data if there is transparency in the collection, usage and provision of data. Guidelines for the use of sensitive data should be agreed upon and made known in order to improve the quality of the work. Central governments can also help change the general culture of local governments and attitudes towards Big Data. In order for Big Data to work well for all, individuals, companies, local governments and central governments should be well informed about the issues and able to effect change concerning them.

Read the full article: Malomo, F. and Sena, V. (2017) Data Intelligence for Local Government? Assessing the Benefits and Barriers to Use of Big Data in the Public Sector. Policy & Internet 9 (1) DOI: 10.1002/poi3.141.


Fola Malomo and Vania Sena were talking to blog editor David Sutcliffe.

How ready is Africa to join the knowledge economy? https://ensr.oii.ox.ac.uk/how-ready-is-africa-to-join-the-knowledge-economy/ Thu, 22 Jun 2017 07:38:35 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4243 “In times past, we searched for gold, precious stones, minerals, and ore. Today, it is knowledge that makes us rich and access to information is all-powerful in enabling individual and collective success.” Lesotho Ministry of Communications, Science and Technology, 2005.

Changes in the ways knowledge is created and used — and how this is enabled by new information technologies — are driving economic and social development worldwide. Discussions of the “knowledge economy” see knowledge both as an economic output in itself, and as an input that strengthens economic processes; with developing countries tending to be described as planning or embarking on a journey of transformation into knowledge economies to bring about economic gain. Indeed, increasing connectivity has sparked many hopes for the democratization of knowledge production in sub-Saharan Africa.

Despite the centrality of digital connectivity to the knowledge economy, there are few studies of the geographies of digital knowledge and information. In their article “Engagement in the Knowledge Economy: Regional Patterns of Content Creation with a Focus on Sub-Saharan Africa”, published in Information Technologies & International Development, Sanna Ojanperä, Mark Graham, Ralph K. Straumann, Stefano De Sabbata, and Matthew Zook investigate the patterns of knowledge creation in the region. They examine three key metrics: spatial distributions of academic articles (i.e. traditional knowledge production), collaborative software development, and Internet domain registrations (i.e. digitally mediated knowledge production).

Contrary to expectations, they find distribution patterns of digital content (measured by collaborative coding and domain registrations) to be more geographically uneven than those of academic articles: despite the hopes for the democratizing power of the information revolution. This suggests that the factors often framed as catalysts for a knowledge economy do not relate to these three metrics uniformly.

Connectivity is an important enabler of digital content creation, but it seems to be only a necessary, not a sufficient, condition; wealth, innovation capacity, and public spending on education are also important factors. While the growth in telecommunications might be reducing the continent’s reliance on extractive industries and agriculture, transformation into a knowledge economy will require far more concentrated effort than simply increasing Internet connectivity.

We caught up with Sanna to discuss the article’s findings:

Ed.: You chose three indices (articles, domain registration, collaborative coding) to explore the question of Africa’s “readiness” to join the knowledge economy. Are these standard measures for the (digital) knowledge economy?

Sanna: The number of academic articles is a measure often used to estimate knowledge-rich activity, so you could consider it a traditional variable in this area. Other previous work measuring the geographies of codified knowledge has focused on particular aspects or segments of it, such as patents, citations, and innovation systems.

What we found to be an interesting gap in the estimation of knowledge economies is that even if digital connectivity is central to the knowledge economy discourse, studies of the current geographies of digital knowledge and information on online platforms are rare. We argue that digitally mediated participation in information- and knowledge-intensive activities offers a metric that closely measures human capacity and skills. An analysis of digitally mediated traces of skills and information might thus complement the knowledge economy discussion and offer a way to better detect the boundaries of contemporary knowledge economies. To address the gap in research on digital content, we examine the geography of activity in collaborative software development (using the GitHub platform) and the registration of top-level domains. While there are other indicators we could have included in the analysis, we selected these two because they have a global reach and because they measure two distinct but important segments of the knowledge economy.
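
As a rough illustration of what a collaborative-coding metric can look like in practice, the sketch below counts GitHub accounts whose self-reported location mentions a given country, using the public GitHub search API. This is only an approximation for illustration: location strings are free text, the API is rate-limited, and the article's measure was constructed differently.

```python
# An illustrative (and very rough) way to compare the geography of
# collaborative coding: count GitHub accounts whose self-reported location
# mentions a given country. Location is free text, so this is only a proxy.
import requests

countries = ["Nigeria", "Kenya", "South Africa", "Sweden"]

for country in countries:
    resp = requests.get(
        "https://api.github.com/search/users",
        params={"q": f'location:"{country}"'},
        headers={"Accept": "application/vnd.github+json"},
        timeout=30,
    )
    resp.raise_for_status()
    total = resp.json()["total_count"]
    print(f"{country}: {total} user accounts reporting this location")
```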

Ed.: To what extent do the drivers commonly associated with knowledge economies (e.g., GDP, broadband Internet, education, innovation) explain the patterns you found? And what were these patterns?

Sanna: While connectivity plays a role in all three categories, it seems to have a strong effect only on digital content creation. Conversely, the production of academic articles is more strongly related to GDP than to connectivity. Innovation capacity appears to have a positive relationship to all three content types. Education as a topically narrower variable appears, perhaps unexpectedly, to be related only to variance in academic articles.

In terms of the patterns of these variables, we find that the geographies of collaborative coding and domain registrations are more uneven than the spatial distribution of academic authoring. Sub-Saharan Africa contributes the smallest share of content to all three categories, providing only 1.1% of academic articles. With 0.5% of collaborative coding and 0.7% of domain registrations, SSA produces an even smaller share of digital content.

While comparison across absolute numbers informs us of the total volume of content creation, it is useful to pair that with a standardized measure of the propensity for content creation across populations. Ranking the most productive countries by their per capita content creation suggests geographies even more clustered in Europe than the total numbers do. In SSA, individual countries’ content production falls within the two lowest quintiles more often for collaborative coding and domain registrations than for academic articles. This runs contrary to the expectation that contemporary digitally mediated content would be more evenly geographically distributed than traditional content.
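
The standardisation step mentioned above can be illustrated with a short sketch: divide each country's content counts by population to get a per capita rate, then bin the rates into quintiles. The figures below are invented purely for illustration.

```python
# A small sketch of per-capita standardisation and quintile binning.
# Country names, populations, and domain counts are made up for illustration.
import pandas as pd

df = pd.DataFrame(
    {
        "country": ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"],
        "population_m": [10, 52, 5, 83, 15, 200, 9, 60, 30, 2],
        "domains": [12000, 30000, 9000, 250000, 4000,
                    15000, 20000, 180000, 7000, 5000],
    }
)

# Population in millions * 10 = population in units of 100,000 people
df["domains_per_100k"] = df["domains"] / (df["population_m"] * 10)
df["quintile"] = pd.qcut(df["domains_per_100k"], q=5, labels=[1, 2, 3, 4, 5])
print(df.sort_values("domains_per_100k", ascending=False))
```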

Ed.: You measured “articles” by looking at author affiliations. Could you just as well have used “universities” as the measure? Or is there an assumption that connectivity will somehow encourage international co-authorship (does it?) — or that maybe “articles” is a better measure of knowledge quality than presence of universities per se?

Sanna: We chose this indicator because we consider scientific output in the form of academic articles to represent the progress of science. The publication of academic articles and the permanent scientific record they form are central to the codification of knowledge and a key enabler of knowledge-intensive processes. Beyond being an indicator often included in knowledge economy indices, we believe that academic articles offer a relatively uniform measure of knowledge-intensive output, as the process of peer-reviewed publishing and the way in which it constructs a permanent scientific record are rather similar around the world. In contrast, other measures of knowledge-intensive output, such as patent registrations and innovation systems, are known to vary greatly between countries and regions (Griliches, 1990).

We didn’t use the number of universities as an indicator of the knowledge economy because we wanted to look at measures of knowledge-intensive content creation. While universities educate individuals and increase a nation’s human capital, this ‘output’ is very diverse and looks very different between universities. Further, we wanted to assess whether education in fact drives the development of the knowledge economy, and so used a measure of enrolment rates in secondary and tertiary education as an explanatory variable in our analysis.

Ed.: There’s a lot of talk of Africa entering the “global” marketplace: but how global is the knowledge economy — particularly given differences in language and culture? I imagine most cultural and knowledge production remains pretty local?

Sanna: The knowledge economy could be seen as a new dynamic stage in the global economic restructuring, where economic processes and practices that place greater emphasis of intellectual abilities take place in the context of an increasingly interconnected world. To the extent that African information- and knowledge-intensive goods and services compete in these global markets, one could consider the region entering the global knowledge economy. While the markets for knowledge-based goods and services may be smaller in various African countries, many produce regionally or nationally and locally targeted knowledge-rich products. However, this understanding of the concept of knowledge economy tends to focus on commercial activities and scientific and technical knowledge and neglect indigenous, local or cultural knowledge. These types of knowledge have a higher tendency of being tacit rather than codifiable in nature. Unlike codified knowledge, which can be recorded and transmitted through symbols or become materialized in concrete form such as tools or machinery, tacit knowledge takes time to obtain and is not as easily diffused. While these tacit types of knowledge are prevalent and carry significant value for their users and producers, they are less easily converted to commercial value. This makes their measurement more challenging and as a result the discourse of knowledge economies tends to focus less on these types of knowledge production.

Ed.: Is the knowledge economy always going to be a “good” thing, or could it lead to (more) economic exploitation of the region — for example if it got trapped into supplying a market for low-quality work? (I guess the digital equivalent of extractive, rather than productive industries .. call centres, gig-work etc.)

Sanna: As is the case with any type of economic activity, the distributional effects of knowledge economies are affected by a myriad of factors. On one hand, many of the knowledge- and information-rich economic activities require human capital and technological resources, and tend to yield goods and services with higher value added. The investment in and the greater presence of these resources may help nations and individuals to access more opportunities to increase their welfare. However, countries don’t access the global information- and knowledge-based markets as equal players and the benefits from knowledge economies are not distributed equally. It is possible that exploitative practices exist in particular where institutions and regulatory practices are not sufficiently powerful to ensure adequate working conditions. In a previous study on the Sub-Saharan African gig economy and digital labour – both areas that could be considered to form part of the knowledge economy – some of the authors found that while a range of workers in these domains enjoy important and tangible benefits, they also face risks and costs such as low bargaining power, limited economic inclusion, intermediated value chains leading to exploitation of less experienced workers, and restrictions in upgrading skills in order to move upwards in professional roles.

Ed.: I guess it’s difficult to unpack any causation in terms of Internet connectivity, economic development, and knowledge economies — despite hopes of the Internet “transforming” Sub-Saharan African economies. Is there anything in your study (or others) to hint at an answer to the question of causality?

Sanna: In order to discuss causality, we would need to study the effects of a given intervention or treatment, as measured in an ideal randomized controlled experiment. As we are not investigating the effect of a particular intervention, but studying descriptive trends in the three dependent variables using the Ordinary Least Squares (OLS) method of estimation in a multiple linear regression framework, we cannot make strong claims about causality. However, we find that both the descriptive study for our first research question and the regression modelling and residual mapping for the second offer statistically significant results, which lend themselves to interpretations relevant to our research questions and have important implications, which we discuss in the concluding section of the article.
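
For readers unfamiliar with the estimation approach mentioned above, the sketch below shows the general shape of an OLS specification of this kind: a content-creation measure regressed on connectivity, wealth, education, and innovation variables. The variable names and data are hypothetical placeholders, not the article's dataset or exact model.

```python
# A minimal sketch of an OLS regression of the kind described above.
# The data are randomly generated placeholders, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120  # hypothetical number of countries
data = pd.DataFrame(
    {
        "log_gdp_pc": rng.normal(9, 1, n),
        "broadband_per_100": rng.uniform(0, 40, n),
        "education_enrolment": rng.uniform(20, 100, n),
        "innovation_index": rng.uniform(0, 60, n),
    }
)
# Hypothetical outcome: log count of domain registrations per capita
data["log_domains_pc"] = (
    0.5 * data["log_gdp_pc"] + 0.04 * data["broadband_per_100"]
    + rng.normal(0, 1, n)
)

predictors = ["log_gdp_pc", "broadband_per_100",
              "education_enrolment", "innovation_index"]
X = sm.add_constant(data[predictors])
model = sm.OLS(data["log_domains_pc"], X).fit()
print(model.summary())
```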

Read the full article: Ojanperä, S., Graham, M., Straumann, R.K., De Sabbata, S., & Zook, M. (2017). Engagement in the knowledge economy: Regional patterns of content creation with a focus on sub-Saharan Africa. Information Technologies & International Development 13: 33–51.


Sanna Ojanperä was talking to blog editor David Sutcliffe.

What explains variation in online political engagement? https://ensr.oii.ox.ac.uk/what-explains-variation-in-online-political-engagement/ Wed, 21 Jun 2017 07:05:48 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4204
Sweden is a leader in terms of digitalization, but poorer municipalities struggle to find the resources to develop digital forms of politics. Image: Stockholm by Peter Tandlund (Flickr CC BY-NC-ND 2.0)

While much of the modern political process is now carried out digitally, ICTs have yet to bring democracies to their full utopian ideal. The drivers of involvement in digital politics from an individual perspective are well studied, but less attention has been paid to the supply-side of online engagement in politics. In his Policy & Internet article “Inequality in Local Digital Politics: How Different Preconditions for Citizen Engagement Can Be Explained,” Gustav Lidén examines the supply of channels for digital politics distributed by Swedish municipalities, in order to understand the drivers of variation in local online engagement.

He finds a positive trajectory for digital politics in Swedish municipalities, but with significant variation between municipalities when it comes to opportunities for engagement in local politics via their websites. These patterns are explained primarily by population size (digital politics is costly, and larger societies are probably better able to carry these costs), but also by economic conditions and education levels. He also finds that a lack of policies and unenthusiastic politicians create poor possibilities for development, verifying previous findings that without citizen demand — and ambitious politicians — successful provision of channels for digital politics will be hard to achieve.

We caught up with Gustav to discuss his findings:

Ed.: I guess there must be a huge literature (also in development studies) on the interactions between connectivity, education, the economy, and supply and demand for digital government: and what the influencers are in each of these relationships. Not to mention causality.. I’m guessing “everything is important, but nothing is clear”: is that fair? And do you think any “general principles” explaining demand and supply of electronic government / democracy could ever be established, if they haven’t already?

Gustav: Although the literature in this field is becoming vast the subfield that I am primarily engaged in, that is the conditions for digital policy at the subnational level, has only recently attracted greater numbers of scholars. Even if predictors of these phenomena can be highly dependent on context, there are some circumstances that we can now regard as being the ‘usual suspects’. Not surprisingly, resources of both economic and human capital appear to be important, irrespective of the empirical case. Population size also seems to be a key determinant that can influence these kind of resources.

In terms of causality, few studies that I am familiar with have succeeded in examining the interplay of both demand for and supply of digital forms of politics. In my article I try to get closer to the causal chain by examining both structural predictors as well as adding qualitative material from two cases. This makes it possible to establish better precision on causal chains since it enables judgements on how structural conditions influence key stakeholders.

Ed.: You say government-citizen interactions in Sweden “are to a larger extent digital in larger and better-off societies, while ‘analog’ methods prevail in smaller and poorer ones.” Does it particularly matter whether things are digital or analog at municipal level: as long as they all have equal access to national-level things?

Gustav: I would say so, yes. However, this could vary in relation to the responsibilities given to municipalities in different countries. The municipal sector in Sweden is significant. Its costs represent about one quarter of the country’s GDP and the sector is responsible for important parts of the welfare state. In addition to this, municipalities also represent the most natural arena for political engagement — the typical political career starts off in the local council. Great variation in digital politics among municipalities is therefore problematic — there is a risk of inequality between municipalities if citizens of one municipality enjoy greater possibilities for information and participation while those residing in another are more constrained.

Ed.: Sweden has areas of very low population density: are paper / telephone channels cheaper for municipalities to deliver in these areas, or might that just be an excuse for any lack of enthusiasm? i.e. what sorts of geographical constraints does Sweden face?

Gustav: This is a general problem for a large proportion of Swedish municipalities. Thanks to government efforts, ambitions to assure high-speed internet connections, including in more sparsely populated areas, are under way. Yet in recent research, the importance of fast internet access for municipalities’ work with digital politics has been quite ambiguous. My guess would, however, be that if the infrastructure is in place it will, sooner or later, be impossible for municipalities to refrain from working with more digital forms of politics.

Ed.: I guess a cliche of the Swedes (correct me if I’m wrong!) is that despite the welfare state / tradition of tolerance, they’re not particularly social — making it difficult, for example, for non-Swedes to integrate. How far do you think cultural / societal factors play a role in attempts to create “digital community,” in Sweden, or elsewhere?

Gustav: This cliche is perhaps most commonly related to the Swedish countryside. However, the case studies in my article illustrate a contrary image. Take the municipality of Gagnef, one of my two cases, as an example: informants there describe a vibrant civil society, with associations representing a great variety of sectors. One interesting finding, though, is that local engagement is channeled through these traditional forms and not particularly through digital media. Still, from a global perspective, Sweden is rightfully described as an international leader in terms of digitalization. This is perhaps most visible in the more urban parts of the country, even if there are many good examples from the countryside in which the technology is one way to counteract great distances and low population density.

Ed.: And what is the role of the central government in all this? i.e. should they (could they? do they?) provide encouragement and expertise in providing local-level digital services, particularly for the smaller and poorer districts?

Gustav: Due to the considerable autonomy of municipalities, the government has not regulated how they work with this issue. However, it has encouraged and supported parts of it, primarily when it comes to investment in technological infrastructure. My research does show that smaller and poorer municipalities have a hard time finding the right resources for developing digital forms of politics. Local political leaders find it hard to prioritize these issues when there is an almost constant need for more resources for schools and elderly care. But this is hardly unique to Sweden. In a study of the local level in the US, Norris and Reddick show how lack of financial resources is the number one constraint on the development of digital services. I think that government regulation, i.e. forcing municipalities to provide specific digital channels, could lower inequalities between municipalities, but this would be unthinkable without additional government funding.

Ed.: Finally: do you see it as “inevitable” that everyone will eventually be online, or could pockets of analog government-citizen interaction persist basically indefinitely?

Gustav: Something of a countermovement opposing the digital society appears to exist in several societies. In general, I think we need to find a more balanced way to describe the consequences of digitalization. Hopefully, most people see both the value and the downsides of a digital society, but the debate tends to be dominated either by naïve optimists or by complete pessimists. Policy makers need, though, to start thinking about the consequences of inequalities in relation to this technology, and to pay more attention to the risks related to it.

Read the full article: Lidén, G. (2016) Inequality in Local Digital Politics: How Different Preconditions for Citizen Engagement Can Be Explained. Policy & Internet 8 (3) doi:10.1002/poi3.122.


Gustav Lidén was talking to blog editor David Sutcliffe.

See his websites: https://www.miun.se/Personal/gustavliden/ and http://gustavliden.blogspot.se/

Our knowledge of how automated agents interact is rather poor (and that could be a problem) https://ensr.oii.ox.ac.uk/our-knowledge-of-how-automated-agents-interact-is-rather-poor-and-that-could-be-a-problem/ Wed, 14 Jun 2017 15:12:05 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4191 Recent years have seen a huge increase in the number of bots online — including search engine Web crawlers, online customer service chat bots, social media spambots, and content-editing bots in online collaborative communities like Wikipedia. (Bots are important contributors to Wikipedia, completing about 15% of all Wikipedia edits in 2014 overall, and more than 50% in certain language editions.)

While the online world has turned into an ecosystem of bots (by which we mean computer scripts that automatically handle repetitive and mundane tasks), our knowledge of how these automated agents interact with each other is rather poor. But since bots are automata without the capacity for emotions, meaning-making, creativity, or sociality, we might expect their interactions to be relatively predictable and uneventful.

In their PLOS ONE article “Even good bots fight: The case of Wikipedia“, Milena Tsvetkova, Ruth García-Gavilanes, Luciano Floridi, and Taha Yasseri analyze the interactions between bots that edit articles on Wikipedia. They track the extent to which bots undid each other’s edits over the period 2001–2010, model how pairs of bots interact over time, and identify different types of interaction outcomes. Although Wikipedia bots are intended to support the encyclopaedia — identifying and undoing vandalism, enforcing bans, checking spelling, creating inter-language links, importing content automatically, mining data, identifying copyright violations, greeting newcomers, etc. — the authors find they often undid each other’s edits, with these sterile “fights” sometimes continuing for years.
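
One common way to operationalise "undoing" in a revision history is to treat an edit as a revert if it restores the exact content of an earlier revision, attributing the revert to the editors of the revisions in between. The sketch below illustrates that heuristic on a made-up revision list; it is an illustration of the general approach, not the authors' exact procedure.

```python
# A sketch of one common revert-detection heuristic: an edit is a revert if
# it restores the exact content (same checksum) of an earlier revision, and
# the editors of the revisions in between are the ones who were reverted.
# The revision list below is invented; real data would come from the dumps.
from collections import defaultdict

revisions = [  # (revision_id, editor, content_sha1) in chronological order
    (1, "BotA", "aaa"),
    (2, "BotB", "bbb"),
    (3, "BotA", "aaa"),  # BotA restores revision 1 -> reverts BotB
    (4, "BotB", "bbb"),  # BotB restores revision 2 -> reverts BotA
]

reverts = defaultdict(int)  # (reverter, reverted) -> count
seen = {}  # content checksum -> index of the earliest revision with it
for i, (_rev_id, editor, sha1) in enumerate(revisions):
    if sha1 in seen:
        for _, reverted_editor, _ in revisions[seen[sha1] + 1 : i]:
            if reverted_editor != editor:
                reverts[(editor, reverted_editor)] += 1
    else:
        seen[sha1] = i

print(dict(reverts))  # {('BotA', 'BotB'): 1, ('BotB', 'BotA'): 1}
```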

They suggest that even relatively “dumb” bots may give rise to complex interactions, carrying important implications for Artificial Intelligence research. Understanding these bot-bot interactions will be crucial for managing social media, providing adequate cyber-security, and designing autonomous vehicles (that don’t crash..).

We caught up with Taha Yasseri and Luciano Floridi to discuss the implications of the findings:

Ed.: Is there any particular difference between the way individual bots interact (and maybe get bogged down in conflict), and lines of vast and complex code interacting badly, or having unforeseen results (e.g. flash-crashes in automated trading): i.e. is this just (another) example of us not always being able to anticipate how code interacts in the wild?

Taha: There are similarities and differences. The most notable difference is that here the bots are not competing. They all work based on the same rules and, more importantly, towards the same goal: to increase the quality of the encyclopedia. Considering these features, the rather antagonistic interactions between the bots come as a surprise.

Ed.: Wikipedia have said that they know about it, and that it’s a minor problem: but I suppose Wikipedia presents a nice, open, benevolent system to make a start on examining and understanding bot interactions. What other bot-systems are you aware of, or that you could have looked at?

Taha: In terms of content-generating bots, Twitter bots have turned out to be very important in terms of online propaganda. The crawler bots that collect information from social media or the web (such as personal information or email addresses) are also being heavily deployed. In fact, we have come up with a first typology of Internet bots based on their type of action and their intentions (benevolent vs malevolent), which is presented in the article.

Ed.: You’ve also done work on human collaborations (e.g. in the citizen science projects of the Zooniverse) — is there any work comparing human collaborations with bot collaborations — or even examining human-bot collaborations and interactions?

Taha: In the present work we do compare bot-bot interactions with human-human interactions to observe similarities and differences. The most striking difference is in the dynamics of negative interactions. While human conflicts heat up very quickly and then disappear after a while, bots undoing each other’s contributions come as a steady flow which might persist over years. In the HUMANE project, we discuss the co-existence of humans and machines in the digital world from a theoretical point of view, and there we discuss such ecosystems in detail.

Ed.: Humans obviously interact badly, fairly often (despite being a social species) .. why should we be particularly worried about how bots interact with each other, given humans seem to expect and cope with social inefficiency, annoyances, conflict and break-down? Isn’t this just more of the same?

Luciano: The fact that bots can be as bad as humans is far from reassuring. That this happens even when they are programmed to collaborate is more disconcerting than what happens among humans when they compete or fight each other. Here, very elementary mechanisms generate messy and conflictual outcomes through simple interactions. One may hope this is not evidence of what could happen when more complex systems and interactions are in question. The lesson I learnt from all this is that without rules, or some kind of normative framework that promotes collaboration, not even good mechanisms ensure a good outcome.

Read the full article: Tsvetkova M, Garcia-Gavilanes R, Floridi, L, Yasseri T (2017) Even good bots fight: The case of Wikipedia. PLoS ONE 12(2): e0171774. doi:10.1371/journal.pone.0171774


Taha Yasseri and Luciano Floridi were talking to blog editor David Sutcliffe.

Could Voting Advice Applications force politicians to keep their manifesto promises? https://ensr.oii.ox.ac.uk/could-voting-advice-applications-force-politicians-to-keep-their-manifesto-promises/ Mon, 12 Jun 2017 09:00:43 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4199 In many countries, Voting Advice Applications (VAAs) have become an almost indispensable part of the electoral process: they play an important role in the campaigning activities of parties and candidates, form an essential element of media coverage of elections, and are widely used by citizens. A number of studies have shown that VAA use has an impact on the cognitive behaviour of users, on their likelihood to participate in elections, and on the choice of the party they vote for.

These applications are based on the idea of issue and proximity voting — the parties and candidates recommended by VAAs are those with the highest number of matching positions on a number of political questions and issues. Many of these questions are much more specific and detailed than party programs and electoral platforms, and show the voters exactly what the party or candidates stand for and how they will vote in parliament once elected. In his Policy & Internet article “Do VAAs Encourage Issue Voting and Promissory Representation? Evidence From the Swiss Smartvote,” Andreas Ladner examines the extent to which VAAs alter the way voters perceive the meaning of elections, and encourage them to hold politicians to account for election promises.
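
The proximity-matching logic behind VAAs can be illustrated with a toy sketch: the user and each party answer the same issue statements on a common scale, and parties are ranked by how closely their answers match the user's. The parties, positions, and scoring rule below are invented; real VAAs use longer questionnaires and more sophisticated matching and weighting methods.

```python
# A toy illustration of VAA proximity matching: answers are coded from
# -2 ("strongly disagree") to +2 ("strongly agree"), one per issue statement,
# and parties are ranked by their agreement with the user. All positions
# and party names are invented.
def match_score(user, party):
    """Agreement as 1 minus the normalised mean absolute distance."""
    max_gap = 4  # largest possible distance on a -2..+2 scale
    distance = sum(abs(u - p) for u, p in zip(user, party))
    return 1 - distance / (max_gap * len(user))

user_answers = [2, -1, 0, 1, -2]

parties = {
    "Party A": [2, -2, 1, 1, -1],
    "Party B": [-2, 1, 0, -1, 2],
    "Party C": [1, 0, 0, 2, -2],
}

ranking = sorted(parties, key=lambda p: match_score(user_answers, parties[p]),
                 reverse=True)
for name in ranking:
    print(f"{name}: {match_score(user_answers, parties[name]):.0%} match")
```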

His main hypothesis is that VAAs lead to “promissory representation” — where parties and candidates are elected for their promises and sanctioned by the electorate if they don’t keep them. He suggests that as these tools become more popular, the “delegate model” is likely to increase in popularity: i.e. one in which politicians are regarded as delegates voted into parliament to keep their promises, rather than being given a free mandate to act as they see fit (the “trustee model”).

We caught up with Andreas to discuss his findings:

Ed.: You found that issue-voters were more likely (than other voters) to say they would sanction a politician who broke their election promises. But also that issue voters are less politically engaged. So is this maybe a bit moot: i.e. if the people most likely to force the “delegate model” system are the least likely to enforce it?

Andreas: It perhaps looks a bit moot at first glance, but consider what happens if the less engaged are given the possibility to sanction politicians more easily, or by default. Sanctioning a politician who breaks an election promise is not per se a good thing; it depends on the reason why he or she broke it, on the situation, and on the promise. VAAs can easily provide information on the extent to which candidates keep their promises — and then it gets very easy to sanction them simply for that, without taking other arguments into consideration.

Ed.: Do voting advice applications work best in complex, multi-party political systems? (I’m not sure anyone would need one to distinguish between Trump / Clinton, for example?)

Andreas: Yes, I believe that in very complex systems – like, for example, the Swiss case, where voters not only vote for parties but also for up to 35 different candidates – VAAs are particularly useful, since they help voters process a huge amount of information. If the choice is only between two parties or two candidates that are completely different, then VAAs are less helpful.

Ed.: I guess the recent elections / referendum I am most familiar with (US, UK, France) have been particularly lurid and nasty: but I guess VAAs rely on a certain quiet rationality to work as intended? How do you see your Swiss results (and Swiss elections, generally) comparing with these examples? Do VAAs not just get lost in the noise?

Andreas: The idea of VAAs is to help voters make better informed choices. This is, of course, opposed to decisions based on emotions. In Switzerland, elections are not of utmost importance, due to specific features of our political system such as direct democracy and power sharing, but voters seem to appreciate the information provided by smartvote. Almost 20% of voters cast their vote after having consulted the website.

Ed.: Macron is a recent example of someone who clearly sought (and received) a general mandate, rather than presenting a detailed platform of promises. Is that unusual? He was criticised in his campaign for being “too vague,” but it clearly worked for him. What use are manifesto pledges in politics — as opposed to simply making clear to the electorate where you stand on the political spectrum?

Andreas: Good VAAs combine electoral promises on concrete issues as well as more general political positions. Voters can base their decisions on either of them, or on a combination of both of them. I am not arguing in favour of one or the other, but they clearly have different implications. The former is closer to the delegate model, the latter to the trustee model. I think good VAAs should make the differences clear and should even allow the voters to choose.

Ed.: I guess Trump is a contrasting example of someone whose campaign was all about promises (while also seeking a clear mandate to “make America great again”), but who has lied, and broken these (impossible) promises seemingly faster than people can keep track of them. Do you think his supporters care, though?

Andreas: His promises were too far away from what he can possibly keep. Quite a few of his voters, I believe, do not want them to be fully realized, but rather want the US to move a bit more in this direction.

Ed.: I suppose another example of an extremely successful quasi-pledge was the Brexit campaign’s obviously meaningless — but hugely successful — “We send the EU £350 million a week; let’s fund our NHS instead.” Not to sound depressing, but do promises actually mean anything? Is it the candidate / issue that matters (and the media response to that), or the actual pledges?

Andreas: I agree that the media play an important role, and not always in the direction they intend. I do not think that it is the £350 million a week which made the difference. It was much more a general discontent, and a situation which was not sufficiently explained and legitimized, that led to this unexpected decision. If you lose the support for your policy, then it gets much easier for your opponents. It is difficult to imagine that you can get a majority built on nothing.

Ed.: I’ve read all the articles in the Policy & Internet special issue on VAAs: one thing that struck me is that there’s lots of incomplete data, e.g. no knowledge of how people actually voted in the end (or would vote in future). What are the strengths and weaknesses of VAAs as a data source for political research?

Andreas: The quality of the data varies between countries and voting systems. We have a self-selection bias in the use of VAAs, and often also in the surveys conducted among the users. In general we don’t know how they voted, and we have to trust what they tell us. In many respects the data does not differ that much from what we get from classic electoral studies, especially since these also encounter difficulties in reaching a representative sample. VAAs usually have much larger Ns on the side of the voters, generate more information about their political positions and preferences, and provide very interesting information about the candidates and parties.

Read the full article: Ladner, A. (2016) Do VAAs Encourage Issue Voting and Promissory Representation? Evidence From the Swiss Smartvote. Policy & Internet 8 (4). DOI: 10.1002/poi3.137.


Andreas Ladner was talking to blog editor David Sutcliffe.

]]>
Social media and the battle for perceptions of the U.S.–Mexico border https://ensr.oii.ox.ac.uk/social-media-and-the-battle-for-perceptions-of-the-u-s-mexico-border/ Wed, 07 Jun 2017 07:33:34 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4195 The US-Mexican border region is home to approximately 12 million people, and is the most-crossed international border in the world. Unlike the current physical border, the image people hold of “the border” is not firmly established, and can be modified. One way is via narratives (or stories), which are a powerful tool for gaining support for public policies. Politicians’ narratives about the border have historically been perpetuated by the traditional media, particularly when this allows them to publish sensational and attention grabbing news stories.

However, new social media, including YouTube, provide opportunities for less-mainstream narratives of cooperation. In their Policy & Internet article “Do New Media Support New Policy Narratives? The Social Construction of the U.S.–Mexico Border on YouTube”, Donna L. Lybecker, Mark K. McBeth, Maria A. Husmann, and Nicholas Pelikan find that YouTube videos about the U.S.–Mexico border focus (perhaps unsurprisingly) on mainstream, divisive issues such as security and violence, immigration, and drugs. However, the videos appear to construct more favourable perspectives of the border region than traditional media, with around half constructing a sympathetic view of the border, and the people associated with it.

The common perceptions of the border generally take two distinct forms. One holds the U.S.–Mexico border to be the location of a legal flow of economic trade worth $300 billion each year, a line which millions of people legally cross annually, the frontier of 100 years of peaceful coexistence between two countries, and the point of integration for the U.S.–Mexico relationship. An alternative perspective (particularly common since 9/11) focuses less on economic trade and legal crossing and more on undocumented immigration, violence and drug wars, and a U.S.-centric view of “us versus them”.

In order to garner public support for their “solutions” to these issues, politicians often define the border using one of these perspectives. Acceptance of the first view might well allow policymakers to find cooperative solutions to joint problems. Acceptance of the second creates a policy problem that is more value-laden than empirically based and that creates distrust and polarization among stakeholders and between the countries. The U.S.–Mexico border is clearly a complex region encompassing both positives and negatives — but understanding these narratives could have a real-world impact on policy along the border; possibly creating the greater cooperation we need to solve many of the urgent problems faced by border communities.

We caught up with the authors to discuss their findings:

Ed.: Who created the videos you studied: were they created by the public, or were they also produced by perhaps more progressive media outlets? i.e. were you able to disentangle the effect of the media in terms of these narratives?

Mark / Donna: For this study, we examined YouTube videos, using the “relevance” filter. Thus, the videos were ordered by those most related to our topic and most frequently viewed. With this selection method we captured videos produced by a variety of sources: some contained embedded videos from mainstream media, others were created by non-profit groups and public television groups, and others still were produced by interested citizens or private groups. The non-profit and media groups more often discuss the beneficial elements of the border (trade, shared environmental protection, etc.), while individual citizens or groups tend to post the more emotional and narrative-driven videos that are more likely to construct border residents in a non-deserving sense.

Ed.: How influential do you think these videos are? In a world of extreme media concentration (where even the US President seems to get his news from Fox headlines and the 42 people he follows on Twitter), how significant is “home grown” content, which after all may have better, or at least more locally-representative, information than certain parts of the national media?

Mark / Donna: Today’s extreme media world supplies us with constant and fast-moving news. YouTube is part of the media mix, frequently mentioned as the second largest search engine on the web, and as such is influential. Media sources report that a large number of diverse people use YouTube, thus the videos encompass a broad swath of international, domestic and local issues. That said, as with most news sources today, some individuals gravitate to the stories that represent their point of view, and YouTube makes it possible for individuals to do just this. In other words, if a person perceives the US-Mexico border as a horrible place, they can use key words to search YouTube videos that represent that point of view.

However, we believe YouTube to be more influential than some other sources precisely because it encompasses diversity: even when searching using specific terms, there will likely be a few videos included in search results that provide a different point of view. Furthermore, we did find some local, “home grown” content included in search results, again adding to the diversity presented to the individual watching YouTube, although we found less of it than initially expected. Overall, there is selectivity bias with YouTube, like any type of media, but YouTube’s greater diversity of postings and viewers and broad distribution may increase both exposure and influence.

Ed.: Your article was published pre-Trump. How do you think things might have changed post-election, particularly given the uncertainty over “the wall“ and NAFTA — and Trump’s rather strident narratives about each? Is it still a case of “negative traditional media; equivocal social media”?

Mark / Donna: Our guess is that anti-border forces are more prominent on YouTube since Trump’s election and inauguration. Unless there is an organized effort to counter discussion of “the wall” and produce positive constructions of the border, we expect that YouTube videos posted over the past few months lean more toward non-deserving constructions.

Ed.: How significant do you think social media is for news and politics generally, i.e. its influence in this information environment — compared with (say) the mainstream press and party-machines? I guess Trump’s disintermediated tweeting might have turned a few assumptions on their heads, in terms of the relation between news, social media and politics? Or is the media always going to be bigger than Trump / the President?

Mark / Donna: Social media, including YouTube and Twitter, is interactive and thus allows anyone to bypass traditional institutions. President Trump can bypass institutions of government, media institutions, even his own political party and staff and communicate directly with people via Twitter. Of course, there are advantages to that, including hearing views that differ from the “official lines,” but there are also pitfalls, such as minimized editing of comments.

We believe people see both the strengths and the weakness with social media, and thus often read news from both traditional media sources and social media. Traditional media is still powerful and connected to traditional institutions, thus, remains a substantial source of information for many people — although social media numbers are climbing, particularly with the President’s use of Twitter. Overall, both types of media influence politics, although we do not expect future presidents will necessarily emulate President Trump’s use of social media.

Ed.: Another thing we hear a lot about now is “filter bubbles” (and whether or not they’re a thing). YouTube filters viewing suggestions according to what you watch, but still presents a vast range of both good and mad content: how significant do you think YouTube (and the explosion of smartphone video) content is in today’s information / media environment? (And are filter bubbles really a thing..?)

Mark / Donna: Yeah, we think that the filter bubbles are real. Again, we think that social media has a lot of potential to provide new information to people (and still does), although currently social media is falling into the same selectivity bias that characterizes the traditional media. We encourage our students to use online technology to seek out diverse sources: sources that mirror their opinions and sources that oppose them. People in the US can access diverse sources on a daily basis, but they have to be willing to seek out perspectives that differ from their own view, perspectives other than their favoured news source.

The key is getting individuals to want to challenge themselves and to be open to cognitive dissonance as they read or watch material that differs from their belief systems. Technology is advanced but humans still suffer the cognitive limitations from which they have always suffered. The political system in the US, and likely other places, encourages it. The key is for individuals to be willing to listen to views unlike their own.

Read the full article: Lybecker, D.L., McBeth, M.K., Husmann, M.A, and Pelikan, N. (2015) Do New Media Support New Policy Narratives? The Social Construction of the U.S.–Mexico Border on YouTube. Policy & Internet 7 (4). DOI: 10.1002/poi3.94.


Mark McBeth and Donna Lybecker were talking to blog editor David Sutcliffe.

]]>
Using Open Government Data to predict sense of local community https://ensr.oii.ox.ac.uk/using-open-government-data-to-predict-sense-of-local-community/ Tue, 30 May 2017 09:31:59 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4137 Community-based approaches are widely employed in programmes that monitor and promote socioeconomic development. And building the “capacity” of a community — i.e. the ability of people to act individually or collectively to benefit the community — is key to these approaches. The various definitions of community capacity all agree that it comprises a number of dimensions — including opportunities and skills development, resource mobilization, leadership, participatory decision making, etc. — all of which can be measured in order to understand and monitor the implementation of community-based policy. However, measuring these dimensions (typically using surveys) is time consuming and expensive, and the absence of such measurements is reflected in a greater focus in the literature on describing the process of community capacity building, rather than on describing how it’s actually measured.

A cheaper way to measure these dimensions, for example by applying predictive algorithms to existing secondary data like socioeconomic characteristics, socio-demographics, and condition of housing stock, would certainly help policy makers gain a better understanding of local communities. In their Policy & Internet article “Predicting Sense of Community and Participation by Applying Machine Learning to Open Government Data”, Alessandro Piscopo, Ronald Siebes, and Lynda Hardman employ a machine-learning technique (“Random Forests”) to evaluate an estimate of community capacity derived from open government data, and determine the most important predictive variables.

The resulting models were found to be more accurate than those based on traditional statistics, demonstrating the feasibility of the Random Forests technique for this purpose — being accurate, able to deal with small data sets and nonlinear data, and providing information about how each variable in the dataset contributes to predictive accuracy.
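As a rough illustration of the kind of pipeline described above, the sketch below trains a Random Forest on a table of open-data predictors and then inspects which variables contribute most to predictive accuracy. The file name, column names, and target variable are hypothetical placeholders rather than the authors' actual dataset or code.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical neighbourhood-level table built from open government data:
# one row per area, columns for socio-demographics, housing, accessibility, etc.
df = pd.read_csv("neighbourhood_open_data.csv")
X = df.drop(columns=["sense_of_community"])  # candidate predictors
y = df["sense_of_community"]                 # survey-derived measure to be predicted

model = RandomForestRegressor(n_estimators=500, random_state=0)
print("Cross-validated R^2:", cross_val_score(model, X, y, cv=5).mean())

# Fit on the full data and rank variables by their contribution to accuracy.
model.fit(X, y)
importances = sorted(zip(X.columns, model.feature_importances_),
                     key=lambda pair: pair[1], reverse=True)
for name, score in importances[:10]:
    print(f"{name}: {score:.3f}")
```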

We caught up with the authors to discuss their findings:

Ed.: Just briefly: how did you do the study? Were you essentially trying to find which combinations of variables available in Open Government Data predicted “sense of community and participation” as already measured by surveys?

Authors: Our research stemmed from an observation about the measures of social characteristics currently available. These are generally obtained through expensive surveys, so we asked ourselves: “how could we generate them in a more economical and efficient way?” In recent years, the UK government has openly released a wealth of datasets, which could be used to provide information for purposes other than those for which they had been created — in our case, providing measures of sense of community and participation. We started our work by consulting papers from the social science domain, to understand which factors were associated with sense of community and participation. Afterwards, we matched the factors that were most commonly mentioned in the literature with “actual” variables found in UK Open Government Data sources.

Ed.: You say “the most determinant variables in our models were only partially in agreement with the most influential factors for sense of community and participation according to the social science literature” — which were they, and how do you account for the discrepancy?

Authors: We observed two types of discrepancy. The first was the case of variables that had roughly the same level of importance in our models and in others previously developed, but with a different rank. For instance, median age was by far the most determinant variable in our model for sense of community. This variable was not ranked among the top five variables in the literature, although it was listed among the significant variables.

The second type of discrepancy regarded variables which were highly important in our models and not influential in others, or vice versa. An example is the socioeconomic status of residents of a neighbourhood, which appeared to have no effect on participation in prior studies, but was the top-ranking variable in our participation model (operationalised as the number of people in intermediate occupation).

We believe that there are multiple explanations for these phenomena, all of which deserve further investigation. First, predictors that are highly determinant in conventional statistical models have been shown to have little or no importance in ensemble algorithms, such as the one we used [1]. Second, factors influencing sense of community and civic participation may vary according to the context (e.g. different countries; see [3] about sense of community in China for an example). Finally, different methods may measure different aspects related to a socially meaningful concept, leading to different partial explanations.

Ed.: What were the predictors for “lack of community” — i.e. what would a terrible community look like, according to your models?

Authors: Our work did not really focus on finding “good” and “bad” communities. However, we did notice some characteristics that were typical of communities with a low sense of community or participation in our dataset. For example, sense of community had a strong negative correlation with accessibility of work and stores, with ethnic fragmentation, and with the number of people living in the UK for less than 10 years. On the other hand, it was positively correlated with the age of residents. Participation, instead, was negatively correlated with household composition and the occupation of residents, whilst it had a positive relation with their level of education and the number of hours worked weekly. Of course, these data would need to be interpreted by a social scientist, in order to properly contextualise and understand them.

Ed.: Do you see these techniques as being more useful to highlight issues and encourage discussion, or actually being used in planning? For example, I can see it might raise issues if machine-learning models “proved” that presence of immigrant populations, or neighbourhoods of mixed economic or ethnic backgrounds, were less cohesive than homogeneous ones (not sure if they are?).

Authors: How machine learning algorithms work is not always clear, even to specialists, and this has led some people to describe them as “black boxes”. We believe that models like those we developed can be extremely useful for challenging existing perspectives based on past data in the social science literature, e.g. they can be used to confirm or reject previous measures. Additionally, machine learning models can serve as indicators that can be consulted more frequently: they are cheaper to produce, so we can use them more often and see whether policies have actually worked.

Ed.: It’s great that existing data (in this case, Open Government Data) can be used, rather than collecting new data from scratch. In practice, how easy is it to repurpose this data and build models with it — including in countries where this data may be more difficult to access? And were there any variables you were interested in that you couldn’t access?

Authors: Identifying relevant datasets and getting hold of them was a lengthy process, even in the UK, where plenty of work has been done to make government data openly available. We had to retrieve many datasets from the pages of the government departments that produced them, such as the Department for Work and Pensions or the Home Office, because we could not find them through the portal data.gov.uk. Besides this, the ONS website was another very useful resource, which we used to get census data.

The hurdles encountered in gathering the data led us to recommend the development of methods that would be able to more automatically retrieve datasets from a list of sources and select the ones that provide the best results for predictive models of social dimensions.

Ed.: The OII has done some similar work, estimating the local geography of Internet use across Britain, combining survey and national census data. The researchers said the small-area estimation technique wasn’t being used routinely in government, despite its power. What do you think of their work and discussion, in relation to your own?

Authors: One of the issues we were faced with in our research was the absence of nationwide data about sense of community and participation at a neighbourhood level. The small area estimation approach used by Blank et al., 2017 [2] could provide a suitable solution to the issue. However, the estimates produced by their approach understandably incorporate a certain amount of error. In order to use estimated values as training data for predictive models of community measures it would be key to understand how this error would be propagated to the predicted values.

[1] Berk, R. (2006) An Introduction to Ensemble Methods for Data Analysis. Sociological Methods & Research 34 (3): 263–95.
[2] Blank, G., Graham, M., and Calvino, C. (2017) Local Geographies of Digital Inequality. Social Science Computer Review. DOI: 10.1177/0894439317693332.
[3] Xu, Q., Perkins, D.D., and Chow, J.C.C. (2010) Sense of community, neighboring, and social capital as predictors of local political participation in China. American Journal of Community Psychology 45 (3–4): 259–271.

Read the full article: Piscopo, A., Siebes, R. and Hardman, L. (2017) Predicting Sense of Community and Participation by Applying Machine Learning to Open Government Data. Policy & Internet 9 (1). DOI: 10.1002/poi3.145.


Alessandro Piscopo, Ronald Siebes, and Lynda Hardman were talking to blog editor David Sutcliffe.

]]>
Should adverts for social casino games be covered by gambling regulations? https://ensr.oii.ox.ac.uk/should-adverts-for-social-casino-games-be-covered-by-gambling-regulations/ Wed, 24 May 2017 07:05:19 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4108 Social casino gaming, which simulates gambling games on a social platform such as Facebook, is a nascent but rapidly growing industry — social casino game revenues grew 97 percent between 2012 and 2013, with a USD$3.5 billion market size by the end of 2015. Unlike gambling played for real money, social casino games generally have no monetary prizes and are free-to-play, although they may include some optional monetized features. The size of the market and users’ demonstrated interest in gambling-themed activities mean that social casino gamers are an attractive market for many gambling operators, and several large international gambling companies have merged with social casino game operators.

Some operators consider the games to be a source of additional revenue in jurisdictions where online gambling is largely illegal, or a way to attract new customers to a land-based gambling venue. Hybrid models are also emerging, with the potential for tangible rewards for playing social casino games. This merging of gaming and gambling means that many previously established boundaries are becoming blurred, and at many points, the two are indistinguishable.

However, content analysis of game content and advertising can help researchers, industry, and policymakers better understand how the two entertainment forms overlap. In their Policy & Internet article “Gambling Games on Social Platforms: How Do Advertisements for Social Casino Games Target Young Adults?”, Brett Abarbanel, Sally M. Gainsbury, Daniel King, Nerilee Hing, and Paul H. Delfabbro undertake a content analysis of 115 social casino gaming advertisements captured by young adults during their regular Internet use. They find that the advertisements typically feature imagery likely to appeal to young adults, with message themes that glamorize and normalize gambling. Notably, nearly 90 percent of the advertisements contained no responsible or problem gambling language, despite the gambling-like content.

Gambling advertisements currently face much stricter restrictions on exposure and distribution than do social casino game advertisements: despite the latter containing much gambling-themed content designed to attract consumers. Given the receptivity of young people to messages that encourage gambling, the authors recommend that gaming companies embrace corporate social responsibility standards, including adding warning messages to advertisements for gambling-themed games. They hope that their qualitative research may complement existing quantitative findings, and facilitate discussions about appropriate policies for advertisements for social casino games and other gambling-themed games.

We caught up with Brett to discuss their findings:

Ed.: You say there are no policies related to the advertising of social casino games — why is this? And do you think this will change?

Brett: Social casino games are regulated under general consumer regulations, but there are no specific regulations for these types of games, and they do not fall under gambling regulation. Although several gambling regulatory bodies have considered these games, because they do not require payment to play and their prizes have no monetary value they are not considered gambling activities. Where the games include branding for gambling companies or are considered advertising, they may fall under relevant legislation. Currently it is up to individual consumers to consider if they are relevant, which includes parents considering their children’s use of the games.

Ed.: Is there work on whether these sorts of games actually encourage gambling behaviour? As opposed to gambling behaviour simply pre-existing — i.e. people are either gamblers or not, susceptible or not.

Brett: We have conducted previous research showing that almost one-fifth of adults who played social casino games had gambled for money as a direct result of these games. Research also found that two-thirds of adolescents who had paid money to play social casino games had gambled directly as a result of these games. This builds on other international research suggesting that there is a pathway between games and gambling. For some people, the games are perceived to be a way to ‘try out’ or practice gambling without money and most are motivated to gamble due to the possibility of winning real money. For some people with gambling problems, the games can trigger the urge to gamble, although for others, the games are used as a way to avoid gambling in an attempt to cut back. The pathway is complicated and needs further specific research, including longitudinal studies.

Ed.: Possibly a stupid question: you say social games are a huge and booming market, despite being basically free to play. Where does the revenue come from?

Brett: Not a stupid question at all! When something is free, of course it makes sense to question where the money comes from. The revenue in these business models comes from advertisements and players. The advertisement revenue model is similar to other revenue models, but the player revenue model, which is based largely on micropayments, is a major component of how these games make money. Players can typically play for free, and micropayments are voluntary. However, when they run out of free chips, players have to wait to continue to play, or they can purchase additional chips.

The micropayments can also improve the game experience, for example by obtaining in-game items, getting a temporary boost in the game, adding lives/strength/health to an avatar or game session, or unlocking the next stage in the game. In social casino games, for example, micropayments can be made to acquire more virtual chips with which to play the slot game. Our research suggests that only a small fraction of the player base actually makes micropayments, and a smaller fraction of these pay very large amounts. Since many of these games are free to play, but one can pay to advance through the game in certain ways, they have colloquially been referred to as “freemium” games.

Ed.: I guess social media (like Facebook) are a gift to online gambling companies: i.e. being able to target (and A/B test) their adverts to particular population segments? Are there any studies on the intersection of social media, gambling and behavioural data / economics?

Brett: There is a reasonable cross-over between social casino game players and gamblers – our Australian research found 25% of Internet gamblers and 5% of land-based gamblers used social casino games, and US studies show around one-third of social casino gamers visit land-based casinos. Many of the most popular and successful social casino games are owned by companies that also operate gambling, in venues and online. Some casino companies offer social casino games to continue to engage with customers when they are not in the venue, and may offer prizes that can be redeemed in venues. Games may also allow gambling companies to test how popular games will be before they put them in venues. That said, as most players do not pay to play social casino games, they may engage with these differently from gambling products.

Ed.: We’ve seen (with the “fake news” debate) social media companies claiming to simply be a conduit to others’ content, not content providers themselves. What do they say in terms of these social games: I’m assuming they would either claim that they aren’t gambling, or that they aren’t responsible for what people use social media for?

Brett: We don’t want to speak for the social media companies themselves, and they appear to leave quite a bit up to the game developers. Advertising standards have become more lax on gambling games – the example we give in our article is Google, which had a strict policy against advertisements for gambling-related content in the Google Play store but in February 2015 began beta testing advertisements for social casino games. In some markets where online gambling is restricted, online gambling sites offer ‘free’ social casino games that link to real money sites as a way to reach these markets.

Ed.: I guess this is just another example of the increasingly attention-demanding, seductive, sexualised, individually targeted, ubiquitous, behaviourally attuned, monetised environment we (and young children) find ourselves in. Do you think we should be paying attention to this trend (e.g. noticing the close link between social gaming and gambling) or do you think we’ll all just muddle along as we’ve always done? Is this disturbing, or simply people doing what they enjoy doing?

Brett: We should certainly be paying attention to this trend, but don’t think the activity of social casino games is disturbing. A big part of the goal here is awareness, followed by conscious action. We would encourage companies to take more care in controlling who accesses their games and to whom their advertisements are targeted. As you note, David, we are in such a highly-targeted, specified state of advertising. As a result, we should, theoretically, be able to avoid marketing games to young kids. Companies should also certainly be mindful of the potential effect of cartoon games. We don’t automatically assign a sneaky, underhanded motive to the industry, but at the same time there is a percentage of the population that is at risk for gambling problems and we don’t want to exacerbate the situation by inadvertently advertising to young people, who are more susceptible to this type of messaging.

Read the full article: Abarbanel, B., Gainsbury, S.M., King, D., Hing, N., and Delfabbro, P.H. (2017) Gambling Games on Social Platforms: How Do Advertisements for Social Casino Games Target Young Adults? Policy & Internet 9 (2). DOI: 10.1002/poi3.135.


Brett Abarbanel was talking to blog editor David Sutcliffe.

]]>
How useful are volunteer crisis-mappers in a humanitarian crisis? https://ensr.oii.ox.ac.uk/how-useful-are-volunteer-crisis-mappers-in-a-humanitarian-crisis/ Thu, 18 May 2017 09:11:30 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4129 User-generated content can provide a useful source of information during humanitarian crises like armed conflict or natural disasters. With the rise of interactive websites, social media, and online mapping tools, volunteer crisis mappers are now able to compile geographic data as a humanitarian crisis unfolds, allowing individuals across the world to organize as ad hoc groups to participate in data collection. Crisis mappers have created maps of earthquake damage and trapped victims, analyzed satellite imagery for signs of armed conflict, and cleaned Twitter data sets to uncover useful information about unfolding extreme weather events like typhoons.

Although these volunteers provide useful technical assistance to humanitarian efforts (e.g. when maps and records don’t exist or are lost), their lack of affiliation with “formal” actors, such as the United Nations, and the very fact that they are volunteers, can make them seem a dubious data source. Indeed, concerns have been raised about the quality of amateur mapping and data efforts, and the uses to which they are put. Most of these concerns assume that volunteers have no professional training. And herein lies the contradiction: by doing the work for free and of their own volition the volunteers make these efforts possible and innovative, but this is also why crisis mapping is doubted and questioned by experts.

By investigating crisis-mapping volunteers and organizations, Elizabeth Resor’s article “The Neo-Humanitarians: Assessing the Credibility of Organized Volunteer Crisis Mappers” published in Policy & Internet presents evidence of a more professional cadre of volunteers and a means to distinguish between different types of volunteer organizations. Given these organizations now play an increasingly integrated role in humanitarian responses, it’s crucial that their differences are understood and that concerns about the volunteers are answered.

We caught up with Elizabeth to discuss her findings:

Ed.: We have seen from Citizen Science (and Wikipedia) that large crowds of non-professional volunteers can produce work of incredible value, if projects are set up right. Are the fears around non-professional crisis mappers valid? For example, is this an environment where everything “must be correct”, rather than “probably mostly correct”?

Elizabeth: Much of the fear around non-professional crisis mappers comes from a lack of understanding about who the volunteers are and why they are volunteering. As these questions are answered and professional humanitarian actors become more familiar with the concept of volunteer humanitarians, I think many of these fears are diminishing.

Due to the fast-paced and resource-constrained environments of humanitarian crises, traditional actors, like the UN, are used to working with “good enough” data, or data that are “probably mostly correct”. And as you point out, volunteers can often produce very high quality data. So when you combine these two facts, it stands to reason that volunteer crisis mappers can contribute necessary data that is most likely as good as (if not better) than the data that humanitarian actors are used to working with. Moreover, in my research I found that most of these volunteers are not amateurs in the full sense because they come from related professional fields (such as GIS).

Ed.: I suppose one way of assuaging fears is to maybe set up an umbrella body of volunteer crisis mapping organisations, and maybe offer training opportunities and certification of output. But then I suppose you just end up as professionals. How blurry are the lines between useful-not useful / professional-amateur in crisis mapping?

Elizabeth: There is an umbrella group for volunteer organizations set up exactly for that reason! It’s called the Digital Humanitarian Network. At the time that I was researching this article, the DHN was very new and so I wasn’t able to ask if actors were more comfortable working with volunteers contacted through the DHN, but that would be an interesting issue to look into.

The two crisis mapping organizations I researched — the Standby Task Force and the GIS Corps — both offer training and some structure to volunteer work. They take very different approaches to the volunteer work — the Standby Task Force work can include very simple micro-tasks (like classifying photographs), whereas the GIS Corps generally provides quite specialised technical assistance (like GIS analysis). However, both of these kinds of tasks can produce useful and needed data in a crisis.

Ed.: Another article in the journal examined the effective take-over of a Russian crisis volunteer website by the Government: by professionalising (and therefore controlling) the site and volunteer details, the authorities gained control over who did / didn’t turn up in disaster areas (effectively meaning nonprofessionals were kept out). How do humanitarian organisations view volunteer crisis mappers: as useful organizations to be worked with in parallel, or as something to be controlled?

Elizabeth: I have seen examples of humanitarian and international development agencies trying to lead or create crowdsourcing responses to crises (for example, USAID “Mapping to End Malaria“). I take this as a sign that these agencies understand the value in volunteer contributions — something they wouldn’t have understood without the initial examples created by those volunteers.

Still, humanitarian organizations are large bureaucracies, and even in a crisis they function as bureaucracies, while volunteer organizations take a nimble and flexible approach. This structural difference is part of the value that volunteers can offer humanitarian organizations, so I don’t believe that it would be in the best interest of the humanitarian organizations to completely co-opt or absorb the volunteer organizations.

Ed.: How does liability work? E.g. if crisis workers in a conflict zone are put in danger by their locations being revealed by well-meaning volunteers? Or if mistakes are made on the ground because of incorrect data — perhaps injected by hostile actors to create confusion (thinking of our current environment of hybrid warfare..).

Elizabeth: Unfortunately, all humanitarian crises are dangerous and involve threats to “on the ground” response teams as well as affected communities. I’m not sure how liability is handled. Incorrect data or revealed locations might not be immediately traced back to the source of the problem (i.e. volunteers) and the first concern would be minimizing the harm, not penalizing the cause.

Still, this is the greatest challenge to volunteer crisis mapping that I see. Volunteers don’t want to cause more harm than good, and to avoid doing so they must understand the context of the crisis in which they are getting involved (even if it is remotely). This is where relationships with organizations “on the ground” are key. Also, while I found that most volunteers had experience related to GIS and/or data analysis, very few had experience in humanitarian work. This seems like an area where training can help volunteers understand the gravity of their work, to ensure that they take it seriously and do their best work.

Ed.: Finally, have you ever participated as a volunteer crisis mapper? And also: how do you the think the phenomenon is evolving, and what do you think researchers ought to be looking at next?

Elizabeth: I haven’t participated in any active crises, although I’ve tried some of the tools and trainings to get a sense of the volunteer activities.

In terms of future research, you mentioned hybridized warfare and it would be interesting to see how this change in the location of a crisis (i.e. in online spaces as well as physical spaces) is changing the nature of volunteer responses. For example, how can many dispersed volunteers help monitor ISIS activity on YouTube and Twitter? Or are those tasks better suited for an algorithm? I would also be curious to see how the rise of isolationist politicians in Europe and the US has influenced volunteer crisis mapping. Has this caused more people to want to reach out and participate in international crises or is it making them more inward-looking? It’s certainly an interesting field to follow!

Read the full article: Resor, E. (2016) The Neo-Humanitarians: Assessing the Credibility of Organized Volunteer Crisis Mappers. Policy & Internet 8 (1). DOI: 10.1002/poi3.112.

Elizabeth Resor was talking to blog editor David Sutcliffe.

]]>
Is Left-Right still meaningful in politics? Or are we all just winners or losers of globalisation now? https://ensr.oii.ox.ac.uk/is-left-right-still-meaningful-in-politics-or-are-we-all-just-winners-or-losers-of-globalisation-now/ Tue, 16 May 2017 08:18:37 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4167 The Left–Right dimension — based on the traditional cleavage in society between capital and labor — is the most common way of conceptualizing ideological difference. But in an ever more globalized world, are the concepts of Left and Right still relevant? In recent years political scientists have increasingly come to talk of a two-dimensional politics in Europe, defined by an economic (Left–Right) dimension, and a cultural dimension that relates to voter and party positions on sociocultural issues.

In his Policy & Internet article “Cleavage Structures and Dimensions of Ideology in English Politics: Evidence From Voting Advice Application Data”, Jonathan Wheatley argues that the cleavage that exists in many European societies between “winners” and “losers” of globalization has engendered a new ideological dimension pitting “cosmopolitans” against “communitarians” and that draws on cultural issues relating to identity — rather than economic issues.

He identifies latent dimensions from opinion data generated by two Voting Advice Applications deployed in England in 2014 and 2015 — finding that the political space in England is defined by two main ideological dimensions: an economic Left–Right dimension and a cultural communitarian–cosmopolitan dimension. While they co-vary to a significant degree, with economic rightists tending to be more communitarian and economic leftists tending to be more cosmopolitan, these tendencies do not always hold and the two dimensions should be considered as separate.
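For readers curious how such dimensions can be recovered in practice, here is a small, hypothetical sketch that extracts two latent dimensions from a matrix of VAA item responses using factor analysis. The file layout and column names are invented, and the article's own scaling procedure may well differ; this only illustrates the general idea of reducing many issue items to a few underlying ideological dimensions.

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Hypothetical VAA data: one row per respondent, one column per policy statement
# (e.g. Likert-coded agreement scores).
responses = pd.read_csv("vaa_responses.csv")
X = StandardScaler().fit_transform(responses)

fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(X)  # each respondent's position on the two latent dimensions

# Loadings indicate which statements define each dimension, e.g. economic items
# loading on one dimension and cultural/identity items on the other.
loadings = pd.DataFrame(fa.components_.T, index=responses.columns,
                        columns=["dimension_1", "dimension_2"])
print(loadings.sort_values("dimension_1"))
```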

The identification of the communitarian–cosmopolitan dimension lends weight to the hypothesis of Kriesi et al. (2006) that politics is increasingly defined by a cleavage between “winners” and “losers” of globalization, with “losers” tending to adopt a position of cultural demarcation and to perceive “outsiders” such as immigrants and the EU, as a threat. If an economic dimension pitting Left against Right (or labour against capital) defined the political arena in Europe in the twentieth century, maybe it’s a cultural cleavage that pits cosmopolitans against communitarians that defines politics in the twenty-first.

We caught up with Jonathan to discuss his findings:

Ed.: The big thing that happened since your article was published was Brexit — so I guess the “communitarian–cosmopolitan” dimension (Trump!) makes obvious intuitive sense as a political cleavage plane. Will you be comparing your GE2015 VAA data with GE2017 data? And what might you expect to see?

Jonathan: Absolutely! We will be launching the WhoGetsMyVoteUK Voting Advice Application next week. This VAA will be launched by three universities: Oxford Brookes University (where I am based), Queen Mary University London and the University of Bristol. This should provide extensive data that will allow us to make a longitudinal study: before and after Brexit.

Ed.: There was a lot of talk (for the first time) after Brexit of “the left behind” — I suppose partly corresponding to your “communitarians” — but that all seems to have died down. Of course they’re still there: is there any sense of how they will affect the upcoming election — particularly the “communitarian leftists”?

Jonathan: Well this is the very group that Theresa May’s Conservative Party seems to be targeting. We should note that May has attempted to appeal directly to this group by her claim that “if you believe you’re a citizen of the world, you’re a citizen of nowhere” made at the Tory Party Conference last autumn, and by her assertion that “Liberalism and globalisation have left people behind” made at the Lord Mayor’s banquet late last year. Her (at least superficially) economically leftist proposals during the election campaign to increase the living wage and statutory rights for family care and training, and to strengthen labour laws, together with her “hard Brexit” stance and confrontational rhetoric towards European leaders, seem specifically designed to appeal to this group. Many of these “communitarian leftists” have previously been tempted by UKIP, but the Conservatives seem to be winning the battle for their votes at the moment.

Ed.: Does the UK’s first-past-the-post system (resulting in a non-proportionally representative set of MPs) just hide what is happening underneath, i.e. I’m guessing a fairly constant, unchanging spectrum of political leanings? Presumably UKIP’s rise didn’t signify a lurch to the right: it was just an efficient way of labelling (for a while) people who were already there?

Jonathan: To a certain extent, yes. Superficially the UK has very much been a case of “business as usual” in terms of its party system, notwithstanding the (perhaps brief) emergence of UKIP as a significant force in around 2012. This can be contrasted with Sweden, Finland and the Netherlands, where populist right parties obtained significant representation in parliament. And UKIP may prove to be a temporary phenomenon. The first-past-the-post system provides more incentives for parties to reposition themselves to reflect the new reality than it does for new parties to emerge. In fact it is this repositioning, from an economically right-wing, mildly cosmopolitan party to an (outwardly) economically centrist, communitarian party, that seems to characterise the Tories today.

Ed.: Everything seems to be in a tremendous mess (parties imploding, Brexit horror, black-box campaigning, the alt-right, uncertainty over tactical voting, “election hacking”) and pretty volatile. But are these exciting times for political scientists? Or are things too messy and the data (for example, on voting intentions as well as outcomes) too inaccessible to distinguish any grand patterns?

Jonathan: Exciting from a political science point of view; alarming from the point of view of a member of society.

Ed.: But talking of “grand patterns”: do you have any intuition why “the C20 might be about capital vs labour; the C21 about local vs global”? Is it simply the next obvious reaction to ever-faster technological development and economic concentration bumping against societal inertia, or something more complex and unpredictable?

Jonathan: Over generations, European societies gradually developed mechanisms of accountability to constrain their leaders and ensure they did not over-reach their powers. This is how democracy became consolidated. However, given that power is increasingly accruing to transnational and multinational corporations and networks that are beyond the reach of citizens operating in the national sphere, we must learn how to do this all over again on a global scale. Until we do so, globalisation will inevitably create “winners” and “losers” and will, I think, inevitably lead to more populism and upheaval.

Read the full article: Wheatley, J. (2016) Cleavage Structures and Dimensions of Ideology in English Politics: Evidence From Voting Advice Application Data. Policy & Internet 8 (4). DOI: 10.1002/poi3.129.

Jonathan Wheatley was talking to blog editor David Sutcliffe.

]]>
Has Internet policy had any effect on Internet penetration in Sub-Saharan Africa? https://ensr.oii.ox.ac.uk/has-internet-policy-had-any-effect-on-internet-penetration-in-sub-saharan-africa/ Wed, 10 May 2017 08:08:34 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4112 There is a consensus among researchers that ICT is an engine for growth, and it’s also considered by the OECD to be a part of fundamental infrastructure, like electricity and roads. The last decade has seen a rapid growth of Internet access across Africa, although it has not been evenly distributed. Some African countries have an Internet penetration of over 50 percent (such as the Seychelles and South Africa) whereas some resemble digital deserts, not even reaching two percent. Even more surprisingly, countries that are seemingly comparable in terms of economic development often show considerable differences in terms of Internet access (e.g., Kenya and Ghana).

Being excluded from the Internet economy has negative economic and social implications; it is therefore important for policymakers to ask how policy can bridge this inequality. But does policy actually have an effect on these differences? And if so, which specific policy variables? In their Policy & Internet article “Crossing the Digital Desert in Sub-Saharan Africa: Does Policy Matter?”, Robert Wentrup, Xiangxuan Xu, H. Richard Nakamura, and Patrik Ström address the dearth of research assessing the interplay between policy and Internet penetration by identifying Internet penetration-related policy variables and institutional constructs in Sub-Saharan Africa. It is a first attempt to investigate whether Internet policy variables have any effect on Internet penetration in Sub-Saharan Africa, and to shed light on them.

Based on a literature review and the available data, they examine four variables: (i) free flow of information (e.g. level of censorship); (ii) market concentration (i.e. whether or not internet provision is monopolistic); (iii) the activity level of the Universal Service Fund (a public policy promoted by some governments and international telecom organizations to address digital inclusion); and (iv) total tax on computer equipment, including import tariffs on personal computers. The results show that only the activity level of the USF and low total tax on computer equipment are significantly positively related to Internet penetration in Sub-Saharan Africa. Free flow of information and market concentration show no impact on Internet penetration. The latter could be attributed to underdeveloped competition in most Sub-Saharan countries.
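As a simple illustration of this type of cross-country analysis, the sketch below regresses Internet penetration on the four policy variables with ordinary least squares. The data file and variable names are placeholders, and the article's actual model specification and estimation approach may differ.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical country-level dataset for Sub-Saharan Africa, one row per country.
countries = pd.read_csv("ssa_internet_policy.csv")

model = smf.ols(
    "internet_penetration ~ free_flow_of_information + market_concentration"
    " + usf_activity_level + computer_equipment_tax",
    data=countries,
).fit()

# Inspect coefficients and significance levels for each policy variable.
print(model.summary())
```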

The authors argue that unless states pursue an inclusive policy intended to enhance Internet access for all its citizens, there is a risk that the skewed pattern between the “haves” and the “have nots” will persist, or worse, be reinforced. They recommend that policymakers promote the policy instrument of Universal Service and USF and consider substituting tax on computer equipment with other tax revenues (i.e. introduce consumption-friendly incentives), and not to blindly trust the market’s invisible hand to fix inequality in Internet diffusion.

We caught up with Robert to discuss the findings:

Ed.: I would assume that Internet penetration is rising (or possibly even booming) across the continent, and that therefore things will eventually sort themselves out — is that generally true? Or is it already stalling / plateauing, leaving a lot of people with no apparent hope of ever getting online?

Robert: Yes, generally we see a growth in Internet penetration across Africa. But it is very heterogeneous and unequal in its character, and thus country-specific. Some rich African countries are doing quite well whereas others are lagging behind. We have also seen that Internet connectivity is vulnerable to the political situation. The recent shutdown of the Internet in Cameroon demonstrates this vulnerability.

Ed.: You mention that “determining the causality between Internet penetration and [various] factors is problematic” – i.e. that the relation between Internet use and economic growth is complex. This presumably makes it difficult for effective and sweeping policy “solutions” to have a clear effect. How has this affected Internet policy in the region, if at all?

Robert: On the one hand one can say that if there is economic growth, there will be money to invest in Internet infrastructure and devices, and on the other hand if there are investments in Internet infrastructure, there will be economic growth. This resembles the chicken-and-egg problem. For many African countries, which lack large public investment funds and at the same time suffer from other more pressing socio-economic challenges, it might be tricky to put effort into Internet policy issues. But there are some good examples of countries that have actually managed to do this, for example Kenya. As a result these efforts can lead to positive effects on the society and economy as a whole. The local context, and a focus on instruments that disseminate Internet usage to underprivileged geographic areas and social groups (like the Universal Service Fund), are very important.

Ed.: How much of the low Internet penetration in large parts of Africa is simply due to large rural populations — and therefore something that will either never be resolved properly, or that will naturally resolve itself with the ongoing shift to urban centres?

Robert: We did not see a clear causality between rural population and low Internet penetration at the country level, mainly because countries with a large rural population are often quite rich and thus also have money to invest in Internet infrastructure. Africa is very dependent on agriculture. Although the Internet connectivity issue might be “self-resolved” to some degree by urban migration, other issues would emerge from such a shift, such as an increased socio-economic divide in the urban areas. Hence, it is more effective to make sure that the Internet reaches rural areas at an early stage.

Ed.: And how much does domestic policy (around things like telecoms) get set internally, as opposed to externally? Presumably some things (e.g. the continent-wide cables required to connect Africa to the rest of the world) are easier to bring about if there is a strong / stable regional policy around regulation of markets and competition — whether organised internally, or influenced by outside governments and industry?

Robert: The influence of telecom ministries and telecom operators is strong, but of course they are affected by intra-regional organisations, private companies etc. In the past Africa has had difficulties in developing pan-regional trade and policies. But such initiatives are encouraged, not least in order to facilitate cost-sharing of large Internet-related investments.

Ed.: Leaving aside the question of causality, you mention the strong correlation between economic activity and Internet penetration: are there any African countries that buck this trend — at either end of the economic scale?

Robert: We have seen that Kenya and Nigeria have had quite impressive rates of Internet penetration in relation to GDP. Gabon on the other hand is a relatively rich African country, but with quite low Internet penetration.

Read the full article: Wentrup, R., Xu, X., Nakamura, H.R., and Ström, P. (2016) Crossing the Digital Desert in Sub-Saharan Africa: Does Policy Matter? Policy & Internet 8 (3). doi:10.1002/poi3.123


Robert Wentrup was talking to blog editor David Sutcliffe.

We aren’t “rational actors” when it come to privacy — and we need protecting https://ensr.oii.ox.ac.uk/we-arent-rational-actors-when-it-come-to-privacy-and-we-need-protecting/ Fri, 05 May 2017 08:00:56 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4100
We are increasingly exposed to new practices of data collection. Image by ijclark (Flickr CC BY 2.0).

As digital technologies and platforms are increasingly incorporated into our lives, we are exposed to new practices of data creation and collection — and there is evidence that American citizens are deeply concerned about the consequences of these practices. But despite these concerns, the public has not abandoned technologies that produce data and collect personal information. In fact, the popularity of technologies and services that reveal insights about our health, fitness, medical conditions, and family histories in exchange for extensive monitoring and tracking paints a picture of a public that is voluntarily offering itself up to increasingly invasive forms of surveillance.

This seeming inconsistency between intent and behaviour is routinely explained with reference to the “privacy paradox”. Advertisers, retailers, and others with a vested interest in avoiding the regulation of digital data collection have pointed to this so-called paradox as an argument against government intervention. By phrasing privacy as a choice between involvement in (or isolation from) various social and economic communities, they frame information disclosure as a strategic decision made by informed consumers. Indeed, discussions on digital privacy have been dominated by the idea of the “empowered consumer” or “privacy pragmatist” — an autonomous individual who makes informed decisions about the disclosure of their personal information.

But there is increasing evidence that “control” is a problematic framework through which to operationalize privacy. In her Policy & Internet article “From Privacy Pragmatist to Privacy Resigned: Challenging Narratives of Rational Choice in Digital Privacy Debates,” Nora A. Draper examines how the figure of the “privacy pragmatist” developed by the prominent privacy researcher Alan Westin has been used to frame privacy within a typology of personal preference — a framework that persists in academic, regulatory, and commercial discourses in the United States. Those in the pragmatist group are wary about the safety and security of their personal information, but make supposedly rational decisions about the conditions under which they are comfortable with disclosure, logically calculating the costs and benefits associated with information exchange.

Academic critiques of this model have tended to focus on the methodological and theoretical validity of the pragmatist framework; however, in light of two recent studies that suggest individuals are resigned to the loss of privacy online, this article argues for the need to examine a possibility that has been overlooked as a consequence of this focus on Westin’s typology of privacy preferences: that people have opted out of the discussion altogether. Considering a theory of resignation alters how the problem of privacy is framed and opens the door to alternative discussions around policy solutions.

We caught up with Nora to discuss her findings:

Ed.: How easy is it even to discuss privacy (and people’s “rational choices”), when we know so little about what data is collected about us through a vast number of individually innocuous channels — or the uses to which it is put?

Nora: This is a fundamental challenge in current discussions around privacy. There are steps that we can take as individuals that protect us from particular types of intrusion, but in an environment where seemingly benign data flows are used to understand and predict our behaviours, it is easy for personal privacy protection to feel like an uphill battle. In such an environment, it is increasingly important that we consider resigned inaction to be a rational choice.

Ed.: I’m not surprised that there will be people who basically give up in exhaustion, when faced with the job of managing their privacy (I mean, who actually reads the Google terms that pop up every so often?). Is there a danger that this lack of engagement with privacy will be normalised during a time that we should actually be paying more, not less, attention to it?

Nora: This feeling of powerlessness around our ability to secure opportunities for privacy has the potential to discourage individual or collective action around privacy. Anthropologists Peter Benson and Stuart Kirsch have described the cultivation of resignation as a strategy to discourage collective action against undesirable corporate practices. Whether or not these are deliberate efforts, the consequence of creating a nearly unnavigable privacy landscape is that people may accept undesirable practices as inevitable.

Ed.: I suppose another irony is the difficulty of getting people to care about something that nevertheless relates so fundamentally and intimately to themselves. How do we get privacy to seem more interesting and important to the general public?

Nora: People experience the threats of unwanted visibility very differently. For those who are used to the comfortable feeling of public invisibility — the types of anonymity we feel even in public spaces — the likelihood of an unwanted privacy breach can feel remote. This is one of the problems of thinking about privacy purely as a personal issue. When people internalize the idea that if they have done nothing wrong, they have no reason to be concerned about their privacy, it can become easy to dismiss violations when they happen to others. We can become comfortable with a narrative that if a person’s privacy has been violated, it’s likely because they failed to use the appropriate safeguards to protect their information.

This cultivation of a set of personal responsibilities around privacy is problematic not least because it has the potential to blame victims rather than those parties responsible for the privacy incursions. I believe there is real value in building empathy around this issue. Efforts to treat privacy as a community practice and, perhaps, a social obligation may encourage us to think about privacy as a collective rather than individual value.

Ed.: We have a forthcoming article that explores the privacy views of Facebook / Google (companies and employees), essentially pointing out that while the public may regard privacy as pertaining to whether or not companies collect information in the first place, the companies frame it as an issue of “control” — they collect it, but let users subsequently “control” what others see. Is this fundamental discrepancy (data collection vs control) something you recognise in the discussion?

Nora: The discursive and practical framing of privacy as a question of control brings together issues addressed in your previous two questions. By providing individuals with tools to manage particular aspects of their information, companies are able to cultivate an illusion of control. For example, we may feel empowered to determine who in our digital network has access to a particular posted image, but little ability to determine how information related to that image — for example, its associated metadata or details on who likes, comments, or reposts it — is used.

The “control” framework further encourages us to think about privacy as an individual responsibility. For example, we may assume that unwanted visibility related to that image is the result of an individual’s failure to correctly manage their privacy settings. The reality is usually much more complicated than this assigning of individual blame allows for.

Ed.: How much of the privacy debate and policy making (in the States) is skewed by economic interests — i.e. holding that it’s necessary for the public to provide data in order to keep business competitive? And is the “Europe favours privacy, US favours industry” truism broadly true?

Nora: I don’t have a satisfactory answer to this question. There is evidence from past surveys I’ve done with colleagues that people in the United States are more alarmed by the collection and use of personal information by political parties than they are by similar corporate practices. Even that distinction, however, may be too simplistic. Political parties have an established history of using consumer information to segment and target particular audience groups for political purposes. We know that the U.S. government has required private companies to share information about consumers to assist in various surveillance efforts. Discussions about privacy in the U.S. are often framed in terms of tradeoffs with, for example, technological and economic innovation. This is, however, only one of the ways in which the value of privacy is undermined through the creation of false tradeoffs. Daniel Solove, for example, has written extensively on how efforts to frame privacy in opposition to safety encourage capitulation to transparency in the service of national security.

Ed.: There are some truly terrible US laws (e.g. the General Mining Act of 1872) that were developed for one purpose, but are now hugely exploitable. What is the situation for privacy? Is the law still largely fit for purpose, in a world of ubiquitous data collection? Or is reform necessary?

Nora: One example of such a law is the Electronic Communications Privacy Act (ECPA) of 1986. This law was written before many Americans had email accounts, but continues to influence the scope authorities have to access digital communications. One of the key issues in the ECPA is the differential protection for messages depending on when they were sent. The ECPA, which was written when emails would have been downloaded from a server onto a personal computer, treats emails stored for more than 180 days as “abandoned.” While messages received in the past 180 days cannot be accessed without a warrant, so-called abandoned messages require only a subpoena. Although there is some debate about whether subpoenas offer adequate privacy protections for messages stored on remote servers, the issue is that the time-based distinction created by the “180-day rule” makes little sense when access to cloud storage allows people to save messages indefinitely. Bipartisan efforts to introduce the Email Privacy Act, which would extend warrant protections to digital communications that are over 180 days old, have received wide support from those in the tech industry as well as from privacy advocacy groups.

Another challenge, which you alluded to in your first question, pertains to the regulation of algorithms and algorithmic decision-making. These technologies are often described as “black boxes” to reflect the difficulties in assessing how they work. While the consequences of algorithmic decision-making can be profound, the processes that lead to those decisions are often opaque. The result has been increased scholarly and regulatory attention on strategies to understand, evaluate, and regulate the processes by which algorithms make decisions about individuals.

Read the full article: Draper, N.A. (2017) From Privacy Pragmatist to Privacy Resigned: Challenging Narratives of Rational Choice in Digital Privacy Debates. Policy & Internet 9 (2). doi:10.1002/poi3.142.


Nora A. Draper was talking to blog editor David Sutcliffe.

How do we encourage greater public inclusion in Internet governance debates? https://ensr.oii.ox.ac.uk/how-do-we-encourage-greater-public-inclusion-in-internet-governance-debates/ Wed, 03 May 2017 08:00:17 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4095 The Internet is neither purely public nor private, but combines public and private networks, platforms, and interests. Given its complexity and global importance, there is clearly a public interest in how it is governed, and the role of the public in Internet governance debates is a critical issue for policymaking.

The current dominant mechanism for public inclusion is the multistakeholder approach, i.e. one that includes governments, industry and civil society in governance debates. Despite at times being used as a shorthand for public inclusion, multistakeholder governance is implemented in many different ways and has faced criticism, with some arguing that multistakeholder discussions serve as a cover for the growth of state dominance over the Web, and enable oligarchic domination of discourses that are ostensibly open and democratic.

In her Policy & Internet article “Searching for the Public in Internet Governance: Examining Infrastructures of Participation at NETmundial”, Sarah Myers West examines the role of the public in Internet governance debates, with reference to public inclusion at the 2014 Global Multistakeholder Meeting on the Future of Internet Governance (NETmundial). NETmundial emerged at a point when public legitimacy was a particular concern for the Internet governance community, so finding ways to include the rapidly growing, and increasingly diverse group of stakeholders in the governance debate was especially important for the meeting’s success.

This is particularly significant as the Internet governance community faces problems of increasing complexity and diversity of views. The growth of the Internet has made the public central to Internet governance — but introduces problems around the growing number of stakeholders speaking different languages, with different technical backgrounds, and different perspectives on the future of the Internet.

However, rather than attempting to unify behind a single institution or achieve public consensus through a single, deliberative forum, the NETmundial example suggests that the Internet community may fragment further into multiple publics, redistributing into a more networked and “agonistic” model. This doesn’t quite reflect the model of the “public sphere” Habermas may have envisioned, but it may ultimately befit the network of networks it is forged around.

We caught up with Sarah to discuss her findings:

Ed.: You say governance debates involve two levels of contestation: firstly in how we define “the Internet community”, and secondly around the actual decision-making process. How do we even start defining what “the public” means?

Sarah: This is a really difficult question, and it’s really the one that drove me throughout my research. I think that observing examples of publics ‘in the wild’ — how they are actually constituted within the Internet governance space — is one entry point. As I found in the article, there are a number of different kinds of publics that have emerged over the history of the internet, some fairly structured and centralized and others more ad hoc and decentralized. There’s also a difference between the way public inclusion is described/structured and the way things work out in practice. But better understanding what kinds of publics actually exist is only the first step to analyzing deeper questions — about the workings of power on and through the Internet.

Ed.: I know Internet governance is important but haven’t the faintest idea who represents me (as a member of “the public”) in these debates. Are my interests represented by the UK Government? Europe? NGOs? Industry? Or by self-proclaimed “public representatives”?

Sarah: All of the above — and also, maybe, none of the above. There are a number of different kinds of stakeholders representing different constituencies on the Internet — at NETmundial, this was separated into Government, Business, Civil Society, Academia and the Technical Community. In reality, there are blurred boundaries around all these categories, and each of these groups could make claims about representing the public, though which aspects of the public interest they represent is worth a closer look.

Many Internet governance fora are constituted in a way that would also allow each of us to represent ourselves: at NETmundial, there was a lot of thought put into facilitating remote participation and bringing in questions from the Internet. But there are still barriers — it’s not the same as being in the room with decision makers, and the technical language that’s developed around Internet governance certainly makes these discussions hard for newcomers to follow.

Ed.: Is there a tension here between keeping a process fairly closed (and efficient) vs making it totally open and paralysed? And also between being completely democratic vs being run by people (engineers) who actually understand how the Internet works? i.e. what is the point of including “the public” (whatever that means) at a global level, instead of simply being represented by the governments we elect at a national (or European) level?

Sarah: There definitely is a tension there, and I think this is part of the reason why we see such different models of public inclusion in different kinds of forums. For starters, I’m not sure that, at present, there’s a forum that I can think of that is fully democratic. But I think there is still a value in trying to be more democratic, and to placing the public at the centre of these discussions. As we’ve seen in the years following the Snowden revelations, the interests of state actors are not always aligned, and sometimes are completely at odds, with those of the public.

The involvement of civil society, academia and the technical community is really critical to counterbalancing these interests — but, as many civil society members remarked after NETmundial, this can be an uphill battle. Governments and corporations have an easier time in these kinds of forums identifying and advocating for a narrow set of interests and values, whereas civil society doesn’t always come in to these discussions with as clear a consensus. It can be a messy process.

Ed.: You say that “analyzing the infrastructure of public participation makes it possible to examine the functions of Internet governance processes at a deeper level.” Having done so, are you hopeful or cynical about “Internet governance” as it is currently done?

Sarah: I’m hopeful about the attentiveness to public inclusion exhibited at NETmundial — it really was a central part of the process and the organizers made a number of investments in ensuring it was as broadly accessible as possible. That said, I’m a bit critical of whether building technological infrastructure for inclusion on its own can overcome the real resource imbalances that affect who can participate in these kinds of forums. It’s probably going to require investments in both — there’s a danger that by focusing on the appearance of being democratic, these discussions can mask the underlying power discrepancies that inhibit deliberation on an even playing field.

Read the full article: West, S.M. (2017) Searching for the Public in Internet Governance: Examining Infrastructures of Participation at NETmundial. Policy & Internet 9 (2). doi:10.1002/poi3.143


Sarah Myers West was talking to blog editor David Sutcliffe.

We should look to automation to relieve the current pressures on healthcare https://ensr.oii.ox.ac.uk/we-should-look-to-automation-to-relieve-the-current-pressures-on-healthcare/ Thu, 20 Apr 2017 08:36:54 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4075
Image by TheeErin (Flickr CC BY-NC-ND 2.0), who writes: “Working on a national cancer research project. This is the usual volume of mail that comes in two-days time.”

In many sectors, automation is seen as a threat due to the potential for job losses. By contrast, automation is seen as an opportunity in healthcare, as a way to address pressures including staff shortages, increasing demand and workloads, reduced budget, skills shortages, and decreased consultation times. Automation may address these pressures in primary care, while also reconfiguring the work of staff roles and changing the patient-doctor relationship.

In the interview below, Matt Willis discusses a project, funded by The Health Foundation, which looks at opportunities and challenges to automation in NHS England general practice services. While the main goal of the project is to classify work tasks and then calculate the probability that each task will be automated, Matt is currently conducting ethnographic fieldwork in primary care sites to understand the work practices of surgery staff and clinicians.

Since the first automated pill-counting machine was introduced in 1970, the role of the pharmacist has expanded to the point where pharmacists now perform more patient consultations, consult with primary care physicians, and require greater technical skill (including a Pharm.D degree). While this provides one clear example of how a medical profession has responded to automation, the research team is now looking at how automation will reconfigure other professions in primary care, and how it will shape primary care's technical and digital infrastructures.

We caught up with Matt Willis to explore the implications of automation in primary care.

Ed.: One finding from an analysis by Frey and Osborne is that most healthcare occupations (that involve things like social intelligence, caring etc.) show a remarkably low probability for computerisation. But what sorts of things could be automated, despite that?

Matt: While providing care is the most important work that happens in primary care, there are many tasks that support that care. Many of those tasks are highly structured and repetitive: ideal things we can automate. There is an incredible amount of what I call “letter work” that occurs in primary care. It’s tasks like responding to requests for information from secondary care, handling an information request from a medical supplier, processing a trusted assessment, and so on.

There is also the work of generating the letters that are sent to other parts of the NHS — and letters are also triaged at the beginning of each day depending on the urgency of the request. Medical coding is another task that can be automated, as can medication orders and renewals. All of these tasks require someone working with paper or digital text documents and gathering information according to a set of criteria. Often surgeries are overwhelmed with paperwork, so automation is a potential way to make a dent in the way information is processed.
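As a toy illustration of how rule-bound some of this “letter work” is, and hence why it lends itself to automation, here is a minimal keyword-based triage sketch in Python. The categories and keywords are hypothetical assumptions for demonstration; they are not drawn from the project or from any NHS system.

```python
# Hypothetical sketch of rule-based triage for incoming correspondence.
# The categories and keywords are illustrative assumptions only.
URGENCY_KEYWORDS = {
    "urgent": ["urgent", "immediately", "safeguarding", "two-week wait"],
    "routine": ["discharge summary", "clinic letter", "test results"],
    "admin": ["invoice", "supplier", "insurance report", "subject access request"],
}

def triage_letter(text: str) -> str:
    """Return a coarse category for a letter based on simple keyword matching."""
    lowered = text.lower()
    for category, keywords in URGENCY_KEYWORDS.items():
        if any(keyword in lowered for keyword in keywords):
            return category
    return "needs human review"  # anything unmatched still goes to a person

if __name__ == "__main__":
    sample = "Discharge summary for patient following admission on 3 March."
    print(triage_letter(sample))  # -> "routine"
```

Real systems would use far richer criteria (and, increasingly, machine learning), but the point is that the decision rules are explicit enough to be encoded.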

Ed.: I suppose that the increasing digitisation of sensors and data capture (e.g. digital thermometers) and patient records actually helps in this: i.e. automation sounds like the obvious next step in an increasingly digital environment? But is it really as simple as that?

Matt: Well, it’s never as simple as you think it’s going to be. Data that originate in a digital format are usually easier to work with, manipulate, analyze, and make actionable. Even when information is entirely digital there can be barriers of interoperability between systems. Automation could even mean automating the transfer of data from one system to the next. There are also social and policy barriers to the use of digital data for automation. Think back to the recent care.data debacle that was supposed to centralize much of the NHS data from disparate silos.

Ed.: So will automation of these tasks be driven by government / within the NHS, or by industry / the market? i.e. is there already a market for automating aspects of healthcare?

Matt: Oh yes, I think it will be a variety of those forces you mention. There is already partial automation in many little ways all over the NHS: automation of messages and notifications, blood pressure cuffs, and other medical devices. Automation is not entirely new to healthcare. The pharmacist is an exemplar health profession to look at if we want to see how automation has changed the tasks of a profession for decades. Many of the electronic health record providers in the UK have different workflow automation features, or let clinicians develop workflow efficiency protocols that may automate things in specific ways.

Ed.: You say that one of the bottlenecks to automating healthcare is lack of detailed knowledge of the sorts of tasks that could actually be automated. Is this what you’re working on now?

Matt: Absolutely. The data from labour statistics is self-reported and many of the occupations were lumped together meaning all receptionists in different sectors are just listed under receptionist. One early finding I have that I have been thinking about is how a receptionist in the healthcare sector is different in their information work than a receptionist’s counterpart in another sector. I see this with occupations across health, that there are unique features that differentiate health occupations from similar occupations. This begs the need to tease out those details in the data.

Additionally, we need to understand the use of technologies in primary care and what tasks those technologies perform. One of the most important links I am trying to understand is that between the tasks of people and the tasks of technologies. I am working on not only understanding the opportunities and challenges of automation in primary care but also what are the precursors that exist that may support the implementation of automation.

Ed.: When I started in journals publishing I went to the post room every day to mail out hardcopy proofs to authors. Now everything I do is electronic. I’m not really aware of when the shift happened, or what I do with the time freed up (blog, I suppose..). Do you think it will be similarly difficult in healthcare to pin-point a moment when “things got automated”?

Matt: Well, with technology and the change of social practices it’s rarely something that happens overnight. You probably started to gradually send out fewer and fewer paper manuscripts over a period of time. It’s the frog sitting in a pot where the heat is slowly turned up. There is a theory that technological change comes in swarm patterns — meaning it’s not one technological change that upends everything, but the advent of numerous technologies that start to create big change.

For example, one of the many reasons that the application of automation technologies is increasing is the swarming of prior technologies like “big data” sets, advances in machine vision, machine learning, machine pattern recognition, mobile robotics, the proliferation of sensors, and further development of autonomous technologies. These kinds of things drive big advances forward.

Ed.: I don’t know if people in the publishing house I worked in lost their jobs when things like post rooms and tea trolleys got replaced by email and coffee machines — or were simply moved to different types of jobs. Do you think people will “lose their jobs“ as automation spreads through the health sector, or will it just drive a shift to people doing something else instead?

Matt: One of the justifications in the project is that in many sectors automation is seen as a threat, whereas in healthcare it is seen as an opportunity. This is in great part due to the current state of the NHS, and to the fact that the smart and appropriate application of automation technologies can be a force multiplier, particularly in primary care.

I see it not as people being put out of jobs, but as being less likely to have to work 12 hours when you should be working 8, and not having a pile of documents stacking up that you are three months behind in processing. The demand for healthcare is increasing, the population is aging, and people live longer. One of the ways to keep up with this trend is to implement automation technologies that support healthcare workers and management.

I think we are a long way from the science fiction future where a patient lies in an entirely automated medical pod that scans them and administers whatever drug, treatment, procedure, or surgery they need. A person’s tasks and the allocation of work will shift in part due to technology. But that has been happening for decades. There is also a longstanding debate about whether technology creates more jobs in the long term than it destroys. It’s likely that in healthcare we will see new occupational roles, job titles, and tasks emerge that are in part automation-related. Also, tasks like filing paperwork or writing a letter will seem barbaric when a computer can, with little time and effort, do that for you.


Matthew Willis was talking to blog editor David Sutcliffe.

Should citizens be allowed to vote on public budgets? https://ensr.oii.ox.ac.uk/should-citizens-be-allowed-to-vote-on-public-budgets/ Tue, 18 Apr 2017 09:26:01 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4089
Image: a youth occupation of Belo Horizonte to present and discuss different forms of occupation of urban space, by upsilon (Flickr CC BY-SA).

There is a general understanding that public decision-making could generate greater legitimacy for political decisions, greater trust in government action and a stronger sense of representation. One way of listening to citizens’ demands and improving their trust in politics is the creation of online communication channels whereby issues, problems, demands, and suggestions can be addressed. One example, participatory budgeting, is the process by which ordinary citizens are given the opportunity to make decisions regarding a municipal budget, by suggesting, discussing, and nominating projects that can be carried out within it. Considered to be a successful example of empowered democratic governance, participatory budgeting has spread among many cities in Brazil, and, after being recommended by the World Bank and UN-Habitat, has also been implemented in various cities worldwide.

The Policy & Internet article “Do Citizens Trust Electronic Participatory Budgeting? Public Expression in Online Forums as an Evaluation Method in Belo Horizonte” by Samuel A. R. Barros and Rafael C. Sampaio examines the feelings, emotions, narratives, and perceptions of political effectiveness and political representation shared in these forums. They discuss how online messages and feelings expressed through these channels can be used to assess public policies, as well as examining some of the consequences of ignoring them.

Recognized as one of the most successful e-democracy experiences in Brazil, Belo Horizonte’s electronic participatory budgeting platform was created in 2006 to allow citizens to deliberate and vote in online forums provided by the city hall. The initiative involved around 174,000 participants in 2006 and 124,000 in 2008. However, only 25,000 participants took part in the 2011 edition, indicating significant loss of confidence in the process. It is a useful case to assess the reasons for success and failure of e-participation initiatives.

There is some consensus in the literature on participants’ need to feel that their contributions will be taken into consideration by those who promote initiatives and, ideally, that these contributions will have effects and practical consequences in the formulation of public policies. By offering an opportunity to participate, the municipality sought to improve perceptions of the quality of representation. Nonetheless, government failure to carry out the project chosen in 2008 and lack of confidence in the voting mechanism itself may have contributed to producing the opposite effect.

Moderators didn’t facilitate conversation or answer questions or demands. No indication was given as to whether these messages were being taken into consideration or even read, and the organizers never explained how or whether the messages would be used later on. In other words, the municipality took no responsibility for reading or evaluating the messages posted there. Thus, it seems to be an online forum that offers little or no citizen empowerment.

We caught up with the authors to discuss their findings:

Ed.: You say that in 2008 62.5% of the messages expressed positive feelings, but in 2011, 59% expressed negative ones. Quite a drop! Is that all attributable to the deliberation not being run properly, i.e. to the danger of causing damage (i.e. fall in trust) by failing to deliver on promises?

Samuel + Rafael: That’s the million dollar question! The truth is: it’s hard to say. Nevertheless, our research does show some evidence of this. Most negative feelings were directly connected to this failure to deliver the previously approved work. As participatory budgeting processes are very connected to practical issues, this was probably the main reason of the drop we saw. We also indicate how the type of complaint changed significantly from one edition to another. For instance, in 2008 many people asked for small adjustments in each of the proposed works, while in 2011 they were complaining about the scope or even the relevance of the works.

Ed.: This particular example aside: is participatory budgeting generally successful? And does it tend to be genuinely participatory (and deliberative?), or more like: “do you want option A or B”?

Samuel + Rafael: That’s also a very good question. In Brazil’s case, most participatory budgeting exercises achieved good levels of participation and contributed to at least minor changes in the bureaucratic routines of public servants and officials. Thus, they can be considered successful. Of course, there are many cases of failure as well, since participatory budgeting can be hard to do properly and, as our article indicates, a single mistake can disrupt it for good.

Regarding the second question, we would say that it’s more about choosing what you want and what you can deliver as the public power. In actual fact, most participatory budgeting exercises are not as deliberative as everyone believes — they are more about bargaining and negotiation. Nevertheless, while the daily practice of participation may not be as deliberative as we may want it, it still achieves a lot of other benefits, such as keeping the community engaged and informed, letting people know more about how the budget works — and the negotiation itself may involve a certain amount of empathy and role-taking.

Ed.: Is there any evidence that crowdsourcing decisions of this sort leads to better ones? (or at least, more popular ones). Or was this mostly done just to “keep the people happy”?

Samuel + Rafael: We shouldn’t assume that every popular decision is necessarily better, or we’ll get trapped in the caveats. On the other hand, considering how our representative system was designed, how people feel powerless and how rulers are usually set apart from their constituents, we can easily support any real attempt to give the people a greater chance of stating their minds and even deciding on things. If the process is well designed, if the managers (i.e. public servants and officials) are truly open to these inputs and if the public is informed, we can hope for better decisions.

Ed.: Is there any conflict here between “what is popular” and “what is right”? i.e. how much should be opened up to mass voting (with the inevitable skews and take-over by vested interests).

Samuel + Rafael: This is the “dark side” of participation that we mentioned before. We should not automatically consider participation to be good in and of itself. It can be misinformed, biased, and lead to worse and not better decisions. Particularly, when people are not informed enough, when the topic was not discussed enough in the public sphere, we might end up with bad popular decisions. For instance, would Brexit have occurred with a different method of voting?

Let’s imagine several months of small scale discussions between citizens (i.e. minipublics) both face-to-face and in online deliberation spaces. Maybe these groups would reach the same decision, but at least all participants would feel more confident in their decisions, because they had enough information and were confronted with different points of view and arguments before voting. Thus, we believe that mass voting can be used for big decisions, but that there is a need for greater conversation and consensus before it.

Ed.: Is there any link between participatory budgeting and public scrutiny of public budgets (which can be tremendously corrupt, e.g. when it comes to building projects) — or does participatory budgeting tend to be viewed as something very different to oversight?

Samuel + Rafael: This is actually one of the benefits of participatory budgeting that is not correlated to participation alone. It makes corruption and bribery harder to do. As there are more people discussing and monitoring the budget, the process itself needs to be more transparent and accountable. There are some studies that find a correlation between participatory budgeting and tax payment. The problem is that participatory budgeting tends to concern only a small amount of the budget, thus this public control does not reach the whole process. Still, it proves how public participation may lead to a series of benefits both for the public agents and the public itself.

Read the full article: Barros, S.A.R. and Sampaio, R.C. (2016) Do Citizens Trust Electronic Participatory Budgeting? Public Expression in Online Forums as an Evaluation Method in Belo Horizonte. Policy & Internet 8 (3). doi:10.1002/poi3.125


Samuel A. R. Barros and Rafael C. Sampaio were talking to blog editor David Sutcliffe.

Governments Want Citizens to Transact Online: And This Is How to Nudge Them There https://ensr.oii.ox.ac.uk/governments-want-citizens-to-transact-online-and-this-is-how-to-nudge-them-there/ Mon, 10 Apr 2017 10:08:33 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4078
A randomized control trial that “nudged” users of a disability parking scheme to renew online showed a six percentage point increase in online renewals. Image: Wendell (Flickr).

In an era when most transactions occur online, it’s natural for public authorities to want the vast bulk of their contacts with citizens to occur through the Internet. But they also face a minority for whom paper and face-to-face interactions are still preferred or needed — leading to fears that efforts to move services online “by default” might reinforce or even encourage exclusion. Notwithstanding these fears, it might be possible to “nudge” citizens from long-held habits by making online submission advantageous and other routes of use more difficult.

Behavioural public policy has been strongly advocated in recent years as a low-cost means to shift citizen behaviour, and has been used to reform many standard administrative processes in government. But how can we design non-obtrusive nudges to make users shift channels without them losing access to services? In their new Policy & Internet article “Nudges That Promote Channel Shift: A Randomized Evaluation of Messages to Encourage Citizens to Renew Benefits Online” Peter John and Toby Blume design and report a randomized control trial that encouraged users of a disability parking scheme to renew online.

They found that simplifying messages and adding incentives (i.e. signalling the collective benefit of moving online) encouraged users to switch from paper to online channels, increasing online renewals by about six percentage points. As a result of the intervention and ongoing efforts by the Council, virtually all the parking scheme users now renew online.
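For readers curious what evaluating such a trial looks like in practice, here is a minimal sketch of a two-proportion test in Python. The counts are made up purely to reproduce a roughly six-point difference; they are not the trial's actual data.

```python
# Illustrative two-proportion z-test for a channel-shift RCT.
# The counts below are invented for demonstration, not the study's data.
from statsmodels.stats.proportion import proportions_ztest

online_renewals = [330, 270]   # nudged (treatment) group, control group
group_sizes = [1000, 1000]

z_stat, p_value = proportions_ztest(count=online_renewals, nobs=group_sizes)
effect = online_renewals[0] / group_sizes[0] - online_renewals[1] / group_sizes[1]

print(f"Difference in online renewal rate: {effect:.1%}")  # roughly 6 percentage points
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```

With counts like these the difference would be statistically significant; the published evaluation reports its own estimates and uncertainty, which this sketch does not reproduce.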

The finding that it’s possible to appeal to citizens’ willingness to act for collective benefit is encouraging. The results also support the more general literature that shows that citizens’ use of online services is based on trust and confidence with public services and that interventions should go with the grain of citizen preferences and norms.

We caught up with Peter John to discuss his findings, and the role of behavioural public policy in government:

Ed.: Is it fair to say that the real innovation of behavioural public policy isn’t so much Government trying to nudge us into doing things in subtle, unremarked ways, but actually using experimental techniques to provide some empirical backing for the success of any interventions? i.e. like industry has done for years with things like A/B testing?

Peter: There is some truth in this, but the late 2000s was a time when policy-makers got more interested in the findings of the behavioural sciences and in redesigning initiatives to incorporate behavioural insights. Randomised controlled trials (RCTs) have become more commonly used across governments to test the impact of public policies — these are better than A/B testing as randomisation protocols are followed and clear reports are made of the methods used. A/B testing can be dodgy — or at least, it is done in secret and we don’t know how good the methods used are. There is much better reporting of government RCTs.

Ed.: The UK Government’s much-discussed “Nudge Unit” was part-privatised a few years ago, and the Government now has to pay for its services: was this a signal of the tremendous commercial value of behavioural economics (and all the money to be made if you sell off bits of the Civil Service), or Government not really knowing what to do with it?

Peter: I think the language of privatisation is not quite right to describe what happened. The unit was spun out of government, but government still owns a share, with Nesta owning the other large portion. It is not a profit-making enterprise, but a non-profit one — the freedom allows it to access funds from foundations and other funders. The Behavioural Insights Team is still very much a public service organisation, even if it has got much bigger since moving out of direct control by Government. Where there are public funds involved there is scrutiny through ministers and other funders — it matters that people know about the nudges, and can hold policy-makers to account.

Ed.: You say that “interventions should go with the grain of citizen preferences and norms” to be successful. Which I suppose is a sort-of built-in ethical safeguard. But do you know of any behavioural pushes that make us go against our norms, or that might raise genuine ethical concerns, or calls for oversight?

Peter: I think some of the shaming experiments done on voter turnout are on the margins of what is ethically acceptable. I agree that the natural pragmatism and caution of public agencies helps them agree relatively low key interventions.

Ed.: Finally — having spent some time studying and thinking about Government nudges .. have you ever noticed or suspected that you might have been subjected to one, as a normal citizen? I mean: how ubiquitous are they in our public environment?

Peter: Indeed — a lot of our tax letters are part of an experiment. But it’s hard to tell of course, as making nudges non-obtrusive is one of the key things. It shouldn’t be a problem that I am part of an experiment of the kind I might commission.

Read the full article: John, P. and Blume, T. (2017) Nudges That Promote Channel Shift: A Randomized Evaluation of Messages to Encourage Citizens to Renew Benefits Online. Policy & Internet. DOI: 10.1002/poi3.148.


Peter John was talking to blog editor David Sutcliffe.

Did you consider Twitter’s (lack of) representativeness before doing that predictive study? https://ensr.oii.ox.ac.uk/did-you-consider-twitters-lack-of-representativeness-before-doing-that-predictive-study/ Mon, 10 Apr 2017 06:12:36 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4062 Twitter data have many qualities that appeal to researchers. They are extraordinarily easy to collect. They are available in very large quantities. And with a simple 140-character text limit they are easy to analyze. As a result of these attractive qualities, over 1,400 papers have been published using Twitter data, including many attempts to predict disease outbreaks, election results, film box office gross, and stock market movements solely from the content of tweets.

Easy availability of Twitter data links nicely to a key goal of computational social science: if researchers can find ways to impute user characteristics from social media, then its capabilities would be greatly extended. However, few papers consider the digital divide among Twitter users. Yet the question of who uses Twitter has major implications for research attempts to use the content of tweets for inference about population behaviour. Do Twitter users share identical characteristics with the population of interest? For what populations are Twitter data actually appropriate?

A new article by Grant Blank published in Social Science Computer Review provides a multivariate empirical analysis of the digital divide among Twitter users, comparing Twitter users and nonusers with respect to their characteristic patterns of Internet activity and to certain key attitudes. It thereby fills a gap in our knowledge about an important social media platform, and it joins a surprisingly small number of studies that describe the population that uses social media.

Comparing British (OxIS survey) and US (Pew) data, Grant finds that generally, British Twitter users are younger, wealthier, and better educated than other Internet users, who in turn are younger, wealthier, and better educated than the offline British population. American Twitter users are also younger and wealthier than the rest of the population, but they are not better educated. Twitter users are disproportionately members of elites in both countries. Twitter users also differ from other groups in their online activities and their attitudes.

Under these circumstances, any collection of tweets will be biased, and inferences based on analysis of such tweets will not match the population characteristics. A biased sample can’t be corrected by collecting more data; and these biases have important implications for research based on Twitter data, suggesting that Twitter data are not suitable for research where representativeness is important, such as forecasting elections or gaining insight into attitudes, sentiments, or activities of large populations.
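The point that more data cannot cure a biased sample can be shown with a few lines of simulation. The age figures below are invented assumptions chosen only to illustrate the mechanism, not estimates from OxIS, Pew, or Twitter.

```python
# Toy simulation: estimating mean age from a platform whose users skew young.
# All numbers are invented to illustrate the mechanism, not real survey figures.
import numpy as np

rng = np.random.default_rng(42)
population_mean_age = 47  # assumed mean age of the full adult population
user_mean_age = 33        # assumed mean age of platform users (skewed younger)

for n in [1_000, 100_000, 10_000_000]:
    sample = rng.normal(loc=user_mean_age, scale=12, size=n)
    estimate = sample.mean()
    bias = estimate - population_mean_age
    print(f"n = {n:>10,}: estimated mean age = {estimate:5.1f}, bias = {bias:+.1f}")

# Sampling error shrinks as n grows, but the bias of roughly -14 years does not.
```

Larger samples only make the biased estimate more precise, not more accurate, which is why representativeness matters more than volume for population inference.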


We caught up with Grant to explore the implications of the findings:

Ed.: Despite your cautions about lack of representativeness, you mention that the bias in Twitter could actually make it useful to study (for example) elite behaviours: for example in political communication?

Grant: Yes. If you want to study elites and channels of elite influence then Twitter is a good candidate. Twitter data could be used as one channel of elite influence, along with other online channels like social media or blog posts, and offline channels like mass media or lobbying. There is an ecology of media and Twitter is one part.

Ed.: You also mention that Twitter is actually quite successful at forecasting certain offline, commercial behaviours (e.g. box office receipts).

Grant: Right. Some commercial products are disproportionately used by wealthier or younger people. That certainly would include certain forms of mass entertainment like cinema. It also probably includes a number of digital products like smartphones, especially more expensive phones, and wearable devices like a Fitbit. If a product is disproportionately bought by the same population groups that use Twitter then it may be possible to forecast sales using Twitter data. Conversely, products disproportionately used by poorer or older people are unlikely to be predictable using Twitter.

Ed.: Is there a general trend towards abandoning expensive, time-consuming, multi-year surveys and polling? And do you see any long-term danger in that? i.e. governments and media (and academics?) thinking “Oh, we can just get it off social media now”.

Grant: Yes and no. There are certainly people who are thinking about it and trying to make it work. The ease and low cost of social media is very seductive. However, that has to be balanced against major weaknesses. First the population using Twitter (and other social media) is unclear, but it is not a random sample. It is just a population of Twitter users, which is not a population of interest to many.

Second, tweets are even less representative. As I point out in the article, over 40% of people with a Twitter account have never sent a tweet, and the top 15% of users account for 85% of tweets. So tweets are even less representative of any real-world population than Twitter users. What these issues mean is that you can’t calculate measures of error or confidence intervals from Twitter data. This is crippling for many academic and government uses.

Third, Twitter’s limited message length and simple interface tend to give it advantages on devices with restricted input capability, like phones. It is well-suited for short, rapid messages. These characteristics tend to encourage Twitter use for political demonstrations, disasters, sports events, and other live events where reports from an on-the-spot observer are valuable. This suggests that Twitter usage is not like other social media or like email or blogs.

Fourth, researchers attempting to extract the meaning of words have 140 characters to analyze and they are littered with abbreviations, slang, non-standard English, misspellings and links to other documents. The measurement issues are immense. Measurement is hard enough in surveys when researchers have control over question wording and can do cognitive interviews to understand how people interpret words.

With Twitter (and other social media) researchers have no control over the process that generated the data, and no theory of the data generating process. Unlike surveys, social media analysis is not a general-purpose tool for research. Except in limited areas where these issues are less important, social media is not a promising tool.

Ed.: How would you respond to claims that for example Facebook actually had more accurate political polling than anyone else in the recent US Election? (just that no-one had access to its data, and Facebook didn’t say anything)?

Grant: That is an interesting possibility. The problem is matching Facebook data with other data, like voting records. Facebook doesn’t know where people live. Finding their location would not be an easy problem. It is simpler because Facebook would not need an actual address; it would only need to locate the correct voting district or the state (for the Electoral College in US Presidential elections). Still, there would be error of unknown magnitude, probably impossible to calculate. It would be a very interesting research project. Whether it would be more accurate than a poll is hard to say.

Ed.: Do you think social media (or maybe search data) scraping and analysis will ever successfully replace surveys?

Grant: Surveys are such versatile, general purpose tools. They can be used to elicit many kinds information on all kinds of subjects from almost any population. These are not characteristics of social media. There is no real danger that surveys will be replaced in general.

However, I can see certain specific areas where analysis of social media will be useful. Most of these are commercial areas, like consumer sentiments. If you want to know what people are saying about your product, then going to social media is a good, cheap source of information. This is especially true if you sell a mass market product that many people use and talk about; think: films, cars, fast food, breakfast cereal, etc.

These are important topics to some people, but they are a subset of things that surveys are used for. Too many things are not talked about, and some are very important. For example, there is the famous British reluctance to talk about money. Things like income, pensions, and real estate or financial assets are not likely to be common topics. If you are a government department or a researcher interested in poverty, the effect of government assistance, or the distribution of income and wealth, you have to depend on a survey.

There are a lot of other situations where surveys are indispensable. For example, if the OII wanted to know what kind of jobs OII alumni had found, it would probably have to survey them.

Ed.: Finally .. 1400 Twitter articles in .. do we actually know enough now to say anything particularly useful or concrete about it? Are we creeping towards a Twitter revelation or consensus, or is it basically 1400 articles saying “it’s all very complicated”?

Grant: Mostly researchers have accepted Twitter data at face value. Whatever people write in a tweet, it means whatever the researcher thinks it means. This is very easy and it avoids a whole collection of complex issues. All the hard work of understanding how meaning is constructed in Twitter and how it can be measured is yet to be done. We are a long way from understanding Twitter.

Read the full article: Blank, G. (2016) The Digital Divide Among Twitter Users and Its Implications for Social Research. Social Science Computer Review. DOI: 10.1177/0894439316671698


Grant Blank was talking to blog editor David Sutcliffe.

Exploring the world of self-tracking: who wants our data and why? https://ensr.oii.ox.ac.uk/exploring-the-world-of-self-tracking-who-wants-our-data-and-why/ Fri, 07 Apr 2017 07:14:28 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4052 Benjamin Franklin used to keep charts of his time spent and virtues lived up to. Today, we use technology to self-track: our hours slept, steps taken, calories consumed, medications administered. But what happens when we turn our everyday experience — in particular, health and wellness-related experience — into data?

Self-Tracking” (MIT Press) by Gina Neff and Dawn Nafus examines how people record, analyze, and reflect on this data — looking at the tools they use and the communities they become part of, and offering an introduction to the essential ideas and key challenges of using these technologies. In considering self-tracking as a social and cultural phenomenon, they describe not only the use of data as a kind of mirror of the self but also how this enables people to connect to, and learn from, others.

They also consider what’s at stake: who wants our data and why, the practices of serious self-tracking enthusiasts, the design of commercial self-tracking technology, and how people are turning to self-tracking to fill gaps in the healthcare system. None of us can lead an entirely untracked life today, but in their book, Gina and Dawn show us how to use our data in a way that empowers and educates us.

We caught up with Gina to explore the self-tracking movement:

Ed.: Over one hundred million wearable sensors were shipped last year to help us gather data about our lives. Is the trend and market for personal health-monitoring devices ever-increasing, or are we seeing saturation of the device market and the things people might conceivably want to (pay to) monitor about themselves?

Gina: By focusing on direct-to-consumer wearables and mobile apps for health and wellness in the US, we see a lot of tech developed with very little focus on impact or efficacy. I think to some extent we’ve hit the trough in the ‘hype’ cycle, where the initial excitement over digital self-tracking is giving way to the hard and serious work of figuring out how to make things that improve people’s lives. Recent clinical trial data show that activity trackers, for example, don’t help people to lose weight. What we try to do in the book is to help people figure out what self-tracking can do for them, and advocate for people being able to access and control their own data to help them ask — and answer — the questions that they have.

Ed.: A question I was too shy to ask the first time I saw you speak at the OII — how do you put the narrative back into the data? That is, how do you make stories that might mean something to a person, out of the vast piles of strangely meaningful-meaningless numbers that their devices accumulate about them?

Gina: We really emphasise community. It might sound clichéd but it truly helps. When I read some scholars’ critiques of the Quantified Self meetups that happen around the world I wonder if we have actually been to the same meetings. Instead of some kind of technophilia there are people really working to make sense of information about their lives. There’s a lot of love for tech, but there are also people trying to figure out what their numbers mean, whether they are normal, and how to design their own ‘n of 1’ trials to figure out how to make themselves better, healthier, and happier. Putting narrative back into data really involves sharing results with others and making sense together.

Ed.: There’s already been a lot of fuss about monetisation of NHS health records: I imagine the world of personal health / wellness data is a vast Wild West of opportunity for some (i.e. companies) and potential exploitation of others (i.e. the monitored), with little law or enforcement? For a start .. is this health data or social data? And are these equivalent forms of data, or are they afforded different protections?

Gina: In an opinion piece in Wired UK last summer I asked what happens to data ownership when your smartphone is your doctor. Right now we afford different privacy protection to health-related data than to other forms of personal data. But very soon trace data may be useful for clinical diagnoses. There are already programmes in place that use trace data for early detection of mood disorders, and research is underway on using mobile data for the diagnosis of movement disorders. Who will have control of and access to these potential early alert systems for our health information? Will it be legally protected to the same extent as the information in our medical records? These are questions that society needs to settle.

Ed.: I like the central irony of “mindfulness” (a meditation technique involving a deep awareness of your own body), i.e. that these devices reveal more about certain aspects of the state of your body than you would know yourself: but you have to focus on something outside of yourself (i.e. a device) to gain that knowledge. Do these monitoring devices support or defeat “mindfulness”?

Gina: I’m of two minds, no pun intended. Many of the Quantified Self experiments we discuss in the book involved people playing with their data in intentional ways and that level of reflection in turn influences how people connect the data about themselves to the changes they want to make in their behaviour. In other words, the act of self-tracking itself may help people to make changes. Some scholars have written about the ‘outsourcing’ of the self, while others have argued that we can develop ‘exosenses’ outside our bodies to extend our experience of the world, bringing us more haptic awareness. Personally, I do see the irony in smartphone apps intended to help us reconnect with ourselves.

Ed.: We are apparently willing to give up a huge amount of privacy (and monetizable data) for convenience, novelty, and to interact with seductive technologies. Is the main driving force of the wearable health-tech industry the actual devices themselves, or the data they collect? i.e. are these self-tracking companies primarily device/hardware companies or software/data companies?

Gina: Sadly, I think it is neither. The drop-off in engagement with wearables and apps is steep, with the majority falling into disuse after six months. Right now one of the primary concerns I have as an Internet scholar is the apparent lack of empathy companies seem to have for their customers in this space. People operate under the assumption that the data generated by the devices they purchase is ‘theirs’, yet companies too often operate as if they are the sole owners of that data.

Anthropologist Bill Maurer has proposed replacing data ownership with a notion of data ‘kinship’ – that both technology companies and their customers have rights and responsibilities to the data that they produce together. Until we have better social contracts and legal frameworks for people to have control and access to their own data in ways that allow them to extract it, query it, and combine it with other kinds of data, then that problem of engagement will continue and activity trackers will sit unused on bedside tables or uncharged in the back of drawers. The ability to help people ask the next question or design the next self-tracking experiment is where most wearables fail today.

Ed.: And is this data at all clinically useful / interoperable with healthcare and insurance systems? i.e. do the companies producing self-monitoring devices work to particular data and medical standards? And is there any auditing and certification of these devices, and the data they collect?

Gina: This idea that the data is just one interoperable system away from usefulness is seductive but so, so wrong. I was recently at a panel of health innovators, the title of which was ‘No more Apps’. The argument was that we’re not going to get to meaningful change in healthcare simply by adding a new data stream. Doctors in our study said things like ‘I don’t need more data; I need more resources.’ Right now individuals have few protections to ensure that this data won’t harm their rights to insurance or be used to discriminate against them, and yet there are few results showing that commercially available wearable devices deliver clinical value. There’s still a lot of work needed before this can happen.

Ed.: Lastly — just as we share our music on iTunes; could you see a scenario where we start to share our self-status with other device wearers? Maybe to increase our sociability and empathy by being able to send auto-congratulations to people who’ve walked a lot that day, or to show concern to people with elevated heart rates / skin conductivity (etc.)? Given the logical next step to accumulating things is to share them..

Gina: We can see that future scenario now in groups like Patients Like Me, Cure Together, and Quantified Self meetups. What these ‘edge’ use cases teach us for more everyday self-tracking uses is that real support and community can form around people sharing their data with others. These are projects that start from individuals with information about themselves and work to build toward collective, social knowledge. Other types of ‘citizen science’ projects are underway, like the Personal Genome Project, where people can donate their health data for science. The Stanford-led MyHeart Counts study on iPhone and Apple Watch recruited 6,000 people in its first two weeks and now has over 40,000 US participants. Those are numbers for clinical studies that we’ve just never seen before.

My co-author led the development of an interesting tool, Data Sense, that lets people without stats training visualize the relationships among variables in their own data or easily combine their data with data from other people. When people can do that they can begin asking the questions that matter for them and for their communities. What we know won’t work in the future of self-tracking data, though, are the lightweight online communities that technology brands just throw together. I’m just not going to be motivated by a random message from LovesToWalk1949, but under the right conditions I might be motivated by my mom, my best friend or my social network. There is still a lot of hard work that has to be done to get the design of self-tracking tools, practices, and communities for social support right.
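As a rough illustration of the kind of exploration Gina describes (looking for relationships among the variables in one’s own tracking data), here is a minimal sketch in Python. It is not the Data Sense tool itself, and the variable names and numbers are invented:

```python
# Illustrative only: a toy week of self-tracked data and a simple look at how
# the variables relate to one another. Not the Data Sense tool; all values invented.
import pandas as pd

log = pd.DataFrame({
    "steps":       [4200, 9100, 7600, 3100, 11000, 8200, 5400],
    "hours_slept": [6.0, 7.5, 7.0, 5.5, 8.0, 7.2, 6.3],
    "mood_1_to_5": [2, 4, 4, 2, 5, 4, 3],
})

# A correlation matrix is one simple way to surface relationships worth a
# closer look (it says nothing about causation).
print(log.corr().round(2))
```

Tools aimed at non-specialists essentially wrap this kind of step in a friendlier interface; as Gina notes, the harder part is helping people ask the next question of their own data.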


Gina Neff was talking to blog editor David Sutcliffe about her book (with Dawn Nafus) “Self-Tracking” (MIT Press).

Why we shouldn’t believe the hype about the Internet “creating” development https://ensr.oii.ox.ac.uk/why-we-shouldnt-believe-the-hype-about-the-internet-creating-development/ Thu, 30 Mar 2017 06:29:28 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4025 Vast sums of money have been invested in projects to connect the world’s remaining four billion people, with these ambitious schemes often presenting digital connectivity as a means to achieve a range of social and economic developmental goals. This is especially the case for Africa, where Internet penetration rates remain relatively low, while the need for effective development strategies continues to be pressing.

Development has always grappled with why some people and places have more than others, but much of that conversation is lost within contemporary discourses of ICTs and development. As states and organisations rush to develop policies and plans, build drones and balloons, and lay fibre-optic cables, much is said about the power of ICTs to positively transform the world’s most underprivileged people and places.

Despite the vigour of such claims, there is actually a lack of academic consensus about the impacts of digital connectivity on economic development. In their new article, Nicolas Friederici, Sanna Ojanperä and Mark Graham review claims made by African governments and large international institutions about the impacts of connectivity, showing that the evidence base to support them is thin.

It is indeed possible that contemporary grand visions of connectivity are truly reflective of a promising future, but it is equally possible that many of them are hugely overblown. The current evidence base is mixed and inconclusive. More worryingly, visions of rapid ICT-driven development might not only fail to achieve their goals — they could actively undermine development efforts in a world of scarce resources. We should therefore refuse to believe it is self-evident that ICTs will automatically bring about development, and should do more to ask the organisations and entities who produce these grand visions to justify their claims.

Read the full article: Friederici, N., Ojanperä, S., and Graham, M. (2017) The Impact of Connectivity in Africa: Grand Visions and the Mirage of Inclusive Digital Development. Electronic Journal of Information Systems in Developing Countries, 79(2), 1–20.

We caught up with the authors to discuss their findings.

Ed.: Who is paying for these IT-development projects: are they business and profit-led, or donor led: and do the donors (and businesses) attach strings?

Nicolas: Funding has become ever more mixed. Foundational infrastructure like fibre-optic cables has usually been put in place through public-private partnerships, where private companies lay out the network while loans, subsidies, and policy support are provided by national governments and organizations like the World Bank. Development agencies have mostly funded more targeted connectivity projects, like health or agricultural information platforms.

Recently, philanthropic foundations and tech corporations have increased their footprint: for instance, the Rockefeller Foundation’s Digital Jobs project or Facebook’s OpenCellular base stations. So we are seeing an increasingly complex web of financial channels. What discourse does is pave the way for funding to flow into such projects.

The problem is that, while private companies may stop investing when they don’t see returns, governments and development funders might continue to pour resources into an agenda as long as it suits their ideals or desirable and widely accepted narratives. Of course, these resources are scarce; so, at the minimum, we need to allow scrutiny and look for alternative ways in which development funding could be used to maximum effect.

Ed.: Simple, aspirational messages are obviously how politicians get people excited about things (and to pay for them). What is the alternative?

Nicolas: We’re not saying that the rhetoric of politicians is the problem here. We’re saying that many of the actors who are calling the shots in development are stubbornly evading valid concerns that academics and some practitioners have brought forward. The documents that we analyze in the article — and these are very influential sources — pretend that it is an unquestionable fact that there is a causal, direct and widespread positive impact of the Internet and ICTs on all facets of development, anywhere. Not only is this assertion simplistic; it is also problematic, and maybe even dangerous, to think about a complex and important topic like (human, social) development in this way.

The alternative is a more open and plural conversation where we openly admit that resources spent on one thing can’t be spent on another, and where we enable different and critical opinions to enter the fray. This is especially important when a nation’s public is disempowered or misinformed, or when regulators are weak. For example, in most countries in Europe, advocacy groups and strong telecoms regulators provide a counterforce to the interests of technology corporations. Such institutions are often absent in the Global South, so the onus is on development organizations to regulate themselves, either by engaging with people “on the ground” or with academics. For instance, the recent World Development Report by the World Bank did this, which led the report to, we think, much more reliable and balanced conclusions compared to the Bank’s earlier outputs.

Ed.: You say these visions are “modernist” and “techno-determinist” — why is that? Is it a quirk of the current development landscape, or does development policy naturally tend to attract fixers (rather than doubters and worriers..). And how do we get more doubt into policy?

Nicolas: Absolutely, development organizations are all about fixing development problems, and we do not take issue with that. However, these organizations also need to understand that “fixing development” is not like fixing a machine (that is, a device that functions according to mechanical principles). It’s not like one could input “technology” or “the Internet,” and get “development” as an output.

In a nutshell, that’s what we mean when we say that visions are modernist and techno-determinist: many development organizations, governments, and corporations make the implicit assumption that technological progress is fixing development, that this is an apolitical and unstoppable process, and that this is working out in the same way everywhere on earth. This assumption glosses over contestation, political choices and trade-offs, and the cultural, economic, and social diversity of contexts.

Ed.: Presumably if things are very market-led, the market will decide if the internet “solves” everything: i.e. either it will, or it won’t. Has there been enough time yet to verify the outcomes of these projects (e.g. how has the one-laptop initiative worked out)?

Nicolas: I’m not sure I agree with the implication that markets can decide if the Internet solves everything. It’s us humans who are deciding, making choices, prioritizing, allocating resources, setting policies, etc. As humans, we might decide that we want a market (that is, supply and demand matched by a price mechanism) to regulate some array of transactions. This is exactly what is happening, for instance, with the spread of mobile money in Kenya or the worldwide rise of smartphones: people feel they benefit from using a product and are willing to pay money to a supplier.

The issue with technology and development is (a) that in many cases, markets are not the mechanism that achieves the best development outcomes (think about education or healthcare), (b) that even the freest of markets needs to be enabled by things like political stability, infrastructure, and basic institutions (think about contract law and property rights), and (c) that many markets need regulatory intervention or power-balancing institutions to prevent one side of the exchange from dominating and exploiting the other (think about workers’ rights).

In each case, it is thus a matter of evaluating what mixture of technology, markets, and protections works best to achieve the best development outcomes, keeping in mind that development is multi-dimensional and goes far beyond economic growth. These evaluations and discussions are challenging, and it takes time to determine what works, where, and when, but ultimately we’re improving our knowledge and our practice if we keep the conversation open, critical, and diverse.

Ed.: Is there a consensus on ICT and development, or are there basically lots of camps, ranging from extreme optimists to extreme pessimists? I get the impression that basically “it’s complicated” — is that fair? And how much discussion or recognition (beyond yourselves) is there about the gap between these statements and reality?

Nicolas: ICT and development has seen a lot of soul-searching, and scholars and practitioners have spent over 20 years debating the field’s nature and purpose. There is certainly no consensus on what ICTD should do, or how ICTs effect/affect development, and maybe that is an unrealistic — and undesirable — goal. There are certainly optimistic and pessimistic voices, like you mention, but there is also a lot of wisdom that is not widely acknowledged, or not in the public domain at all. There are thousands of practitioners from the Global North and South who have been in the trenches, applied their critical and curious minds, and seen what makes an impact and what is a pipe dream.

So we’re far from the only ones who are aware that much of the ICTD rhetoric is out of touch with realities, and we’re also not the first ones to identify this problem. What we tried to point out in our article is that the currently most powerful, influential, and listened to sources tend to be the ones that are overly optimistic and overly simplistic, ignoring all the wisdom and nuance created through hard scholarly and practical work. These actors seem to be detached from the messy realities of ICTD.

This carries a risk, because it is these organizations (governments, global consultancies, multilateral development organizations, and international tech corporations) that are setting the agenda, distributing the funds, making the hiring decisions, etc. in development practice.

Read the full article: Friederici, N., Ojanperä, S., and Graham, M. (2017) The Impact of Connectivity in Africa: Grand Visions and the Mirage of Inclusive Digital Development. Electronic Journal of Information Systems in Developing Countries, 79(2), 1–20.


Nicolas Friederici was talking to blog editor David Sutcliffe.

Internet Filtering: And Why It Doesn’t Really Help Protect Teens https://ensr.oii.ox.ac.uk/internet-filtering-and-why-it-doesnt-really-help-protect-teens/ Wed, 29 Mar 2017 08:25:06 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4035 Young British teens (aged 12 to 15) spend nearly 19 hours a week online, raising concerns for parents, educators, and politicians about the possible negative experiences they may have online. Schools and libraries have long used Internet-filtering technologies as a means of mitigating adolescents’ negative experiences online, and major ISPs in Britain now filter new household connections by default.

However, a new article by Andrew Przybylski and Victoria Nash, “Internet Filtering Technology and Aversive Online Experiences in Adolescents”, published in the Journal of Pediatrics, finds equivocal to strong evidence that household-level Internet filtering does not reduce the chance of adolescents having recent aversive online experiences. The authors analysed data from 1030 in-home interviews conducted with early adolescents as part of Ofcom’s Children and Parents Media Use and Attitudes Report.

The Internet is now a central fixture of modern life, and the positives and negatives of online Internet use need to be balanced by caregivers. Internet filters have been adopted as a tool for limiting the negatives; however, evidence of their effectiveness is dubious. They are expensive to develop and maintain, and also carry significant informational costs: even sophisticated filters over-block, which is onerous for those seeking information about sexual health, relationships, or identity, and might have a disproportionate effect on vulnerable groups. Striking the right balance between protecting adolescents and respecting their rights to freedom of expression and information presents a formidable challenge.

In conducting their study to address this uncertainty, the authors found convincing evidence that Internet filters were not effective at shielding early adolescents from aversive experiences online. Given this finding, they propose that evidence derived from randomized controlled trials and registered research designs is needed to determine how far Internet-filtering technology supports or thwarts young people online. Only then will parents and policymakers be able to make an informed decision as to whether their widespread use justifies their costs.

We caught up with Andy and Vicki to discuss the implications of their study:

Ed.: Just this morning when working from home I tried to look up an article’s author and was blocked, Virgin Media presumably having decided he might be harmful to me. Where does this recent enthusiasm for default-filtering come from? Is it just that it’s a quick, uncomplicated (technological) fix, which I guess is what politicians / policy-people like?

Vicki: In many ways this is just a typical response to the sorts of moral panic which have long arisen around the possible risks of new technologies. We saw the same concerns arise with television in the 1960s, for example, and in that case the UK’s policy response was to introduce a ‘watershed’, a daily time after which content aimed at adults could be shown. I suppose I see filtering as fulfilling the same sort of policy gap, namely recognising that certain types of content can be legally available but should not be served up ‘in front of the children’.

Andy: My reading of the psychological and developmental literature suggests that filters provide a way of creating a safe walled space in schools, libraries, and homes for young people to use the internet. This of course does not mean that reading our article will be harmful!

Ed.: I suppose that children desperate to explore won’t be stopped by a filter; those who aren’t curious probably wouldn’t encounter much anyway — what is the profile of the “child” and the “harm-filtering” scenario envisaged by policy-makers? And is Internet filtering basically just aiming at the (easy) middle of the bell-curve?

Vicki: This is a really important point. Sociologists recognised many years ago that the whole concept of childhood is socially constructed, but we often forget about this when it comes to making policy. There’s a tendency for politicians, for example, either to describe children as inherently innocent and vulnerable, or to frame them as expert ‘digital natives’, yet there’s plenty of academic research which demonstrates the extent to which children’s experiences of the Internet vary by age, education, income and skill level.

This matters because it suggests a ‘one-size-fits-all’ approach may fail. In the context of this paper, we specifically wanted to check whether children with the technical know-how to get around filters experienced more negative experiences online than those who were less tech-savvy. This is often assumed to be true, but interestingly, our analysis suggests this factor makes very little difference.

Ed.: In all these discussions and policy decisions: is there a tacit assumption that these children are all growing up in a healthy, supportive (“normal”) environment — or is there a recognition that many children will be growing up in attention-poor (perhaps abusive) environments and that maybe one blanket technical “solution” won’t fit everyone? Is there also an irony that the best protected children will already be protected, and the least protected, probably won’t be?

Andy: Yes, this is an ironic and somewhat tragic dynamic. Unfortunately, because the evidence on filtering effectiveness is at such an early stage, it’s not possible to know which young people (if any) are more or less helped by filters. We need to know how effective filters are in general before moving on to identify the young people for whom they are more or less helpful. We would also need to be able to explicitly define what would constitute an ‘attention-poor’ environment.

Vicki: From my perspective, this does always serve as a useful reminder that there’s a good reason why policy-makers turn to universalistic interventions, namely that this is likely to be the only way of making a difference for the hardest-to-reach children whose carers might never act voluntarily. But admirable motives are no replacement for efficacy, so as Andy notes, it would make more sense to find evidence, first, that household internet filtering is effective and, second, that it can be effective for this vulnerable group, before imposing default-on filters on all.

Ed.: With all this talk of potential “harm” to children posed by the Internet .. is there any sense of how much (specific) harm we’re talking about? And conversely .. any sense of the potential harms of over-blocking?

Vicki: No, you are right to see that the harms of Internet use are quite hard to pin down. These typically take the form of bullying, or self-harm horror stories related to Internet use. The problem is that it’s often easier to gauge how many children have been exposed to certain risky experiences (e.g. viewing pornography) than to ascertain whether or how they were harmed by this. Policy in this area often abides by what’s known as ‘the precautionary principle’.

This means that if you lack clear evidence of harm but have good reason to suspect public harm is likely, then the burden of proof is on those who would prove it is not likely. This means that policies aimed at protecting children in many contexts are often conservative, and rightly so. But it also means that it’s important to reconsider policies in the light of new evidence as it comes along. In this case we found that there is not as much evidence that Internet filters are effective at preventing exposure to negative experiences online as might be hoped.

Ed.: Stupid question: do these filters just filter “websites”, or do they filter social media posts as well? I would have thought young teens would be more likely to find or share stuff on social media (i.e. mobile) than “on a website”?

Andy: My understanding is that there are continually updated ‘lists’ of websites that contain certain kinds of content, such as pornography, piracy, gambling, or drug use (see this list on Wikipedia, for example), as these categories vary by UK ISP.

Vicki: But it’s not quite true to say that household filtering packages don’t block social media. Some of the filtering options offered by the UK’s ‘Big 4’ ISPs enable parents and carers to block social media sites for ‘homework time’ for example. A bigger issue though, is that much of children’s Internet use now takes place outside the home. So, household-level filters can only go so far. And whilst schools and libraries usually filter content, public wifi or wifi in friends’ houses may not, and content can be easily exchanged directly between kids’ devices via Bluetooth or messaging apps.
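For readers curious how the category-based blocking Andy describes might work under the hood, here is a minimal sketch in Python. The categories, domains, and household settings below are hypothetical, and real ISP filters are considerably more elaborate (and continually updated):

```python
# Minimal sketch of category-based URL filtering, as described above.
# The category lists, domains, and household policy are invented examples,
# not any real ISP's blocklists.
from urllib.parse import urlparse

# Stand-ins for the continually updated category lists.
BLOCKLISTS = {
    "gambling": {"example-casino.com", "bets.example.net"},
    "pornography": {"adult.example.org"},
    "social_media": {"social.example.com"},
}

# Per-household policy: which categories are switched on.
HOUSEHOLD_POLICY = {"gambling": True, "pornography": True, "social_media": False}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host appears in any category this household blocks."""
    host = urlparse(url).hostname or ""
    return any(
        enabled and host in BLOCKLISTS.get(category, set())
        for category, enabled in HOUSEHOLD_POLICY.items()
    )

print(is_blocked("https://example-casino.com/odds"))  # True: category switched on
print(is_blocked("https://social.example.com/feed"))  # False: category switched off
```

A sketch like this also makes the over-blocking problem discussed above easy to see: whatever ends up on a category list is blocked wholesale, however legitimate an individual page may be.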

Ed.: Do these blocked sites (like the webpage of that journal author I was trying to access) get notified that they have been blocked and have a chance to appeal? Would a simple solution to over-blocking simply be to allow (e.g. sexual health, gender-issue, minority, etc.) sites to request that they be whitelisted, or apply for some “approved” certification?

Vicki: I don’t believe so. There are whitelisted sites; indeed, that was a key outcome of an early inquiry into ‘over-blocking’ by the UK Council for Child Internet Safety. But in order for this to be a sufficient response, it would be necessary for all sites and apps that are subject to filtering to be notified, to allow for possible appeal. The Open Rights Group provides a tool that allows site owners to check the availability of their sites, but there is no official process for seeking whitelisting or appeal.

Ed.: And what about age verification as an alternative? (however that is achieved / validated), i.e. restricting content before it is indexed, rather than after?

Andy: To evaluate this we would need to conduct a randomised controlled trial where we tested how the application of age verification for different households, selected at random, would relate (or not) to young people encountering potentially aversive content online.

Vicki: But even if such a study could prove that age verification tools were effective in restricting access to underage Internet users, it’s not clear this would be a desirable scenario. It makes most sense for content that is illegal to access below a certain age, such as online gambling or pornography. But if content is age-gated without legal requirement, then it could prove a very restrictive tool, removing the possibility of parental discretion and failing to make any allowances for the sorts of differences in ability or maturity between children that I pointed out at the beginning.

Ed.: Similarly to the arguments over Google making content-blocking decisions (e.g. over the “right to be forgotten”): are these filtering decisions left to the discretion of ISPs / the market / the software providers, or to some government dept / NGO? Who’s ultimately in charge of who sees what?

Vicki: Obviously, when it comes to content that is illegal for children or adults to access, broad decisions about the delineation of what is illegal fall to governments and are then interpreted and applied by private companies. But when it comes to material that is not illegal, but just deemed harmful or undesirable, then ISPs and social media platforms are left to decide for themselves how to draw the boundaries and then how to apply their own policies. This increasing self-regulatory role for what Jonathan Zittrain has called ‘private sheriffs’ is often seen as a flexible and appropriate response, but it does bring reduced accountability and transparency.

Ed.: I guess it’s ironic with all this attention paid to children, that we now find ourselves in an information environment where maybe we should be filtering out (fake) content for adults as well (joke..). But seriously: with all these issues around content, is your instinct that we should be using technical fixes (filtering, removing from indexes, etc.) or trying to build reflexivity, literacy, resilience in users (i.e. coping strategies). Or both? Both are difficult.

Andy: It is as ironic as it is tragic. When I talk to parents (both Vicki and I are parents) I hear that they have been let down by the existing advice, which often amounts to little more than ‘turn it off’. Their struggles have nuance (e.g. how do I know who is in my child’s WhatsApp groups? Is Snapchat OK if they’re just using it amongst best friends?) and whilst general broad advice is heard, this more detailed information and support is hard for parents to find.

Vicki: I agree. But I think it’s inevitable that we’ll always need a combination of tools to deal with the incredible array of content that develops online. No technical tool will ever be 100% reliable in blocking content we don’t want to see, and we need to know how to deal with whatever gets through. That certainly means having a greater social and political focus on education but also a willingness to consider that building resilience may mean exposure to risk, which is hard for some groups to accept.

Every element of our strategy should be underpinned by whatever evidence is available. Ultimately, we also need to stop thinking about these problems as technology problems: fake news is as much a feature of increasing political extremism and alienation as online pornography is a feature of a heavily sexualised mainstream culture. And we can be certain: neither of these broader social trends will be resolved by simple efforts to block out what we don’t wish to see.

Read the full article: Przybylski, A. and Nash, V. (2017) Internet Filtering Technology and Aversive Online Experiences in Adolescents. Journal of Pediatrics. DOI: http://dx.doi.org/10.1016/j.jpeds.2017.01.063


Andy Przybylski and Vicki Nash were talking to blog editor David Sutcliffe.

Psychology is in Crisis: And Here’s How to Fix It https://ensr.oii.ox.ac.uk/psychology-is-in-crisis-and-heres-how-to-fix-it/ Thu, 23 Mar 2017 13:37:05 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4017
“Psychology emergency” by atomicity (Flickr).

Concerns have been raised about the integrity of the empirical foundation of psychological science, such as low statistical power, publication bias (i.e. an aversion to reporting statistically nonsignificant or “null” results), poor availability of data, the rate of statistical reporting errors (meaning that the data may not support the conclusions), and the blurring of boundaries between exploratory work (which creates new theory or develops alternative explanations) and confirmatory work (which tests existing theory). It seems that in psychology and communication, as in other fields of social science, much of what we think we know may be based on a tenuous empirical foundation.

However, a number of open science initiatives have been successful recently in raising awareness of the benefits of open science and encouraging public sharing of datasets. These are discussed by Malte Elson (Ruhr University Bochum) and the OII’s Andrew Przybylski in their special issue editorial: “The Science of Technology and Human Behavior: Standards, Old and New”, published in the Journal of Media Psychology. What makes this issue special is not the topic, but the scientific approach to hypothesis testing: the articles are explicitly confirmatory, that is, intended to test existing theory.

All five studies are registered reports, meaning they were reviewed in two stages: first, the theoretical background, hypotheses, methods, and analysis plans of a study were peer-reviewed before the data were collected. The studies received an “in-principle” acceptance before the researchers proceeded to conduct them. The soundness of the analyses and discussion sections was reviewed in a second step, and the publication decision was not contingent on the outcome of the study: i.e. there was no bias against reporting null results. The authors made all materials, data, and analysis scripts available on the Open Science Framework (OSF), and the papers were checked using the freely available R package statcheck (see also: www.statcheck.io).
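To give a sense of what such a check involves: statcheck itself is an R package, but the core consistency test it automates can be sketched in a few lines of Python (our re-implementation for illustration, not statcheck’s own API). The idea is to recompute the p-value implied by a reported test statistic and its degrees of freedom, and to flag any mismatch with the p-value the paper reports:

```python
# Sketch of the consistency check behind tools like statcheck (re-implemented
# here in Python purely for illustration; statcheck itself is an R package).
from scipy import stats

def p_matches_t(t: float, df: int, reported_p: float, tol: float = 0.005) -> bool:
    """Recompute the two-tailed p-value for a reported t statistic and
    compare it with the p-value reported in the paper."""
    recomputed_p = 2 * stats.t.sf(abs(t), df)
    return abs(recomputed_p - reported_p) <= tol

# A result reported as "t(28) = 2.20, p = .036" is internally consistent:
print(p_matches_t(t=2.20, df=28, reported_p=0.036))  # True
# Reporting the same statistic with "p = .01" would be flagged:
print(p_matches_t(t=2.20, df=28, reported_p=0.01))   # False
```

In practice, statcheck parses reported statistics directly out of the text of a paper and runs this kind of recomputation across entire articles.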

All additional (non-preregistered) analyses are explicitly labelled as exploratory. This makes it easier to see and understand what the researchers were expecting based on knowledge of the relevant literature, and what they eventually found in their studies. It also allows readers to build a clearer idea of the research process and the elements of the studies that came as inspiration after the reviews and the data collection were complete. The issue provides a clear example of how exploratory and confirmatory studies can coexist — and how science can thrive as a result. The articles published in this issue will hopefully serve as an inspiration and model for other media researchers, and encourage scientists studying media to preregister designs and share their data and materials openly.

Media research — whether concerning the Internet, video games, or film — speaks directly to everyday life in the modern world. It affects how the public forms their perceptions of media effects, and how professional groups and governmental bodies make policies and recommendations. Empirical findings disseminated to caregivers, practitioners, and educators should therefore be built on an empirical foundation with sufficient rigor. And indeed, the promise of building an empirically-based understanding of how we use, shape, and are shaped by technology is an alluring one. If adopted by media psychology researchers, this approach could support rigorous testing and development of promising theories, and retirement of theories that do not reliably account for observed data.

The authors close by noting their firm belief that incremental steps taken towards scientific transparency and empirical rigor — with changes to publishing practices to promote open, reproducible, high-quality research — will help us realize this potential.

We caught up with the editors to find out more about preregistration of studies:

Ed.: So this “crisis“ in psychology (including, for example, a lack of reproducibility of certain reported results) — is it unique to psychology, or does it extend more generally to other (social) sciences? And how much will is there in the community to do something about it?

Andy: Absolutely not. There is strong evidence in most social and medical sciences that computational reproducibility (i.e. re-running the code / data) and replicability (i.e. re-running the study) are much lower than we might expect. In psychology and medical research there is a lot of passion and expertise focused on improving the evidence base. We’re cautiously optimistic that researchers in allied fields such as computational social science will follow suit.

Malte: It’s important to understand that a failure to successfully replicate a previous finding is not a problem for scientific progress. Quite the contrary: it tells us that previously held assumptions or predictions must be revisited. The number of replications in psychology’s flagship journals is still not overwhelming, but the research community has begun to value and incentivize this type of research.

Ed.: It’s really impressive not just what you’ve done with this special issue (and the intentions behind it), but also that the editor gave you free rein to do so — and also to investigate and report on the state of that journal’s previously published articles, including re-running stats, in what you describe as an act of “sincere self-reflection” on the part of the journal. How much acceptance is there in the field (psychology, and beyond) that things need to be shaken up?

Malte: I think it is uncontroversial to say that, as psychologists adapt their research practices (by preregistering their hypotheses, conducting high-powered replications, and sharing their data and materials), the reliability and quality of the scientific evidence they produce increases. However, we need to be careful not to depreciate the research generated before these changes. But that is exactly what science can help with: meta-scientific analyses of already published research, be it in our editorial or elsewhere, provide guidance on how it may (or may not) inform future studies on technology and human behavior.

Andy: We owe a lot to the editor-in-chief Nicole Krämer and the experts who reviewed submissions to the special issue. This hard work has helped us and the authors deliver a strong set of studies with respect to technology effects on behaviour. We are proud to say that registered reports are now a permanent submission track at the Journal of Media Psychology and 35 other journals. We hope this can help set an example for other areas of quantitative social science which may not yet realise they face the same serious challenges.

Ed.: It’s incredibly annoying to encounter papers in review where the problem is clearly that the study should have been designed differently from the start. The authors won’t start over, of course, so you’re just left with a weak paper that the authors will be desperate to offload somewhere, but that really shouldn’t be published: i.e. a massive waste of everyone’s time. What structural changes are needed to mainstream pre-registration as a process, i.e. for design to be reviewed first, before any data is collected or analysed? And what will a tipping point towards preregistration look like, assuming it comes?

Andy: We agree that this experience is aggravating for researchers invested in both the basic and applied aspects of science. We think this might come down to a carrot-and-stick approach. For quantitative science, pre-registration and replication could be a requirement for articles to be considered in the Research Excellence Framework (REF) and as part of UK and EU research council funding. For example, the Wellcome Trust now provides an open access, open science portal for researchers supported by their funding (carrots). In terms of sticks, it may be the case that policy makers and the general public will become more sophisticated over time and simply will not value work that is not transparently conducted and shared.

Ed.: How aware / concerned are publishers and funding bodies of this crisis in confidence in psychology as a scientific endeavour? Will they just follow the lead of others (e.g. groups like the Center for Open Science), or are they taking a leadership role themselves in finding a way forward?

Malte: Funding bodies are arguably another source of particularly tasty carrots. It is in their vital interest that funded research is relevant and conducted rigorously, but also that it is sustainable. They depend on reliable groundwork to base new research projects on. Without it, funding becomes, essentially, a gambling operation. Some organizations are quicker than others to take a lead, such as the Netherlands Organisation for Scientific Research (NWO), which has launched a Replication Studies pilot programme. I’m optimistic we will see similar efforts elsewhere.

Andy: We are deeply concerned that the general public will see science and scientists missing a golden opportunity to correct themselves. Like scientists, funding bodies are adaptive, and we (and others) speak directly to them about these challenges to the medical and social sciences. The public and research councils invest substantial resources in science, and it is our responsibility to do our best and to deliver the best science we can. Initiatives like the Center for Open Science are key to this because they help scientists build tools to pool our resources and develop innovative methods for strengthening our work.

Ed.: I assume the end goal of this movement is to embed it in the structure of science as-it-is-done? i.e. for good journals and major funding bodies to make pre-registration of studies a requirement, and for a clear distinction to be drawn between exploratory and confirmatory studies? Out of curiosity, what does (to pick a random journal) Nature make of all this? And the scientific press? Is there much awareness of preregistration as a method?

Malte: Conceptually, preregistration is just another word for how the scientific method is taught already: hypotheses are derived from theory, and data are collected to test them. Predict, verify, replicate. Matching this concept with a formal procedure at some organizational level (such as funding bodies or journals) seems only logical. Thanks to scientists like Chris Chambers, who is promoting the Registered Reports format, there is good reason to be confident that the number of journals offering this track will keep increasing.

Andy: We’re excited to say that parts of these mega-journals and some science journalists are on board. Nature Human Behaviour now provides registered reports as a submission track, and a number of science journalists, including Ed Yong (@edyong209), Tom Chivers (@TomChivers), Neuroskeptic (@Neuro_Skeptic), and Jesse Singal (@jessesingal), are leading the way with critical and on-point work that highlights the risks associated with the replication crisis and the opportunities to improve reproducibility.

Ed.: Finally: what would you suggest to someone wanting to make sure they do a good study, but who is not sure where to begin with all this: what are the main things they should read and consider?

Andy: That’s a good question; the web is a great place to start. To learn more about registered reports and why they are important see this, and to learn about their place in robust science see this. To see how you can challenge yourself to do a pre-registered study and earn $1,000 see this, and to do a deep dive into open scientific practice see this.

Malte: Yeah, what Andy said. Also, I would thoroughly recommend joining social networks (Twitter, or the two sister groups Psychological Methods and PsychMAP on Facebook) where these issues are the subject of lively discussion.

Ed.: Anyway .. congratulations to you both, the issue authors, and the journal’s editor-in-chief, on having done a wonderful thing!

Malte: Thank you! We hope the research reports in this issue will serve as an inspiration and model for other psychologists.

Andy: Many thanks, we are doing our best to make the social sciences better and more reliable.

Read the full editorial: Elson, M. and Przybylski, A. (2017) The Science of Technology and Human Behavior: Standards, Old and New. Journal of Media Psychology. DOI: 10.1027/1864-1105/a000212


Malte Elson and Andrew Przybylski were talking to blog editor David Sutcliffe.

What Impact is the Gig Economy Having on Development and Worker Livelihoods? https://ensr.oii.ox.ac.uk/what-impact-is-the-gig-economy-having-on-development-and-worker-livelihoods/ Mon, 20 Mar 2017 07:46:43 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3995
There are imbalances in the relationship between supply and demand of digital work, with the vast majority of buyers located in high-income countries (pictured). See the full article for details.

As David Harvey famously noted, workers are unavoidably place-based because “labor-power has to go home every night.” But the widespread use of the Internet has changed much of that. The confluence of rapidly spreading digital connectivity, skilled but under-employed workers, the existence of international markets for labour, and the ongoing search for new outsourcing destinations, has resulted in organisational, technological, and spatial fixes for virtual production networks of services and money. Clients, bosses, workers, and users of the end-products of work can all now be located in different corners of the planet.

A new article by Mark Graham, Isis Hjorth and Vili Lehdonvirta, “Digital labour and development: impacts of global digital labour platforms and the gig economy on worker livelihoods”, published in Transfer, discusses the implications of the spatial unfixing of work for workers in some of the world’s economic margins, and reflects on some of the key benefits and costs associated with these new digital regimes of work. Drawing on a multi-year study with digital workers in Sub-Saharan Africa and South-east Asia, it highlights four key concerns for workers: bargaining power, economic inclusion, intermediated value chains, and upgrading.

As ever more policy-makers, governments and organisations turn to the gig economy and digital labour as an economic development strategy to bring jobs to places that need them, it is important to understand how this might influence the livelihoods of workers. The authors show that although there are important and tangible benefits for a range of workers, there are also a range of risks and costs that could negatively affect the livelihoods of digital workers. They conclude with a discussion of four broad strategies — certification schemes, organising digital workers, regulatory strategies and democratic control of online labour platforms — that could improve conditions and livelihoods for digital workers.

We caught up with the authors to explore the implications of the study:

Ed.: Shouldn’t increased digitisation of work also increase transparency (i.e. tracking, auditing etc.) around this work — i.e. shouldn’t digitisation largely be a good thing?

Mark: It depends. One of the goals of our research is to ask who actually wins and loses from the digitalisation of work. A good thing for one group (e.g. employers in the Global North) isn’t necessarily automatically a good thing for another group (e.g. workers in the Global South).

Ed.: You mention market-based strategies as one possible way to improve transparency around working conditions along value chains: do you mean something like a “Fairtrade” certification for digital work, i.e. creating a market for “fair work”?

Mark: Exactly. At the moment, we can make sure that the coffee we drink or the chocolate we eat is made ethically. But we have no idea if the digital services we use are. A ‘fair work’ certification system could change that.

Ed.: And what sorts of work are these people doing? Is it the sort of stuff that could be very easily replaced by advances in automation (natural language processing, pattern recognition etc.)? i.e. is it doubly precarious, not just in terms of labour conditions, but also in terms of the very existence of the work itself?

Mark: Yes, some of it is. Ironically, some of the paid work that is done is training algorithms to do work that used to be done by humans.

Ed.: You say that “digital workers have been unable to build any large-scale or effective digital labour movements” — is that because (unlike e.g. farm work which is spatially constrained), employers can very easily find someone else anywhere in the world who is willing to do it? Can you envisage the creation of any effective online labour movement?

Mark: A key part of the problem for workers here is the economic geography of this work. A worker in Kenya knows that they can be easily replaced by workers on the other side of the planet. The potential pool of workers willing to take any job is massive. For digital workers to have any sort of effective movement in this context means looking to what I call geographic bottlenecks in the system. Places in which work isn’t solely in a global digital cloud. This can mean looking to things like organising and picketing the headquarters of firms, clusters of workers in particular places, or digital locations (the web-presence of firms). I’m currently working on a new publication that deals with these issues in a bit more detail.

Ed.: Are there any parallels between the online gig work you have studied and ongoing issues with “gig work” services like Uber and Deliveroo (e.g. undercutting of traditional jobs, lack of contracts, precarity)?

Mark: A commonality in all of those cases is that platforms become intermediaries in between clients and workers. This means that rather than being employees, workers tend to be self-employed: a situation that offers workers freedom and flexibility, but also comes with significant risks to the worker (e.g. no wages if they fall ill).

Read the full article: Graham, M., Hjorth, I. and Lehdonvirta, V. (2017) Digital Labour and Development: Impacts of Global Digital Labour Platforms and the Gig Economy on Worker Livelihoods. Transfer. DOI: 10.1177/1024258916687250

Read the full report: Graham, M., Lehdonvirta, V., Wood, A., Barnard, H., Hjorth, I., Simon, D. P. (2017) The Risks and Rewards of Online Gig Work At The Global Margins [PDF]. Oxford: Oxford Internet Institute.

The article draws on findings from the research project “Microwork and Virtual Production Networks in Sub-Saharan Africa and South-east Asia”, funded by the International Development Research Centre (IDRC), grant number: 107384-001.


Mark Graham was talking to blog editor David Sutcliffe.

Tackling Digital Inequality: Why We Have to Think Bigger https://ensr.oii.ox.ac.uk/tackling-digital-inequality-why-we-have-to-think-bigger/ Wed, 15 Mar 2017 11:42:25 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3988 Numerous academic studies have highlighted the significant differences in the ways that young people access, use and engage with the Internet, and the implications this has for their lives. While the majority of young people have some form of access to the Internet, for some their connections are sporadic, dependent on credit on their phones, an available library, or Wi-Fi open to the public. Qualitative data in a variety of countries has shown that such limited forms of access can create difficulties for these young people, as an Internet connection becomes essential for socialising, accessing public services, saving money, and learning at school.

While the UK government has financed technological infrastructure and invested in schemes to address digital inequalities, the outcomes of these schemes are rarely uniformly positive or transformative for the people involved. This gap between expectation and reality demands theoretical attention, with more focus placed on the cultural, political and economic contexts of the digitally excluded, and on the various attempts to “include” them.

Focusing on a two-year digital inclusion scheme for 30 teenagers and their families initiated by a local council in England, a qualitative study by Huw C. Davies, Rebecca Eynon, and Sarah Wilkin analyses why, despite the good intentions of the scheme’s stakeholders, it fell short of its ambitions. It also explains how the neoliberal systems of governance that increasingly shape the cultures and behaviours of Internet service providers and schools — systems that incentivise action counterproductive to addressing digital inequality — cannot solve the problems they create.

We caught up with the authors to discuss the study’s findings:

Ed.: It was estimated that around 10% of 13 year olds in the study area lacked dependable access to the Internet, and had no laptop or PC at home. How does this impact educational outcomes?

Huw: It’s impossible to disaggregate technology from everything else that can affect a young person’s progress through school. However, one school in our study had transferred all its homework and assessments online while the other schools were progressing to this model. The students we worked with said doing research for homework is synonymous with using Google or Wikipedia, and it’s the norm to send homework and coursework to teachers by email, upload it to Virtual Learning Environments, or print it out at home. Therefore students who don’t have access to the Internet have to spend time and effort finding work-arounds such as using public libraries. Lack of access also excludes such students from casual learning from resources online or pursuing their own interests in their own time.

Ed.: The digital inclusion scheme was designed as a collaboration between a local council in England (who provided Internet services) and schools (who managed the scheme) in order to test the effect of providing home Internet access on educational outcomes in the area. What was your own involvement, as researchers?

Huw: Initially, we were the project’s expert consultants: we were there to offer advice, guidance and training to teachers and assess the project’s efficacy on its conclusion. However, as it progressed we took on the responsibility of providing skills training to the scheme’s students and technical support to their families. When it came to assessing the scheme, by interviewing young people and their families at their homes, we were therefore able to draw on our working knowledge of each family’s circumstances.

Ed.: What was the outcome of the digital inclusion project — i.e. was it “successful”?

Huw: As we discuss in the article, defining success in these kinds of schemes is difficult. Subconsciously, many people involved in these kinds of schemes expect technology to be transformative for the young people involved, yet in reality the changes you see are more nuanced and subtle. Some of the scheme’s young people found apprenticeships or college courses, taught themselves new skills, used social networks for the first time and spoke to friends and relatives abroad by video for free. These success stories definitely made the scheme worthwhile. However, despite the significant goodwill of the schools, local council, and the families to make the scheme a success, there were also frustrations and problems. In the article we talk about these problems and argue that the challenges the scheme encountered are not just practical issues to be resolved, but are systemic issues that need to be explicitly recognised in future schemes of this kind.

Ed.: And in the article you use neoliberalism as a frame to discuss these issues..?

Huw: Yes. But we recognise in the article that this is a concept that needs to be used with care: it’s often used pejoratively and/or imprecisely. We have taken it to mean a set of guiding principles that are intended to produce a better quality of services through competition, targets, results, incentives and penalties. The logic of these principles, we argue, influences the way organisations treat individual users of their services.

For example, for Internet Service Providers (ISPs) the logic of neoliberalism is to subcontract out the constituent parts of an overall service provision, creating mini internal markets that (in theory) promote efficiency through competition. Yet this logic only really works if everyone comes to the market with similar resources and abilities to make choices. If customers are well informed and wealthy enough to remind companies that they can take their business elsewhere, these companies will have a strong incentive to improve their services and reduce their costs. If customers are disempowered by lack of choice, the logic of neoliberalism tends to marginalise or ignore their needs. These were low-income families with little or no experience of exercising consumer choice and rights; for them, these mini markets simply didn’t work.

In the schools we worked with, the logic of neoliberalism meant staff and students felt under pressure to meet certain targets — they all had to prioritise things that were measured and measurable. Failure to meet these targets would mean having to account for what went wrong, losing out on a reward, or facing disciplinary action. It therefore becomes much more difficult for schools to devote time and energy to schemes such as this.

Ed.: Were there any obvious lessons that might lead to a better outcome if the scheme were to be repeated: or are the (social, economic, political) problems just too intractable, and therefore too difficult and expensive to sort out?

Huw: Many of the families told us that access to the Internet was becoming ever more vital: not just for homework, but also for access to public and health services (which are increasingly delivered online) and for getting the best deals on consumer services. They often told us, therefore, that they would do whatever it took to keep their connection after the two-year scheme ended. This often meant paying for broadband out of social security benefits, or out of income too low to be taxable: income that could otherwise have been spent on, for example, food and clothing. Given its necessity, we should have a national conversation about providing this service to low-income families for free.

Ed.: Some of the families included in the study could be considered “hard to reach”. What were your experiences of working with them?

Huw: There are many practical and ethical issues to address before these sorts of schemes can begin. These families often face multiple intersecting problems that involve many agencies (who don’t necessarily communicate with each other) intervening in their lives. For example, some of the scheme’s families were dealing with mental illness, disability, poor housing, and debt all at the same time. It is important that such schemes are set up with an awareness of this complexity. We are very grateful to the families that took part in the scheme, and for the insights they gave us into how such schemes should run in the future.

Ed.: Finally, how do your findings inform all the studies showing that “digital inclusion schemes are rarely uniformly positive or transformative for the people involved”. Are these studies gradually leading to improved knowledge (and better policy intervention), or simply showing the extent of the problem without necessarily offering “solutions”?

Huw: We have tried to put this scheme into a broader context, to show that such policy interventions have to be much more ambitious, intelligent, and holistic. We never assumed digital inequality is an isolated problem that can be fixed with a free broadband connection; when people are unable to afford the Internet, it is an indication of other forms of disadvantage that have to be addressed simultaneously, in a sympathetic and coordinated way. Hopefully, we have contributed to the growing awareness that attempts to ameliorate the symptoms may offer some relief, but should never be considered a cure in themselves.

Read the full article: Huw C. Davies, Rebecca Eynon, Sarah Wilkin (2017) Neoliberal gremlins? How a scheme to help disadvantaged young people thrive online fell short of its ambitions. Information, Communication & Society. DOI: 10.1080/1369118X.2017.1293131

The article is an output of the project “Tackling Digital Inequality Amongst Young People: The Home Internet Access Initiative“, funded by Google.

Huw Davies was talking to blog editor David Sutcliffe.

]]>
Five Pieces You Should Probably Read On: Reality, Augmented Reality and Ambient Fun https://ensr.oii.ox.ac.uk/five-pieces-you-should-probably-read-on-reality-augmented-reality-and-ambient-fun/ Fri, 03 Mar 2017 10:59:07 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3979

This is the third post in a series that will uncover great writing by faculty and students at the Oxford Internet Institute, things you should probably know, and things that deserve to be brought out for another viewing. This week: Reality, Augmented Reality and Ambient Fun!

The addictive gameplay of Pokémon GO has led to police departments warning people that they should be more careful about revealing their locations, players injuring themselves, finding dead bodies, and even the Holocaust Museum telling people to play elsewhere.. Our environments are increasingly augmented with digital information: but how do we assert our rights over how and where this information is used? And should we be paying more attention to the design of persuasive technologies in increasingly attention-scarce environments? Or should we maybe just bin all our devices and pack ourselves off to digital detox camp?

 

1. James Williams: Bring Your Own Boundaries: Pokémon GO and the Challenge of Ambient Fun

23 July 2016 / 2500 words / 12 min / Gross misuses of the “Poké-” prefix: 6

“The slogan of the Pokémon franchise is ‘Gotta catch ‘em all!’ This phrase has always seemed to me an apt slogan for the digital era as a whole. It expresses an important element of the attitude we’re expected to have as we grapple with the Sisyphean boulder of information abundance using our woefully insufficient cognitive toolsets.”

Pokémon GO signals the first mainstream adoption of a type of game — always on, always with you — that requires you to ‘Bring Your Own Boundaries’, says James Williams. Regulation of such games falls on the user, presenting us with a unique opportunity to advance the conversation about the ethics of self-regulation and self-determination in environments of increasingly persuasive technology.

 

2. James Williams: Orwell, Huxley, Banksy

24 May 2014 / 1000 words / 5 min

“Orwell worried that what we fear could ultimately come to control us: the “boot stamping on a human face—forever.” Huxley, on the other hand, felt that what we love was more likely to control us — by seducing us and engineering our compliance from within — and was therefore more deserving of a wary eye. In the age of the Internet, this dichotomy is reflected in the interplay between information and attention.”

You could say that the core challenge of the Internet (when information overload leads to scarcity of attention) is that it optimizes more for our impulses than our intentions, says James Williams, who warns that we could significantly overemphasize informational challenges to the neglect of attentional ones. In Brave New World, the defenders of freedom had “failed to take into account man’s almost infinite appetite for distractions.” In the digital era, we are making the same mistake, says James: we need better principles and processes to help designers make products more respectful of users’ attention.

 

3. James Williams: Staying free in a world of persuasive technologies

29 July 2013 / 1500 words / 7 min

“The explosion of media and information has made it harder for people to be intentional or reflective about their goals and priorities in life. We’re living through a crisis of distraction. The convergence of all these trends suggests that we could increasingly live our lives in environments of high persuasive power. To me, the biggest ethical questions are those that concern individual freedom and autonomy. When, exactly, does a “nudge” become a “push”?”

Technologies are increasingly being designed to change the way we think and behave: the Internet is now part of the background of human experience, and rapid advances in analytics are enabling optimisation of technologies to reach greater levels of persuasiveness. The ethical questions raised aren’t new, says James Williams, but the environment in which we’re asking them makes them much more urgent to address.

 

4. Mark Graham, Joe Shaw: An Informational Right to the City? [The New Internationalist]

8 February 2017 / 1000 words / 5 min

“Contemporary cities are much more than bricks and mortar; streets and pipes. They are also their digital presences – abstract presences which can reproduce and change our material reality. If you accept this premise, then we need to ask important questions about what rights citizens have to not just public and private spaces, but also their digital equivalents.”

It’s time for the struggle for more egalitarian rights to the city to move beyond a focus on material spaces and into the realm of digital ones, say Mark Graham and Joe Shaw. And we can undermine and devalue the hold of large companies over urban information by changing our own behaviour, they say: by rejecting some technologies, by adopting alternative service providers, and by supporting initiatives to develop platforms that operate on a more transparent basis.

 

5. Theodora Sutton: Exploring the world of digital detoxing

2 March 2017 / 2000 words / 10 min

“The people who run Camp Grounded would tell you themselves that digital detoxing is not really about digital technology. That’s just the current scapegoat for all the alienating aspects of modern life. But at the same time I think it is a genuine conversation starter about our relationship with technology and how it’s designed.”

As our social interactions become increasingly entangled with the online world, some people are insisting on the benefits of disconnecting entirely from digital technology: getting back to so-called “real life“. In this piece, Theodora Sutton explores the digital detoxing community in the San Francisco Bay Area, getting behind the rhetoric of the digital detox to understand the views and values of those wanting to re-examine the role of technology in their lives.

 

The Authors

James Williams is an OII doctoral student. He studies the ethical design of persuasive technology. His research explores the complex boundary between persuasive power and human freedom in environments of high technological persuasion.

Mark Graham is the Professor of Internet Geography at the OII. His research focuses on Internet and information geographies, and the overlaps between ICTs and economic development.

Joe Shaw is an OII DPhil student and Research Assistant. His research is concerned with the geography of information, property market technologies (PropTech) and critical urbanism.

Theodora Sutton is an OII DPhil student. Her research in digital anthropology examines digital detoxing and the widespread cultural narrative that sees digital sociality as inherently ‘lesser’ or less ‘natural’ than previous forms of communication.

 

Coming up! .. The platform economy / Power and development / Internet past and future / Government / Labour rights / The disconnected / Ethics / Staying critical

]]>
Exploring the world of digital detoxing https://ensr.oii.ox.ac.uk/exploring-the-world-of-digital-detoxing/ Thu, 02 Mar 2017 10:50:06 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3973 As our social interactions become increasingly entangled with the online world, there are some who insist on the benefits of disconnecting entirely from digital technology. These advocates of “digital detoxing” view digital communication as eroding our ability to concentrate, to empathise, and to have meaningful conversations.

A 2016 survey by OnePoll found that 40% of respondents felt they had “not truly experienced valuable moments such as a child’s first steps or graduation” because “technology got in the way”, and Ofcom’s 2016 survey showed that 15 million British Internet users (a third of those online) have already tried a digital detox. In recent years, America has sought to pathologise a perceived over-use of digital technology as “Internet addiction”. While the term is not recognised by the DSM, the idea is commonly used in media rhetoric and forms an important backdrop to digital detoxing.

The article Disconnect to reconnect: The food/technology metaphor in digital detoxing (First Monday) by Theodora Sutton presents a short ethnography of the digital detoxing community in the San Francisco Bay Area. Her informants attend an annual four-day digital detox and summer camp for adults in the Californian forest called Camp Grounded. She attended two Camp Grounded sessions in 2014, and followed up with semi-structured interviews with eight detoxers.

We caught up with Theodora to examine the implications of the study and to learn more about her PhD research, which focuses on the same field site.

Ed.: In your forthcoming article you say that Camp Grounded attendees used food metaphors (and words like “snacking” and “nutrition”) to understand their own use of technology and behaviour. How useful is this as an analogy?

Theodora: The food/technology analogy is an incredibly neat way to talk about something we think of as immaterial in a more tangible way. We know that our digital world relies on physical connections, but we forget that all the time. Another thing the analogy does, in lending a dietary connotation, is to imply that we should regulate our digital consumption; that there are healthy and unhealthy, or inappropriate, ways of using it.

I explore more pros and cons to the analogy in the paper, but the biggest con in my opinion is that while it’s neat, it’s often used to make value judgments about technology use. For example, saying that online sociality is like processed food is implying that it lacks authenticity. So the food analogy is a really useful way to understand how people are interpreting technology culturally, but it’s important to be aware of how it’s used.

Ed.: How do people rationalise ideas of the digital being somehow “less real” or “genuine” (less “nourishing”), despite the fact that it obviously is all real: just different? Is it just a peg to blame an “other” and excuse their own behaviour .. rather than just switching off their phones and going for a run / sail etc. (or any other “real” activity..).

Theodora: The idea of new technologies being somehow less real or less natural is a pretty established Western concept, and it’s been fundamental in moral panics following new technologies. That digital sociality is different, not lesser, is something we can academically agree on, but people very often believe otherwise.

My personal view is that figuring out what kind of digital usage suits you and then acting in moderation is ideal, without the need for extreme lengths, but in reality moderation can be quite difficult to achieve. And the thing is, we’re not just talking about choosing to text rather than meet in person, or read a book instead of go on Twitter. We’re talking about digital activities that are increasingly inescapable and part of life, like work e-mail or government services being moved online.

Going for a run or sailing are, again, privileged activities for people with free time. Many people think getting back to nature or meeting in person are really important for human needs. But increasingly, not everyone has the ability to get away from devices, especially if you don’t have enough money to visit friends or travel to a forest, or you’re just too tired from working all the time. So Camp Grounded is part of what they feel is an urgent conversation about whether the technology we design addresses human, emotional needs.

Ed.: You write in the paper that “upon arrival at Camp Grounded, campers are met with hugs and milk and cookies” .. not to sound horrible, but isn’t this replacing one type of (self-focused) reassurance with another? I mean, it sounds really nice (as does the rest of the Camp), but it sounds a tiny bit like their “problem” is being fetishised / enjoyed a little bit? Or maybe that their problem isn’t to do with technology, but rather with confidence, anxiety etc.

Theodora: The people who run Camp Grounded would tell you themselves that digital detoxing is not really about digital technology. That’s just the current scapegoat for all the alienating aspects of modern life. They also take away real names, work talk, watches, and alcohol. One of the biggest things Camp Grounded tries to do is build up attendees’ confidence to be silly and playful and have their identities less tied to their work persona, which is a bit of a backlash against Silicon Valley’s intense work ethic. Milk and cookies comes from childhood, or America’s summer camps which many attendees went to as children, so it’s one little thing they do to get you to transition into that more relaxed and childlike way of behaving.

I’m not sure about “fetishized,” but Camp Grounded really jumps on board with the technology idea, using really ironic things like an analog dating service called “embers,” a “human powered search” where you pin questions on a big noticeboard and other people answer, and an “inbox” where people leave you letters.

And you’re right, there is an aspect of digital detoxing which is very much a “middle class ailment” in that it can seem rather surface-level and indulgent, and tickets are pretty pricey, making it quite a privileged activity. But at the same time I think it is a genuine conversation starter about our relationship with technology and how it’s designed. I think a digital detox is more than just escapism or reassurance, for them it’s about testing a different lifestyle, seeing what works best for them and learning from that.

Ed.: Many of these technologies are designed to be “addictive” (to use the term loosely: maybe I mean “seductive”) in order to drive engagement and encourage retention: is there maybe an analogy here with foods that are too sugary, salty, fatty (i.e. addictive) for us? I suppose the line between genuine addiction and free choice / agency is a difficult one; and one that may depend largely on the individual. Which presumably makes any attempts to regulate (or even just question) these persuasive digital environments particularly difficult? Given the massive outcry over perfectly rational attempts to tax sugar, fat etc.

Theodora: The analogy between sugary, salty, or fatty foods and seductive technologies is drawn a lot — it was even made by danah boyd in 2009. Digital detoxing comes from a standpoint that tech companies aren’t necessarily working to enable meaningful connection, and are instead aiming to “hook” people in. That’s often compared to food companies that exist to make a profit rather than improve your individual nutrition, using whatever salt, sugar, flavourings, or packaging they have at their disposal to make you keep coming back.

There are two different ways of “fixing” perceived problems with tech: there are technical fixes, which might only let you use a site for certain amounts of time or re-design it so that it’s less seductive; and there are normative fixes, which could be an individual deciding to make a change, or even society-wide measures, like the French labour law giving the “right to disconnect” from work emails on evenings and weekends.

One that sort of embodies both of these is The Time Well Spent project, run by Tristan Harris and the OII’s James Williams. They suggest different metrics for tech platforms, such as how well they enable good experiences away from the computer altogether. Like organic food stickers, they’ve suggested putting a stamp on websites whose companies have these different metrics. That could encourage people to demand better online experiences, and encourage tech companies to design accordingly.

So that’s one way that people are thinking about regulating it, but I think we’re still in the stages of sketching out what the actual problems are and thinking about how we can regulate or “fix” them. At the moment, the issue seems to depend on what the individual wants to do. I’d be really interested to know what other ideas people have had to regulate it, though.

Ed.: Without getting into the immense minefield of evolutionary psychology (and whether or not we are creating environments that might be detrimental to us mentally or socially: just as the Big Mac and Krispy Kreme are not brilliant for us nutritionally) — what is the lay of the land — the academic trends and camps — for this larger question of “Internet addiction” .. and whether or not it’s even a thing?

Theodora: In my experience academics don’t consider it a real thing, just as you wouldn’t say someone had an addiction to books. But again, that doesn’t mean it isn’t used all the time as a shorthand. And there are some academics who use it, like Kimberly Young, who proposed it in the 1990s. She still runs an Internet addiction treatment centre in New York, and there’s another in Fall City, Washington state.

The term certainly isn’t going away any time soon and the centres treat people who genuinely seem to have a very problematic relationship with their technology. People like the OII’s Andrew Przybylski (@ShuhBillSkee) are working on untangling this kind of problematic digital use from the idea of addiction, which can be a bit of a defeatist and dramatic term.

Ed.: As an ethnographer working at the Camp according to its rules (hand-written notes, analogue camera) .. did it affect your thinking or subsequent behaviour / habits in any way?

Theodora: Absolutely. In a way that’s a struggle, because I never felt that I wanted or needed a digital detox, yet having been to it three times now I can see the benefits. Going to camp made a strong case for being more careful with my technology use, for example not checking my phone mid-conversation, and I’ve been much more aware of it since. For me, that’s been part of an ongoing debate that I have in my own life, which I think is really useful fuel for continuing to unravel this topic in my studies.

Ed.: So what are your plans now for your research in this area — will you be going back to Camp Grounded for another detox?

Theodora: Yes — I’ll be doing an ethnography of the digital detoxing community again this summer for my PhD and that will include attending Camp Grounded again. So far I’ve essentially done just preliminary fieldwork and visited to touch base with my informants. It’s easy to listen to the rhetoric around digital detoxing, but I think what’s been missing is someone spending time with them to really understand their point of view, especially their values, that you can’t always capture in a survey or in interviews.

In my PhD I hope to understand things like: how digital detoxers even think about technology, what kind of strategies they have to use it appropriately once they return from a detox, and how metaphor and language work in talking about the need to “unplug.” The food analogy is just one preliminary finding that shows how fascinating the topic is as soon as you start scratching away the surface.

Read the full article: Sutton, T. (2017) Disconnect to reconnect: The food/technology metaphor in digital detoxing. First Monday 22 (6).


OII DPhil student Theodora Sutton was talking to blog editor David Sutcliffe.

]]>
Estimating the Local Geographies of Digital Inequality in Britain: London and the South East Show Highest Internet Use — But Why? https://ensr.oii.ox.ac.uk/estimating-the-local-geographies-of-digital-inequality-in-britain/ Wed, 01 Mar 2017 11:39:54 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3962 Despite the huge importance of the Internet in everyday life, we know surprisingly little about the geography of Internet use and participation at sub-national scales. A new article on Local Geographies of Digital Inequality by Grant Blank, Mark Graham, and Claudio Calvino published in Social Science Computer Review proposes a novel method to calculate the local geographies of Internet usage, employing Britain as an initial case study.

In the first attempt to estimate Internet use at any small-scale level, they combine data from a sample survey, the 2013 Oxford Internet Survey (OxIS), with the 2011 UK census, employing small area estimation to estimate Internet use in small geographies in Britain. (Read the paper for more on this method, and discussion of why there has been little work on the geography of digital inequality.)
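Broadly, estimation methods of this kind work by fitting an individual-level model of Internet use on the survey respondents, and then weighting the model’s predictions by census counts of the same demographic profiles in each small area. The sketch below illustrates that general idea only: it uses entirely synthetic data and hypothetical variable names, and is not the authors’ actual model, which is considerably more sophisticated.

```python
# A minimal, illustrative sketch of the small area estimation idea (not the
# authors' actual model): fit an individual-level model of Internet use on
# survey respondents' demographics, then apply it to census counts of those
# same demographic groups in each small area. All data below are synthetic.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 1. Survey microdata (stand-in for a survey like OxIS): demographics plus
#    observed Internet use for ~2000 respondents.
n = 2000
survey = pd.DataFrame({
    "age_group": rng.integers(0, 4, n),   # 0=16-34, 1=35-54, 2=55-74, 3=75+
    "degree": rng.integers(0, 2, n),      # has higher education
    "low_income": rng.integers(0, 2, n),
})
logit = 2.0 - 1.1 * survey["age_group"] + 1.3 * survey["degree"] - 0.8 * survey["low_income"]
survey["uses_internet"] = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(
    survey[["age_group", "degree", "low_income"]], survey["uses_internet"]
)

# 2. Census-style counts: how many people of each demographic profile live in
#    each small area (here, two hypothetical areas "A" and "B").
census = pd.DataFrame({
    "area":       ["A", "A", "B", "B"],
    "age_group":  [0,   3,   1,   2],
    "degree":     [1,   0,   1,   0],
    "low_income": [0,   1,   0,   1],
    "count":      [900, 400, 700, 600],
})

# 3. Predicted probability of use for each profile, weighted by its population,
#    gives an estimated Internet use rate per area.
census["p_use"] = model.predict_proba(
    census[["age_group", "degree", "low_income"]]
)[:, 1]
estimate = (
    census.assign(users=census["p_use"] * census["count"])
          .groupby("area")[["users", "count"]].sum()
)
estimate["internet_use_rate"] = estimate["users"] / estimate["count"]
print(estimate["internet_use_rate"])
```

The printed figures are of course meaningless; the point is simply that the area-level estimates are driven by each area’s demographic make-up combined with survey-based estimates of how those demographics relate to Internet use.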

There are two major reasons to suspect that geographic differences in Internet use may be important: apparent regional differences and the urban-rural divide. The authors do indeed find a regional difference: the area with least Internet use is in the North East, followed by central Wales; the highest is in London and the South East. But interestingly, geographic differences become non-significant after controlling for demographic variables (age, education, income etc.). That is, demographics matter more than simply where you live, in terms of the likelihood that you’re an Internet user.

Britain has one of the largest Internet economies in the developed world, and the Internet contributes an estimated 8.3 percent to Britain’s GDP. By reducing a range of geographic frictions and allowing access to new customers, markets and ideas it strongly supports domestic job and income growth. There are also personal benefits to Internet use. However, these advantages are denied to people who are not online, leading to a stream of research on the so-called digital divide.

We caught up with Grant Blank to discuss the policy implications of this marked disparity in (estimated) Internet use across Britain.

Ed.: The small-area estimation method you use combines the extreme breadth but shallowness of the national census, with the relative lack of breadth (2000 respondents) but extreme richness (550 variables) of the OxIS survey. Doing this allows you to estimate things like Internet use in fine-grained detail across all of Britain. Is this technique in standard use in government, to understand things like local demand for health services etc.? It seems pretty clever..

Grant: It is used by the government, but not extensively. It is complex and time-consuming to use well, and it requires considerable statistical skills. These have hampered its spread. It probably could be used more than it is — your example of local demand for health services is a good idea..

Ed.: You say this method works for Britain because OxIS collects information based on geographic area (rather than e.g. randomly by phone number) — so we can estimate things geographically for Britain that can’t be done for other countries in the World Internet Project (including the US, Canada, Sweden, Australia). What else will you be doing with the data, based on this happy fact?

Grant: We have used a straightforward measure of Internet use versus non-use as our dependent variable. Similar techniques could predict and map a variety of other variables. For example, we could take a more nuanced view of how people use the Internet. The patterns of mobile use versus fixed-line use may differ geographically and could be mapped. We could separate work-only users, teenagers using social media, or other subsets. Major Internet activities could be mapped, including such things as entertainment use, information gathering, commerce, and content production. In addition, the amount of use and the variety of uses could be mapped. All these are major issues and their geographic distribution has never been tracked.

Ed.: And what might you be able to do by integrating into this model another layer of geocoded (but perhaps not demographically rich or transparent) data, e.g. geolocated social media / Wikipedia activity (etc.)?

Grant: The strength of the data we have is that it is representative of the UK population. The other examples you mention, like Wikipedia activity or geolocated social media, are all done by smaller, self-selected groups of people, who are not at all representative. One possibility would be to show how and in what ways they are unrepresentative.

Ed.: If you say that Internet use actually correlates to the “usual” demographics, i.e. education, age, income — is there anything policy makers can realistically do with this information? i.e. other than hope that people go to school, never age, and get good jobs? What can policy-makers do with these findings?

Grant: The demographic characteristics are things that don’t change quickly. These results point to the limits of the government’s ability to move people online. They suggest that we will never see 100% of the UK population online. This raises the question: what are realistic expectations for online activity? I don’t know the answer to that, but it is an important question that is not easily addressed.

Ed.: You say that “The first law of the Internet is that everything is related to age”. When are we likely to have enough longitudinal data to understand whether this is simply because older people never had the chance to embed the Internet in their lives when they were younger, or whether it is indeed the case that older people inherently drop out? Will this age-effect eventually diminish or disappear?

Grant: You ask an important but unresolved question. In the language of the social sciences: is the decline in Internet use with age an age-effect or a cohort-effect? An age-effect means that the Internet becomes less valuable as people age, and so the decline in use with age is just a reflection of the declining value of the Internet. If this explanation is true, then the age-effect will persist into the indefinite future. A cohort-effect implies that the reason older people tend to use the Internet less is that fewer of them learned to use the Internet in school or work. They will eventually be replaced by active Internet-using people, and Internet use will no longer be associated with age: the decline with age will eventually disappear. We can address this question using data from the Oxford Internet Survey, but it is not a small area estimation problem.

Read the full article: Blank, G., Graham, M., and Calvino, C. 2017. Local Geographies of Digital Inequality. Social Science Computer Review. DOI: 10.1177/0894439317693332.

This work was supported by the Economic and Social Research Council [grant ES/K00283X/1]. The data have been deposited in the UK Data Archive under the name “Geography of Digital Inequality”.


Grant Blank was speaking to blog editor David Sutcliffe.

]]>
Five Pieces You Should Probably Read On: Fake News and Filter Bubbles https://ensr.oii.ox.ac.uk/five-pieces-you-should-probably-read-on-fake-news-and-filter-bubbles/ Fri, 27 Jan 2017 10:08:39 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3940 This is the second post in a series that will uncover great writing by faculty and students at the Oxford Internet Institute, things you should probably know, and things that deserve to be brought out for another viewing. This week: Fake News and Filter Bubbles!

Fake news, post-truth, “alternative facts”, filter bubbles — this is the news and media environment we apparently now inhabit, and that has formed the fabric and backdrop of Brexit (“£350 million a week”) and Trump (“This was the largest audience to ever witness an inauguration — period”). Do social media divide us, hide us from each other? Are you particularly aware of what content is personalised for you, what it is you’re not seeing? How much can we do with machine-automated or crowd-sourced verification of facts? And are things really any worse now than when Bacon complained in 1620 about the false notions that “are now in possession of the human understanding, and have taken deep root therein”?

 

1. Bernie Hogan: How Facebook divides us [Times Literary Supplement]

27 October 2016 / 1000 words / 5 minutes

“Filter bubbles can create an increasingly fractured population, such as the one developing in America. For the many people shocked by the result of the British EU referendum, we can also partially blame filter bubbles: Facebook literally filters our friends’ views that are least palatable to us, yielding a doctored account of their personalities.”

Bernie Hogan says it’s time Facebook considered ways to use the information it has about us to bring us together across political, ideological and cultural lines, rather than hide us from each other or push us into polarized and hostile camps. He says it’s not only possible for Facebook to help mitigate the issues of filter bubbles and context collapse; it’s imperative, and it’s surprisingly simple.

 

2. Luciano Floridi: Fake news and a 400-year-old problem: we need to resolve the ‘post-truth’ crisis [the Guardian]

29 November 2016 / 1000 words / 5 minutes

“The internet age made big promises to us: a new period of hope and opportunity, connection and empathy, expression and democracy. Yet the digital medium has aged badly because we allowed it to grow chaotically and carelessly, lowering our guard against the deterioration and pollution of our infosphere. […] some of the costs of misinformation may be hard to reverse, especially when confidence and trust are undermined. The tech industry can and must do better to ensure the internet meets its potential to support individuals’ wellbeing and social good.”

The Internet echo chamber satiates our appetite for pleasant lies and reassuring falsehoods, and has become the defining challenge of the 21st century, says Luciano Floridi. So far, the strategy for technology companies has been to deal with the ethical impact of their products retrospectively, but this is not good enough, he says. We need to shape and guide the future of the digital, and stop making it up as we go along. It is time to work on an innovative blueprint for a better kind of infosphere.

 

3. Philip Howard: Facebook and Twitter’s real sin goes beyond spreading fake news

3 January 2017 / 1000 words / 5 minutes

“With the data at their disposal and the platforms they maintain, social media companies could raise standards for civility by refusing to accept ad revenue for placing fake news. They could let others audit and understand the algorithms that determine who sees what on a platform. Just as important, they could be the platforms for doing better opinion, exit and deliberative polling.”

Only Facebook and Twitter know how pervasive fabricated news stories and misinformation campaigns have become during referendums and elections, says Philip Howard — and allowing fake news and computational propaganda to target specific voters is an act against democratic values. But in a time of weakening polling systems, withholding data about public opinion is actually their major crime against democracy, he says.

 

4. Brent Mittelstadt: Should there be a better accounting of the algorithms that choose our news for us?

7 December 2016 / 1800 words / 8 minutes

“Transparency is often treated as the solution, but merely opening up algorithms to public and individual scrutiny will not in itself solve the problem. Information about the functionality and effects of personalisation must be meaningful to users if anything is going to be accomplished. At a minimum, users of personalisation systems should be given more information about their blind spots, about the types of information they are not seeing, or where they lie on the map of values or criteria used by the system to tailor content to users.”

A central ideal of democracy is that political discourse should allow a fair and critical exchange of ideas and values. But political discourse is unavoidably mediated by the mechanisms and technologies we use to communicate and receive information, says Brent Mittelstadt. And content personalization systems and the algorithms they rely upon create a new type of curated media that can undermine the fairness and quality of political discourse.

 

5. Heather Ford: Verification of crowd-sourced information: is this ‘crowd wisdom’ or machine wisdom?

19 November 2013 / 1400 words / 6 minutes

“A key question being asked in the design of future verification mechanisms is the extent to which verification work should be done by humans or non-humans (machines). Here, verification is not a binary categorisation, but rather there is a spectrum between human and non-human verification work, and indeed, projects like Ushahidi, Wikipedia and Galaxy Zoo have all developed different verification mechanisms.”

‘Human’ verification, a process of checking whether a particular report meets a group’s truth standards, is an acutely social process, says Heather Ford. If code is law and if other aspects in addition to code determine how we can act in the world, it is important that we understand the context in which code is deployed. Verification is a practice that determines how we can trust information coming from a variety of sources — only by illuminating such practices and the variety of impacts that code can have in different environments can we begin to understand how code regulates our actions in crowdsourcing environments.

 

.. and just to prove we’re capable of understanding and acknowledging and assimilating multiple viewpoints on complex things, here’s Helen Margetts, with a different slant on filter bubbles: “Even if political echo chambers were as efficient as some seem to think, there is little evidence that this is what actually shapes election results. After all, by definition echo chambers preach to the converted. It is the undecided people who (for example) the Leave and Trump campaigns needed to reach. And from the research, it looks like they managed to do just that.”

 

The Authors

Bernie Hogan is a Research Fellow at the OII; his research interests lie at the intersection of social networks and media convergence.

Luciano Floridi is the OII’s Professor of Philosophy and Ethics of Information. His research areas are the philosophy of information, information and computer ethics, and the philosophy of technology.

Philip Howard is the OII’s Professor of Internet Studies. He investigates the impact of digital media on political life around the world.

Brent Mittelstadt is an OII Postdoc. His research interests include the ethics of information handled by medical ICT, theoretical developments in discourse and virtue ethics, and the epistemology of information.

Heather Ford completed her doctorate at the OII, where she studied how Wikipedia editors write history as it happens. She is now a University Academic Fellow in Digital Methods at the University of Leeds. Her forthcoming book “Fact Factories: Wikipedia’s Quest for the Sum of All Human Knowledge” will be published by MIT Press.

Helen Margetts is the OII’s Director, and Professor of Society and the Internet. She specialises in digital era government, politics and public policy, and data science and experimental methods. Her most recent book is Political Turbulence (Princeton).

 

Coming up! .. It’s the economy, stupid / Augmented reality and ambient fun / The platform economy / Power and development / Internet past and future / Government / Labour rights / The disconnected / Ethics / Staying critical

]]>
Five Pieces You Should Probably Read On: The US Election https://ensr.oii.ox.ac.uk/five-pieces-you-should-probably-read-on-the-us-election/ Fri, 20 Jan 2017 12:22:18 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3927 This is the first post in a series that will uncover great writing by faculty and students at the Oxford Internet Institute, things you should probably know, and things that deserve to be brought out for another viewing. This week: The US Election.

This was probably the nastiest Presidential election in recent memory: awash with Twitter bots and scandal, polarisation and filter bubbles, accusations of interference by Russia and the Director of the FBI, and another shock result. We have written about electoral prediction elsewhere: instead, here are five pieces that consider the interaction of social media and democracy — the problems, but also potential ways forward.

 

1. James Williams: The Clickbait Candidate

10 October 2016 / 2700 words / 13 minutes

“Trump is very straightforwardly an embodiment of the dynamics of clickbait: he is the logical product (though not endpoint) in the political domain of a media environment designed to invite, and indeed incentivize, relentless competition for our attention […] Like clickbait or outrage cascades, Donald Trump is merely the sort of informational packet our media environment is designed to select for.”

James Williams says that now is probably the time to have that societal conversation about the design ethics of the attention economy — because in our current media environment, attention trumps everything.

 

2. Sam Woolley, Philip Howard: Bots Unite to Automate the Presidential Election [Wired]

15 May 2016 / 850 words / 4 minutes

“Donald Trump understands minority communities. Just ask Pepe Luis Lopez, Francisco Palma, and Alberto Contreras […] each tweeted in support of Trump after his victory in the Nevada caucuses earlier this year. The problem is, Pepe, Francisco, and Alberto aren’t people. They’re bots.”

It’s no surprise that automated spam accounts (or bots) are creeping into election politics, say Sam Woolley and Philip Howard. Demanding bot transparency would at least help clean up social media — which, for better or worse, is increasingly where presidents get elected.

 

3. Phil Howard: Is Social Media Killing Democracy?

15 November 2016 / 1100 words / 5 minutes

“This is the big year for computational propaganda — using immense data sets to manipulate public opinion over social media. Both the Brexit referendum and US election have revealed the limits of modern democracy, and social media platforms are currently setting those limits […] these technologies permit too much fake news, encourage our herding instincts, and aren’t expected to provide public goods.”

Phil Howard discusses ways to address fake news, audit social algorithms, and deal with social media’s “moral pass” — social media is damaging democracy, he says, but can also be used to save it.

 

4. Helen Margetts: Don’t Shoot the Messenger! What part did social media play in 2016 US election?

15 November 2016 / 600 words / 3 minutes

“Rather than seeing social media solely as the means by which Trump ensnared his presidential goal, we should appreciate how they can provide a wealth of valuable data to understand the anger and despair that the polls missed, and to analyse political behaviour and opinion in the times ahead.”

New social information and visibility brings change to social behaviour, says Helen Margetts — ushering in political turbulence and unpredictability. Social media made visible what could have remained a country’s dark secret (hatred of women, rampant racism, etc.), but it will also underpin any radical counter-movement that emerges in the future.

 

5. Helen Margetts: Of course social media is transforming politics. But it’s not to blame for Brexit and Trump

9 January 2017 / 1700 words / 8 minutes

“Even if political echo chambers were as efficient as some seem to think, there is little evidence that this is what actually shapes election results. After all, by definition echo chambers preach to the converted. It is the undecided people who (for example) the Leave and Trump campaigns needed to reach. And from the research, it looks like they managed to do just that.”

Politics is a lot messier in the social media era than it used to be, says Helen Margetts, but rather than blaming social media for undermining democracy, we should be thinking about how we can improve the (inevitably major) part that it plays.

 

The Authors

James Williams is an OII doctoral candidate, studying the ethics of attention and persuasion in technology design.

Sam Woolley is a Research Assistant on the OII’s Computational Propaganda project; he is interested in political bots, and the intersection of political communication and automation.

Philip Howard is the OII’s Professor of Internet Studies and PI of the Computational Propaganda project. He investigates the impact of digital media on political life around the world.

Helen Margetts is the OII’s Director, and Professor of Society and the Internet. She specialises in digital era government, politics and public policy, and data science and experimental methods. Her most recent book is Political Turbulence (Princeton).

 

Coming up .. Fake news and filter bubbles / It’s the economy, stupid / Augmented reality and ambient fun / The platform economy / Power and development / Internet past and future / Government / Labour rights / The disconnected / Ethics / Staying critical

#5OIIPieces

]]>
Should there be a better accounting of the algorithms that choose our news for us? https://ensr.oii.ox.ac.uk/should-there-be-a-better-accounting-of-the-algorithms-that-choose-our-news-for-us/ Wed, 07 Dec 2016 14:44:31 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3875 A central ideal of democracy is that political discourse should allow a fair and critical exchange of ideas and values. But political discourse is unavoidably mediated by the mechanisms and technologies we use to communicate and receive information — and content personalization systems (think search engines, social media feeds and targeted advertising), and the algorithms they rely upon, create a new type of curated media that can undermine the fairness and quality of political discourse.

A new article by Brent Mittelstadt explores the challenges of enforcing a political right to transparency in content personalization systems. First, he explains the value of transparency to political discourse and suggests how content personalization systems undermine the open exchange of ideas and evidence among participants: at a minimum, personalization systems can undermine political discourse by curbing the diversity of ideas that participants encounter. Second, he explores work on the detection of discrimination in algorithmic decision making, including techniques of algorithmic auditing that service providers can employ to detect political bias. Third, he identifies several factors that inhibit auditing and thus indicate reasonable limitations on the ethical duties incurred by service providers — content personalization systems can function opaquely and be resistant to auditing because of poor accessibility and interpretability of their decision-making frameworks. Finally, Brent concludes with reflections on the need for regulation of content personalization systems.

He notes that no matter how auditing is pursued, standards to detect evidence of political bias in personalized content are urgently required. Methods are needed to routinely and consistently assign political value labels to content delivered by personalization systems. This is perhaps the most pressing area for future work—to develop practical methods for algorithmic auditing.

The right to transparency in political discourse may seem unusual and farfetched. However, standards already set by the U.S. Federal Communication Commission’s fairness doctrine — no longer in force — and the British Broadcasting Corporation’s fairness principle both demonstrate the importance of the idealized version of political discourse described here. Both precedents promote balance in public political discourse by setting standards for delivery of politically relevant content. Whether it is appropriate to hold service providers that use content personalization systems to a similar standard remains a crucial question.

Read the full article: Mittelstadt, B. (2016) Auditing for Transparency in Content Personalization Systems. International Journal of Communication 10(2016), 4991–5002.

We caught up with Brent to explore the broader implications of the study:

Ed: We basically accept that the tabloids will be filled with gross bias, populism and lies (in order to sell copy) — and editorial decisions are not generally transparent to us. In terms of their impact on the democratic process, what is the difference between the editorial boardroom and a personalising social media algorithm?

Brent: There are a number of differences. First, although not necessarily transparent to the public, one hopes that editorial boardrooms are at least transparent to those within the news organisations. Editors can discuss and debate the tone and factual accuracy of their stories, explain their reasoning to one another, reflect upon the impact of their decisions on their readers, and generally have a fair debate about the merits and weaknesses of particular content.

This is not the case for a personalising social media algorithm; those working with the algorithm inside a social media company are often unable to explain why the algorithm is functioning in a particular way, or determined a particular story or topic to be ‘trending’ or displayed to particular users, while others are not. It is also far more difficult to ‘fact check’ algorithmically curated news; a news item can be widely disseminated merely by many users posting or interacting with it, without any purposeful dissemination or fact checking by the platform provider.

Another big difference is the degree to which users can be aware of the bias of the stories they are reading. Whereas a reader of The Daily Mail or The Guardian will have some idea of the values of the paper, the same cannot be said of platforms offering algorithmically curated news and information. The platform can be neutral insofar as it disseminates news items and information reflecting a range of values and political viewpoints. A user will encounter items reflecting her particular values (or, more accurately, her history of interactions with the platform and the values inferred from them), but these values, and their impact on her exposure to alternative viewpoints, may not be apparent to the user.

Ed: And how is content “personalisation” different to content filtering (e.g. as we see with the Great Firewall of China) that people get very worked up about? Should we be more worried about personalisation?

Brent: Personalisation and filtering are essentially the same mechanism; information is tailored to a user or users according to some prevailing criteria. One difference is whether content is merely infeasible to access, or technically inaccessible. Content of all types will typically still be accessible in principle when personalisation is used, but the user will have to make an effort to access content that is not recommended or otherwise given special attention. Filtering systems, in contrast, will impose technical measures to make particular content inaccessible from a particular device or geographical area.

Another difference is the source of the criteria used to set the visibility of different types of content. In the case of personalisation, these criteria are typically based on the users (inferred) interests, values, past behaviours and explicit requests. Critically, these values are not necessarily apparent to the user. For filtering, criteria are typically externally determined by a third party, often a government. Some types of information are set off limits, according to the prevailing values of the third party. It is the imposition of external values, which limit the capacity of users to access content of their choosing, which often causes an outcry against filtering and censorship.

Importantly, the two mechanisms do not necessarily differ in terms of the transparency of the limiting factors or rules to users. In some cases, such as the recently proposed ban in the UK of adult websites that do not provide meaningful age verification mechanisms, the criteria that determine whether sites are off limits will be publicly known at a general level. In other cases, and especially with personalisation, the user inside the ‘filter bubble’ will be unaware of the rules that determine whether content is (in)accessible. And it is not always the case that the platform provider intentionally keeps these rules secret. Rather, the personalisation algorithms and background analytics that determine the rules can be too complex, inaccessible or poorly understood even by the provider to give the user any meaningful insight.

Ed: Where are these algorithms developed: are they basically all proprietary? i.e. how would you gain oversight of massively valuable and commercially sensitive intellectual property?

Brent: Personalisation algorithms tend to be proprietary, and thus are not normally open to public scrutiny in any meaningful sense. In one sense this is understandable; personalisation algorithms are valuable intellectual property. At the same time the lack of transparency is a problem, as personalisation fundamentally affects how users encounter and digest information on any number of topics. As recently argued, it may be the case that personalisation of news impacts on political and democratic processes. Existing regulatory mechanisms have not been successful in opening up the ‘black box’ so to speak.

It can be argued, however, that legal requirements should be adopted to require these algorithms to be open to public scrutiny due to the fundamental way they shape our consumption of news and information. Oversight can take a number of forms. As I argue in the article, algorithmic auditing is one promising route, performed both internally by the companies themselves, and externally by a government agency or researchers. A good starting point would be for the companies developing and deploying these algorithms to extend their cooperation with researchers, thereby allowing a third party to examine the effects these systems are having on political discourse, and society more broadly.

Ed: By “algorithm audit” — do you mean examining the code and inferring what the outcome might be in terms of bias, or checking the outcome (presumably statistically) and inferring that the algorithm must be introducing bias somewhere? And is it even possible to meaningfully audit personalisation algorithms, when they might rely on vast amounts of unpredictable user feedback to train the system?

Brent: Algorithm auditing can mean both of these things, and more. Audit studies are a tool already in use, whereby human participants introduce different inputs into a system, and examine the effect on the system’s outputs. Similar methods have long been used to detect discriminatory hiring practices, for instance. Code audits are another possibility, but are generally prohibitive due to problems of access and complexity. Also, even if you can access and understand the code of an algorithm, that tells you little about how the algorithm performs in practice when given certain input data. Both the algorithm and input data would need to be audited.

Alternatively, auditing can assess just the outputs of the algorithm; recent work to design mechanisms to detect disparate impact and discrimination, particularly in the Fairness, Accountability and Transparency in Machine Learning (FAT-ML) community, is a great example of this type of auditing. Algorithms can also be designed to attempt to prevent or detect discrimination and other harms as they occur. These methods are as much about the operation of the algorithm, as they are about the nature of the training and input data, which may itself be biased. In short, auditing is very difficult, but there are promising avenues of research and development. Once we have reliable auditing methods, the next major challenge will be to tailor them to specific sectors; a one-size-meets-all approach to auditing is not on the cards.

Ed: Do you think this is a real problem for our democracy? And what is the solution if so?

Brent: It’s difficult to say, in part because access and data to study the effects of personalisation systems are hard to come by. It is one thing to prove that personalisation is occurring on a particular platform, or to show that users are systematically displayed content reflecting a narrow range of values or interests. It is quite another to prove that these effects are having an overall harmful effect on democracy. Digesting information is one of the most basic elements of social and political life, so any mechanism that fundamentally changes how information is encountered should be subject to serious and sustained scrutiny.

Assuming personalisation actually harms democracy or political discourse, mitigating its effects is quite a different issue. Transparency is often treated as the solution, but merely opening up algorithms to public and individual scrutiny will not in itself solve the problem. Information about the functionality and effects of personalisation must be meaningful to users if anything is going to be accomplished.

At a minimum, users of personalisation systems should be given more information about their blind spots, about the types of information they are not seeing, or where they lie on the map of values or criteria used by the system to tailor content to users. A promising step would be proactively giving the user some idea of what the system thinks it knows about them, or how they are being classified or profiled, without the user first needing to ask.


Brent Mittelstadt was talking to blog editor David Sutcliffe.

Can we predict electoral outcomes from Wikipedia traffic? https://ensr.oii.ox.ac.uk/can-we-predict-electoral-outcomes-from-wikipedia-traffic/ Tue, 06 Dec 2016 15:34:31 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3881 As digital technologies become increasingly integrated into the fabric of social life their ability to generate large amounts of information about the opinions and activities of the population increases. The opportunities in this area are enormous: predictions based on socially generated data are much cheaper than conventional opinion polling, offer the potential to avoid classic biases inherent in asking people to report their opinions and behaviour, and can deliver results much quicker and be updated more rapidly.

In their article published in EPJ Data Science, Taha Yasseri and Jonathan Bright develop a theoretically informed prediction of election results from socially generated data, combined with an understanding of the social processes through which the data are generated. This allows them to explore the predictive power of socially generated data while also enhancing theory about the relationship between such data and real-world outcomes. Their particular focus is on the readership statistics of politically relevant Wikipedia articles (such as those of individual political parties) in the time period just before an election.

By applying these methods to a variety of different European countries in the context of the 2009 and 2014 European Parliament elections they firstly show that the relative change in number of page views to the general Wikipedia page on the election can offer a reasonable estimate of the relative change in election turnout at the country level. This supports the idea that increases in online information seeking at election time are driven by voters who are considering voting.

Second, they show that a theoretically informed model based on previous national results, Wikipedia page views, news media mentions, and basic information about the political party in question can offer a good prediction of the overall vote share of the party in question. Third, they present a model for predicting change in vote share (i.e., voters swinging towards and away from a party), showing that Wikipedia page-view data provide an important increase in predictive power in this context.

This relationship is exaggerated in the case of newer parties — consistent with the idea that voters don’t seek information uniformly about all parties at election time. Rather, they behave like ‘cognitive misers’, being more likely to seek information on new political parties with which they do not have previous experience and being more likely to seek information only when they are actually changing the way they vote.

In contrast, there was no evidence of a ‘media effect’: there was little correlation between news media mentions and overall Wikipedia traffic patterns. Indeed, the news media and Wikipedia appeared to be biased towards different things: with the news favouring incumbent parties, and Wikipedia favouring new ones.
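
To make the shape of the vote-share model described above concrete, the sketch below fits a toy ordinary-least-squares regression of vote share on a party’s previous result, the relative change in views of its Wikipedia page, its (logged) news media mentions, and a new-party indicator. The numbers and variable choices are invented for illustration only and are not the authors’ data or specification.

    import numpy as np

    # One toy row per party: previous vote share (%), relative change in Wikipedia
    # page views before the election, log news media mentions, new-party flag.
    X = np.array([
        [32.0, 0.10, 5.2, 0],
        [28.0, 0.05, 5.0, 0],
        [12.0, 0.60, 3.1, 1],
        [ 9.0, 0.40, 2.8, 1],
        [ 6.0, 0.02, 2.0, 0],
        [ 4.0, 0.25, 1.5, 1],
    ])
    y = np.array([30.5, 27.0, 16.0, 11.0, 5.5, 6.0])  # observed vote share (%)

    # Ordinary least squares with an intercept term.
    X1 = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    print("intercept and coefficients:", coef)

    # Predicted vote share for a hypothetical further party.
    party = np.array([1.0, 10.0, 0.8, 3.0, 1.0])
    print("predicted vote share:", party @ coef)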

Read the full article: Yasseri, T. and Bright, J. (2016) Wikipedia traffic data and electoral prediction: towards theoretically informed models. EPJ Data Science. 5 (1).

We caught up with the authors to explore the implications of the work.

Ed: Wikipedia represents a vast amount of not just content, but also user behaviour data. How did you access the page view stats — but also: is anyone building dynamic visualisations of Wikipedia data in real time?

Taha and Jonathan: Wikipedia makes its page view data available for free (in the same way as it makes all of its information available!). You can find the data here, along with some visualisations.
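
For anyone who wants to experiment with this, a minimal sketch of pulling daily page-view counts from Wikimedia’s public Pageviews REST API is shown below. The endpoint layout and field names are given from memory and should be checked against the current API documentation; note also that this API only covers roughly mid-2015 onwards, so earlier elections require the raw page-view dumps. The article title and date range are arbitrary examples.

    import requests

    article = "European_Parliament"
    url = (
        "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
        "en.wikipedia/all-access/user/" + article + "/daily/20190501/20190531"
    )
    resp = requests.get(url, headers={"User-Agent": "pageview-demo (research use)"})
    resp.raise_for_status()
    for item in resp.json()["items"]:
        print(item["timestamp"], item["views"])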

Ed: Why did you use Wikipedia data to examine election prediction rather than (the, I suppose, more fashionable) Twitter? How do they compare as data sources?

Taha and Jonathan: One of the big problems with using Twitter to predict things like elections is that contributing on social media is a very public thing and people are quite conscious of this. For example, some parties are seen as unfashionable so people might not make their voting choice explicit. Hence overall social media might seem to be saying one thing whereas actually people are thinking another.

By contrast, looking for information online on a website like Wikipedia is an essentially private activity so there aren’t these social biases. In other words, on Wikipedia we have direct access to transactional data on what people do, rather than what they say or prefer to say.

Ed: How did these results and findings compare with the social media analysis done as part of our UK General Election 2015 Election Night Data Hack? (long title..)

Taha and Jonathan: The GE2015 data hack looked at individual politicians. We found that having a Wikipedia page is becoming increasingly important — over 40% of Labour and Conservative Party candidates had an individual Wikipedia page. We also found that this was highly correlated with Twitter presence — being more active on one network also made you more likely to be active on the other one. And we found some initial evidence that social media reaction was correlated with votes, though there is a lot more work to do here!

Ed: Can you see digital social data analysis replacing (or maybe just complementing) opinion polling in any meaningful way? And what problems would need to be addressed before that happened: e.g. around representative sampling, data cleaning, and weeding out bots?

Taha and Jonathan: Most political pundits are starting to look at a range of indicators of popularity — for example, not just voting intention, but also ratings of leadership competence, economic performance, etc. We can see good potential for social data to become part of this range of popularity indicators. However we don’t think it will replace polling just yet; the use of social media is limited to certain demographics. Also, the data collected from social media are often very shallow, not allowing for validation. In the case of Wikipedia, for example, we only know how many times each page is viewed, but we don’t know by how many people and from where.

Ed: You do a lot of research with Wikipedia data — has that made you reflect on your own use of Wikipedia?

Taha and Jonathan: It’s interesting to think about this activity of getting direct information about politicians — it’s essentially a new activity, something you couldn’t do in the pre-digital age. I know that I personally [Jonathan] use it to find out things about politicians and political parties — it would be interesting to know more about why other people are using it as well. This could have a lot of impacts. One thing Wikipedia has is a really long memory, in a way that other means of getting information on politicians (such as newspapers) perhaps don’t. We could start to see this type of thing becoming more important in electoral politics.

[Taha]: Since my research has been mostly focused on Wikipedia edit wars between human and bot editors, I have naturally become more cautious about the information I find on Wikipedia. When it comes to sensitive topics, such as politics, Wikipedia is a good point to start, but not a great point to end the search!


Taha Yasseri and Jonathan Bright were talking to blog editor David Sutcliffe.

Edit wars! Examining networks of negative social interaction https://ensr.oii.ox.ac.uk/edit-wars-examining-networks-of-negative-social-interaction/ Fri, 04 Nov 2016 10:05:06 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3893
Network of all reverts done in the English language Wikipedia within one day (January 15, 2010). Read the full article for details.
While network science has significantly advanced our understanding of the structure and dynamics of the human social fabric, much of the research has focused on positive relations and interactions such as friendship and collaboration. Considerably less is known about networks of negative social interactions such as distrust, disapproval, and disagreement. While these interactions are less common, they strongly affect people’s psychological well-being, physical health, and work performance.

Negative interactions are also rarely explicitly declared and recorded, making them hard for scientists to study. In their new article on the structural and temporal features of negative interactions in the community, Milena Tsvetkova, Ruth García-Gavilanes and Taha Yasseri use complex network methods to analyze patterns in the timing and configuration of reverts of article edits to Wikipedia. In large online collaboration communities like Wikipedia, users sometimes undo or downrate contributions made by other users; most often to maintain and improve the collaborative project. However, it is also possible that these actions are social in nature, with previous research acknowledging that they could also imply negative social interactions.

The authors find evidence that Wikipedia editors systematically revert the same person, revert back their reverter, and come to defend a reverted editor. However, they don’t find evidence that editors “pay forward” a revert, coordinate with others to revert an editor, or revert different editors serially. These interactions can be related to the status of the editors. Even though the individual reverts might not necessarily be negative social interactions, their analysis points to the existence of certain patterns of negative social dynamics within the editorial community. Some of these patterns have not been previously explored and certainly carry implications for Wikipedia’s own knowledge collection practices — and can also be applied to other large-scale collaboration networks to identify the existence of negative social interactions.
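
As a small illustration of the kinds of patterns involved, the sketch below uses the networkx library to build a directed multigraph of made-up revert events and counts two of the motifs mentioned above: repeated reverts of the same person, and reciprocated reverts (reverting back one’s reverter). It is not the authors’ code or data.

    import networkx as nx

    # Hypothetical revert events: (reverting editor, reverted editor, timestamp).
    reverts = [
        ("alice", "bob", 1), ("bob", "alice", 2), ("alice", "bob", 3),
        ("carol", "bob", 4), ("dave", "carol", 5), ("carol", "dave", 6),
    ]

    G = nx.MultiDiGraph()
    for source, target, t in reverts:
        G.add_edge(source, target, time=t)

    # Directed pairs where one editor repeatedly reverts the same person.
    repeated = sum(1 for u, v in set(G.edges()) if G.number_of_edges(u, v) > 1)

    # Reciprocated pairs: u reverted v and v also reverted u at some point.
    reciprocated = sum(1 for u, v in set(G.edges()) if u < v and G.has_edge(v, u))

    print("editors:", G.number_of_nodes(), "reverts:", G.number_of_edges())
    print("pairs with repeated reverts:", repeated)
    print("reciprocated revert pairs:", reciprocated)

Scaled up to the full revert history and compared against a suitable null model, counts like these are the raw material for claims about systematic patterns of negative interaction.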

Read the full article: Milena Tsvetkova, Ruth García-Gavilanes and Taha Yasseri (2016) Dynamics of Disagreement: Large-Scale Temporal Network Analysis Reveals Negative Interactions in Online Collaboration. Scientific Reports 6 doi:10.1038/srep36333

We caught up with the authors to explore the implications of the work.

Ed: You find that certain types of negative social interactions and status considerations interfere with knowledge production on Wikipedia. What could or should Wikipedia do about it — or is it not actually a significant problem?

Taha: We believe it is an issue to consider. While the Wikipedia community might not be able to directly cope with it, as negative social interactions are intrinsic to human societies, an important consequence of our report would be to use the information in Wikipedia articles with extra care — and also to bear in mind that Wikipedia content might carry more subjectivity compared to a professionally written encyclopaedia.

Ed: Does reverting behaviour correlate with higher quality articles (i.e. with a lot of editorial attention) or simply with controversial topics — i.e. do you see reverting behaviour as generally a positive or negative thing?

Taha: In a different project we looked at the correlation between controversy and quality. We observed that controversy, up to a certain level, is correlated with higher quality of the article, specifically as far as the completeness of the article is concerned. However, articles with very high controversy scores started to show lower quality. In short, a certain amount of controversy helps the articles to become more complete, but too much controversy is a bad sign.

Ed: Do you think your results say more about the structure of Wikipedia, the structure of crowds, or about individuals?

Taha: Our results shed light on some of the most fundamental patterns in human behavior. It is one of the few examples in which a large dataset of negative interactions is analysed and the dynamics of negativity are studied. In this sense, this article is more about human behavior in interaction with other community members in a collaborative environment. However, because our data come from Wikipedia, I believe there are also lessons to be learnt about Wikipedia itself.

Ed: You note that by focusing on the macro-level you miss the nuanced understanding that thick ethnographic descriptions can produce. How common is it for computational social scientists to work with ethnographers? What would you look at if you were to work with ethnographers on this project?

Taha: One of the drawbacks of big data analysis in computational social science is the limited depth of the analysis. We lack any demographic information about the individuals that we study. We can draw conclusions about the community of Wikipedia editors in a certain language, but that is by no means specific enough. An ethnographic approach, which would benefit our research tremendously, would go deeper in analyzing individuals and studying the features and attributes which lead to certain behavior. For example, we report, at a high level, that “status” determines editors’ actions to a good extent, but of course the mechanisms behind this observation can only be explained through ethnographic analysis.

Ed: I guess Wikipedia (whether or not unfairly) is commonly associated with edit wars — while obviously also being a gigantic success: how about other successful collaborative platforms — how does Wikipedia differ from Zooniverse, for example?

Taha: There is no doubt that Wikipedia is a huge success and probably the largest collaborative project in the history of mankind. Our research mostly focuses on its dark side, but it does not question its success and value. Compared to other collaborative projects, such as Zooniverse, the main difference is in the management model. Wikipedia is managed and run by the community of editors; very little top-down management is employed. In Zooniverse, by contrast, the overall structure of the project is designed by a few researchers and the crowd can only act within a pre-determined framework. For more comparisons of this sort, I suggest looking at our HUMANE project, in which we provide a typology and comparison of a wide range of Human-Machine Networks.

Ed: Finally — do you edit Wikipedia? And have you been on the receiving end of reverts yourself?

Taha: I used to edit Wikipedia much more. And naturally I have had my own share of reverts, at both ends!


Taha Yasseri was talking to blog editor David Sutcliffe.

Is internet gaming as addictive as gambling? (no, suggests a new study) https://ensr.oii.ox.ac.uk/is-internet-gaming-as-addictive-as-gambling-no-suggests-a-new-study/ Fri, 04 Nov 2016 09:43:50 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3842 New research by Andrew Przybylski (OII, Oxford University), Netta Weinstein (Cardiff University), and Kou Murayama (Reading University) published today in the American Journal of Psychiatry suggests that very few of those who play internet-based video games have symptoms suggesting they may be addicted. The article also says that gaming, though popular, is unlikely to be as addictive as gambling. Two years ago the APA identified a critical need for good research to look into whether internet gamers run a risk of becoming addicted and asked how such an addiction might be diagnosed properly. To the authors’ knowledge, these are the first findings from a large-scale project to produce robust evidence on the potential new problem of “internet gaming disorder”.

The authors surveyed 19,000 men and women from nationally representative samples from the UK, the United States, Canada and Germany, with over half saying they had played internet games recently. Out of the total sample, 1% of young adults (18-24 year olds) and 0.5% of the general population (aged 18 or older) reported symptoms linking play to possible addictive behaviour — less than half of recently reported rates for gambling.

They warn that researchers studying the potential “darker sides” of Internet-based games must be cautious. Extrapolating from their data, as many as a million American adults might meet the proposed DSM-5 criteria for addiction to online games — representing a large cohort of people struggling with what could be clinically dysregulated behavior. However, because the authors found no evidence supporting a clear link to clinical outcomes, they warn that more evidence for clinical and behavioral effects is needed before concluding that this is a legitimate candidate for inclusion in future revisions of the DSM. If adopted, Internet gaming disorder would vie for limited therapeutic resources with a range of serious psychiatric disorders.

Read the full article: Andrew K. Przybylski, Netta Weinstein, Kou Murayama (2016) Internet Gaming Disorder: Investigating the Clinical Relevance of a New Phenomenon. American Journal of Psychiatry. Published online: November 04, 2016.

We caught up with Andy to explore the broader implications of the study:

Ed.: Is “gaming addiction” or “Internet addiction” really a thing? e.g. is it something dreamed up by politicians / media people, or is it something that has been discussed and reported by psychiatrists and GPs on the ground?

Andy: Although internet addiction started as a joke about the pathologizing of everyday behaviours, popular fears have put it on the map for policymakers and researchers. In other words, thinking about potential disorders linked to the internet, gaming, and technology have taken on a life of their own.

Ed.: Two years ago the APA identified “a critical need for good research to look into whether internet gamers run a risk of becoming addicted” and asked how such an addiction might be diagnosed properly (i.e. using a checklist of symptoms). What other work or discussion has come out of that call?

Andy: In recent years two groups of researchers have emerged, one arguing there is an international consensus about the potential disorder based on the checklist, the second arguing that it is problematic to pathologize internet gaming. This second group says we don’t understand enough about gaming to know if it’s any different from other hobbies, like being a sports fan. They’re concerned that it could lead other activities to be classified as pathological. Our study set out to test if the checklist approach works, a rigorous test of the APA call for research using the symptoms proposed.

Ed.: Do fears (whether founded or not) of addiction overlap at all with fears of violent video games perhaps altering players’ behaviour? Or are they very clearly discussed and understood as very separate issues?

Andy: Although the fears do converge, the evidence does not. There is a general view that some people might be more liable to be influenced by the addictive or violent aspects of gaming but this remains an untested assumption. In both areas the quality of the evidence base needs critical improvement before the work is valuable for policymakers and mental health professionals.

Ed.: And what’s the broad landscape like in this area – i.e. who are the main players, stakeholders, and pressure points?

Andy: In addition to the American Psychiatric Association (DSM-5), the World Health Organisation is considering formalising Gaming Disorder as a potential mental health issue in the next revision of the International Classification of Diseases (ICD). There is a movement among researchers (myself included, based on this research) to urge caution in rushing to create a new behavioural addiction based on gaming for the ICD-11. It is likely that including gaming addiction will do more harm than good by confusing an already complex and underdeveloped research area.

Ed.: And lastly: asking the researcher – do we have enough data and analysis to be able to discuss this sensibly and scientifically? What would a “definitive answer” to this question look like to you — and is it achievable?

Andy: The most important thing to understand about this research area is that there is very little high quality evidence. Generally speaking there are two kinds of empirical studies in the social and clinical sciences: exploratory studies and confirmatory ones. Most of the evidence about gaming addiction to date is exploratory; that is, the analyses reported represent what ‘sticks to the wall’ after the data are collected. This isn’t good evidence for health policy.

Our studies represent the first confirmatory research on gaming addiction. We pre-registered how we were going to collect and analyse our data before we saw it. We collected large representative samples and tested a priori hypotheses. This makes a big difference in the kinds of inferences you can draw and the value of the work to policymakers. We hope our work represents the first of many studies on technology effects that put open data, open code, and pre-registered analysis plans at the centre of science in this area. Until the research field adopts these high standards we will not have accurate definitive answers about Internet Gaming Disorder.


Read the full article: Andrew K. Przybylski, Netta Weinstein, Kou Murayama (2016) Internet Gaming Disorder: Investigating the Clinical Relevance of a New Phenomenon. American Journal of Psychiatry. Published online: November 04, 2016.

Andy was talking to David Sutcliffe, Managing Editor of the Policy blog.

The economic expectations and potentials of broadband Internet in East Africa https://ensr.oii.ox.ac.uk/the-economic-expectations-and-potentials-of-broadband-internet-in-east-africa/ Thu, 13 Mar 2014 09:39:58 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2603 Ed: There has been a lot of excitement about the potential of increased connectivity in the region: where did this come from? And what sort of benefits were promised?

Chris: Yes, at the end of the 2000s when the first fibre cables landed in East Africa, there was much anticipation about what this new connectivity would mean for the region. I remember I was in Tanzania at the time, and people were very excited about this development – being tired of the slow and expensive satellite connections where even simple websites could take a minute to load. The perception, both in the international press and from East African politicians, was that the cables would be a game changer. Firms would be able to market and sell more directly to customers and reduce inefficient ‘intermediaries’. Connectivity would allow new types of digital-driven business, and it would provide opportunity for small and medium firms to become part of the global economy. We wanted to revisit this discussion. Were firms adopting the Internet as it became cheaper? Had this new connectivity had the effects that were anticipated, or was it purely hype?

Ed:  So what is the current level and quality of broadband access in Rwanda? ie how connected are people on the ground?

Chris: Internet access has greatly improved over the past few years, and the costs of bandwidth have declined markedly. The government has installed a ‘backbone’ fibre network and in the private sector there has also been a growth in the number of firms providing Internet service. There are still some problems though. Prices are still quite high, particularly for dedicated broadband connections, and in the industries we looked at (tea and tourism) many firms couldn’t afford it. Secondly, we heard a lot of complaints that lower bandwidth connections – WiMax and mobile internet – are unreliable and become saturated at peak times. So, Rwanda has come a long way, but we expect there will be more improvements in the future.

Ed: How much impact has the Internet had on Rwanda’s economy generally? And who is it actually helping, if so?

Chris: Economists at the World Bank have calculated that in developing economies a 10% improvement in Internet access leads to an increase in growth of 1.3%, so the effects should be taken seriously. In Rwanda, it’s too early to concretely see the effects in bottom-line economic growth. In this work we wanted to examine the effect on already established sectors to get insight into Internet adoption and use. In general, we can say that firms are increasingly adopting Internet connectivity in some form, and that firms have been able to take advantage of it and improve operations. However, it seems that wider transformational effects of connectivity have so far been limited.

Ed: And specifically in terms of the Rwandan tea and tourism industries: has the Internet had much effect?

Chris: The global tourism industry is driven by Internet use, and so tour firms, guides and hotels in Rwanda have been readily adopting it. We can see that the Internet has been beneficial, particularly for those firms coordinating tourism in Rwanda, who can better handle volumes of tourists. In the tea industry, adoption is a little lower but the Internet is used in similar ways – to coordinate the movement of tea from production to processing to selling, and this simplifies management for firms. So, connectivity has had benefits by improvements in efficiency, and this complements the fact that both sectors are looking to attract international investment and become better integrated into markets. In that sense, one can say that the growth in Internet connectivity is playing a significant role in strategies of private sector development.

Ed: The project partly focuses on value chains: ie where value is captured at different stages of a chain, leading (for example) from Rwandan tea bush to UK Tesco shelf. How have individual actors in the chain been affected? And has there been much in the way of (the often promised) disintermediation — ie are Rwandan tea farmers and tour operators now able to ‘plug directly’ into international markets?

Chris: Value chains allow us to pay more attention to who the winners (and losers) of the processes described above are, and particularly to see whether Rwandan firms that are linked into global markets benefit. One of the potential benefits originally discussed around new connectivity was that, with the growth of online channels and platforms — and through social media — firms would, as they became connected, have a more direct link to large markets and be able to disintermediate and improve the benefits they received. Generally, we can say that such disintermediation has not happened, for different reasons. In the tourism sector, many tourists are still reluctant to go directly to Rwandan tourist firms, for reasons related to trust (particularly around payment for holidays). In the tea sector, the value chains are very well established, and with just a few retailers in the end-markets, direct interaction with markets has simply not materialised. So, the hope of connectivity driving disintermediation in value chains has been limited by the market structure of both these sectors.

Ed: Is there any sense that the Internet is helping to ‘lock’ Rwanda into global markets and institutions: for example international standards organisations? And will greater transparency mean Rwanda is better able to compete in global markets, or will it just allow international actors to more efficiently exploit Rwanda’s resources — ie for the value in the chain to accrue to outsiders?

Chris: One of the core activities around the Internet that we found for both tea and tourism was firms using connectivity as a way to integrate themselves into logistic tracking, information systems, and quality and standards; whether this be automation in the tea sector or using global booking systems in the tourism sector. In one sense, this benefits Rwandan firms in that it’s crucial to improving efficiency in global markets, but it’s less clear that benefits of integration always accrue to those in Rwanda. It also moves away from the earlier ideas that connectivity would empower firms, unleashing a wave of innovation. To some of the firms we interviewed, it felt like this type of investment in the Internet was simply a way for others to better monitor, define and control every step they made, dictated by firms far away.

Ed: How do the project findings relate to (or comment on) the broader hopes of ICT4D developers? ie does ICT (magically) solve economic and market problems — and if so, who benefits?

Chris: For ICT developers looking to support development, there is often a tendency to look to build for actors who are struggling to find markets for their goods and services (such as apps linking buyers and producers, or market pricing information). But, the industries we looked at are quite different — actors (even farmers) are already linked via value chains to global markets, and so these types of application were less useful. In interviews, we found other informal uses of the Internet amongst lower-income actors in these sectors, which point the way towards new ICT applications: sectoral knowledge building, adapting systems to allow smallholders to better understand their costs, and systems to allow better links amongst cooperatives. More generally for those interested in ICT and development, this work highlights that changes in economies are not solely driven by connectivity, particularly in industries where rewards are already skewed towards larger global firms over those in developing countries. This calls for a context-dependent analysis of policy and structures, something that can be missed when more optimistic commentators discuss connectivity and the digital future.


Christopher Foster was talking to blog editor David Sutcliffe.

Exploring variation in parental concerns about online safety issues https://ensr.oii.ox.ac.uk/exploring-variation-parental-concerns-about-online-safety-issues/ Thu, 14 Nov 2013 08:29:42 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1208 Ed: You’ve spent a great deal of time studying the way that children and young people use the Internet, much of which focuses on the positive experiences that result. Why do you think this is so under-represented in public debate?

boyd / Hargittai: The public has many myths about young people’s use of technology. This is often perpetuated by media coverage that focuses on the extremes. Salacious negative headlines often capture people’s attention, even if the practices or incidents described are outliers and do not represent the majority’s experiences. While focusing on extremely negative and horrific incidents is a great way to attract attention and get readers, it does a disservice to young people, their parents, and ultimately society as a whole.

As researchers, we believe that it’s important to understand the nuances of what people experience when they engage with technology. Thus, we are interested in gaining a better understanding of their everyday practices — both the good and the bad. Our goal is to introduce research that can help contextualize socio-technical practices and provide insight into the diversity of viewpoints and perspectives that shape young people’s use of technology.

Ed: Your paper suggests we need a more granular understanding of how parental concerns relating to the Internet can vary across different groups. Why is this important? What are the main policy implications of this research?

boyd / Hargittai: Parents are often seen as the target of policy interventions. Many lawmakers imagine that they’re designing laws to help empower parents, but when you ask them to explain which parents they are empowering, it becomes clear that there’s an imagined parent that is not always representative of the diverse views and perspectives of all parents. We’re not opposed to laws that enable parents to protect their children, but we’re concerned whenever a class of people, especially a class as large as “parents,” is viewed as homogenous.

Parents have different and often conflicting views about what’s best for their children or for children writ large. This creates a significant challenge for designing interventions that are meant to be beneficial and applicable to a diverse group of people. What’s beneficial or desirable to one may not be positively received by another. More importantly, what’s helpful to one group of parents may not actually benefit parents or youth as a whole. As a result, we think it’s important to start interrogating assumptions that underpin technology policy interventions so that policymakers have a better understanding of how their decisions affect whom they’re hoping to reach.

Ed: What did your study reveal, and in particular, where do you see the greatest differences in attitudes arising? Did it reveal anything unexpected?

boyd / Hargittai: The most significant take-away from our research is that there are significant demographic differences in concerns about young people. Some of the differences are not particularly surprising. For example, parents of children who have been exposed to pornography or violent content, or who have bullied or been bullied, have greater concern that this will happen to their child. Yet, other factors may be more surprising. For example, we found significant racial and ethnic differences in how parents approach these topics. Black, Hispanic, and Asian parents are much more concerned about at least some of the online safety measures than Whites, even when controlling for socioeconomic factors and previous experiences.

While differences in cultural experiences may help explain some of these findings, our results raise serious questions as to the underlying processes and reasons for these discrepancies. Are these parents more concerned because they have a higher level of distrust for technology? Because they feel as though there are fewer societal protections for their children? Because they feel less empowered as parents? We don’t know. Still, our findings challenge policy-makers to think about the diversity of perspectives their law-making should address. And when they enact laws, they should be attentive to how those interventions are received. Just because parents of colour are more concerned does not mean that an intervention intended to empower them will do so. Like many other research projects, this study results in as many — if not more — questions than it answers.

Ed: Are parents worrying about the right things? For example, you point out that ‘stranger danger’ registers the highest level of concern from most parents, yet this is a relatively rare occurrence. Bullying is much more common, yet not such a source of concern. Do we need to do more to educate parents about risks, opportunities and coping?

boyd / Hargittai: Parental fear is a contested issue among scholars and for good reason. In many ways, it’s a philosophical issue. Should parents worry more about frequent but low-consequence issues? Or should they concern themselves more with the possibility of rare but devastating incidents? How much fear is too much fear? Fear is an understandable response to danger, but left unchecked, it can become an irrational response to perceived but unlikely risks. Fear can prevent injury, but too much fear can result in a form of protectionism that itself can be harmful. Most parents want to protect their children from harm but few think about the consequences of smothering their children in their efforts to keep them safe. All too often, in erring on the side of caution, we escalate a societal tendency to become overprotective, limiting our children’s opportunities to explore, learn, be creative and mature. Finding the right balance is very tricky.

People tend to fear things that they don’t understand. New technologies are often terrifying because they are foreign. And so parents are reasonably concerned when they see their children using tools that confound them. One of the best antidotes to fear is knowledge. Although this is outside of the scope of this paper, we strongly recommend that parents take the time to learn about the tools that their children are using, ideally by discussing them with their children. The more that parents can understand the technological choices and decisions made by their children, the more that parents can help them navigate the risks and challenges that they do face, online and off.

Ed: On the whole, it seems that parents whose children have had negative experiences online are more likely to say they are concerned, which seems highly appropriate. But we also have evidence from other studies that many parents are unaware of such experiences, and also that children who are more vulnerable offline, may be more vulnerable online too. Is there anything in your research to suggest that certain groups of parents aren’t worrying enough?

boyd / Hargittai: As researchers, we regularly use different methodologies and different analytical angles to get at various research questions. Each approach has its strengths and weaknesses, insights and blind spots. In this project, we surveyed parents, which allows us to get at their perspective, but it limits our ability to understand what they do not know or will not admit. Over the course of our careers, we’ve also surveyed and interviewed numerous youth and young adults, parents and other adults who’ve worked with youth. In particular, danah has spent a lot of time working with at-risk youth who are especially vulnerable. Unfortunately, what she’s learned in the process — and what numerous survey studies have shown — is that those who are facing some of the most negative experiences do not necessarily have positive home life experiences. Many youth face parents who are absent, addicts, or abusive; these are the youth who are most likely to be physically, psychologically, or socially harmed, online and offline.

In this study, we took parents at face value, assuming that parents are good actors with positive intentions. It is important to recognise, however, that this cannot be taken for granted. As with all studies, our findings are limited because of the methodological approach we took. We have no way of knowing whether or not these parents are paying attention, let alone whether or not their relationship to their children is unhealthy.

Although the issues of abuse and neglect are outside of the scope of this particular paper, these have significant policy implications. Empowering well-intended parents is generally a good thing, but empowering abusive parents can create unintended consequences for youth. This is an area where much more research is needed because it’s important to understand when and how empowering parents can actually put youth at risk in different ways.

Ed: What gaps remain in our understanding of parental attitudes towards online risks?

boyd / Hargittai: As noted above, our paper assumes well-intentioned parenting on behalf of caretakers. A study could explore online attitudes in the context of more information about people’s general parenting practices. Regarding our findings about attitudinal differences by race and ethnicity, much remains to be done. While existing literature alludes to some reasons as to why we might observe these variations, it would be helpful to see additional research aiming to uncover the sources of these discrepancies. It would be fruitful to gain a better understanding of what influences parental attitudes about children’s use of technology in the first place. What role do mainstream media, parents’ own experiences with technology, their personal networks, and other factors play in this process?

Another line of inquiry could explore how parental concerns influence rules aimed at children about technology uses and how such rules affect youth adoption and use of digital media. The latter is a question that Eszter is addressing in a forthcoming paper with Sabrina Connell, although that study does not include data on parental attitudes, only rules. Including details about parental concerns in future studies would allow more nuanced investigation of the above questions. Finally, much is needed to understand the impact that policy interventions in this space have on parents, youth, and communities. Even the most well-intentioned policy may inadvertently cause harm. It is important that all policy interventions are monitored and assessed as to both their efficacy and secondary effects.


Read the full paper: boyd, d., and Hargittai, E. (2013) Connected and Concerned: Exploring Variation in Parental Concerns About Online Safety Issues. Policy and Internet 5 (3).

danah boyd and Eszter Hargittai were talking to blog editor David Sutcliffe.

Papers on Policy, Activism, Government and Representation: New Issue of Policy and Internet https://ensr.oii.ox.ac.uk/issue-34/ Wed, 16 Jan 2013 21:40:43 +0000 http://blogs.oii.ox.ac.uk/policy/?p=667 We are pleased to present the combined third and fourth issue of Volume 4 of Policy and Internet. It contains eleven articles, each of which investigates the relationship between Internet-based applications and data and the policy process. The papers have been grouped into the broad themes of policy, government, representation, and activism.

POLICY: In December 2011, the European Parliament Directive on Combating the Sexual Abuse and Sexual Exploitation of Children and Child Pornography was adopted. The directive’s much-debated Article 25 requires Member States to ensure the prompt removal of child pornography websites hosted in their territory and to endeavour to obtain the removal of such websites hosted outside their territory. Member States are also given the option to block access to such websites to users within their territory. Both these policy choices have been highly controversial and much debated; Karel Demeyer, Eva Lievens, and Jos Dumortier analyse the technical and legal means of blocking and removing illegal child sexual content from the Internet, clarifying the advantages and drawbacks of the various policy options.

Another issue of jurisdiction surrounds government use of cloud services. While cloud services promise to render government service delivery more effective and efficient, they are also potentially stateless, triggering government concern over data sovereignty. Kristina Irion explores these issues, tracing the evolution of individual national strategies and international policy on data sovereignty. She concludes that data sovereignty presents national governments with a legal risk that can’t be addressed through technology or contractual arrangements alone, and recommends that governments retain sovereignty over their information.

While the Internet allows unprecedented freedom of expression, it also facilitates anonymity and facelessness, increasing the possibility of damage caused by harmful online behavior, including online bullying. Myoung-Jin Lee, Yu Jung Choi, and Setbyol Choi investigate the discourse surrounding the introduction of the Korean Government’s “Verification of Identity” policy, which aimed to foster a more responsible Internet culture by mandating registration of a user’s real identity before allowing them to post to online message boards. The authors find that although arguments about restrictions on freedom of expression continue, the policy has maintained public support in Korea.

A different theoretical approach to another controversial topic is offered by Sameer Hinduja, who applies Actor-Network Theory (ANT) to the phenomenon of music piracy, arguing that we should pay attention not only to the social aspects, but also to the technical, economic, political, organizational, and contextual aspects of piracy. He argues that each of these components merits attention and response by law enforcers if progress is to be made in understanding and responding to digital piracy.

GOVERNMENT: While many governments have been lauded for their success in the online delivery of services, fewer have been successful in employing the Internet for more democratic purposes. Tamara A. Small asks whether the Canadian government — with its well-established e-government strategy — fits the pattern of service delivery oriented (rather than democracy oriented) e-government. Based on a content analysis of Government of Canada tweets, she finds that they do indeed tend to focus on service delivery, and shows how nominal a commitment the Canadian government has made to the more interactive and conversational qualities of Twitter.

While political scientists have greatly benefitted from the increasing availability of online legislative data, data collections and search capabilities are not comprehensive, nor are they comparable across the different U.S. states. David L. Leal, Taofang Huang, Byung-Jae Lee, and Jill Strube review the availability and limitations of state online legislative resources in facilitating political research. They discuss levels of capacity and access, note changes over time, and note that their usability index could potentially be used as an independent variable for researchers seeking to measure the transparency of state legislatures.

REPRESENTATION: An ongoing theme in the study of elected representatives is how they present themselves to their constituents in order to enhance their re-election prospects. Royce Koop and Alex Marland compare presentation of self by Canadian Members of Parliament on parliamentary websites and in the older medium of parliamentary newsletters. They find that MPs are likely to present themselves as outsiders on their websites, that this differs from patterns observed in newsletters, and that party affiliation plays an important role in shaping self-presentation online.

Many strategic, structural and individual factors can explain the use of online campaigning in elections; based on candidate surveys, Julia Metag and Frank Marcinkowski show that strategic and structural variables, such as party membership or the perceived share of indecisive voters, do most to explain online campaigning. Internet-related perceptions are explanatory in a few cases; if candidates think that other candidates campaign online they feel obliged to use online media during the election campaign.

ACTIVISM: Mainstream opinion at the time of the protests of the “Arab Spring” – and the earlier Iranian “Twitter Revolution” – was that use of social media would significantly affect the outcome of revolutionary collective action. Throughout the Libyan Civil War, Twitter users took the initiative to collect and process data for use in the rebellion against the Qadhafi regime, including map overlays depicting the situation on the ground. In an exploratory case study on crisis mapping of intelligence information, Steve Stottlemyre and Sonia Stottlemyre investigate whether the information collected and disseminated by Twitter users during the Libyan civil war met the minimum requirements to be considered tactical military intelligence.

Philipp S. Mueller and Sophie van Huellen focus on the 2009 post-election protests in Teheran in their analysis of the effect of many-to-many media on power structures in society. They offer two analytical approaches as possible ways to frame the complex interplay of media and revolutionary politics. While social media raised international awareness by transforming the agenda-setting process of the Western mass media, the authors conclude that, given the inability of protesters to overthrow the regime, a change in the “media-scape” does not automatically imply a changed “power-scape.”

A different theoretical approach is offered by Mark K. McBeth, Elizabeth A. Shanahan, Molly C. Arrandale Anderson, and Barbara Rose, who look at how interest groups increasingly turn to new media such as YouTube as tools for indirect lobbying, allowing them to enter into and have influence on public policy debates through wide dissemination of their policy preferences. They explore the use of policy narratives in new media, using a Narrative Policy Framework to analyze YouTube videos posted by the Buffalo Field Campaign, an environmental activist group.

The “IPP2012: Big Data, Big Challenges” conference explores the new research frontiers opened up by big data .. as well as its limitations https://ensr.oii.ox.ac.uk/the-ipp2012-big-data-big-challenges-conference-explores-the-new-research-frontiers-opened-up-by-big-data-as-well-as-its-limitations/ Mon, 24 Sep 2012 10:50:20 +0000 http://blogs.oii.ox.ac.uk/policy/?p=447 Recent years have seen an increasing buzz around how ‘Big Data’ can uncover patterns of human behaviour and help predict social trends. Most social activities today leave digital imprints that can be collected and stored in the form of large datasets of transactional data. Access to this data presents powerful and often unanticipated opportunities for researchers and policy makers to generate new, precise, and rapid insights into economic, social and political practices and processes, as well as to tackle longstanding problems that have hitherto been impossible to address, such as how political movements like the ‘Arab Spring’ and Occupy originate and spread.

Opening comments from convenor, Helen Margetts
While big data can allow the design of efficient and realistic policy and administrative change, it also brings ethical challenges (for example, when it is used for probabilistic policy-making), raising issues of justice, equity and privacy. It also presents clear methodological and technical challenges: big data generation and analysis requires expertise and skills which can be a particular challenge to governmental organizations, given their dubious record on the guardianship of large scale datasets, the management of large technology-based projects, and capacity to innovate. It is these opportunities and challenges that were addressed by the recent conference “Internet, Politics, Policy 2012: Big Data, Big Challenges?” organised by the Oxford Internet Institute (University of Oxford) on behalf of the OII-edited academic journal Policy and Internet. Over the two days of paper and poster presentations and discussion it explored the new research frontiers opened up by big data as well as its limitations, serving as a forum to encourage discussion across disciplinary boundaries on how to exploit this data to inform policy debates and advance social science research.

Duncan Watts (Keynote Speaker)
The conference was organised along three tracks: “Policy,” “Politics,” and Data+Methods (see the programme) with panels focusing on the impact of big data on (for example) political campaigning, collective action and political dissent, sentiment analysis, prediction of large-scale social movements, government, public policy, social networks, data visualisation, and privacy. Webcasts are now available of the keynote talks given by Nigel Shadbolt (University of Southampton and Open Data Institute) and Duncan Watts (Microsoft Research). A webcast is also available of the opening plenary panel, which set the scene for the conference, discussing the potential and challenges of big data for public policy-making, with participation from Helen Margetts (OII), Lance Bennett (University of Washington, Seattle), Theo Bertram (UK Policy Manager, Google), and Patrick McSharry (Mathematical Institute, University of Oxford), chaired by Victoria Nash (OII).

Poster Prize Winner Shawn Walker (left) and Paper Prize Winner Jonathan Bright (right) with IPP2012 convenors Sandra Gonzalez-Bailon (left) and Helen Margetts (right).
The evening receptions were held in the Ashmolean Museum (allowing us to project exciting data visualisations onto their shiny white walls), and the University’s Natural History Museum, which provided a rather more fossil-focused ambience. We are very pleased to note that the “Best Paper” winners were Thomas Chadefaux (ETH Zurich) for his paper: Early Warning Signals for War in the News, and Jonathan Bright (EUI) for his paper: The Dynamics of Parliamentary Discourse in the UK: 1936-2011. The Google-sponsored “Best Poster” prize winners were Shawn Walker (University of Washington) for his poster (with Joe Eckert, Jeff Hemsley, Robert Mason, and Karine Nahon): SoMe Tools for Social Media Research, and Giovanni Grasso (University of Oxford) for his poster (with Tim Furche, Georg Gottlob, and Christian Schallhart): OXPath: Everyone can Automate the Web!

Many of the conference papers are available on the conference website; the conference special issue on big data will be published in the journal Policy and Internet in 2013.

Last 2010 issue of Policy and Internet just published (2,4) https://ensr.oii.ox.ac.uk/new-issue-of-policy-and-internet/ Mon, 20 Dec 2010 17:05:57 +0000 http://blogs.oii.ox.ac.uk/policy/?p=15 The last 2010 issue of Policy and Internet has just been published! We are pleased to present seven articles, all of which focus on a substantive public policy issue arising from widespread use of the Internet: online political advocacy and petitioning, nationalism and borders online, unintended consequences of the introduction of file-sharing legislation, and the implications of Internet voting and voting advice applications for democracy and political participation.

Links to the articles are included below. Happy reading!

Helen Margetts: Editorial

David Karpf: Online Political Mobilization from the Advocacy Group’s Perspective: Looking Beyond Clicktivism

Elisabeth A. Jones and Joseph W. Janes: Anonymity in a World of Digital Books: Google Books, Privacy, and the Freedom to Read

Stefan Larsson and Måns Svensson: Compliance or Obscurity? Online Anonymity as a Consequence of Fighting Unauthorised File-sharing

Irina Shklovski and David M. Struthers: Of States and Borders on the Internet: The Role of Domain Name Extensions in Expressions of Nationalism Online in Kazakhstan

Andreas Jungherr and Pascal Jürgens: The Political Click: Political Participation through E-Petitions in Germany

Jan Fivaz and Giorgio Nadig: Impact of Voting Advice Applications (VAAs) on Voter Turnout and Their Potential Use for Civic Education

Anne-Marie Oostveen: Outsourcing Democracy: Losing Control of e-Voting in the Netherlands

New issue of Policy and Internet (2,3) https://ensr.oii.ox.ac.uk/new-issue-of-policy-and-internet-23/ Thu, 04 Nov 2010 12:08:24 +0000 http://blogs.oii.ox.ac.uk/policy/?p=121 Welcome to the third issue of Policy & Internet for 2010. We are pleased to present five articles focusing on substantive public policy issues arising from widespread use of the Internet: regulation of trade in virtual goods; development of electronic government in Korea; online policy discourse in UK elections; regulatory models for broadband technologies in the US; and alternative governance frameworks for open ICT standards.

Three of the articles are the first to be published from the highly successful conference ‘Internet, Politics and Policy‘ held by the journal in Oxford, 16th-17th September 2010. You may access any of the articles below at no charge.

Helen Margetts: Editorial

Vili Lehdonvirta and Perttu Virtanen: A New Frontier in Digital Content Policy: Case Studies in the Regulation of Virtual Goods and Artificial Scarcity

Joon Hyoung Lim: Digital Divides in Urban E-Government in South Korea: Exploring Differences in Municipalities’ Use of the Internet for Environmental Governance

Darren G. Lilleker and Nigel A. Jackson: Towards a More Participatory Style of Election Campaigning: The Impact of Web 2.0 on the UK 2010 General Election

Michael J. Santorelli: Regulatory Federalism in the Age of Broadband: A U.S. Perspective

Laura DeNardis: E-Governance Policies for Interoperability and Open Standards

New issue of Policy and Internet (2,2) https://ensr.oii.ox.ac.uk/new-issue-of-policy-and-internet-22/ Thu, 19 Aug 2010 12:17:12 +0000 http://blogs.oii.ox.ac.uk/policy/?p=128 Welcome to the second issue of Policy & Internet for 2010! We are pleased to present six articles which investigate the role of the Internet in a wide range of policy processes and sectors: agenda setting in online and traditional media; environmental policy networks; online deliberation on climate change; data protection and privacy; net neutrality; and digital inclusion/exclusion. You may access any of the articles below at no charge.

Helen Margetts: Editorial

Ben Sayre, Leticia Bode, Dhavan Shah, Dave Wilcox, and Chirag Shah: Agenda Setting in a Digital Age: Tracking Attention to California Proposition 8 in Social Media, Online News and Conventional News

Kathleen McNutt and Adam Wellstead: Virtual Policy Networks in Forestry and Climate Change in the U.S. and Canada: Government Nodality, Internationalization and Actor Complexity

Julien Talpin and Stéphanie Wojcik: Deliberating Environmental Policy Issues: Comparing the Learning Potential of Online and Face-To-Face Discussions on Climate Change

Andrew A. Adams, Kiyoshi Murata, and Yohko Orito: The Development of Japanese Data Protection

Scott Jordan: The Application of Net Neutrality to Wireless Networks Based on Network Architecture

Alison Powell, Amelia Bryne, and Dharma Dailey: The Essential Internet: Digital Exclusion in Low-Income American Communities

New issue of Policy and Internet (2,1) https://ensr.oii.ox.ac.uk/21-2/ Fri, 16 Apr 2010 12:09:24 +0000 http://blogs.oii.ox.ac.uk/policy/?p=123 Welcome to the second issue of Policy & Internet and the first issue of 2010! We are pleased to present six articles that spread across the scope of the journal laid out in the first article of the first issue, The Internet and Public Policy (Margetts, 2009). Three articles cover some aspect of trust, identified as one of the key values associated with the Internet and likely to emerge in policy trends. The other three articles all bring internet-related technologies to centre stage in policy change.

Helen Margetts: Editorial

Stephan G. Grimmelikhuijsen: Transparency of Public Decision-Making: Towards Trust in Local Government?

Jesper Schlæger: Digital Governance and Institutional Change: Examining the Role of E-Government in China’s Coal Sector

Fadi Salem and Yasar Jarrar: Government 2.0? Technology, Trust and Collaboration in the UAE Public Sector

Mike Just and David Aspinall: Challenging Challenge Questions: An Experimental Analysis of Authentication Technologies and User Behaviour

Ainė Ramonaite: Voting Advice Applications in Lithuania: Promoting Programmatic Competition or Breeding Populism?

Thomas M. Lenard and Paul H. Rubin: In Defense of Data: Information and the Costs of Privacy
