Politics & Government – The Policy and Internet Blog
https://ensr.oii.ox.ac.uk | Understanding public policy online

Five reasons ‘technological solutions’ are a distraction from the Irish border problem
https://ensr.oii.ox.ac.uk/five-reasons-technological-solutions-are-a-distraction-from-the-irish-border-problem/
Thu, 21 Feb 2019

In this post, Helen Margetts, Cosmina Dorobantu, Florian Ostmann, and Christina Hitrova discuss the focus on ‘technological solutions’ in the context of the Irish border debate — arguing that it is becoming a red herring and a distraction from the political choices ahead. They write:

Technology is increasingly touted as an alternative to the Irish backstop, especially in light of the government’s difficulty in finding a Brexit strategy that can command a majority in the House of Commons. As academics, we have been following the debate around the role of technology in monitoring the border with interest, but also with scepticism and frustration. Technology can foster government innovation in countless ways, and digital technologies in particular have the potential to transform the way in which government makes policy and designs public services. Yet, in the context of the Irish border debate, the focus on ‘technological solutions’ is becoming a red herring and distracts from the political choices ahead. Technology cannot solve the Irish border problem, and it is time to face the facts.

1: Technology cannot ensure a ‘frictionless border’

Any legal or regulatory restrictions on the movement of goods or people between the UK and the Republic of Ireland post-Brexit will make border-related friction inevitable. Setting the restrictions is a matter of political agreement. Technology can help enforce legal or regulatory restrictions, but it cannot prevent the introduction of friction compared to the status quo. For example, technology may speed up documentation, processing, and inspections, but it cannot eliminate the need for these procedures, which will impose new burdens on those undergoing them.

2: There will be a need for new infrastructure at or near the border

Technology may make it possible for some checks to be carried out away from the border. For example, machine learning algorithms can assist in identifying suspicious vehicles and police forces can stop and inspect them away from the border. Regardless of where the relevant inspections are carried out, however, there will be a need for new infrastructure at or near the border, such as camera systems that record the identity of the vehicles crossing the frontier. The amount of new infrastructure needed will depend on how strict the UK and the EU decide to be in enforcing restrictions on the movement of goods and people. At a minimum, cameras will have to be installed at the border. Stricter enforcement regimes will require additional infrastructure such as sensors, scanners, boom barriers or gates.

3: ‘Frictionless’ solutions are in direct conflict with the Brexit goal to ‘take back control’ over borders

There is a fundamental conflict between the goals of minimising friction and enforcing compliance. For example, friction for Irish and UK citizens traveling across the Irish border could be reduced by a system that allows passenger vehicles registered within the Common Travel Area to cross the border freely. This approach, however, would make it difficult to monitor whether registered vehicles are used to facilitate unauthorised movements of people or goods across the border. More generally, the more effective the border management system is in detecting and preventing non-compliant movements of goods or people across the border, the more friction there will be.

4: Technology has known imperfections

Many of the ‘technological solutions’ that have been proposed as ways to minimise friction have blind spots when it comes to monitoring and enforcing compliance – a fact quietly acknowledged through comments about the solutions’ ‘dependence on trust’. Automated licence plate recognition systems, for example, can easily be tricked by using stolen or falsified number plates. Probabilistic algorithmic tools to identify the ‘high risk’ vehicles selected for inspections will fail to identify some cases of non-compliance. Technological tools may lead to improvements over risk-based approaches that rely on human judgment alone, but they cannot, on their own, monitor the border safely.
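The limits of probabilistic risk-scoring can be illustrated with a toy simulation (all figures hypothetical, not drawn from any real border system): even a detector with high sensitivity will wave through some share of non-compliant vehicles.

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical figures: a risk-scoring tool flags non-compliant
# vehicles for inspection with 90% sensitivity.
SENSITIVITY = 0.9
non_compliant_vehicles = 1_000

# Each non-compliant vehicle escapes inspection with probability 0.1.
missed = sum(
    1 for _ in range(non_compliant_vehicles)
    if random.random() > SENSITIVITY
)

print(f"{missed} of {non_compliant_vehicles} non-compliant vehicles went uninspected")
```

Even at this generously assumed sensitivity, roughly a hundred non-compliant crossings per thousand go undetected — the blind spot that the appeals to ‘trust’ quietly concede.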

5: Government will struggle to develop the relevant technological tools

Suggestions that the border controversy may find a last-minute solution by relying on technology seem dangerously detached from the realities of large-scale technology projects, especially in the public sector. In addition to considerable expertise and financial investments, such projects need time, a resource that is quickly running out as March 29 draws closer. The history of government technology projects is littered with examples of failures to meet expectations, enormous cost overruns, and troubled relationships with computer services providers.

A recent example is the mobile phone app meant to facilitate the registration of the 3.7 million EU nationals living in the UK, which does not work on iPhones. Private companies will be keen to sell technological solutions to the backstop problem, with firms like Fujitsu and GSM already signalling their interest in addressing this technological challenge. Under time pressure, government will struggle to evaluate the feasibility of the technological solutions proposed by these private providers, negotiate a favourable contract, and ensure that the resulting technology is fit for purpose.

Technological tools can help implement customs rules, but they cannot fill the current political vacuum. The design, development, and implementation of border management tools require regulatory clarity—prior knowledge of the rules whose monitoring and enforcement the technical tools are meant to support. What these rules will be for the UK-Ireland border following Brexit is a political question. The recent focus on ‘technological solutions’, rather than informing the debate around this question, seems to have served as a strategy for avoiding substantive engagement with it. It is time for government to accept that technology cannot solve the Irish border problem and move on to find real, feasible alternatives.

Authors:

Professor Helen Margetts, Professor of Society and the Internet, Oxford Internet Institute, University of Oxford; Director of the Public Policy Programme, The Alan Turing Institute

Dr Cosmina Dorobantu, Research Associate, Oxford Internet Institute, University of Oxford; Deputy Director of the Public Policy Programme, The Alan Turing Institute

Dr Florian Ostmann, Policy Fellow, Public Policy Programme, The Alan Turing Institute

Christina Hitrova, Digital Ethics Research Assistant, Public Policy Programme, The Alan Turing Institute

Disclaimer: The views expressed in this article are those of the listed members of The Alan Turing Institute’s Public Policy Programme in their individual academic capacities, and do not represent a formal view of the Institute.

Can “We the People” really help draft a national constitution? (sort of..)
https://ensr.oii.ox.ac.uk/can-we-the-people-really-help-draft-a-national-constitution-sort-of/
Thu, 16 Aug 2018

As innovations like social media and open government initiatives have become an integral part of politics in the twenty-first century, there is increasing interest in the possibility of citizens directly participating in the drafting of legislation. Indeed, there is a clear trend of greater public participation in the process of constitution making, and with the growth of e-democracy tools, this trend is likely to continue. However, this view is certainly not universally held, and a number of recent studies have been much more skeptical about the value of public participation, questioning whether it has any real impact on the text of a constitution.

Following the banking crisis, and a groundswell of popular opposition to the existing political system in 2009, the people of Iceland embarked on a unique process of constitutional reform. Having opened the entire drafting process to public input and scrutiny, these efforts culminated in Iceland’s 2011 draft crowdsourced constitution: reputedly the world’s first. In his Policy & Internet article “When Does Public Participation Make a Difference? Evidence From Iceland’s Crowdsourced Constitution”, Alexander Hudson examines the impact that the Icelandic public had on the development of the draft constitution. He finds that almost 10 percent of the written proposals submitted generated a change in the draft text, particularly in the area of rights.

This remarkably high number is likely explained by the isolation of the drafters from both political parties and special interests, making them more reliant on and open to input from the public. However, although this would appear to be an example of successful public crowdsourcing, the new constitution was ultimately rejected by parliament. Iceland’s experiment with participatory drafting therefore demonstrates the possibility of successful online public engagement — but also the need to connect the masses with the political elites. It was the disconnect between these groups that triggered the initial protests and constitutional reform, but also that led to its ultimate failure.

We caught up with Alexander to discuss his findings.

Ed: We know from Wikipedia (and other studies) that group decisions are better, and crowds can be trusted. However, with recent votes in the US and UK in mind, I also feel increasingly nervous about the idea of “the public” having a say over anything important and binding. How do we distribute power and consultation, while avoiding populist chaos?

Alexander: That’s a large and important question, which I can probably answer only in part. One thing we need to be careful of is what kind of public we are talking about. In many cases, we view self-selection as a bad thing — it can’t be representative. However, in cases like Wikipedia, we see self-selected individuals with specialized knowledge and an uncommon level of interest collaborating. I would suggest that there is an important difference between the kind of decisions that are made by careful and informed participants in citizens’ juries, deliberative polls, or Wikipedia editing, and the oversimplified binary choices that we make in elections or referendums.

So, while there is research to suggest that large numbers of ordinary people can make better decisions, there are some conditions in terms of prior knowledge and careful consideration attached to that. I have high hopes for these more deliberative forms of public participation, but we are right to be cautious about referendums. The Icelandic constitutional reform process actually involved several forms of public participation, including two randomly selected deliberative fora, self-selected online participation, and a popular referendum with several questions.

Ed: A constitution is a very technical piece of text: how much could non-experts realistically contribute to its development — or was there also contribution from specialised interest groups? Presumably there was a team of lawyers and drafters managing the process? 

Alexander: All of these things were going on in Iceland’s drafting process. In my research here and on a few other constitution-making processes in other countries, I’ve been impressed by the ability of citizens to engage at a high level with fundamental questions about the nature of the state, constitutional rights, and legal theory. Assuming a reasonable level of literacy, people are fully capable of reading some literature on constitutional law and political philosophy, and writing very well-informed submissions that express what they would like to see in the constitutional text. A small, self-selected set of the public in many countries seeks to engage in spirited and for the most part respectful debate on these issues. In the Icelandic case, these debates have continued from 2009 to the present.

I would also add that public interest is not distributed uniformly across all the topics that constitutions cover. Members of the public show much more interest in discussing issues of human rights, and have more success in seeing proposals on that theme included in the draft constitution. Some NGOs were involved in submitting proposals to the Icelandic Constitutional Council, but interest groups do not appear to have been a major factor in the process. Unlike some constitution-making processes, the Icelandic Constitutional Council had a limited staff, and the drafters themselves were very engaged with the public on social media.

Ed: I guess Iceland is fairly small, but also unusually homogeneous. That helps, presumably, in creating a general consensus across a society? Or will party / political leaning always tend to trump any sense of common purpose and destiny, when defining the form and identity of the nation?

Alexander: You are certainly right that Iceland is unusual in these respects, and this raises important questions of what this is a case of, and how the findings here can inform us about what might happen in other contexts. I would not say that the Icelandic people reached any sort of broad, national-level consensus about how the constitution should change. During the early part of the drafting process, it seems that those who had strong disagreements with what was taking place absented themselves from the proceedings. They did turn up later to some extent (especially after the 2012 referendum), and sought to prevent this draft from becoming law.

Where the small size and homogeneous population really came into play in Iceland is through the level of knowledge that those who participated had of one another before entering into the constitution-making process. While this has been overemphasized in some discussions of Iceland, there are communities of shared interests where people all seem to know each other, or at least know of each other. This makes forming new societies, NGOs, or interest groups easier, and probably helped to launch the constitution-making project in the first place.

Ed: How many people were involved in the process — and how were bad suggestions rejected, discussed, or improved? I imagine there must have been divisive issues, that someone would have had to arbitrate? 

Alexander: The number of people who interacted with the process in some way, either by attending one of the public forums that took place early in the process, voting in the election for the Constitutional Council, or engaging with the process on social media, is certainly in the tens of thousands. In fact, one of the striking things about this case is that 522 people stood for election to the 25 member Constitutional Council which drafted the new constitution. So there was certainly a high level of interest in participating in this process.

My research here focused on the written proposals that were posted to the Constitutional Council’s website. 204 individuals participated in that more intensive way. As the members of the Constitutional Council tell it, they would read some of the comments on social media, and the formal submissions on their website during their committee meetings, and discuss amongst themselves which ideas should be carried forward into the draft. The vast majority of the submissions were well-informed, on topic, and conveyed a collegial tone. In this case at least, there was very little of the kind of abusive participation that we observe in some online networks. 

Ed: You say that despite the success in creating a crowd-sourced constitution (that passed a public referendum), it was never ratified by parliament — why is that? And what lessons can we learn from this?

Alexander: Yes, this is one of the most interesting aspects of the whole thing for scholars, and certainly a source of some outrage for those Icelanders who are still active in trying to see this draft constitution become law. Some of this relates to the specifics of Iceland’s constitutional amendment process (which disincentivizes parliament from approving changes in between elections), but I think that there are also a couple of broadly applicable things going on here. First, the constitution-making process arose as a response to the way that the Icelandic government was perceived to have failed in governing the financial system in the late 2000s. By the time a last-ditch attempt to bring the draft constitution up for a vote in parliament occurred right before the 2013 election, almost five years had passed since the crisis that began this whole saga, and the economic situation had begun to improve. So legislators were not feeling pressure to address those issues any more.

Second, since political parties were not active in the drafting process, too few members of parliament had a stake in the issue. If one of the larger parties had taken ownership of this draft constitution, we might have seen a different outcome. I think this is one of the most important lessons from this case: if the success of the project depends on action by elite political actors, they should be involved in the earlier stages of the process. For various reasons, the Icelanders chose to exclude professional politicians from the process, but that meant that the Constitutional Council had too few friends in parliament to ratify the draft.

Read the full article: Hudson, A. (2018) When Does Public Participation Make a Difference? Evidence From Iceland’s Crowdsourced Constitution. Policy & Internet 10 (2) 185-217. DOI: https://doi.org/10.1002/poi3.167

Alexander Hudson was talking to blog editor David Sutcliffe.

Bursting the bubbles of the Arab Spring: the brokers who bridge ideology on Twitter
https://ensr.oii.ox.ac.uk/bursting-the-bubbles-of-the-arab-spring-the-brokers-who-bridge-ideology-on-twitter/
Fri, 27 Jul 2018

Online activism has become increasingly visible, with social media platforms being used to express protest and dissent, from the Arab Spring to #MeToo. Scholarly interest in online activism has grown with its use, together with disagreement about its impact. Do social media really challenge traditional politics? Some claim that social media have had a profound and positive effect on modern protest — the speed of information sharing making online networks highly effective in building revolutionary movements. Others argue that this activity is merely symbolic: online activism has little or no impact, dilutes offline activism, and weakens social movements. Given online activity doesn’t involve the degree of risk, trust, or effort required on the ground, they argue that it can’t be considered to be “real” activism. In this view, the Arab Spring wasn’t simply a series of “Twitter revolutions”.

Despite much work on offline social movements and coalition building, few studies have used social network analysis to examine the influence of brokers among online activists (i.e. those who act as a bridge between different ideological groups), or their role in information diffusion across a network. In her Policy & Internet article “Brokerage Roles and Strategic Positions in Twitter Networks of the 2011 Egyptian Revolution”, Deena Abul-Fottouh tests whether social movements theory of networks and coalition building — developed to explain brokerage roles in offline networks, between established parties and organisations — can also be used to explain what happens online.

Social movements theory suggests that actors who occupy an intermediary structural position between different ideological groups are more influential than those embedded only in their own faction. That is, the “bridging ties” that link across political ideologies have a greater impact on mobilization than the bonding ties within a faction. Indeed, examining the Egyptian revolution and ensuing crisis, Deena finds that these online brokers were more evident during the first phase of movement solidarity between liberals, islamists, and socialists than in the period of schism and crisis (2011-2014) that followed the initial protests. However, she also found that the online brokers didn’t match the brokers on the ground: they played different roles, complementing rather than mirroring each other in advancing the revolutionary movement.
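The idea of a broker bridging ideological groups can be made concrete with a toy network (invented names and ties, not the article’s data): a simple measure counts how many of each node’s ties cross faction boundaries.

```python
# Toy Twitter-style network: two ideological clusters plus one broker.
# All nodes and ties are invented for illustration.
edges = [
    ("lib1", "lib2"), ("lib1", "lib3"), ("lib2", "lib3"),  # liberal cluster
    ("isl1", "isl2"), ("isl1", "isl3"), ("isl2", "isl3"),  # Islamist cluster
    ("broker", "lib1"), ("broker", "isl1"),                # bridging ties
]
faction = {"lib1": "L", "lib2": "L", "lib3": "L",
           "isl1": "I", "isl2": "I", "isl3": "I",
           "broker": "B"}

def cross_faction_ties(node):
    """Count the node's ties that link members of different factions."""
    return sum(1 for a, b in edges
               if node in (a, b) and faction[a] != faction[b])

scores = {node: cross_faction_ties(node) for node in faction}
# The broker is the only node all of whose ties cross group boundaries,
# so it tops the ranking; nodes embedded in one cluster score zero.
```

In practice researchers use richer measures such as betweenness centrality or the Gould–Fernandez brokerage roles, but the intuition is the same: brokers sit on the paths that connect otherwise separate clusters.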

We caught up with Deena to discuss her findings:

Ed: Firstly: is the “Arab Spring” a useful term? Does it help to think of the events that took place across parts of the Middle East and North Africa under this umbrella term — which I suppose implies some common cause or mechanism?

Deena: Well, I believe it’s useful to an extent. It helps describe some positive common features that existed in the region such as dissatisfaction with the existing regimes, a dissatisfaction that was transformed from the domain of advocacy to the domain of high-risk activism, a common feeling among the people that they can make a difference, even though it did not last long, and the evidence that there are young people in the region who are willing to sacrifice for their freedom. On the other hand, structural forces in the region such as the power of deep states and the forces of counter-revolution were capable of halting this Arab Spring before it burgeoned or bore fruit, so maybe the term “Spring” is no longer relevant.

Ed: Revolutions have been happening for centuries, i.e. they obviously don’t need Twitter or Facebook to happen. How significant do you think social media were in this case, either in sparking or sustaining the protests? And how useful are these new social media data as a means to examine the mechanisms of protest?

Deena: Social media platforms have proven to be useful in facilitating protests, for example by sharing information quickly and widely across borders. People in Egypt and other places in the region were influenced by Tunisia, and protest tactics were shared online. In other words, social media platforms definitely facilitate the diffusion of protests. They are also hubs for creating a common identity and culture among activists, which is crucial for the success of social movements. I also believe that social media present activists with various ways to circumvent the policing of activism (e.g. using pseudonyms to hide activists’ identities, sharing information about places to avoid in times of protest, forming closed groups where activists have high privacy to discuss non-public matters, etc.).

However, social media ties are weak ties. These platforms are not necessarily efficient in building the trust needed to bond social movements, especially in times of schism and at the level of high-risk activism. That is why, as I discuss in my article, we can see that the type of brokerage that is formed online is brokerage that is built on weak ties, not necessarily the same as offline brokerage that usually requires high trust.

Ed: It’s interesting that you could detect bridging between groups. Given schism seems to be fairly standard in society (Cf filter bubbles etc.) .. has enough attention been paid to this process of temporary shifting alignments, to advance a common cause? And are these incidental, or intentional acts of brokerage?

Deena: I believe further studies need to be made on the concepts of solidarity, schism and brokerage within social movements both online and offline. Little attention has been given to how movements come together or break apart online. The Egyptian revolution is a rich case to study these concepts as the many changes that happened in the path of the revolution in its first five years and the intervention of different forces have led to multiple shifts of alliances that deserve study. Acts of brokerage do not necessarily have to be intentional. In social movements studies, researchers have studied incidental acts that could eventually lead to formation of alliances, such as considering co-members of various social movements organizations as brokers between these organizations.

I believe that the same happens online. Brokerage could start with incidental acts such as activists following each other on Twitter for example, which could develop into stronger ties through mentioning each other. This could also build up to coordinating activities online and offline. In the case of the Egyptian revolution, many activists who met in protests on the ground were also friends online. The same happened in Moldova where activists coordinated tactics online and met on the ground. Thus, incidental acts that start with following each other online could develop into intentional coordinated activism offline. I believe further qualitative interviews need to be conducted with activists to study how they coordinate between online and offline activism, as there are certain mechanisms that cannot be observed through just studying the public profiles of activists or their structural networks.

Ed: The “Arab Spring” has had a mixed outcome across the region — and is also now perhaps a bit forgotten in the West. There have been various network studies of the 2011 protests: but what about the time between visible protests .. isn’t that in a way more important? What would a social network study of the current situation in Egypt look like, do you think?

Deena: Yes, the in-between times of waves of protests are as important to study as the waves themselves as they reveal a lot about what could happen, and we usually study them retroactively after the big shocks happen. A social network of the current situation in Egypt would probably include many “isolates” and tiny “components”, if I would use social network analysis terms. This started showing in 2014 as the effects of schism in the movement. I believe this became aggravated over time as the military coup d’état got a stronger grip over the country, suppressing all opposition. Many activists are either detained or have left the country. A quick look at their online profiles does not reveal strong communication between them. Yet, this is what apparently shows from public profiles. One of the levers that social media platforms offer is the ability to create private or “closed” groups online.

I believe these groups might include rich data about activists’ communication. However, it is very difficult, almost impossible to study these groups, unless you are a member or they give you permission. In other words, there might be some sort of communication occurring between activists but at a level that researchers unfortunately cannot access. I think we might call it the “underground of online activism”, which I believe is potentially a very rich area of study.
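The “isolates” and tiny “components” Deena describes are standard social network analysis terms; a minimal sketch (with invented follower data) shows how they fall out of a connected-components computation.

```python
# Toy follower network (invented data): after repression and exile,
# most activists have few or no remaining public ties.
ties = {
    "a": {"b"}, "b": {"a"},          # a small two-person component
    "c": set(), "d": set(),          # isolates: no remaining ties
    "e": {"f"}, "f": {"e", "g"}, "g": {"f"},
}

def components(graph):
    """Return the connected components of an undirected adjacency dict."""
    seen, comps = set(), []
    for start in graph:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(graph[node] - comp)
        seen |= comp
        comps.append(comp)
    return comps

comps = components(ties)
isolates = [c for c in comps if len(c) == 1]
# Four components in total, two of them isolates — a fragmented network.
```

A network dominated by isolates and tiny components is the structural signature of a movement in the suppressed, in-between phase she describes, in contrast to the large connected clusters visible during waves of protest.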

Ed: A standard criticism of “Twitter network studies” is that they aren’t very rich — they may show who’s following whom, but not necessarily why, or with what effect. Have there been any larger, more detailed studies of the Arab Spring that take in all sides: networks, politics, ethnography, history — both online and offline?

Deena: To my knowledge, there haven’t been studies that have included all these aspects together. Yet there are many studies that covered each of them separately, especially the politics, ethnography, and history of the Arab Spring (see for example: Egypt’s Tahrir Revolution 2013, edited by D. Tschirgi, W. Kazziha and S. F. McMahon). Similarly, very few studies have tried to compare the online and offline repertoires (see for example: Weber, Garimella and Batayneh 2013, Abul-Fottouh and Fetner 2018). In my doctoral dissertation (2018 from McMaster University), I tried to include many of these elements.

Read the full article: Abul-Fottouh, D. (2018) Brokerage Roles and Strategic Positions in Twitter Networks of the 2011 Egyptian Revolution. Policy & Internet 10: 218-240. doi:10.1002/poi3.169

Deena Abul-Fottouh was talking to blog editor David Sutcliffe.

Call for Papers: Government, Industry, Civil Society Responses to Online Extremism
https://ensr.oii.ox.ac.uk/call-for-papers-responses-to-online-extremism/
Mon, 02 Jul 2018

We are calling for articles for a Special Issue of the journal Policy & Internet on “Online Extremism: Government, Private Sector, and Civil Society Responses”, edited by Jonathan Bright and Bharath Ganesh, to be published in 2019. The submission deadline is October 30, 2018.

Issue Outline

Governments, the private sector, and civil society are beginning to work together to challenge extremist exploitation of digital communications. Both Islamic and right-wing extremists use websites, blogs, social media, encrypted messaging, and filesharing websites to spread narratives and propaganda, influence mainstream public spheres, recruit members, and advise audiences on undertaking attacks.

Across the world, public-private partnerships have emerged to counter this problem. For example, the Global Internet Forum to Counter Terrorism (GIFCT) organized by the UN Counter-Terrorism Executive Directorate has organized a “shared hash database” that provides “digital fingerprints” of ISIS visual content to help platforms quickly take down content. In another case, the UK government funded ASI Data Science to build a tool to accurately detect jihadist content. Elsewhere, Jigsaw (a Google-owned company) has developed techniques to use content recommendations on YouTube to “redirect” viewers of extremist content to content that might challenge their views.
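The “shared hash database” workflow can be sketched in a few lines. This is an illustrative simplification: systems like GIFCT’s use perceptual hashes of images and video so that near-duplicates still match, whereas the plain SHA-256 digest below matches exact bytes only.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Stand-in 'digital fingerprint': an exact cryptographic digest."""
    return hashlib.sha256(content).hexdigest()

# A platform's copy of the shared hash database (hypothetical entries).
shared_hash_db = {fingerprint(b"known-extremist-video-bytes")}

def should_take_down(uploaded: bytes) -> bool:
    """Flag an upload if its fingerprint appears in the shared database."""
    return fingerprint(uploaded) in shared_hash_db

print(should_take_down(b"known-extremist-video-bytes"))  # flagged
print(should_take_down(b"ordinary-holiday-photo"))       # not flagged
```

A single changed pixel alters a cryptographic digest entirely, which is why deployed systems rely on perceptual hashing instead; the matching logic, however, remains the same set-membership check shown here.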

While these are important and admirable efforts, their impact and effectiveness are unclear. The purpose of this special issue is to map and evaluate emerging public-private partnerships, technologies, and responses to online extremism. There are three main areas of concern that the issue will address:

(1) the changing role of content moderation, including taking down content and user accounts, as well as the use of AI techniques to assist;

(2) the increasing focus on “counter-narrative” campaigns and strategic communication; and

(3) the inclusion of global civil society in this agenda.

This mapping will contribute to understanding how power is distributed across these actors, the ways in which technology is expected to address the problem, and the design of the measures currently being undertaken.

Topics of Interest

Papers exploring one or more of the following areas are invited for consideration:

Content moderation

  • Efficacy of user and content takedown (and effects it has on extremist audiences);
  • Navigating the politics of freedom of speech in light of the proliferation of hateful and extreme speech online;
  • Development of content and community guidelines on social media platforms;
  • Effect of government policy, recent inquiries, and civil society on content moderation practices by the private sector (e.g. recent laws in Germany, Parliamentary inquiries in the UK);
  • Role and efficacy of Artificial Intelligence (AI) and machine learning in countering extremism.

Counter-narrative Campaigns and Strategic Communication

  • Effectiveness of counter-narrative campaigns in dissuading potential extremists;
  • Formal and informal approaches to counter narratives;
  • Emerging governmental or parastatal bodies to produce and disseminate counter-narratives;
  • Involvement of media and third sector in counter-narrative programming;
  • Research on counter-narrative practitioners;
  • Use of technology in supporting counter-narrative production and dissemination.

Inclusion of Global Civil Society

  • Concentration of decision making power between government, private sector, and civil society actors;
  • Diversity of global civil society actors involved in informing content moderation and counter-narrative campaigns;
  • Extent to which inclusion of diverse civil society/third sector actors improves content moderation and counter-narrative campaigns;
  • Challenges and opportunities faced by global civil society in informing agendas to respond to online extremism.

Submitting your Paper

We encourage interested scholars to submit 6,000 to 8,000 word papers that address one or more of the issues raised in the call. Submissions should be made through Policy & Internet’s manuscript submission system. Interested authors are encouraged to contact Jonathan Bright (jonathan.bright@oii.ox.ac.uk) and Bharath Ganesh (bharath.ganesh@oii.ox.ac.uk) to check the suitability of their paper.

Special Issue Schedule

The special issue will proceed according to the following timeline:

Paper submission: 30 October 2018

First round of reviews: January 2019

Revisions received: March 2019

Final review and decision: May 2019

Publication (estimated): December 2019

The special issue as a whole will be published at some time in late 2019, though individual papers will be published online in EarlyView as soon as they are accepted.

In a world of “connective action” — what makes an influential Twitter user? https://ensr.oii.ox.ac.uk/in-a-world-of-connective-action-what-makes-an-influential-twitter-user/ Sun, 10 Jun 2018 08:07:45 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4183 A significant part of political deliberation now takes place on online forums and social networking sites, leading to the idea that collective action might be evolving into “connective action”. The new level of connectivity (particularly of social media) raises important questions about its role in the political process. But understanding important phenomena, such as social influence, social forces, and digital divides, requires analysis of very large social systems, which has traditionally been a challenging task in the social sciences.

In their Policy & Internet article “Understanding Popularity, Reputation, and Social Influence in the Twitter Society“, David Garcia, Pavlin Mavrodiev, Daniele Casati, and Frank Schweitzer examine popularity, reputation, and social influence on Twitter using network information on more than 40 million users. They integrate measurements of popularity, reputation, and social influence to evaluate what keeps users active, what makes them more popular, and what determines their influence in the network.

Popularity in the Twitter social network is often quantified as the number of followers of a user. It does not matter why a user follows you, or how important she is: popularity simply measures the size of your audience. Reputation, on the other hand, is a more complicated concept associated with centrality. Being followed by a highly reputed user has a stronger effect on one’s reputation than being followed by someone with low reputation. Thus, the simple number of followers does not capture the recursive nature of reputation.
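This recursive notion of reputation can be illustrated with a PageRank-style score. This is only a hedged sketch, not the measure used in the article; the toy network, user names, and damping value below are invented for illustration. The point is that a follow from a well-followed account raises reputation more than a follow from an obscure account, even when raw follower counts are equal.

```python
# Illustrative PageRank-style reputation score (hypothetical example,
# not the article's actual measure). Each user's reputation depends
# recursively on the reputation of those who follow them.

def reputation(follows, damping=0.85, iters=50):
    """follows: dict mapping each user to the set of users they follow.
    Returns a reputation score per user, computed by power iteration.
    (Users with no outgoing follows simply leak mass, which is
    acceptable for illustration.)"""
    users = list(follows)
    rep = {u: 1.0 / len(users) for u in users}
    for _ in range(iters):
        new = {u: (1 - damping) / len(users) for u in users}
        for u, targets in follows.items():
            if targets:
                share = damping * rep[u] / len(targets)
                for t in targets:
                    new[t] += share
        rep = new
    return rep

# Toy network: "celeb" and "niche" each have exactly one follower,
# but celeb's follower ("hub") is itself well followed.
follows = {
    "a": {"hub"}, "b": {"hub"}, "c": {"hub"},
    "hub": {"celeb"},
    "d": {"niche"},
    "celeb": set(), "niche": set(),
}
rep = reputation(follows)
print(f"celeb: {rep['celeb']:.3f}  niche: {rep['niche']:.3f}")
```

In this toy case “celeb” ends up with a higher score than “niche” despite both having a single follower, which is exactly the recursive effect that a raw follower count misses.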

In their article, the authors examine the distinct effects of popularity and reputation on the process of social influence. They find that there is a range of values in which the risk of a user becoming inactive grows with popularity and reputation. Popularity in Twitter resembles a proportional growth process that is faster in its strongly connected component, and that can be accelerated by reputation when users are already popular. They find that social influence on Twitter is mainly related to popularity rather than reputation, but that this growth of influence with popularity is sublinear. In sum, global network metrics are better predictors of inactivity and social influence, calling for analyses that go beyond local metrics like the number of followers.
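The two dynamics described above, proportional growth of popularity and sublinear growth of influence, can be sketched numerically. This is an illustrative toy model with invented parameters and functional forms, not the authors’ estimation:

```python
import random

# Toy model (invented parameters): popularity grows proportionally to
# its current value, while influence grows sublinearly in popularity.

def simulate_followers(steps, growth_rate=0.01, seed=42):
    """Multiplicative (proportional) growth: the expected number of new
    followers per step is proportional to the current follower count."""
    rng = random.Random(seed)
    followers = 10.0
    history = [followers]
    for _ in range(steps):
        # noisy proportional increment
        followers += followers * growth_rate * (0.5 + rng.random())
        history.append(followers)
    return history

def influence(popularity, alpha=0.7):
    """Sublinear influence: doubling popularity less than doubles
    influence when the exponent alpha is below 1."""
    return popularity ** alpha

history = simulate_followers(200)
print(f"final popularity: {history[-1]:.0f}")
print(f"influence ratio when popularity doubles: "
      f"{influence(2000) / influence(1000):.2f}  (< 2 because alpha < 1)")
```

Under proportional growth, already-popular users pull further ahead, while the sublinear exponent means each additional follower buys less and less extra influence.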

We caught up with the authors to discuss their findings:

Ed.: Twitter is a convenient data source for political scientists, but they tend to get criticised for relying on something that represents only a tiny facet of political activity. But Twitter is presumably very useful as a way of uncovering more fundamental / generic patterns of networked human interaction?

David: Twitter as a data source to study human behaviour is both powerful and limited. Powerful because it allows us to quantify and analyze human behaviour at scales and resolutions that are simply impossible to reach with traditional methods, such as experiments or surveys. But also limited because not every aspect of human behaviour is captured by Twitter and using its data comes with significant methodological challenges, for example regarding sampling biases or platform changes. Our article is an example of an analysis of general patterns of popularity and influence that are captured by spreading information in Twitter, which only make sense beyond the limitations of Twitter when we frame the results with respect to theories that link our work to previous and future scientific knowledge in the social sciences.

Ed.: How often do theoretical models (i.e. describing the behaviour of a network in theory) get linked up with empirical studies (i.e. of a network like Twitter in practice) but also with qualitative studies of actual Twitter users? And is Twitter interesting enough in itself for anyone to attempt to develop an overall theoretico-empirico-qualitative theory about it?

David: The link between theoretical models and large-scale data analyses of social media is less frequent than we all wish. But the gap between disciplines seems to have been narrowing in recent years, with more social scientists using online data sources and computer scientists referring better to theories and previous results in the social sciences. What seems to be quite undeveloped is an interface with qualitative methods, especially with large-scale analyses like ours.

Qualitative methods can provide what data science cannot: questions about important and relevant phenomena that can then be explained within a wider theory if validated against data. While this seems to me a fertile ground for interdisciplinary research, I doubt that Twitter in particular should be the paragon of such a combination of approaches. I advocate for starting research from the aspect of human behaviour that is the subject of study, and not from a particularly popular social media platform that happens to be used a lot today but might not be the standard tomorrow.

Ed.: I guess I’ve seen a lot of Twitter networks in my time, but not much in the way of directed networks, i.e. showing direction of flow of content (i.e. influence, basically) — or much in the way of a time element (i.e. turning static snapshots into dynamic networks). Is that fair, or am I missing something? I imagine it would be fun to see how (e.g.) fake news or political memes propagate through a network?

David: While Twitter provides amazing volumes of data, its programming interface is notorious for the absence of two key sources: the date when follower links are created and the precise path of retweets. The reason for the general picture of snapshots over time is that researchers cannot fully trace back the history of a follower network; they can only monitor it with a certain frequency to overcome the fact that links do not have a date attached.

The picture of information flows is generally missing because, when looking up a retweet, we can see the original tweet that is being retweeted, but not whether the retweet was of a friend’s retweet. This way, without special access to Twitter data or alternative sources, all information flows look like stars around the original tweet, rather than propagation trees through a social network that would allow the precise analysis of fake news or memes.

Ed.: Given all the work on Twitter, how well-placed do you think social scientists would be to advise a political campaign on “how to create an influential network” beyond just the obvious (Tweet well and often, and maybe hire a load of bots). i.e. are there any “general rules” about communication structure that would be practically useful to campaigning organisations?

David: When we talk about influence on Twitter, we usually talk about rather superficial behaviour, such as retweeting content or clicking on a link. This should not be mistaken for a more substantial kind of influence, the kind that makes people change their opinion or go to vote. Evaluating the real impact of Twitter influence is a bottleneck for how much social scientists can advise a political campaign. I would say that rather than providing general rules that can be applied everywhere, social scientists and computer scientists can be much more useful when advising, tracking, and optimizing individual campaigns that take into account the details and idiosyncrasies of the people that might be influenced by the campaign.

Ed.: Random question: but where did “computational social science” emerge from – is it actually quite dependent on Twitter (and Wikipedia?), or are there other commonly-used datasets? And are computational social science, “big data analytics”, and (social) data science basically describing the same thing?

David: Tracing back the meaning and influence of “computational social science” could take a whole book! My impression is that the concept started a few decades ago as a spin on “sociophysics”, where the term “computational” was used as in “computational model”, emphasizing a focus on social science away from toy model applications from physics. Then the influential Science article by David Lazer and colleagues in 2009 defined the term as the application of digital trace datasets to test theories from the social sciences, leaving the whole computational modelling outside the frame. In that case, “computational” was used more as it is used in “computational biology”, to refer to social science with increased power and speed thanks to computer-based technologies. Later it seems to have converged back into a combination of both the modelling and the data analysis trends, as in the “Manifesto of computational social science” by Rosaria Conte and colleagues in 2012, inspired by the fact that we need computational modelling techniques from complexity science to understand what we observe in the data.

The Twitter and Wikipedia dependence of the field is just a path dependency due to the ease and open access to those datasets, and a key turning point in the field is to be able to generalize beyond those “model organisms”, as Zeynep Tufekci calls them. One can observe these fads in the latest computer science conferences, with the rising ones being Reddit and Github, or when looking at earlier research that heavily used product reviews and blog datasets. Computational social science seems to be maturing as a field, making sense of those datasets rather than just telling cool data-driven stories about one website or another. Perhaps we are beyond the peak of inflated expectations of the hype curve and the best part is yet to come.

With respect to big data and social data science, it is easy to get lost in the field of buzzwords. Big data analytics only deals with the technologies necessary to process large volumes of data, which could come from any source including social networks but also telescopes, seismographs, and any kind of sensor. These kinds of techniques are only sometimes necessary in computational social science, and are far from the field’s core topics.

Social data science is closer, but puts a stronger emphasis on problem-solving rather than testing theories from the social sciences. When using “data science” we usually try to emphasize a predictive or explorative aspect, rather than the confirmatory or generative approach of computational social science. The emphasis on theory and modelling of computational social science is the key difference here, linking back to my earlier comment about the role of computational modelling and complexity science in the field.

Ed.: Finally, how successful do you think computational social scientists will be in identifying any underlying “social patterns” — i.e. would you agree that the Internet is a “Hadron Collider” for social science? Or is society fundamentally too chaotic and unpredictable?

David: As web scientists like to highlight, the Web (not the Internet, which is the technical infrastructure connecting computers) is the largest socio-technical artifact ever produced by humanity. Rather than as a Hadron Collider, which is a tool to make experiments, I would say that the Web can be the Hubble telescope of social science: it lets us observe human behaviour at an amazing scale and resolution, not only capturing big data but also, fast, long, deep, mixed, and weird data that we never imagined before.

While I doubt that we will be able to predict society in some sort of “psychohistory” manner, I think that the Web can help us to understand much more about ourselves, including our incentives, our feelings, and our health. That can be useful knowledge to make decisions in the future and to build a better world without the need to predict everything.

Read the full article: Garcia, D., Mavrodiev, P., Casati, D., and Schweitzer, F. (2017) Understanding Popularity, Reputation, and Social Influence in the Twitter Society. Policy & Internet 9 (3) doi:10.1002/poi3.151

David Garcia was talking to blog editor David Sutcliffe.

How can we encourage participation in online political deliberation? https://ensr.oii.ox.ac.uk/how-can-we-encourage-participation-in-online-political-deliberation/ Fri, 01 Jun 2018 14:54:48 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4186 Political parties have been criticized for failing to link citizen preferences to political decision-making. But in an attempt to enhance policy representation, many political parties have established online platforms to allow discussion of policy issues and proposals, and to open up their decision-making processes. The Internet — and particularly the social web — seems to provide an obvious opportunity to strengthen intra-party democracy and mobilize passive party members. However, these mobilizing capacities are limited, and in most instances, participation has been low.

In their Policy & Internet article “Does the Internet Encourage Political Participation? Use of an Online Platform by Members of a German Political Party,” Katharina Gerl, Stefan Marschall, and Nadja Wilker examine the German Greens’ online collaboration platform to ask why only some party members and supporters use it. The platform aims to improve the inclusion of party supporters and members in the party’s opinion-formation and decision-making process, but it has failed to reach inactive members. Instead, those who have already been active in the party also use the online platform. It also seems that classical resources such as education and employment status do not (directly) explain differences in participation; instead, participation is motivated by process-related and ideological incentives.

We caught up with the authors to discuss their findings:

Ed.: You say “When it comes to explaining political online participation within parties, we face a conceptual and empirical void”. Can you explain briefly what the offline models are, and why they don’t work for the Internet age?

Katharina / Stefan / Nadja: According to Verba et al. (1995) the reasons for political non-participation can be boiled down to three factors: (1) citizens do not want to participate, (2) they cannot, (3) nobody asked them to. Speaking model-wise we can distinguish three perspectives: Citizens need certain resources like education, information, time and civic skills to participate (resource model and civic voluntarism model). The social psychological model looks at the role of attitudes and political interest that are supposed to increase participation. In addition to resources and attitudes, the general incentives model analyses how motives, costs and benefits influence participation.

These models can be applied to online participation as well, but findings for the online context indicate that the mechanisms do not always work as they do offline. For example, age plays out differently for online participation. Generally, the models have to be specified for each participation context. This especially applies to the online context, as forms of online participation sometimes demand different resources, skills, or motivational factors. Therefore, we have to adapt and supplement the models with additional online factors like internet skills and internet sophistication.

Ed.: What’s the value to a political party of involving its members in policy discussion? (i.e. why go through the bother?)

Katharina / Stefan / Nadja: Broadly speaking, there are normative and rational reasons for that. At least for the German parties, intra-party democracy plays a crucial role. The involvement of members in policy discussion can serve as a means to strengthen the integration and legitimation power of a party. Additionally, the involvement of members can have a mobilizing effect on the party on the ground. This can positively influence the linkage between the party in central office, the party on the ground, and the societal base. Furthermore, member participation can be a way to react to dissatisfaction within a party.

Ed.: Are there any examples of successful “public deliberation” — i.e. is this maybe just a problem of getting disparate voices to usefully engage online, rather than a failure of political parties per se?

Katharina / Stefan / Nadja: This is definitely not unique to political parties. The problems we observe regarding online public deliberation in political parties also apply to other online participation platforms: political participation and especially public deliberation require time and effort for participants, so they will only be willing to engage if they feel they benefit from it. But the benefits of participation may remain unclear as public deliberation – by parties or other initiators – often takes place without a clear goal or a real say in decision-making for the participants. Initiators of public deliberation often fail to integrate processes of public deliberation into formal and meaningful decision-making procedures. This leads to disappointment for potential participants who might have different expectations concerning their role and scope of influence. There is a risk of a vicious circle and disappointed expectations on both sides.

Ed.: Based on your findings, what would you suggest that the Greens do in order to increase participation by their members on their platform?

Katharina / Stefan / Nadja: Our study shows that the members of the Greens are generally willing to participate online and appreciate this opportunity. However, the survey also revealed that the most important incentive for them is to have an influence on the party’s decision-making. We would suggest that the Greens create an actual cause for participation, meaning to set clear goals and to integrate it into specific and relevant decisions. Participation should not be an end in itself!

Ed.: How far do political parties try to harness deliberation where it happens in the wild e.g. on social media, rather than trying to get people to use bespoke party channels? Or might social media users see this as takeover by the very “establishment politics” they might have abandoned, or be reacting against?

Katharina / Stefan / Nadja: Parties do not constrain their online activities to their own official platforms and channels but also try to develop strategies for influencing discourses in the wild. However, this works much better and has much more authenticity as well as credibility if it isn’t parties as abstract organizations but rather individual politicians such as members of parliament who engage in person on social media, for example by using Twitter.

Ed.: How far have political scientists understood the reasons behind the so-called “crisis of democracy”, and how to address it? And even if academics came up with “the answer” — what is the process for getting academic work and knowledge put into practice by political parties?

Katharina / Stefan / Nadja: The alleged “crisis of democracy” is seen first and foremost as a crisis of representation, in which the gap between political elites and citizens has widened drastically in recent years, giving room to populist movements and parties in many democracies. Our impression is that, facing the rise of populism in many countries, politicians have become more and more attentive to discussions and findings in political science, which have been addressing these linkage problems for years. But perhaps this is like shutting the stable door after the horse has bolted.

Read the full article: Gerl, K., Marschall, S., and Wilker, N. (2016) Does the Internet Encourage Political Participation? Use of an Online Platform by Members of a German Political Party. Policy & Internet doi:10.1002/poi3.149

Katharina Gerl, Stefan Marschall, and Nadja Wilker were talking to blog editor David Sutcliffe.

Making crowdsourcing work as a space for democratic deliberation https://ensr.oii.ox.ac.uk/making-crowdsourcing-work-as-a-space-for-democratic-deliberation/ Sat, 26 May 2018 12:44:39 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4245 There are many instances of crowdsourcing in both local and national governance across the world, as governments implement crowdsourcing as part of their open government practices aimed at fostering civic engagement and knowledge discovery for policies. But is crowdsourcing conducive to deliberation among citizens, or is it essentially just a consulting mechanism for information gathering? Second, if it is conducive to deliberation, what kind of deliberation is it? (And is it democratic?) Third, how representative are the online deliberative exchanges of the wishes and priorities of the larger population?

In their Policy & Internet article “Crowdsourced Deliberation: The Case of the Law on Off-Road Traffic in Finland”, Tanja Aitamurto and Hélène Landemore examine a partially crowdsourced reform of the Finnish off-road traffic law. The aim of the process was to search for knowledge and ideas from the crowd, enhance people’s understanding of the law, and to increase the perception of the policy’s legitimacy. The participants could propose ideas on the platform, vote others’ ideas up or down, and comment.

The authors find that despite the lack of explicit incentives for deliberation in the crowdsourced process, crowdsourcing indeed functioned as a space for democratic deliberation; that is, an exchange of arguments among participants characterized by a degree of freedom, equality, and inclusiveness. An important finding, in particular, is that despite the lack of statistical representativeness among the participants, the deliberative exchanges reflected a diversity of viewpoints and opinions, tempering to a degree the worry about the bias likely introduced by the self-selected nature of citizen participation.

They introduce the term “crowdsourced deliberation” to mean the deliberation that happens (intentionally or unintentionally) in crowdsourcing, even when the primary aim is to gather knowledge rather than to generate deliberation. In their assessment, crowdsourcing in the Finnish experiment was conducive to some degree of democratic deliberation, even though, strikingly, the process was not designed for it.

We caught up with the authors to discuss their findings:

Ed.: There’s a lot of discussion currently about “filter bubbles” (and indeed fake news) damaging public deliberation. Do you think collaborative crowdsourced efforts (that include things like Wikipedia) help at all more generally, or .. are we all damned to our individual echo chambers?

Tanja and Hélène: Deliberation, whether taking place within a crowdsourced policymaking process or in another context, has a positive impact on society, when the participants exchange knowledge and arguments. While all deliberative processes are, to a certain extent, their own microcosms, there is typically at least some cross-cutting exposure of opinions and perspectives among the crowd. The more diverse the participant crowd is and the larger the number of participants, the more likely there is diversity also in the opinions, preventing strictly siloed echo chambers.

Moreover, it all comes down to design and incentives in the end. In our crowdsourcing platform we did not particularly try to attract a cross-cutting section of the population, so there was a risk of having only a relatively homogeneous population self-selecting into the process, which is what happened to a degree, demographically at least (over 90% of our participants were educated male professionals). In terms of ideas, though, the pool was much more diverse than the demography would have suggested, and techniques we used (like clustering) helped maintain the visibility (to the researchers) of the minority views.

That said, if what you are after is maximal openness and cross-cutting exposure, nothing beats random selection, like the one used in mini-publics of all kinds, from citizens’ juries to deliberative polls to citizens’ assemblies… That’s what Facebook and Twitter should use in order to break the filter bubbles in which people lock themselves: algorithms that randomize the content of our newsfeed and expose us to a vast range of opinions, rather than algorithms that maximize similarity with what we already like.

But for us the goal was different and so our design was different. Our goal was to gather knowledge and ideas, and for this self-selection (the sort also at play in Wikipedia) is better than random selection: whereas with random selection you shut the door on most people, a crowdsourcing platform leaves the door open to anyone who can self-identify as having a relevant form of knowledge and has the motivation to participate. The remarkable thing in our case is that even though we didn’t design the process for democratic deliberation, it occurred anyway, between the cracks of the design so to speak.
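The clustering mentioned above could, for instance, work along these lines. This is a hypothetical sketch with invented example ideas and an invented similarity threshold, not the authors’ actual pipeline: contributions are grouped by textual similarity, so that small clusters corresponding to minority viewpoints remain visible alongside large majority clusters.

```python
from collections import Counter
import math

# Hypothetical sketch: cluster crowdsourced contributions by textual
# similarity so minority viewpoints stay visible. The example ideas and
# the 0.3 threshold are invented for illustration.

def bag_of_words(text):
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def greedy_cluster(texts, threshold=0.3):
    """Assign each text to the first cluster whose representative text
    is similar enough; otherwise start a new cluster."""
    clusters = []  # list of (representative_bow, member_indices)
    for i, text in enumerate(texts):
        bow = bag_of_words(text)
        for rep, members in clusters:
            if cosine(bow, rep) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((bow, [i]))
    return [members for _, members in clusters]

ideas = [
    "ban off-road traffic in protected nature areas",
    "ban all traffic in protected nature areas",
    "require a licence for snowmobile drivers",
    "snowmobile drivers should need a licence",
    "compensate landowners for trail damage",
]
for members in greedy_cluster(ideas):
    print([ideas[i] for i in members])
```

The last idea forms a cluster of one; with thousands of contributions, such singleton or small clusters are exactly the minority views that raw vote counts would bury.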

Ed.: I suppose crowdsourcing won’t work unless there is useful cooperation: do you think these successful relationships self-select on a platform, or do things perhaps work precisely because people may NOT be discussing other, divisive things (like immigration) when working together on something apparently unrelated, like an off-road law?

Tanja and Hélène: There is a varying degree of collaboration in crowdsourcing. In crowdsourced policymaking, the crowd does not typically collaborate on drafting the law (unlike the crowd does in Wikipedia writing); rather, they respond to prompts from the crowdsourcer, in this case the government. In this type of crowdsourcing, which was the case in the crowdsourced off-road traffic law reform, the crowd members don’t need to collaborate with each other for the process to achieve its goal of finding new knowledge. The crowd can, of course, decide not to collaborate with the government and not answer the prompts, or start sabotaging the process.

The degree and success of collaboration will depend on the design and the goals of your experiment. In our case, crowdsourcing might have worked even without collaboration because our goal was to gather knowledge and information, which can be done by harvesting the contributions of the individual members of the crowd without them interacting with each other. But if what you are after is co-creation or deliberation, then yes you need to create the background conditions and incentives for cooperation.

Cooperation may require bracketing some sensitive topics or else learning to disagree in respectful ways. Deliberation, and more broadly cooperation are social skills — human technologies you might say — that we still don’t know how to use very well. This comes in part from the fact that our school systems do not teach those skills, focused as they are on promoting individual rather than collaborative success and creating an eco-system of zero-sum competition between students, when in the real world there is almost nothing you can do all by yourself and we would be much better off nurturing collaborative skills and the art or technology of deliberation.

Ed.: Have there been any other examples in Finland — i.e. is crowdsourcing (and deliberation) something that is seen as useful and successful by the government?

Tanja and Hélène: Yes, there have been several crowdsourced policymaking processes in Finland. One is the crowdsourced Limited Liability Housing Company Law reform, organized by the Finnish Ministry of Justice. We examined the quality of deliberation in that case, and the findings show that the quality of deliberation, as measured by the Discourse Quality Index, was pretty good.

Read the full article: Aitamurto, T. and Landemore, H. (2016) Crowdsourced Deliberation: The Case of the Law on Off-Road Traffic in Finland. Policy & Internet 8 (2) doi:10.1002/poi3.115.


Tanja Aitamurto and Hélène Landemore were talking to blog editor David Sutcliffe.

Habermas by design: designing public deliberation into online platforms https://ensr.oii.ox.ac.uk/habermas-by-design-designing-public-deliberation-into-online-platforms/ Thu, 03 May 2018 13:59:07 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4673 Advocates of deliberative democracy have always hoped that the Internet would provide the means for an improved public sphere. But what particular platform features should we look to, to promote deliberative debate online? In their Policy & Internet article “Design Matters! An Empirical Analysis of Online Deliberation on Different News Platforms“, Katharina Esau, Dennis Friess, and Christiane Eilders show how differences in the design of various news platforms result in significant variation in the quality of deliberation; measured as rationality, reciprocity, respect, and constructiveness.

The empirical findings of their comparative analysis across three types of news platforms broadly support the assumption that platform design affects the level of deliberative quality of user comments. Deliberation was most likely to be found in news fora, which are of course specifically designed to initiate user discussions. News websites showed a lower level of deliberative quality, with Facebook coming last in terms of meeting deliberative design criteria and sustaining deliberation. However, while Facebook performed poorly in terms of overall level of deliberative quality, it did promote a high degree of general engagement among users.

The study’s findings suggest that deliberative discourse in the virtual public sphere of the Internet is indeed possible, which is good news for advocates of deliberative theory. However, this will only be possible by carefully considering how platforms function, and how they are designed. Some may argue that the “power of design” (shaped by organizers like media companies), contradicts the basic idea of open debate amongst equals where the only necessary force is Habermas’s “forceless force of the better argument”. These advocates of an utterly free virtual public sphere may be disappointed, given it’s clear that deliberation is only likely to emerge if the platform is designed in a particular way.

We caught up with the authors to discuss their findings:

Ed: Just briefly: what design features did you find helped support public deliberation, i.e. reasoned, reciprocal, respectful, constructive discussion?

Katharina / Dennis / Christiane: There are several design features which are known to influence online deliberation. However, in this study we particularly focus on moderation, asynchronous discussion, clear topic definition, and the availability of information, which we have found to have a positive influence on the quality of online deliberation.

Ed.: I associate “Internet as a deliberative space” with Habermas, but have never read him: what’s the short version of what he thinks about “the public sphere” — and how the Internet might support this?

Katharina / Dennis / Christiane: Well, Habermas describes the public sphere as a space where free and equal people discuss topics of public import in a specific way. The respectful exchange of rational reasons is crucial in this normative ideal. Due to its open architecture, the Internet has often been presented as providing the infrastructure for large scale deliberation processes. However, Habermas himself is very skeptical as to whether online spaces support his ideas on deliberation. Ironically, he is one of the most influential authors in online deliberation scholarship.

Ed.: What do advocates of the Internet as a “deliberation space” hope for — simply that people will feel part of a social space / community if they can like things or comment on them (and see similar viewpoints); or that it will result in actual rational debate, and people changing their minds to “better” viewpoints, whatever they may be? I can personally see a value for the former, but I can’t imagine the latter ever working, i.e. given people basically don’t change?

Katharina / Dennis / Christiane: We think both hopes are present in the current debate, and we partly agree with your perception that changing minds seems to be difficult. But we may also be facing some methodological or empirical issues here, because a change of mind is not an easy thing to measure. We know from other studies that deliberation can indeed cause changes of opinion. However, most of this probably takes place within the individual’s mind. Robert E. Goodin has called this process “deliberation within”, and it is not accessible through content analysis. People do not articulate “Oh, thanks for this argument, I have changed my mind”, but they probably take something away from online discussions which makes them more open minded.

Ed.: Does Wikipedia provide an example where strangers have (oddly!) come together to create something of genuine value — but maybe only because they’re actually making a specific public good? Is the basic problem of the idea of the “Internet supporting public discourse” that this is just too aimless an activity, with no obvious individual or collective benefit?

Katharina / Dennis / Christiane: We think Wikipedia is a very particular case. However, we can learn from this case that the collective goal plays a very important role for the quality of contributions. We know from empirical research that if people have the intention of contributing to something meaningful, discussion quality is significantly higher than in online spaces without that desire to have an impact.

Ed.: I wonder: isn’t Twitter the place where “deliberation” now takes place? How does it fit into, or inform, the deliberation literature, which I am assuming has largely focused on things like discussion fora?

Katharina / Dennis / Christiane: This depends on the definition of the term “deliberation”. We would argue that the limitation to 280 characters is probably not the best design feature for meaningful deliberation. However, we may have to think about deliberation in less complex contexts in order to reach more people; but this is a polarizing debate.

Ed.: You say that “outsourcing discussions to social networking sites such as Facebook is not advisable due to the low level of deliberative quality compared to other news platforms”. Facebook has now decided that instead of “connecting the world” it’s going to “bring people closer together” — what would you recommend that they do to support this, in terms of the design of the interactive (or deliberative) features of the platform?

Katharina / Dennis / Christiane: This is a difficult one! We think that the quality of deliberation on Facebook would strongly benefit from moderators, who should be more present on the platform to structure the discussions. By this we do not mean only professional moderators but also participative forms of moderation, which could be encouraged by mechanisms that support such behaviour.

Read the full article: Katharina Esau, Dennis Friess, and Christiane Eilders (2017) Design Matters! An Empirical Analysis of Online Deliberation on Different News Platforms. Policy & Internet 9 (3) 321-342.

Katharina (@kathaesa), Dennis, and Christiane were talking to blog editor David Sutcliffe.

Stormzy 1: The Sun 0 — Three Reasons Why #GE2017 Was the Real Social Media Election https://ensr.oii.ox.ac.uk/stormzy-1-the-sun-0-three-reasons-why-ge2017-was-the-real-social-media-election/ Thu, 15 Jun 2017 15:51:50 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4261 After its initial appearance as a cynical but safe device by Theresa May to ratchet up the Conservative majority, the UK general election of 2017 turned out to be one of the most exciting and unexpected of all time. One of the many things for which it will be remembered is as the first election where it was the social media campaigns that really made the difference to the relative fortunes of the parties, rather than traditional media. And it could be the first election where the right wing tabloids finally ceded their influence to new media, their power over politics broken according to some.

Social media have been part of the UK electoral landscape for a while. In 2015, many of us attributed the Conservative success in part to their massive expenditure on targeted Facebook advertising, 10 times more than Labour, whose ‘bottom-up’ Twitter campaign seemed mainly to have preached to the converted. Social media advertising was used more successfully by Leave.EU than Remain in the referendum (although some of us cautioned against blaming social media for Brexit). But in both these campaigns, the relentless attack of the tabloid press was able to strike at the heart of the Labour and Remain campaigns and was widely credited for having influenced the result, as in so many elections from the 1930s onwards.

However, in 2017 Labour’s campaign was widely regarded as having made a huge positive difference to the party’s share of the vote – unexpectedly rising by 10 percentage points on 2015 – in the face of a typically sustained and vicious attack by the Daily Mail, the Sun and the Daily Express. Why? There are (at least) three reasons.

First, increased turnout of young people is widely regarded to have driven Labour’s improved share of the vote – and young people do not in general read newspapers, not even online. Instead, they spend increasing proportions of their time on social media platforms on mobile phones, particularly Instagram (with 10 million UK users, mostly under 30) and Snapchat (used by half of 18-34 year olds), both mobile-first platforms. On these platforms, although they may see individual stories that are shared or appear on their phone’s news portal, they may not even see the front page headlines that used to make politicians shake.

Meanwhile, what people do pay attention to and share on these platforms are videos and music, so popular artists amass huge followings. Some of the most popular came out in favour of Labour under the umbrella hashtag #Grime4Corbyn, with artists like Stormzy, JME (whose Facebook interview with Corbyn was viewed 2.5 million times) and Skepta with over a million followers on Instagram alone.

A leaflet from Croydon pointing out that ‘Even your Dad has more Facebook friends’ than the 2015 vote difference between Conservative and Labour and showing Stormzy saying ‘Vote Labour!’ was shared millions of times. Obviously we don’t know how much difference these endorsements made – but by sharing videos and images, they certainly spread the idea of voting for Corbyn across huge social networks.

Second, Labour have overtaken the Tories in reaching out across social platforms used by young people with an incredibly efficient advertising strategy. There is no doubt that in 2017 the Conservatives ran a relentless campaign of anti-Corbyn attack ads on Facebook and Instagram. But for the Conservatives, social media are just for elections. By contrast, Labour have been using these channels for two years now – Corbyn has been active on Snapchat since becoming Labour leader in 2015 (when some of us were surprised to hear our teenage offspring announcing brightly ‘I’m friends with Jeremy Corbyn on Snapchat’).

That means that by the time of the election Corbyn and various fiercely pro-Labour online-only news outlets like the Canary had acquired a huge following among this demographic, meaning not having to pay for ads. And if you have followers to spread your message, you can be very efficient with advertising spend. While the Conservatives spent more than £1m on direct advertising with Facebook etc., nearly 10 million people watched pro-Labour videos on Facebook that cost less than £2K to make. Furthermore, there is some evidence that the relentless negativity of the Conservative advertising campaign particularly put young people off. After all, the advertising guidelines for Instagram advise ‘Images should tell a story/be inspirational’!

On the day before the election, the Daily Mail ran a frontpage headline ‘Apologists for Terror’, with a photo of Diane Abbott along with Corbyn and John McDonnell. But that morning Labour announced that Abbott was standing aside due to illness. The paper circulating around the networks and sitting on news-stands was already out of date. Digital natives are used to real-time information; they are never going to be swayed by something so clearly past its sell-by date.

Likewise, the Sun’s election day image – a grotesque picture of Jeremy “Corbinned” in a dustbin – was photoshopped before the first editions landed, replacing Corbyn with an equally grotesque photograph of May in his place. It won’t have reached the same audience, perhaps, but it will have reached a lot of people.

It will be a long time before we can really assess the influence of social media in the 2017 election, and some things we may never know. That is because all the data that would allow us to do so is held by the platforms themselves – Facebook, Instagram, Snapchat and so on. That is a crucial issue for the future of our democracy, already bringing calls for some transparency in political advertising both by social media platforms and the parties themselves. Under current conditions the Electoral Commission is incapable of regulating election advertising effectively, or judging (for example) how much national parties spend on targeted advertising locally. This is something that urgently needs addressing in the coming months, especially given Britain’s current penchant for elections.

The secret and often dark world of personalized political advertising on social media, where strong undercurrents of support remain hidden to the outside world, is one reason why polls fail to predict election results until after the election has taken place. Having the data to understand the social media election would also explain some of the volatility in elections these days, as explored in our book Political Turbulence: How Social Media Shape Collective Action. By investigating large-scale data on political activity my co-authors and I showed that social media are injecting the same sort of instability into politics as they have into cultural markets, where most artists gain no traction at all but a (probably unpredictable) few become massively popular – the young singer Ed Sheeran’s ‘Shape of You’ has been streamed one billion times on Spotify alone.

In 2017, Stormzy and co. provided a more direct link between political and music markets, and this kind of development will ensure that politics in the age of social media will remain turbulent and unpredictable. We can’t claim to have predicted Labour’s unexpected success in this election, but we can claim to have foreseen that it couldn’t be predicted.

Could Voting Advice Applications force politicians to keep their manifesto promises? https://ensr.oii.ox.ac.uk/could-voting-advice-applications-force-politicians-to-keep-their-manifesto-promises/ Mon, 12 Jun 2017 09:00:43 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4199 In many countries, Voting Advice Applications (VAAs) have become an almost indispensable part of the electoral process: they play an important role in the campaigning activities of parties and candidates, form an essential element of media coverage of elections, and are widely used by citizens. A number of studies have shown that VAA use has an impact on the cognitive behaviour of users, on their likelihood to participate in elections, and on the choice of the party they vote for.

These applications are based on the idea of issue and proximity voting — the parties and candidates recommended by VAAs are those with the highest number of matching positions on a number of political questions and issues. Many of these questions are much more specific and detailed than party programs and electoral platforms, and show the voters exactly what the party or candidates stand for and how they will vote in parliament once elected. In his Policy & Internet article “Do VAAs Encourage Issue Voting and Promissory Representation? Evidence From the Swiss Smartvote,” Andreas Ladner examines the extent to which VAAs alter the way voters perceive the meaning of elections, and encourage them to hold politicians to account for election promises.

His main hypothesis is that VAAs lead to “promissory representation” — where parties and candidates are elected for their promises and sanctioned by the electorate if they don’t keep them. He suggests that as these tools become more popular, the “delegate model” is likely to increase in popularity: i.e. one in which politicians are regarded as delegates voted into parliament to keep their promises, rather than being voted a free mandate to act how they see fit (the “trustee model”).

We caught up with Andreas to discuss his findings:

Ed.: You found that issue-voters were more likely (than other voters) to say they would sanction a politician who broke their election promises. But also that issue voters are less politically engaged. So is this maybe a bit moot: i.e. if the people most likely to force the “delegate model” system are the least likely to enforce it?

Andreas: It perhaps looks a bit moot at first, but consider what happens if the less engaged are given the possibility to sanction politicians more easily, or by default. Sanctioning a politician who breaks an election promise is not per se a good thing; it depends on the reason why he or she broke it, on the situation, and on the promise. VAAs can easily provide information on the extent to which candidates keep their promises — and then it gets very easy to sanction them simply for that, without taking other arguments into consideration.

Ed.: Do voting advice applications work best in complex, multi-party political systems? (I’m not sure anyone would need one to distinguish between Trump / Clinton, for example?)

Andreas: Yes, I believe that in very complex systems – like for example in the Swiss case, where voters not only vote for parties but also for up to 35 different candidates – VAAs are particularly useful since they help to process a huge amount of information. If the choice is only between two parties or two candidates that are completely different, then VAAs are less helpful.

Ed.: I guess the recent elections / referendum I am most familiar with (US, UK, France) have been particularly lurid and nasty: but I guess VAAs rely on a certain quiet rationality to work as intended? How do you see your Swiss results (and Swiss elections, generally) comparing with these examples? Do VAAs not just get lost in the noise?

Andreas: The idea of VAAs is to help voters make better informed choices. This is, of course, opposed to decisions based on emotions. In Switzerland, elections are not of utmost importance, due to specific features of our political system such as direct democracy and power sharing, but voters seem to appreciate the information provided by smartvote. Almost 20% of voters cast their vote after having consulted the website.

Ed.: Macron is a recent example of someone who clearly sought (and received) a general mandate, rather than presenting a detailed platform of promises. Is that unusual? He was criticised in his campaign for being “too vague,” but it clearly worked for him. What use are manifesto pledges in politics — as opposed to simply making clear to the electorate where you stand on the political spectrum?

Andreas: Good VAAs combine electoral promises on concrete issues as well as more general political positions. Voters can base their decisions on either of them, or on a combination of both of them. I am not arguing in favour of one or the other, but they clearly have different implications. The former is closer to the delegate model, the latter to the trustee model. I think good VAAs should make the differences clear and should even allow the voters to choose.

Ed.: I guess Trump is a contrasting example of someone whose campaign was all about promises (while also seeking a clear mandate to “make America great again”), but who has lied, and broken these (impossible) promises seemingly faster than people can keep track of them. Do you think his supporters care, though?

Andreas: His promises were too far away from what he can possibly keep. Quite a few of his voters, I believe, do not want them to be fully realized, but rather want the US to move a bit more in this direction.

Ed.: I suppose another example of an extremely successful quasi-pledge was the Brexit campaign’s obviously meaningless — but hugely successful — “We send the EU £350 million a week; let’s fund our NHS instead.” Not to sound depressing, but do promises actually mean anything? Is it the candidate / issue that matters (and the media response to that), or the actual pledges?

Andreas: I agree that the media play an important role, and not always in the direction they intend. I do not think that it is the £350 million a week that made the difference. It is much more a general discontent, and a situation which was not sufficiently explained and legitimized, which led to this unexpected decision. If you lose support for your policy, then it gets much easier for your opponents. It is difficult to imagine that you can get a majority built on nothing.

Ed.: I’ve read all the articles in the Policy & Internet special issue on VAAs: one thing that struck me is that there’s lots of incomplete data, e.g. no knowledge of how people actually voted in the end (or would vote in future). What are the strengths and weaknesses of VAAs as a data source for political research?

Andreas: The quality of the data varies between countries and voting systems. We have a self-selection bias in the use of VAAs, and often also in the surveys conducted among the users. In general we don’t know how they voted, and we have to believe what they tell us. In many respects the data does not differ that much from what we get from classic electoral studies, especially since they also encounter difficulties in addressing a representative sample. VAAs usually have much larger Ns on the side of the voters, generate more information about their political positions and preferences, and provide very interesting information about the candidates and parties.

Read the full article: Ladner, A. (2016) Do VAAs Encourage Issue Voting and Promissory Representation? Evidence From the Swiss Smartvote. Policy & Internet 8 (4). DOI: doi:10.1002/poi3.137.


Andreas Ladner was talking to blog editor David Sutcliffe.

Did you consider Twitter’s (lack of) representativeness before doing that predictive study? https://ensr.oii.ox.ac.uk/did-you-consider-twitters-lack-of-representativeness-before-doing-that-predictive-study/ Mon, 10 Apr 2017 06:12:36 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4062 Twitter data have many qualities that appeal to researchers. They are extraordinarily easy to collect. They are available in very large quantities. And with a simple 140-character text limit they are easy to analyze. As a result of these attractive qualities, over 1,400 papers have been published using Twitter data, including many attempts to predict disease outbreaks, election results, film box office gross, and stock market movements solely from the content of tweets.

Easy availability of Twitter data links nicely to a key goal of computational social science. If researchers can find ways to impute user characteristics from social media, then the capabilities of computational social science would be greatly extended. However, few papers consider the digital divide among Twitter users. Yet the question of who uses Twitter has major implications for research attempts to use the content of tweets for inference about population behaviour. Do Twitter users share identical characteristics with the population of interest? For what populations are Twitter data actually appropriate?

A new article by Grant Blank published in Social Science Computer Review provides a multivariate empirical analysis of the digital divide among Twitter users, comparing Twitter users and nonusers with respect to their characteristic patterns of Internet activity and to certain key attitudes. It thereby fills a gap in our knowledge about an important social media platform, and it joins a surprisingly small number of studies that describe the population that uses social media.

Comparing British (OxIS survey) and US (Pew) data, Grant finds that generally, British Twitter users are younger, wealthier, and better educated than other Internet users, who in turn are younger, wealthier, and better educated than the offline British population. American Twitter users are also younger and wealthier than the rest of the population, but they are not better educated. Twitter users are disproportionately members of elites in both countries. Twitter users also differ from other groups in their online activities and their attitudes.

Under these circumstances, any collection of tweets will be biased, and inferences based on analysis of such tweets will not match the population characteristics. A biased sample can’t be corrected by collecting more data; and these biases have important implications for research based on Twitter data, suggesting that Twitter data are not suitable for research where representativeness is important, such as forecasting elections or gaining insight into attitudes, sentiments, or activities of large populations.

Read the full article: Blank, G. (2016) The Digital Divide Among Twitter Users and Its Implications for Social Research. Social Science Computer Review. DOI: 10.1177/0894439316671698

We caught up with Grant to explore the implications of the findings:

Ed.: Despite your cautions about lack of representativeness, you mention that the bias in Twitter could actually make it useful to study (for example) elite behaviours: for example in political communication?

Grant: Yes. If you want to study elites and channels of elite influence then Twitter is a good candidate. Twitter data could be used as one channel of elite influence, along with other online channels like social media or blog posts, and offline channels like mass media or lobbying. There is an ecology of media and Twitter is one part.

Ed.: You also mention that Twitter is actually quite successful at forecasting certain offline, commercial behaviours (e.g. box office receipts).

Grant: Right. Some commercial products are disproportionately used by wealthier or younger people. That certainly would include certain forms of mass entertainment like cinema. It also probably includes a number of digital products like smartphones, especially more expensive phones, and wearable devices like a Fitbit. If a product is disproportionately bought by the same population groups that use Twitter then it may be possible to forecast sales using Twitter data. Conversely, products disproportionately used by poorer or older people are unlikely to be predictable using Twitter.

Ed.: Is there a general trend towards abandoning expensive, time-consuming, multi-year surveys and polling? And do you see any long-term danger in that? i.e. governments and media (and academics?) thinking “Oh, we can just get it off social media now”.

Grant: Yes and no. There are certainly people who are thinking about it and trying to make it work. The ease and low cost of social media is very seductive. However, that has to be balanced against major weaknesses. First the population using Twitter (and other social media) is unclear, but it is not a random sample. It is just a population of Twitter users, which is not a population of interest to many.

Second, tweets are even less representative. As I point out in the article, over 40% of people with a Twitter account have never sent a tweet, and the top 15% of users account for 85% of tweets. So tweets are even less representative of any real-world population than Twitter users. What these issues mean is that you can’t calculate measures of error or confidence intervals from Twitter data. This is crippling for many academic and government uses.

Third, Twitter’s limited message length and simple interface tends to give it advantages on devices with restricted input capability, like phones. It is well-suited for short, rapid messages. These characteristics tend to encourage Twitter use for political demonstrations, disasters, sports events, and other live events where reports from an on-the-spot observer are valuable. This suggests that Twitter usage is not like other social media or like email or blogs.

Fourth, researchers attempting to extract the meaning of words have 140 characters to analyze and they are littered with abbreviations, slang, non-standard English, misspellings and links to other documents. The measurement issues are immense. Measurement is hard enough in surveys when researchers have control over question wording and can do cognitive interviews to understand how people interpret words.

With Twitter (and other social media) researchers have no control over the process that generated the data, and no theory of the data generating process. Unlike surveys, social media analysis is not a general-purpose tool for research. Except in limited areas where these issues are less important, social media is not a promising tool.

Ed.: How would you respond to claims that for example Facebook actually had more accurate political polling than anyone else in the recent US Election? (just that no-one had access to its data, and Facebook didn’t say anything)?

Grant: That is an interesting possibility. The problem is matching Facebook data with other data, like voting records. Facebook doesn’t know where people live. Finding their location would not be an easy problem. It is simpler because Facebook would not need an actual address; it would only need to locate the correct voting district or the state (for the Electoral College in US Presidential elections). Still, there would be error of unknown magnitude, probably impossible to calculate. It would be a very interesting research project. Whether it would be more accurate than a poll is hard to say.

Ed.: Do you think social media (or maybe search data) scraping and analysis will ever successfully replace surveys?

Grant: Surveys are such versatile, general purpose tools. They can be used to elicit many kinds of information on all kinds of subjects from almost any population. These are not characteristics of social media. There is no real danger that surveys will be replaced in general.

However, I can see certain specific areas where analysis of social media will be useful. Most of these are commercial areas, like consumer sentiments. If you want to know what people are saying about your product, then going to social media is a good, cheap source of information. This is especially true if you sell a mass market product that many people use and talk about; think: films, cars, fast food, breakfast cereal, etc.

These are important topics to some people, but they are a subset of things that surveys are used for. Too many things are not talked about, and some are very important. For example, there is the famous British reluctance to talk about money. Things like income, pensions, and real estate or financial assets are not likely to be common topics. If you are a government department or a researcher interested in poverty, the effect of government assistance, or the distribution of income and wealth, you have to depend on a survey.

There are a lot of other situations where surveys are indispensable. For example, if the OII wanted to know what kind of jobs OII alumni had found, it would probably have to survey them.

Ed.: Finally .. 1400 Twitter articles in .. do we actually know enough now to say anything particularly useful or concrete about it? Are we creeping towards a Twitter revelation or consensus, or is it basically 1400 articles saying “it’s all very complicated”?

Grant: Mostly researchers have accepted Twitter data at face value. Whatever people write in a tweet, it means whatever the researcher thinks it means. This is very easy and it avoids a whole collection of complex issues. All the hard work of understanding how meaning is constructed in Twitter and how it can be measured is yet to be done. We are a long way from understanding Twitter.

Read the full article: Blank, G. (2016) The Digital Divide Among Twitter Users and Its Implications for Social Research. Social Science Computer Review. DOI: 10.1177/0894439316671698


Grant Blank was talking to blog editor David Sutcliffe.

Five Pieces You Should Probably Read On: Fake News and Filter Bubbles https://ensr.oii.ox.ac.uk/five-pieces-you-should-probably-read-on-fake-news-and-filter-bubbles/ Fri, 27 Jan 2017 10:08:39 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3940 This is the second post in a series that will uncover great writing by faculty and students at the Oxford Internet Institute, things you should probably know, and things that deserve to be brought out for another viewing. This week: Fake News and Filter Bubbles!

Fake news, post-truth, “alternative facts”, filter bubbles — this is the news and media environment we apparently now inhabit, and that has formed the fabric and backdrop of Brexit (“£350 million a week”) and Trump (“This was the largest audience to ever witness an inauguration — period”). Do social media divide us, hide us from each other? Are you particularly aware of what content is personalised for you, what it is you’re not seeing? How much can we do with machine-automated or crowd-sourced verification of facts? And are things really any worse now than when Bacon complained in 1620 about the false notions that “are now in possession of the human understanding, and have taken deep root therein”?

 

1. Bernie Hogan: How Facebook divides us [Times Literary Supplement]

27 October 2016 / 1000 words / 5 minutes

“Filter bubbles can create an increasingly fractured population, such as the one developing in America. For the many people shocked by the result of the British EU referendum, we can also partially blame filter bubbles: Facebook literally filters our friends’ views that are least palatable to us, yielding a doctored account of their personalities.”

Bernie Hogan says it’s time Facebook considered ways to use the information it has about us to bring us together across political, ideological and cultural lines, rather than hide us from each other or push us into polarized and hostile camps. He says it’s not only possible for Facebook to help mitigate the issues of filter bubbles and context collapse; it’s imperative, and it’s surprisingly simple.

 

2. Luciano Floridi: Fake news and a 400-year-old problem: we need to resolve the ‘post-truth’ crisis [the Guardian]

29 November 2016 / 1000 words / 5 minutes

“The internet age made big promises to us: a new period of hope and opportunity, connection and empathy, expression and democracy. Yet the digital medium has aged badly because we allowed it to grow chaotically and carelessly, lowering our guard against the deterioration and pollution of our infosphere. […] some of the costs of misinformation may be hard to reverse, especially when confidence and trust are undermined. The tech industry can and must do better to ensure the internet meets its potential to support individuals’ wellbeing and social good.”

The Internet echo chamber satiates our appetite for pleasant lies and reassuring falsehoods, and has become the defining challenge of the 21st century, says Luciano Floridi. So far, the strategy for technology companies has been to deal with the ethical impact of their products retrospectively, but this is not good enough, he says. We need to shape and guide the future of the digital, and stop making it up as we go along. It is time to work on an innovative blueprint for a better kind of infosphere.

 

3. Philip Howard: Facebook and Twitter’s real sin goes beyond spreading fake news

3 January 2017 / 1000 words / 5 minutes

“With the data at their disposal and the platforms they maintain, social media companies could raise standards for civility by refusing to accept ad revenue for placing fake news. They could let others audit and understand the algorithms that determine who sees what on a platform. Just as important, they could be the platforms for doing better opinion, exit and deliberative polling.”

Only Facebook and Twitter know how pervasive fabricated news stories and misinformation campaigns have become during referendums and elections, says Philip Howard — and allowing fake news and computational propaganda to target specific voters is an act against democratic values. But in a time of weakening polling systems, withholding data about public opinion is actually their major crime against democracy, he says.

 

4. Brent Mittelstadt: Should there be a better accounting of the algorithms that choose our news for us?

7 December 2016 / 1800 words / 8 minutes

“Transparency is often treated as the solution, but merely opening up algorithms to public and individual scrutiny will not in itself solve the problem. Information about the functionality and effects of personalisation must be meaningful to users if anything is going to be accomplished. At a minimum, users of personalisation systems should be given more information about their blind spots, about the types of information they are not seeing, or where they lie on the map of values or criteria used by the system to tailor content to users.”

A central ideal of democracy is that political discourse should allow a fair and critical exchange of ideas and values. But political discourse is unavoidably mediated by the mechanisms and technologies we use to communicate and receive information, says Brent Mittelstadt. And content personalization systems and the algorithms they rely upon create a new type of curated media that can undermine the fairness and quality of political discourse.

 

5. Heather Ford: Verification of crowd-sourced information: is this ‘crowd wisdom’ or machine wisdom?

19 November 2013 / 1400 words / 6 minutes

“A key question being asked in the design of future verification mechanisms is the extent to which verification work should be done by humans or non-humans (machines). Here, verification is not a binary categorisation, but rather there is a spectrum between human and non-human verification work, and indeed, projects like Ushahidi, Wikipedia and Galaxy Zoo have all developed different verification mechanisms.”

‘Human’ verification, a process of checking whether a particular report meets a group’s truth standards, is an acutely social process, says Heather Ford. If code is law and if other aspects in addition to code determine how we can act in the world, it is important that we understand the context in which code is deployed. Verification is a practice that determines how we can trust information coming from a variety of sources — only by illuminating such practices and the variety of impacts that code can have in different environments can we begin to understand how code regulates our actions in crowdsourcing environments.

 

.. and just to prove we’re capable of understanding and acknowledging and assimilating multiple viewpoints on complex things, here’s Helen Margetts, with a different slant on filter bubbles: “Even if political echo chambers were as efficient as some seem to think, there is little evidence that this is what actually shapes election results. After all, by definition echo chambers preach to the converted. It is the undecided people who (for example) the Leave and Trump campaigns needed to reach. And from the research, it looks like they managed to do just that.”

 

The Authors

Bernie Hogan is a Research Fellow at the OII; his research interests lie at the intersection of social networks and media convergence.

Luciano Floridi is the OII’s Professor of Philosophy and Ethics of Information. His research areas are the philosophy of information, information and computer ethics, and the philosophy of technology.

Philip Howard is the OII’s Professor of Internet Studies. He investigates the impact of digital media on political life around the world.

Brent Mittelstadt is an OII Postdoc. His research interests include the ethics of information handled by medical ICT, theoretical developments in discourse and virtue ethics, and the epistemology of information.

Heather Ford completed her doctorate at the OII, where she studied how Wikipedia editors write history as it happens. She is now a University Academic Fellow in Digital Methods at the University of Leeds. Her forthcoming book “Fact Factories: Wikipedia’s Quest for the Sum of All Human Knowledge” will be published by MIT Press.

Helen Margetts is the OII’s Director, and Professor of Society and the Internet. She specialises in digital era government, politics and public policy, and data science and experimental methods. Her most recent book is Political Turbulence (Princeton).

 

Coming up! .. It’s the economy, stupid / Augmented reality and ambient fun / The platform economy / Power and development / Internet past and future / Government / Labour rights / The disconnected / Ethics / Staying critical

Five Pieces You Should Probably Read On: The US Election https://ensr.oii.ox.ac.uk/five-pieces-you-should-probably-read-on-the-us-election/ Fri, 20 Jan 2017 12:22:18 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3927 This is the first post in a series that will uncover great writing by faculty and students at the Oxford Internet Institute, things you should probably know, and things that deserve to be brought out for another viewing. This week: The US Election.

This was probably the nastiest Presidential election in recent memory: awash with Twitter bots and scandal, polarisation and filter bubbles, accusations of interference by Russia and the Director of the FBI, and another shock result. We have written about electoral prediction elsewhere: instead, here are five pieces that consider the interaction of social media and democracy — the problems, but also potential ways forward.

 

1. James Williams: The Clickbait Candidate

10 October 2016 / 2700 words / 13 minutes

“Trump is very straightforwardly an embodiment of the dynamics of clickbait: he is the logical product (though not endpoint) in the political domain of a media environment designed to invite, and indeed incentivize, relentless competition for our attention […] Like clickbait or outrage cascades, Donald Trump is merely the sort of informational packet our media environment is designed to select for.”

James Williams says that now is probably the time to have that societal conversation about the design ethics of the attention economy — because in our current media environment, attention trumps everything.

 

2. Sam Woolley, Philip Howard: Bots Unite to Automate the Presidential Election [Wired]

15 May 2016 / 850 words / 4 minutes

“Donald Trump understands minority communities. Just ask Pepe Luis Lopez, Francisco Palma, and Alberto Contreras […] each tweeted in support of Trump after his victory in the Nevada caucuses earlier this year. The problem is, Pepe, Francisco, and Alberto aren’t people. They’re bots.”

It’s no surprise that automated spam accounts (or bots) are creeping into election politics, say Sam Woolley and Philip Howard. Demanding bot transparency would at least help clean up social media — which, for better or worse, is increasingly where presidents get elected.

 

3. Phil Howard: Is Social Media Killing Democracy?

15 November 2016 / 1100 words / 5 minutes

“This is the big year for computational propaganda — using immense data sets to manipulate public opinion over social media. Both the Brexit referendum and US election have revealed the limits of modern democracy, and social media platforms are currently setting those limits […] these technologies permit too much fake news, encourage our herding instincts, and aren’t expected to provide public goods.”

Phil Howard discusses ways to address fake news, audit social algorithms, and deal with social media’s “moral pass” — social media is damaging democracy, he says, but can also be used to save it.

 

4. Helen Margetts: Don’t Shoot the Messenger! What part did social media play in the 2016 US election?

15 November 2016 / 600 words / 3 minutes

“Rather than seeing social media solely as the means by which Trump ensnared his presidential goal, we should appreciate how they can provide a wealth of valuable data to understand the anger and despair that the polls missed, and to analyse political behaviour and opinion in the times ahead.”

New social information and visibility bring change to social behaviour, says Helen Margetts — ushering in political turbulence and unpredictability. Social media made visible what could have remained a country’s dark secret (hatred of women, rampant racism, etc.), but it will also underpin any radical counter-movement that emerges in the future.

 

5. Helen Margetts: Of course social media is transforming politics. But it’s not to blame for Brexit and Trump

9 January 2017 / 1700 words / 8 minutes

“Even if political echo chambers were as efficient as some seem to think, there is little evidence that this is what actually shapes election results. After all, by definition echo chambers preach to the converted. It is the undecided people who (for example) the Leave and Trump campaigns needed to reach. And from the research, it looks like they managed to do just that.”

Politics is a lot messier in the social media era than it used to be, says Helen Margetts, but rather than blaming social media for undermining democracy, we should be thinking about how we can improve the (inevitably major) part that it plays.

 

The Authors

James Williams is an OII doctoral candidate, studying the ethics of attention and persuasion in technology design.

Sam Woolley is a Research Assistant on the OII’s Computational Propaganda project; he is interested in political bots, and the intersection of political communication and automation.

Philip Howard is the OII’s Professor of Internet Studies and PI of the Computational Propaganda project. He investigates the impact of digital media on political life around the world.

Helen Margetts is the OII’s Director, and Professor of Society and the Internet. She specialises in digital era government, politics and public policy, and data science and experimental methods. Her most recent book is Political Turbulence (Princeton).

 

Coming up .. Fake news and filter bubbles / It’s the economy, stupid / Augmented reality and ambient fun / The platform economy / Power and development / Internet past and future / Government / Labour rights / The disconnected / Ethics / Staying critical

#5OIIPieces

Of course social media is transforming politics. But it’s not to blame for Brexit and Trump https://ensr.oii.ox.ac.uk/of-course-social-media-is-transforming-politics-but-its-not-to-blame-for-brexit-and-trump/ Mon, 09 Jan 2017 10:24:58 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3909 After Brexit and the election of Donald Trump, 2016 will be remembered as the year of cataclysmic democratic events on both sides of the Atlantic. Social media has been implicated in the wave of populism that led to both these developments.

Attention has focused on echo chambers, with many arguing that social media users exist in ideological filter bubbles, narrowly focused on their own preferences, prey to fake news and political bots, reinforcing polarization and leading voters to turn away from the mainstream. Mark Zuckerberg has responded with the strange claim that his company (built on $5 billion of advertising revenue) does not influence people’s decisions.

So what role did social media play in the political events of 2016?

Political turbulence and the new populism

There is no doubt that social media has brought change to politics. From the waves of protest and unrest in response to the 2008 financial crisis, to the Arab spring of 2011, there has been a generalized feeling that political mobilization is on the rise, and that social media has had something to do with it.

Our book investigating the relationship between social media and collective action, Political Turbulence, focuses on how social media allows new, “tiny acts” of political participation (liking, tweeting, viewing, following, signing petitions and so on), which turn social movement theory around. Rather than identifying with issues, forming collective identity and then acting to support the interests of that identity – or voting for a political party that supports it – in a social media world, people act first, and think about it, or identify with others later, if at all.

These tiny acts of participation can scale up to large-scale mobilizations, such as demonstrations, protests or campaigns for policy change. But they almost never do. The overwhelming majority (99.99%) of petitions to the UK or US governments fail to get the 100,000 signatures required for a parliamentary debate (UK) or an official response (US).

The very few that succeed do so very quickly on a massive scale (petitions challenging the Brexit and Trump votes immediately shot above 4 million signatures, to become the largest petitions in history), but without the normal organizational or institutional trappings of a social or political movement, such as leaders or political parties – the reason why so many of the Arab Spring revolutions proved disappointing.

This explosive rise, non-normal distribution and lack of organization that characterizes contemporary politics can explain why many political developments of our time seem to come from nowhere. It can help to understand the shock waves of support that brought us the Italian Five Star Movement, Podemos in Spain, Jeremy Corbyn, Bernie Sanders, and most recently Brexit and Trump – all of which have campaigned against the “establishment” and challenged traditional political institutions to breaking point.

Each successive mobilization has made people believe that challengers from outside the mainstream are viable – and that is in part what has brought us unlikely results on both sides of the Atlantic. But it doesn’t explain everything.

We’ve had waves of populism before – long before social media (indeed many have made parallels between the politics of 2016 and that of the 1930s). While claims that social media feeds are the biggest threat to democracy, leading to the “disintegration of the general will” and “polarization that drives populism” abound, hard evidence is more difficult to find.

The myth of the echo chamber

The mechanism that is most often offered for this state of events is the existence of echo chambers or filter bubbles. The argument goes that first social media platforms feed people the news that is closest to their own ideological standpoint (estimated from their previous patterns of consumption) and second, that people create their own personalized information environments through their online behaviour, selecting friends and news sources that back up their world view.

Once in these ideological bubbles, people are prey to fake news and political bots that further reinforce their views. So, some argue, social media reinforces people’s current views and acts as a polarizing force on politics, meaning that “random exposure to content is gone from our diets of news and information”.

Really? Is exposure less random than before? Surely the most perfect echo chamber would be the one occupied by someone who only read the Daily Mail in the 1930s – with little possibility of other news – or someone who just watches Fox News? Can our new habitat on social media really be as closed off as these environments, when our digital networks are so very much larger and more heterogeneous than anything we’ve had before?

Research suggests not. A recent large-scale survey (of 50,000 news consumers in 26 countries) shows how those who do not use social media on average come across news from significantly fewer different online sources than those who do. Social media users, it found, receive an additional “boost” in the number of news sources they use each week, even if they are not actually trying to consume more news. These findings are reinforced by an analysis of Facebook data, where 8.8 billion posts, likes and comments were posted during the US election.

Recent research published in Science shows that algorithms play less of a role in exposure to attitude-challenging content than individuals’ own choices and that “on average more than 20% of an individual’s Facebook friends who report an ideological affiliation are from the opposing party”, meaning that social media exposes individuals to at least some ideologically cross-cutting viewpoints: “24% of the hard content shared by liberals’ friends is cross-cutting, compared to 35% for conservatives” (the equivalent figures would be 40% and 45% if random).

In fact, companies have no incentive to create the hermetically sealed echo chambers that I have heard one commentator describe. Most social media content is not about politics (sorry guys) – most of that $5 billion advertising revenue does not come from political organizations. So any incentives that companies have to create echo chambers – for the purposes of targeted advertising, for example – are most likely to relate to lifestyle choices or entertainment preferences, rather than political attitudes.

And where filter bubbles do exist they are constantly shifting and sliding – easily punctured by a trending cross-issue item (anybody looking at #Election2016 shortly before polling day would have seen a rich mix of views, while having little doubt about Trump’s impending victory).

And of course, even if political echo chambers were as efficient as some seem to think, there is little evidence that this is what actually shapes election results. After all, by definition echo chambers preach to the converted. It is the undecided people who (for example) the Leave and Trump campaigns needed to reach.

And from the research, it looks like they managed to do just that. A barrage of evidence suggests that such advertising was effective in the 2015 UK general election (where the Conservatives spent 10 times as much as Labour on Facebook advertising), in the EU referendum (where the Leave campaign also focused on paid Facebook ads) and in the presidential election, where Facebook advertising has been credited for Trump’s victory, while the Clinton campaign focused on TV ads. And of course, advanced advertising techniques might actually identify those undecided voters from their conversations. This is not the bottom-up political mobilization that fired off support for Podemos or Bernie Sanders. It is massive top-down advertising dollars.

Ironically however, these huge top-down political advertising campaigns share some characteristics with the bottom-up movements discussed above, particularly their lack of sustainability. Former New York Governor Mario Cuomo’s dictum that candidates “campaign in poetry and govern in prose” may need an update. Barack Obama’s innovative campaigns of online social networks, micro-donations and matching support were miraculous, but the extent to which he developed digital government or data-driven policy-making in office was disappointing. Campaign digitally, govern in analogue might be the new mantra.

Chaotic pluralism

Politics is a lot messier in the social media era than it used to be – whether something takes off and succeeds in gaining critical mass is far more random than it appears to be from a casual glance, where we see only those that succeed.

In Political Turbulence, we wanted to identify the model of democracy that best encapsulates politics intertwined with social media. The dynamics we observed seem to be leading us to a model of “chaotic pluralism”, characterized by diversity and heterogeneity – similar to early pluralist models – but also by non-linearity and high interconnectivity, making liberal democracies far more disorganized, unstable and unpredictable than the architects of pluralist political thought ever envisaged.

Perhaps rather than blaming social media for undermining democracy, we should be thinking about how we can improve the (inevitably major) part that it plays.

Within chaotic pluralism, there is an urgent need for redesigning democratic institutions that can accommodate new forms of political engagement, and respond to the discontent, inequalities and feelings of exclusion – even anger and alienation – that are at the root of the new populism. We should be using social media to listen to (rather than merely talk at) the expression of these public sentiments, and not just at election time.

Many political institutions – for example, the British Labour Party, the US Republican Party, and the first-past-the-post electoral system shared by both countries – are in crisis, precisely because they have become so far removed from the concerns and needs of citizens. Redesign will need to include social media platforms themselves, which have rapidly become established as institutions of democracy and will be at the heart of any democratic revival.

As these platforms finally start to admit to being media companies (rather than tech companies), we will need to demand human intervention and transparency over algorithms that determine trending news; factchecking (where Google took the lead); algorithms that detect fake news; and possibly even “public interest” bots to counteract the rise of computational propaganda.

Meanwhile, the only thing we can really predict with certainty is that unpredictable things will happen and that social media will be part of our political future.

Discussing the echoes of the 1930s in today’s politics, the Wall Street Journal points out how Roosevelt managed to steer between the extremes of left and right because he knew that “public sentiments of anger and alienation aren’t to be belittled or dismissed, for their causes can be legitimate and their consequences powerful”. The path through populism and polarization may involve using the opportunity that social media presents to listen, understand and respond to these sentiments.

This piece draws on research from Political Turbulence: How Social Media Shape Collective Action (Princeton University Press, 2016), by Helen Margetts, Peter John, Scott Hale and Taha Yasseri.

It is cross-posted from the World Economic Forum, where it was first published on 22 December 2016.

Can we predict electoral outcomes from Wikipedia traffic? https://ensr.oii.ox.ac.uk/can-we-predict-electoral-outcomes-from-wikipedia-traffic/ Tue, 06 Dec 2016 15:34:31 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3881 As digital technologies become increasingly integrated into the fabric of social life their ability to generate large amounts of information about the opinions and activities of the population increases. The opportunities in this area are enormous: predictions based on socially generated data are much cheaper than conventional opinion polling, offer the potential to avoid classic biases inherent in asking people to report their opinions and behaviour, and can deliver results much quicker and be updated more rapidly.

In their article published in EPJ Data Science, Taha Yasseri and Jonathan Bright develop a theoretically informed prediction of election results from socially generated data combined with an understanding of the social processes through which the data are generated. They can thereby explore the predictive power of socially generated data while enhancing theory about the relationship between socially generated data and real world outcomes. Their particular focus is on the readership statistics of politically relevant Wikipedia articles (such as those of individual political parties) in the time period just before an election.

By applying these methods to a variety of different European countries in the context of the 2009 and 2014 European Parliament elections they first show that the relative change in the number of page views to the general Wikipedia page on the election can offer a reasonable estimate of the relative change in election turnout at the country level. This supports the idea that increases in online information seeking at election time are driven by voters who are considering voting.

Second, they show that a theoretically informed model based on previous national results, Wikipedia page views, news media mentions, and basic information about the political party in question can offer a good prediction of the overall vote share of the party in question. Third, they present a model for predicting change in vote share (i.e., voters swinging towards and away from a party), showing that Wikipedia page-view data provide an important increase in predictive power in this context.

This relationship is exaggerated in the case of newer parties — consistent with the idea that voters don’t seek information uniformly about all parties at election time. Rather, they behave like ‘cognitive misers’, being more likely to seek information on new political parties with which they do not have previous experience and being more likely to seek information only when they are actually changing the way they vote.

In contrast, there was no evidence of a ‘media effect’: there was little correlation between news media mentions and overall Wikipedia traffic patterns. Indeed, the news media and Wikipedia appeared to be biased towards different things: with the news favouring incumbent parties, and Wikipedia favouring new ones.
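The vote-share model is described only in outline here (previous national results, Wikipedia page views, news media mentions, and basic party information); the paper’s actual specification and data are not reproduced in this post. Purely as a hedged illustration of the general shape of such a model, here is an ordinary-least-squares fit on invented party-level numbers:

```python
import numpy as np

# Synthetic party-level features, invented for this sketch:
# previous vote share (%), relative change in Wikipedia page views,
# news-media mentions, and a new-party indicator.
X = np.array([
    # prev_share, pageview_change, media_mentions, is_new_party
    [34.0, 0.10, 120, 0],
    [28.0, 0.05,  95, 0],
    [12.0, 0.80,  40, 1],
    [ 8.0, 0.60,  25, 1],
    [ 6.0, 0.02,  30, 0],
])
y = np.array([33.0, 27.5, 16.0, 11.0, 5.5])  # observed vote shares (%)

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

predicted = A @ coef  # fitted vote shares for the five toy parties
```

With real data one would of course fit on past elections and predict forward, and the "new party" interaction is where the page-view signal did most of its work in the study.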

Read the full article: Yasseri, T. and Bright, J. (2016) Wikipedia traffic data and electoral prediction: towards theoretically informed models. EPJ Data Science. 5 (1).

We caught up with the authors to explore the implications of the work.

Ed: Wikipedia represents a vast amount of not just content, but also user behaviour data. How did you access the page view stats — but also: is anyone building dynamic visualisations of Wikipedia data in real time?

Taha and Jonathan: Wikipedia makes its page view data available for free (in the same way as it makes all of its information available!). You can find the data here, along with some visualisations.
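For readers who want to try this themselves: per-article counts are nowadays most easily queried through the Wikimedia REST API (a service that postdates the elections studied here and only covers data back to mid-2015; earlier counts come from the raw dump files the authors used). As a small sketch, a helper that builds the per-article endpoint URL; the project, article and dates below are just examples:

```python
from urllib.parse import quote

def pageviews_url(project, article, start, end,
                  access="all-access", agent="user", granularity="daily"):
    """Build a Wikimedia REST API URL for per-article page-view counts.

    `start` and `end` are YYYYMMDD date strings; `article` is the page
    title, with spaces converted to underscores and then URL-encoded.
    """
    title = quote(article.replace(" ", "_"), safe="")
    return (
        "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
        f"{project}/{access}/{agent}/{title}/{granularity}/{start}/{end}"
    )

# e.g. daily views of the UK Labour Party article in the week
# before the 2015 general election:
url = pageviews_url("en.wikipedia", "Labour Party (UK)",
                    "20150430", "20150507")
```

Fetching `url` (with `urllib.request` or `requests`) returns JSON with one entry per day; the endpoint returns counts only, which is exactly the "shallow" data limitation the authors mention below.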

Ed: Why did you use Wikipedia data to examine election prediction rather than the (I suppose more fashionable) Twitter? How do they compare as data sources?

Taha and Jonathan: One of the big problems with using Twitter to predict things like elections is that contributing on social media is a very public thing and people are quite conscious of this. For example, some parties are seen as unfashionable so people might not make their voting choice explicit. Hence overall social media might seem to be saying one thing whereas actually people are thinking another.

By contrast, looking for information online on a website like Wikipedia is an essentially private activity, so there aren’t these social biases. In other words, on Wikipedia we have direct access to transactional data on what people do, rather than what they say or prefer to say.

Ed: How did these results and findings compare with the social media analysis done as part of our UK General Election 2015 Election Night Data Hack? (long title..)

Taha and Jonathan: The GE2015 data hack looked at individual politicians. We found that having a Wikipedia page is becoming increasingly important — over 40% of Labour and Conservative Party candidates had an individual Wikipedia page. We also found that this was highly correlated with Twitter presence — being more active on one network also made you more likely to be active on the other one. And we found some initial evidence that social media reaction was correlated with votes, though there is a lot more work to do here!

Ed: Can you see digital social data analysis replacing (or maybe just complementing) opinion polling in any meaningful way? And what problems would need to be addressed before that happened: e.g. around representative sampling, data cleaning, and weeding out bots?

Taha and Jonathan: Most political pundits are starting to look at a range of indicators of popularity — for example, not just voting intention, but also ratings of leadership competence, economic performance, etc. We can see good potential for social data to become part of this range of popularity indicators. However we don’t think it will replace polling just yet; the use of social media is limited to certain demographics. Also, the data collected from social media are often very shallow, not allowing for validation. In the case of Wikipedia, for example, we only know how many times each page is viewed, but we don’t know by how many people and from where.

Ed: You do a lot of research with Wikipedia data — has that made you reflect on your own use of Wikipedia?

Taha and Jonathan: It’s interesting to think about this activity of getting direct information about politicians — it’s essentially a new activity, something you couldn’t do in the pre-digital age. I know that I personally [Jonathan] use it to find out things about politicians and political parties — it would be interesting to know more about why other people are using it as well. This could have a lot of impacts. One thing Wikipedia has is a really long memory, in a way that other means of getting information on politicians (such as newspapers) perhaps don’t. We could start to see this type of thing becoming more important in electoral politics.

[Taha] .. since my research has been mostly focused on Wikipedia edit wars between human and bot editors, I have naturally become more cautious about the information I find on Wikipedia. When it comes to sensitive topics, such as politics, Wikipedia is a good point to start, but not a great point to end the search!


Taha Yasseri and Jonathan Bright were talking to blog editor David Sutcliffe.

Is Social Media Killing Democracy? https://ensr.oii.ox.ac.uk/is-social-media-killing-democracy/ Tue, 15 Nov 2016 08:46:10 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3850 Donald Trump in Reno, Nevada, by Darron Birgenheier (Flickr).

This is the big year for computational propaganda — using immense data sets to manipulate public opinion over social media. Both the Brexit referendum and US election have revealed the limits of modern democracy, and social media platforms are currently setting those limits.

Platforms like Twitter and Facebook now provide a structure for our political lives. We’ve always relied on many kinds of sources for our political news and information. Family, friends, news organizations, charismatic politicians certainly predate the internet. But whereas those are sources of information, social media now provides the structure for political conversation. And the problem is that these technologies permit too much fake news, encourage our herding instincts, and aren’t expected to provide public goods.

First, social algorithms allow fake news stories from untrustworthy sources to spread like wildfire over networks of family and friends. Many of us just assume that there is a modicum of truth-in-advertising. We expect this from advertisements for commercial goods and services, but not from politicians and political parties. Occasionally a political actor gets punished for betraying the public trust through their misinformation campaigns. But in the United States “political speech” is completely free from reasonable public oversight, and in most other countries the media organizations and public offices for watching politicians are legally constrained, poorly financed, or themselves untrustworthy. Research demonstrates that during the campaigns for Brexit and the U.S. presidency, large volumes of fake news stories, false factoids, and absurd claims were passed over social media networks, often by Twitter’s highly automated accounts and Facebook’s algorithms.

Second, social media algorithms provide very real structure to what political scientists often call “elective affinity” or “selective exposure”. When offered the choice of who to spend time with or which organizations to trust, we prefer to strengthen our ties to the people and organizations we already know and like. When offered a choice of news stories, we prefer to read about the issues we already care about, from pundits and news outlets we’ve enjoyed in the past. Random exposure to content is gone from our diets of news and information. The problem is not that we have constructed our own community silos — humans will always do that. The problem is that social media networks take away the random exposure to new, high-quality information.

This is not a technological problem. We are social beings and so we will naturally look for ways to socialize, and we will use technology to socialize each other. But technology could be part of the solution. A not-so-radical redesign might occasionally expose us to new sources of information, or warn us when our own social networks are getting too bounded.

The third problem is that technology companies, including Facebook and Twitter, have been given a “moral pass” on the obligations to which we hold journalists and civil society groups.

In most democracies, the public policy and exit polling systems have been broken for a decade. Many social scientists now find that big data, especially network data, does a better job of revealing public preferences than traditional random digit dial systems. So Facebook actually got a moral pass twice this year. Their data on public opinion would have certainly informed the Brexit debate, and their data on voter preferences would certainly have informed public conversation during the US election.

Facebook has run several experiments now, published in scholarly journals, demonstrating that they have the ability to accurately anticipate and measure social trends. Whereas journalists and social scientists feel an obligation to openly analyze and discuss public preferences, we do not expect this of Facebook. The network effects that clearly were unmeasured by pollsters were almost certainly observable to Facebook. When it comes to news and information about politics, or public preferences on important social questions, Facebook has a moral obligation to share data and prevent computational propaganda. The Brexit referendum and US election have taught us that Twitter and Facebook are now media companies. Their engineering decisions are effectively editorial decisions, and we need to expect more openness about how their algorithms work. And we should expect them to deliberate about their editorial decisions.

There are some ways to fix these problems. Opaque software algorithms shape what people find in their news feeds. We’ve all noticed fake news stories (often called clickbait), and while these can be an entertaining part of using the internet, it is bad when they are used to manipulate public opinion. These algorithms work as “bots” on social media platforms like Twitter, where they were used in both the Brexit and US presidential campaigns to aggressively advance the case for leaving Europe and the case for electing Trump. Similar algorithms work behind the scenes on Facebook, where they govern what content from your social networks actually gets your attention.

So the first way to strengthen democratic practices is for academics, journalists, policy makers and the interested public to audit social media algorithms. Was Hillary Clinton really replaced by an alien in the final weeks of the 2016 campaign? We all need to be able to see who wrote this story, whether or not it is true, and how it was spread. Most important, Facebook should not allow such stories to be presented as news, much less spread. If they take ad revenue for promoting political misinformation, they should face the same regulatory punishments that a broadcaster would face for doing such a public disservice.

The second problem is a social one that can be exacerbated by information technologies. This means it can also be mitigated by technologies. Introducing random news stories and ensuring exposure to high quality information would be a simple — and healthy — algorithmic adjustment to social media platforms. The third problem could be resolved with moral leadership from within social media firms, but a little public policy oversight from elections officials and media watchdogs would help. Did Facebook see that journalists and pollsters were wrong about public preferences? Facebook should have told us if so, and shared that data.

Social media platforms have provided a structure for spreading fake news, we users tend to trust our friends and family, and we don’t hold media technology firms accountable for degrading our public conversations. The next big thing for technology evolution is the Internet of Things, which will generate massive amounts of data that will further harden these structures. Is social media damaging democracy? Yes, but we can also use social media to save democracy.

]]>
Don’t Shoot the Messenger! What part did social media play in the 2016 US election? https://ensr.oii.ox.ac.uk/dont-shoot-the-messenger-what-part-did-social-media-play-in-2016-us-election/ Tue, 15 Nov 2016 07:57:44 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3854
Young activists gather at Lafayette Park, preparing for a march to the U.S. Capitol in protest at the presidential campaign of presumptive Republican nominee Donald J. Trump. By Stephen Melkisethian (Flickr).

Commentators have been quick to ‘blame social media’ for ‘ruining’ the 2016 election by putting Mr Donald Trump in the White House. Just as was the case in the campaign for Brexit, people argue that social media have driven us to a ‘post-truth’ world of polarisation and echo chambers.

Is this really the case? At first glance, the ingredients of the Trump victory — as for Brexit — seem remarkably traditional. The Trump campaign spent more on physical souvenirs than on field data, more on Make America Great Again hats (made in China) than on polling. The Daily Mail characterisation of judges as Enemies of the People after their ruling that the triggering of Article 50 must be discussed in parliament seemed reminiscent of the 1930s. Likewise, US crowds chanting ‘Lock her up’, like lynch mobs, seemed like ghastly reminders of a pre-democratic era.

Clearly social media were a big part of the 2016 election, used heavily by the candidates themselves, and generating 8.8 billion posts, likes and comments on Facebook alone. Social media also make visible what in an earlier era could remain a country’s dark secret — hatred of women (through death and rape threats and trolling of female politicians in both the UK and US), and rampant racism.

This visibility, society’s new self-awareness, brings change to political behaviour. Social media provide social information about what other people are doing: viewing, following, liking, sharing, tweeting, joining, supporting and so on. This social information is the driver behind the political turbulence that characterises politics today. Those rustbelt Democrats feeling abandoned by the system saw on social media that they were not alone — that other people felt the same way, and that Trump was viable as a candidate. For a woman drawn towards the Trump agenda but feeling tentative, the hashtag #WomenForTrump could reassure her that there were like-minded people she could identify with. Decades of social science research show that information about the behaviour of others influences how groups behave, and now it is driving the unpredictability of politics, bringing us Trump, Brexit, Corbyn, Sanders and unexpected political mobilisation across the world.

These are not echo chambers. As recent research shows, people are exposed to cross-cutting discourse on social media, across ever larger and more heterogeneous social networks. While the hypothetical #WomenForTrump tweeter or Facebook user will see like-minded behaviour, she will also see a peppering of social information showing people using opposing hashtags like #ImWithHer, or (post-election) #StillWithHer. It could be argued that a better example of an ‘echo chamber’ would be a regular Daily Mail reader or someone who only watched Fox News.

The mainstream media loved Trump: his controversial road-crash views sold their newspapers and advertising. Social media take us out of that world. They are relatively neutral in their stance on content, giving no particular priority to extreme or offensive views: on their platforms, the numbers are what matter.

Rather than seeing social media solely as the means by which Trump ensnared his presidential goal, we should appreciate how they can provide a wealth of valuable data to understand the anger and despair that the polls missed, and to analyse political behaviour and opinion in the times ahead. Social media can also shine the light of transparency on the workings of a Trump administration, as they did on his campaign. They will be critical for building networks of solidarity to confront the intolerance, sexism and racism stirred up during this bruising campaign. And social media will underpin any radical counter-movement that emerges in the coming years.


Helen Margetts is the author of Political Turbulence: How Social Media Shape Collective Action and thanks her co-authors Peter John, Scott Hale and Taha Yasseri.

]]>
Rethinking Digital Media and Political Change https://ensr.oii.ox.ac.uk/rethinking-digital-media-and-political-change/ Tue, 23 Aug 2016 14:52:07 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3824
Did Twitter lead to Donald Trump’s rise and success to date in the American campaign for the presidency? Image: Gage Skidmore (Flickr)
What are the dangers or new opportunities of digital media? One of the major debates in relation to digital media in the United States has been whether they contribute to political polarization. I argue in a new paper (Rethinking Digital Media and Political Change) that Twitter led to Donald Trump’s rise and success to date in the American campaign for the presidency. There is plenty of evidence to show that Trump received a disproportionate amount of attention on Twitter, which in turn generated a disproportionate amount of attention in the mainstream media. The strong correlation between the two suggests that Trump was able to bypass the gatekeepers of the traditional media.

A second ingredient in his success has been populism, which rails against dominant political elites (including the Republican party) and the ‘biased’ media. Populism also rests on the notion of an ‘authentic’ people — by implication excluding ‘others’ such as immigrants and foreign powers like the Chinese — to whom the leader appeals directly. The paper makes parallels with the strength of the Sweden Democrats, an anti-immigrant party which, in a similar way, has been able to appeal to its following via social media and online newspapers, again bypassing mainstream media with its populist message.

There is a difference, however: in the US, commercial media compete for audience share, so Trump’s controversial tweets have been eagerly embraced by journalists seeking high viewership and readership ratings. In Sweden, where public media dominate and there is far less of the ‘horserace’ politics of American politics, the Sweden Democrats have been more locked out of the mainstream media and of politics. In short, Twitter plus populism has led to Trump. I argue that dominating the mediated attention space is crucial. One outcome of how this story ends will be known in November. But whatever the outcome, it is already clear that the role of the media in politics, and how they can be circumvented by new media, requires fundamental rethinking.


Ralph Schroeder is Professor and director of the Master’s degree in Social Science of the Internet at the Oxford Internet Institute. Before coming to Oxford University, he was Professor in the School of Technology Management and Economics at Chalmers University in Gothenburg (Sweden). Recent books include Rethinking Science, Technology and Social Change (Stanford University Press, 2007) and, co-authored with Eric T. Meyer, Knowledge Machines: Digital Transformations of the Sciences and Humanities (MIT Press 2015).

]]>
Brexit, voting, and political turbulence https://ensr.oii.ox.ac.uk/brexit-voting-and-political-turbulence/ Thu, 18 Aug 2016 14:23:20 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3819 Cross-posted from the Princeton University Press blog. The authors of Political Turbulence discuss how the explosive rise, non-normal distribution and lack of organization that characterizes contemporary politics as a chaotic system, can explain why many political mobilizations of our times seem to come from nowhere.


On 23rd June 2016, a majority of the British public voted in a referendum on whether to leave the European Union. The Leave or so-called #Brexit option was victorious, with a margin of 52% to 48% across the country, although Scotland, Northern Ireland, London and some towns voted to remain. The result was a shock to leave and remain supporters alike. US readers might note that when the polls closed, the odds of Brexit on futures markets (15%) were longer than those of Trump being elected President.

Political scientists are reeling with the sheer volume of politics that has been packed into the month after the result. From the Prime Minister’s morning-after resignation on 24th June, the country was mired in political chaos, with almost every political institution challenged and under question in the aftermath of the vote, including both Conservative and Labour parties and the existence of the United Kingdom itself, given Scotland’s resistance to leaving the EU. The eventual formation of a government under a new prime minister, Theresa May, has brought some stability. But she was not elected and her government has a tiny majority of only 12 Members of Parliament. A cartoon by Matt in the Telegraph on July 2nd (which would work for almost any day) showed two students, one of them saying ‘I’m studying politics. The course covers the period from 8am on Thursday to lunchtime on Friday.’

All these events – the campaigns to remain or leave, the post-referendum turmoil, resignations, sackings and appointments – were played out on social media; the speed of change and the unpredictability of events being far too great for conventional media to keep pace. So our book, Political Turbulence: How Social Media Shape Collective Action, can provide a way to think about the past weeks. The book focuses on how social media allow new, ‘tiny acts’ of political participation (liking, tweeting, viewing, following, signing petitions and so on), which turn social movement theory around. Rather than identifying with issues, forming collective identity and then acting to support the interests of that identity – or voting for a political party that supports it – in a social media world, people act first, and think about it, or identify with others later – if at all.

These tiny acts of participation can scale up to large-scale mobilizations, such as demonstrations, protests or petitions for policy change. These mobilizations normally fail – 99.9% of petitions to the UK or US governments fail to get the 100,000 signatures required for a parliamentary debate (UK) or an official response (US). The very few that succeed usually do so very quickly on a massive scale, but without the normal organizational or institutional trappings of a social or political movement, such as leaders or political parties. When Brazilian President Dilma Rousseff asked to speak to the leaders of the mass demonstrations against the government in 2014, organised entirely on social media with an explicit rejection of party politics, she was told ‘there are no leaders’.
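This pattern — almost all mobilizations fizzle, while a very few explode without warning — can be sketched as a toy positive-feedback simulation, in which each ‘tiny act’ slightly raises the probability of the next. This is purely an illustrative sketch with invented parameters, not the model used in Political Turbulence:

```python
import random

def simulate_petition(steps=1000, base_p=0.0005, social_weight=0.01, rng=None):
    """Toy positive-feedback model of 'tiny acts' of participation.

    At each step, one potential signer sees the current signature count
    (social information) and signs with probability base_p plus
    social_weight per existing signature, capped at 1.0. All parameter
    values here are invented purely for illustration.
    """
    rng = rng or random.Random()
    signatures = 0
    for _ in range(steps):
        p = min(1.0, base_p + social_weight * signatures)
        if rng.random() < p:
            signatures += 1
    return signatures

# Run many independent petitions: most stay tiny, a few take off.
outcomes = [simulate_petition(rng=random.Random(seed)) for seed in range(500)]
tiny = sum(1 for s in outcomes if s < 100)
print(f"{tiny}/{len(outcomes)} petitions stayed below 100 signatures; "
      f"largest reached {max(outcomes)}")
```

Whether a simulated petition takes off depends almost entirely on early luck — a first signature arriving soon enough to start the feedback loop — which is one way to see why such mobilizations ‘come from nowhere’ and resist prediction.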

This explosive rise, non-normal distribution and lack of organization that characterizes contemporary politics as a chaotic system, can explain why many political mobilizations of our times seem to come from nowhere. In the US and the UK it can help to understand the shock waves of support that brought Bernie Sanders, Donald Trump, Jeremy Corbyn (elected leader of the Labour party in 2015) and Brexit itself, all of which have so strongly challenged traditional political institutions. In both countries, the two largest political parties are creaking to breaking point in their efforts to accommodate these phenomena.

The unpredicted support for Brexit by over half of voters in the UK referendum illustrates these characteristics of the movements we model in the book, with the resistance to traditional forms of organization. Voters were courted by political institutions from all sides – the government, all the political parties apart from UKIP, the Bank of England, international organizations, foreign governments, the US President himself and the ‘Remain’ or StrongerIn campaign convened by Conservative, Labour and the smaller parties. Virtually every authoritative source of information supported Remain. Yet people were resistant to aligning themselves with any of them. Experts, facts, leaders of any kind were all rejected by the rising swell of support for the Leave side. Famously, Michael Gove, one of the key Leave campaigners, said ‘we have had enough of experts’. According to YouGov polls, over two thirds of Conservative voters in 2015 voted to Leave in 2016, as did over a third of Labour and Liberal Democrat voters.

Instead, people turned to a few key claims promulgated by the two Leave campaigns: Vote Leave (with key Conservative Brexiteers such as Boris Johnson, Michael Gove and Liam Fox) and Leave.EU, dominated by UKIP and its leader Nigel Farage, bankrolled by the aptly named billionaire Arron Banks. This side dominated social media in driving home their simple (if largely untrue) claims and anti-establishment, anti-elitist message (although all were part of the upper echelons of both establishment and elite). Key memes included the claim (painted on the side of a bus) that the UK gave £350m a week to the EU which could instead be spent on the NHS; the likelihood that Turkey would soon join the EU; and an image showing floods of migrants entering the UK via Europe. Banks brought in staff from his own insurance companies and political campaign firms (such as Goddard Gunster) and Leave.EU created a massive database of leave supporters to employ targeted advertising on social media.

While Remain represented the status quo and a known entity, Leave was free to sell itself as anything to anyone. Leave campaigners would often criticize the Government but then not offer specific policy alternatives, stating ‘we are a campaign not a government.’ This ability for people to coalesce around a movement for a variety of different (and sometimes conflicting) reasons is a hallmark of the social-media based campaigns that characterize Political Turbulence. Some voters and campaigners argued that voting Leave would allow the UK to be more global and accept more immigrants from non-EU countries. In contrast, racism and anti-immigration sentiment were key reasons for other voters. Desire for sovereignty and independence, responses to austerity and economic inequality and hostility to the elites in London and the South East have all figured in the torrent of post-Brexit analysis. These alternative faces of Leave were exploited to gain votes for ‘change,’ but the exact change sought by any two voters could be very different.

The movement’s organization illustrates what we have observed in recent political turbulence – as in Brazil, Hong Kong and Egypt; a complete rejection of mainstream political parties and institutions and an absence of leaders in any conventional sense. There is little evidence that the leading lights of the Leave campaigns were seen as prospective leaders. There was no outcry from the Leave side when they seemed to melt away after the vote, no mourning over Michael Gove’s complete fall from grace when the government was formed – nor even joy at Boris Johnson’s appointment as Foreign Secretary. Rather, the Leave campaigns acted like advertising campaigns, driving their points home to all corners of the online and offline worlds but without a clear public face. After the result, it transpired that there was no plan, no policy proposals, no exit strategy proposed by either campaign. The Vote Leave campaign was seemingly paralyzed by shock after the vote (they tried to delete their whole site, now reluctantly and partially restored with the lie on the side of the bus toned down to £50 million), pickled forever after 23rd June. Meanwhile, Theresa May, a reluctant Remain supporter and an absent figure during the referendum itself, emerged as the only viable leader after the event, in the same way as (in a very different context) the Muslim Brotherhood, as the only viable organization, were able to assume power after the first Egyptian revolution.

In contrast, the Leave.EU website remains highly active, possibly poised for the rebirth of UKIP as a radical populist far-right party on the European model, as Arron Banks has proposed. UKIP was formed around this single policy – of leaving the EU – and will struggle to find policy purpose, post-Brexit. A new party with Banks’ huge resources and a massive database of Leave supporters and their social media affiliations – supporters possibly disenchanted by the slow progress of Brexit and disaffected with the traditional parties – might be a political winner on the new landscape.

The act of voting in the referendum will define people’s political identity for the foreseeable future, shaping the way they vote in any forthcoming election. The entire political system is being redrawn around this single issue, and whichever organizational grouping can ride the wave will win. The one thing we can predict for our political future is that it will be unpredictable.


]]>
Back to the bad old days, as civil service infighting threatens UK’s only hope for digital government https://ensr.oii.ox.ac.uk/back-to-the-bad-old-days-as-civil-service-infighting-threatens-uks-only-hope-for-digital-government/ Wed, 10 Aug 2016 13:59:31 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3814 Technology and the public sector have rarely been happy bedfellows in the UK, where every government technology project seems doomed to arrive late, underperform and come in over budget. The Government Digital Service (GDS) was created to drag the civil service into the 21st century, making services “digital by default”, cheaper, faster, and easier to use. It quickly won accolades for its approach and early cost savings.

But then its leadership departed, not once or twice but three times – the latter two within the last few months. The largest government departments have begun to reassert their authority over GDS expert advice, and digital government looks likely to be dragged back towards the deeply dysfunctional old ways of doing things. GDS isn’t perfect, but to erase the progress it has put in place would be a terrible loss.

The UK government’s use of technology has previously lagged far behind other countries. Low usage of digital services rendered them expensive and inefficient. Digital operations were often handicapped by complex networks of legacy systems, some dating right back to the 1970s. The development of the long-promised “digital era governance” was mired in a series of mega contracts: huge in terms of cost, scope and timescale, bigger than any attempted by other governments worldwide, and to be delivered by the same handful of giant global computer consulting firms that rarely saw any challenge to their grip on public contracts. Departmental silos ensured there were no economies of scale, shared services failed, and the Treasury negotiated with 24 departments individually for their IT expenditure.

Some commentators (including this one) were a little sceptical on our first encounter with GDS. We had seen it before: the Office of the e-Envoy set up by Tony Blair in 1999, superseded by the E-government Unit (2004-7), and then Directgov until 2010.

Successes and failures

In many ways GDS has been a success story, with former prime minister David Cameron calling it one of the “great unsung triumphs of the last parliament”, with claimed cost savings of £1.7 billion. The Treasury now negotiates with GDS, rather than with 24 departments, and GDS has been involved in every hiring decision for senior digital staff, raising the quality of digital expertise.

The building blocks of the GDS’ promised “government as a platform” approach have appeared: Verify, a federated identity system that doesn’t rely on ID cards or centralised identity databases; Govpay, which makes it easier to make payments to the government; and Notify, which allows government agencies to keep citizens informed of progress on services.

GDS tackled the overweening power of the huge firms that have dominated government IT in the past, and has given smaller departments and agencies the confidence to undertake some projects themselves, bringing expertise back in-house, embracing open source, and washing away some of the taint of failure from previous government IT projects.

There has even been a procession of visitors from overseas coming to investigate, and imitations have spawned across the world, from the US to Australia.

But elsewhere GDS has really only chipped away at monolithic government IT. For example, GDS and the Department for Work and Pensions failed to work together on Universal Credit. Instead, the huge Pathfinder system that underpinned the Universal Credit trial phase was supplied by HP, Accenture, IBM and BT and ran into serious trouble at a cost of hundreds of millions of pounds. The department is now building a new system in parallel, with GDS advice, that will largely replace it.

The big systems integrators are still waiting in the wings, poised to renew their influence in government. Francis Maude, who as cabinet minister created GDS, recently admitted that if GDS had undertaken faster and more wholescale reform of legacy systems, it wouldn’t be under threat now.

The risks of centralisation

An issue GDS never tackled is one that has existed right from the start: is it an army, or is it a band of mercenaries working in other departments? Should GDS be at the centre, building and providing systems, or should it just help others to do so, building their expertise? GDS has done both, but the emphasis has been on the former, most evident through putting the government portal GOV.UK at the centre of public services.

Heading down a centralised route was always risky, as the National Audit Office observed of its forerunner direct.gov in 2007. Many departments resented the centralisation of GOV.UK, and the removal of their departmental websites, but it’s likely they’re used to it now, even relieved that it’s no longer their problem. But a staff of 700 with a budget of £112m (from 2015) was always going to look vulnerable to budget cuts.

Return of the Big Beasts

If GDS is diminished or disbanded, any hope of creating effective digital government faces two threats.

A land-grab from the biggest departments – HMRC, DWP and the Home Office, all critics of the GDS – is one possibility. There are already signs of a purge of the digital chiefs put in place by GDS, despite the National Audit Office citing continuity of leadership as critical. This looks like permanent secretaries in the civil service reasserting control over their departments’ digital operations – which will inevitably bring a return to siloed thinking and siloed data, completely at odds with the idea of government as a platform. While the big beasts can walk alone, without GDS the smaller agencies will struggle.

The other threat is the big companies, poised in the wings to renew their influence on government should GDS controls on contract size be removed. It has already begun: the ATLAS consortium led by HP has already won two Ministry of Defence contracts worth £1.5 billion since founding GDS chief Mike Bracken resigned.

It’s hard to see how government as a platform can be taken forward without expertise and capacity at the centre – no single department would have the incentive to do so. Canada’s former chief information officer recently attributed Canada’s decline as a world leader in digital government to the removal of funds dedicated to allowing departmental silos to work together. Even as the UN declares the UK to be the global leader for implementing e-government, unless the GDS can re-establish itself the UK may find the foundations it has created swept away – at a time when using digital services to do more with less is needed more than ever.


This was first posted on The Conversation.

]]>
Alan Turing Institute and OII: Summit on Data Science for Government and Policy Making https://ensr.oii.ox.ac.uk/alan-turing-institute-and-oii-summit-on-data-science-for-government-and-policy-making/ Tue, 31 May 2016 06:45:39 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3804 The benefits of big data and data science for the private sector are well recognised. So far, considerably less attention has been paid to the power and potential of the growing field of data science for policy-making and public services. On Monday 14th March 2016 the Oxford Internet Institute (OII) and the Alan Turing Institute (ATI) hosted a Summit on Data Science for Government and Policy Making, funded by the EPSRC. Leading policy makers, data scientists and academics came together to discuss how the ATI and government could work together to develop data science for the public good. The convenors of the Summit, Professors Helen Margetts (OII) and Tom Melham (Computer Science), report on the day’s proceedings.

The Alan Turing Institute will build on the UK’s existing academic strengths in the analysis and application of big data and algorithm research to place the UK at the forefront of world-wide research in data science. The University of Oxford is one of five university partners, and the OII is the only partnering department in the social sciences. The aim of the summit on Data Science for Government and Policy-Making was to understand how government can make better use of big data and the ATI – with the academic partners in listening mode.

We hoped that the participants would bring forward their own stories, hopes and fears regarding data science for the public good. Crucially, we wanted to work out a roadmap for how different stakeholders can work together on the distinct challenges facing government, as opposed to commercial organisations. At the same time, data science research and development has much to gain from the policy-making community. Some of the things that government does – collect tax from the whole population, or give money away at scale, or possess the legitimate use of force – it does by virtue of being government. So the sources of data and some of the data science challenges that public agencies face are unique and tackling them could put government working with researchers at the forefront of data science innovation.

During the Summit a range of stakeholders provided insight from their distinctive perspectives: the Government Chief Scientific Advisor, Sir Mark Walport; the Deputy Director of the ATI, Patrick Wolfe; the National Statistician and Director of ONS, John Pullinger; and the Director of Data at the Government Digital Service, Paul Maltby. Representatives of frontline departments recounted how algorithmic decision-making is already bringing predictive capacity into operational business, improving efficiency and effectiveness.

Discussion revolved around the challenges of how to build core capability in data science across government, rather than outsourcing it (as happened in an earlier era with information technology) or confining it to a data science profession. Some delegates talked of being in the ‘foothills’ of data science. The scale, heterogeneity and complexity of some government departments currently work against data science innovation, particularly when larger departments can operate thousands of databases, creating legacy barriers to interoperability. Outdated policies can also work against data science methodologies. Attendees repeatedly voiced concerns about sharing data across government departments, in some cases because of limitations of legal protections, in others because people were unsure what they could and could not do.

The potential power of data science creates an urgent need for discussion of ethics. Delegates and speakers repeatedly affirmed the importance of an ethical framework and of thought leadership in this area, so that ethics is ‘part of the science’. The clear emergent option was a national Council for Data Ethics (along the lines of the Nuffield Council on Bioethics) convened by the ATI, as recommended in the recent Science and Technology parliamentary committee report The big data dilemma and the government response. Luciano Floridi (OII’s professor of the philosophy and ethics of information) warned that we cannot reduce ethics to mere compliance. Ethical problems do not normally have a single straightforward ‘right’ answer, but require dialogue and thought and extend far beyond individual privacy. There was consensus that the UK has the potential to provide global thought leadership and to set the standard for the rest of Europe. It was announced during the Summit that an ATI Working Group on the Ethics of Data Science has been confirmed, to take these issues forward.

So what happens now?

Throughout the Summit there were calls from policy makers for more data science leadership. We hope that the ATI will be instrumental in providing this, and an interface both between government, business and academia, and between separate Government departments. This Summit showed just how much real demand – and enthusiasm – there is from policy makers to develop data science methods and harness the power of big data. No-one wants to repeat with data science the history of government information technology – where in the 1950s and 60s, government led the way as an innovator, but has struggled to maintain this position ever since. We hope that the ATI can act to prevent the same fate for data science and provide both thought leadership and the ‘time and space’ (as one delegate put it) for policy-makers to work with the Institute to develop data science for the public good.

Since the Summit, in response to the clear need that emerged from the discussion and other conversations with stakeholders, the ATI has been designing a Policy Innovation Unit, with the aim of working with government departments on ‘data science for public good’ issues. Activities could include:

  • Secondments at the ATI for data scientists from government
  • Short term projects in government departments for ATI doctoral students and postdoctoral researchers
  • Developing ATI as an accredited data facility for public data, as suggested in the current Cabinet Office consultation on better use of data in government
  • ATI pilot policy projects, using government data
  • Policy symposia focused on specific issues and challenges
  • ATI representation in regular meetings at the senior level (for example, between Chief Scientific Advisors, the Cabinet Office, the Office for National Statistics, GO-Science).
  • ATI acting as an interface between public and private sectors, for example through knowledge exchange and the exploitation of non-government sources as well as government data
  • ATI offering a trusted space, time and a forum for formulating questions and developing solutions that tackle public policy problems and push forward the frontiers of data science
  • ATI as a source of cross-fertilization of expertise between departments
  • Reviewing the data science landscape in a department or agency, identifying feedback loops – or lack thereof – between policy-makers, analysts, front-line staff and identifying possibilities for an ‘intelligent centre’ model through strategic development of expertise.

The Summit, and a series of Whitehall Roundtables convened by GO-Science which led up to it, have initiated a nascent network of stakeholders across government, which we aim to build on and develop over the coming months. If you are interested in being part of this, please do get in touch with us.

Helen Margetts, Oxford Internet Institute, University of Oxford (director@oii.ox.ac.uk)

Tom Melham, Department of Computer Science, University of Oxford

]]>
Exploring the Ethics of Monitoring Online Extremism https://ensr.oii.ox.ac.uk/exploring-the-ethics-of-monitoring-online-extremism/ Wed, 23 Mar 2016 09:59:02 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3616 (Part 2 of 2) The Internet serves not only as a breeding ground for extremism, but also offers myriad data streams which potentially hold great value to law enforcement. The report by the OII’s Ian Brown and Josh Cowls for the VOX-Pol project: Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material explores the complexities of policing the web for extremist material, and its implications for security, privacy and human rights. In the second of a two-part post, Josh Cowls and Ian Brown discuss the report with blog editor Bertie Vidgen. Read the first post.

Surveillance in NYC's financial district. Photo by Jonathan McIntosh (flickr).

Ed: Josh, political science has long posed a distinction between public spaces and private ones. Yet it seems like many platforms on the Internet, such as Facebook, cannot really be categorized in such terms. If this is correct, what does it mean for how we should police and govern the Internet?

Josh: I think that is right – many online spaces are neither public nor private. This is also an issue for some privacy legal frameworks (especially in the US). A lot of the covenants and agreements were written forty or fifty years ago, long before anyone had really thought about the Internet. That has now forced governments, societies and parliaments to adapt these existing rights and protocols for the online sphere. I think that we have some fairly clear laws about the use of human intelligence sources, and police law in the offline sphere. The interesting question is how we can take that online. How can the pre-existing standards, like the requirement that procedures are necessary and proportionate, or the ‘right to appeal’, be incorporated into online spaces? In some cases there are direct analogies. In other cases there needs to be some re-writing of the rule book to try to figure out what we mean. And, of course, it is difficult because the Internet itself is always changing!

Ed: So do you think that concepts like proportionality and justification need to be updated for online spaces?

Josh: I think that at a very basic level they are still useful. People know what we mean when we talk about something being necessary and proportionate, and about the importance of having oversight. I think we also have a good idea about what it means to be non-discriminatory when applying the law, though this is one of those areas that can quickly get quite tricky. Consider the use of online data sources to identify people. On the one hand, the Internet is ‘blind’ in that it does not automatically codify social demographics. In this sense it is not possible to profile people in the same way that we can offline. On the other hand, it is in some ways the complete opposite. It is very easy to directly, and often invisibly, create really firm systems of discrimination – and, most problematically, to do so opaquely.

This is particularly challenging when we are dealing with extremism because, as we pointed out in the report, extremists are generally pretty unremarkable in terms of demographics. It perhaps used to be true that extremists were more likely to be poor or to have had challenging upbringings, but many of the people going to fight for the Islamic State are middle class. So we have fewer demographic pointers to latch onto when trying to find these people. Of course, insofar as there are identifiers they won’t be released by the government. The real problem for society is that there isn’t very much openness and transparency about these processes.

Ed: Governments are increasingly working with the private sector to gain access to different types of information about the public. For example, in Australia a Telecommunications bill was recently passed which requires all telecommunication companies to keep the metadata – though not the content data – of communications for two years. A lot of people opposed the Bill because metadata is still very informative, and as such there are some clear concerns about privacy. Similar concerns have been expressed in the UK about an Investigatory Powers Bill that would require new Internet Connection Records about customers’ online activities. How much do you think private corporations should protect people’s data? And how much should concepts like proportionality apply to them?

Ian: To me the distinction between metadata and content data is fairly meaningless. For example, often just knowing when and who someone called and for how long can tell you everything you need to know! You don’t have to see the content of the call. There are a lot of examples like this which highlight the slightly ludicrous nature of distinguishing between metadata and content data. It is all data. As has been said by former US CIA and NSA Director Gen. Michael Hayden, “we kill people based on metadata.”

One issue that we identified in the report is the increased onus on companies to monitor online spaces, and all of the legal entanglements that come from this given that companies might not be based in the same country as the users. One of our interviewees called this new international situation a ‘very different ballgame’. Working out how to deal with problematic online content is incredibly difficult, and some huge issues of freedom of speech are bound up in this. On the one hand, there is a government-led approach where we use the law to take down content. On the other hand is a broader approach, whereby social networks voluntarily take down objectionable content even if it is permissible under the law. This causes much more serious problems for human rights and the rule of law.

Read the full report: Brown, I., and Cowls, J., (2015) Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material. VOX-Pol Publications.


Ian Brown is Professor of Information Security and Privacy at the OII. His research is focused on surveillance, privacy-enhancing technologies, and Internet regulation.

Josh Cowls is a student and researcher based at MIT, working to understand the impact of technology on politics, communication and the media.

Josh and Ian were talking to Blog Editor Bertie Vidgen.

]]>
Assessing the Ethics and Politics of Policing the Internet for Extremist Material https://ensr.oii.ox.ac.uk/assessing-the-ethics-and-politics-of-policing-the-internet-for-extremist-material/ Thu, 18 Feb 2016 22:59:20 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3558 The Internet serves not only as a breeding ground for extremism, but also offers myriad data streams which potentially hold great value to law enforcement. The report by the OII’s Ian Brown and Josh Cowls for the VOX-Pol project: Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material explores the complexities of policing the web for extremist material, and its implications for security, privacy and human rights. Josh Cowls discusses the report with blog editor Bertie Vidgen.*

*Please note that the views given here do not necessarily reflect the content of the report, or those of the lead author, Ian Brown.

In terms of counter-speech there are different roles for government, civil society, and industry. Image by Miguel Discart (Flickr).

 

Ed: Josh, could you let us know the purpose of the report, outline some of the key findings, and tell us how you went about researching the topic?

Josh: Sure. In the report we take a step back from the ground-level question of ‘what are the police doing?’ and instead ask, ‘what are the ethical and political boundaries, rationale and justifications for policing the web for these kinds of activity?’ We used an international human rights framework as an ethical and legal basis to understand what is being done. We also tried to further the debate by clarifying a few things: what has already been done by law enforcement, and, really crucially, what the perspectives are of all those involved, including lawmakers, law enforcers, technology companies, academia and many others.

We derived the insights in the report from a series of workshops, one of which was held as part of the EU-funded VOX-Pol network. The workshops involved participants who were quite high up in law enforcement, the intelligence agencies, the tech industry, civil society, and academia. We followed these up with interviews with other individuals in similar positions and conducted background policy research.

Ed: You highlight that many extremist groups (such as Isis) are making really significant use of online platforms to organize, radicalize people, and communicate their messages.

Josh: Absolutely. A large part of our initial interest when writing the report lay in finding out more about the role of the Internet in facilitating the organization, coordination, recruitment and inspiration of violent extremism. The impact of this has been felt very recently in Paris and Beirut, and many other places worldwide. This report pre-dates these most recent developments, but was written in the context of these sorts of events.

Given the Internet is so embedded in our social lives, I think it would have been surprising if political extremist activity hadn’t gone online as well. Of course, the Internet is a very powerful tool and in the wrong hands it can be a very destructive force. But other research, separate from this report, has found that the Internet is not usually people’s first point of contact with extremism: more often than not that actually happens offline through people you know in the wider world. Nonetheless it can definitely serve as an incubator of extremism and can serve to inspire further attacks.

Ed: In the report you identify different groups in society that are affected by, and affecting, issues of extremism, privacy, and governance – including civil society, academics, large corporations and governments

Josh: Yes, in the later stages of the report we do divide society into these groups, and offer some perspectives on what they do, and what they think about counter-extremism. For example, in terms of counter-speech there are different roles for government, civil society, and industry. There is this idea that ISIS are really good at social media, and that that is how they are powering a lot of their support; but one of the people that we spoke to said that it is not the case that ISIS are really good, it is just that governments are really bad!

We shouldn’t ask government to participate in the social network: bureaucracies often struggle to be really flexible and nimble players on social media. In contrast, civil society groups tend to be more engaged with communities and know how to “speak the language” of those who might be vulnerable to radicalization. As such they can enter that dialogue in a much more informed and effective way.

The other tension, or paradigm, that we offer in this report is the distinction between whether people are ‘at risk’ or ‘a risk’. What we try to point to is that people can go from one to the other. They start by being ‘at risk’ of radicalization, but if they do get radicalized and become a violent threat to society, which only happens in the minority of cases, then they become ‘a risk’. Engaging with people who are ‘at risk’ highlights the importance of having respect and dialogue with communities that are often the first to be lambasted when things go wrong, but which seldom get all the help they need, or the credit when they get it right. We argue that civil society is particularly suited for being part of this process.

Ed: It seems like the things that people do or say online can only really be understood in terms of the context. But often we don’t have enough information, and it can be very hard to just look at something and say ‘This is definitely extremist material that is going to incite someone to commit terrorist or violent acts’.

Josh: Yes, I think you’re right. In the report we try to take what is a very complicated concept – extremist material – and divide it into more manageable chunks of meaning. We talk about three hierarchical levels. The degree of legal consensus over whether content should be banned decreases as it gets less extreme. The first level we identified was straight up provocation and hate speech. Hate speech legislation has been part of the law for a long time. You can’t incite racial hatred, you can’t incite people to crimes, and you can’t promote terrorism. Most countries in Europe have laws against these things.

The second level is the glorification and justification of terrorism. This is usually more post-hoc as by definition if you are glorifying something it has already happened. You may well be inspiring future actions, but that relationship between the act of violence and the speech act is different than with provocation. Nevertheless, some countries, such as Spain and France, have pushed hard on criminalising this. The third level is non-violent extremist material. This is the most contentious level, as there is very little consensus about what types of material should be called ‘extremist’ even though they are non-violent. One of the interviewees that we spoke to said that often it is hard to distinguish between someone who is just being friendly and someone who is really trying to persuade or groom someone to go to Syria. It is really hard to put this into a legal framework with the level of clarity that the law demands.

There is a proportionality question here. When should something be considered specifically illegal? And, then, if an illegal act has been committed what should the appropriate response be? This is bound to be very different in different situations.

Ed: Do you think that there are any immediate or practical steps that governments can take to improve the current situation? And do you think that there any ethical concerns which are not being paid sufficient attention?

Josh: In the report we raised a few concerns about existing government responses. There are lots of things beside privacy that could be seen as fundamental human rights and that are being encroached upon. Freedom of association and assembly is a really interesting one. We might not have the same reverence for a Facebook event plan or discussion group as we would a protest in a town hall, but of course they are fundamentally pretty similar.

The wider danger here is the issue of mission creep. Once you have systems in place that can do potentially very powerful analytical investigatory things then there is a risk that we could just keep extending them. If something can help us fight terrorism then should we use it to fight drug trafficking and violent crime more generally? It feels to me like there is a technical-military-industrial complex mentality in government where if you build the systems then you just want to use them. In the same way that CCTV cameras record you irrespective of whether or not you commit a violent crime or shoplift, we need to ask whether the same panoptical systems of surveillance should be extended to the Internet. Now, to a large extent they are already there. But what should we train the torchlight on next?

This takes us back to the importance of having necessary, proportionate, and independently authorized processes. When you drill down into how rights like privacy should be balanced with security, it gets really complicated. But the basic process-driven things that we identified in the report are far simpler: if we accept that governments have the right to take certain actions in the name of security, then, no matter how important or life-saving those actions are, there are still protocols that governments must follow. We really wanted to infuse these issues into the debate through the report.

Read the full report: Brown, I., and Cowls, J., (2015) Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material. VOX-Pol Publications.


Josh Cowls is a student and researcher based at MIT, working to understand the impact of technology on politics, communication and the media.

Josh Cowls was talking to Blog Editor Bertie Vidgen.

]]>
Controlling the crowd? Government and citizen interaction on emergency-response platforms https://ensr.oii.ox.ac.uk/controlling-the-crowd-government-and-citizen-interaction-on-emergency-response-platforms/ Mon, 07 Dec 2015 11:21:52 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3529 There is a great deal of interest in the use of crowdsourcing tools and practices in emergency situations. Gregory Asmolov‘s article Vertical Crowdsourcing in Russia: Balancing Governance of Crowds and State–Citizen Partnership in Emergency Situations (Policy and Internet 7,3) examines crowdsourcing of emergency response in Russia in the wake of the devastating forest fires of 2010. Interestingly, he argues that government involvement in these crowdsourcing efforts can actually be used to control and regulate volunteers from the top down — not just to “mobilize them”.

RUSSIA, NEAR RYAZAN – 8 MAY 2011: Piled up wood in the forest one winter after a terribly huge forest fire in Russia in year 2010. Image: Max Mayorov (Flickr).
My interest in the role of crowdsourcing tools and practices in emergency situations was triggered by my personal experience. In 2010 I was one of the co-founders of the Russian “Help Map” project, which facilitated volunteer-based response to wildfires in central Russia. When I was working on this project, I realized that a crowdsourcing platform can bring the participation of the citizen to a new level and transform sporadic initiatives by single citizens and groups into large-scale, relatively well coordinated operations. Just as importantly, both the needs, and the forms of participation required to address those needs, were defined by the users themselves.

To some extent the citizen-based response filled the gap left by the lack of a sufficient response from the traditional institutions.[1] This suggests that the role of ICTs in disaster response should be examined within the political context of the power relationship between members of the public who use digital tools and the traditional institutions. My experience in 2010 was the first time I was able to see that, while we would expect that in a case of natural disaster both the authorities and the citizens would be mostly concerned about the emergency, the actual situation might be different.

Apparently the emergence of independent, citizen-based collective action in response to a disaster was considered a threat by the institutional actors. First, it was a threat to the image of these institutions, which didn’t want citizens to be portrayed as the leading responding actors. Second, any type of citizen-based collective action, even if not purely political, may be an issue of concern in authoritarian countries in particular. Accordingly, one can argue that, while citizens are struggling against a disaster, in some cases the traditional institutions may make substantial efforts to restrain and contain the action of citizens. In this light, the role of information technologies can include not only enhancing citizen engagement and increasing the efficiency of the response, but also controlling the digital crowd of potential volunteers.

The purpose of this paper was to conceptualize the tension between the role of ICTs in the engagement of the crowd and its resources, and the role of ICTs in controlling the resources of the crowd. The research suggests a theoretical and methodological framework that allows us to explore this tension. The paper focuses on an analysis of specific platforms and presents empirical data about the structure of the platforms, along with interviews with developers and administrators of the platforms. This data is used in order to identify how tools of engagement are transformed into tools of control, and what major differences there are between platforms that seek to achieve these two goals. That said, obviously any platform can have properties of control and properties of engagement at the same time; however the proportion of these two types of elements can differ significantly.

One of the core issues for my research is how traditional actors respond to fast, bottom-up innovation by citizens.[2] On the one hand, the authorities try to restrict the empowerment of citizens by the new tools. On the other hand, the institutional actors also seek to innovate and develop new tools that can restore the balance of power that has been challenged by citizen-based innovation. The tension between using digital tools for the engagement of the crowd and for control of the crowd can be considered as one of the aspects of this dynamic.

That doesn’t mean that all state-backed platforms are created solely for the purpose of control. One can argue, however, that the development of digital tools that offer a mechanism of command and control over the resources of the crowd is prevalent among the projects that are supported by the authorities. This can also be approached as a means of using information technologies in order to include the digital crowd within the “vertical of power”, which is a top-down strategy of governance. That is why this paper seeks to conceptualize this phenomenon as “vertical crowdsourcing”.

The question of whether using a digital tool as a mechanism of control is intentional is to some extent secondary. What is important is that the analysis of platform structures relying on activity theory identifies a number of properties that allow us to argue that these tools are primarily tools of control. The conceptual framework introduced in the paper is used in order to follow the transformation of tools for the engagement of the crowd into tools of control over the crowd. That said, some of the interviews with the developers and administrators of the platforms may suggest the intentional nature of the development of tools of control, while crowd engagement is secondary.

[1] Asmolov G. “Natural Disasters and Alternative Modes of Governance: The Role of Social Networks and Crowdsourcing Platforms in Russia”, in Bits and Atoms Information and Communication Technology in Areas of Limited Statehood, edited by Steven Livingston and Gregor Walter-Drop, Oxford University Press, 2013.

[2] Asmolov G., “Dynamics of innovation and the balance of power in Russia”, in State Power 2.0 Authoritarian Entrenchment and Political Engagement Worldwide, edited by Muzammil M. Hussain and Philip N. Howard, Ashgate, 2013.

Read the full article: Asmolov, G. (2015) Vertical Crowdsourcing in Russia: Balancing Governance of Crowds and State–Citizen Partnership in Emergency Situations. Policy and Internet 7,3: 292–318.


Gregory Asmolov is a PhD student at the LSE, where he is studying crowdsourcing and the emergence of spontaneous order in situations of limited statehood. He is examining the emerging collaborative power of ICT-enabled crowds in crisis situations, and aiming to investigate the topic drawing on evolutionary theories concerned with spontaneous action and the sustainability of voluntary networked organizations. He analyzes whether crowdsourcing practices can lead to the development of bottom-up online networked institutions and “peer-to-peer” governance.

]]>
Does crowdsourcing citizen initiatives affect attitudes towards democracy? https://ensr.oii.ox.ac.uk/does-crowdsourcing-of-citizen-initiatives-affect-attitudes-towards-democracy/ Sun, 22 Nov 2015 20:30:17 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3496 Crowdsourcing legislation is an example of a democratic innovation that gives citizens a say in the legislative process. In their Policy and Internet journal article ‘Does Crowdsourcing Legislation Increase Political Legitimacy? The Case of Avoin Ministeriö in Finland’, Henrik Serup Christensen, Maija Karjalainen and Laura Nurminen explore how involvement in the citizen initiatives affects attitudes towards democracy. They find that crowdsourcing citizen initiatives can potentially strengthen political legitimacy, but both outcomes and procedures matter for the effects.

Crowdsourcing is a recent buzzword that describes efforts to use the Internet to mobilize online communities to achieve specific organizational goals. While crowdsourcing serves several purposes, the most interesting potential from a democratic perspective is the ability to crowdsource legislation. By giving citizens the means to affect the legislative process more directly, crowdsourcing legislation is an example of a democratic innovation that gives citizens a say in the legislative process. Recent years have witnessed a scholarly debate on whether such new forms of participatory governance can help cure democratic deficits such as a declining political legitimacy of the political system in the eyes of the citizenry. However, it is still not clear how taking part in crowdsourcing affects the political attitudes of the participants, and the potential impact of such democratic innovations therefore remains unclear.

In our study, we contribute to this research agenda by exploring how crowdsourcing citizens’ initiatives affected political attitudes in Finland. The non-binding Citizens’ Initiative instrument in Finland was introduced in spring 2012 to give citizens the chance to influence the agenda of political decision making. In particular, we zoom in on people active on the Internet website Avoin Ministeriö (Open Ministry), which is a site based on the idea of crowdsourcing where users can draft citizens’ initiatives and deliberate on their contents. As is frequently the case for studies of crowdsourcing, we find that only a small portion of the users are actively involved in the crowdsourcing process. The option to deliberate on the website was used by about 7% of the users; the rest were only passive readers or supported initiatives made by others. Nevertheless, Avoin Ministeriö has been instrumental in creating support for several of the most successful initiatives during the period, showing that the website has been a key actor during the introductory phase of the Citizens’ Initiative in Finland.

We study how developments in political attitudes were affected by outcome satisfaction and process satisfaction. Outcome satisfaction concerns whether the participants get their preferred outcome through their involvement, and this has been emphasized by proponents of direct democracy. Since citizens get involved to achieve a specific outcome, their evaluation of the experience hinges on whether or not they achieve this outcome. Process satisfaction, on the other hand, is more concerned with the perceived quality of decision making. According to this perspective, what matters is that participants find that their concerns are given due consideration. When people find the decision making to be fair and balanced, they may even accept not getting their preferred outcome. The relative importance of these two perspectives remains disputed in the literature.

The research design consisted of two surveys administered to the users of Avoin Ministeriö before and after the Finnish Parliament’s decision on the first citizens’ initiative, concerning a ban on the fur-farming industry in Finland. This allowed us to observe how involvement in the crowdsourcing process shaped developments in central political attitudes among the users of Avoin Ministeriö, and what factors determined the developments in subjective political legitimacy. The first survey was conducted in fall 2012, when the initiators were gathering signatures in support of the initiative to ban fur farming, while the second was conducted in summer 2013, when Parliament rejected the initiative. Altogether 421 people completed both surveys, and these respondents comprised the sample for the analyses.
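To illustrate the logic of this two-wave design, here is a minimal sketch of a panel change-score comparison: compute each respondent’s change in political trust between the waves, then compare the average change across groups. All data, scales, and variable names below are hypothetical, invented for illustration; they are not the study’s actual dataset, measures, or statistical model.

```python
# A minimal sketch of a two-wave panel comparison: how did political trust
# change between the surveys, and does the change differ between supporters
# and non-supporters of the initiative? All data and variable names are
# hypothetical, not the study's actual dataset or model.
from statistics import mean

# Each respondent: trust (0-10 scale) measured before (wave 1) and after
# (wave 2) the parliamentary decision, plus whether they supported the
# initiative.
respondents = [
    {"trust_w1": 6.0, "trust_w2": 4.0, "supporter": True},
    {"trust_w1": 5.0, "trust_w2": 4.5, "supporter": True},
    {"trust_w1": 6.0, "trust_w2": 6.0, "supporter": False},
    {"trust_w1": 4.0, "trust_w2": 4.5, "supporter": False},
]

def mean_trust_change(group):
    """Average within-person change in trust between the two waves."""
    return mean(r["trust_w2"] - r["trust_w1"] for r in group)

supporters = [r for r in respondents if r["supporter"]]
others = [r for r in respondents if not r["supporter"]]

print(mean_trust_change(supporters))  # -1.25: trust declined among supporters
print(mean_trust_change(others))      # 0.25
```

In the actual study the comparison was of course run on the full panel of 421 respondents, with appropriate statistical controls rather than a simple difference of means.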

The study yielded a number of interesting findings. First, those who were dissatisfied with Parliament rejecting the initiative experienced a significantly more negative development in political trust compared to those who did not explicitly support the initiative. This shows that the crowdsourcing process had a negative impact on political legitimacy among the initiative’s supporters, which is in line with previous contributions emphasizing the importance of outcome legitimacy. It is worth noting that this also affected trust in the Finnish President, even though the President has no formal powers in relation to the Citizens’ Initiative in Finland. This suggests that negative effects on political legitimacy can be more severe than a merely temporary dissatisfaction with the political actors responsible for the decision.

Nevertheless, the outcome may not be the most important factor for determining developments in political legitimacy. Our second major finding indicated that those who were dissatisfied with the way Parliament handled the initiative also experienced more negative developments in political legitimacy compared to those who were satisfied. Furthermore, this effect was more pervasive than the effect for outcome satisfaction. This implies that the procedures for handling non-binding initiatives may play a strong role in citizens’ perceptions of representative institutions, which is in line with previous findings emphasising the importance of procedural aspects and evaluations for judging political authorities.

We conclude that there is a beneficial impact on political legitimacy if crowdsourced citizens’ initiatives have broad appeal so they can be passed in Parliament. However, it is important to note that positive effects on political legitimacy do not hinge on Parliament approving citizens’ initiatives. If the MPs invest time and resources in the careful, transparent and publicly justified handling of initiatives, possible negative effects of rejecting initiatives can be diminished. Citizens and activists may accept an unfavourable decision if the procedure by which it was reached seems fair and just. Finally, the results give reason to be hopeful about the role of crowdsourcing in restoring political legitimacy, since a majority of our respondents felt that the possibility of crowdsourcing citizens’ initiatives clearly improved Finnish democracy.

While not all hopes have been fulfilled so far, crowdsourcing legislation therefore still has the potential to help rebuild political legitimacy.

Read the full article: Christensen, H., Karjalainen, M., and Nurminen, L., (2015) Does Crowdsourcing Legislation Increase Political Legitimacy? The Case of Avoin Ministeriö in Finland. Policy and Internet 7 (1) 25–45.


Henrik Serup Christensen is Academy Research Fellow at SAMFORSK, Åbo Akademi University.

Maija Karjalainen is a PhD Candidate at the Department of Political Science and Contemporary History in the University of Turku, Finland.

Laura Nurminen is a Doctoral Candidate at the Department of Political and Economic Studies at Helsinki University, Finland.

Do Finland’s digitally crowdsourced laws show a way to resolve democracy’s “legitimacy crisis”? https://ensr.oii.ox.ac.uk/do-finlands-digitally-crowdsourced-laws-show-a-way-to-resolve-democracys-legitimacy-crisis/ Mon, 16 Nov 2015 12:29:29 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3475 There is much discussion about a perceived “legitimacy crisis” in democracy. In his article The Rise of the Mediating Citizen: Time, Space, and Citizenship in the Crowdsourcing of Finnish Legislation, Taneli Heikka (University of Jyväskylä) discusses the digitally crowdsourced law for same-sex marriage that was passed in Finland in 2014, analysing how the campaign used new digital tools and created practices that affect democratic citizenship and power making.

Ed: There is much discussion about a perceived “legitimacy crisis” in democracy. For example, less than half of the Finnish electorate under 40 choose to vote. In your article you argue that Finland’s 2012 Citizens’ Initiative Act aimed to address this problem by allowing for the crowdsourcing of ideas for new legislation. How common is this idea? (And indeed, how successful?)

Taneli: The idea that digital participation could counter the “legitimacy crisis” is a fairly common one. Digital utopians have nurtured that idea from the early years of the internet, and have often been disappointed. A couple of things stand out in the Finnish experiment that make it worth a closer look.

First, the digital crowdsourcing system with strong digital identification is a reliable and potentially viral campaigning tool. Most civic initiative systems I have encountered rely on manual or otherwise cumbersome, and less reliable, signature collection methods.

Second, in the Finnish model, initiatives that pass the threshold of 50,000 signatures must be treated by Parliament in the same way as an initiative from a group of MPs. This gives an initiative constitutional and political weight.

Ed: The Act led to the passage of Finland’s first equal marriage law in 2014. In this case, online platforms were created for collecting signatures as well as drafting legislation. An NGO created a well-used platform, but it subsequently had to shut it down because it couldn’t afford the electronic signature system. Crowds are great, but not a silver bullet if something as prosaic as authentication is impossible. Where should the balance lie between NGOs and centrally funded services, i.e. government?

Taneli: The crucial thing in the success of a civic initiative system is whether it gives the people real power. This question is decided by the legal framework and constitutional basis of the initiative system. So, governments have a very important role in this early stage – designing a law for truly effective citizen initiatives.

When a framework for power-making is in place, service providers will emerge. Should the providers be public, private or third sector entities? I think that is defined by local political culture and history.

In the United States, the civic technology field is heavily funded by philanthropic foundations. There is an urge to make these tools commercially viable, though no one seems to have figured out the business model. In Europe there’s less philanthropic money, and in my experience experiments are more often government funded.

Both models have their pros and cons, but I’d like to see the two continents learning more from each other. American digital civic activists tell me enviously that the radically empowering Finnish model with a government-run service for crowdsourcing for law would be impossible in the US. In Europe, civic technologists say they wish they had the big foundations that Americans have.

Ed: But realistically, how useful is the input of non-lawyers in (technical) legislation drafting? And is there a critical threshold of people necessary to draft legislation?

Taneli: I believe that input is valuable from anyone who cares to invest some time in learning an issue. That said, having lawyers in the campaign team really helps. Writing legislation is a special skill. It’s a pity that the co-creation features in Finland’s Open Ministry website were shut down due to a lack of funding. In that model, help from lawyers could have been made more accessible for all campaign teams.

In terms of numbers, I don’t think the size of the group is an issue either way. A small group of skilled and committed people can do a lot in the drafting phase.

Ed: But can the drafting process become rather burdensome for contributors, given professional legislators will likely heavily rework, or even scrap, the text?

Taneli: Professional legislators will most likely rework the draft, and that is exactly what they are supposed to do. Initiating an idea, working on a draft, and collecting support for it are just phases in a complex process that continues in the parliament after the threshold of 50,000 signatures is reached. A well-written draft will make the legislators’ job easier, but it won’t replace them.

Ed: Do you think there’s a danger that crowdsourcing legislation might just end up reflecting the societal concerns of the web-savvy – or of campaigning and lobbying groups?

Taneli: That’s certainly a risk, but so far there is little evidence of it happening. The only initiative passed so far in Finland – the Equal Marriage Act – was supported by the majority of Finns and by the majority of political parties, too. The initiative system was used to bypass a political gridlock. The handful of initiatives that have reached the 50,000 signatures threshold and entered parliamentary proceedings represent a healthy variety of issues in the fields of education, crime and punishment, and health care. Most initiatives seem to echo the viewpoint of the ‘ordinary people’ instead of lobbies or traditional political and business interest groups.

Ed: You state in your article that the real-time nature of digital crowdsourcing appeals to a generation that likes and dislikes quickly; a generation that inhabits “the space of flows”. Is this a potential source of instability or chaos? And how can this rapid turnover of attention be harnessed efficiently so as to usefully contribute to a stable and democratic society?

Taneli: The Citizens’ Initiative Act in Finland is one fairly successful model to look at in terms of balancing stability and disruptive change. It is a radical law in its potential to empower the individual and affect real power-making. But it is by no means a shortcut to ‘legislation by a digital mob’, or anything of that sort. While the digital campaigning phase can be an explosive expression of the power of the people in the ‘time and space of flows’, the elected representatives retain the final say. Passing a law is still a tedious process, and often for good reasons.

Ed: You also write about the emergence of the “mediating citizen” – what do you mean by this?

Taneli: The starting point for developing the idea of the mediating citizen is Lance Bennett’s AC/DC theory, i.e. the dichotomy of the actualising and the dutiful citizen. The dutiful citizen is the traditional form of democratic citizenship – it values voting, following the mass media, and political parties. The actualising citizen, on the other hand, finds voting and parties less appealing, and prefers more flexible and individualised forms of political action, such as ad hoc campaigns and the use of interactive technology.

Taneli: I find these models accurate, but I was not able to place the emerging typologies of civic action I observed in the Finnish case within this duality. What we see is understanding and respect for parliamentary institutions and their power, but also strong faith in one’s skills and capability to improve the system in creative, technologically savvy ways. I used the concept of the mediating citizen to describe an actor who is able to move between the previous typologies, mediating between them. In the Finnish example, creative tools were developed to feed initiatives into the traditional power-making system of the parliament.

Ed: Do you think Finland’s Citizens’ Initiative Act is a model for other governments to follow when addressing concerns about “democratic legitimacy”?

Taneli: It is an interesting model to look at. But unfortunately the ‘legitimacy crisis’ is probably too complex a problem to be solved by a single participation tool. What I’d really like to see is a wave of experimentation, both on-line and off-line, as well as cross-border learning from each other. And is that not what happened when the representative model spread, too?

Read the full article: Heikka, T., (2015) The Rise of the Mediating Citizen: Time, Space, and Citizenship in the Crowdsourcing of Finnish Legislation. Policy and Internet 7 (3) 268–291.


Taneli Heikka is a journalist, author, entrepreneur, and PhD student based in Washington.

Taneli Heikka was talking to Blog Editor Pamina Smith.

Assessing crowdsourcing technologies to collect public opinion around an urban renovation project https://ensr.oii.ox.ac.uk/assessing-crowdsourcing-technologies-to-collect-public-opinion-around-an-urban-renovation-project/ Mon, 09 Nov 2015 11:20:50 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3453 Ed: Given the “crisis in democratic accountability”, methods to increase citizen participation are in demand. To this end, your team developed some interactive crowdsourcing technologies to collect public opinion around an urban renovation project in Oulu, Finland. What form did the consultation take, and how did you assess its impact?

Simo: Over the years we’ve deployed various types of interactive interfaces on a network of public displays. In this case it was basically a network of interactive screens deployed in downtown Oulu, next to where a renovation project was happening that we wanted to collect feedback about. We deployed an app on the screens that allowed people to type feedback directly on the screens (via an on-screen soft keyboard) and submit it to city authorities via SMS, Twitter and email. We also had a smiley-based “rating” system there, which people could use to leave quick feedback about certain aspects of the renovation project.

We ourselves could not, and did not even want to, assess the impact — that’s why we did this in partnership with the city authorities. Then, together with the city folks, we could better evaluate whether what we were doing had any real-world value. And, as we discuss, in the end it did!

Ed: How did you go about encouraging citizens to engage with touch screen technologies in a public space — particularly the non-digitally literate, or maybe people who are just a bit shy about participating?

Simo: Actually, the whole point was that we did not deliberately encourage them by advertising the deployment or by “forcing” anyone to use it. Quite the contrary: we wanted to see if people voluntarily used it, through technologies that are an integral part of the city itself. This is kind of the future vision of urban computing, anyway. The screens had been there for years already, and what we wanted to see was whether people would find this type of service on their own when exploring the screens, and whether they would then take the opportunity to give feedback using it. The screens hosted a variety of other applications as well: games, news, etc., so it was interesting to also gauge how appealing the idea of public civic feedback is in comparison to everything else on offer.

Ed: You mention that using SMS to provide citizen feedback was effective in filtering out noise, since it required a minimal payment from citizens — but it also created an initial barrier to participation. How do you increase the quality of feedback without placing citizens on an uneven playing field from the outset — particularly where technology is concerned?

Simo: Yes, SMS really worked well in lowering the amount of irrelevant commentary and complete nonsense. And it is true that SMS introduces a cost, and even if the cost is minuscule, it’s still a cost to the citizen — and just voicing one’s opinions should of course be free. So there’s no correct answer here: if the channel is public and accessible to anyone, there will be a lot of noisy input. In such cases moderation is a heavy task, and to this end we have been exploring crowdsourcing as well: we can make the community moderate itself. First, we need to identify the users who are genuinely concerned or interested in the issues being explored, and then funnel those users into moderating the discussion and output. It is a win-win situation — the people who want to get involved are empowered to moderate the commentary from others, for implicit rewards.
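A minimal sketch of such a self-moderation funnel might look like the following. The data, user names, and engagement threshold are all hypothetical and purely illustrative; this is not the system deployed in Oulu, just the general idea of promoting the most engaged contributors to moderators.

```python
# Hypothetical sketch of a self-moderation funnel: users who contribute
# most on a topic are promoted to moderate other people's feedback.
# The data and the threshold are illustrative, not the deployed system.
from collections import Counter

feedback = [
    ("alice", "The new bike lane is great"),
    ("bob", "asdfgh"),
    ("alice", "Please add more lighting"),
    ("carol", "Noise from the site is bad at night"),
    ("alice", "Crossing at the main street feels unsafe"),
]

MIN_CONTRIBUTIONS = 2  # engagement threshold for moderator status

# Count contributions per user and promote the engaged ones.
counts = Counter(user for user, _ in feedback)
moderators = sorted(u for u, n in counts.items() if n >= MIN_CONTRIBUTIONS)

print(moderators)  # ['alice']
```

In practice the engagement signal would of course be richer than a raw message count (topic relevance, history across deployments, peer ratings), but the funnel structure stays the same.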

Ed: For this experiment on citizen feedback in an urban space, your team assembled the world’s largest public display network, which was available for research purposes 24/7. In deploying this valuable research tool, how did you guarantee the privacy of the participants involved, given that some might not want to be seen submitting very negative comments? (e.g. might a form of social pressure be the cause of relatively low participation in the study?)

Simo: The display network was not built only for this experiment; we have run hundreds of experiments on it, and have written close to a hundred academic papers about them. So the overarching research focus, really, is on how we can benefit citizens using the network. Over the years we have been able to systematically study issues such as social pressure, group use, and the effects of the public space or, one might say, “stage”. And yes, social pressure does affect things a lot, and here allowing people to participate via e.g. SMS or email helps a lot. That way users won’t be seen sending their input directly.

Group use is another thing: in groups people don’t feel pressure from the “outside world” so much and are willing to interact with our applications (such as the one documented in this work), but, again, it affects the feedback quality. Groups don’t necessarily tell the truth, as they aim for consensus, so individual, and very important, opinions may not be heard. Ultimately, this is all just part of the game we must deal with, and the real question becomes how to minimize the negative effects that the public space introduces. The positives are clear: everyone can participate, easily, in the heart of the city, and whenever they want.

Ed: Despite the low participation, you still believe that the experimental results are valuable. What did you learn?

Simo: The question in a way already reveals the first important point: people are just not as interested in these “civic” things as they might claim in interviews and pre-studies. When we deploy a civic feedback prototype as the “only option” on a public gizmo (a display, some kind of new tech piece, etc.), people use it out of curiosity. Now, in our case, we just deployed it “as is”, as part of the city infrastructure, for people to use if, and only if, they wanted to use it. So the prototype competes for attention against smartphones, other applications on the displays, the cluttered city itself… everything!

When one reads many academic papers on interactive civic engagement prototypes, the claims in the discussion are set very high: “we got this much participation in this short time”, etc., but that’s not the entire truth. Leave the thing there for months and see if it still interests people! We have done the same: deployed a prototype for three days, got tons of interaction, published it, and learned only afterwards that “oh, maybe we were a bit optimistic about the efficiency” when use suddenly dropped to a minimum. It’s just not that easy, and applications require frequent updates to keep user interest over the long term.

Also, the radical differences in the feedback channels were surprising, but we already talked about that a bit earlier.

Ed: Your team collaborated with local officials, which is obviously valuable (and laudable), but it can potentially impose an extra burden on academics. For example, you mention that instead of employing novel feedback formats (e.g. video, audio, images, interactive maps), your team used only text. But do you think working with public officials benefitted the project as a whole, and how?

Simo: The extra burden is a necessity if one wants to really claim authentic success in civic engagement. In our opinion, such success only happens between citizens and the city, not between citizens and researchers. We do not wish to build these deployments for the sake of an academic article or two: the display infrastructure is there for citizens and the city, and if we don’t educate the authorities on how to use it then nobody will. Advertisers would be glad to take over the entire real estate there, so in a way this project is just part of a bigger picture: making the display infrastructure “useful” instead of just a gimmick to kill time with (games) or a vehicle for advertising.

And yes, the burden is real, but also because of this we could document what we have learned about dealing with authorities: how it is first easy to sell these prototypes to them, but sometimes hard to get commitment, etc. And it is not just this prototype — we’ve done a number of other civic engagement projects where we have noticed the same issues mentioned in the paper as well.

Ed: You also mention that as academics and policymakers you had different notions of success: for example in terms of levels of engagement and feedback of citizens. What should academics aspiring to have a concrete impact on society keep in mind when working with policymakers?

Simo: It takes a lot of time to assess impact. Policymakers will not be able to say after only a few weeks (which is the typical length of studies in our field) whether the prototype has actual value, or if it’s just a “nice experiment”. So, deploy your strategy / tech / anything you’re doing, write about it, and let it sit. Move on with life, and then revisit it after a few months to see if anything has come out of it! Patience is key here.

Ed: Did the citizen feedback result in any changes to the urban renovation project they were being consulted on?

Simo: Not the project directly: the project was naturally planned years ahead, and the blueprints were final at that point. The most remarkable finding for us (and the authorities) was that after moderating the noise out of the feedback, the remaining insight was pretty much the only feedback they ever directly got from citizens. Finns tend to be a bit on the shy side, so people won’t just pick up the phone and call the local engineering department and speak out. Not sure if anyone does, really? So they complain and chat on forums and at coffee tables, and it would require active work for the authorities to find and reach out to these people.

With the display infrastructure, which was already there, we were able to gauge the public opinion that did not affect the construction directly, but indirectly affected how the department could manage their press releases, which things to stress in public communications, what parts of PR to handle differently in the next stage of the renovation project etc.

Ed: Are you planning any more experiments?

Simo: We are constantly running quite a few experiments. On the civic engagement side, for example, we are investigating how to gamify environmental awareness (recycling, waste management, keeping the environment clean) for children, as well as running longer longitudinal studies to assess the engagement of specific groups of people (e.g., children and the elderly).

Read the full article: Hosio, S., Goncalves, J., Kostakos, V. and Riekki, J. (2015) Crowdsourcing Public Opinion Using Urban Pervasive Technologies: Lessons From Real-Life Experiments in Oulu. Policy and Internet 7 (2) 203–222.


Simo Hosio is a research scientist (Dr. Tech.) at the University of Oulu, in Finland. Core topics of his research are smart city tech, crowdsourcing, wisdom of the crowd, civic engagement, and all types of “mobile stuff” in general.

Simo Hosio was talking to blog editor Pamina Smith.

Crowdsourcing for public policy and government https://ensr.oii.ox.ac.uk/crowdsourcing-for-public-policy-and-government/ Thu, 27 Aug 2015 11:28:51 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3339 If elections were invented today, they would probably be referred to as “crowdsourcing the government.” First coined in a 2006 issue of Wired magazine (Howe, 2006), the term crowdsourcing has come to be applied loosely to a wide variety of situations where ideas, opinions, labor or something else is “sourced” in from a potentially large group of people. Whilst most commonly applied in business contexts, there is an increasing amount of buzz around applying crowdsourcing techniques in government and policy contexts as well (Brabham, 2013).

Though there is nothing qualitatively new about involving more people in government and policy processes, digital technologies in principle make it possible to increase the quantity of such involvement dramatically, by lowering the costs of participation (Margetts et al., 2015) and making it possible to tap into people’s free time (Shirky, 2010). This difference in quantity is arguably great enough to obtain a quality of its own. We can thus be justified in using the term “crowdsourcing for public policy and government” to refer to new digitally enabled ways of involving people in any aspect of democratic politics and government, not replacing but rather augmenting more traditional participation routes such as elections and referendums.

In this editorial, we will briefly highlight some of the key emerging issues in research on crowdsourcing for public policy and government. Our entry point into the discussion is a collection of research papers first presented at the Internet, Politics & Policy 2014 (IPP2014) conference organized by the Oxford Internet Institute (University of Oxford) and the Policy & Internet journal. The theme of this very successful conference—our third since the founding of the journal—was “crowdsourcing for politics and policy.” Out of almost 80 papers presented at the conference in September last year, 14 of the best have now been published as peer-reviewed articles in this journal, including five in this issue. A further handful of papers from the conference focusing on labor issues will be published in the next issue, but we can already now take stock of all the articles focusing on government, politics, and policy.

The growing interest in crowdsourcing for government and public policy must be understood in the context of the contemporary malaise of politics, which is being felt across the democratic world, but most of all in Europe. The problems with democracy have a long history, from the declining powers of parliamentary bodies when compared to the executive, to declining turnouts in elections, declining participation in mass parties, and declining trust in democratic institutions and politicians. But these problems have gained a new salience in the last five years, as the ongoing financial crisis has contributed to the rise of a range of new populist forces all across Europe, and to a fragmentation of the center ground. Furthermore, the poor accuracy of pre-election polls in recent elections in Israel and the UK has generated considerable debate over the usefulness and accuracy of the traditional way of knowing what the public is thinking: the sample survey.

Many place hopes on technological and institutional innovations such as crowdsourcing to show a way out of the brewing crisis of democratic politics and political science. One of the key attractions of crowdsourcing techniques to governments and grass roots movements alike is the legitimacy such techniques are expected to be able to generate. For example, crowdsourcing techniques have been applied to enable citizens to verify the legality and correctness of government decisions and outcomes. A well-known application is to ask citizens to audit large volumes of data on government spending, to uncover any malfeasance but also to increase citizens’ trust in the government (Maguire, 2011).

Articles emerging from the IPP2014 conference analyze other interesting and comparable applications. In an article titled “Population as Auditor of an Election Process in Honduras: The Case of the VotoSocial Crowdsourcing Platform,” Carlos Arias, Jorge Garcia and Alejandro Corpeño (2015) describe the use of crowdsourcing for auditing election results. Dieter Zinnbauer (2015) discusses the potentials and pitfalls of the use of crowdsourcing for some other types of auditing purposes, in “Crowdsourced Corruption Reporting: What Petrified Forests, Street Music, Bath Towels, and the Taxman Can Tell Us About the Prospects for Its Future.”

Besides allowing citizens to verify the outcome of a process, crowdsourcing can also be used to lend an air of inclusiveness and transparency to a process itself. This process legitimacy can then indirectly legitimate the outcome of the process as well. For example, crowdsourcing-style open processes have been used to collect policy ideas, gather support for difficult policy decisions, and even generate detailed spending plans through participatory budgeting (Wampler & Avritzer, 2004). Articles emerging from our conference further advance this line of research. Roxana Radu, Nicolo Zingales and Enrico Calandro (2015) examine the use of crowdsourcing to lend process legitimacy to Internet governance, in an article titled “Crowdsourcing Ideas as an Emerging Form of Multistakeholder Participation in Internet Governance.” Graham Smith, Robert C. Richards Jr. and John Gastil (2015) write about “The Potential of Participedia as a Crowdsourcing Tool for Comparative Analysis of Democratic Innovations.”

An interesting cautionary tale is presented by Henrik Serup Christensen, Maija Karjalainen and Laura Nurminen (2015) in “Does Crowdsourcing Legislation Increase Political Legitimacy? The Case of Avoin Ministeriö in Finland.” They show how a citizen initiative process ended up decreasing government legitimacy, after the government failed to implement the outcome of an initiative process that was perceived as highly legitimate by its supporters. Taneli Heikka (2015) further examines the implications of citizen initiative processes to the state–citizen relationship in “The Rise of the Mediating Citizen: Time, Space and Citizenship in the Crowdsourcing of Finnish Legislation.”

In many of the contributions that touch on the legitimating effects of crowdsourcing, one can sense a third, latent theme. Besides allowing outcomes to be audited and processes to be potentially more inclusive, crowdsourcing can also increase the perceived legitimacy of a government or policy process by lending an air of innovation and technological progress to the endeavor and those involved in it. This is most explicitly stated by Simo Hosio, Jorge Goncalves, Vassilis Kostakos and Jukka Riekki (2015) in “Crowdsourcing Public Opinion Using Urban Pervasive Technologies: Lessons From Real-Life Experiments in Oulu.” They describe how local government officials collaborating with the research team to test a new public screen based polling system “expressed that the PR value boosted their public perception as a modern organization.” That some government crowdsourcing initiatives are at least in part motivated by such “crowdwashing” is hardly surprising, but it encourages us to retain a critical approach and analyze actual outcomes instead of accepting dominant discourses about the nature and effects of crowdsourcing at face value.

For instance, we must continue to examine the actual size, composition, internal structures and motivations of the supposed “crowds” that make use of online platforms. Articles emerging from our conference that contributed towards this aim include “Event Prediction With Learning Algorithms—A Study of Events Surrounding the Egyptian Revolution of 2011 on the Basis of Micro Blog Data” by Benedikt Boecking, Margeret Hall and Jeff Schneider (2015) and “Cyber Hate Speech on Twitter: An Application of Machine Classification and Statistical Modeling for Policy and Decision Making” by Pete Burnap and Matthew L. Williams (2015). Anatoliy Gruzd and Ksenia Tsyganova won a best paper award at the IPP2014 conference for an article published in this journal as “Information Wars and Online Activism During the 2013/2014 Crisis in Ukraine: Examining the Social Structures of Pro- and Anti-Maidan Groups.” These articles can be used to challenge the notion that crowdsourcing contributors are simply sets of independent individuals who are neatly representative of a larger population, and instead highlight the clusters, networks, and power structures inherent within them. This has implications for the democratic legitimacy of some of the more naive crowdsourcing initiatives.

One of the most original articles to emerge out of IPP2014 turns the concept of crowdsourcing for public policy and government on its head. While most research has focused on crowdsourcing’s empowering effects (or lack thereof), Gregory Asmolov (2015) analyses crowdsourcing as a form of social control. In an article titled “Vertical Crowdsourcing in Russia: Balancing Governance of Crowds and State–Citizen Partnership in Emergency Situations,” Asmolov draws on empirical evidence and theorists such as Foucault to show how crowdsourcing platforms can be used to institutionalize volunteer resources in order to align them with state objectives and prevent independent collective action. An article by Jorge Goncalves, Yong Liu, Bin Xiao, Saad Chaudhry, Simo Hosio and Vassilis Kostakos (2015) provides a less nefarious example of strategic use of online platforms to further government objectives, under the title “Increasing the Reach of Government Social Media: A Case Study in Modeling Government–Citizen Interaction on Facebook.”

Articles emerging from the conference also include two review articles that provide useful overviews of the field from different perspectives. “A Systematic Review of Online Deliberation Research” by Dennis Friess and Christiane Eilders (2015) takes stock of the use of digital technologies as public spheres. “The Fundamentals of Policy Crowdsourcing” by John Prpić, Araz Taeihagh and James Melton (2015) situates a broad variety of crowdsourcing literature into the context of a public policy cycle framework.

It has been extremely satisfying to follow the progress of these papers from initial conference submissions to high-quality journal articles, and to see that the final product not only advances the state of the art, but also provides certain new and critical perspectives on crowdsourcing. These perspectives will no doubt provoke responses, and Policy & Internet continues to welcome high-quality submissions dealing with crowdsourcing for public policy, government, and beyond.

Read the full editorial: Vili Lehdonvirta and Jonathan Bright (2015) Crowdsourcing for Public Policy and Government. Editorial. Policy & Internet, Volume 7, Issue 3, pages 263–267.

References

Arias, C.R., Garcia, J. and Corpeño, A. (2015) Population as Auditor of an Election Process in Honduras: The Case of the VotoSocial Crowdsourcing Platform. Policy & Internet 7 (2) 185–202.

Asmolov, G. (2015) Vertical Crowdsourcing in Russia: Balancing Governance of Crowds and State–Citizen Partnership in Emergency Situations. Policy & Internet 7 (3).

Brabham, D. C. (2013). Citizen E-Participation in Urban Governance: Crowdsourcing and Collaborative Creativity. IGI Global.

Boecking, B., Hall, M. and Schneider, J. (2015) Event Prediction With Learning Algorithms—A Study of Events Surrounding the Egyptian Revolution of 2011 on the Basis of Micro Blog Data. Policy & Internet 7 (2) 159–184.

Burnap P. and Williams, M.L. (2015) Cyber Hate Speech on Twitter: An Application of Machine Classification and Statistical Modeling for Policy and Decision Making. Policy & Internet 7 (2) 223–242.

Christensen, H.S., Karjalainen, M. and Nurminen, L. (2015) Does Crowdsourcing Legislation Increase Political Legitimacy? The Case of Avoin Ministeriö in Finland. Policy & Internet 7 (1) 25–45.

Friess, D. and Eilders, C. (2015) A Systematic Review of Online Deliberation Research. Policy & Internet 7 (3).

Goncalves, J., Liu, Y., Xiao, B., Chaudhry, S., Hosio, S. and Kostakos, V. (2015) Increasing the Reach of Government Social Media: A Case Study in Modeling Government–Citizen Interaction on Facebook. Policy & Internet 7 (1) 80–102.

Gruzd, A. and Tsyganova, K. (2015) Information Wars and Online Activism During the 2013/2014 Crisis in Ukraine: Examining the Social Structures of Pro- and Anti-Maidan Groups. Policy & Internet 7 (2) 121–158.

Heikka, T. (2015) The Rise of the Mediating Citizen: Time, Space and Citizenship in the Crowdsourcing of Finnish Legislation. Policy & Internet 7 (3).

Hosio, S., Goncalves, J., Kostakos, V. and Riekki, J. (2015) Crowdsourcing Public Opinion Using Urban Pervasive Technologies: Lessons From Real-Life Experiments in Oulu. Policy & Internet 7 (2) 203–222.

Howe, J. (2006). The Rise of Crowdsourcing. Wired.

Maguire, S. (2011). Can Data Deliver Better Government? Political Quarterly, 82(4), 522–525.

Margetts, H., John, P., Hale, S., & Yasseri, T. (2015). Political Turbulence: How Social Media Shape Collective Action. Princeton University Press.

Prpić, J., Taeihagh, A. and Melton, J. (2015) The Fundamentals of Policy Crowdsourcing. Policy & Internet 7 (3).

Radu, R., Zingales, N. and Calandro, E. (2015) Crowdsourcing Ideas as an Emerging Form of Multistakeholder Participation in Internet Governance. Policy & Internet 7 (3).

Shirky, C. (2010). Cognitive Surplus: How Technology Makes Consumers into Collaborators. Penguin Publishing Group.

Smith, G., Richards R.C. Jr. and Gastil, J. (2015) The Potential of Participedia as a Crowdsourcing Tool for Comparative Analysis of Democratic Innovations. Policy & Internet 7 (2) 243–262.

Wampler, B., & Avritzer, L. (2004). Participatory publics: civil society and new institutions in democratic Brazil. Comparative Politics, 36(3), 291–312.

Zinnbauer, D. (2015) Crowdsourced Corruption Reporting: What Petrified Forests, Street Music, Bath Towels, and the Taxman Can Tell Us About the Prospects for Its Future. Policy & Internet 7 (1) 1–24.

Why are citizens migrating to Uber and Airbnb, and what should governments do about it? https://ensr.oii.ox.ac.uk/why-are-citizens-migrating-to-uber-and-airbnb-and-what-should-governments-do-about-it/ Mon, 27 Jul 2015 06:48:57 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3307
Protest for fair taxi laws in Portland; organizers want city leaders to make ride-sharing companies play by the same rules as cabs and Town cars. Image: Aaron Parecki (Flickr).

Cars were smashed and tires burned in France last month in protests against the ride-hailing app Uber. Less violent protests have also been staged against Airbnb, a platform for renting short-term accommodation. Despite the protests, neither platform shows any signs of faltering. Uber says it has a million users in France, and is available in 57 countries. Airbnb is available in over 190 countries, and boasts over a million rooms, more than hotel giants like Hilton and Marriott. Policy makers at the highest levels are starting to notice the rise of these and similar platforms. An EU Commission flagship strategy paper notes that “online platforms are playing an ever more central role in social and economic life,” while the Federal Trade Commission recently held a workshop on the topic in Washington.

Journalists and entrepreneurs have been quick to coin terms that try to capture the essence of the social and economic changes associated with online platforms: the sharing economy; the on-demand economy; the peer-to-peer economy; and so on. Each perhaps captures one aspect of the phenomenon, but doesn’t go very far in helping us make sense of all its potentials and contradictions, including why some people love it and some would like to smash it into pieces. Instead of starting from the assumption that everything we see today is new and unprecedented, what if we dug into existing social science theory to see what it has to say about economic transformation and the emergence of markets?

Economic sociologists are adamant that markets don’t just emerge by themselves: they are always based on some kind of an underlying infrastructure that allows people to find out what goods and services are on offer, agree on prices and terms, pay, and have a reasonable expectation that the other party will honour the agreement. The oldest market infrastructure is the personal social network: traders hear what’s on offer through word of mouth and trade only with those whom they personally know and trust. But personal networks alone couldn’t sustain the immense scale of trading in today’s society. Every day we do business with strangers and trust them to provide for our most basic needs. This is possible because modern society has developed institutions — things like private property, enforceable contracts, standardized weights and measures, consumer protection, and many other general and sector-specific norms and facilities. By enabling and constraining everyone’s behaviours in predictable ways, institutions constitute a more robust and inclusive infrastructure for markets than personal social networks.

Modern institutions didn’t of course appear out of nowhere. Between prehistoric social networks and the contemporary institutions of the modern state, there is a long historical continuum of economic institutions, from ancient trade routes with their customs to medieval fairs with their codes of conduct to state-enforced trade laws of the early industrial era. Institutional economists led by Oliver Williamson and economic historians led by Douglass North theorized in the 1980s that economic institutions evolve towards more efficient forms through a process of natural selection. As new institutional forms become possible thanks to technological and organizational innovation, people switch to cheaper, easier, more secure, and overall more efficient institutions out of self-interest. Old and cumbersome institutions fall into disuse, and society becomes more efficient and economically prosperous as a result. Williamson and North both later received the Nobel Memorial Prize in Economic Sciences.

It is easy to frame platforms as the next step in such an evolutionary process. Even if platforms don’t replace state institutions, they can plug gaps that remain in the state-provided infrastructure. For example, enforcing a contract in court is often too expensive and unwieldy to be used to secure transactions between individual consumers. Platforms provide cheaper and easier alternatives to formal contract enforcement, in the form of reputation systems that allow participants to rate each other’s conduct and view past ratings. Thanks to this, small transactions like sharing a commute that previously only happened in personal networks can now potentially take place on a wider scale, resulting in greater resource efficiency and prosperity (the ‘sharing economy’). Platforms are not the first companies to plug holes in state-provided market infrastructure, though. Private arbitrators, recruitment agencies, and credit rating firms have been doing similar things for a long time.
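A minimal sketch of such a reputation system, assuming a five-star rating scale and an invented trust threshold (the class, user names, and parameters below are illustrative, not any actual platform’s mechanism):

```python
from collections import defaultdict

class ReputationSystem:
    """Toy model of a platform reputation system: participants rate each
    other after transactions, and past ratings inform future decisions."""

    def __init__(self):
        self.ratings = defaultdict(list)  # user -> list of 1-5 star ratings

    def rate(self, user, stars):
        if not 1 <= stars <= 5:
            raise ValueError("rating must be between 1 and 5 stars")
        self.ratings[user].append(stars)

    def score(self, user):
        """Mean rating, or None for users with no history."""
        history = self.ratings[user]
        return sum(history) / len(history) if history else None

    def is_trustworthy(self, user, threshold=4.0, min_ratings=3):
        """A platform might only surface users above a score threshold."""
        history = self.ratings[user]
        return len(history) >= min_ratings and self.score(user) >= threshold

platform = ReputationSystem()
for stars in (5, 4, 5):
    platform.rate("driver_42", stars)
print(platform.score("driver_42"))           # 4.666666666666667
print(platform.is_trustworthy("driver_42"))  # True
```

The substitute for formal contract enforcement is the aggregation itself: a visible score built from past conduct lowers the cost of trusting a stranger, which is what makes small transactions between individuals viable at scale.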

What’s arguably new about platforms, though, is that some of the most popular ones are not mere complements, but almost complete substitutes for state-provided market infrastructures. Uber provides a complete substitute for government-licensed taxi infrastructures, addressing everything from quality and discovery to trust and payment. Airbnb provides a similarly sweeping solution to short-term accommodation rental. Both platforms have been hugely successful; in San Francisco, Uber has far surpassed the city’s official taxi market in size. The sellers on these platforms are not just consumers wanting to make better use of their resources, but also firms and professionals switching over from the state infrastructure. It is as if people and companies were abandoning their national institutions and emigrating en masse to Platform Nation.

From the natural selection perspective, this move from state institutions to platforms seems easy to understand. State institutions are designed by committee and carry all kinds of historical baggage, while platforms are designed from the ground up to address their users’ needs. Government institutions are geographically fragmented, while platforms offer a seamless experience from one city, country, and language area to the next. Government offices have opening hours and queues, while platforms make use of the latest technologies to provide services around the clock (the ‘on-demand economy’). Given the choice, people switch to the most efficient institutions, and society becomes more efficient as a result. The policy implications of the theory are that government shouldn’t try to stop people from using Uber and Airbnb, and that it shouldn’t try to impose its evidently less efficient norms on the platforms. Let competing platforms innovate new regulatory regimes, and let people vote with their feet; let there be a market for markets.

The natural selection theory of institutional change provides a compellingly simple way to explain the rise of platforms. However, it has difficulty in explaining some important facts, like why economic institutions have historically developed differently in different places around the world, and why some people now protest vehemently against supposedly better institutions. Indeed, over the years since the theory was first introduced, social scientists have discovered significant problems in it. Economic sociologists like Neil Fligstein have noted that not everyone is as free to choose the institutions that they use. Economic historian Sheilagh Ogilvie has pointed out that even institutions that are efficient for those who participate in them can still sometimes be inefficient for society as a whole. These points suggest a different theory of institutional change, which I will apply to online platforms in my next post.


Vili Lehdonvirta is a Research Fellow and DPhil Programme Director at the Oxford Internet Institute, and an editor of the Policy & Internet journal. He is an economic sociologist who studies the social and economic dimensions of new information technologies around the world, with particular expertise in digital markets and crowdsourcing.

How big data is breathing new life into the smart cities concept https://ensr.oii.ox.ac.uk/how-big-data-is-breathing-new-life-into-the-smart-cities-concept/ Thu, 23 Jul 2015 09:57:10 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3297 “Big data” is a growing area of interest for public policy makers: for example, it was highlighted in UK Chancellor George Osborne’s recent budget speech as a major means of improving efficiency in public service delivery. While big data can apply to government at every level, the majority of innovation is currently being driven by local government, especially cities, who perhaps have greater flexibility and room to experiment and who are constantly on a drive to improve service delivery without increasing budgets.

Work on big data for cities is increasingly incorporated under the rubric of “smart cities”. The smart city is an old(ish) idea: give urban policymakers real time information on a whole variety of indicators about their city (from traffic and pollution to park usage and waste bin collection) and they will be able to improve decision making and optimise service delivery. But the initial vision, which mostly centred around adding sensors and RFID tags to objects around the city so that they would be able to communicate, has thus far remained unrealised (big up-front investment needs and the requirements of IPv6 are perhaps the most obvious reasons for this).

The rise of big data – large, heterogeneous datasets generated by the increasing digitisation of social life – has however breathed new life into the smart cities concept. If all the cars have GPS devices, all the people have mobile phones, and all opinions are expressed on social media, then do we really need the city to be smart at all? Instead, policymakers can simply extract what they need from a sea of data which is already around them. And indeed, data from mobile phone operators has already been used for traffic optimisation, Oyster card data has been used to plan London Underground service interruptions, sewage data has been used to estimate population levels … the examples go on.

However, at the moment these examples remain largely anecdotal, driven forward by a few cities rather than adopted worldwide. The big data driven smart city faces considerable challenges if it is to become a default means of policymaking rather than a conversation piece. Getting access to the right data; correcting for biases and inaccuracies (not everyone has a GPS, phone, or expresses themselves on social media); and communicating it all to executives remain key concerns. Furthermore, especially in a context of tight budgets, most local governments cannot afford to experiment with new techniques which may not pay off instantly.

This is the context of two current OII projects in the smart cities field: UrbanData2Decide (2014-2016) and NEXUS (2015-2017). UrbanData2Decide joins together a consortium of European universities, each working with a local city partner, to explore how local government problems can be resolved with urban generated data. In Oxford, we are looking at how open mapping data can be used to estimate alcohol availability; how website analytics can be used to estimate service disruption; and how internal administrative data and social media data can be used to estimate population levels. The best concepts will be built into an application which allows decision makers to access them in real time.

NEXUS builds on this work. A collaborative partnership with BT, it will look at how social media data and some internal BT data can be used to estimate people movement and traffic patterns around the city, joining these data into network visualisations which are then displayed to policymakers in a data visualisation application. Both projects fill an important gap by allowing city officials to experiment with data driven solutions, providing proofs of concept and showing what works and what doesn’t. Increasing academic-government partnerships in this way has real potential to drive forward the field and turn the smart city vision into a reality.


OII Research Fellow Jonathan Bright is a political scientist specialising in computational and ‘big data’ approaches to the social sciences. His major interest concerns studying how people get information about the political process, and how this is changing in the internet era.

Digital Disconnect: Parties, Pollsters and Political Analysis in #GE2015 https://ensr.oii.ox.ac.uk/digital-disconnect-parties-pollsters-and-political-analysis-in-ge2015/ Mon, 11 May 2015 15:16:16 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3268
The Oxford Internet Institute undertook some live analysis of social media data over the night of the 2015 UK General Election. See more photos from the OII’s election night party, or read about the data hack

Counts of public Facebook posts mentioning any of the party leaders’ surnames. Data generated by social media can be used to understand political behaviour and institutions on an ongoing basis.

‘Congratulations to my friend @Messina2012 on his role in the resounding Conservative victory in Britain’ tweeted David Axelrod, campaign advisor to Miliband, to his former colleague Jim Messina, Cameron’s strategy adviser, on May 8th. The former was Obama’s communications director and the latter campaign manager of Obama’s 2012 campaign. Along with other consultants and advisors and large-scale data management platforms from Obama’s hugely successful digital campaigns, the Conservatives and Labour used an arsenal of social media and digital tools to interact with voters throughout, as did all the parties competing for seats in the 2015 election.

The parties ran very different kinds of digital campaigns. The Conservatives used advanced data science techniques borrowed from the US campaigns to understand how their policy announcements were being received and to target groups of individuals. They spent ten times as much as Labour on Facebook, using ads targeted at Facebook users according to their activities on the platform, geo-location and demographics. This was a top down strategy that involved working out what was happening on social media and responding with targeted advertising, particularly for marginal seats. It was supplemented by the mainstream media, such as the Telegraph, which contacted its database of readers and subscribers to services such as Telegraph Money, urging them to vote Conservative. As Andrew Cooper tweeted after the election, ‘Big data, micro-targeting and social media campaigns just thrashed “5 million conversations” and “community organizing”’.

He has a point. Labour took a different approach to social media. Widely acknowledged to have the most boots on the real ground, knocking on doors, they took a similar ‘ground war’ approach to social media in local campaigns. Our own analysis at the Oxford Internet Institute shows that of the 450K tweets sent by candidates of the six largest parties in the month leading up to the general election, Labour party candidates sent over 120,000 while the Conservatives sent only 80,000, no more than the Greens and not much more than UKIP. But Labour’s greater volume of tweets was no more productive in terms of impact, measured by the mentions they generated (and, indeed, by the final result).
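The volume-versus-impact comparison amounts to computing mentions per tweet rather than raw tweet counts. A minimal sketch, in which the party names are real but every per-tweet figure is invented for illustration (not the study’s data):

```python
from collections import Counter

# Hypothetical per-tweet records: (party of the candidate who tweeted,
# number of mentions that tweet generated). Figures are illustrative only.
tweets = [
    ("Labour", 2), ("Labour", 0), ("Labour", 1),
    ("Conservative", 3), ("Conservative", 4),
    ("UKIP", 1), ("Green", 0),
]

tweets_sent = Counter(party for party, _ in tweets)
mentions = Counter()
for party, n_mentions in tweets:
    mentions[party] += n_mentions

# Volume alone is a poor measure of impact: compare mentions per tweet.
productivity = {p: mentions[p] / tweets_sent[p] for p in tweets_sent}
print(productivity["Labour"])        # 1.0
print(productivity["Conservative"])  # 3.5
```

In this toy example the party that tweeted most has the lowest return per tweet, which is the pattern the paragraph describes.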

Both parties’ campaigns were tightly controlled. Ostensibly, Labour generated far more bottom-up activity from supporters using social media, through memes like #votecameronout, #milibrand (responding to Miliband’s interview with Russell Brand), and what Miliband himself termed the most unlikely cult of the 21st century in his resignation speech, #milifandom, none of which came directly from Central Office. These produced peaks of activity on Twitter that at some points exceeded even discussion of the election itself on the semi-official #GE2015 used by the parties, as the figure below shows. But the party remained aloof from these conversations, fearful of mainstream media mockery.

The Brand interview was agreed to out of desperation and can have made little difference to the vote (partly because Brand endorsed Miliband only after the deadline for voter registration: young voters suddenly overcome by an enthusiasm for participatory democracy after Brand’s public volte-face on the utility of voting will have remained disenfranchised). But engaging with the swathes of young people who spend increasing amounts of their time on social media is a strategy for engagement that all parties ought to consider. YouTubers like PewDiePie have tens of millions of subscribers and billions of video views – their videos may seem unbelievably silly to many, but it is here that a good chunk of the next generation of voters are to be found.

Use of emergent hashtags on Twitter during the 2015 General Election. Volumes are estimates based on a 10% sample with the exception of #ge2015, which reflects the exact value. All data from Datasift.
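The estimation method noted in the caption, scaling counts observed in a 10% sample up to full volumes, amounts to dividing by the sampling rate (with the fully collected #ge2015 count left as-is). A minimal sketch with invented counts, not the study’s data:

```python
# Scaling hashtag counts observed in a 10% random sample of tweets to
# estimated full volumes (counts below are illustrative only).
SAMPLE_RATE = 0.10

sample_counts = {"#milifandom": 4_200, "#milibrand": 1_800}
exact_counts = {"#ge2015": 95_000}  # collected in full, no scaling needed

estimated = {tag: round(n / SAMPLE_RATE) for tag, n in sample_counts.items()}
estimated.update(exact_counts)
print(estimated["#milifandom"])  # 42000
```

A uniform random sample makes this unbiased on average, though estimates for low-volume hashtags carry proportionally more sampling noise.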

Only one of the leaders had a presence on social media that managed anything like the personal touch and universal reach that Obama achieved in 2008 and 2012 based on sustained engagement with social media – Nicola Sturgeon. The SNP’s use of social media, developed in last September’s referendum on Scottish independence, had spawned a whole army of digital activists. All SNP candidates started the campaign with a Twitter account. When we look at the 650 local campaigns waged across the country, by far the most productive in the sense of generating mentions was the SNP’s: 100 tweets from SNP local candidates generated ten times as many mentions (1,000) as 100 tweets from (for example) the Liberal Democrats.

Scottish Labour’s failure to engage with Scottish people in this kind of way illustrates how difficult it is to suddenly develop relationships on social media – followers on all platforms are built up over years, not in the short space of a campaign. In strong contrast, advertising on these platforms as the Conservatives did is instantaneous, and based on the data science understanding (through advertising algorithms) of the platform itself. It doesn’t require huge databases of supporters – it doesn’t build up relationships between the party and supporters – indeed, they may remain anonymous to the party. It’s quick, dirty and effective.

The pollsters’ terrible night

So neither of the two largest parties really did anything with social media, or the huge databases of interactions that their platforms will have generated, to build long-running engagement with the electorate. The campaigns were disconnected from their supporters, from their grass roots.

But the differing use of social media by the parties could offer a clue as to why the opinion polls throughout the campaign got it so wrong, underestimating the Conservative lead by an average of five per cent. The social media data that may be gathered from this or any campaign is a valuable source of information about what the parties are doing, how they are being received, and what people are thinking or talking about in this important space – where so many people spend so much of their time. Of course, it is difficult to read from the outside; Andrew Cooper labeled the Conservatives’ campaign of big data to identify undecided voters, and micro-targeting on social media, as ‘silent and invisible’ and it seems to have been so to the polls.

Many voters were undecided until the last minute, or decided not to vote, which is impossible to predict with polls (bar the exit poll) – but possibly observable on social media, such as the spikes in attention to UKIP on Wikipedia towards the end of the campaign, which may have signaled their impressive share of the vote. As Jim Messina put it to MSNBC News following up on his May 8th tweet that UK (and US) polling was ‘completely broken’ – ‘people communicate in different ways now’, arguing that the Miliband campaign had tried to go back to the 1970s.

Surveys, such as polls, give a (hopefully) representative picture of what people think they might do. Social media data provide an (unrepresentative) picture of what people really said or did. Long-running opinion surveys (such as the Ipsos MORI Issues Index) can monitor the hopes and fears of the electorate in between elections, but attention tends to focus on the huge barrage of opinion polls at election time – which are geared entirely at predicting the election result, and which do not contribute to more general understanding of voters. In contrast, social media are a good way to track rapid bursts in mobilization or support, which reflect immediately on social media platforms – and could also be developed to illustrate more long running trends, such as unpopular policies or failing services.

As opinion surveys face more and more challenges, there is surely good reason to supplement them with social media data, which reflect what people are really thinking on an ongoing basis – like a video rather than the irregular snapshots taken by polls. As leading pollster João Francisco Meira, director of Vox Populi in Brazil (which is doing innovative work in using social media data to understand public opinion), put it in conversation with one of the authors in April – ‘we have spent so long trying to hear what people are saying – now they are crying out to be heard, every day’. It is a question of pollsters working out how to listen.

Political big data

Analysts of political behaviour – academics as well as pollsters – need to pay attention to this data. At the OII we gathered large quantities of data from Facebook, Twitter, Wikipedia and YouTube in the lead-up to the election campaign, including mentions of all candidates (as did Demos’s Centre for the Analysis of Social Media). Using this data we will be able, for example, to work out the relationship between local social media campaigns and the parties’ share of the vote, as well as modeling the relationship between social media presence and turnout.
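Collecting candidate mentions from a stream of posts can be sketched as simple surname matching, as in the Facebook counts of leaders’ surnames shown above. Everything below (the candidate list, the messages, and the matching rule) is an illustrative assumption, not the actual collection pipeline:

```python
import re

# Hypothetical candidate list and message stream; in practice data would
# be collected from platform APIs across Facebook, Twitter, etc.
candidates = ["Chuka Umunna", "Nicola Sturgeon"]
messages = [
    "Great hustings from Chuka Umunna tonight",
    "Sturgeon dominated the debate #GE2015",
    "Nothing political here",
]

# Match on surname only, case-insensitively.
patterns = {name: re.compile(re.escape(name.split()[-1]), re.IGNORECASE)
            for name in candidates}

mentions = {name: sum(bool(p.search(m)) for m in messages)
            for name, p in patterns.items()}
print(mentions)  # {'Chuka Umunna': 1, 'Nicola Sturgeon': 1}
```

Surname matching is crude (common surnames collide with unrelated text), which is one reason such counts need the bias corrections discussed later in the post.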

We can already see that the story of the local campaigns varied enormously – while at the start of the campaign some candidates were probably requesting new passwords for their rusty Twitter accounts, some already had an ongoing relationship with their constituents (or potential constituents), which they could build on during the campaign. One of the candidates to take over the Labour party leadership, Chuka Umunna, joined Twitter in April 2009 and now has 100K followers, which will be useful in the forthcoming leadership contest.

Election results inject data into a research field that lacks ‘big data’. Data hungry political scientists will analyse these data in every way imaginable for the next five years. But data in between elections, for example relating to democratic or civic engagement or political mobilization, has traditionally been in woefully short supply in our discipline. Analysis of the social media campaigns in #GE2015 will start to provide a foundation to understand patterns and trends in voting behaviour, particularly when linked to other sources of data, such as the actual constituency-level voting results and even discredited polls — which may yet yield insight, even having failed to achieve their predictive aims. As the OII’s Jonathan Bright and Taha Yasseri have argued, we need ‘a theory-informed model to drive social media predictions, that is based on an understanding of how the data is generated and hence enables us to correct for certain biases’.

A political data science

Parties, pollsters and political analysts should all be thinking about these digital disconnects in #GE2015, rather than burying them with their hopes for this election. As I argued in a previous post, let’s use data generated by social media to understand political behaviour and institutions on an ongoing basis. Let’s find a way of incorporating social media analysis into polling models, for example by linking survey datasets to big data of this kind. The more such activity moves beyond the election campaign itself, the more useful social media data will be in tracking the underlying trends and patterns in political behavior.

And for the parties, these kinds of ways of understanding and interacting with voters need to be institutionalized in party structures, from top to bottom. On 8th May, the VP of a policy think-tank tweeted to both Axelrod and Messina ‘Gentlemen, welcome back to America. Let’s win the next one on this side of the pond’. The UK parties are on their own now. We must hope they use the time to build an ongoing dialogue with citizens and voters, learning from the success of the new online interest group barons, such as 38 Degrees and Avaaz, by treating all internet contacts as ‘members’ and interacting with them on a regular basis. Don’t wait until 2020!


Helen Margetts is the Director of the OII, and Professor of Society and the Internet. She is a political scientist specialising in digital era governance and politics, investigating political behaviour, digital government and government-citizen interactions in the age of the internet, social media and big data. She has published over a hundred books, articles and major research reports in this area, including Political Turbulence: How Social Media Shape Collective Action (with Peter John, Scott Hale and Taha Yasseri, 2015).

Scott A. Hale is a Data Scientist at the OII. He develops and applies techniques from computer science to research questions in the social sciences. He is particularly interested in the area of human-computer interaction and the spread of information between speakers of different languages online and the roles of bilingual Internet users. He is also interested in collective action and politics more generally.

]]>
Political polarization on social media: do birds of a feather flock together on Twitter? https://ensr.oii.ox.ac.uk/political-polarization-on-social-media-do-birds-of-a-feather-flock-together-on-twitter/ Tue, 05 May 2015 09:53:58 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3254 Twitter has exploded in recent years, now boasting half a billion registered users. Like blogs and the world’s largest social networking platform, Facebook, Twitter has actively been used for political discourse during the past few elections in the US, Canada, and elsewhere but it differs from them in a number of significant ways. Twitter’s connections tend to be less about strong social relationships (such as those between close friends or family members), and more about connecting with people for the purposes of commenting and information sharing. Twitter also provides a steady torrent of updates and resources from individuals, celebrities, media outlets, and any other organization seeking to inform the world as to its views and actions.

This may well make Twitter particularly well suited to political debate and activity. Yet important questions emerge in terms of the patterns of conduct and engagement. Chief among them: are users mainly seeking to reinforce their own viewpoints and link with likeminded persons, or is there a basis for widening and thoughtful exposure to a variety of perspectives that may improve the collective intelligence of the citizenry as a result?

Conflict and Polarization

Political polarization often occurs in a so-called ‘echo chamber’ environment, in which individuals are exposed to only information and communities that support their own viewpoints, while ignoring opposing perspectives and insights. In such isolating and self-reinforcing conditions, ideas can become more engrained and extreme due to lack of contact with contradictory views and the exchanges that could ensue as a result.

On the web, political polarization has been found among political blogs, for instance. American researchers have found that liberal and conservative bloggers in the US tend to link to other bloggers who share their political ideology. For Mark Kingwell, a prominent Canadian philosopher, the resulting dynamic is one that can be characterized by a decline in civility and a lessening ability for political compromise to take hold. He laments the emergence of a ‘shout doctrine’ that corrodes the civic and political culture, in the sense that divisions are accentuated and compromise becomes more elusive.

Such a dynamic is not the result of social media alone – but rather it reflects for some the impacts of the Internet generally and the specific manner by which social media can lend itself to broadcasting and sensationalism, rather than reasoned debate and exchange. Traditional media and journalistic organizations have thus become further pressured to act in kind, driven less by a patient and persistent presentation of all sides of an issue and more by near-instantaneous reporting online. In a manner akin to Kingwell’s view, one prominent television news journalist in the US, Ted Koppel, has lamented this new media environment as a danger to the republic.

Nonetheless, the research is far from conclusive as to whether the Internet increases political polarization. Some studies have found that among casual acquaintances (such as those that can typically be observed on Twitter), it is common to observe connections across ideological boundaries. In one such study, funded by the Pew Internet and American Life Project and the National Science Foundation, findings suggest that people who often visit websites that support their ideological orientation also visit websites that support divergent political views. As a result, greater sensitivity and empathy for alternative viewpoints could potentially ensue, improving the likelihood for political compromise – even on a modest scale that would otherwise not have been achievable without this heightened awareness and debate.

Early Evidence from Canada

The 2011 federal election in Canada was dubbed by some observers in the media as the country’s first ‘social media election’ – as platforms such as Facebook and Twitter became prominent sources of information for growing segments of the citizenry, and evermore strategic tools for political parties in terms of fundraising, messaging, and mobilizing voters. In examining Twitter traffic, our own intention was to ascertain the extent to which polarization or cross-pollination was occurring across the portion of the electorate making use of this micro-blogging platform.

We gathered nearly 6,000 tweets pertaining to the federal election made by just under 1,500 people during a three-day period in the week preceding election day (this time period was chosen because it was late enough in the campaign for people to have an informed opinion, but still early enough for them to be persuaded as to how they should vote). Once the tweets were retrieved, we used social network analysis and content analysis to analyze patterns of exchange and messaging content in depth.
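The core polarization measure in an analysis of this kind can be sketched in a few lines: label each account with a party affiliation, and count the share of tweet interactions that cross party lines. The labels and edge list below are invented for illustration (the study's real input was the coded set of election tweets), and the function is a simplified stand-in for the social network analysis described above, not the authors' actual code.

```python
# Hypothetical sketch: measure the share of interactions that cross party lines.
# Party labels and the toy edge list are invented for illustration.

def cross_party_share(edges, party):
    """Fraction of directed interactions linking supporters of different parties."""
    cross = sum(1 for sender, receiver in edges if party[sender] != party[receiver])
    return cross / len(edges)

# Toy data: four accounts, five tweet interactions (sender, receiver).
party = {"a": "NDP", "b": "NDP", "c": "LPC", "d": "CPC"}
edges = [("a", "b"), ("a", "c"), ("c", "a"), ("b", "d"), ("a", "b")]

share = cross_party_share(edges, party)  # 3 of 5 interactions cross party lines
```

In the real study the same idea operates at scale, with interactions additionally coded for tone (agreeable vs. hostile), which is what distinguishes the friendly NDP–LPC exchanges from the conflictual exchanges with the Conservatives.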

We found that overall people do tend to cluster around shared political views on Twitter. Supporters of each of the four major political parties identified in the study were more likely to tweet to other supporters of the same affiliation (this was particularly true of the ruling Conservatives, the most inwardly networked of the four major political parties). Nevertheless, in a significant number of cases (36% of all interactions) we also observed a cross-ideological discourse, especially among supporters of the two most prominent left-of-centre parties, the New Democratic Party (NDP) and the Liberal Party of Canada (LPC). The cross-ideological interactions among supporters of left-leaning parties tended to be agreeable in nature, but often at the expense of the party in power, the Conservative Party of Canada (CPC). Members from the NDP and Liberal formations were also more likely to share general information and updates about the election as well as debate various issues around their party platforms with each other.

By contrast, interactions between parties that are ideologically distant seemed to denote a tone of conflict: nearly 40% of tweets between left-leaning parties and the Conservatives tended to be hostile. Such negative interactions between supporters of different parties have been shown to reduce enthusiasm about political campaigns in general, potentially widening the cleavage between highly engaged partisans and less affiliated citizens who may view such forms of aggressive and divisive politics as distasteful.

For Twitter sceptics, one concern is that the short length of Twitter messages does not allow for meaningful and in-depth discussions around complex political issues. While it is certainly true that expression within 140 characters is limited, one third of tweets between supporters of different parties included links to external sources such as news stories, blog posts, or YouTube videos. Such indirect sourcing can thereby constitute a means of expanding dialogue and debate.

Accordingly, although it is common to view Twitter as largely a platform for self-expression via short tweets, there may be a wider collective dimension for both individual users and the population at large, as a steady stream of individual viewpoints and referenced sources drives learning and additional exchange. If these exchanges happen across partisan boundaries, they can contribute to greater collective awareness and learning for the citizenry at large.

As the next federal election approaches in 2015, with younger voters gravitating online (especially via mobile devices) and traditional polling increasingly under siege as less reliable than in the past, all major parties will undoubtedly devote more energy and resources to social media strategies including, perhaps most prominently, effective usage of Twitter.

Partisan Politics versus Politics 2.0

In a still-nascent era likely to be shaped by the rise of social media and a more participative Internet on the one hand, and the explosion of ‘big data’ on the other hand, the prominence of Twitter in shaping political discourse seems destined to heighten. Our preliminary analysis suggests an important cleavage between traditional political processes and parties – and wider dynamics of political learning and exchange across a changing society that is more fluid in its political values and affiliations.

Within existing democratic structures, Twitter is viewed by political parties as primarily a platform for messaging and branding, thereby mobilizing members with shared viewpoints and attacking opposing interests. Our own analysis of Canadian electoral tweets both amongst partisans and across party lines underscores this point. The nexus between partisan operatives and new media formations will prove to be an increasingly strategic dimension to campaigning going forward.

More broadly, however, Twitter is a source of information, expression, and mobilization across a myriad of actors and formations that may not align well with traditional partisan organizations and identities. Social movements arising during the Arab Spring, amongst Mexican youth during that country’s most recent federal elections, and most recently in Ukraine are cases in point. Across these wider societal dimensions (especially consequential in newly emerging democracies), the tremendous potential of platforms such as Twitter may well lie in facilitating new and much more open forms of democratic engagement that challenge our traditional constructs.

In sum, we are witnessing the inception of new forms of what can be dubbed ‘Politics 2.0’, a movement of both opportunities and challenges likely to play out differently across democracies at various stages of socio-economic, political, and digital development. Whether Twitter and other likeminded social media platforms enable inclusive and expansionary learning, or instead engrain divisive, polarized exchange, has yet to be determined. What is clear, however, is that on Twitter, in some instances, birds of a feather do flock together, as they do on political blogs. But in other instances, Twitter can play an important role in fostering cross-party communication in online political arenas.

Read the full article: Gruzd, A., and Roy, J. (2014) Investigating Political Polarization on Twitter: A Canadian Perspective. Policy and Internet 6 (1) 28-48.

Also read: Gruzd, A. and Tsyganova, K. Information wars and online activism during the 2013/2014 crisis in Ukraine: Examining the social structures of Pro- and Anti-Maidan groups. Policy and Internet. Early View April 2015: DOI: 10.1002/poi3.91


Anatoliy Gruzd is Associate Professor in the Ted Rogers School of Management and Director of the Social Media Lab at Ryerson University, Canada. Jeffrey Roy is Professor in the School of Public Administration at Dalhousie University’s Faculty of Management. His most recent book was published in 2013 by Springer: From Machinery to Mobility: Government and Democracy in a Participative Age.

]]>
Wikipedia sockpuppetry: linking accounts to real people is pure speculation https://ensr.oii.ox.ac.uk/wikipedia-sockpuppetry-linking-accounts-to-real-people-is-pure-speculation/ Thu, 23 Apr 2015 09:50:29 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3352 Conservative chairman Grant Shapps is accused of sockpuppetry on Wikipedia, but this former Wikipedia admin isn’t so sure the evidence stands up. Reposted from The Conversation.

Wikipedia has become one of the most highly linked-to websites on the internet, with countless others using it as a reference. But it can be edited by anyone, and this has led to occasions where errors have been widely repeated – or where facts have been distorted to fit an agenda.

The chairman of the UK’s Conservative Party, Grant Shapps, has been accused of editing Wikipedia pages related to him and his rivals within the party. The Guardian newspaper claims Wikipedia administrators blocked an account on suspicions that it was being used by Shapps, or someone in his employ.

Wikipedia accounts are anonymous, so what is the support for these claims? Is it a case of fair cop or, as Shapps says in his defence, a smear campaign in the run-up to the election?

Edits examined

This isn’t the first time The Guardian has directed similar accusations against Shapps around edits to Wikipedia, with similar claims emerging in September 2012. The investigation examines a list of edits by three Wikipedia user accounts: Hackneymarsh, Historyset, and Contribsx, and several other edits from users without accounts, recorded only as their IP addresses – which the article claimed to be “linked” to Shapps.

Are you pointing at me? Grant Shapps. Hannah McKay/PA

The Hackneymarsh account made 12 edits in a short period in May 2010. The Historyset account made five edits in a similar period. All the edits recorded by IP addresses date to between 2008 and 2010. Most recently, the Contribsx account has been active from August 2013 to April 2015.

First of all, it is technically impossible to conclusively link any of those accounts or IP addresses to a real person. Of course you can speculate – and in this case it’s clear that these accounts seem to demonstrate great sympathy with Shapps based on the edits they’ve made. But no further information about the three usernames can be made public by the Wikimedia Foundation, as per its privacy policies.

However, the case is different for the IP addresses. Using GeoIP or similar tools it’s possible to look up the IP addresses and locate them with a fair degree of accuracy to some region of the world. In this case London, Cleethorpes, and in the region of Manchester.
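As a rough illustration of how such a lookup works: GeoIP tools map an IP address to the registered address block it falls within, and each block carries an approximate location. The address ranges below are invented placeholders, not the real blocks behind The Guardian's claims; production tools such as MaxMind's GeoIP2 databases work from large, regularly updated range tables rather than a hand-written dictionary.

```python
# Minimal sketch of a GeoIP-style lookup. The networks and locations below are
# invented for illustration only; real lookups use a commercial or free GeoIP
# database covering the whole address space.
import ipaddress

GEO_TABLE = {
    ipaddress.ip_network("81.100.0.0/16"): "London",
    ipaddress.ip_network("92.40.0.0/16"): "Cleethorpes",
    ipaddress.ip_network("146.90.0.0/16"): "Manchester",
}

def locate(ip):
    """Return the location registered for the block containing this IP, if any."""
    addr = ipaddress.ip_address(ip)
    for network, place in GEO_TABLE.items():
        if addr in network:
            return place
    return "unknown"
```

Note that even an accurate lookup only places an address in a region; it says nothing about which person was behind the keyboard, which is why this evidence remains circumstantial.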

So, based on the publicly available information from ordinary channels, there is not much technical evidence to support The Guardian’s claims.

Putting a sock in sockpuppets

Even if it was possible to demonstrate that Shapps was editing his own Wikipedia page to make himself look good, this sort of self-promotion, while frowned upon, is not sufficient to result in a ban. A Wikipedia admin blocked Contribsx for a different reason regarded far more seriously: sockpuppetry.

The use of multiple Wikipedia user accounts for an improper purpose is called sockpuppetry. Improper purposes include attempts to deceive or mislead other editors, disrupt discussions, distort consensus, avoid sanctions, evade blocks or otherwise violate community standards and policies … Wikipedia editors are generally expected to edit using only one (preferably registered) account.

Certain Wikipedia admins called “check users” have limited access to the logs of IP addresses and to details of users’ computers, operating systems and browsers recorded by Wikipedia’s web servers. Check users use this confidential information together with other evidence of user behaviour – such as similarity of editing interests – to identify instances of sockpuppetry, and whether the intent has been to mislead.
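One behavioural signal of the kind described above, similarity of editing interests, can be sketched as the overlap between the sets of pages two accounts have edited (a Jaccard similarity). This is purely illustrative and not Wikipedia's actual CheckUser tooling; the page titles in the example are invented.

```python
# Hedged sketch of an editing-interest overlap score (Jaccard similarity).
# Not Wikipedia's real tooling; page titles below are invented examples.

def edit_overlap(pages_a, pages_b):
    """Jaccard similarity of two accounts' edited-page sets (0 = disjoint, 1 = identical)."""
    a, b = set(pages_a), set(pages_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

score = edit_overlap(
    ["Grant Shapps", "Article B", "Article C"],
    ["Grant Shapps", "Article B", "Article D"],
)  # 2 shared pages out of 4 distinct pages -> 0.5
```

A high score is suggestive but never conclusive: two unrelated people can share an interest in the same narrow topic, which is precisely why behavioural evidence alone attracts the criticism described below.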

As a former check user, I can say for the record it’s difficult to establish with complete accuracy whether two or more accounts are used by the same person. But on occasion there is enough to be drawn from the accounts’ behaviour to warrant accusations of sockpuppetry and so enforce a ban. But this only occurs if the sockpuppet accounts have violated some other rule – sockpuppetry itself is not prohibited, only its use for nefarious ends.

Limited technical evidence

In this case, the check user has speculated that Contribsx is related to the other users Hackneymarsh and Historyset – but these users have been inactive for five years, and so by definition cannot have violated any other Wikipedia rule to warrant a ban. More importantly, the technical information available to check users only goes back a couple of months in most cases, so I can’t see the basis for technical evidence that would support the claim these accounts are connected.

In fact the banning administrator admits that the decision is mainly based on behavioural similarity and not technical evidence available to them as a check user. And this has raised criticisms and requests for further investigation from their fellow editors.

]]>
How do the mass media affect levels of trust in government? https://ensr.oii.ox.ac.uk/how-do-the-mass-media-affect-levels-of-trust-in-government/ Wed, 04 Mar 2015 16:33:45 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3157
The South Korean Government, as well as the Seoul Metropolitan Government have gone to great lengths to enhance their openness, using many different ICTs. Seoul at night by jonasginter.
Ed: You examine the influence of citizens’ use of online mass media on levels of trust in government. In brief, what did you find?

Greg: As I explain in the article, there is a common belief that mass media outlets, and especially online mass media outlets, often portray government in a negative light in an effort to pique the interest of readers. This tendency of media outlets to engage in ‘bureaucracy bashing’ is thought, in turn, to detract from the public’s support for their government. The basic assumption underpinning this relationship is that the more negative information on government there is, the more negative public opinion. However, in my analyses, I found evidence of a positive indirect relationship between citizens’ use of online mass media outlets and their levels of trust in government. Interestingly, however, the more frequently citizens used online mass media outlets for information about their government, the weaker this association became. These findings challenge conventional wisdom that suggests greater exposure to mass media outlets will result in more negative perceptions of the public sector.

Ed: So you find that that the particular positive or negative spin of the actual message may not be as important as the individuals’ sense that they are aware of the activities of the public sector. That’s presumably good news — both for government, and for efforts to ‘open it up’?

Greg: Yes, I think it can be. However, a few important caveats apply. First, the positive relationship between online mass media use and perceptions of government tapers off as respondents made more frequent use of online mass media outlets. In the study, I interpreted this to mean that exposure to mass media had less of an influence upon those who were more aware of public affairs, and more of an influence upon those who were less aware of public affairs. Therefore, there is something of a diminishing returns aspect to this relationship. Second, this study was not able to account for the valence (ie how positive or negative the information is) of information respondents were exposed to when using online mass media. While some attempts were made to control for valance by adding different control variables, further research drawing upon experimental research designs would be useful in substantiating the relationship between the valence of information disseminated by mass media outlets and citizens’ perceptions of their government.

Ed: Do you think governments are aware of this relationship — ie that an indirect effect of being more open and present in the media, might be increased citizen trust — and that they are responding accordingly?

Greg: I think that there is a general idea that more communication is better than less communication. However, at the same time there is a lot of evidence to suggest that some of the more complex aspects of the relationship between openness and trust in government go unaccounted for in current attempts by public sector organizations to become more open and transparent. As a result, this tool that public organizations have at their disposal is not being used as effectively as it could be, and in some instances is being used in ways that are counterproductive – that is, actually decreasing citizen trust in government. Therefore, in order for governments to translate greater openness into greater trust in government, more refined applications are necessary.

Ed: I know there are various initiatives in the UK — open government data / FoIs / departmental social media channels etc. — aimed at a general opening up of government processes. How open is the Korean government? Is a greater openness something they might adopt (or are adopting?) as part of a general aim to have a more informed and involved — and therefore hopefully more trusting — citizenry?

Greg: The South Korean Government, as well as the Seoul Metropolitan Government have gone to great lengths to enhance their openness. Their strategy has made use of different ICTs, such as e-government websites, social media accounts, non-emergency call centers, and smart phone apps. As a result, many now say that attempts by the Korean Government to become more open are more advanced than in many other areas of the developed world. However, the persistent issue in South Korea, as elsewhere, is whether these attempts are having the intended impact. A lot of empirical research has found, for example, that various attempts at becoming more open by many governments around the world have fallen short of creating a more informed and involved citizenry.

Ed: Finally — is there much empirical work or data in this area?

Greg: While there is a lot of excellent empirical research from the field of political science that has examined how mass media use relates to citizens’ perceptions of politicians, political preferences, or their levels of political knowledge, this topic has received almost no attention at all in public management/administration. This lack of discussion is surprising, given mass media has long served as a key means of enhancing the transparency and accountability of public organizations.

Read the full article: Porumbescu, G. (2013) Assessing the Link Between Online Mass Media and Trust in Government: Evidence From Seoul, South Korea. Policy & Internet 5 (4) 418-443.


Greg Porumbescu was talking to blog editor David Sutcliffe.

Gregory Porumbescu is an Assistant Professor at the Northern Illinois University Department of Public Administration. His research interests primarily relate to public sector applications of information and communications technology, transparency and accountability, and citizens’ perceptions of public service provision.

]]>
Don’t knock clickivism: it represents the political participation aspirations of the modern citizen https://ensr.oii.ox.ac.uk/dont-knock-clickivism-it-represents-the-political-participation-aspirations-of-the-modern-citizen/ Sun, 01 Mar 2015 10:44:49 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3140
Following a furious public backlash in 2011, the UK government abandoned plans to sell off 258,000 hectares of state-owned woodland. The public forest campaign by 38 Degrees gathered over half a million signatures.
How do we define political participation? What does it mean to say an action is ‘political’? Is an action only ‘political’ if it takes place in the mainstream political arena, involving government, politicians or voting? Or is political participation something that we find in the most unassuming of places: in sport, at home and at work? This question of ‘what is politics’ is one that political scientists seem to have a lot of trouble dealing with, and with good reason. If we use an arena definition of politics, then we marginalise the politics of the everyday: the forms of participation and expression that develop between the cracks, through need and ingenuity. However, if we broaden our approach so as to adopt what is usually termed a process definition, then everything can become political. The problem here is that saying that everything is political is akin to saying nothing is political, and that doesn’t help anyone.

Over the years, this debate has plodded steadily along, with scholars on both ends of the spectrum fighting furiously to establish a working understanding. Then the Internet came along and drew up new battle lines. The Internet is at its best when it provides a home for the disenfranchised, an environment where like-minded individuals can wipe free the dust of societal disassociation and connect and share content. However, the Internet brought with it a shift in power, particularly in how individuals conceptualised society and their role within it. The Internet, in addition to this role, provided a plethora of new and customisable modes of political participation. From the outset, a lot of these new forms of engagement were extensions of existing forms, broadening the everyday citizen’s participatory repertoire. There was a move from voting to e-voting, petitions to e-petitions, face-to-face communities to online communities; the Internet took what was already there and streamlined it, removing those pesky elements of time, space and identity.

Yet, as the Internet continues to develop, and we move into the ultra-heightened communicative landscape of the social web, new and unique forms of political participation take root, drawing upon those customisable environments and organic cyber migrations. The most prominent of these is clicktivism, sometimes also, unfairly, referred to as slacktivism. Clicktivism takes the fundamental features of browsing culture and turns them into a means of political expression. Quite simply, clicktivism refers to the simplification of online participatory processes: one-click online petitions, content sharing, social buttons (e.g. Facebook’s ‘Like’ button) etc.

For the most part, clicktivism is seen in derogatory terms, with the idea that the streamlining of online processes has created a societal disposition towards feel-good, ‘easy’ activism. From this perspective, clicktivism is a lazy or overly-convenient alternative to the effort and legitimacy of traditional engagement. Here, individuals engaging in clicktivism may derive some sense of moral gratification from their actions, but clicktivism’s capacity to incite genuine political change is severely limited. Some would go so far as to say that clicktivism has a negative impact on democratic systems, as it undermines an individual’s desire and need to participate in traditional forms of engagement; those established modes which mainstream political scholars understand as the backbone of a healthy, functioning democracy.

This idea that clicktivism isn’t ‘legitimate’ activism is fuelled by a general lack of understanding about what clicktivism actually involves. As a recent development in observed political action, clicktivism has received its fair share of attention in the political participation literature. However, for the most part, this literature has done a poor job of actually defining clicktivism. As such, clicktivism is not so much a contested notion, as an ill-defined one. The extant work continues to describe clicktivism in broad terms, failing to effectively establish what it does, and does not, involve. Indeed, as highlighted, the mainstream political participation literature saw clicktivism not as a specific form of online action, but rather as a limited and unimportant mode of online engagement.

However, to disregard emerging forms of engagement such as clicktivism because they are at odds with long-held notions of what constitutes meaningful ‘political’ engagement is a misguided and dangerous road to travel. Here, it is important that we acknowledge that a political act, even if it requires limited effort, has relevance for the individual, and, as such, carries worth. And this is where we see clicktivism challenging these traditional notions of political participation. To date, we have looked at clicktivism through an outdated lens; an approach rooted in traditional notions of democracy. However, the Internet has fundamentally changed how people understand politics, and, consequently, it is forcing us to broaden our understanding of the ‘political’, and of what constitutes political participation.

The Internet, in no small part, has created a more reflexive political citizen, one who has been given the tools to express dissatisfaction throughout all facets of their life, not just those tied to the political arena. Collective action underpinned by a developed ideology has been replaced by project-oriented identities and connective action. Here, an individual’s desire to engage does not derive from the collective action frames of political parties, but rather from the individual’s self-evaluation of a project’s worth and their personal action frames.

Simply put, people now pick and choose what projects they participate in and feel little generalized commitment to continued involvement. And it is clicktivism which is leading the vanguard here. Clicktivism, as an impulsive, non-committed online political gesture, which can be easily replicated and that does not require any specialized knowledge, is shaped by, and reinforces, this change. It affords the project-oriented individual an efficient means of political participation, without the hassles involved with traditional engagement.

This is not to say, however, that clicktivism serves the same functions as traditional forms. Indeed, much more work is needed to understand the impact and effect that clicktivist techniques can have on social movements and political issues. However, and this is the most important point, clicktivism is forcing us to reconsider what we define as political participation. It does not overtly engage with the political arena, but provides avenues through which to do so. It does not incite genuine political change, but it makes people feel as if they are contributing. It does not politicize issues, but it fuels discursive practices. It may not function in the same way as traditional forms of engagement, but it represents the political participation aspirations of the modern citizen. Clicktivism has been bridging the dualism between the traditional and contemporary forms of political participation, and in its place establishing a participatory duality.

Clicktivism, and similar contemporary forms of engagement, are challenging how we understand political participation, and to ignore them because of what they don’t embody, rather than what they do, is to move forward with eyes closed.

Read the full article: Halupka, M. (2014) Clicktivism: A Systematic Heuristic. Policy and Internet 6 (2) 115-132.


Max Halupka is a PhD candidate at the ANZOG Institute for Governance, University of Canberra. His research interests include youth political participation, e-activism, online engagement, hacktivism, and fluid participatory structures.

]]>
Will China’s new national search engine, ChinaSo, fare better than “The Little Search Engine that Couldn’t”? https://ensr.oii.ox.ac.uk/will-chinas-new-national-search-engine-chinaso-fare-better-than-the-little-search-engine-that-couldnt/ Tue, 10 Feb 2015 10:55:45 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3084
State search engine ChinaSo launched in March 2014 following indifferent performance from the previous state-run search engine Jike. Its long-term impact on China’s search market and users remains unclear.

When Jike, the Chinese state-run search engine, launched in 2011, its efforts received a mixed response. The Chinese government pulled out all the stops to promote it, including placing Deng Yaping, one of China’s most successful athletes, at the helm. Jike strategically branded itself as friendly, high-tech, and patriotic to appeal to national pride, competition, and trust. It also signaled a serious attempt by a powerful authoritarian state to nationalize the Internet within its territory, and to extend its influence in the digital sphere. However, plagued by technological inferiority, management deficiencies, financial woes and user indifference, Jike failed in terms of user adoption, pointing to the limits of state influence in the marketplace.

Users and critics remain skeptical of state-run search engines. While some news outlets referred to Jike as “the little search engine that couldn’t,” Chinese propaganda was busy at work rebranding, recalibrating, and reimagining its efforts. The result? The search engine formerly known as Jike has now morphed into a new enterprise known as “ChinaSo”. This transformation is not new — Jike originally launched in 2010 under the name Goso, rebranding itself as Jike a year later. The March 2014 unveiling of ChinaSo was the result of the merging of the two state-run search engines Jike and Panguso.

Only time will tell if this new (ad)venture will prove more fruitful. However, several things are worthy of note here. First, despite repeated trials, the Chinese state has not given up on its efforts to expand its digital toolbox and weave a ‘China Wide Web’. Rather, state media have pooled their resources to make their collective, strategic bets. The merging of Jike and Panguso into ChinaSo was backed by several state media giants, including People’s Daily, Xinhua News Agency, and China Central Television. Branded explicitly as “China Search: Authoritative National Search,” ChinaSo reinforces a sense of national identity. How does it perform? ChinaSo now ranks 225th in China and 2,139th globally (Alexa.com, 8 February 2015), up from Jike’s ranking of 376th in China and 3,174th globally that we last recorded in May 2013. While ChinaSo’s rankings have improved over time, a low adoption rate continues to haunt the state search engine. Compared to China’s homegrown commercial search giant Baidu, which ranks first in China and fifth globally (Alexa.com, 8 February 2015), ChinaSo has a long way to go.

Second, in terms of design, ChinaSo has adopted a mixture of general and vertical search to increase its appeal to a wide range of potential users. Its general search, similar to Google’s and Baidu’s, allows users to query through a search box to receive results in a combination of text, image and video formats based on ChinaSo’s search engine that archives, ranks, and presents information to users. In addition, ChinaSo incorporates vertical search focusing on a wide range of categories such as transportation, investment, education and technology, health, food, tourism, shopping, real estate and cars, and sports and entertainment. Interestingly, ChinaSo also guides searchers by highlighting “top search topics today” as users place their cursor in the search box. Currently, various “anti-corruption” entries appear prominently which correspond to the central government’s high-profile anti-corruption campaigns. Given the opaqueness of search engine operation, it is unclear whether the “top searches” are ChinaSo’s editorial choices or search terms based on user queries. We suspect ChinaSo strategically prioritizes this list to direct user attention.

Third, besides improved functionality that enhances ChinaSo’s priming and agenda-setting abilities, it continues to practice (as did Jike) sophisticated information filtering and presentation. For instance, a search of “New York Times” returns not a single result directing users to the paper’s website — as it is banned in China. Instead, on the first page of results, ChinaSo directs users to several Chinese online encyclopedia entries for New York Times, stock information of NYT, and sanctioned news stories relating to the NYT that have appeared in such official media outlets as People’s Net, China Daily, and Global Times. All information appears in Chinese, which has acted as a natural barrier to the average Chinese user who seeks information outside China. Although Chinese language versions of foreign news organizations such as NYT Chinese, WSJ Chinese, and BBC Chinese exist, they are invariably blocked in China.

Last, ChinaSo’s long-term impact on China’s search market and users remains unclear. While many believe ChinaSo to be a “waste of taxpayer money” due to its persistent inability to carve out market share against its commercial competitors, others are willing to give it a shot, especially with regard to queries for official policies and statements, remarking that “[there] is nothing wrong with creating a state-run search engine service” and that ChinaSo’s results are better than those of its commercial counterparts. It seems that users either do not care or remain largely unaware of the surveillance capacities of search engines. Although recent scholarship (for instance here and here) has started to probe the Chinese notion and practices of privacy in social networking sites, no research has been conducted with regard to search-related privacy concerns in the Chinese context.

The idea of a state-sponsored search engine is not new, however. As early as 2005, a few European countries proposed a Euro-centric search engine “Project Quaero” to compete against Google and Yahoo! in what was perceived to be the “threat of Anglo-Saxon cultural imperialism.” In the post-Snowden world, not only are powerful authoritarian countries—China, Russia, Iran, and Turkey—interested in building their own national search engines, but democratic countries like Germany and Brazil have also condemned the U.S. government and vowed to create their own “national Internets.”

The changing international political landscape compels researchers, policy makers and the public to re-evaluate previous assumptions of internationalism and confront the reality of the role of the Internet as an extension of state power and national identity instead. In the near future, the “return of the state”, reflected in various trends to re-nationalize communication networks, will likely go hand in hand with social, economic and cultural changes that cross national and international borders. ChinaSo is part and parcel of the “geopolitical turn” in policy and Internet studies that should command more scholarly and public attention.

Read the full article: Jiang, M. & Okamoto, K. (2014) National identity, ideological apparatus, or panopticon? A case study of the Chinese national search engine Jike. Policy and Internet 6 (1) 89-107.

Min Jiang is an Associate Professor in the Department of Communication Studies at UNC Charlotte. Kristen Okamoto is a Ph.D. student in the School of Communication Studies at Ohio University.

]]>
Why does the Open Government Data agenda face such barriers? https://ensr.oii.ox.ac.uk/why-does-the-open-government-data-agenda-face-such-barriers/ Mon, 26 Jan 2015 11:03:19 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3068
Advocates hope that opening government data will increase government transparency, catalyse economic growth, and address social and environmental challenges. Image by the UK’s Open Data Institute.

Advocates of Open Government Data (OGD) – that is, data produced or commissioned by government or government-controlled entities that can be freely used, reused and redistributed by anyone – talk about the potential of such data to increase government transparency, catalyse economic growth, address social and environmental challenges and boost democratic participation. This heady mix of potential benefits has proved persuasive to the UK Government (and governments around the world). Over the past decade, since the emergence of the OGD agenda, the UK Government has invested extensively in making more of its data open. This investment has included £10 million to establish the Open Data Institute and a £7.5 million fund to help public bodies overcome technical barriers to releasing open data.

Yet the transformative impacts claimed by OGD advocates, in government as well as NGOs such as the Open Knowledge Foundation, still seem a rather distant possibility. Even the more modest goal of integrating the creation and use of OGD into the mainstream practices of government, businesses and citizens remains to be achieved. In my recent article Barriers to the Open Government Data Agenda: Taking a Multi-Level Perspective (Policy & Internet 6:3) I reflect upon the barriers preventing the OGD agenda from making a breakthrough into the mainstream. These reflections centre on the five key findings of a survey exploring where key stakeholders within the UK OGD community perceive barriers to the OGD agenda. The key messages from the UK OGD community are that:

1. Barriers to the OGD agenda are perceived to be widespread 

Unsurprisingly, given the relatively limited impact of OGD to date, my research shows that barriers to the OGD agenda are perceived to be widespread and numerous in the UK’s OGD community. What I find rather more surprising is the expectation, amongst policy makers, that these barriers ought to just melt away when exposed to the OGD agenda’s transparently obvious value and virtue. Given that a breakthrough of the OGD agenda will in fact require changes across the complex socio-technical structures of government and society, many teething problems should be expected, and considerable work will be required to overcome them.

2. Barriers on the demand side are of great concern

Members of the UK OGD community are particularly concerned about the wide range of demand-side barriers, including the low level of demand for OGD across civil society and the public and private sectors. These concerns are likely to have arisen as a legacy of the OGD community’s focus on the supply of OGD (such as public spending, prescription and geospatial data), which has often led the community to overlook the need to nurture initiatives that make use of OGD: for example innovators such as Carbon Culture who use OGD to address environmental challenges.

Adopting a strategic approach to supporting niches of OGD use could help overcome some of the demand-side barriers. For example, such an approach could foster the social learning required to overcome barriers relating to the practices and business models of data users. Whilst there are encouraging signs that the UK’s Open Data Institute (a UK Government-supported not-for-profit organisation seeking to catalyse the use of open data) is supporting OGD use in the private sector, there remains a significant opportunity to improve the support offered to potential OGD users across civil society. It is also important to recognise that increasing the support for OGD users is not guaranteed to result in increased demand. Rather the possibility remains that demand for OGD is limited for many other reasons – including the possibility that the majority of businesses, citizens and community organisations find OGD of very little value.

3. The structures of government continue to act as barriers

Members of the UK OGD community are also concerned that major barriers remain on the supply side, particularly in the form of the established structures and institutions of government. For example, barriers were perceived in the forms of the risk-averse cultures of government organisations and the ad hoc funding of OGD initiatives. Although resilient, these structures are dynamic, so proponents of OGD need to be aware of emerging ‘windows of opportunity’ as they open up. Such opportunities may take the form of tensions within the structures of government (e.g. where restrictions on data sharing between different parts of government present an opportunity for OGD to create efficiency savings); and external pressures on government (e.g. the pressure to transition to a low carbon economy could create opportunities for OGD initiatives and demand for OGD).

4. There are major challenges to mobilising resources to support the open government data agenda

The research results also showed that members of the UK’s OGD community see mobilising the resources required to support the OGD agenda as a major challenge. Concerns around securing funding are predictably prominent, but concerns also extend to developing the skills and knowledge required to use OGD across civil society, government and the private sector. These challenges are likely to persist whilst the post-financial crisis narrative of public deficit reduction through public spending reduction dominates the political agenda. This leaves OGD advocates to consider the politics and ethics of calling for investment in OGD initiatives, whilst spending reductions elsewhere are leading to the degradation of public services provision to vulnerable and socially excluded individuals.

5. The nature of some barriers remains contentious within the OGD community

OGD is often presented by advocates as a neutral, apolitical public good. However, my research highlights the important role that values and politics play in how individuals within the OGD community perceive the agenda and the barriers it faces. For example, there are considerable differences in opinion, within the OGD community, on whether or not a private sector focus on exploiting financial value from OGD is crowding out the creation of social and environmental value. So benefits may arise from advocates being more open about the values and politics that underpin and shape the agenda. At the same time, OGD-related policy and practice could create further opportunities for social learning that brings together the diverse values and perspectives that coexist within the OGD community.

Having considered the wide range of barriers to the breakthrough of the OGD agenda, and some approaches to overcoming these barriers, these discussions need setting in a broader political context. If the agenda does indeed make a breakthrough into the mainstream, it remains unclear what form this will take. Will the OGD agenda make a breakthrough by conforming with, and reinforcing, prevailing neoliberal interests? Or will the agenda stretch the fabric of government, the economy and society, and transform the relationship between citizens and the state?

Read the full article: Martin, C. (2014) Barriers to the Open Government Data Agenda: Taking a Multi-Level Perspective. Policy & Internet 6 (3) 217-240.

]]>
Finnish decision to allow same-sex marriage “shows the power of citizen initiatives” https://ensr.oii.ox.ac.uk/finnish-decision-to-allow-same-sex-marriage-shows-the-power-of-citizen-initiatives/ Fri, 28 Nov 2014 13:45:04 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3024
November rainbows in front of the Finnish parliament house in Helsinki, one hour before the vote for same-sex marriage. Photo by Anni Sairio.

In a pivotal vote today, the Finnish parliament voted in favour of removing references to gender in the country’s marriage law, which will make it possible for same-sex couples to get married. It was predicted to be an extremely close vote, but in the end gender neutrality won with 105 votes to 92. Same-sex couples have been able to enter into registered partnerships in Finland since 2002, but this form of union lacks some of the legal and more notably symbolic privileges of marriage. Today’s decision is thus a historic milestone in the progress towards tolerance and equality before the law for all the people of Finland.

Today’s parliamentary decision is also a milestone for another reason: it is the first piece of “crowdsourced” legislation on its way to becoming law in Finland. A 2012 constitutional change made it possible for 50,000 citizens or more to propose a bill to the parliament, through a mechanism known as the citizen initiative. Citizens can develop bills on a website maintained by the Open Ministry, a government-supported citizen association. The Open Ministry aims to be the deliberative version of government ministries that do the background work for government bills. Once the text of a citizen bill is finalised, citizens can also endorse it on a website maintained by the Ministry of Justice. If a bill attracts more than 50,000 endorsements within six months, it is delivered to the parliament.
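The qualification rule described above (50,000 endorsements within six months) can be modelled in a few lines. The sketch below is purely schematic: the threshold and window come from the text, but the function name, the 183-day approximation of "six months", and the example figures are our own illustrative assumptions.

```python
from datetime import date, timedelta

ENDORSEMENT_THRESHOLD = 50_000        # endorsements required for delivery
COLLECTION_WINDOW = timedelta(days=183)  # roughly six months

def delivered_to_parliament(endorsements: int, opened: date, closed: date) -> bool:
    """Schematic check: does a citizen initiative qualify for delivery
    to the Finnish parliament under the 2012 rules described above?"""
    return (endorsements >= ENDORSEMENT_THRESHOLD
            and closed - opened <= COLLECTION_WINDOW)

# Hypothetical figures for illustration:
print(delivered_to_parliament(62_000, date(2014, 3, 1), date(2014, 8, 20)))  # True
print(delivered_to_parliament(41_000, date(2014, 3, 1), date(2014, 8, 20)))  # False
```

Both conditions must hold: a bill that gathers enough endorsements but too slowly, or quickly but short of the threshold, is not delivered.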

A significant reason behind the creation of the citizen initiative system was to increase citizen involvement in decision making and thus enhance the legitimacy of Finland’s political system: to make people feel that they can make a difference. Finland, like most Western democracies, is suffering from dwindling voter turnout rates (though in the last parliamentary elections, domestic voter turnout was a healthy 70.5 percent). However, here lies one of the potential pitfalls of the citizen initiative system. Of the six citizen bills delivered to the parliament so far, parliamentarians have outright rejected most proposals. According to research presented by Christensen and his colleagues at our Internet, Politics & Policy conference in Oxford in September (and to be published in issue 7:1 of Policy and Internet, March 2015), there is a risk that the citizen initiative system ends up having an effect that is opposite from what was intended:

“[T]hose who supported [a crowdsourced bill rejected by the parliament] experienced a drop in political trust as a result of not achieving this outcome. This shows that political legitimacy may well decline when participants do not get the intended result (cf. Budge, 2012). Hence, if crowdsourcing legislation in Finland is to have a positive impact on political legitimacy, it is important that it can help produce popular Citizens’ initiatives that are subsequently adopted by Parliament.”

One reason why citizen initiatives have faced a rough time in the parliament is that they are a somewhat odd addition to the parliament’s existing ways of working. The Finnish parliament, like most parliaments in representative democracies, is used to working in a government-opposition arrangement, where the government proposes bills, and parliamentarians belonging to government parties are expected to support those bills and resist bills originating from the opposition. Conversely, opposition leaders expect their members to be loyal to their own initiatives. In this arrangement, citizen initiatives have fallen into a no-man’s land where they are endorsed by neither government nor opposition members. Thanks to the party whip system, their only hope of passing has been to be adopted by the government. But the whole point of citizen initiatives is that they allow bills not proposed by the government to reach parliament, making the exercise rather pointless.

The marriage equality citizen initiative was able to break this pattern not only because it enjoyed immense popular support, but also because many parliamentarians saw marriage equality as a matter of conscience, where the party whip system wouldn’t apply. Parliamentarians across party lines voted in support and against the initiative, in many cases ignoring their party leaders’ instructions.

Prime Minister Alexander Stubb commented immediately after the vote that the outcome “shows the power of citizen initiatives”, “citizen democracy and direct democracy”. Now that a precedent has been set, it is possible that subsequent citizen initiatives, too, get judged more on their merits than on who proposed them. Today’s decision on marriage equality may thus turn out to be historic not only for advancing equality and fairness, but also for helping to define crowdsourcing’s role in Finnish parliamentary decision making.


Vili Lehdonvirta is a Research Fellow and DPhil Programme Director at the Oxford Internet Institute, and an editor of the Policy & Internet journal. He is an economic sociologist who studies the social and economic dimensions of new information technologies around the world, with particular expertise in digital markets and crowdsourcing.

]]>
The life and death of political news: using online data to measure the impact of the audience agenda https://ensr.oii.ox.ac.uk/the-life-and-death-of-political-news-using-online-data-to-measure-the-impact-of-the-audience-agenda/ Tue, 09 Sep 2014 07:04:47 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2879
Image of the Telegraph’s state of the art “hub and spoke” newsroom layout by David Sim.
The political agenda has always been shaped by what the news media decide to publish — through their ability to broadcast to large, loyal audiences in a sustained manner, news editors have the ability to shape ‘political reality’ by deciding what is important to report. Traditionally, journalists pitch stories to their editors from a pool of potential stories; editors then choose which to publish. However, with the increasing importance of online news, editors must now decide not only what to publish and where, but how long it should remain prominent and visible to the audience on the front page of the news website.

The question of how much influence the audience has in these decisions has always been ambiguous. While in theory we might expect journalists to be attentive to readers, journalism has also been characterized as a profession with a “deliberate…ignorance of audience wants” (Anderson, 2011). This ‘anti-populism’ is still often portrayed as an important journalistic virtue, in the context of telling people what they need to hear, rather than what they want to hear. Recently, however, attention has been turning to the potential impact that online audience metrics are having on journalism’s “deliberate ignorance”. Online publishing provides a huge amount of information to editors about visitor numbers, visit frequency, and what visitors choose to read and how long they spend reading it. Online editors now have detailed information about what articles are popular almost as soon as they are published, with these statistics frequently displayed prominently in the newsroom.

The rise of audience metrics has created concern both within the journalistic profession and academia, as part of a broader set of concerns about the way journalism is changing online. Many have expressed concern about a ‘culture of click’, whereby important but unexciting stories make way for more attention grabbing pieces, and editorial judgments are overridden by traffic statistics. At a time when media business models are under great strain, the incentives to follow the audience are obvious, particularly when business models increasingly rely on revenue from online traffic and advertising. The consequences for the broader agenda-setting function of the news media could be significant: more prolific or earlier readers might play a disproportionate role in helping to select content; particular social classes or groupings that read news online less frequently might find their issues being subtly shifted down the agenda.

The extent to which such a populist influence exists has attracted little empirical research. Many ethnographic studies have shown that audience metrics are being captured in online newsrooms, with anecdotal evidence for the importance of traffic statistics on an article’s lifetime (Anderson, 2011; MacGregor, 2007). However, many editors have emphasised that popularity is not a major determining factor (MacGregor, 2007), and that news values remain significant in terms of placement of news articles.

In order to assess the possible influence of audience metrics on decisions made by political news editors, we undertook a systematic, large-scale study of the relationship between readership statistics and article lifetime. We examined the news cycles of five major UK news outlets (the BBC, the Daily Telegraph, the Guardian, the Daily Mail and the Mirror) over a period of six weeks, capturing their front pages every 15 minutes, resulting in over 20,000 front-page captures and more than 40,000 individual articles. We measured article readership by capturing information from the BBC’s “most read” list of news articles (twelve percent of the articles were featured at some point on the ‘most read’ list, with a median time to achieving this status of two hours, and an average article life of 15 hours on the front page). Using the Cox Proportional Hazards model (which allows us to quantify the impact of an article’s appearance on the ‘most read’ list on its chance of survival) we asked whether an article’s being listed in a ‘most read’ column affected the length of time it remained on the front page.
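The scale of the capture exercise follows directly from the sampling rate described above. A quick back-of-the-envelope check (the variable names are ours; the figures come from the study design):

```python
WEEKS = 6
OUTLETS = 5
CAPTURES_PER_HOUR = 4  # one front-page snapshot every 15 minutes

days = WEEKS * 7
captures_per_outlet = days * 24 * CAPTURES_PER_HOUR
total_captures = captures_per_outlet * OUTLETS

print(captures_per_outlet)  # 4032 snapshots per outlet
print(total_captures)       # 20160, consistent with "over 20,000" captures
```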

We found that ‘most read’ articles had, on average, a 26% lower chance of being removed from the front page than equivalent articles which were not on the most read list, providing support for the idea that online editors are influenced by readership statistics. In addition to assessing the general impact of readership statistics, we also wanted to see whether this effect differs between ‘political’ and ‘entertainment’ news. Research on participatory journalism has suggested that online editors might be more willing to allow audience participation in areas of soft news such as entertainment, arts, sports, etc. We find a small amount of evidence for this claim, though the difference between the two categories was very slight.
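The Cox model's output is easiest to read through hazard ratios: a fitted coefficient β for the 'most read' covariate implies a multiplicative effect exp(β) on the removal hazard. A minimal sketch of that interpretation step (the coefficient value here is hypothetical, chosen so the implied effect matches the roughly 26% reduction reported above; it is not the paper's actual estimate):

```python
import math

def hazard_ratio(beta: float) -> float:
    """A Cox regression coefficient maps to a hazard ratio via exp(beta)."""
    return math.exp(beta)

def pct_change_in_hazard(beta: float) -> float:
    """Percentage change in the removal hazard implied by the coefficient."""
    return (hazard_ratio(beta) - 1.0) * 100.0

beta_most_read = -0.30  # hypothetical fitted coefficient for 'most read' status
print(round(hazard_ratio(beta_most_read), 2))       # 0.74
print(round(pct_change_in_hazard(beta_most_read)))  # -26
```

A hazard ratio below 1 means 'most read' articles are removed from the front page more slowly than otherwise equivalent articles.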

Finally, we wanted to assess whether there is a ‘quality’ / ‘tabloid’ split. Part of the definition of tabloid style journalism lies precisely in its willingness to follow the demands of its audience. However, we found the audience ‘effect’ (surprisingly) to be most obvious in the quality papers. For tabloids, ‘most read’ status actually had a slightly negative effect on article lifetime. We wouldn’t argue that tabloid editors actively reject the wishes of their audience; however we can say that these editors are no more likely to follow their audience than the typical ‘quality’ editor, and in fact may be less so. We do not have a clear explanation for this difference, though we could speculate that, as tabloid publications are already more tuned in to the wishes of their audience, the appearance of readership statistics makes less practical difference to the overall product. However it may also simply be the case that the online environment is slowly producing new journalistic practices for which the tabloid / quality distinction will be less useful.

So on the basis of our study, we can say that high-traffic articles do in fact spend longer in the spotlight than ones that attract less readership: audience readership does have a measurable impact on the lifespan of political news. The audience is no longer the unknown quantity it was in offline journalism: it appears to have a clear impact on journalistic practice. The question that remains, however, is whether this constitutes evidence of a new ‘populism’ in journalism; or whether it represents (as editors themselves have argued) the simple striking of a balance between audience demands and news values.

Read the full article: Bright, J., and Nicholls, T. (2014) The Life and Death of Political News: Measuring the Impact of the Audience Agenda Using Online Data. Social Science Computer Review 32 (2) 170-181.

References

Anderson, C. W. (2011) Between creative and quantified audiences: Web metrics and changing patterns of newswork in local US newsrooms. Journalism 12 (5) 550-566.

MacGregor, P. (2007) Tracking the Online Audience. Journalism Studies 8 (2) 280-298.


OII Research Fellow Jonathan Bright is a political scientist specialising in computational and ‘big data’ approaches to the social sciences. His major interest concerns studying how people get information about the political process, and how this is changing in the internet era.

Tom Nicholls is a doctoral student at the Oxford Internet Institute. His research interests include the impact of technology on citizen/government relationships, the Internet’s implications for public management and models of electronic public service delivery.

]]>
How easy is it to research the Chinese web? https://ensr.oii.ox.ac.uk/how-easy-is-it-to-research-the-chinese-web/ Tue, 18 Feb 2014 11:05:57 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2418 Chinese Internet Cafe
Access to data from the Chinese Web, like other Web data, depends on platform policies, the level of data openness, and the availability of data intermediary and tools. Image of a Chinese Internet cafe by Hal Dick.

Ed: How easy is it to request or scrape data from the “Chinese Web”? And how much of it is under some form of government control?

Han-Teng: Access to data from the Chinese Web, like other Web data, depends on the policies of platforms, the level of data openness, and the availability of data intermediaries and tools. All these factors have a direct impact on the quality and usability of data. Since government control takes many forms and serves many intentions, and increasingly covers not just websites inside mainland China under Chinese jurisdiction but also the Chinese “soft power” institutions and individuals telling the “Chinese story” or “Chinese dream” (as opposed to the “American dream”), case-by-case research is required to determine the extent and level of government control and intervention. Based on my own research on Chinese user-generated encyclopaedias and Chinese-language Twitter and Weibo, the expectation is that control and intervention by Beijing will be most likely on political and cultural topics, and less likely on economic or entertainment ones.

This observation is linked to how various forms of government control and intervention are executed, which often require massive data and human operations to filter, categorise and produce content that is often based on keywords. This is particularly true for Chinese websites in mainland China (behind the Great Firewall, excluding Hong Kong and Macao), where private website companies execute these day-to-day operations under the directives and memos of various Chinese party and government agencies.

Of course there is an extra layer of challenges if researchers try to request content and traffic data from the major Chinese websites for research, especially regarding censorship. Nonetheless, since most Web content data is open, researchers such as Professor Fu at the University of Hong Kong have managed to scrape data samples from Weibo, helping researchers like me to access the data more easily. These openly collected data can then be used to measure potential government control, as has been done for previous research on search engines (Jiang and Akhtar 2011; Zhu et al. 2011) and social media (Bamman et al. 2012; Fu et al. 2013; Fu and Chau 2013; King et al. 2012; Zhu et al. 2012).

It follows that the availability of data intermediary and tools will become important for both academic and corporate research. Many new “public opinion monitoring” companies compete to provide better tools and datasets as data intermediaries, including the Online Public Opinion Monitoring and Measuring Unit (人民网舆情监测室) of the People’s Net (a Party press organ) with annual revenue near 200 million RMB. Hence, in addition to the on-going considerations on big data and Web data research, we need to factor in how these private and public Web data intermediaries shape the Chinese Web data environment (Liao et al. 2013).

Given the fact that the government’s control of information on the Chinese Web involves not only the marginalization (as opposed to the traditional censorship) of “unwanted” messages and information, but also the prioritisation of propaganda or pro-government messages (including those made by paid commentators and “robots”), I would add that the new challenges for researchers include the detection of paid (and sometimes robot-generated) comments. Although these challenges are not exactly the same as data access, researchers need to consider them for data collection.

Ed: How much of the content and traffic is identifiable or geolocatable by region (eg mainland vs Hong Kong, Taiwan, abroad)?

Han-Teng: Identifying geographic information in Chinese Web data, as in other Web data, can largely be done by geo-IP (a straightforward IP-to-location mapping service), domain names (.cn for China; .hk for Hong Kong; .tw for Taiwan), and language preferences (simplified Chinese used by mainland Chinese users; traditional Chinese used in Hong Kong and Taiwan). Again, as with the question of data access, the availability and quality of such geographic and linguistic information depend on the policies, openness, and availability of data intermediaries and tools.
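Two of the three signals mentioned here, domain suffixes and script preference, can be sketched in a few lines of code. The sketch below is purely illustrative: the suffixes are the ones named above, but the simplified/traditional character tables are tiny hypothetical samples (real work would use full conversion tables, and geo-IP would require an external IP-to-location database, so it is omitted).

```python
# Illustrative-only tables: real research needs complete character maps.
TLD_REGION = {".cn": "mainland China", ".hk": "Hong Kong", ".tw": "Taiwan"}
SIMPLIFIED_ONLY = set("国网语爱书")   # tiny sample of simplified-only forms
TRADITIONAL_ONLY = set("國網語愛書")  # their traditional counterparts

def region_from_domain(domain: str) -> str:
    """Map a hostname to a region via its top-level domain suffix."""
    for suffix, region in TLD_REGION.items():
        if domain.endswith(suffix):
            return region
    return "unknown"

def script_of(text: str) -> str:
    """Guess simplified vs. traditional script by counting marker characters."""
    simp = sum(ch in SIMPLIFIED_ONLY for ch in text)
    trad = sum(ch in TRADITIONAL_ONLY for ch in text)
    if simp > trad:
        return "simplified"
    if trad > simp:
        return "traditional"
    return "ambiguous"

print(region_from_domain("news.sina.com.cn"))  # mainland China
print(script_of("中國的網絡"))                  # traditional
```

Note that script is only a proxy for region: mainland users sometimes write traditional characters and vice versa, which is exactly why Han-Teng stresses the quality of such signals.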

Nonetheless, there have been research efforts to use the geographic and/or linguistic information of Chinese Web data to assess the level and extent of convergence and separation of Chinese information and users around the world (Etling et al. 2009; Liao 2008; Taneja and Wu 2013). Etling and colleagues (2009) concluded their mapping of the Chinese blogosphere with an interpretation of five “attentive spaces” roughly corresponding to five clusters or zones in the network map: on one side, two clusters of “Pro-state” and “Business” bloggers, and on the other, two clusters of “Overseas” bloggers (including Hong Kong and Taiwan) and “Culture”. Situated between the three clusters of “Pro-state”, “Overseas” and “Culture” (and thus at the centre of the network map) is the remaining cluster, which they call the “critical discourse” cluster, sitting at the intersection of the two sides (albeit more on the “blocked” side of the Great Firewall).

I myself found distinct geographic foci and linguistic preferences in the online citations of Baidu Baike and Chinese Wikipedia (Liao 2008). Other research, based on a sample of traffic data, shows the existence of a single “Chinese” cluster as an instance of a “culturally defined market”, regardless of users’ geographic and linguistic differences (Taneja and Wu 2013). Although I question their argument that the Great Firewall has only a very limited impact on this single “Chinese” cluster, they demonstrate the possibility of extracting geographic and linguistic information from Chinese Web data to better understand the dynamics of Chinese online interactions, which are by no means limited to China or confined behind the Great Firewall.

Ed: In terms of online monitoring of public opinion, is it possible to identify robots / “50 cent party” — that is, what proportion of the “opinion” actually has a government source?

Han-Teng: There have been research efforts to identify robot comments by analysing the patterns and content of comments, and the relationships between their profiles and other accounts. It is more difficult to prove a direct footprint of government sources. Nonetheless, if researchers take another approach, such as narrative analysis for well-defined propaganda research (for example, pro- and anti-Falun Gong opinions), it might be easier to categorise and visualise the dynamics, and then trace dominant keywords and narratives back to their origins to identify the sources of the loudest messages. I personally think such research and analytical efforts require deep technical and cultural-political understanding of Chinese Web data, preferably within an integrated mixed-method research design that incorporates both the quantitative and qualitative methods required for the data question at hand.
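One simple pattern-based heuristic of the kind mentioned here is flagging identical or near-identical comments posted by many different accounts. The sketch below is a toy illustration with invented usernames and comments, not a description of any actual detection system; the text normalisation is deliberately crude, and real systems also weigh timing, account age, and network structure.

```python
from collections import defaultdict

# Invented example data: three accounts post the "same" comment.
comments = [
    ("user_a", "Great policy, fully support it!"),
    ("user_b", "great policy fully support it"),
    ("user_c", "I disagree with this decision."),
    ("user_d", "Great policy, fully support it!!"),
]

def normalise(text: str) -> str:
    """Lowercase and strip punctuation so trivial variants collide."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch == " ").strip()

groups = defaultdict(list)
for user, text in comments:
    groups[normalise(text)].append(user)

# Comments repeated across 3+ accounts are flagged as possibly coordinated.
suspicious = {t: users for t, users in groups.items() if len(users) >= 3}
print(suspicious)  # {'great policy fully support it': ['user_a', 'user_b', 'user_d']}
```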

Ed: In terms of censorship, ISPs operate within explicit governmental guidelines; do the public (who contribute content) also have explicit rules about what topics and content are ‘acceptable’, or do they have to work it out by seeing what gets deleted?

Han-Teng: As a general rule, online censorship works better when individual contributors are isolated. Most of the time, contributors experience technical difficulties when using Beijing’s unwanted keywords or visiting undesired websites, triggering self-censorship to avoid such difficulties. I personally believe such tacit learning serves as the most relevant psychological and behavioural mechanism (rather than explicit rules). In a sense, the power of censorship and political discipline lies in the fact that the real rules of engagement are never made explicit to users, giving technocrats more latitude to exercise power in an arbitrary fashion. I would describe the general situation as follows: directives are given to both ISPs and ICPs about certain “hot terms”, some dynamic and some constant, and users “learn” them by encountering various forms of “technical difficulties”. Thus, while ISPs and ICPs may not enforce the same directives in the same fashion (some overshoot while others undershoot), general tacit knowledge about the “red line” is nonetheless delivered.

Nevertheless, there are some efforts where users share their experiences with one another, so that they develop a social understanding of what information, and which categories of users, are being disciplined. There are also constant efforts outside mainland China, especially at institutions in Hong Kong and Berkeley, to monitor what is being deleted. However, given that data is abundant for Chinese users, I have become more worried about the phenomenon of the “marginalisation of information and/or narratives”. It should be noted that censorship or deletion is just one of the tools of propaganda technocrats, and that the Chinese Communist Party has had its share of historical lessons (and also victories) against its past opponents, such as the Chinese Nationalist Party and the United States during the Chinese Civil War and the Cold War. I strongly believe that as researchers we need better concepts and tools to assess the dynamics of information marginalisation and prioritisation, treating censorship and data deletion as one mechanism of information marginalisation in an age of data abundance and limited attention.

Ed: Has anyone tried to produce a map of censorship: ie mapping absence of discussion? For a researcher wanting to do this, how would they get hold of the deleted content?

Han-Teng: Mapping censorship has been done through experiments (MacKinnon 2008; Zhu et al. 2011) and by contrasting datasets (Fu et al. 2013; Liao 2013; Zhu et al. 2012). Here the availability of data intermediaries, such as WeiboScope at Hong Kong University, and of unblocked alternatives such as Chinese Wikipedia, provides direct and indirect points of comparison for seeing what is being, or is most likely to be, deleted. As I am more interested in mapping information marginalisation (as opposed to prioritisation), I would say that we need more analytical and visualisation tools to map out the different levels and extents of information censorship and marginalisation. The research challenges then shift to the questions of how and why certain content has been deleted inside mainland China, and thus kept or leaked outside China. As we begin to realise that the censorship regime can still achieve its desired political effects by voicing down undesired messages and voicing up desired ones, researchers do not necessarily have to get hold of the deleted content from websites inside mainland China. They can simply reuse the plentiful Chinese Web data available outside the censorship and filtering regime to undertake experiments or comparative studies.
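The "contrasting datasets" idea can be illustrated very simply: posts captured by an outside mirror that later disappear from the inside view are candidates for deletion. The post IDs and texts below are invented for illustration; real comparisons (such as those built on WeiboScope data) must also handle timestamps, sampling gaps, and confounds like user self-deletion.

```python
# Hypothetical snapshot taken by a mirror outside the filtering regime...
outside_snapshot = {"p1": "weather report", "p2": "protest plan", "p3": "recipe"}
# ...versus what is still visible on the mainland site later.
inside_now = {"p1": "weather report", "p3": "recipe"}

# Posts present outside but missing inside are censorship candidates.
suspected_deleted = {pid: txt for pid, txt in outside_snapshot.items()
                     if pid not in inside_now}
print(suspected_deleted)  # {'p2': 'protest plan'}
```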

Ed: What other questions are people trying to explore or answer with data from the “Chinese Web”? And what are the difficulties? For instance, are there enough tools available for academics wanting to process Chinese text?

Han-Teng: As Chinese societies (including mainland China, Hong Kong, Taiwan, and overseas diaspora communities) go digital and networked, it is only a matter of time before Chinese Web data becomes the equivalent of English Web data. However, there are challenges in processing Chinese-language texts, although several of the major ones become manageable as digital and network tools go multilingual. In fact, Chinese-language users and technologies have been major goals and actors in the drive towards a multilingual Internet (Liao 2009a,b). While there has been technical progress in basic tools, we as Chinese Internet researchers still lack data and tool intermediaries designed to process Chinese texts smoothly. For instance, much analytical software depends on, or outright requires, space characters as word boundaries, a condition that does not apply to Chinese texts.
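The word-boundary problem is easy to demonstrate: whitespace tokenisation, which most English-oriented tools assume, treats an entire Chinese sentence as a single token. The character-bigram fallback shown below is a crude stand-in sometimes used in information retrieval when no proper segmenter (dictionary- or model-based) is available; it is illustrative only.

```python
english = "public opinion online"
chinese = "网络舆论监测"  # "online public opinion monitoring"

# Whitespace splitting works for English but not for Chinese.
print(english.split())  # ['public', 'opinion', 'online']
print(chinese.split())  # ['网络舆论监测'] -- one giant "word"

def char_bigrams(text: str):
    """Overlapping character bigrams: a crude substitute for word tokens."""
    return [text[i:i + 2] for i in range(len(text) - 1)]

print(char_bigrams(chinese))  # ['网络', '络舆', '舆论', '论监', '监测']
```

Several of the bigrams ('网络', '舆论', '监测') happen to be real words here, which is why the trick works tolerably in retrieval, but proper segmentation tools remain necessary for serious analysis.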

In addition, there are technical and interpretative challenges in analysing Chinese text datasets with mixed scripts (e.g. simplified and traditional Chinese) or with other languages mixed in. Mandarin Chinese is not the only language inside China: there are indications that Cantonese and Shanghainese have a significant online presence. Minority languages such as Tibetan, Mongolian, and Uyghur are also used by official Chinese websites to demonstrate the cultural inclusiveness of the Chinese authorities. Chinese official and semi-official diplomatic organs have also tried to tell “Chinese stories” in several of the world’s major languages, sometimes in direct competition with political opponents such as Falun Gong.

These areas of the “Chinese Web” data remain unexplored territory for systematic research, which will require more tools and methods that are similar to the toolkits of multi-lingual Internet researchers. Hence I would say the basic data and tool challenges are not particular to the “Chinese Web”, but are rather a general challenge to the “Web” that is becoming increasingly multilingual by the day. We Chinese Internet researchers do need more collaboration when it comes to sharing data and tools, and I am hopeful that we will have more trustworthy and independent data intermediaries, such as Weiboscope and others, for a better future of the Chinese Web data ecology.

References

Bamman, D., O’Connor, B., & Smith, N. (2012). Censorship and deletion practices in Chinese social media. First Monday, 17(3-5).

Etling, B., Kelly, J., & Faris, R. (2009). Mapping Chinese Blogosphere. In 7th Annual Chinese Internet Research Conference (CIRC 2009). Annenberg School for Communication, University of Pennsylvania, Philadelphia, US.

Fu, K., Chan, C., & Chau, M. (2013). Assessing Censorship on Microblogs in China: Discriminatory Keyword Analysis and Impact Evaluation of the “Real Name Registration” Policy. IEEE Internet Computing, 17(3), 42–50.

Fu, K., & Chau, M. (2013). Reality Check for the Chinese Microblog Space: a random sampling approach. PLOS ONE, 8(3), e58356.

Jiang, M., & Akhtar, A. (2011). Peer into the Black Box of Chinese Search Engines: A Comparative Study of Baidu, Google, and Goso. Presented at the The 9th Chinese Internet Research Conference (CIRC 2011), Washington, D.C.: Institute for the Study of Diplomacy. Georgetown University.

King, G., Pan, J., & Roberts, M. (2012). How censorship in China allows government criticism but silences collective expression. In APSA 2012 Annual Meeting Paper.

Liao, H.-T. (2008). A webometric comparison of Chinese Wikipedia and Baidu Baike and its implications for understanding the Chinese-speaking Internet. In 9th annual Internet Research Conference: Rethinking Community, Rethinking Place. Copenhagen.

Liao, H.-T. (2009a). Are Chinese characters not modern enough? An essay on their role online. GLIMPSE: the art + science of seeing, 2(1), 16–24.

Liao, H.-T. (2009b). Conflict and Consensus in the Chinese version of Wikipedia. IEEE Technology and Society Magazine, 28(2), 49–56. doi:10.1109/MTS.2009.932799

Liao, H.-T. (2013, August 5). How do Baidu Baike and Chinese Wikipedia filter contribution? A case study of network gatekeeping. To be presented at the Wikisym 2013: The Joint International Symposium on Open Collaboration, Hong Kong.

Liao, H.-T., Fu, K., Jiang, M., & Wang, N. (2013, June 15). Chinese Web Data: Definition, Uses, and Scholarship. (Accepted). To be presented at the 11th Annual Chinese Internet Research Conference (CIRC 2013), Oxford, UK.

MacKinnon, R. (2008). Flatter world and thicker walls? Blogs, censorship and civic discourse in China. Public Choice, 134(1), 31–46. doi:10.1007/s11127-007-9199-0

Taneja, H., & Wu, A. X. (2013). How Does the Great Firewall of China Affect Online User Behavior? Isolated “Internets” as Culturally Defined Markets on the WWW. Presented at the 11th Annual Chinese Internet Research Conference (CIRC 2013), Oxford, UK.

Zhu, T., Bronk, C., & Wallach, D. S. (2011). An Analysis of Chinese Search Engine Filtering. arXiv:1107.3794.

Zhu, T., Phipps, D., Pridgen, A., Crandall, J. R., & Wallach, D. S. (2012). Tracking and Quantifying Censorship on a Chinese Microblogging Site. arXiv:1211.6166.


Han-Teng was talking to blog editor David Sutcliffe.

Han-Teng Liao is an OII DPhil student whose research aims to reconsider the role of keywords (as in understanding “keyword advertising” using knowledge from sociolinguistics and information science) and hyperlinks (webometrics) in shaping the sense of “fellow users” in digital networked environments. Specifically, his DPhil project is a comparative study of two major user-contributed Chinese encyclopedias, Chinese Wikipedia and Baidu Baike.

]]>
Mapping collective public opinion in the Russian blogosphere https://ensr.oii.ox.ac.uk/mapping-collective-public-opinion-in-the-russian-blogosphere/ Mon, 10 Feb 2014 11:30:05 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2372
Widely reported as fraudulent, the 2011 Russian Parliamentary elections provoked mass street protest action by tens of thousands of people in Moscow and cities and towns across Russia. Image by Nikolai Vassiliev.

Blogs are becoming increasingly important for agenda setting and the formation of collective public opinion on a wide range of issues. In countries like Russia, where the Internet is not technically filtered but the traditional media is tightly controlled by the state, they may be particularly important. The Russian-language blogosphere counts about 85 million blogs – a number far beyond the capacity of any government to control – and the Russian search engine Yandex, with its blog rating service, serves as an important reference point for Russia’s educated public in its search for authoritative and independent sources of information. The blogosphere is thereby able to function as a mass medium of “public opinion” and also to exercise influence.

One topic that was particularly salient over the period we studied concerned the Russian Parliamentary elections of December 2011. Widely reported as fraudulent, they provoked immediate and mass street protest action by tens of thousands of people in Moscow and cities and towns across Russia, as well as corresponding activity in the blogosphere. Protesters made effective use of the Internet to organize a movement that demanded cancellation of the parliamentary election results, and the holding of new and fair elections. These protests continued until the following summer, gaining widespread national and international attention.

Most of the political and social discussion blogged in Russia is hosted on the blog platform LiveJournal. Some of these bloggers can claim a certain amount of influence; the top thirty bloggers have over 20,000 “friends” each, representing a good circulation for the average Russian newspaper. Part of the blogosphere may thereby resemble the traditional media; the deeper into the long tail of average bloggers, however, the more it functions as pure public opinion. This “top list” effect may be particularly important in societies (like Russia’s) where popularity lists exert a visible influence on bloggers’ competitive behavior and on public perceptions of their significance. Given the influence of these top bloggers, it may be claimed that, like the traditional media, they act as filters of issues to be thought about, and as definers of their relative importance and salience.

Gauging public opinion is of obvious interest to governments and politicians, and opinion polls are widely used to do this, but they have been consistently criticized for the imposition of agendas on respondents by pollsters, producing artefacts. Indeed, the public opinion literature has tended to regard opinion as something to be “extracted” by pollsters, which inevitably pre-structures the output. This literature doesn’t consider that public opinion might also exist in the form of natural language texts, such as blog posts, that have not been pre-structured by external observers.

There are two basic ways to detect topics in natural language texts: the first is manual coding of texts (ie by traditional content analysis), and the other involves rapidly developing techniques of automatic topic modeling or text clustering. The media studies literature has relied heavily on traditional content analysis; however, these studies are inevitably limited by the volume of data a person can physically process, given there may be hundreds of issues and opinions to track — LiveJournal’s 2.8 million blog accounts, for example, generate 90,000 posts daily.

For large text collections, therefore, only the second approach is feasible. In our article we explored how methods for topic modeling developed in computer science may be applied to social science questions – such as how to efficiently track public opinion on particular (and evolving) issues across entire populations. Specifically, we demonstrate how automated topic modeling can identify public agendas, their composition, structure, the relative salience of different topics, and their evolution over time without prior knowledge of the issues being discussed and written about. This automated “discovery” of issues in texts involves division of texts into topically — or more precisely, lexically — similar groups that can later be interpreted and labeled by researchers. Although this approach has limitations in tackling subtle meanings and links, experiments where automated results have been checked against human coding show over 90 percent accuracy.
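A toy version of this "lexical grouping" idea can be sketched with bag-of-words vectors and cosine similarity. Real topic models (such as LDA, as used in the paper) are probabilistic and far more sophisticated; the greedy threshold clustering below, with four invented mini-"posts", only illustrates what it means to group texts that share vocabulary.

```python
import math
from collections import Counter

# Four invented toy documents: two about elections, two about football.
docs = [
    "election fraud protest vote",
    "protest vote election rally",
    "football match goal score",
    "goal score match referee",
]

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

vecs = [Counter(d.split()) for d in docs]

# Greedy single-link grouping: join a document to the first cluster
# containing a sufficiently similar document, else start a new cluster.
clusters = []
for i, v in enumerate(vecs):
    for c in clusters:
        if any(cosine(v, vecs[j]) > 0.5 for j in c):
            c.append(i)
            break
    else:
        clusters.append([i])

print(clusters)  # [[0, 1], [2, 3]]
```

The clusters themselves carry no labels; as in the research described above, a human still has to inspect each group's vocabulary and name the topic.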

The computer science literature is flooded with methodological papers on automatic analysis of big textual data. While these methods can’t entirely replace manual work with texts, they can help reduce it to the most meaningful and representative areas of the textual space they help to map, and are the only means to monitor agendas and attitudes across multiple sources, over long periods and at scale. They can also help solve problems of insufficient and biased sampling, when entire populations become available for analysis. Due to their recentness, as well as their mathematical and computational complexity, these approaches are rarely applied by social scientists, and to our knowledge, topic modeling has not previously been applied for the extraction of agendas from blogs in any social science research.

The natural extension of automated topic or issue extraction involves sentiment mining and analysis; as González-Bailón, Kaltenbrunner, and Banchs (2012) have pointed out, public opinion doesn’t just involve specific issues, but also encompasses the state of public emotion about these issues, including attitudes and preferences. This involves extracting opinions on the issues/agendas that are thought to be present in the texts, usually by dividing sentences into positive and negative. These techniques are based on human-coded dictionaries of emotive words, on algorithmic construction of sentiment dictionaries, or on machine learning techniques.
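The first of these approaches, human-coded dictionaries of emotive words, is straightforward to sketch: count positive and negative lexicon hits in a sentence and take the difference. The word lists below are invented for illustration; real sentiment lexicons contain thousands of entries and must handle negation, intensifiers, and domain-specific vocabulary.

```python
# Tiny hypothetical sentiment lexicons (real ones are far larger).
POSITIVE = {"fair", "free", "hope", "support", "win"}
NEGATIVE = {"fraud", "corrupt", "anger", "protest", "lose"}

def sentiment(sentence: str) -> str:
    """Classify a sentence by counting lexicon hits."""
    words = sentence.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("We hope for free and fair elections"))  # positive
print(sentiment("Anger at election fraud"))              # negative
```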

Both topic modeling and sentiment analysis techniques are required to effectively monitor self-generated public opinion. When methods for tracking attitudes complement methods to build topic structures, a rich and powerful map of self-generated public opinion can be drawn. Of course this mapping can’t completely replace opinion polls; rather, it’s a new way of learning what people are thinking and talking about; a method that makes the vast amounts of user-generated content about society – such as the 65 million blogs that make up the Russian blogosphere — available for social and policy analysis.

Naturally, this approach to public opinion and attitudes is not free of limitations. First, the dataset is only representative of the self-selected population of those who have authored the texts, not of the whole population. Second, like regular polled public opinion, online public opinion only covers those attitudes that bloggers are willing to share in public. Furthermore, there is still a long way to go before the relevant instruments become mature, and this will demand the efforts of the whole research community: computer scientists and social scientists alike.

Read the full paper: Olessia Koltsova and Sergei Koltcov (2013) Mapping the public agenda with topic modeling: The case of the Russian LiveJournal. Policy and Internet 5 (2) 207–227.

Also read on this blog: Can text mining help handle the data deluge in public policy analysis? by Aude Bicquelet.

References

González-Bailón, S., A. Kaltenbrunner, and R.E. Banchs. 2012. “Emotions, Public Opinion and U.S. Presidential Approval Rates: A 5-Year Analysis of Online Political Discussions.” Human Communication Research 38 (2): 121–43.

]]>
Technological innovation and disruption was a big theme of the WEF 2014 in Davos: but where was government? https://ensr.oii.ox.ac.uk/technological-innovation-disruption-was-big-theme-wef-2014-davos-but-where-was-government/ Thu, 30 Jan 2014 11:23:09 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2536
The World Economic Forum engages business, political, academic and other leaders of society to shape global, regional and industry agendas. Image by World Economic Forum.

Last week, I was at the World Economic Forum in Davos, the first time that the Oxford Internet Institute has been represented there. Being closeted in a Swiss ski resort with 2,500 of the great, the good and the super-rich provided me with a good chance to see what the global elite are thinking about technological change and its role in ‘The Reshaping of the World: Consequences for Society, Politics and Business’, the stated focus of the WEF Annual Meeting in 2014.

What follows are those impressions that relate to public policy and the internet, and reflect only my own experience there. Outside the official programme there are whole hierarchies of breakfasts, lunches, dinners and other events, most of which a newcomer to Davos finds it difficult to discover and some of which require one to be at least a president of a small to medium-sized state — or Matt Damon.

There was much talk of hyperconnectivity, spirals of innovation, S-curves and exponential growth of technological diffusion, digitalization and disruption. As you might expect, the pace of these was emphasized most by those participants from the technology industry. The future of work in the face of leaps forward in robotics was a key theme, drawing on the new book by Erik Brynjolfsson and Andrew McAfee, The Second Machine Age: Work, Progress and Prosperity in a Time of Brilliant Technologies, which is just out in the US. There were several sessions on digital health and the eventual fruition of decades of pilots in telehealth (a banned term now, apparently), as applications based on mobile technologies start to be used more widely. Indeed, all delegates were presented with a ‘Jawbone’ bracelet which tracks the wearer’s exercise and sleep patterns (7,801 steps so far today). And of course there was much talk about the possibilities afforded by big data, if not quite as much as I expected.

The University of Oxford was represented in an ‘Ideas Lab’, convened by the Oxford Martin School on Data, Machines and the Human Factor. This format involves each presenter talking for five minutes in front of their 15 selected images rolling at 20 seconds each, with no control over the timing (described by the designer of the format before the session as ‘waterboarding for academics’, due to the conciseness and brevity required — and I can vouch for that). It was striking how much synergy there was in the presentations by the health engineer Lionel Tarassenko (talking about developments in digital healthcare in the home), the astrophysicist Chris Lintott (on crowdsourcing of science) and myself talking about collective action and mobilization in the social media age. We were all talking about the new possibilities that the internet and social media afford for citizens to contribute to healthcare, scientific knowledge and political change. Indeed, I was surprised that the topics of collective action and civic engagement, probably not traditional concerns of Davos, attracted widespread interest, including a session on ‘The New Citizen’ with the founders of Avaaz.

Of course there was some discussion of the Snowden revelations of the data crawling activities of the US NSA and UK GCHQ, and the privacy implications. A dinner on ‘the Digital Me’ generated an interesting discussion on privacy in the age of social media, reflecting a growing and welcome (to me anyway) pragmatism with respect to issues often hotly contested. As one participant put it, in an age of imperfect, partial information, we become used to the idea that what we read on Facebook is often, through its relation to the past, irrelevant to the present time and not to be taken into consideration when (for example) considering whether to offer someone a job. The wonderful danah boyd gave some insight from her new book It’s Complicated: the social lives of networked teens, from which emerged a discussion of a ‘taxonomy of privacy’ and the importance of considering the use to which data is put, as opposed to just the possession or collection of the data – although this could be dangerous ground, in the light of the Snowden revelations.

There was more talk of the future than the past. I participated in one dinner discussion of the topic of ‘Rethinking Living’ in 50 years’ time, a timespan challenged by Google Chairman Eric Schmidt’s argument earlier in the day that five years was an ‘infinite’ amount of time at the current speed of technological innovation. The after-dinner discussion was surprisingly fun, and at my table at least we found ourselves drawn back to the past, wondering if the rise of preventative health care and the new localism that connectivity affords might look like a return to the pre-industrial age. When it came to the summing up and drawing out of implications for government, I was struck by how most elements of any trajectory of change exposed a growing disconnect between citizens, or business, on the one hand – and government on the other.

This was the one topic that for me was notably absent from WEF 2014; the nature of government in this rapidly changing world, in spite of the three pillars — politics, society, and business — of the theme of the conference noted above. At one lunch convened by McKinsey that was particularly ebullient regarding the ceaseless pace of technological change, I pointed out that government was only at the beginning of the S-curve, or perhaps that such a curve had no relevance for government. Another delegate asked how the assembled audience might help government to manage better here, and another pointed out that globally, we were investing less and less in government at a time when it needed more resources, including far higher remuneration for top officials. But the panellists were less enthusiastic to pick up on these points.

As I have discussed previously on this blog and elsewhere, we are in an era where governments struggle to innovate technologically or to incorporate social media into organizational processes, where digital services lag far behind those of business, and where the big data revolution is passing government by (apart from open data, which is not the same thing as big data; see my Guardian blog post on this issue). Pockets of innovation like the UK Government Digital Service push for government-wide change, but we are still seeing major policy initiatives such as Obama’s healthcare plans in the US or Universal Credit in the UK flounder on technological grounds. Yet there were remarkably few delegates at the WEF representing the executive arm of government, particularly for the UK. So on the relationship between government and citizens in an age of rapid technological change, it was citizens – rather than governments – and, of course, business (given the predominance of CEOs) that received the attention of this high-powered crowd.

At the end of the ‘Rethinking Living’ dinner, a participant from another table said to me that, in contrast to the participants from the technology industry, he thought 50 years was a rather short time horizon. As a landscape architect, designing with trees that take 30 years to grow, he had no problem imagining how things would look on this timescale. It occurred to me that there could be an analogy here with government, which likewise could take this kind of timescale to catch up with the technological revolution. But by that time technology will have moved on, and it may be that governments cannot afford this relaxed pace of catching up with their citizens and the business world. Perhaps this should be a key theme for future forums.


Helen Margetts is the Director of the OII, and Professor of Society and the Internet. She is a political scientist specialising in digital era governance and politics.

]]>
Five recommendations for maximising the relevance of social science research for public policy-making in the big data era https://ensr.oii.ox.ac.uk/five-recommendations-for-maximising-the-relevance-of-social-science-research-for-public-policy-making-in-the-big-data-era/ https://ensr.oii.ox.ac.uk/five-recommendations-for-maximising-the-relevance-of-social-science-research-for-public-policy-making-in-the-big-data-era/#comments Mon, 04 Nov 2013 10:30:30 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2196 As I discussed in a previous post on the promises and threats of big data for public policy-making, public policy making has entered a period of dramatic change. Widespread use of digital technologies, the Internet and social media means citizens and governments leave digital traces that can be harvested to generate big data. This increasingly rich data environment poses both promises and threats to policy-makers.

So how can social scientists help policy-makers in this changed environment, ensuring that social science research remains relevant? Social scientists have a good record on having policy influence, indeed in the UK better than other academic fields, including medicine, as recent research from the LSE Public Policy group has shown. Big data hold major promise for social science, which should enable us to further extend our record in policy research. We have access to a cornucopia of data of a kind which is more like that traditionally associated with so-called ‘hard’ science. Rather than being dependent on surveys, the traditional data staple of empirical social science, social media such as Wikipedia, Twitter, Facebook, and Google Search present us with the opportunity to scrape, generate, analyse and archive comparative data of unprecedented quantity. For example, at the OII over the last four years we have been generating a dataset of all petition signing in the UK and US, which contains the joining rate (updated every hour) for the 30,000 petitions created in the last three years. As a political scientist, I am very excited by this kind of data (up to now, we have had big data like this only for voting, and that only at election time), which will allow us to create a complete ecology of petition signing, one of the more popular acts of political participation in the UK. Likewise, we can look at the entire transaction history of online organizations like Wikipedia, or map the link structure of government’s online presence.

But big data holds threats for social scientists too. The technological challenge is ever present. To generate their own big data, researchers and students must learn to code, and for some that is an alien skill. At the OII we run a course on Digital Social Research that all our postgraduate students can take; but not all social science departments could either provide such a course, or persuade their postgraduate students that they needed it. Ours, who study the social science of the Internet, are obviously predisposed to do so. And big data analysis requires multi-disciplinary expertise. Our research team working on petitions data includes a computer scientist (Scott Hale), a physicist (Taha Yasseri) and a political scientist (myself). I can’t imagine doing this sort of research without such technical expertise, and as a multi-disciplinary department we are (reasonably) free to recruit these types of research faculty. But not all social science departments can promise a research career for computer scientists, or physicists, or any of the other disciplinary specialists that might be needed to tackle big data problems.

Five Recommendations for Social Scientists

So, how can social scientists overcome these challenges, and thereby be in a good position to help policy-makers tackle their own barriers to making the most of the possibilities afforded by big data? Here are five recommendations:

Accept that multi-disciplinary research teams are going to become the norm for social science research, extending beyond social science disciplines into the life sciences, mathematics, physics, and engineering. At Policy and Internet’s 2012 Big Data conference, the keynote speaker Duncan Watts (physicist turned sociologist) called for a ‘dating agency’ for engineers and social scientists – with the former providing the technological expertise, and the latter identifying the important research questions. We need to make sure that forums exist where social scientists and technologists meet and discuss big data research at the earliest stages, so that research projects and programmes incorporate the core competencies of both.

We need to provide the normative and ethical basis for policy decisions in the big data era. That means bringing normative political theorists and philosophers of information into our research teams. The government has committed £65 million to big data research funding, but it seems likely that any successful research proposals will have a strong ethics component embedded in the research programme, rather than as an add-on or afterthought.

Training in data science. Many leading US universities now admit undergraduates to data science courses, but these courses lack social science input. Of the 20 US master’s courses in big data analytics compiled by Information Week, nearly all are run by computer science or informatics departments. Social science research training needs to incorporate the coding and analysis skills these courses provide, but with a social science focus. If we as social scientists leave the training to computer scientists, we will find that the new cadre of data scientists tends to leave out social science concerns or questions.

Bringing policy-makers and academic researchers together to tackle the challenges that big data presents. Last month the OII and Policy and Internet convened a workshop at Harvard on Responsible Research Agendas for Public Policy in the Big Data Era, which included leading academic researchers in the government and big data field, and government officials from the Census Bureau, the Federal Reserve Board, the Bureau of Labor Statistics, and the Office of Management and Budget (OMB). The discussions revealed that there is a continual procession of major events on big data in Washington DC (usually with a corporate or scientific research focus) to which US federal officials are invited, but that few of these are really dedicated to tackling the distinctive issues facing government agencies such as those represented around the table.

Taking forward theoretical development in social science, incorporating big data insights. I recently spoke at the Oxford Analytica Global Horizons conference, at a session on Big Data. One of the few policy-makers (in proportion to corporate representatives) in the audience asked the panel “where is the theory?” As social scientists, we need to respond to that question, and fast.


This post is based on discussions at the Responsible Research Agendas for Public Policy in the Era of Big Data workshop, and at the Political Studies Association event Why Universities Matter: How Academic Social Science Contributes to Public Policy Impact, held at the LSE on 26 September 2013.

Helen Margetts is the Director of the OII, and Professor of Society and the Internet. She is a political scientist specialising in e-government and digital era governance and politics, investigating the nature and implications of relationships between governments, citizens and the Internet and related digital technologies in the UK and internationally.

The promises and threats of big data for public policy-making https://ensr.oii.ox.ac.uk/promises-threats-big-data-for-public-policy-making/ https://ensr.oii.ox.ac.uk/promises-threats-big-data-for-public-policy-making/#comments Mon, 28 Oct 2013 15:07:29 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2299 The environment in which public policy is made has entered a period of dramatic change. Widespread use of digital technologies, the Internet and social media means both citizens and governments leave digital traces that can be harvested to generate big data. Policy-making takes place in an increasingly rich data environment, which poses both promises and threats to policy-makers.

On the promise side, such data offers a chance for policy-making and implementation to be more citizen-focused, taking account of citizens’ needs, preferences and actual experience of public services, as recorded on social media platforms. As citizens express policy opinions on social networking sites such as Twitter and Facebook; rate or rank services or agencies on government applications such as NHS Choices; or enter discussions on the burgeoning range of social enterprise and NGO sites, such as Mumsnet, 38 degrees and patientopinion.org, they generate a whole range of data that government agencies might harvest to good use. Policy-makers also have access to a huge range of data on citizens’ actual behaviour, as recorded digitally whenever citizens interact with government administration or undertake some act of civic engagement, such as signing a petition.

Data mined from social media or administrative operations in this way also provide a range of new data which can enable government agencies to monitor – and improve – their own performance, for example through log usage data of their own electronic presence or transactions recorded on internal information systems, which are increasingly interlinked. And they can use data from social media for self-improvement, by understanding what people are saying about government, and which policies, services or providers are attracting negative opinions and complaints, enabling identification of a failing school, hospital or contractor, for example. They can solicit such data via their own sites, or those of social enterprises. And they can find out what people are concerned about or looking for, from the Google Search API or Google trends, which record the search patterns of a huge proportion of internet users.

As for threats, big data is technologically challenging for government, particularly those governments which have always struggled with large-scale information systems and technology projects. The UK government has long been a world leader in this regard and recent events have only consolidated its reputation. Governments have long suffered from information technology skill shortages and the complex skill sets required for big data analytics pose a particularly acute challenge. Even in the corporate sector, over a third of respondents to a recent survey of business technology professionals cited ‘Big data expertise is scarce and expensive’ as their primary concern about using big data software.

And there are particular cultural barriers to government in using social media, with the informal style and blurring of organizational and public-private boundaries which they engender. And gathering data from social media presents legal challenges, as companies like Facebook place barriers to the crawling and scraping of their sites.

More importantly, big data presents new moral and ethical dilemmas to policy-makers. For example, it is possible to carry out probabilistic policy-making, where policy is made on the basis of what a small segment of individuals will probably do, rather than what they have done. Predictive policing has had some success, particularly in California, where robberies declined by a quarter after use of the ‘PredPol’ policing software, but it can lead to a “feedback loop of injustice” as one privacy advocacy group put it, as policing resources are targeted at increasingly small socio-economic groups. What responsibility does the state have to devote disproportionately more – or less – resources to the education of those school pupils who are, probabilistically, almost certain to drop out of secondary education? Such challenges are greater for governments than corporations. We (reasonably) happily trade privacy to allow Tesco and Facebook to use our data on the basis it will improve their products, but if government tries to use social media to understand citizens and improve its own performance, will it be accused of spying on its citizenry in order to quash potential resistance?

And of course there is an image problem for government in this field – discussion of big data and government puts the word ‘big’ dangerously close to the word ‘government’ and that is an unpopular combination. Policy-makers’ responses to Snowden’s revelations of the US PRISM and UK Tempora programmes have done nothing to improve this image, with their focus on the use of big data to track down individuals and groups involved in acts of terrorism and criminality – rather than on anything to make policy-making better, or to use the wealth of information that these programmes collect for the public good.

However, policy-makers have no choice but to tackle some of these challenges. Big data has been the hottest trend in the corporate world for some years now, and commentators from IBM to the New Yorker are starting to talk about the big data ‘backlash’. Government has been far slower to recognize the advantages for policy-making and services. But in some policy sectors, big data poses very fundamental questions which call for an answer: how, for example, should governments conduct a census or produce labour statistics in the age of big data? Policy-makers will need to move fast to beat the backlash.


This post is based on discussions at the workshop Responsible Research Agendas for Public Policy in the Era of Big Data.

Helen Margetts is the Director of the OII, and Professor of Society and the Internet. She is a political scientist specialising in digital era governance and politics.

Can text mining help handle the data deluge in public policy analysis? https://ensr.oii.ox.ac.uk/can-text-mining-help-handle-data-deluge-public-policy-analysis/ Sun, 27 Oct 2013 12:29:01 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2273 Policy makers today must contend with two inescapable phenomena. On the one hand, there has been a major shift in the policies of governments concerning participatory governance – that is, engaged, collaborative, and community-focused public policy. At the same time, a significant proportion of government activities have now moved online, bringing about “a change to the whole information environment within which government operates” (Margetts 2009, 6).

Indeed, the Internet has become the main medium of interaction between government and citizens, and numerous websites offer opportunities for online democratic participation. The Hansard Society, for instance, regularly runs e-consultations on behalf of UK parliamentary select committees. For example, e-consultations have been run on the Climate Change Bill (2007), the Human Tissue and Embryo Bill (2007), and on domestic violence and forced marriage (2008). Councils and boroughs also regularly invite citizens to take part in online consultations on issues affecting their area. The London Borough of Hammersmith and Fulham, for example, recently asked its residents for their views on Sex Entertainment Venues and Sex Establishment Licensing policy.

However, citizen participation poses certain challenges for the design and analysis of public policy. In particular, governments and organizations must demonstrate that all opinions expressed through participatory exercises have been duly considered and carefully weighted before decisions are reached. One method for partly automating the interpretation of large quantities of online content typically produced by public consultations is text mining. Software products currently available range from those primarily used in qualitative research (integrating functions like tagging, indexing, and classification), to those integrating more quantitative and statistical tools, such as word frequency and cluster analysis (more information on text mining tools can be found at the National Centre for Text Mining).

While these methods have certainly attracted criticism and skepticism in terms of the interpretability of the output, they offer four important advantages for the analyst: namely categorization, data reduction, visualization, and speed.

1. Categorization. When analyzing the results of consultation exercises, analysts and policymakers must make sense of the high volume of disparate responses they receive; text mining supports the structuring of large amounts of this qualitative, discursive data into predefined or naturally occurring categories by storage and retrieval of sentence segments, indexing, and cross-referencing. Analysis of sentence segments from respondents with similar demographics (eg age) or opinions can itself be valuable, for example in the construction of descriptive typologies of respondents.

2. Data Reduction. Data reduction techniques include stemming (reduction of a word to its root form), combining of synonyms, and removal of non-informative “tool” or stop words. Hierarchical classifications, cluster analysis, and correspondence analysis methods allow the further reduction of texts to their structural components, highlighting the distinctive points of view associated with particular groups of respondents.

3. Visualization. Important points and interrelationships are easy to miss when read by eye, and rapid generation of visual overviews of responses (eg dendrograms, 3D scatter plots, heat maps, etc.) make large and complex datasets easier to comprehend in terms of identifying the main points of view and dimensions of a public debate.

4. Speed. Speed depends on whether a special dictionary or vocabulary needs to be compiled for the analysis, and on the amount of coding required. Coding is usually relatively fast and straightforward, and the succinct overview of responses provided by these methods can reduce the time needed to analyse consultation responses.
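The data-reduction steps just described (stop-word removal, stemming, frequency counting) can be sketched in a few lines of Python. This is a toy illustration, not any particular text mining product: the stop-word and suffix lists are invented for the example, and the crude suffix-stripping stemmer deliberately conflates ‘illness’ with ‘ill’, the kind of over-stemming risk discussed below.

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "to", "and", "is", "in", "that"}
SUFFIXES = ("ness", "ing", "ed", "s")  # crude suffix list, for illustration only

def stem(word):
    """Naive suffix-stripping: reduce a word towards its root form."""
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def term_frequencies(text):
    """Tokenise, drop stop words, stem, and count remaining terms."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(stem(t) for t in tokens if t not in STOP_WORDS)

print(term_frequencies("The illness of the respondent is ill-defined"))
```

Here ‘illness’ and ‘ill’ are both reduced to ‘ill’ and counted together; a production tool would use a proper stemmer and curated stop-word lists, but the pipeline shape is the same.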

Despite the above advantages of automated approaches to consultation analysis, text mining methods present several limitations. Automatic classification of responses runs the risk of missing or miscategorising distinctive or marginal points of view if sentence segments are too short, or if they rely on a rare vocabulary. Stemming can also generate problems if important semantic variations are overlooked (eg lumping together ‘ill+ness’, ‘ill+defined’, and ‘ill+ustration’). Other issues applicable to public e-consultation analysis include the danger that analysts distance themselves from the data, especially when converting words to numbers. This is quite apart from the issues of inter-coder reliability and data preparation, missing data, and insensitivity to figurative language, meaning and context, which can also result in misclassification when not human-verified.

However, when responding to criticisms of specific tools, we need to remember that different text mining methods are complementary, not mutually exclusive. A single solution to the analysis of qualitative or quantitative data would be very unlikely; and at the very least, exploratory techniques provide a useful first step that could be followed by a theory-testing model, or by triangulation exercises to confirm results obtained by other methods.

Apart from these technical issues, policy makers and analysts employing text mining methods for e-consultation analysis must also consider certain ethical issues in addition to those of informed consent, privacy, and confidentiality. First (of relevance to academics), respondents may not expect to end up as research subjects. They may simply be expecting to participate in a general consultation exercise, interacting exclusively with public officials and not indirectly with an analyst post hoc; much less ending up as a specific, traceable data point.

This has been a particularly delicate issue for healthcare professionals. Sharf (1999, 247) describes various negative experiences of following up online postings: one woman, on being contacted by a researcher seeking consent to gain insights from breast cancer patients about their personal experiences, accused the researcher of behaving voyeuristically and “taking advantage of people in distress.” Statistical interpretation of responses also presents its own issues, particularly if analyses are to be returned or made accessible to respondents.

Respondents might also be confused about or disagree with text mining as a method applied to their answers; indeed, it could be perceived as dehumanizing – reducing personal opinions and arguments to statistical data points. In a public consultation, respondents might feel somewhat betrayed that their views and opinions eventually result in just a dot on a correspondence analysis with no immediate, apparent meaning or import, at least in lay terms. Obviously the consultation organizer needs to outline clearly and precisely how qualitative responses can be collated into a quantifiable account of a sample population’s views.

This is an important point; in order to reduce both technical and ethical risks, researchers should ensure that their methodology combines both qualitative and quantitative analyses. While many text mining techniques provide useful statistical output, the UK Government’s prescribed Code of Practice on public consultation is quite explicit on the topic: “The focus should be on the evidence given by consultees to back up their arguments. Analyzing consultation responses is primarily a qualitative rather than a quantitative exercise” (2008, 12). This suggests that the perennial debate between quantitative and qualitative methodologists needs to be updated and better resolved.

References

Margetts, H. 2009. “The Internet and Public Policy.” Policy & Internet 1 (1).

Sharf, B. 1999. “Beyond Netiquette: The Ethics of Doing Naturalistic Discourse Research on the Internet.” In Doing Internet Research, ed. S. Jones, London: Sage.


Read the full paper: Bicquelet, A., and Weale, A. (2011) Coping with the Cornucopia: Can Text Mining Help Handle the Data Deluge in Public Policy Analysis? Policy & Internet 3 (4).

Dr Aude Bicquelet is a Fellow in LSE’s Department of Methodology. Her main research interests include computer-assisted analysis, Text Mining methods, comparative politics and public policy. She has published a number of journal articles in these areas and is the author of a forthcoming book, “Textual Analysis” (Sage Benchmarks in Social Research Methods, in press).

Responsible research agendas for public policy in the era of big data https://ensr.oii.ox.ac.uk/responsible-research-agendas-for-public-policy-in-the-era-of-big-data/ Thu, 19 Sep 2013 15:17:01 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2164 Last week the OII went to Harvard. Against the backdrop of a gathering storm of interest around the potential of computational social science to contribute to the public good, we sought to bring together leading social science academics with senior government agency staff to discuss its public policy potential. Supported by the OII-edited journal Policy and Internet and its owners, the Washington-based Policy Studies Organization (PSO), this one-day workshop facilitated a thought-provoking conversation between leading big data researchers such as David Lazer, Brooke Foucault-Welles and Sandra Gonzalez-Bailon, e-government experts such as Cary Coglianese, Helen Margetts and Jane Fountain, and senior agency staff from US federal bureaus including the Bureau of Labor Statistics, the Census Bureau, and the Office of Management and Budget.

It’s often difficult to appreciate the impact of research beyond the ivory tower, but what this productive workshop demonstrated is that policy-makers and academics share many similar hopes and challenges in relation to the exploitation of ‘big data’. Our motivations and approaches may differ, but insofar as the youth of the ‘big data’ concept explains the lack of common language and understanding, there is value in mutual exploration of the issues. Although it’s impossible to do justice to the richness of the day’s interactions, some of the most pertinent and interesting conversations arose around the following four issues.

Managing a diversity of data sources. In a world where our capacity to ask important questions often exceeds the availability of data to answer them, many participants spoke of the difficulties of managing a diversity of data sources. For agency staff this issue comes into sharp focus when available administrative data that is supposed to inform policy formulation is either incomplete or inadequate. Consider, for example, the challenge of regulating an economy in a situation of fundamental data asymmetry, where private sector institutions track, record and analyse every transaction, whilst the state only has access to far more basic performance metrics and accounts. Such asymmetric data practices also affect academic research, where once again private sector tech companies such as Google, Facebook and Twitter often offer access only to portions of their data. In both cases participants gave examples of creative solutions using merged or blended data sources, which raise significant methodological and also ethical difficulties which merit further attention. The Berkman Center’s Rob Faris also noted the challenges of combining ‘intentional’ and ‘found’ data, where the former allow far greater certainty about the circumstances of their collection.

Data dictating the questions. If participants expressed the need to expend more effort on getting the most out of available but diverse data sources, several also cautioned against the dangers of letting data availability dictate the questions that could be asked. As we’ve experienced at the OII, for example, the availability of Wikipedia or Twitter data means that questions of unequal digital access (to political resources, knowledge production etc.) can often be addressed through the lens of these applications or platforms. But these data can provide only a snapshot, and large questions of great social or political importance may not easily be answered through such proxy measurements. Similarly, big data may be very helpful in providing insights into policy-relevant patterns or correlations, such as identifying early indicators of seasonal diseases or neighbourhood decline, but seem ill-suited to answer difficult questions regarding say, the efficacy of small-scale family interventions. Just because the latter are harder to answer using currently vogue-ish tools doesn’t mean we should cease to ask these questions.

Ethics. Concerns about privacy are frequently raised as a significant limitation of the usefulness of big data. Given that with two or more data sets even supposedly anonymous data subjects may be identified, the general consensus seems to be that ‘privacy is dead’. Whilst all participants recognised the importance of public debate around this issue, several academics and policy-makers expressed a desire to get beyond this discussion to a more nuanced consideration of appropriate ethical standards. Accountability and transparency are often held up as more realistic means of protecting citizens’ interests, but one workshop participant also suggested it would be helpful to encourage more public debate about acceptable and unacceptable uses of our data, to determine whether some uses might simply be deemed ‘off-limits’, whilst other uses could be accepted as offering few risks.

Accountability. Following on from this debate about the ethical limits of our uses of big data, discussion exposed the starkly differing standards to which government and academics (to say nothing of industry) are held accountable. As agency officials noted on several occasions it matters less what they actually do with citizens’ data, than what they are perceived to do with it, or even what it’s feared they might do. One of the greatest hurdles to be overcome here concerns the fundamental complexity of big data research, and the sheer difficulty of communicating to the public how it informs policy decisions. Quite apart from the opacity of the algorithms underlying big data analysis, the explicit focus on correlation rather than causation or explanation presents a new challenge for the justification of policy decisions, and consequently, for public acceptance of their legitimacy. As Greg Elin of Gitmachines emphasised, policy decisions are still the result of explicitly normative political discussion, but the justifiability of such decisions may be rendered more difficult given the nature of the evidence employed.

We could not resolve all these issues over the course of the day, but they served as pivot points for honest and productive discussion amongst the group. If nothing else, they demonstrate the value of interaction between academics and policy-makers in a research field where the stakes are set very high. We plan to reconvene in Washington in the spring.

*We are very grateful to the Policy Studies Organization (PSO) and the American Public University for their generous support of this workshop. The workshop “Responsible Research Agendas for Public Policy in the Era of Big Data” was held at the Harvard Faculty Club on 13 September 2013.

Also read: Big Data and Public Policy Workshop by Eric Meyer, workshop attendee and PI of the OII project Accessing and Using Big Data to Advance Social Science Knowledge.


Victoria Nash received her M.Phil in Politics from Magdalen College in 1996, after completing a First Class BA (Hons) Degree in Politics, Philosophy and Economics, before going on to complete a D.Phil in Politics from Nuffield College, Oxford University in 1999. She was a Research Fellow at the Institute for Public Policy Research prior to joining the OII in 2002. As Research and Policy Fellow at the OII, her work seeks to connect OII research with policy and practice, identifying and communicating the broader implications of OII’s research into Internet and technology use.

Is China shaping the Internet in Africa? https://ensr.oii.ox.ac.uk/is-china-shaping-the-internet-in-africa/ Thu, 15 Aug 2013 14:02:29 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1984 World Economic Forum
The telecommunication sector in Africa is increasingly crowded. Image of the Panel on the Future of China-Africa Relations, World Economic Forum on Africa 2011 (Cape Town) by World Economic Forum.

Ed: Concerns have been expressed (eg by Hillary Clinton and David Cameron) about the detrimental role China may play in African media sectors, by increasing authoritarianism and undermining Western efforts to promote openness and freedom of expression. Are these concerns fair?

Iginio: China’s initiatives in the communication sector abroad are burdened by the negative record of its domestic media. For the Chinese authorities this is a challenge that does not have an easy solution as they can’t really use their international broadcasters to tell a different story about Chinese media and Chinese engagement with foreign media, because they won’t be trusted. As the linguist George Lakoff has explained, if someone is told “Don’t think of an elephant!” he will likely start “summoning the bulkiness, the grayness, the trunkiness of an elephant”. That is to say, “when we negate a frame, we evoke a frame”. Saying that “Chinese interventions are not increasing authoritarianism” won’t help much. The only path China can undertake is to develop projects and use its media in ways that fall outside the realm of what is expected, creating new associations between China and the media, rather than trying to redress existing ones. In part this is already happening. For example, CCTV Africa, the new initiative of state-owned China’s Central Television (CCTV) and China’s flagship effort to win African hearts and minds, has developed a strategy aimed not at directly offering an alternative image of China, but at advancing new ways of looking at Africa, offering unprecedented resources to African journalists to report from the continent and tapping into the narrative of a “rising Africa”, as a continent of opportunities rather than of hunger, wars and underdevelopment.

Ed: Ideology has disappeared from the language of China-Africa cooperation, largely replaced by admissions of China’s interest in Africa’s resources and untapped potential. Does politics (eg China wanting to increase its international support and influence) nevertheless still inform the relationship?

Iginio: China’s efforts in Africa during decolonisation were closely linked to its efforts to export and strengthen the socialist revolution on the continent. Today the language of ideology has largely disappeared from public statements, leaving less charged references to the promotion of “mutual benefit” and “sovereignty and independence” as guides of the new engagement. At the same time, this does not mean that the Chinese government has lost interest in engaging at the political/ideological level when the conditions allow. Identity of political views is no longer a precondition for engagement, but neither is it an aspiration, as China is not necessarily trying to influence local politics in ways that could promote socialism. But when there is already a resonance with the ideas embraced by its partners, Chinese authorities have not shied away from taking the engagement to a political/ideological level. This is demonstrated, for example, by party-to-party ties between the Communist Party of China (CPC) and other socialist parties in Africa, including the Ethiopian People’s Revolutionary Democratic Front (EPRDF). Representatives of the CPC have been invited to attend the EPRDF’s party conferences.

Ed: How much influence does China have on the domestic media / IT policies of the nations it invests in? Is it pushing the diffusion of its own strategies of media development and media control abroad? (And what are these strategies if so?)

Iginio: The Chinese government has signalled its lack of interest in exporting its own development model, and its intention to simply respond to the demands of its African partners. Ongoing research has largely confirmed that this ‘no strings attached’ approach is consistent, but this does not mean that China’s presence on the continent is neutral or has no impact on development policies and practices. China is indirectly influencing media/IT policies and practices in at least three ways.

First, while Western donors have tended to favour media projects benefiting the private sector and civil society, often seeking to create incentives for the state to open a dialogue with other forces in society, China has exhibited a tendency to privilege government actors, thus increasing governments’ capacity vis-à-vis other critical actors in the development of media and telecommunication systems.

Second, with the launch of media projects such as CCTV Africa, China has dramatically boosted its potential to shape narratives, exert soft power, and allow different voices to shape the political and development agenda. While international broadcasters such as the BBC World Service and Al Jazeera have often tended to rely on civil society organisations as gatekeepers of information, CCTV has so far shown less interest in these actors, privileging formal actors over informal ones, partly as an effort to provide more positive news from the continent.

Third, China’s domestic example of balancing investment in media and telecommunications with efforts to contain the risks of political instability that new technologies may bring has the potential to act as a legitimising force for other states that share the concern of balancing development and security, and that are actively seeking justifications for limiting voices and uses of technology they consider potentially destabilising.

Ed: Is China developing tailored media models for abroad, or even using Africa as a “development lab”? How does China’s interest in Africa’s mediascape compare with its interest in other regions worldwide?

Iginio: There are concerns that, just as Western countries have tried to promote their models in Africa, China will try to export its own. As mentioned earlier, no studies to date have proved this to be the case. Rather, Africa indeed seems to be emerging as a “development lab”, a terrain in which to experiment and progressively find new strategies for engagement. Despite Africa’s growing importance for China as a trading and geostrategic partner, the continent is still perceived as a space where it is possible to make mistakes. In the case of the media, this is resulting in greater opportunities for journalists to experiment with new styles and enjoy freedoms that would be more difficult to obtain back in China, or even in the US, where CCTV has launched another regional initiative, CCTV America, which is more burdened, however, by the ideological confrontation between the two countries.

As part of Oxford’s Programme in Comparative Media Law and Policy‘s (PCMLP’s) ongoing research on China’s role in the media and communication sector in Africa, we have proposed a framework that can encourage understanding of Chinese engagement in the African mediasphere in terms of its original contributions, and not simply as a negative of the impression left by the West. This framework breaks down China’s actions on the continent according to China’s ability to act as a partner, a prototype, and a persuader, questioning, for example, whether or not media projects sponsored by the Chinese government are facilitating the diffusion of some aspects that characterise the Chinese domestic media system, rather than assuming this will be the case.

China’s role as a partner is evident in the significant resources it provides to African countries to implement social and economic development projects, including the laying down of infrastructure to increase Internet and mobile access. China’s perception as a prototype is linked to the ability its government has shown in balancing between investment in media and ICTs and containment of the risks of political instability new technologies may bring. Finally, China’s presence in Africa can be assessed according to its modality and ability to act as a persuader, as it seeks to shape national and international narratives.

So far we have employed this framework only to look at Chinese engagement in Africa, focusing in particular on Ghana, Ethiopia and Kenya, but we believe it can be applied also in other areas where China has stepped up its involvement in the ICT sector.

Ed: Has there been any explicit conflict yet between Chinese and non-Chinese news corporations vying for influence in this space? And how crowded is that space?

Iginio: The telecommunication sector in Africa is increasingly crowded, as numerous international corporations from Europe (e.g. Vodafone), India (e.g. Airtel) and indeed China (e.g. Huawei and ZTE) compete for shares of a profitable and growing market. Until recently Chinese companies have avoided competing with one another, but things are slowly changing. In Ethiopia, for example, an initial project funded by the Chinese government to upgrade the telecommunication infrastructure was commissioned entirely to the Chinese telecom giant ZTE, which is partially state-owned; ZTE has since entered into competition with its privately owned Chinese rival, Huawei, for an extension of the earlier project. In Kenya, Huawei even decided to take ZTE to court over a project its rival won to supply the Kenyan police with a communication and surveillance system. Chinese investments in the telecommunication sector in Africa have been part of the government’s strategy of engagement on the continent, but profit seems to have become an increasingly important factor, even if it may interfere with this strategy.

Ed: How do the recipient nations regard China’s investment and influence? For example, is there any evidence that authoritarian governments are seeking to adopt aspects of China’s own system?

Iginio: China is perceived as an example mostly by those countries that are seeking to balance between investment in ICTs and containment of the risks of political instability new technologies may bring. In a Wikileaks cable reporting a meeting between Sebhat Nega, one of the Ethiopian government’s ideologues, and the then US ambassador Donald Yamamoto, for example, Sebhat was reported to have openly declared his admiration for China and stressed that Ethiopia “needs the China model to inform the Ethiopian people”.


Iginio Gagliardone is a British Academy Post-Doctoral Research Fellow at the Centre for Socio-Legal Studies, University of Oxford. His research focuses on the role of the media in political change, especially in Sub-Saharan Africa, and the adaptation of international norms of freedom of expression in authoritarian regimes. Currently, he is exploring the role of emerging powers such as China in promoting alternative conceptions of the Internet in Africa. In particular he is analysing whether and how the ideas of state stability, development and community that characterize the Chinese model are influencing and legitimizing the development of a different conception of the information society.

Iginio Gagliardone was talking to blog editor David Sutcliffe.

]]>
Uncovering the patterns and practice of censorship in Chinese news sites https://ensr.oii.ox.ac.uk/uncovering-the-patterns-and-practice-of-censorship-in-chinese-news-sites/ Thu, 08 Aug 2013 08:17:55 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1992 Ed: How much work has been done on censorship of online news in China? What are the methodological challenges and important questions associated with this line of enquiry?

Sonya: Recent research is paying much attention to social media, aiming to quantify their censorial practices and to discern common patterns in them. Among these empirical studies, Bamman et al.’s (2012) work claimed to be “the first large-scale analysis of political content censorship”, investigating messages deleted from Sina Weibo, a Chinese equivalent of Twitter. On an even larger scale, King et al. (2013) collected data from nearly 1,400 Chinese social media platforms and analyzed the deleted messages. Most studies on news censorship, however, are devoted to narratives of special cases, such as the closure of Freezing Point, an outspoken news and opinion journal, and the blocking of the New York Times after it disclosed the wealth possessed by the family of former Chinese premier Wen Jiabao.

The shortage of news censorship research could be attributed to several methodological challenges. First, it is tricky to detect censorship to begin with, given the word ‘censorship’ is one of the first to be censored. Also, news websites will not simply let their readers hit a glaring “404 page not found”. Instead, they will use a “soft 404”, which returns a “success” code for a request of a deleted web page and takes readers to a (different) existing web page. While humans may be able to detect these soft 404s, it will be harder for computer programs (eg run by researchers) to do so. Moreover, because different websites employ varying soft 404 techniques, much labor is required to survey them and to incorporate the acquired knowledge into a generic monitoring tool.
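The soft-404 heuristic described above can be sketched in a few lines. This is a minimal illustration rather than the study’s actual monitoring tool, and the fingerprinting scheme (a page-length bucket plus a hash of the title) is an assumption; real sites would need per-site tuning, as the interview notes.

```python
import hashlib

def page_signature(html: str) -> tuple:
    """Reduce a page to a coarse fingerprint (length bucket + title hash),
    so near-identical 'soft 404' landing pages match each other."""
    title = ""
    lower = html.lower()
    start = lower.find("<title>")
    end = lower.find("</title>")
    if start != -1 and end != -1:
        title = html[start + 7:end].strip()
    return (len(html) // 100, hashlib.md5(title.encode("utf-8")).hexdigest())

def looks_deleted(article_html: str, soft404_html: str) -> bool:
    """An article URL that returns HTTP 200 but renders the same page as a
    deliberately bogus URL on the same site is treated as deleted."""
    return page_signature(article_html) == page_signature(soft404_html)

# Learn the site's soft-404 fingerprint from a URL that cannot exist,
# then compare each monitored article against it (toy HTML below).
soft404 = "<html><title>News Home</title>" + "promo " * 50 + "</html>"
live = "<html><title>Mayor investigated</title>" + "story text " * 80 + "</html>"
print(looks_deleted(soft404, soft404))  # deleted article redirected home -> True
print(looks_deleted(live, soft404))     # still-live article -> False
```

The key design choice is not to trust HTTP status codes at all: a “success” response is only believed if the returned page differs from the site’s known soft-404 page.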

Second, high computing power and bandwidth are required to handle the large amount of news publications and the slow network access to Chinese websites. For instance, NetEase alone publishes 8,000 – 10,000 news articles every day. Meanwhile, the Internet connection between the Chinese cyberspace and the outer world is fairly slow and it takes more than a second to check one link because the Great Firewall checks both incoming and outgoing Internet traffic. These two factors translate to 2-3 hours for a single program to check one day’s news publications of NetEase alone. If we fire up too many programs to accelerate the progress, the database system and/or the network connection may be challenged. In my case, even though I am using high performance computers at Michigan State University to conduct this research, they are overwhelmed every now and then.
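The timing claim above is easy to check with back-of-envelope arithmetic; the per-request latency used below is an assumed value consistent with “more than a second” per link.

```python
# Back-of-envelope check of the crawl-throughput figures in the text.
articles_per_day = 10_000     # NetEase's upper bound, per the interview
seconds_per_link = 1.0        # assumed latency through the Great Firewall
hours_single = articles_per_day * seconds_per_link / 3600
print(f"one program needs {hours_single:.1f} h/day")  # ~2.8 h, within "2-3 hours"

# Running N programs in parallel divides wall-clock time by N, until the
# database or the network link saturates, as the interview notes.
for workers in (1, 4, 16):
    print(workers, "workers:", round(hours_single / workers, 2), "h")
```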

Despite all the difficulties, I believe it is of great importance to reveal censored news stories to the public, especially to the audience inside China who do not enjoy a free flow of information. Censored news is a special type of information, as it is too inconvenient to exist in authorities’ eyes and it is deemed important to citizens’ everyday lives. For example, the outbreak of SARS had been censored from Chinese media presumably to avoid spoiling the harmonious atmosphere created for the 16th National Congress of the Communist Party. This allowed the virus to develop into a worldwide epidemic. Like SARS, a variety of censored issues are not only inconvenient but also crucial, because the authorities would not otherwise allocate substantial resources to monitor or eliminate them if they were merely trivial. Therefore, after censored news is detected, it is vital to seek effective and efficient channels to disclose it to the public so as to counterbalance potential damage that censorship may entail.

Ed: You found that party organs, ie news organizations tightly affiliated with the Chinese Communist Party, published a considerable amount of deleted news. Was this surprising?

Sonya: Yes, I was surprised when looking at the results the first time. To be exact, our finding is that commercial media experience a higher deletion rate, but party organs contribute the most deleted news by sheer volume, reflecting the fact that party organs possess more resources allocated by the central and local governments and therefore have the capacity to produce more news. Consequently, party organs have a higher chance of publishing controversial information that may be deleted in the future, especially when a news story becomes sensitive for some reason that is hard to foresee. For example, investigations of some government officials started when netizens recognized them in the news with different luxury watches and other expensive accessories. As such, even though party organs are obliged to write odes to the party, they may eventually backfire on the cadres if the beautiful words are discovered to be too far from reality.

Ed: How sensitive are citizens to the fact that some topics are actively avoided in the news media? And how easy is it for people to keep abreast of these topics (eg the “three Ts” of Tibet, Taiwan, and Tiananmen) from other information sources?

Sonya: This question highlights the distinction between pre-censorship and post-censorship. Our study looked at post-censorship, ie information that is published but subsequently deleted. By contrast, the topics that are “actively avoided” fall under the category of pre-censorship. I am fairly convinced that the current pre- and post-censorship practice is effective in terms of keeping the public from learning inconvenient facts and from mobilizing for collective action. If certain topics are consistently wiped from the mass media, how will citizens ever get to know about them?

The Tiananmen Square protest, for instance, has never been covered by Chinese mass media, leaving an entire generation growing up since 1989 that is ignorant of this historical event. As such, if younger Chinese citizens have never heard of the Tiananmen Square protest, how could they possibly start an inquiry into this incident? Or, if they have heard of it and attempt to learn about it from the Internet, what they will soon realize is that domestic search engines, social media, and news media all fail their requests and foreign ones are blocked. Certainly, they could use circumvention tools to bypass the Great Firewall, but the sad truth is that probably under 1% of them have ever made such an effort, according to the Harvard Berkman Center’s report in 2011.

Ed: Is censorship of domestic news (such as food scares) more geared towards “avoiding panics and maintaining social order”, or just avoiding political embarrassment? For example, do you see censorship of environmental issues and (avoidable) disasters?

Sonya: The government certainly tries to avoid political embarrassment in the case of food scares by manipulating news coverage, but it is also their priority to maintain social order or so-called “social harmony”. Exactly for this reason, Zhao Lianhai, the most outspoken parent of a toxic milk powder victim was charged with “inciting social disorder” and sentenced to two and a half years in prison. Frustrated by Chinese milk powder, Chinese tourists are aggressively stocking up on milk powder from elsewhere, such as in Hong Kong and New Zealand, causing panics over milk powder shortages in those places.

After the earthquake in Sichuan, another group of grieving parents were arrested on similar charges when they questioned why their children were buried under crumbled schools whereas older buildings remained standing. The high death toll of this earthquake was among the avoidable disasters that the government attempts to mask and force the public to forget. Environmental issues, along with land acquisition, social unrest, and labor exploitation, are other frequently censored topics in the name of “stability maintenance”.

Ed: You plotted a map to show the geographic distribution of news deletion: what does the pattern show?

Sonya: We see an apparent geographic pattern in news deletion, with neighboring countries being more likely to be deleted than distant ones. Border disputes between China and its neighbors may be one cause; for example with Japan over the Diaoyu-Senkaku Islands, with the Philippines over the Huangyan Island-Scarborough Shoal, and with India over South Tibet. Another reason may be a concern over maintaining allies. Burma had the highest deletion rates among all the countries, with the deleted news mostly covering its curb on censorship. Watching this shift, China might worry that media reform in Burma could lead to copycat attempts inside China.

On the other hand, China has given Burma diplomatic cover, considering it as the “second coast” to the Indian Ocean and importing its natural resources (Howe & Knight, 2012). For these reasons, China may be compelled to censor Burma more heavily than other countries. Nonetheless, although oceans apart, the US topped the list by sheer number of news deletions, reflecting the bittersweet relation between the two nations.

Ed: What do you think explains the much higher levels of censorship reported by others for social media than for news media? How does geographic distribution of deletion differ between the two?

Sonya: The deletion rates of online news are apparently far lower than those of Sina Weibo posts. The overall deletion rates on NetEase and Sina Beijing were 0.05% and 0.17%, compared to 16.25% on the social media platform (Bamman et al., 2012). Several reasons may help explain this gap. First, social media confronts enduring spam that has to be cleaned up constantly, whereas it is not a problem at all for professional news aggregators. Second, self-censorship practiced by news media plays an important role, because Chinese journalists are more obliged and prepared to self-censor sensitive information, compared to ordinary Chinese citizens. Subsequently, news media rarely mention “crown prince party” or “democracy movement”, which were among the most frequently deleted terms on Sina Weibo.

Geographically, the deletion rates across China have distinct patterns on news media and social media. Regarding Sina Weibo, deletion rates increase when the messages are published near the fringe or in the west where the economy is less developed. Regarding news websites, the deletion rates rise as they approach the center and east, where the economy is better developed. In addition, the provinces surrounding Beijing also have more news deleted, meaning that political concerns are a driving force behind content control.

Ed: Can you tell if the censorship process mostly relies on searching for sensitive keywords, or on more semantic analysis of the actual content? ie can you (or the censors..) distinguish sensitive “opinions” as well as sensitive topics?

Sonya: First, topics that are too sensitive will never survive pre-censorship or be published on news websites at all, such as the Tiananmen Square protest, although they may sneak onto social media with deliberate typos or other circumvention techniques. However, it is clear that censors use keywords to locate articles on sensitive topics. For instance, after the Fukushima earthquake in 2011, rumors spread in Chinese cyberspace that radiation was rising from the Japanese nuclear plant and that iodine would help protect against its harmful effects; this was followed by panic-buying of iodized salt. During this period, “nuclear defense”, “iodized salt” and “radioactive iodine”, among other generally neutral terms, became politically charged overnight and were censored in the Chinese web sphere. The taboo list of post-censorship keywords evolves continuously to handle breaking news. Beyond keywords, party organs and other online media are trying to automate sentiment analysis and discern more subtle context. People’s Daily, for instance, has been working with elite Chinese universities in this field and has already developed a generic product for other institutes to monitor “public sentiment”.
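The keyword stage of post-censorship can be illustrated with a toy filter. The taboo terms are the ones quoted in the interview; the matching logic is an illustrative assumption, not a description of any real censor’s software.

```python
# Toy keyword-based post-censorship filter with an evolving taboo list.
taboo = {"nuclear defense", "iodized salt", "radioactive iodine"}

def flag_for_review(article: str, taboo_terms: set) -> bool:
    """Return True if any taboo phrase appears in the article text."""
    text = article.lower()
    return any(term in text for term in taboo_terms)

articles = [
    "Shoppers empty shelves of iodized salt after Fukushima rumours",
    "Local football team wins provincial title",
]
print([flag_for_review(a, taboo) for a in articles])  # [True, False]

# The taboo list "evolves continuously to handle breaking news":
taboo.add("panic buying")
```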

Another way to sort out sensitive information is to keep an eye on the most popular stories, because a popular story represents a greater “threat” to the existing political and social order. In our study, about 47% of the deleted stories were listed among the 100 most read/discussed at some point. This indicates that the more readership a story gains, the more attention it draws from censors.

Although news websites self-censor (therefore experiencing under 1% post-censorship), they are also required to monitor and “clean” comments following each news article. According to my very conservative estimate, if a censor processes 100 comments per minute and works eight hours per day, reviewing the comments on Sina Beijing from 11-16 September 2012 would have required 336 censors working full time. In fact, Charles Chao, CEO of Sina, mentioned to Forbes that at least 100 censors were “devoted to monitoring content 24 hours a day”. As new sensitive issues emerge and new circumvention techniques are developed continuously, it is an ongoing battle between the collective intelligence of Chinese netizens and the mechanical work conducted (and artificial intelligence implemented) by a small group of censors.
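The staffing estimate follows from simple arithmetic. In the sketch below the six-day comment total, which the interview does not give, is backed out from the quoted figure of 336 censors, so it is an inferred value rather than a reported one.

```python
def censors_needed(total_comments: int, days: int,
                   per_minute: int = 100, hours_per_day: int = 8) -> int:
    """Full-time censors needed to review total_comments over `days` days."""
    per_censor = per_minute * 60 * hours_per_day * days  # one censor's capacity
    return -(-total_comments // per_censor)              # ceiling division

# Per-censor daily throughput under the stated assumptions:
print(100 * 60 * 8)                          # 48,000 comments per censor per day

# 336 full-time censors over six days implies roughly 96.8 million comments:
implied_total = 336 * 48_000 * 6
print(f"{implied_total:,}")                  # 96,768,000
print(censors_needed(implied_total, days=6))  # 336
```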

Ed: It must be a cause of considerable anxiety for journalists and editors to have their material removed. Does censorship lead to sanctions? Or is the censorship more of an annoyance that must be negotiated?

Sonya: Censorship does indeed lead to sanctions. However, I don’t think “anxiety” would be the right word to describe their feelings, because if they are really anxious they could always choose self-censorship and avoid embarrassing the authorities. Considering it is fairly easy to predict whether a news report will please or irritate officials, I believe what fulfills the whistleblowers when they disclose inconvenient facts is a strong sense of justice and tremendous audacity. Moreover, I could barely discern any “negotiation” in the process of censorship. Negotiation is at least a two-way communication, whereas censorship follows continual orders sent from the authorities to the mass media, and similarly propaganda is a one-way communication from the authorities to the masses via the media. As such, it is common to see disobedient journalists threatened or punished for “defying” censorial orders.

Southern Metropolis Daily is one of China’s most aggressive and most punished newspapers. In 2003, the newspaper broke the news of the SARS epidemic, which local officials had wished to hide from the public. Soon after this report, it covered the case of a university graduate who was beaten to death in police custody because he carried no proper residency papers. Both cases received enormous attention from Chinese authorities and the international community, seriously embarrassing local officials. It is alleged and widely believed that some local officials demanded harsh penalties for the Daily; the director and the deputy editor were sentenced to 11 and 12 years in jail for “taking bribes” and “misappropriating state-owned assets”, and the chief editor was dismissed.

Not only professional journalists but also (broadly defined) citizen journalists could face similar penalties. For instance, Xu Zhiyong, a lawyer who defended journalists on trial, and Ai Weiwei, an artist who tried to investigate collapsed schools after the Sichuan earthquake, have experienced similar penalties: fines for tax evasion, physical attacks, house arrest, and secret detainment; exactly the same censorship tactics that states carried out before the advent of the Internet, as described in Ilan Peleg’s (1993) book Patterns of Censorship Around the World.

Ed: What do you think explains the lack of censorship in the overseas portal? (Could there be a certain value for the government in having some news items accessible to an external audience, but unavailable to the internal one?)

Sonya: It is more costly to control content by searching for and deleting individual news stories than simply blocking a whole website. For this reason, when a website outside the Great Firewall carries embarrassing content to the Chinese government, Chinese censors will simply block the whole website rather than request deletions. Overseas branches of Chinese media may comply but foreign media may simply drop such a deletion request.

Given online users’ behavior, it is effective and efficient to strictly control domestic content. In general, there are two types of Chinese online users, those who only visit Chinese websites operating inside China and those who also consume content from outside the country. Regarding this second type, it is really hard to restrict what they do and don’t read, because they may be well equipped with circumvention tools and often obtain access to Chinese media published in Hong Kong and Taiwan but blocked in China. In addition, some Western media, such as the BBC, the New York Times, and Deutsche Welle, make media consumption easy for Chinese readers by publishing in Chinese. Of course, this type of Chinese user may be well educated and able to read English and other foreign languages directly. Facing these people, Chinese authorities would see their efforts in vain if they tried to censor overseas branches of Chinese media, because, outside the Great Firewall, there are too many sources of information that lie beyond the reach of Chinese censors.

Chinese authorities are in fact strategically wise in putting their efforts into controlling domestic online media, because this first type of Chinese user accounts for 99.9% of the whole online population, according to Google’s 2010 estimate. In his 2013 book Rewire, Ethan Zuckerman summarizes this phenomenon: “none of the top ten nations [in terms of online population] looks at more than 7 percent international content in its fifty most popular news sites” (p. 56). Since the majority of the Chinese populace perceives the domestic Internet as “the entire cyberspace”, manipulating the content published inside the Great Firewall means that (according to Chinese censors) many of the time bombs will have been defused.


Read the full paper: Sonya Yan Song, Fei Shen, Mike Z. Yao, Steven S. Wildman (2013) Unmasking News in Cyberspace: Examining Censorship Patterns of News Portal Sites in China. Presented at “China and the New Internet World”, International Communication Association (ICA) Preconference, Oxford Internet Institute, University of Oxford, June 2013.

Sonya Y. Song led this study as a Google Policy Fellow in 2012. Currently, she is a Knight-Mozilla OpenNews Fellow and a Ph.D. candidate in media and information studies at Michigan State University. Sonya holds a bachelor’s and master’s degree in computer science from Tsinghua University in Beijing and master of philosophy in journalism from the University of Hong Kong. She is also an avid photographer, a devotee of literature, and a film buff.

Sonya Yan Song was talking to blog editor David Sutcliffe.

]]>
Predicting elections on Twitter: a different way of thinking about the data https://ensr.oii.ox.ac.uk/predicting-elections-on-twitter-a-different-way-of-thinking-about-the-data/ Sun, 04 Aug 2013 11:43:52 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1498
GOP presidential nominee Mitt Romney, centre, waving to the crowd after delivering his acceptance speech on the final night of the 2012 Republican National Convention. Image by NewsHour.

Recently, there has been a lot of interest in the potential of social media as a means to understand public opinion. Driven by an interest in the potential of so-called “big data”, this development has been fuelled by a number of trends. Governments have been keen to create techniques for what they term “horizon scanning”, which broadly means searching for the indications of emerging crises (such as runs on banks or emerging natural disasters) online, and reacting before the problem really develops. Governments around the world are already committing massive resources to developing these techniques. In the private sector, big companies’ interest in brand management has fitted neatly with the potential of social media monitoring. A number of specialised consultancies now claim to be able to monitor and quantify reactions to products, interactions or bad publicity in real time.

It should therefore come as little surprise that, like other research methods before, these new techniques are now crossing over into the competitive political space. Social media monitoring, which in theory can extract information from tweets and Facebook posts and quantify positive and negative public reactions to people, policies and events has an obvious utility for politicians seeking office. Broadly, the process works like this: vast datasets relating to an election, often running into millions of items, are gathered from social media sites such as Twitter. These data are then analysed using natural language processing software, which automatically identifies qualities relating to candidates or policies and attributes a positive or negative sentiment to each item. Finally, these sentiments and other properties mined from the text are totalised, to produce an overall figure for public reaction on social media.
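As a rough illustration of the three-step pipeline (gather, score, totalise), here is a toy version. Real systems use trained natural-language-processing models; the word lists and posts below are illustrative stand-ins, not anyone’s production lexicon.

```python
# Toy sentiment-totalising pipeline for candidate mentions.
POSITIVE = {"win", "great", "strong", "support", "love"}
NEGATIVE = {"lose", "weak", "scandal", "fail", "hate"}

def sentiment(post: str) -> int:
    """Score one post: +1 per positive word, -1 per negative word."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def totalise(posts: list, candidate: str) -> int:
    """Sum sentiment over every post mentioning the candidate."""
    return sum(sentiment(p) for p in posts if candidate.lower() in p.lower())

posts = ["Great debate win for Smith", "Smith will lose badly",
         "Jones scandal deepens"]
print(totalise(posts, "Smith"))  # (+2) + (-1) = 1
print(totalise(posts, "Jones"))  # -1
```

Even this toy makes the core weakness visible: the final figure is only as good as the people posting and the lexicon used to score them, which is exactly the representativeness problem discussed below.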

These techniques have already been employed by the mainstream media to report on the 2010 British general election (when the country had its first leaders debate, an event ripe for this kind of research) and also in the 2012 US presidential election. This growing prominence led my co-author Mike Jensen of the University of Canberra and me to ask: exactly how useful are these techniques for predicting election results? In order to answer this question, we carried out a study on the Republican nomination contest in 2012, focused on the Iowa Caucus and Super Tuesday. Our findings are published in the current issue of Policy and Internet.

There are definite merits to this endeavour. US candidate selection contests are notoriously hard to predict with traditional public opinion measurement methods. This is because of the unusual and unpredictable make-up of the electorate. Voters are likely (to greater or lesser degrees depending on circumstances in a particular contest and election laws in the state concerned) to share a broadly similar outlook, so the electorate is harder for pollsters to model. Turnout can also vary greatly from one cycle to the next, adding an additional layer of unpredictability to the proceedings.

However, as any professional opinion pollster will quickly tell you, there is a big problem with trying to predict elections using social media: the people who use it are simply not like the rest of the population. In the case of the US, research from Pew suggests that only 16 per cent of internet users use Twitter, and while that figure rises to 27 per cent of those aged 18-29, only 2 per cent of over-65s use the site. The proportion of the electorate that actually votes within those categories, however, is the inverse: over-65s vote at a relatively high rate compared to the 18-29 cohort. Furthermore, given that we know (from research such as Matthew Hindman’s The Myth of Digital Democracy) that only a very small proportion of people online actually create content about politics, those who are commenting on elections become an even more unusual subset of the population.

Thus (and I can say this as someone who does use social media to talk about politics!) we are looking at an unrepresentative sub-set (those interested in politics) of an unrepresentative sub-set (those using social media) of the population. This is hardly a good omen for election prediction, which relies on modelling the voting population as closely as possible. As such, it seems foolish to suggest that a simple aggregation of individual preferences can be equated to voting intentions.

However, in our article we suggest a different way of thinking about social media data, more akin to James Surowiecki’s idea of The Wisdom of Crowds. The idea here is that citizens commenting on social media should not be treated like voters, but rather as commentators, seeking to understand and predict emerging political dynamics. As such, the method we operationalized was more akin to an electoral prediction market, such as the Iowa Electronic Markets, than a traditional opinion poll.

We looked for two things in our dataset: sudden changes in the number of mentions of a particular candidate, and words that indicated momentum for a particular candidate, such as “surge”. This approach turned out to be a strong predictor. We found that the former measure had a good relationship with Rick Santorum’s sudden surge in the Iowa caucus, although it also tended to disproportionately emphasise less successful candidates, such as Michele Bachmann. The latter method, on the other hand, picked up the Santorum surge without generating false positives, a finding certainly worth further investigation.
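The two measures can be sketched in a few lines. The daily counts, the 2x spike threshold, and the momentum word list below are illustrative assumptions, not the parameters used in the paper.

```python
# Sketch of the two measures: (1) sudden jumps in daily mention counts,
# flagged when a day exceeds the running mean by a fixed multiple, and
# (2) counts of tweets containing explicit momentum words.
MOMENTUM_WORDS = {"surge", "surging", "momentum", "rising"}

def mention_spikes(daily_counts: list, factor: float = 2.0) -> list:
    """Return day indices where mentions jump above factor x the prior mean."""
    spikes = []
    for day in range(1, len(daily_counts)):
        prior_mean = sum(daily_counts[:day]) / day
        if daily_counts[day] > factor * prior_mean:
            spikes.append(day)
    return spikes

def momentum_score(tweets: list) -> int:
    """Count tweets containing an explicit momentum word."""
    return sum(any(w in t.lower().split() for w in MOMENTUM_WORDS)
               for t in tweets)

santorum_mentions = [120, 130, 110, 125, 560]  # hypothetical daily counts
print(mention_spikes(santorum_mentions))        # [4]: the caucus-eve jump
print(momentum_score(["Santorum surge in Iowa", "Polls steady today"]))  # 1
```

Treating posters as commentators rather than voters is what makes this defensible: a spike or a momentum word is evidence that observers collectively sense movement, not a sample of voting intentions.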

Our aim in the paper was to present new ways of thinking about election prediction through social media, going beyond the paradigm established by the dominance of opinion polling. Our results indicate that there may be some value in this approach.


Read the full paper: Michael J. Jensen and Nick Anstead (2013) Psephological investigations: Tweets, votes, and unknown unknowns in the republican nomination process. Policy and Internet 5 (2) 161–182.

Dr Nick Anstead was appointed as a Lecturer in the LSE’s Department of Media and Communication in September 2010, with a focus on Political Communication. His research focuses on the relationship between existing political institutions and new media, covering such topics as the impact of the Internet on politics and government (especially e-campaigning), electoral competition and political campaigns, the history and future development of political parties, and political mobilisation and encouraging participation in civil society.

Dr Michael Jensen is a Research Fellow at the ANZSOG Institute for Governance (ANZSIG), University of Canberra. His research spans the subdisciplines of political communication, social movements, political participation, and political campaigning and elections. In the last few years, he has worked particularly with the analysis of social media data and other digital artefacts, contributing to the emerging field of computational social science.

]]>
The complicated relationship between Chinese Internet users and their government https://ensr.oii.ox.ac.uk/the-complicated-relationship-between-chinese-internet-users-and-their-government/ Thu, 01 Aug 2013 06:28:24 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1827 David: For our research, we surveyed postgraduate students from all over China who had come to Shanghai to study. We asked them five questions to which they provided mostly rather lengthy answers. Despite them being young university students and being very active online, their answers managed to surprise us. Notably, the young Chinese who took part in our research felt very ambiguous about the Internet and its supposed benefits for individual people in China. They appreciated the greater freedom the Internet offered when compared to offline China, but were very wary of others abusing this freedom to their detriment.

Ed: In your paper you note that the opinions of many young people closely mirrored those of the government’s statements about the Internet — in what way?

David: In 2010 the government published a White Paper on the Internet in China in which they argued that the main uses of the Internet were for obtaining information, and for communicating with others. In contrast to Euro-American discourses around the Internet as a ‘force for democracy’, the students’ answers to our questions agreed with the evaluation of the government and did not see the Internet as a place to begin organising politically. The main reason for this — in my opinion — is that young Chinese are not used to discussing ‘politics’, and are mostly focused on pursuing the ‘Chinese dream’: good job, large flat or house, nice car, suitable spouse; usually in that order.

Ed: The Chinese Internet has usually been discussed in the West as a ‘force for democracy’ — leading to the inevitable relinquishing of control by the Chinese Communist Party. Is this viewpoint hopelessly naive?

David: Not naive as such, but both deterministic and limited, as it assumes that the introduction of technology can only have one ‘built-in’ outcome, thus ignoring human agency, and as it pretends that the Chinese Communist Party does not use technology at all. Given the intense involvement of Party and government offices, as well as of individual party members and government officials with the Internet it makes little sense to talk about ‘the Party’ and ‘the Internet’ as unconnected entities. Compared to governments in Europe or America, the Chinese Communist Party and the Chinese government have embraced the Internet and treated it as a real and valid communication channel between citizens and government/Party at all levels.

Ed: Chinese citizens are being encouraged by the government to engage and complain online, eg to expose inefficiency and corruption. Is the Internet just a space to blow off steam, or is it really capable of ‘changing’ Chinese society, as many have assumed?

David: This is mostly a matter of perspective and expectations. The Internet has NOT changed the system in China, nor is it likely to. In all likelihood, the Internet is bolstering the legitimacy and the control of the Chinese Communist Party over China. However, in many specific instances of citizen unhappiness and unrest, the Internet has proved a powerful channel of communication for the people to achieve their goals, as the authorities have reacted to online protests and supported the demands of citizens. This is a genuine change and empowerment of the people, though episodic and local, not global.

Ed: Why do you think your respondents were so accepting (and welcoming) of government control of the Internet in China: is this mainly due to government efforts to manage online opinion, or something else?

David: I think this is a reflex response fairly similar to what has happened elsewhere as well. If, for example, children manage to access porn sites, or an adult manages to groom several children over the Internet, the mass media and the parents of the children call for ‘government’ to protect the children. This abrogation of power and shifting of responsibility to ‘the government’ by individuals — in the example by parents, in our study by young Chinese — is fairly widespread, if deplorable. Ultimately this demand for government ‘protection’ leads to what I would consider excessive government surveillance, control, and regulation of online spaces in the name of ‘protection’, and to the public’s acquiescence in the policing of cyberspace. In China, this takes the form of a widespread (resigned) acceptance of government censorship; in the UK it led to the acceptance of GCHQ’s involvement in Prism, and to the sentencing of Deyka Ayan Hassan and Liam Stacey, which turned the UK into the only country in the world in which people have been arrested for posting single, offensive posts on microblogs.

Ed: How does the central Government manage and control opinion online?

David: There is no unified system of government control over the Internet in China. Instead, there are many groups and institutions at all levels, from central to local, with overlapping areas of responsibility, all exerting an influence on Chinese cyberspaces. There are direct posts by government or Party officials, posts by ‘famous’ people in support of government decisions or policies, and paid ‘hidden’ posters, or even posts by people sympathetic to the government. China’s notorious online celebrity Han Han once pointed out that the term ‘the Communist Party’ really means a population group of over 300 million people connected to someone who is an actual Party member.

In addition to pro-government postings, there are many different forms of censorship that try to prevent unacceptable posts. The exact definition of ‘unacceptable’ changes from time to time and even from location to location, though. In Beijing, around October 1, the Chinese National Day, many more websites are inaccessible than, for example, in Shenzhen during April. Different government or Party groups also add different terms to the list of ‘unacceptable’ topics (or remove them), which contributes to the flexibility of the censorship system.

As a result of the often unpredictable ‘current’ limits of censorship, many Internet companies, forum and site managers, as well as individual Internet users add their own ‘self-censorship’ to the mix to ensure their own uninterrupted presence online. This ‘self-censorship’ is often stricter than existing government or Party regulations, so as not to even test the limits of the possible.

Ed: Despite the constant encouragement / admonishment of the government that citizens should report and discuss their problems online; do you think this is a clever (ie safe) thing for citizens to do? Are people pretty clever about negotiating their way online?

David: If it looks like a duck, moves like a duck, talks like a duck … is it a duck? There has been a lot of evidence over the years (and many academic articles) that demonstrate the government’s willingness to listen to criticism online without punishing the posters. People do get punished if they stray into ‘definitely illegal’ territory, e.g. promoting independence for parts of China, or questioning the right of the Communist Party to govern China, but so far people have been free to express their criticism of specific government actions online, and have received support from the authorities for their complaints.

Just to note briefly; one underlying issue here is the definition of ‘politics’ and ‘power’. Following Foucault, in Europe and America ‘everything’ is political, and ‘everything’ is a question of power. In China, there is a difference between ‘political’ issues, which are the responsibility of the Communist Party, and ‘social’ issues, which can be discussed (and complained about) by anybody. It might be worth exploring this difference of definitions without a priori acceptance of the Foucauldian position as ‘correct’.

Ed: There’s a lot of emphasis on using eg social media to expose corrupt officials and hold them to account; is there a similar emphasis on finding and rewarding ‘good’ officials? Or of officials using online public opinion to further their own reputations and careers? How cynical is the online public?

David: The online public is very cynical, and getting ever more so (which is seen as a problem by the government as well). The emphasis on ‘bad’ officials is fairly ‘normal’, though, as ‘good’ officials are not ‘newsworthy’. In the Chinese context there is the additional problem that socialist governments like to promote ‘model workers’, ‘model units’, etc. which would make the praising of individual ‘good’ officials by Internet users highly suspect. Other Internet users would simply assume the posters to be paid ‘hidden’ posters for the government or the Party.

Ed: Do you think (on balance) that the Internet has brought more benefits (and power) to the Chinese Government or new problems and worries?

David: I think the Internet has changed many things for many people worldwide. Limiting the debate on the Internet to the dichotomies of government vs Internet, empowered netizens vs disenfranchised Luddites, online power vs wasting time online, etc. is highly problematic. The open engagement with the Internet by government (and Party) authorities has been greater in China than elsewhere; in my view, the Chinese authorities have reacted much faster, and ‘better’ to the Internet than authorities elsewhere. As the so-called ‘revelations’ of the past few months have shown, governments everywhere have tried and are trying to control and use Internet technologies in pursuit of power.

Although I personally would prefer the Internet to be a ‘free’ and ‘independent’ place, I realise that this is a utopian dream given the political and economic benefits and possibilities of the Internet. Given the inevitability of government controls, though, I prefer the open control exercised by Chinese authorities to the hypocrisy of European and American governments, even if the Chinese controls (apparently) exceed those of other governments.


Dr David Herold is an Assistant Professor of Sociology at Hong Kong Polytechnic University, where he researches Chinese culture and contemporary PRC society, China’s relationship with other countries, and Chinese cyberspace and online society. His paper Captive Artists: Chinese University Students Talk about the Internet was presented at “China and the New Internet World”, International Communication Association (ICA) Preconference, Oxford Internet Institute, University of Oxford, June 2013.

David Herold was talking to blog editor David Sutcliffe.

]]>
How are internal monitoring systems being used to tackle corruption in the Chinese public administration? https://ensr.oii.ox.ac.uk/how-are-internal-monitoring-systems-being-used-to-tackle-corruption-in-the-chinese-public-administration/ Fri, 26 Jul 2013 07:30:25 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1774 The Great Hall of the People
China has made concerted efforts to reduce corruption at the lowest levels of government. Image of the 18th National Congress of the CPC in the Great Hall of the People, Beijing, by: Bert van Dijk.

Ed: Investment by the Chinese government in internal monitoring systems has been substantial: what components make it up?

Jesper: Two different information systems are currently in use. Within the government there is one system directed towards administrative case-processing. In addition to this, the Communist Party has its own monitoring system, which is less sophisticated in terms of real-time surveillance, but which has a deeper structure, as it collects and cross-references personal information about party-members working in the administration. These two systems parallel the existing institutional arrangements found in the dual structure consisting of the Discipline Inspection Commissions and the Bureaus of Supervision on different levels of government. As such, the e-monitoring system has particular ‘Chinese characteristics’, reflecting the bureaucracy’s Leninist heritage where Party-affairs and government-affairs are handled separately, applying different sets of rules.

On the government’s e-monitoring platform the Bureau of Supervision (the closest we get to an Ombudsman function in the Chinese public administration) can collect data from several other data systems, such as the e-government systems of the individual bureaus involved in case processing; feeds from surveillance cameras in different government organisations; and even geographical data from satellites. The e-monitoring platform does not, however, afford scanning of information outside the government systems. For instance, social media are not part of the administration surveillance infrastructure.

Ed: How centralised is it as a system? Is local or province-level monitoring of public officials linked up to the central government?

Jesper: The architecture of the e-monitoring systems integrates the information flows to the provincial level, but not to the central level. One reason for this may be found by following the money. Funding for these systems mainly comes from local sources, and the construction was initially based on municipal-level systems supported by the provincial level. Hence, at the early stages the path towards individual local-level systems was the natural choice. One reason why the build-up was not initially envisioned to encompass the central level could be that the Chinese central government is comparatively small, and may be worried about information overload. It could, however, also be an expression of provinces wanting to handle ‘internal affairs’ themselves rather than having central actors involved; possibly a case of provincial resistance to central monitoring.

Ed: Digital systems allow for the efficient control and recording of vast numbers of transactions (e.g. by timestamping, alerting, etc.). But all systems are subvertible: is there any evidence that this is happening?

Jesper: There are certainly attempts to shirk work or continue corrupt activities despite the monitoring system. For instance, some urban managers who work in the streets (which are hard to monitor by video surveillance) have used fake photos to ‘prove’ that a particular maintenance task had been completed, thereby saving themselves the time and energy of verifying that the problem had in fact been solved. They could do this because the system did not stamp the photo with geographical information data, and hence they could claim that a photo was taken at any location.
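The loophole described here — proof photos accepted without any location stamp — could be closed by a simple server-side check. The sketch below is purely illustrative: the field names, the task-record layout, and the acceptance radius are all made up for the example; only the underlying idea (reject photos without GPS data, or taken too far from the task site) comes from the interview.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6_371_000  # Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def accept_proof_photo(photo, task, max_distance_m=100):
    """Reject a proof photo that lacks GPS data, or that was taken too far
    from the maintenance task's location. All field names are hypothetical."""
    gps = photo.get("gps")
    if gps is None:
        return False  # the flaw described in the interview: no location stamp
    dist = haversine_m(gps["lat"], gps["lon"], task["lat"], task["lon"])
    return dist <= max_distance_m
```

With such a check in place, a recycled photo of a different street would fail validation, removing the incentive to fake completion reports.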

However, administrative processes that take place in an office rather than ‘in the wild’ are easier to monitor. Administrative approval processes that relate to, e.g., tax and business licensing, which the government handles in one-stop-shopping service centres, tend to be less corrupt after the introduction of the e-monitoring system. To be sure, this does not mean that the administration is clean now; instead the corruption moves to other places, such as applications for business licenses for larger companies, an area that is only partly covered by e-monitoring.

Ed: We are used to a degree of audit and oversight of our working behaviour and performance in the West; does this personal monitoring go beyond what might be considered normal (or just) to us?

Jesper: The notion of being video surveilled during office work would probably be met with resistance by employees in Western government agencies. This is, however, a widespread practice in call centres in the West, so in this sense it is not entirely unknown in work settings. Additionally, government one-stop shops in the West are often equipped with closed-circuit television, but this is mostly — as I understand — used to document violations by clients against public employees rather than the other way round. Another aspect that sets apart the Chinese administration is that the options for recourse (e.g. for a wrongfully accused public employee) only include the authorities already dealing with the case.

Ed: Could these systems also be used to monitor the behaviour of citizens?

Jesper: Indeed, the monitoring system enables access to information from a number of different sources, such as registers of tax payment, social welfare benefits and real-estate holdings, and to some extent it is already used in relation to citizens. For instance the tax register and the real-estate register are cross-referenced. If a real-estate owner has a tax debt then documentation for the real estate cannot be printed until the debt is paid. We must expect further development of these kinds of functions. This e-monitoring ‘architecture of control’ can thus be activated both towards the administration itself as well as outward towards citizens.
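The register cross-referencing described in the tax and real-estate example amounts to a simple rule joining two databases. A minimal sketch follows; the register layouts, function name, and the exact rule are assumptions based only on the example given in the interview.

```python
def can_print_deed(owner_id, tax_register, estate_register):
    """Cross-reference two registers: real-estate documentation is
    withheld while the owner has an outstanding tax debt.

    tax_register: maps owner_id -> outstanding debt (hypothetical layout).
    estate_register: maps owner_id -> property record (hypothetical layout).
    """
    if owner_id not in estate_register:
        return False  # no property on record, nothing to print
    debt = tax_register.get(owner_id, 0)
    return debt == 0  # documentation released only once the debt is cleared
```

The same join pattern generalises to the other registers mentioned (social welfare benefits, etc.), which is why the interview describes the system as an ‘architecture of control’ that can be pointed at citizens as easily as at officials.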

Ed: There is oversight of the actions of government officials by the Bureau of Supervision; but is there any public oversight of, e.g., the government’s decision-making process, particularly of potentially embarrassing decisions? Who watches the watchers?

Jesper: Currently in China there are two digitally mediated mechanisms working simultaneously to reduce corruption. The first is the e-monitoring system described here, which mainly addresses administrative corruption. The second is what we might call a ‘fire alarm’ mechanism whereby citizens point public attention to corruption scandals or local government failures — often through the use of microblogs. E-monitoring addresses corruption in the work process but does not include government decision-making. The ‘fire alarm’ in part addresses the latter concern as citizens can vent their frustrations online. However, even though microblogging has empowered citizens to speak out against corruption and counter-productive policies, this does not reflect institutionalised control but happens on an ad hoc basis. If the Bureau of Supervision and the Disciplinary Inspection Commission do not wish to act, there is no further backstop. The Internet-based e-monitoring systems, hence, do not alter the institutional setup of the system and there is no-one to ‘watch the watchers’ except in the occasional cases where the fire alarm mechanism works.

Ed: Is there a danger that public disclosure of power abuses might generate dissatisfaction and mistrust in government, without necessarily solving the issue of corruption itself?

Jesper: Over the last few years a number of corruption scandals have been brought to public attention through microblogs. Civil servants have been punished, and obviously these incidents have not improved public satisfaction with the particular local governments involved. Apart from the negative consequences of public mistrust, one could speculate that the microblogging ‘fire alarm’ only works when it is allowed to do so by the government. Technically speaking it is relatively simple for the sophisticated Chinese censoring apparatus to stop debates that touch upon issues that are too sensitive for the Party. So, it would be naive to believe that this mechanism is revealing more than the tip of the iceberg in terms of corruption.

Ed: Both Russia and India have big problems with corruption: do you know if there are similar electronic oversight systems embedded in their public administrations? What makes China different if not?

Jesper: In this area, China has made concerted efforts to reduce corruption at the lowest levels of government, as a result of dissatisfaction from both the business communities and the general public. Similarly, in Russia and India (and a number of Asian states) many functions such as taxation, business licensing, etc., have been incorporated in e-government systems and through this process been made more transparent and easy to track than previous processes. However, to my knowledge, the Chinese system is at the forefront when it comes to integrating these different platforms into a larger monitoring system ecology.


Jesper Schlæger is an Associate Professor at Sichuan University, School of Public Administration. His current research topics include comparative public administration, e-government, electronic monitoring, public values, and urban management in a comparative perspective. His latest book is E-Government in China: Technology, Power and Local Government Reform (Routledge, 2013).

Jesper Schlæger was talking to blog editor David Sutcliffe.

]]>
Chinese Internet users share the same values concerning free speech, privacy, and control as their Western counterparts https://ensr.oii.ox.ac.uk/chinese-internet-users-share-the-same-values-concerning-free-speech-privacy-and-control-as-their-western-counterparts/ Wed, 17 Jul 2013 13:34:35 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1709 Free Internet in Shanghai airport
There are now over half a billion Internet users in China, part of a shift in the centre of gravity of Internet use away from the US and Europe. Image of Pudong International Airport, Shanghai, by ToGa Wanderings.

Ed: You recently presented your results at the OII’s China and the New Internet World ICA preconference. What were people most interested in?

Gillian: A lot of people were interested in our finding that China was such a big online shopping market compared to other countries, with 60% of our survey respondents reporting that they make an online purchase at least weekly. That’s twice the world’s average. A lot of people who study the Chinese Internet talk about governance issues rather than commerce, but the fact that there is this massive investment in ecommerce in China and a rapid transition to a middle class lifestyle for such a large number of Chinese means that Chinese consumer behaviours will have a significant impact on global issues such as resource scarcity, global warming, and the global economy.

Others were interested in our findings concerning Internet use in ’emerging’ Internet countries like China. The Internet’s development in Western Europe and the US was driven by people who saw the technology as a platform for freedom of expression and peer-to-peer applications. In China, you see this optimism but you also see that a lot of people coming online move straight to smart phones and other locked-down technologies like the iPad, which you can only interact with in a certain way. Eighty-six percent of our Chinese respondents reported that they owned a smart phone, which was the highest percentage of all of the 24 countries we examined individually. A lot of these people are using those devices to play games and watch movies, which is a very different initial exposure to the Internet than we saw in early adopting Western countries.

Ed: So, a lot of significant differences between usages in emerging versus established Internet nations. Any similarities?

Gillian: In general, we find that uses are different but values are similar. People in emerging nations share the same values concerning free speech, privacy, and control as their Western counterparts. These are values that were embedded in the Internet’s creation and that have spread with it to other countries, regardless of national policy rhetorics. Many people – even in China – see the Internet as a tool for free speech and as a place where you can expect a certain degree of privacy and anonymity.

Ed: But isn’t there a disconnect between the fact that people are using more closed technologies as they are coming online and yet are sharing the same values of freedom associated with the Internet?

Gillian: There’s a difference between uses and values. People in emerging countries produce more content, they’re more sociable online, and they listen to more music. But the way that people express their values doesn’t always match what they actually do. There is no correlation between whether someone approves of government censorship and their concern about being personally censored. There’s also no correlation in China between the frequency with which people post political opinions online and a worry that their online comments will be censored.

Ed: It seems that there are a few really interesting results in your study that run counter to accepted wisdom about the Internet. Were you surprised by any of the results?

Gillian: I was particularly surprised by the high levels of political commentary in emerging nations. We know that levels of online political expression in the West are very low (around 15%). But 40% of respondents in the emerging nations we surveyed reported posting a political opinion online at least weekly. That’s a huge difference. Even China, which we expected to have lower levels of political expression than the general average, followed a similar pattern. We didn’t see any chilling effect — i.e. any reduction of the frequency of posting of political opinions among Chinese users.

This matches other studies of the Chinese Internet that have concluded that there is very little censorship of people expressing themselves online — that censorship only really happens when people start to organise others. However, I was surprised by the extent of the difference: 18% of users in the US and UK reported posting a political opinion online at least weekly, 13% in France, and 3% in Japan; but 32% of Chinese, 51% of Brazilian, 50% of Indian, and 64% of Egyptian respondents reported doing so at least weekly. This shows that the conclusions we have drawn about low levels of online political participation, based on studies of Western Internet users, are likely not applicable to users in other countries.

Of course, we have to remember that this is an online survey and so our results only reflect what Internet users report their activities and attitudes to be. However, the incentive to over-report activities is probably about the same for the US and for China. The thing that may be different in different countries is what people interpret as a political comment. Many more types of comments in China might be seen as political since the government controls so much more. A comment about the price of food might be seen as political speech in China, for example, since the government controls food prices, whereas a similar comment may not be seen as political by US respondents.

Ed: This research is interesting because it calls into question some fundamental assumptions about the Internet. What did you take away from the project?

Gillian: A lot of scholarship on the Internet is presented as applicable to the whole world, but isn’t actually applicable everywhere. The best example here is the very low percentage of people participating in the political process in the West, which needs to be re-evaluated with these findings. It shows that we need to be much more specific in Internet research about the unit of analysis, and what it applies to. However, we also found that Internet values are similar across the world. I think this shows that discourses about the Internet as a place for free expression and privacy are distributed hand-in-hand with the technology. Although Western users are declining as an overall percentage of the world’s Internet population, these founding rhetorics remain powerfully associated with the technology.


Read the full paper: Bolsover, G., Dutton, W.H., Law, G. and Dutta, S. (2013) Social Foundations of the Internet in China and the New Internet World: A Cross-National Comparative Perspective. Presented at “China and the New Internet World”, International Communication Association (ICA) Preconference, Oxford Internet Institute, University of Oxford, June 2013.

Gillian was talking to blog editor Heather Ford.

]]>
Is China changing the Internet, or is the Internet changing China? https://ensr.oii.ox.ac.uk/is-china-changing-the-internet-or-is-the-internet-changing-china/ Fri, 12 Jul 2013 08:13:52 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1741 The rising prominence of China is one of the most important developments shaping the Internet. Once typified primarily by Internet users in the US, there are now more Internet users in China than there are Americans on the planet. By 2015, the proportion of Chinese language Internet users is expected to exceed the proportion of English language users. These are just two aspects of a larger shift in the centre of gravity of Internet use, in which the major growth is increasingly taking place in Asia and the rapidly developing economies of the Global South, and the BRIC nations of Brazil, Russia, India — and China.

The 2013 ICA Preconference “China and the New Internet World” (14 July 2013), organised by the OII in collaboration with many partners at collaborating universities, explored the issues raised by these developments, focusing on two main interrelated questions: how is the rise of China reshaping the global use and societal implications of the Internet? And in turn, how is China itself being reshaped by these regional and global developments?

As China has become more powerful, much attention has been focused on the number of Internet users: China now represents the largest group of Internet users in the world, with over half a billion people online. But how the Internet is used is also important; this group doesn’t just include passive ‘users’, it also includes authors, bloggers, designers and architects — that is, people who shape and design values into the Internet. This input will undoubtedly affect the Internet going forward, as Chinese institutions take on a greater role in shaping the Internet, in terms of policy, such as around freedom of expression and privacy, and practice, such as social and commercial uses, like shopping online.

Most discussion of the Internet tends to emphasise technological change and ignore many aspects of the social changes that accompany the Internet’s evolution, such as this dramatic global shift in the concentration of Internet users. The Internet is not just a technological artefact. In 1988, Deng Xiaoping declared that “science and technology are primary productive forces” that would be active and decisive factors in the new Chinese society. At the time China naturally paid a great deal of attention to technology as a means to lift its people out of poverty, but it may not have occurred to Deng that the Internet would not just impact the national economy, but that it would come to affect a person’s entire life — and society more generally — as well. In China today, users are more apt both to shop online and to discuss political issues online than users in most of the other 65 nations across the world surveyed in a recent report [1].

The transformative potential of the Internet has challenged top-down communication patterns in China, by supporting multi-level and multi-directional flows of communication. Of course, communications systems reflect economic and political power to a large extent: the Internet is not a new or separate world, and its rules reflect offline rules and structures. In terms of the large ‘digital divide’ that exists in China (whose Internet penetration currently stands at a bit over 40%, meaning that 700 million people are still not online), we have to remember that this digital divide is likely to reflect other real economic and political divides, such as lack of access to other basic resources.

While there is much discussion about how the Internet is affecting China’s domestic policy (in terms of public administration, ensuring reliable systems of supply and control, the urban-rural divide and migration, and policy on things like anonymity and free speech), less time is spent discussing the geopolitics of the Internet. China certainly has the potential for great influence beyond its own borders, for example affecting communication flows worldwide and the global division of power. For such reasons, it is valuable to move beyond ‘single country studies’ to consider global shifts in attitudes and values shaping the Internet across the world. As a contested and contestable space, the political role of the Internet is likely to be a focal point for traditional discussions of key values, such as freedom of expression and assembly; remember Hillary Clinton’s 2010 ‘Internet freedom’ speech, delivered at Washington’s Newseum Institute. Contemporary debates over privacy and freedom of expression are indeed increasingly focused on Internet policy and practice.

This is not the first time in the history of US–China relations that each country’s foreign policy has been of great interest and importance to the other. However this might also be a period of anxiety-driven (rather than rational) policy making, particularly if increased exposure and access to information around the world leads to efforts to create Berlin walls of the digital age. In this period of national anxieties on the part of governments and citizens — who may feel that “something must be done” — there will inevitably be competition between the US, China, and the EU to drive national Internet policies that assert local control and jurisdiction. Ownership and control of the Internet by countries and companies is certainly becoming an increasingly politicized issue. Instead of supporting technical innovation and the diffusion of the Internet, nations are increasingly focused on controlling the flow of online content and exploiting the Internet as a means for gauging public sentiment and opinion, rather than as a channel to help shape public policy and social accountability.

For researchers, it is time to question a myopic focus on national units of analysis when studying the Internet, since many activities of critical importance take place in smaller regions, such as Silicon Valley, larger regions, such as the global South, and in virtual spaces that are truly global. We tend to think of single places: “the Internet” / “the world” / “China”: but as a number of conference speakers emphasized, there is more than one China, if we consider for example Taiwan, Hong Kong, rural China, and the factory zones — each with their different cultural, legal and economic dynamics. Similarly, there are a multitude of actors, for example corporations, which are shaping the Chinese Internet as surely as Beijing is. As Jack Qiu, one of the opening panelists, observed: “There are many Internets, and many worlds.” There are also multiple histories of the Internet in China, and as yet no standard narrative.

The conference certainly made clear that we are learning a lot about China, as a rapidly growing number of Chinese scholars increasingly research and publish on the subject. The vitality of the Chinese Journal of Communication is one sign of this energy, but Internet research is expanding globally as well. Some of the panel topics will be familiar to anyone following the news, even if there is still not much published in the academic literature: smart censorship, trust in online information, human flesh search, political scandal, democratisation. But there were also interesting discussions from new perspectives, or perspectives that are already very familiar in a Western context: social networking, job markets, public administration, and e-commerce.

However, while international conferences and dedicated panels are making these cross-cultural (and cross-topic) discussions and conversations easier, we still lack enough published material about China and the Internet, and what exists can be difficult to find, owing to the Internet’s recent diffusion and major barriers such as language. This is an important point, given how easy it is to oversimplify another culture. Proper comparative analysis is hard and often frustrating to carry out, but important, if we are to see our own frameworks and settings in a different way.

One of the opening panelists remarked that two great transformations had occurred during his academic life: the emergence of the Internet, and the rise of China. The intersection of the two is providing fertile ground for research, and the potential for a whole new, rich research agenda. Of course the challenge for academics is not simply to find new, interesting and important things to say about a subject, but to draw enduring theoretical perspectives that can be applied to other nations and over time.

To return to the framing question, “is China changing the Internet, or is the Internet changing China?”, the answer to both is obviously “yes”. But as Ernest Wilson, Dean of the USC Annenberg School, put it, we need to be asking “how?” and “to what degree?” I hope this preconference encouraged more scholars to pursue these questions.

Reference

[1] Bolsover, G., Dutton, W.H., Law, G. and Dutta, S. (2013) Social Foundations of the Internet in China and the New Internet World: A Cross-National Comparative Perspective. Presented at “China and the New Internet World”, International Communication Association (ICA) Preconference, Oxford Internet Institute, University of Oxford, June 2013.


The OII’s Founding Director (2002-2011), Professor William H. Dutton is Professor of Internet Studies, University of Oxford, and Fellow of Balliol College. Before coming to Oxford in 2002, he was a Professor in the Annenberg School for Communication at the University of Southern California, where he is now an Emeritus Professor. His most recent books include World Wide Research: Reshaping the Sciences and Humanities, co-edited with P. Jeffreys (MIT Press, 2011) and the Oxford Handbook of Internet Studies (Oxford University Press, 2013). Read Bill’s blog.

Presenting the moral imperative: effective storytelling strategies by online campaigning organisations https://ensr.oii.ox.ac.uk/presenting-the-moral-imperitive-storytelling-strategies-by-online-campaigning-organisations/ Tue, 25 Jun 2013 09:45:35 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1309 Online campaigning organisations are on the rise. They have captured the imagination of citizens and scholars alike with their ability to use rapid response tactics to engage with public policy debate and mobilize citizens. Early on, Andrew Chadwick (2007) labelled these new campaign organisations ‘hybrids’: using both online and offline political action strategies, and intentionally switching repertoires to act sometimes like a mass mobilisation social movement, and at other times like an insider interest group.

These online campaigning organisations pursue multi-issue agendas, are geographically decentralized, and run sophisticated media strategies. The best known of these are MoveOn in the US, the internationally focused Avaaz, and GetUp! in Australia. However, new online campaigning organisations are emerging all the time, more often than not with a direct lineage to this first wave through former staff and similar tactics. These newer organisations include the UK-based 38 Degrees, SumOfUs, which works on consumer issues to hold corporations accountable, and Change.org, a for-profit organisation that hosts and develops petitions for grassroots groups.

Existing civil society focused organisations are also being challenged to fundamentally change their approach, to move political tactics and communications online, and to grow their member lists. David Karpf (2012) has branded this the “MoveOn Effect”: the success of online campaigning organisations like MoveOn has fundamentally changed and disrupted the advocacy organisation scene. But how has this shift occurred? How have these new organisations succeeded in being both innovative and politically successful?

One increasingly common answer is to focus on how they have developed low threshold online tactics where the risk to participants is reduced. This includes issue and campaign specific online petitions, letter writing, emails, donating money, and boycotts. The other answer is to focus more closely on the discursive tactics these organisations use in their campaigns, based on a shared commitment to a storytelling strategy, and the practical realisation of a ‘theory of change’. That is, to ask how campaigns produce successful stories that follow a concrete theory of why taking action inevitably leads to a desired result.

Storytelling is a device for explaining politics and a campaign via “cause and effect relations, through its sequencing of events, rather than by appeals to standards of logic and proof” (Polletta et al. 2011, 111). These campaign stories characteristically have a plot and identifiable characters, a beginning and middle to the story, but the recipient of the story can create, or rather act out, the end. Framing is important to understanding social movement action but a narrative or storytelling driven analysis focuses more on how language or rhetoric is used, and reveals the underlying “common sense” and emotional frames used in online campaigns’ delivery of messages (Polletta 2009). Polletta et al. (2011, 122) suggest that activists have been successful against better resourced and influential opponents and elites when they “sometimes but not always, have been able to exploit popular associations of narrative with people over power, and moral urgency over technical rationality”.

We have identified four stages of storytelling that need to occur for a campaign to be successful:

  1. An emotional identification with the issue by the story recipient, to mobilize participation
  2. A shared sense of community on the issue, to build solidarity (‘people over power’)
  3. Moral urgency for action (rather than technical persuasion), to resolve the issue and create social change
  4. Securing of public and political support by neutralising counter-movements.

The new online campaigning organisations all prioritise a storytelling approach in their campaigns, using it to build their own autobiographical story and to differentiate what they do from ‘politics as usual’, characterised as party-based, adversarial politics. Harvard scholar and organising practitioner Marshall Ganz’s ideas on the practice of storytelling underpin the philosophy of the New Organizing Institute, which provides training for increasing numbers of online activists. Having received training, these professional campaigners informally join the network of established and emerging ‘theory of change’ organisations such as MoveOn, Avaaz, Organising for America, the Progressive Change Campaign Committee, SumOfUs, and so on.

GetUp! is a member of this network, has over 600,000 members in Australia, and has conducted high-profile public policy campaigns on issues as diverse as mental health, electoral law, same-sex marriage, and climate change. GetUp!’s communications strategy tries to use storytelling to reorient Australian political debate — and the nature of politics itself — in an affective way. And underpinning all their political tactics is the construction of effective online campaign stories. GetUp! has used stories to help citizens and, to a lesser extent, decision-makers identify with an issue, build community, and act in recognition of the moral urgency for political change.

Yet despite GetUp!’s commitment to a storytelling technique, it does not always work — these organisations rarely publicise their failed campaigns, or those that do not even get past the initial email ‘ask’. It is important to look at how campaigns unfold to see how storytelling develops, and also to judge whether it is a success or not. This moves the analysis onto an organisation’s whole campaign, rather than studying only decontextualised emails or online petitions. In contrasting two campaigns in-depth we judged one on mental health policy to be a success in meeting the four storytelling criteria; and the other on climate change policy (which was promoted externally as a success) to actually be a storytelling failure.

The mental health story was able to build solidarity and emotional identification around families and friends of those with illness (not sufferers themselves) after celebrity experts launched the campaign to bring awareness to and increase funding for mental health. Mental health was presented by GetUp! as a purely moral dilemma, with very little mention by any opponents of the economic implications of policy reform. In the end the policy was changed, an extra $2.2 billion of funding for mental health was announced in the 2011 Federal Budget, and the Australian Prime Minister appeared with GetUp! in an online video to make the funding announcement.

GetUp!’s climate change storytelling, however, failed on all four criteria. Despite national policy change taking place similar to what they had advocated, GetUp!’s climate change campaign did not achieve the level of member or public mobilisation achieved by their mental health campaign. GetUp! used partisan, adversarial tactics that can be partly attributed to climate change becoming an increasingly polarised issue in Australian political debate. This was particularly the case as the oppositional counter-movement successfully reframed climate change as solely an economic issue, focusing on the imposition of an expensive new tax. This story defeated GetUp!’s moral urgency story, and their attempt to create ‘people-power’ mobilised for a shared environmental concern.

Why is thinking about this important? For a few reasons. It helps us to see online tactics within the context of a broader political campaign, and challenges us to think about how to judge both successful mobilisation and political influence of hybrid online campaign organisations. Yet it also points to the limitations of an affective approach based on moral urgency alone. Technical persuasion and, more often than not, economic reality still matter for both mobilisation and political change.

References

Chadwick, Andrew (2007) “Digital Network Repertoires and Organizational Hybridity” Political Communication, 24 (3): 283-301.

Karpf, David (2012) The MoveOn Effect: The unexpected transformation of American political advocacy. Oxford: Oxford University Press.

Polletta, Francesca (2009) “Storytelling in social movements” in Culture, Social Movements and Protest ed. Hank Johnston Surrey: Ashgate, 33-54.

Polletta, Francesca, Pang Ching Bobby Chen, Beth Gharrity Gardner, and Alice Motes (2011) “The sociology of storytelling” Annual Review of Sociology, 37: 109–30.


Read the full paper: Vromen, A. and Coleman, W. (2013) Online Campaigning Organizations and Storytelling Strategies: GetUp! in Australia. Policy and Internet 5 (1).

Investigating the structure and connectivity of online global protest networks https://ensr.oii.ox.ac.uk/investigating-the-structure-and-connectivity-of-online-global-protest-networks/ Mon, 10 Jun 2013 12:04:26 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1275 How have online technologies reconfigured collective action? It is often assumed that the rise of social networking tools, accompanied by the mass adoption of mobile devices, has strengthened the impact and broadened the reach of today’s political protests. Enabling massive self-communication allows protesters to write their own interpretation of events – free from a mass media often seen as adversarial – and emerging protests may also benefit from the cheaper, faster transmission of information and more effective mobilization made possible by online tools such as Twitter.

The new networks of political protest, which harness these online technologies, are often described in theoretical terms as being ‘fluid’ and ‘horizontal’, in contrast to the rigid and hierarchical structure of earlier protest organization. Yet such theoretical assumptions have seldom been tested empirically. This new language of networks may be useful as a shorthand to describe protest dynamics, but does it accurately reflect how protest networks mediate communication and coordinate support?

The global protests against austerity and inequality which took place on May 12, 2012 provide an interesting case study to test the structure and strength of a transnational online protest movement. The ‘indignados’ movement emerged as a response to the Spanish government’s politics of austerity in the aftermath of the global financial crisis. The movement flared in May 2011, when hundreds of thousands of protesters marched in Spanish cities, and many set up camps ahead of municipal elections a week later.

These protests contributed to the emergence of the worldwide Occupy movement. After the original plan to occupy New York City’s financial district mobilised thousands of protesters in September 2011, the movement spread to other cities in the US and worldwide, including London and Frankfurt, before winding down as the camp sites were dismantled weeks later. Interest in these movements was revived, however, as the first anniversary of the ‘indignados’ protests approached in May 2012.

To test whether the fluidity, horizontality and connectivity often claimed for online protest networks hold true in reality, tweets referencing these protest movements during May 2012 were collected. These tweets were then classified as relating either to the ‘indignados’ or the Occupy movement, using hashtags as a proxy for content. Many tweets, however, contained hashtags relevant to both movements, creating bridges across the two streams of information. The users behind those bridges acted as information ‘brokers’, and are fundamentally important to the global connectivity of the two movements: they joined the two streams of information and their audiences on Twitter. Once all the tweets were classified by content and author, it emerged that around 6.5% of all users had posted at least one message relevant to both movements by jointly using hashtags from both sides.
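The classification step works by intersecting each tweet's hashtags with two movement-specific tag sets; a user whose tweets touch both sets is flagged as a broker. A minimal sketch in Python (the hashtag sets and tweets below are illustrative stand-ins, not the study's actual data):

```python
# Hypothetical hashtag lists for each movement (not the study's real tag sets).
INDIGNADOS = {"#12m15m", "#spanishrevolution"}
OCCUPY = {"#occupy", "#ows"}

def classify(tweets):
    """Label each user by movement(s) and flag users bridging both streams."""
    user_sides = {}
    for user, hashtags in tweets:
        tags = {t.lower() for t in hashtags}
        sides = set()
        if tags & INDIGNADOS:
            sides.add("indignados")
        if tags & OCCUPY:
            sides.add("occupy")
        user_sides.setdefault(user, set()).update(sides)
    # Brokers: users who posted content relevant to both movements.
    brokers = {u for u, sides in user_sides.items() if len(sides) == 2}
    return user_sides, brokers

tweets = [
    ("ana", ["#SpanishRevolution"]),
    ("bob", ["#OWS"]),
    ("eve", ["#ows", "#12M15M"]),  # tags from both sides in one message
]
_, brokers = classify(tweets)
print(brokers)  # {'eve'}
```

Note that brokerage here is a property of a user's whole output, not of a single tweet: a user who posts one ‘indignados’ tweet and one Occupy tweet is also counted.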

Analysis of the Twitter data shows that this small minority of ‘brokers’ play an important role connecting users to a network that would otherwise be disconnected. Brokers are significantly more active in the contribution of messages and more visible in the stream of information, being re-tweeted and mentioned more often than other users. The analysis also shows that these brokers play an important role in the global network, by helping to keep the network together and improving global connectivity. In a simulation, the removal of brokers fragmented the network faster than the removal of random users at the same rate.
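The removal simulation can be illustrated with a toy network in Python using the networkx library. The graph below is a synthetic two-community stand-in for the study's Twitter network, with three hypothetical ‘broker’ nodes carrying the only bridging edges:

```python
import random
import networkx as nx

def fragmentation(g, removal_order):
    """Track the largest component's share of nodes as nodes are removed."""
    g = g.copy()
    n = g.number_of_nodes()
    shares = []
    for node in removal_order:
        g.remove_node(node)
        largest = max(nx.connected_components(g), key=len)
        shares.append(len(largest) / n)
    return shares

# Two dense communities (Barabási–Albert graphs are always connected)...
a = nx.barabasi_albert_graph(50, 3, seed=1)
b = nx.relabel_nodes(nx.barabasi_albert_graph(50, 3, seed=2), lambda v: v + 50)
g = nx.compose(a, b)

# ...joined only by three 'broker' nodes holding the bridging edges.
brokers = [0, 1, 2]
for broker, target in zip(brokers, [60, 70, 80]):
    g.add_edge(broker, target)

random.seed(42)
random_order = random.sample(sorted(g.nodes), len(brokers))

broker_shares = fragmentation(g, brokers)
random_shares = fragmentation(g, random_order)
# Removing the brokers splits the network in two; random removal barely dents it.
print(broker_shares[-1], random_shares[-1])
```

Deleting the three brokers leaves the largest component holding only half the nodes, while deleting three random nodes typically leaves it nearly intact, which is the intuition behind the fragmentation result reported above.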

What does this tell us about global networks of protest? Firstly, it is clear that global networks are more vulnerable and fragile than is often assumed. Only a small percentage of users disseminate information across transnational divides, and if any of these users cease to perform this role, they are difficult to replace immediately, thus limiting the assumed fluidity of such networks. The decentralized nature of online networks, with no central authority imposing order or even suggesting a common strategy, makes the role of ‘brokers’ all the more vital to the survival of networks that cross national borders.

Secondly, the central role performed by brokers suggests that global networks of online protest lack the ‘horizontal’ structure that is often described in the literature. Talking about horizontal structures can be useful as shorthand to refer to decentralised organisations, but not to analyse the process by which these organisations materialise in communication networks. The distribution of users in those networks reveals a strong hierarchy in terms of connections and the ability to communicate effectively.

Future research into online networks, then, should keep in mind that the language of protest networks in the digital age, particularly terms like horizontality and fluidity, does not necessarily stand up to empirical scrutiny. The study of contentious politics in the digital age should be evaluated, first and foremost, through the lens of what protesters actually reveal through their actions.


Read the paper: Sandra Gonzalez-Bailon and Ning Wang (2013) The Bridges and Brokers of Global Campaigns in the Context of Social Media.

How accessible are online legislative data archives to political scientists? https://ensr.oii.ox.ac.uk/how-accessible-are-online-legislative-data-archives-to-political-scientists/ Mon, 03 Jun 2013 12:07:40 +0000 http://blogs.oii.ox.ac.uk/policy/?p=654
A view inside the House chamber of the Utah State Legislature. Image by deltaMike.

Public demands for transparency in the political process have long been a central feature of American democracy, and recent technological improvements have considerably facilitated the ability of state governments to respond to such public pressures. With online legislative archives, state legislatures can make available a large number of public documents. In addition to meeting the demands of interest groups, activists, and the public at large, these websites enable researchers to conduct single-state studies, cross-state comparisons, and longitudinal analysis.

While online legislative archives are, in theory, rich sources of information that save researchers valuable time as they gather data across the states, in practice, government agencies are rarely completely transparent, often do not provide clear instructions for accessing the information they store, seldom use standardized norms, and can overlook user needs. These obstacles to state politics research are longstanding: Malcolm Jewell noted almost three decades ago the need for “a much more comprehensive and systematic collection and analysis of comparative state political data.” While the growing availability of online legislative resources helps to address the first problem of collection, the limitations of search and retrieval functions remind us that the latter remains a challenge.

The fifty state legislative websites are quite different; few of them are intuitive or adequately transparent, and there is no standardized or systematic process to retrieve data. For many states, it is not possible to identify issue-specific bills that are introduced and/or passed during a specific period of time, let alone the sponsors or committees, without reading the full text of each bill. For researchers who are interested in certain time periods, policy areas, committees, or sponsors, the inability to set filters or immediately see relevant results limits their ability to efficiently collect data.

Frustrated by the obstacles we faced in undertaking a study of state-level immigration legislation before and after September 11, 2001, we decided instead to evaluate each state legislative website — a “state of the states” analysis — to help scholars understand the limitations of the online legislative resources they may want to use. We evaluated three main dimensions on an eleven-point scale: (1) the number of searchable years; (2) the keyword search filters; and (3) the information available on the immediate results pages. The number of searchable years is crucial for researchers interested in longitudinal studies, before/after comparisons, other time-related analyses, and the activity of specific legislators across multiple years. The “search interface” helps researchers to define, filter, and narrow the scope of the bills—a particularly important feature when keywords can generate hundreds of possibilities. The “results interface” allows researchers to determine whether a given bill is relevant to a research project.

Our paper builds on the work of other scholars and organizations interested in state policy. To help begin a centralized space for data collection, Kevin Smith and Scott Granberg-Rademacker publicly invited “researchers to submit descriptions of data sources that were likely to be of interest to state politics and policy scholars,” calling for “centralized, comprehensive, and reliable datasets” that are easy to download and manipulate. In this spirit, Jason Sorens, Fait Muedini, and William Ruger introduced a free database that offered a comprehensive set of variables involving over 170 public policies at the state and local levels in order to “reduce reduplication of scholarly effort.” The National Conference of State Legislatures (NCSL) provides links to state legislatures, bill lists, constitutions, reports, and statutes for all fifty states. The State Legislative History Research Guides compiled by the Indiana University School of Law also include links to legislative and historical resources for the states, such as the Legislative Reference Library of Texas. However, to our knowledge, no existing resource assesses usability across all state websites.

So, what did we find during our assessment of the state websites? In general, we observed that the archival records as well as the search and results functions leave considerable room for improvement. The maximum possible score was 11 in each year, and the average was 3.87 in 2008 and 4.25 in 2010. For researchers interested in certain time periods, policy areas, committees, or sponsors, the inability to set filters, immediately see relevant results, and access past legislative sessions limits their ability to complete projects in a timely manner (or at all). We also found a great deal of variation in site features, content, and navigation. Greater standardization would improve access to information about state policymaking by researchers and the general public—although some legislators may well see benefits to opacity.

While we noted some progress over the study period, not all change was positive. By 2010, two states had scored 10 points (no state achieved the full 11), and fewer states had very low scores. This suggests slow but steady improvement, and the provision of a baseline of support for researchers. However, a quarter of the states saw their scores drop over the study period, for the most part reflecting the adoption of “Powered by Google” search tools that used only keywords, some in a very limited manner. If the latter becomes a trend, websites could become less, not more, user friendly in the future.

In addition, our index may serve as a proxy variable for state government transparency. While the website scores were not statistically associated with Robert Erikson, Gerald Wright, and John McIver’s measure of state ideology, there may nevertheless be promise for future research along these lines; additional transparency determinants worth testing include legislative professionalism and social capital. Moving forward, the states might consider creating a working group to share ideas and best practices, perhaps through an organization like the National Conference of State Legislatures, rather than the national government, as some states might resist leadership from D.C. on federalist grounds.

Helen Margetts (2009) has noted that “The Internet has the capacity to provide both too much (which poses challenges to analysis) and too little data (which requires innovation to fill the gaps).” It is notable, and sometimes frustrating, that state legislative websites illustrate both dynamics. As datasets come online at an increasing rate, it is also easy to forget that websites can vary in terms of user friendliness, hierarchical structure, search terms and functions, terminology, and navigability — causing unanticipated methodological and data capture problems (i.e. headaches) to scholars working in this area.


Read the full paper: Taofang Huang, David Leal, B.J. Lee, and Jill Strube (2012) Assessing the Online Legislative Resources of the American States. Policy and Internet 4 (3-4).

Why do (some) political protest mobilisations succeed? https://ensr.oii.ox.ac.uk/why-do-some-political-protest-mobilisations-succeed/ Fri, 19 Apr 2013 13:40:55 +0000 http://blogs.oii.ox.ac.uk/policy/?p=909 The communication technologies once used by rebels and protesters to gain global visibility now look burdensome and dated: much separates the once-futuristic-looking image of Subcomandante Marcos posing in the Chiapas jungle draped in electronic gear (1994) from the uprisings of the 2011 Egyptian revolution. While the only practical platform for amplifying a message was once provided by organisations, the rise of the Internet means that cross-national networks are now reachable by individuals—who are able to bypass organisations, ditch membership dues, and embrace self-organization. As social media and mobile applications increasingly blur the distinction between public and private, ordinary citizens are becoming crucial nodes in the contemporary protest network.

The personal networks that are the main channels of information flow in sites such as Facebook, Twitter and LinkedIn mean that we don’t need to actively seek out particular information; it can be served to us with no more effort than that of maintaining a connection with our contacts. News, opinions, and calls for justice are now shared and forwarded by our friends—and their friends—in a constant churn of information, all attached to familiar names and faces. Given we are more likely to pass on information if the source belongs to our social circle, this has had an important impact on the information environment within which protest movements are initiated and develop.

Mobile connectivity is also important for understanding contemporary protest, given that the ubiquitous streams of synchronous information we access anywhere are shortening our reaction times. This is important, as the evolution of mass recruitments—whether they result in flash mobilisations, slow burns, or simply damp squibs—can only be properly understood if we have a handle on the distribution of reaction times within a population. The increasing integration of the mainstream media into our personal networks is also important, given that online networks (and independent platforms like Indymedia) are not the clear-cut alternative to corporate media they once were. We can now write on the walls or feeds of mainstream media outlets, creating two-way communication channels and public discussion.

Online petitions have also transformed political protest; lower information diffusion costs mean that support (and signatures) can be scaled up much faster. These petitions provide a mine of information for researchers interested in what makes protests succeed or fail. The study of cascading behaviour in online networks suggests that most chain reactions fail quickly, and most petitions don’t gather that much attention anyway. While large cascades tend to start at the core of networks, network centrality is not always a guarantor of success.

So what does a successful cascade look like? Work by Duncan Watts has shown that the vast majority of cascades are small and simple, terminating within one degree of an initial adopting ‘seed.’ Research has also shown that adoptions resulting from chains of referrals are extremely rare; even for the largest cascades observed, the bulk of adoptions often took place within one degree of a few dominant individuals. Conversely, research on the spreading dynamics of a petition organised in opposition to the 2002-2003 Iraq war showed a narrow but very deep tree-like distribution, progressing through many steps and complex paths. The depth and narrowness of the observed diffusion tree meant that it was fragile—and easily broken at any of the levels required for further distribution. Chain reactions are only successful with the right alignment of factors, and this becomes more likely as more attempts are launched. The rise of social media means that there are now more attempts.
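The intuition that most cascades die within one degree of the seed can be illustrated with a toy branching-process simulation. This is an illustrative sketch only, not a model of any real platform; the contact count and adoption probability are assumptions chosen purely to make the point.

```python
import random

def simulate_cascade(contacts=10, p_adopt=0.05, max_steps=50):
    """One cascade as a simple branching process: each adopter exposes
    `contacts` people, each of whom adopts with probability `p_adopt`.
    Returns (total adopters, depth reached beyond the seed)."""
    current, total, depth = 1, 1, 0
    for _ in range(max_steps):
        # Each current adopter exposes `contacts` fresh people.
        new = sum(1 for _ in range(current * contacts)
                  if random.random() < p_adopt)
        if new == 0:
            break
        total += new
        current = new
        depth += 1
    return total, depth

random.seed(42)
runs = [simulate_cascade() for _ in range(10_000)]
shallow = sum(1 for _, depth in runs if depth <= 1)
print(f"{shallow / len(runs):.0%} of cascades end within one degree of the seed")
```

With these (subcritical) settings the large majority of runs terminate within one degree, echoing Watts’ finding; pushing `p_adopt` above 0.1 makes the process supercritical and large cascades become possible, which is one way to read ‘the right alignment of factors.’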

One consequence of these—very recent—developments is the blurring of the public and the private. A significant portion of political information shared online travels through networks that are not necessarily political, but that can be activated for political purposes as circumstances arise. Online protest networks are decentralised structures that pull together local sources of information and create efficient channels for a potentially global diffusion, but they replicate the recruitment dynamics that operated in social networks prior to the emergence of the Internet.

The wave of protests seen in 2011—including the Arab Spring, the Spanish Indignados, and the Global Occupy Campaign—reflects this global interdependence of localised, personal networks, with protest movements emerging spontaneously from the individual actions of many thousands (or millions) of networked users. Political protest movements are seldom stable and fixed organisational structures, and online networks are inherently suited to channelling this fluid commitment and identity. However, systematic research to uncover the bridges and precise network mechanisms that facilitate cross-border diffusion is still lacking. Decentralised networks facilitate mobilisations of unprecedented reach and speed—but are actually not very good at maintaining momentum, or creating particularly stable structures. For this, traditional organisations are still relevant, even while they struggle to maintain a critical mass.

The general failure of traditional organisations to harness the power of these personal networks results from their complex structure, which complicates any attempts at prediction, planning, and engineering. Mobilisation paths are difficult to predict because they depend on the right alignment of conditions on different levels—from the local information contexts of individuals who initiate or sustain diffusion chains, to the global assembly of separate diffusion branches. The networked chain reactions that result as people jump onto bandwagons follow complex paths; furthermore, the cumulative effects of these individual actions within the network are not linear, due to feedback mechanisms that can cause sudden changes and flips in mobilisation dynamics, such as exponential growth.
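These sudden flips are captured neatly by Granovetter’s classic threshold model of collective behaviour, sketched minimally here: each person joins once the number of current participants reaches their personal threshold, and a tiny change in the threshold distribution can switch the outcome from full mobilisation to almost none.

```python
def mobilisation_size(thresholds):
    """Iterate Granovetter-style threshold dynamics to a fixed point:
    a person joins once the number of current participants meets
    their personal threshold."""
    joined = 0
    while True:
        new_total = sum(1 for t in thresholds if t <= joined)
        if new_total == joined:
            return joined
        joined = new_total

# 100 people with thresholds 0, 1, 2, ..., 99: each joiner tips the next.
print(mobilisation_size(list(range(100))))              # prints 100 (full cascade)

# Change one person's threshold from 1 to 2 and the chain breaks at once.
print(mobilisation_size([0, 2] + list(range(2, 100))))  # prints 1
```

A single altered threshold flips the outcome from everyone mobilising to almost no one, which is why aggregate mobilisation dynamics are so hard to predict or engineer from individual dispositions.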

Of course, protest movements are not created by social media technologies; they provide just one mechanism by which a movement can emerge, given the right social, economic, and historical circumstances. We therefore need to focus less on the specific technologies and more on how they are used if we are to explain why most mobilisations fail, but some succeed. Technology is just a part of the story—and today’s Twitter accounts will soon look as dated as the electronic gizmos used by the Zapatistas in the Chiapas jungle.

]]>
Online collective action and policy change: shifting contentious politics in policy processes https://ensr.oii.ox.ac.uk/online-collective-action-and-policy-change-shifting-contentious-politics-in-policy-processes/ Tue, 02 Apr 2013 10:53:27 +0000 http://blogs.oii.ox.ac.uk/policy/?p=869 Research has disproved the notion, held by political elites and some researchers, that collective action resides outside of policy-making processes and is limited in its capacity to generate a response from government. The Internet can facilitate the involvement of social movements in policy-making processes, but it can also itself constitute an object of contentious politics. Most research on online mobilisations focuses on the Internet as a tool for campaigning and for challenging decision-makers at the national and international level.

Meanwhile, less attention is paid to the fact that the Internet has raised new issues for the policy-making agenda around which activists are mobilising, such as issues related to Internet governance, online freedom of expression, digital privacy, or copyright. Contemporary social movements serve as indicators of new core challenges within society, and can thus constitute an enriching resource for the policy debate arena, particularly with regard to the urgent issues raised by the fast development of the Internet. The literature on social movements is rich in examples of campaigns that have successfully influenced public policy. Classic works have shown how major reforms can start as a consequence of civic mobilisations, and history provides evidence of the influence of collective action within policy debates on environmental, national security, and peace issues.

However, as mentioned above and argued by Giugni (2004), social movement research has traditionally paid more attention to the process rather than the outcomes of mobilisations. The difficulty of identifying the consequences of collective action and the factors that contribute to its success may lie behind this tendency. As Gamson (1975) argues, the notion of success is elusive and can be most usefully defined with reference to a set of outcomes; these may include the new advantages gained by the group’s beneficiaries following a challenge to its targets, or they may refer to the status of the challenging group and its legitimacy.

As for the factors determining success, some scholars believe that collective action is likely to succeed when its claims are close to the aims of political elites. Others, like Kriesi (1995), note that government responses depend on the forms and tactics of contentious politics. Activists occupy different positions on the spectrum between radical and reformist, which in turn affects their willingness to engage with policy-makers. For instance, movements pursuing a type of ‘prefigurative’ politics tend to avoid direct contact with policy-makers, focusing instead on building alternatives which ‘prefigure’ the values that they would like to see on a grander scale.

The Internet has raised new questions around the outcomes of collective action and its interface with policy making. Internet governance, for instance, constitutes a new source of contentious politics, while the use of the Internet for the organisation of protest influences the forms of contemporary collective action. As a tool of collective action, the Internet facilitates the rapid organisation of protests around issues of public concern. Mobilisations can be organised without a formal hierarchy in place, and spread easily as online networking helps the creation of flexible ‘opt-in/opt-out’ coalitions. Information about protest can diffuse through online interpersonal networks and alternative media, and can capture the attention of mainstream news outlets. In addition, the Internet has expanded the ‘repertoire of contention’ of current movements. Petitions, direct action, and occupations now have their online counterparts with tactics such as email bombings, DDoS attacks, and e-petitions.

The affordances of online tools are thought to be affecting the characteristics of collective action and thus its capacities for policy change. Compared to past mobilisations, current movements tend to be more decentralised and flexible, constituted by loose coalitions between a plurality of actors. They are also more inclusive, addressing a diverse and at times incongruent combination of issues. As Della Porta (2005) points out, online mobilisations tend to have a more temporary and fleeting character as they can emerge and dissolve with equal speed; they are also more global in nature, since they can scale up easily and at a low cost.

At the same time, the Internet has facilitated the rise of what Chadwick defines as a new ‘hybrid’ type of civil society actor, which combines traditional tactics, such as petitioning, with the more flexible forms of organising favoured by less institutional groups. Organisations like MoveOn in the U.S., Avaaz on the transnational level, and GetUp! in Australia use the power of the Internet to influence policy makers. The Internet helps such organisations to operate at a very low cost, which in turn allows them to be flexible and easily switch the focus of campaigns around issues that capture the public interest.

However, the Internet has become in itself an object of policy and further attention must be paid in considering its implications as a source of new contentious politics. Melucci (1996) argues that research on collective action should pay attention to the new types of inequality that generate contentious politics, rather than restricting its focus to the forms of mobilisations. Studies of online collective action should thus consider the Internet not only as a tool for practicing politics but also as a new “dominant discourse” which produces new claims and inequalities.

Current inequalities in governing digital mediated communication generate what Kriesi (1995) calls ‘windows of opportunities’ for collective action to play an important role in policy making on Internet-related issues. Within this framework, an increasing body of research is addressing collective action around the governance of the Internet, the regulation of free and open software, privacy and data retention, file sharing and copyright issues, and online freedom of expression as a fundamental human right.

Whether and to what extent these new types of actors and mobilisations organised through and around the Internet can have an effect on policy making is an issue requiring further research. For instance, the decentralised and temporary character of some movements, together with their lack of clearly identified leaders and spokespersons, can make it difficult for them, and for the new ‘hybrid’ actors, to establish themselves as legitimate representatives of public opinion. In addition, new online tactics like e-petitions, which require limited time and commitment from participants, tend to have a weaker impact on policy makers. At the same time, the technical nature of Internet regulation complicates efforts to influence public opinion due to the degree of knowledge necessary for the lay public to understand the issues at stake.

Despite these potential limitations, collective action through and around the internet can enrich policy making. It can help to connect local voices with policy makers and facilitate their involvement in policy-making processes. It can also contribute to new policy debates around urgent issues concerning the Internet. As the Policy and Internet special issue on ‘Online Collective Action and Policy Change’ explores, online collective action can thus constitute an important resource and participant in policy debates with whom policy makers should create new lines of dialogue.

Read the full article at: Calderaro, A. and Kavada, A. (2013) “Challenges and Opportunities of Online Collective Action for Policy Change“, Policy and Internet 5(1).

Twitter: @andreacalderaro / @AnastasiaKavada
Web: Andrea’s Personal Page / Anastasia’s Personal Page

References

Giugni, Marco. 2004. Social Protest and Policy Change : Ecology, Antinuclear, and Peace Movements in Comparative Perspective. Lanham: Rowman & Littlefield.

Gamson, William A. 1975. The Strategy of Social Protest. Homewood, Ill.: Dorsey Press.

Kriesi, Hanspeter. 1995. “The Political Opportunity Structure of New Social Movements: its Impact on their Mobilization.” In The Politics of Social Protest, eds. J. Jenkins and B. Klandermans. London: UCL Press, pp. 167–198.

Della Porta, Donatella, and Mario Diani. 2006. Social Movements: An Introduction. 2nd ed. Malden, MA: Blackwell Pub.

Melucci, Alberto. 1996. Challenging Codes: Collective Action in the Information Age. Cambridge: Cambridge University Press.

]]>
Online collective action and policy change: new special issue from Policy and Internet https://ensr.oii.ox.ac.uk/online-collective-action-and-policy-change-new-special-issue-from-policy-and-internet/ Mon, 18 Mar 2013 14:22:51 +0000 http://blogs.oii.ox.ac.uk/policy/?p=854 The Internet has multiplied the platforms available to influence public opinion and policy making. It has also provided citizens with a greater capacity for coordination and mobilisation, which can strengthen their voice and representation in the policy agenda. As waves of protest sweep both authoritarian regimes and liberal democracies, this rapidly developing field calls for more detailed enquiry. However, research exploring the relationship between online mobilisation and policy change is still limited. This special issue of ‘Policy and Internet’ addresses this gap through a variety of perspectives. Contributions to this issue view the Internet both as a tool that allows citizens to influence policy making, and as an object of new policies and regulations, such as data retention, privacy, and copyright laws, around which citizens are mobilising. Together, these articles offer a comprehensive empirical account of the interface between online collective action and policy making.

Within this framework, the first article in this issue, “Networked Collective Action and the Institutionalized Policy Debate: Bringing Cyberactivism to the Policy Arena?” by Stefania Milan and Arne Hintz (2013), looks at the Internet as both a tool of collective action and an object of policy. The authors provide a comprehensive overview of how computer-mediated communication creates not only new forms of organisational structure for collective action, but also new contentious policy fields. By focusing on what the authors define as ‘techie activists,’ Milan and Hintz explore how new grassroots actors participate in policy debates around the governance of the Internet at different levels. This article provides empirical evidence for what Kriesi (1995) defines as “windows of opportunities” for collective action to contribute to the policy debate around this new space of contentious politics. Milan and Hintz demonstrate how this has happened from the first World Summit on the Information Society (WSIS) in 2003 to more recent debates about Internet regulation.

Yana Breindl and François Briatte’s (2013) article “Digital Protest Skills and Online Activism Against Copyright Reform in France and the European Union” complements Milan and Hintz’s analysis by looking at how the regulation of copyright issues opens up new spaces of contentious politics. The authors compare how online and offline initiatives and campaigns in France around the “Droit d’Auteur et les Droits Voisins dans la Société de l’Information” (DADVSI) and “Haute Autorité pour la diffusion des œuvres et la protection des droits sur Internet” (HADOPI) laws, and in Europe around the Telecoms Package Reform, have contributed to the deliberations within the EU Parliament. They thus add to the rich debate on the contentious issues of intellectual property rights, demonstrating how collective action contributes to this debate at the European level.

The remaining articles in this special issue focus more on the online tactics and strategies of collective actors and the opportunities opened by the Internet for them to influence policy makers. In her article, “Activism and The Online Mediation Opportunity Structure: Attempts to Impact Global Climate Change Policies?” Julie Uldam (2013) discusses the tactics used by London-based environmental activists to influence policy making during the 17th UN climate conference (COP17) in 2011. Based on ethnographic research, Uldam traces the relationship between online modes of action and problem identification and demands. She also discusses the differences between radical and reformist activists in both their preferences for online action and their attitudes towards policy makers. Drawing on Cammaerts’ (2012) framework of the mediation opportunity structure, Uldam shows that radical activists preferred online tactics that aimed at disrupting the conference, since they viewed COP17 as representative of an unjust system. However, their lack of technical skills and resources prevented them from disrupting the conference in the virtual realm. Reformist activists, on the other hand, considered COP17 as a legitimate adversary, and attempted to influence its politics mainly through the diffusion of alternative information online.

The article by Ariadne Vromen and William Coleman (2013) “Online Campaigning Organizations and Storytelling Strategies: GetUp! in Australia,” also investigates a climate change campaign but shifts the focus to the new ‘hybrid’ collective actors, who use the Internet extensively for campaigning. Based on a case study of GetUp!, Vromen and Coleman examine the storytelling strategies employed by the organisation in two separate campaigns, one around climate change, the other around mental health. The authors investigate the factors that led one campaign to be successful and the other to have limited resonance. They also skilfully highlight the difficulties encountered by new collective actors in gaining legitimacy and influencing policy making. In this respect, GetUp! used storytelling to set itself apart from traditional party-based politics and to emphasise its identity as an organiser and representative of grassroots communities, rather than as an insider lobbyist or disruptive protestor.

Romain Badouard and Laurence Monnoyer-Smith (2013), in their article “Hyperlinks as Political Resources: The European Commission Confronted with Online Activism,” explore some of the more structured ways in which citizens use online tools to engage with policy makers. They investigate the political opportunities offered by the e-participation and e-government platforms of the European Commission for activists wishing to make their voice heard in the European policy making sphere. They focus particularly on strategic uses of web technical resources and hyperlinks, which allows citizens to refine their proposals and thus increase their influence on European policy.

Finally, Jo Bates’ (2013) article “The Domestication of Open Government Data Advocacy in the UK: A Neo-Gramscian Analysis” provides a pertinent framework that facilitates our understanding of the policy challenges posed by the issue of open data. The digitisation of data offers new opportunities for increasing transparency, traditionally considered a fundamental public good. By focusing on the Open Government Data initiative in the UK, Bates explores the policy challenges generated by increasing transparency via new Internet platforms, applying the established theoretical instruments of Gramscian ‘Trasformismo.’ This article frames the open data debate in terms consistent with the literature on collective action, and provides empirical evidence as to how citizens have taken an active role in the debate on this issue, thereby challenging the policy debate on public transparency.

Taken together, these articles advance our understanding of the interface between online collective action and policy making. They introduce innovative theoretical frameworks and provide empirical evidence around the new forms of collective action, tactics, and contentious politics linked with the emergence of the Internet. If, as Melucci (1996) argues, contemporary social movements are sensors of new challenges within current societies, they can be an enriching resource for the policy debate arena. Gaining a better understanding of how the Internet might strengthen this process is a valuable line of enquiry.

Read the full article at: Calderaro, A. and Kavada, A. (2013) “Challenges and Opportunities of Online Collective Action for Policy Change“, Policy and Internet 5(1).

Twitter: @AnastasiaKavada / @andreacalderaro
Web: Anastasia’s Personal Page / Andrea’s Personal Page

References

Badouard, R., and Monnoyer-Smith, L. 2013. Hyperlinks as Political Resources: The European Commission Confronted with Online Activism. Policy and Internet 5(1).

Bates, J. 2013. The Domestication of Open Government Data Advocacy in the UK: A Neo-Gramscian Analysis. Policy and Internet 5(1).

Breindl, Y., and Briatte, F. 2013. Digital Protest Skills and Online Activism Against Copyright Reform in France and the European Union. Policy and Internet 5(1).

Cammaerts, Bart. 2012. “Protest Logics and the Mediation Opportunity Structure.” European Journal of Communication 27(2): 117–134.

Kriesi, Hanspeter. 1995. “The Political Opportunity Structure of New Social Movements: its Impact on their Mobilization.” In The Politics of Social Protest, eds. J. Jenkins and B. Klandermans. London: UCL Press, pp. 167–198.

Melucci, Alberto. 1996. Challenging Codes: Collective Action in the Information Age. Cambridge: Cambridge University Press.

Milan, S., and Hintz, A. 2013. Networked Collective Action and the Institutionalized Policy Debate: Bringing Cyberactivism to the Policy Arena? Policy and Internet 5(1).

Uldam, J. 2013. Activism and the Online Mediation Opportunity Structure: Attempts to Impact Global Climate Change Policies? Policy and Internet 5(1).

Vromen, A., and Coleman, W. 2013. Online Campaigning Organizations and Storytelling Strategies: GetUp! in Australia. Policy and Internet 5(1).

]]>
Did Libyan crisis mapping create usable military intelligence? https://ensr.oii.ox.ac.uk/did-libyan-crisis-mapping-create-usable-military-intelligence/ Thu, 14 Mar 2013 10:45:22 +0000 http://blogs.oii.ox.ac.uk/policy/?p=817 The Middle East has recently witnessed a series of popular uprisings against autocratic rulers. In mid-January 2011, Tunisian President Zine El Abidine Ben Ali fled his country, and just four weeks later, protesters overthrew the regime of Egyptian President Hosni Mubarak. Yemen’s government was also overthrown in 2011, and Morocco, Jordan, and Oman saw significant governmental reforms leading, if only modestly, toward the implementation of additional civil liberties.

Protesters in Libya called for their own ‘day of rage’ on February 17, 2011, marked by violent protests in several major cities, including the capital, Tripoli. As they transformed from ‘protesters’ to ‘opposition forces’ they began pushing information onto Twitter, Facebook, and YouTube, reporting their firsthand experiences of what had turned into a civil war virtually overnight. The evolving humanitarian crisis prompted the United Nations to request the creation of the Libya Crisis Map, which was made public on March 6, 2011. Other, more focused crisis maps followed, and were widely distributed on Twitter.

While the map was initially populated with humanitarian information pulled from the media and online social networks, as the imposition of an internationally enforced No Fly Zone (NFZ) over Libya became imminent, information began to appear on it that appeared to be of a tactical military nature. While many people continued to contribute conventional humanitarian information to the map, the sudden shift toward information that could aid international military intervention was unmistakable.

How useful was this information, though? Agencies in the U.S. Intelligence Community convert raw data into usable information (incorporated into finished intelligence) by utilizing some form of the Intelligence Process. As outlined in the U.S. military’s joint intelligence manual, this consists of six interrelated steps, all centered on a specific mission. It is interesting that many Twitter users, though perhaps unaware of the intelligence process, replicated each step during the Libyan civil war, producing finished intelligence adequate for consumption by NATO commanders and rebel leadership.

It was clear from the beginning of the Libyan civil war that very few people knew exactly what was happening on the ground. Even NATO, according to one of the organization’s spokesmen, lacked the ground-level informants necessary to get a full picture of the situation in Libya. There is no public information about the extent to which military commanders used information from crisis maps during the Libyan civil war. According to one NATO official, “Any military campaign relies on something that we call ‘fused information’. So we will take information from every source we can… We’ll get information from open source on the internet, we’ll get Twitter, you name any source of media and our fusion centre will deliver all of that into useable intelligence.”

The data in these crisis maps came from a variety of sources, including journalists, official press releases, and civilians on the ground who updated blogs and/or maintained telephone contact. The @feb17voices Twitter feed (translated into English and used to support the creation of The Guardian’s and the UN’s Libya Crisis Map) included accounts of live phone calls from people on the ground in areas where the Internet was blocked, and where there was little or no media coverage. Twitter users began compiling data and information; they tweeted and retweeted data they collected, information they filtered and processed, and their own requests for specific data and clarifications.

Information from various Twitter feeds was then published in detailed maps of major events that contained information pertinent to military and humanitarian operations. For example, as fighting intensified, @LibyaMap’s updates began to provide a general picture of the battlefield, including specific, sourced intelligence about the progress of fighting, humanitarian and supply needs, and the success of some NATO missions. Although it did not explicitly state its purpose as spreading mission-relevant intelligence, the nature of the information renders alternative motivations highly unlikely.

Interestingly, the Twitter users featured in a June 2011 article by the Guardian had already explicitly expressed their intention of affecting military outcomes in Libya by providing NATO forces with specific geographical coordinates to target Qadhafi regime forces. We could speculate at this point about the extent to which the Intelligence Community might have guided Twitter users to participate in the intelligence process; while NATO and the Libyan Opposition issued no explicit intelligence requirements to the public, they tweeted stories about social network users trying to help NATO, likely leading their online supporters to draw their own conclusions.

It appears from similar maps created during the ongoing uprisings in Syria that the creation of finished intelligence products by crisis mappers may become a regular occurrence. Future study should focus on determining the motivations of mappers for collecting, processing, and distributing intelligence, particularly as a better understanding of their motivations could inform research on the ethics of crisis mapping. It is reasonable to believe that some (or possibly many) crisis mappers would be averse to their efforts being used by military commanders to target “enemy” forces and infrastructure.

Indeed, some are already questioning the direction of crisis mapping in the absence of professional oversight (Global Brief 2011): “[If] crisis mappers do not develop a set of best practices and shared ethical standards, they will not only lose the trust of the populations that they seek to serve and the policymakers that they seek to influence, but (…) they could unwittingly increase the number of civilians being hurt, arrested or even killed without knowing that they are in fact doing so.”


Read the full paper: Stottlemyre, S., and Stottlemyre, S. (2012) Crisis Mapping Intelligence Information During the Libyan Civil War: An Exploratory Case Study. Policy and Internet 4 (3-4).

]]>
Experiments are the most exciting thing on the UK public policy horizon https://ensr.oii.ox.ac.uk/experiments-are-the-most-exciting-thing-on-the-uk-public-policy-horizon/ Thu, 28 Feb 2013 10:20:29 +0000 http://blogs.oii.ox.ac.uk/policy/?p=392
Iraq War protesters in Trafalgar Square, London
What makes people join political actions? Iraq War protesters crowd Trafalgar Square in February 2007. Image by DavidMartynHunt.
Experiments – or more technically, Randomised Controlled Trials – are the most exciting thing on the UK public policy horizon. In 2010, the incoming Coalition Government set up the Behavioural Insights Team in the Cabinet Office to find innovative and cost-effective (cheap) ways to change people’s behaviour. Since then the team have run a number of exciting experiments with remarkable success, particularly in terms of encouraging organ donation and timely payment of taxes. With Bad Science author Ben Goldacre, they have now published a Guide to RCTs, and plenty more experiments are planned.
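For readers unfamiliar with how such trials are analysed: at its simplest, an RCT comparing two letter variants reduces to a difference in proportions between randomly assigned groups. The sketch below uses entirely hypothetical numbers (not figures published by the Behavioural Insights Team) and a standard two-proportion z-test.

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions,
    using a pooled standard error (a standard RCT analysis)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via the error function; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a - p_b, z, p_value

# Hypothetical trial: 1,000 taxpayers per arm; the treatment letter adds
# a social-norm message ("most people pay their tax on time").
diff, z, p = two_proportion_ztest(680, 1000, 620, 1000)
print(f"effect = {diff:+.1%}, z = {z:.2f}, p = {p:.4f}")
```

The appeal of randomisation is that, because assignment to arms is random, a difference this unlikely under the null can be attributed to the letter itself rather than to differences between the groups.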

This sudden enthusiasm for experiments in the UK government is very exciting. The Behavioural Insights Team is the first of its kind in the world: in the US there are few experiments at federal level, although there have been a few well-publicised ones at local level, and the UK government has always been rather scared of the concept before, there being a number of cultural barriers to the very word ‘experiment’ in British government. Experiments came to the fore in the previous Administration’s Mindspace document. But what made them popular for public policy may well have been the 2008 book Nudge by Thaler and Sunstein, which shows that by knowing how people think, it is possible to design choice environments that make it “easier for people to choose what is best for themselves, their families, and their society.” Since then, the political scientist Peter John has published ‘Nudge, Nudge, Think, Think’, which has received positive coverage in The Economist: The use of behavioural economics in public policy shows promise and the Financial Times: Nudge, nudge. Think, think. Say no more …; and has been reviewed by the LSE Review of Books: Nudge, Nudge, Think, Think: experimenting with ways to change civic behaviour.

But there is one thing missing here. Very few of these experiments use manipulation of information environments on the internet as a way to change people’s behaviour. The Internet seems to hold enormous promise for ‘Nudging’ by redesigning ‘choice environments’, yet Thaler and Sunstein’s book hardly mentions it, and none of the BIT’s experiments so far have used the Internet, although a new experiment looking at ways of encouraging court attendees to pay fines is based on text messages.

So, at the Oxford Internet Institute we are doing something about that. At OxLab, an experimental laboratory for the social sciences run by the OII and Said Business School, we are running online experiments to test the impact of various online platforms on people’s behaviour. For example, two reports for the UK National Audit Office: Government on the Internet (2007) and Communicating with Customers (2009) carried out by a joint OII-LSE team used experiments to see how people search for and find government-internet related information. Further experiments investigated the impact of various types of social influence, particularly social information about the behaviour of others and visibility (as opposed to anonymity), on the propensity of people to participate politically.

And the OII-edited journal Policy and Internet has been a good venue for experimentalists to publicise their work. For example, Stephan Grimmelikhuijsen’s paper Transparency of Public Decision-Making: Towards Trust in Local Government? (Policy & Internet 2010; 2:1) reports an experiment to see whether transparency (relating to decision-making by local government) actually leads to higher levels of trust. Interestingly, his results indicated that participants exposed to more information (in this case, full council minutes) were significantly more negative about the perceived competence of the council than those who did not access all the available information. Additionally, participants who received only restricted information about the minutes perceived the council as less honest than those who did not read them at all.

]]>
Preserving the digital record of major natural disasters: the CEISMIC Canterbury Earthquakes Digital Archive project https://ensr.oii.ox.ac.uk/preserving-the-digital-record-of-major-natural-disasters-the-ceismic-canterbury-earthquakes-digital-archive-project/ Fri, 29 Jun 2012 09:57:55 +0000 http://blogs.oii.ox.ac.uk/policy/?p=277 The 6.2 magnitude earthquake that struck the centre of Christchurch on 22 February 2011 claimed 185 lives, damaged 80% of the central city beyond repair, and forced the abandonment of 6000 homes. It was the third costliest insurance event in history. The CEISMIC archive developed at the University of Canterbury will soon have collected almost 100,000 digital objects documenting the experiences of the people and communities affected by the earthquake, all of them available for study.

The Internet can be hugely useful for coordinating disaster relief efforts, or for helping rebuild affected communities. Paul Millar came to the OII on 21 May 2012 to discuss the CEISMIC archive project and the role of the digital humanities after a major disaster. We talked to him afterwards.

Ed: You have collected a huge amount of information about the earthquake and people’s experiences that would otherwise have been lost: how do you think it will be used?

Paul: From the beginning I was determined to avoid being prescriptive about eventual uses. The secret of our success has been to stick to the principles of open data, open access and collaboration — the more content we can collect, the better chance future generations have to understand and draw conclusions from our experiences, behaviour and decisions. We have already assisted a number of research projects in public health, the social and physical sciences; even accounting. One of my colleagues reads balance sheets the way I read novels, and discovers all sorts of earthquake-related signs of cause and effect in them. I’d never have envisaged such a use for the archive. We have made our ontology as detailed and flexible as possible in order to help with re-purposing of primary material: we currently use three layers of metadata — machine-generated, human-curated and crowdsourced. We also intend to work more seriously on our GIS capabilities.

Ed: How do you go about preserving this information during a period of tremendous stress and chaos? Was it difficult to convince people of the importance of this longer-term view?

Paul: There was no difficulty convincing people of the importance of what we were doing: everyone got it immediately. However, the scope of this disaster is difficult to comprehend, even for those of us who live with it every day. We’ve lost a lot of material already, and we’re losing more every day. Our major telecommunications provider recently switched off its CDMA network — all those redundant phones are gone, and with them any earthquake pictures or texts that might have been stored. One of the things I’d encourage every community to do now is make an effort to preserve key information against a day of disaster. If we’d digitised all our architectural plans of heritage buildings and linked them electronically to building reports and engineering assessments, we might have saved more.

Ed: It seems obvious in hindsight that the Internet can (and should) be tremendously useful in the event of this sort of disaster: how do we ensure that best use is made?

Paul: The first thing is to be prepared, even in a low-key way, for whatever might happen. Good decision-making during a disaster requires accurate, accessible, and comprehensive data: digitisation and data linking are key activities in the creation of such a resource — and robust processes to ensure that information is of high quality are vital. One of the reasons CEISMIC works is because it is a federated archive — an ideal model for this sort of event — and we were able to roll it out extremely quickly. We could also harness online expert communities, crowd-sourcing efforts, open sourcing of planning processes, and robust vetting of information and auditing of outcomes. A lot of this needs to be done before a disaster strikes, though. For years I’ve encountered the mantra ‘we support research but we don’t fund databases’. We had to build CEISMIC because there was no equivalent, off-the-shelf product — but that development process lost us a year at least.

Ed: What equivalent efforts are there to preserve information about major disasters?

Paul: The obvious ones are the world-leading projects out of the Center for History and New Media at George Mason University, including their 9/11 Digital Archive. One problem for any archive of this nature is that information doesn’t exist in a free and unmediated space. For example, the only full record of the pre-quake Christchurch cityscape is historic Google Street View; one of the most immediate sources of quake information was Twitter; many people communicated with the world via Facebook, and so on. It’s a question we’re all engaging with: who owns that information? How will it be preserved and accessed? We’ve had a lot of interest in what we are doing, and plenty of consultation and discussion with groups who see our model as being of some relevance to them. The UC CEISMIC project is essentially a proof of concept — versions of it could be rolled out around the world and left to tick over in the background, quietly accumulating material in the event that it is needed one day. That’s a small cost alongside losing a community’s heritage.

Ed: What difficulties have you encountered in setting up the archive?

Paul: Where do I start? There were the personal difficulties — my home damaged, my family traumatised, the university damaged, staff and students all struggling in different ways to cope: it’s not the ideal environment to try and introduce a major IT project. But I felt I had to do something, partly as a therapeutic response. I saw my engineering and geosciences colleagues at the front of the disaster, explaining what was happening, helping to provide context and even reassurance. For quite a while I wondered what on earth a professor of literature could do. It was James Smithies – now CEISMIC’s Project Manager – who reminded me of the 9/11 Archive. The difficulties we’ve encountered since have been those that beset most under-resourced projects — trying to build a million dollar project on a much smaller budget. A lot of the future development will be funding dependent, so much of my job will be getting the word out and looking for sponsors, supporters and partners. But although we’re understaffed, over-worked and living in a shaky city, the resilience, courage, humanity and good will of so many people never ceases to amaze and hearten me.

Ed: Your own research area is English Literature: has that had any influence on the sorts of content that have been collected, or your own personal responses to it?

Paul: My interest in digital archiving started when teaching New Zealand Literature at Victoria University of Wellington. In a country this small most books have a single print run of a few hundred; and even our best writers are lucky to have a text make it to a second edition. I therefore encountered the problem that many of the texts I wanted to prescribe were out of print: digitisation seemed like a good solution. In New Zealand the digital age has negated distance — the biggest factor preventing us from immediate and meaningful engagement with the rest of the world. CEISMIC actually started life as an acronym (the Canterbury Earthquakes Images, Stories and Media Integrated Collection), and the fact that ‘stories’ sits centrally certainly represents my own interest in the way we use narratives to make sense of experience. Everyone who went through the earthquakes has a story, and every story is different. I’m fascinated by the way a collective catastrophe becomes so much more meaningful when it is broken down into individual narratives. Ironically, despite the importance of this project to me, I find the earthquakes extremely difficult to write about in any personal or creative way. I haven’t written my own earthquake story yet.


Paul Millar was talking to blog editor David Sutcliffe.

]]>
Slicing digital data: methodological challenges in computational social science https://ensr.oii.ox.ac.uk/slicing-digital-data-methodological-challenges-in-computational-social-science/ Wed, 30 May 2012 10:45:26 +0000 http://blogs.oii.ox.ac.uk/policy/?p=337 One of the big social science questions is how our individual actions aggregate into collective patterns of behaviour (think crowds, riots, and revolutions). This question has so far been difficult to tackle due to a lack of appropriate data, and the complexity of the relationship between the individual and the collective. Digital trails are allowing Social Scientists to understand this relationship better.

Small changes in individual actions can have large effects at the aggregate level; this opens up the potential for drawing incorrect conclusions about generative mechanisms when only aggregated patterns are analysed, as Schelling aimed to show in his classic model of racial segregation, where even mild individual preferences for having some similar neighbours can generate starkly segregated neighbourhoods.
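
Schelling's point is easy to reproduce in a few lines of code. Below is a minimal, hypothetical sketch of the model in one dimension (Schelling's original used a two-dimensional grid); the agent counts, neighbourhood radius and tolerance threshold are all illustrative choices, not taken from any particular study:

```python
import random

def like_fraction(cells, i, radius=2):
    """Fraction of occupied neighbours (within `radius`) sharing agent i's type."""
    nbrs = [cells[j] for j in range(max(0, i - radius), min(len(cells), i + radius + 1))
            if j != i and cells[j] is not None]
    return 1.0 if not nbrs else sum(n == cells[i] for n in nbrs) / len(nbrs)

def run(n_agents=50, n_vacant=10, threshold=0.5, steps=3000, seed=1):
    """Agents of two types on a line; an unhappy agent (fewer than `threshold`
    like neighbours) moves to a random vacant cell."""
    random.seed(seed)
    cells = ['A'] * (n_agents // 2) + ['B'] * (n_agents - n_agents // 2) + [None] * n_vacant
    random.shuffle(cells)                                 # start from a random mix
    for _ in range(steps):
        i = random.choice([k for k, c in enumerate(cells) if c is not None])
        if like_fraction(cells, i) < threshold:           # unhappy agent...
            j = random.choice([k for k, c in enumerate(cells) if c is None])
            cells[j], cells[i] = cells[i], None           # ...moves to a vacancy
    return cells

cells = run()
occupied = [i for i, c in enumerate(cells) if c is not None]
print("average like-neighbour fraction:",
      round(sum(like_fraction(cells, i) for i in occupied) / len(occupied), 2))
```

The mechanism at the individual level is a modest preference ("at least half my neighbours like me"), yet the moves it triggers typically push the average like-neighbour fraction well above that threshold — an aggregate pattern that, read on its own, would suggest much stronger individual preferences than actually exist.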

Part of the reason why it has been so difficult to explore this connection between the individual and the collective — and the unintended consequences that arise from it — is the lack of proper empirical data, particularly about the structure of interdependence that links individual actions. This relational information is what digital data are now providing; however, they present some new challenges to the social scientist, particularly those used to working with smaller, cross-sectional datasets. Suddenly, we can track and analyse the interactions of thousands (if not millions) of people with a time resolution that can go down to the second. The question is how best to aggregate those data and deal with the time dimension.

Interactions take place in continuous time; however, most digital interactions are recorded as events (e.g. sending or receiving messages), and different network structures emerge when those events are aggregated according to different windows (e.g. days, weeks, or months). We still don’t have systematic knowledge of how transforming continuous data into discrete observation windows affects the networks of interaction we analyse. Reconstructing interpersonal networks (particularly longitudinal network data) used to be extremely time consuming and difficult; now it is relatively easy to obtain that sort of network data, but modelling and analysing them is still a challenge.
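
The window-dependence problem can be made concrete with a small sketch. The events below are invented for illustration (names and timestamps are hypothetical), but the mechanics are general: the same stream of timestamped messages yields different network snapshots depending on the observation window chosen:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical timestamped message events: (sender, receiver, time).
events = [
    ("ana", "ben", datetime(2012, 5, 1, 9)),
    ("ben", "cat", datetime(2012, 5, 1, 18)),
    ("ana", "cat", datetime(2012, 5, 3, 12)),
    ("cat", "ana", datetime(2012, 5, 8, 10)),
    ("ben", "ana", datetime(2012, 5, 9, 15)),
]

def snapshots(events, window):
    """Aggregate a continuous event stream into one edge set per window."""
    start = min(t for _, _, t in events)
    nets = defaultdict(set)
    for u, v, t in events:
        idx = (t - start) // window        # which observation window this event falls in
        nets[idx].add((u, v))
    return dict(nets)

daily = snapshots(events, timedelta(days=1))
weekly = snapshots(events, timedelta(weeks=1))
print(len(daily), "daily snapshots vs", len(weekly), "weekly snapshots")
```

With daily windows these five messages produce four sparse snapshots of one or two edges each; with weekly windows, the first three messages collapse into a single snapshot in which ana, ben and cat form a closed triangle — a structure that exists in no daily network. Any measure computed on these snapshots (density, clustering, reachability) therefore depends on a windowing decision that the raw data do not dictate.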

Another problem faced by social scientists using digital data is that most social networks are multiplex in nature, that is, we belong to many different networks that interact and affect each other by means of feedback effects: How do all these different network structures co-evolve? If we only focus on one network, such as Twitter, we lose information about how activity in other networks (like Facebook, or email, or offline communication) is related to changes in the network we observe. In our study on the Spanish protests, we only track part of the relevant activity: we have a good idea of what was happening on Twitter, but there were obviously lots of other communication networks simultaneously having an influence on people’s behaviour. And while it is exciting as a social scientist to be able to access and analyse huge quantities of detailed data about social movements as they happen, the Twitter network only provides part of the picture.

Finally, when analysing the cascading effects of individual actions there is also the challenge of separating out the effects of social influence and self-selection. Digital data allow us to follow cascading behaviour with better time resolution, but the observational data usually do not help discriminate whether people behave similarly because they influence and follow each other or because they share similar attributes and motivations. Social scientists need to find ways of controlling for this self-selection in online networks; although digital data often lack the demographic information that would allow such a control, digital technologies are also helping researchers conduct experiments that pin down the effects of social influence.

Digital data are allowing social scientists to pose questions that couldn’t be answered before. However, there are many methodological challenges that need solving. This talk considers a few, emphasising that strong theoretical motivations should still direct the questions we pose to digital data.

Further reading:

González-Bailón, S., Borge-Holthoefer, J. and Moreno, Y. (2013) Broadcasters and Hidden Influentials in Online Protest Diffusion. American Behavioral Scientist (forthcoming).

González-Bailón, S., Wang, N., Rivero, A., Borge-Holthoefer, J. and Moreno, Y. (2012) Assessing the Bias in Communication Networks Sampled from Twitter. Working Paper.

González-Bailón, S., Borge-Holthoefer, J., Rivero, A. and Moreno, Y. (2011) The Dynamics of Protest Recruitment Through an Online Network. Scientific Reports 1, 197. DOI: 10.1038/srep00197

González-Bailón, S., Kaltenbrunner, A. and Banchs, R.E. (2010) The Structure of Political Discussion Networks: A Model for the Analysis of Online Deliberation. Journal of Information Technology 25 (2) 230-243.

]]>