Can “We the People” really help draft a national constitution? (sort of..)
https://ensr.oii.ox.ac.uk/can-we-the-people-really-help-draft-a-national-constitution-sort-of/ (Thu, 16 Aug 2018)

As innovations like social media and open government initiatives have become an integral part of politics in the twenty-first century, there is increasing interest in the possibility of citizens directly participating in the drafting of legislation. Indeed, there is a clear trend towards greater public participation in the process of constitution making, and with the growth of e-democracy tools, this trend is likely to continue. However, this view is certainly not universally held, and a number of recent studies have been much more skeptical about the value of public participation, questioning whether it has any real impact on the text of a constitution.

Following the banking crisis, and a groundswell of popular opposition to the existing political system in 2009, the people of Iceland embarked on a unique process of constitutional reform. The drafters opened the entire process to public input and scrutiny, and these efforts culminated in Iceland’s 2011 draft crowdsourced constitution: reputedly the world’s first. In his Policy & Internet article “When Does Public Participation Make a Difference? Evidence From Iceland’s Crowdsourced Constitution”, Alexander Hudson examines the impact that the Icelandic public had on the development of the draft constitution. He finds that almost 10 percent of the written proposals submitted generated a change in the draft text, particularly in the area of rights.

This remarkably high number is likely explained by the isolation of the drafters from both political parties and special interests, making them more reliant on and open to input from the public. However, although this would appear to be an example of successful public crowdsourcing, the new constitution was ultimately rejected by parliament. Iceland’s experiment with participatory drafting therefore demonstrates the possibility of successful online public engagement — but also the need to connect the masses with the political elites. It was the disconnect between these groups that triggered the initial protests and constitutional reform, but also that led to its ultimate failure.

We caught up with Alexander to discuss his findings.

Ed: We know from Wikipedia (and other studies) that group decisions are better, and crowds can be trusted. However, I guess (re US, UK) I also feel increasingly nervous about the idea of “the public” having a say over anything important and binding. How do we distribute power and consultation, while avoiding populist chaos?  

Alexander: That’s a large and important question, which I can probably answer only in part. One thing we need to be careful of is what kind of public we are talking about. In many cases, we view self-selection as a bad thing — it can’t be representative. However, in cases like Wikipedia, we see self-selected individuals with specialized knowledge and an uncommon level of interest collaborating. I would suggest that there is an important difference between the kind of decisions that are made by careful and informed participants in citizens’ juries, deliberative polls, or Wikipedia editing, and the oversimplified binary choices that we make in elections or referendums.

So, while there is research to suggest that large numbers of ordinary people can make better decisions, there are some conditions in terms of prior knowledge and careful consideration attached to that. I have high hopes for these more deliberative forms of public participation, but we are right to be cautious about referendums. The Icelandic constitutional reform process actually involved several forms of public participation, including two randomly selected deliberative fora, self-selected online participation, and a popular referendum with several questions.

Ed: A constitution is a very technical piece of text: how much could non-experts realistically contribute to its development — or was there also contribution from specialised interest groups? Presumably there was a team of lawyers and drafters managing the process? 

Alexander: All of these things were going on in Iceland’s drafting process. In my research here and on a few other constitution-making processes in other countries, I’ve been impressed by the ability of citizens to engage at a high level with fundamental questions about the nature of the state, constitutional rights, and legal theory. Assuming a reasonable level of literacy, people are fully capable of reading some literature on constitutional law and political philosophy, and writing very well-informed submissions that express what they would like to see in the constitutional text. A small, self-selected set of the public in many countries seeks to engage in spirited and for the most part respectful debate on these issues. In the Icelandic case, these debates have continued from 2009 to the present.

I would also add that public interest is not distributed uniformly across all the topics that constitutions cover. Members of the public show much more interest in discussing issues of human rights, and have more success in seeing proposals on that theme included in the draft constitution. Some NGOs were involved in submitting proposals to the Icelandic Constitutional Council, but interest groups do not appear to have been a major factor in the process. Unlike some constitution-making processes, the Icelandic Constitutional Council had a limited staff, and the drafters themselves were very engaged with the public on social media.

Ed: I guess Iceland is fairly small, but also unusually homogeneous. That helps, presumably, in creating a general consensus across a society? Or will party / political leaning always tend to trump any sense of common purpose and destiny, when defining the form and identity of the nation?

Alexander: You are certainly right that Iceland is unusual in these respects, which raises important questions about what this is a case of, and how the findings here can inform us about what might happen in other contexts. I would not say that the Icelandic people reached any sort of broad, national-level consensus about how the constitution should change. During the early part of the drafting process, it seems that those who had strong disagreements with what was taking place absented themselves from the proceedings. They did turn up later to some extent (especially after the 2012 referendum), and sought to prevent this draft from becoming law.

Where the small size and homogeneous population really came into play in Iceland was in the level of knowledge that those who participated had of one another before entering into the constitution-making process. While this has been overemphasized in some discussions of Iceland, there are communities of shared interests where people all seem to know each other, or at least know of each other. This makes forming new societies, NGOs, or interest groups easier, and probably helped to launch the constitution-making project in the first place.

Ed: How many people were involved in the process — and how were bad suggestions rejected, discussed, or improved? I imagine there must have been divisive issues, that someone would have had to arbitrate? 

Alexander: The number of people who interacted with the process in some way, either by attending one of the public forums that took place early in the process, voting in the election for the Constitutional Council, or engaging with the process on social media, is certainly in the tens of thousands. In fact, one of the striking things about this case is that 522 people stood for election to the 25-member Constitutional Council which drafted the new constitution. So there was certainly a high level of interest in participating in this process.

My research here focused on the written proposals that were posted to the Constitutional Council’s website. 204 individuals participated in that more intensive way. As the members of the Constitutional Council tell it, they would read some of the comments on social media, and the formal submissions on their website during their committee meetings, and discuss amongst themselves which ideas should be carried forward into the draft. The vast majority of the submissions were well-informed, on topic, and conveyed a collegial tone. In this case at least, there was very little of the kind of abusive participation that we observe in some online networks. 

Ed: You say that despite the success in creating a crowd-sourced constitution (that passed a public referendum), it was never ratified by parliament — why is that? And what lessons can we learn from this?

Alexander: Yes, this is one of the most interesting aspects of the whole thing for scholars, and certainly a source of some outrage for those Icelanders who are still active in trying to see this draft constitution become law. Some of this relates to the specifics of Iceland’s constitutional amendment process (which disincentivizes parliament from approving changes in between elections), but I think that there are also a couple of broadly applicable things going on here. First, the constitution-making process arose as a response to the way that the Icelandic government was perceived to have failed in governing the financial system in the late 2000s. By the time a last-ditch attempt to bring the draft constitution up for a vote in parliament occurred right before the 2013 election, almost five years had passed since the crisis that began this whole saga, and the economic situation had begun to improve. So legislators were no longer feeling pressure to address those issues.

Second, since political parties were not active in the drafting process, too few members of parliament had a stake in the issue. If one of the larger parties had taken ownership of this draft constitution, we might have seen a different outcome. I think this is one of the most important lessons from this case: if the success of the project depends on action by elite political actors, they should be involved in the earlier stages of the process. For various reasons, the Icelanders chose to exclude professional politicians from the process, but that meant that the Constitutional Council had too few friends in parliament to ratify the draft.

Read the full article: Hudson, A. (2018) When Does Public Participation Make a Difference? Evidence From Iceland’s Crowdsourced Constitution. Policy & Internet 10 (2) 185–217. doi:10.1002/poi3.167

Alexander Hudson was talking to blog editor David Sutcliffe.

Bursting the bubbles of the Arab Spring: the brokers who bridge ideology on Twitter
https://ensr.oii.ox.ac.uk/bursting-the-bubbles-of-the-arab-spring-the-brokers-who-bridge-ideology-on-twitter/ (Fri, 27 Jul 2018)

Online activism has become increasingly visible, with social media platforms being used to express protest and dissent from the Arab Spring to #MeToo. Scholarly interest in online activism has grown with its use, together with disagreement about its impact. Do social media really challenge traditional politics? Some claim that social media have had a profound and positive effect on modern protest — the speed of information sharing making online networks highly effective in building revolutionary movements. Others argue that this activity is merely symbolic: online activism has little or no impact, dilutes offline activism, and weakens social movements. Given online activity doesn’t involve the degree of risk, trust, or effort required on the ground, they argue that it can’t be considered to be “real” activism. In this view, the Arab Spring wasn’t simply a series of “Twitter revolutions”.

Despite much work on offline social movements and coalition building, few studies have used social network analysis to examine the influence of brokers among online activists (i.e. those who act as a bridge between different ideological groups), or their role in information diffusion across a network. In her Policy & Internet article “Brokerage Roles and Strategic Positions in Twitter Networks of the 2011 Egyptian Revolution”, Deena Abul-Fottouh tests whether social movements theory of networks and coalition building — developed to explain brokerage roles in offline networks, between established parties and organisations — can also be used to explain what happens online.

Social movements theory suggests that actors who occupy an intermediary structural position between different ideological groups are more influential than those embedded only in their own faction. That is, the “bridging ties” that link across political ideologies have a greater impact on mobilization than the bonding ties within a faction. Indeed, examining the Egyptian revolution and ensuing crisis, Deena finds that these online brokers were more evident during the first phase of movement solidarity between liberals, Islamists, and socialists than in the period of schism and crisis (2011-2014) that followed the initial protests. However, she also found that the online brokers didn’t match the brokers on the ground: they played different roles, complementing rather than mirroring each other in advancing the revolutionary movement.
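In network terms, a broker occupies a position on the paths that connect members of different ideological camps. As a rough illustration of the idea (a minimal sketch with invented accounts and camp labels, not the article’s data or exact measure), one can count the two-step paths through each node that link two different camps, in the spirit of Gould and Fernandez’s brokerage roles:

```python
import networkx as nx

# Toy directed network of interactions; accounts and camp labels are
# hypothetical, chosen only to illustrate the measure.
G = nx.DiGraph()
G.add_edges_from([("a", "b"), ("b", "c"), ("c", "b"), ("b", "d"), ("d", "e")])
camps = {"a": "liberal", "b": "socialist", "c": "Islamist",
         "d": "liberal", "e": "Islamist"}

# For each node, count two-step paths src -> node -> dst whose endpoints
# sit in different camps: the node acts as the bridge between them.
brokerage = {n: 0 for n in G}
for node in G:
    for src in G.predecessors(node):
        for dst in G.successors(node):
            if src != dst and camps[src] != camps[dst]:
                brokerage[node] += 1

print(sorted(brokerage.items(), key=lambda kv: -kv[1]))  # "b" bridges most
```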

We caught up with Deena to discuss her findings:

Ed: Firstly: is the “Arab Spring” a useful term? Does it help to think of the events that took place across parts of the Middle East and North Africa under this umbrella term — which I suppose implies some common cause or mechanism?

Deena: Well, I believe it’s useful to an extent. It helps describe some positive common features that existed in the region: dissatisfaction with the existing regimes, a dissatisfaction that was transformed from the domain of advocacy to the domain of high-risk activism; a common feeling among the people that they could make a difference, even though it did not last long; and the evidence that there are young people in the region who are willing to sacrifice for their freedom. On the other hand, structural forces in the region such as the power of deep states and the forces of counter-revolution were capable of halting this Arab Spring before it burgeoned or bore fruit, so maybe the term “Spring” is no longer relevant.

Ed: Revolutions have been happening for centuries, i.e. they obviously don’t need Twitter or Facebook to happen. How significant do you think social media were in this case, either in sparking or sustaining the protests? And how useful are these new social media data as a means to examine the mechanisms of protest?

Deena: Social media platforms have proven useful in facilitating protests, for example by allowing information to be shared quickly and widely across borders. People in Egypt and other places in the region were influenced by Tunisia, and protest tactics were shared online. In other words, social media platforms definitely facilitate the diffusion of protests. They are also hubs for creating a common identity and culture among activists, which is crucial for the success of social movements. I also believe that social media present activists with various ways to circumvent the policing of activism (e.g. using pseudonyms to hide activists’ identities, sharing information about places to avoid during protests, or forming closed groups with enough privacy to discuss non-public matters).

However, social media ties are weak ties. These platforms are not necessarily efficient in building the trust needed to bond social movements, especially in times of schism and at the level of high-risk activism. That is why, as I discuss in my article, we can see that the type of brokerage that is formed online is brokerage that is built on weak ties, not necessarily the same as offline brokerage that usually requires high trust.

Ed: It’s interesting that you could detect bridging between groups. Given schism seems to be fairly standard in society (cf. filter bubbles etc.) .. has enough attention been paid to this process of temporary shifting alignments, to advance a common cause? And are these incidental, or intentional acts of brokerage?

Deena: I believe further studies need to be made on the concepts of solidarity, schism, and brokerage within social movements, both online and offline. Little attention has been given to how movements come together or break apart online. The Egyptian revolution is a rich case to study these concepts, as the many changes in the path of the revolution during its first five years, and the intervention of different forces, led to multiple shifts of alliances that deserve study. Acts of brokerage do not necessarily have to be intentional. In social movements studies, researchers have studied incidental acts that could eventually lead to the formation of alliances, such as considering co-members of various social movement organizations as brokers between these organizations.

I believe that the same happens online. Brokerage could start with incidental acts such as activists following each other on Twitter for example, which could develop into stronger ties through mentioning each other. This could also build up to coordinating activities online and offline. In the case of the Egyptian revolution, many activists who met in protests on the ground were also friends online. The same happened in Moldova where activists coordinated tactics online and met on the ground. Thus, incidental acts that start with following each other online could develop into intentional coordinated activism offline. I believe further qualitative interviews need to be conducted with activists to study how they coordinate between online and offline activism, as there are certain mechanisms that cannot be observed through just studying the public profiles of activists or their structural networks.

Ed: The “Arab Spring” has had a mixed outcome across the region — and is also now perhaps a bit forgotten in the West. There have been various network studies of the 2011 protests: but what about the time between visible protests .. isn’t that in a way more important? What would a social network study of the current situation in Egypt look like, do you think?

Deena: Yes, the in-between times of waves of protests are as important to study as the waves themselves, as they reveal a lot about what could happen, and we usually study them retroactively after the big shocks happen. A social network of the current situation in Egypt would probably include many “isolates” and tiny “components”, to use social network analysis terms. This started showing in 2014 as the effects of schism in the movement. I believe this became aggravated over time as the military coup d’état got a stronger grip over the country, suppressing all opposition. Many activists are either detained or have left the country. A quick look at their online profiles does not reveal strong communication between them. Yet, this is what apparently shows from public profiles. One of the features that social media platforms offer is the ability to create private or “closed” groups online.

I believe these groups might include rich data about activists’ communication. However, it is very difficult, almost impossible to study these groups, unless you are a member or they give you permission. In other words, there might be some sort of communication occurring between activists but at a level that researchers unfortunately cannot access. I think we might call it the “underground of online activism”, which I believe is potentially a very rich area of study.

Ed: A standard criticism of “Twitter network studies” is that they aren’t very rich — they may show who’s following whom, but not necessarily why, or with what effect. Have there been any larger, more detailed studies of the Arab Spring that take in all sides: networks, politics, ethnography, history — both online and offline?

Deena: To my knowledge, there haven’t been studies that have included all these aspects together. Yet there are many studies that covered each of them separately, especially the politics, ethnography, and history of the Arab Spring (see for example: Egypt’s Tahrir Revolution 2013, edited by D. Tschirgi, W. Kazziha and S. F. McMahon). Similarly, very few studies have tried to compare the online and offline repertoires (see for example: Weber, Garimella and Batayneh 2013; Abul-Fottouh and Fetner 2018). In my doctoral dissertation (McMaster University, 2018), I tried to include many of these elements.

Read the full article: Abul-Fottouh, D. (2018) Brokerage Roles and Strategic Positions in Twitter Networks of the 2011 Egyptian Revolution. Policy & Internet 10 (2) 218–240. doi:10.1002/poi3.169

Deena Abul-Fottouh was talking to blog editor David Sutcliffe.

Call for Papers: Government, Industry, Civil Society Responses to Online Extremism
https://ensr.oii.ox.ac.uk/call-for-papers-responses-to-online-extremism/ (Mon, 02 Jul 2018)

We are calling for articles for a Special Issue of the journal Policy & Internet on “Online Extremism: Government, Private Sector, and Civil Society Responses”, edited by Jonathan Bright and Bharath Ganesh, to be published in 2019. The submission deadline is October 30, 2018.

Issue Outline

Governments, the private sector, and civil society are beginning to work together to challenge extremist exploitation of digital communications. Both Islamic and right-wing extremists use websites, blogs, social media, encrypted messaging, and filesharing websites to spread narratives and propaganda, influence mainstream public spheres, recruit members, and advise audiences on undertaking attacks.

Across the world, public-private partnerships have emerged to counter this problem. For example, the Global Internet Forum to Counter Terrorism (GIFCT), established by major technology companies, maintains a “shared hash database” that provides “digital fingerprints” of ISIS visual content to help platforms quickly take down matching content. In another case, the UK government funded ASI Data Science to build a tool to accurately detect jihadist content. Elsewhere, Jigsaw (a Google-owned company) has developed techniques to use content recommendations on YouTube to “redirect” viewers of extremist content to content that might challenge their views.
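To make the “digital fingerprint” idea concrete: platforms hash uploaded content and compare the digest against the shared database, rather than exchanging the content itself. The sketch below is a deliberately simplified, hypothetical illustration; production systems use perceptual hashes that survive re-encoding and cropping, whereas a plain cryptographic hash such as SHA-256 only catches byte-identical copies:

```python
import hashlib

# Hypothetical shared database of fingerprints of known extremist material.
shared_hash_db = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def should_flag(content: bytes) -> bool:
    """Fingerprint the uploaded content and check it against the database."""
    return hashlib.sha256(content).hexdigest() in shared_hash_db

print(should_flag(b"test"))        # True: matches the (hypothetical) entry above
print(should_flag(b"new upload"))  # False: unknown content
```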

While these are important and admirable efforts, their impacts and effectiveness are unclear. The purpose of this special issue is to map and evaluate emerging public-private partnerships, technologies, and responses to online extremism. There are three main areas of concern that the issue will address:

(1) the changing role of content moderation, including taking down content and user accounts, as well as the use of AI techniques to assist;

(2) the increasing focus on “counter-narrative” campaigns and strategic communication; and

(3) the inclusion of global civil society in this agenda.

This mapping will contribute to understanding how power is distributed across these actors, the ways in which technology is expected to address the problem, and the design of the measures currently being undertaken.

Topics of Interest

Papers exploring one or more of the following areas are invited for consideration:

Content moderation

  • Efficacy of user and content takedown (and effects it has on extremist audiences);
  • Navigating the politics of freedom of speech in light of the proliferation of hateful and extreme speech online;
  • Development of content and community guidelines on social media platforms;
  • Effect of government policy, recent inquiries, and civil society on content moderation practices by the private sector (e.g. recent laws in Germany, Parliamentary inquiries in the UK);
  • Role and efficacy of Artificial Intelligence (AI) and machine learning in countering extremism.

Counter-narrative Campaigns and Strategic Communication

  • Effectiveness of counter-narrative campaigns in dissuading potential extremists;
  • Formal and informal approaches to counter narratives;
  • Emerging governmental or parastatal bodies to produce and disseminate counter-narratives;
  • Involvement of media and third sector in counter-narrative programming;
  • Research on counter-narrative practitioners;
  • Use of technology in supporting counter-narrative production and dissemination.

Inclusion of Global Civil Society

  • Concentration of decision making power between government, private sector, and civil society actors;
  • Diversity of global civil society actors involved in informing content moderation and counter-narrative campaigns;
  • Extent to which inclusion of diverse civil society/third sector actors improves content moderation and counter-narrative campaigns;
  • Challenges and opportunities faced by global civil society in informing agendas to respond to online extremism.

Submitting your Paper

We encourage interested scholars to submit 6,000 to 8,000 word papers that address one or more of the issues raised in the call. Submissions should be made through Policy & Internet’s manuscript submission system. Interested authors are encouraged to contact Jonathan Bright (jonathan.bright@oii.ox.ac.uk) and Bharath Ganesh (bharath.ganesh@oii.ox.ac.uk) to check the suitability of their paper.

Special Issue Schedule

The special issue will proceed according to the following timeline:

Paper submission: 30 October 2018

First round of reviews: January 2019

Revisions received: March 2019

Final review and decision: May 2019

Publication (estimated): December 2019

The special issue as a whole will be published at some time in late 2019, though individual papers will be published online in EarlyView as soon as they are accepted.

In a world of “connective action” — what makes an influential Twitter user?
https://ensr.oii.ox.ac.uk/in-a-world-of-connective-action-what-makes-an-influential-twitter-user/ (Sun, 10 Jun 2018)

A significant part of political deliberation now takes place on online forums and social networking sites, leading to the idea that collective action might be evolving into “connective action”. The new level of connectivity (particularly of social media) raises important questions about its role in the political process. But understanding important phenomena, such as social influence, social forces, and digital divides, requires analysis of very large social systems, which has traditionally been a challenging task in the social sciences.

In their Policy & Internet article “Understanding Popularity, Reputation, and Social Influence in the Twitter Society“, David Garcia, Pavlin Mavrodiev, Daniele Casati, and Frank Schweitzer examine popularity, reputation, and social influence on Twitter using network information on more than 40 million users. They integrate measurements of popularity, reputation, and social influence to evaluate what keeps users active, what makes them more popular, and what determines their influence in the network.

Popularity in the Twitter social network is often quantified as the number of followers of a user. That implies that it doesn’t matter why someone follows you, or how important they are: your popularity only measures the size of your audience. Reputation, on the other hand, is a more complicated concept associated with centrality. Being followed by a highly reputed user has a stronger effect on one’s reputation than being followed by someone with low reputation. Thus, the simple number of followers does not capture the recursive nature of reputation.
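One standard way to make this recursive notion concrete is a PageRank-style score on the follower graph, which weights each follow by the follower’s own standing. The sketch below is illustrative only (invented accounts, and not necessarily the exact measure used in the article):

```python
import networkx as nx

# Toy follower graph; an edge u -> v means "u follows v".
G = nx.DiGraph()
G.add_edges_from([("ann", "hub"), ("bob", "hub"), ("carl", "hub"),
                  ("hub", "star"), ("dora", "star")])

popularity = dict(G.in_degree())  # audience size: how many accounts follow you
reputation = nx.pagerank(G)       # recursive: follows from reputed users count more

for user in sorted(G):
    print(f"{user:4s} followers={popularity[user]} reputation={reputation[user]:.3f}")
```

Here “hub” and “star” have comparable follower counts, but “star” ends up with the higher reputation score because one of its followers is itself highly followed.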

In their article, the authors examine the distinct effects of popularity and reputation on the process of social influence. They find that there is a range of values in which the risk of a user becoming inactive grows with popularity and reputation. Popularity on Twitter resembles a proportional growth process that is faster in its strongly connected component, and that can be accelerated by reputation when users are already popular. They find that social influence on Twitter is mainly related to popularity rather than reputation, but that this growth of influence with popularity is sublinear. In sum, global network metrics are better predictors of inactivity and social influence, calling for analyses that go beyond local metrics like the number of followers.
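“Sublinear” here means that influence grows roughly like popularity raised to a power below one. With invented numbers (not the paper’s data), the exponent can be estimated by a straight-line fit on log-log scale:

```python
import numpy as np

# Hypothetical users: follower counts and a crude influence proxy (retweet reach).
popularity = np.array([10, 100, 1_000, 10_000, 100_000])
influence = np.array([4, 22, 110, 600, 3_100])

# Fit influence ~ c * popularity**alpha, i.e. a line in log-log space.
alpha, log_c = np.polyfit(np.log(popularity), np.log(influence), 1)
print(f"alpha = {alpha:.2f}")  # alpha < 1 indicates sublinear growth
```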

We caught up with the authors to discuss their findings:

Ed.: Twitter is a convenient data source for political scientists, but they tend to get criticised for relying on something that represents only a tiny facet of political activity. But Twitter is presumably very useful as a way of uncovering more fundamental / generic patterns of networked human interaction?

David: Twitter as a data source to study human behaviour is both powerful and limited. Powerful because it allows us to quantify and analyze human behaviour at scales and resolutions that are simply impossible to reach with traditional methods, such as experiments or surveys. But also limited because not every aspect of human behaviour is captured by Twitter and using its data comes with significant methodological challenges, for example regarding sampling biases or platform changes. Our article is an example of an analysis of general patterns of popularity and influence that are captured by spreading information in Twitter, which only make sense beyond the limitations of Twitter when we frame the results with respect to theories that link our work to previous and future scientific knowledge in the social sciences.

Ed.: How often do theoretical models (i.e. describing the behaviour of a network in theory) get linked up with empirical studies (i.e. of a network like Twitter in practice) but also with qualitative studies of actual Twitter users? And is Twitter interesting enough in itself for anyone to attempt to develop an overall theoretico-empirico-qualitative theory about it?

David: The link between theoretical models and large-scale data analyses of social media is less frequent than we all wish. But the gap between disciplines seems to have narrowed in recent years, with more social scientists using online data sources and computer scientists referring more carefully to theories and previous results in the social sciences. What seems to be quite undeveloped is an interface with qualitative methods, especially for large-scale analyses like ours.

Qualitative methods can provide what data science cannot: questions about important and relevant phenomena that can then be explained within a wider theory if validated against data. While this seems to me a fertile ground for interdisciplinary research, I doubt that Twitter in particular should be the paragon of such a combination of approaches. I advocate starting research from the aspect of human behaviour that is the subject of study, and not from a particularly popular social media platform that happens to be used a lot today, but might not be the standard tomorrow.

Ed.: I guess I’ve seen a lot of Twitter networks in my time, but not much in the way of directed networks, i.e. showing the direction of flow of content (i.e. influence, basically) — or much in the way of a time element (i.e. turning static snapshots into dynamic networks). Is that fair, or am I missing something? I imagine it would be fun to see how (e.g.) fake news or political memes propagate through a network?

David: While Twitter provides amazing volumes of data, its programming interface is notorious for the absence of two key sources: the date when follower links are created, and the precise path of retweets. The reason for the general picture of snapshots over time is that researchers cannot fully trace back the history of a follower network; they can only monitor it with a certain frequency to overcome the fact that links do not have a date attached.

The picture of information flows is generally missing because, when looking up a retweet, we can see the original tweet that is being retweeted, but not whether the retweet is of a friend’s retweet. This way, without special access to Twitter data or alternative sources, all information flows look like stars around the original tweet, rather than propagation trees through a social network that would allow the precise analysis of fake news or memes.
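A common workaround in the literature (sketched below with invented data; a heuristic assumption, not something the Twitter interface provides) is to reconstruct a plausible propagation tree by attributing each retweet to the most recent earlier retweeter whom the user follows, falling back to the original author:

```python
# Hypothetical follower lists: user -> set of accounts they follow.
follows = {"b": {"a"}, "c": {"b"}, "d": {"a", "c"}}

# Retweeters of a's original tweet, in timestamp order.
retweeters = ["b", "c", "d"]

parent = {}
seen = ["a"]  # accounts that have already tweeted/retweeted, oldest first
for user in retweeters:
    candidates = [u for u in seen if u in follows.get(user, set())]
    parent[user] = candidates[-1] if candidates else "a"
    seen.append(user)

print(parent)  # {'b': 'a', 'c': 'b', 'd': 'c'}: a tree rather than a star
```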

Ed.: Given all the work on Twitter, how well-placed do you think social scientists would be to advise a political campaign on “how to create an influential network” beyond just the obvious (tweet well and often, and maybe hire a load of bots)? I.e. are there any “general rules” about communication structure that would be practically useful to campaigning organisations?

David: When we talk about influence on Twitter, we usually talk about rather superficial behaviour, such as retweeting content or clicking on a link. This should not be mistaken for a more substantial kind of influence, the kind that makes people change their opinion or go to vote. Evaluating the real impact of Twitter influence is a bottleneck for how much social scientists can advise a political campaign. I would say that rather than providing general rules that can be applied everywhere, social scientists and computer scientists can be much more useful when advising, tracking, and optimizing individual campaigns that take into account the details and idiosyncrasies of the people that might be influenced by the campaign.

Ed.: Random question: but where did “computational social science” emerge from – is it actually quite dependent on Twitter (and Wikipedia?), or are there other commonly-used datasets? And are computational social science, “big data analytics”, and (social) data science basically describing the same thing?

David: Tracing back the meaning and influence of “computational social science” could take a whole book! My impression is that the concept started a few decades ago as a spin on “sociophysics”, where the term “computational” was used as in “computational model”, emphasizing a focus on social science away from toy model applications from physics. Then the influential Science article by David Lazer and colleagues in 2009 defined the term as the application of digital trace datasets to test theories from the social sciences, leaving the whole of computational modelling outside the frame. In that case, “computational” was used more as it is used in “computational biology”, to refer to social science with increased power and speed thanks to computer-based technologies. Later it seems to have converged back into a combination of both the modelling and the data analysis trends, as in the “Manifesto of computational social science” by Rosaria Conte and colleagues in 2012, inspired by the fact that we need computational modelling techniques from complexity science to understand what we observe in the data.

The Twitter and Wikipedia dependence of the field is just a path dependency due to the ease and open access to those datasets, and a key turning point in the field is to be able to generalize beyond those “model organisms”, as Zeynep Tufekci calls them. One can observe these fads in the latest computer science conferences, with the rising ones being Reddit and GitHub, or when looking at earlier research that heavily used product reviews and blog datasets. Computational social science seems to be maturing as a field, making sense of those datasets and not just telling cool data-driven stories about one website or another. Perhaps we are beyond the peak of inflated expectations of the hype curve and the best part is yet to come.

With respect to big data and social data science, it is easy to get lost in the field of buzzwords. Big data analytics only deals with the technologies necessary to process large volumes of data, which could come from any source, including social networks but also telescopes, seismographs, and any kind of sensor. These kinds of techniques are only sometimes necessary in computational social science, and are far from the core topics of the field.

Social data science is closer, but puts a stronger emphasis on problem-solving rather than testing theories from the social sciences. When using “data science” we usually try to emphasize a predictive or explorative aspect, rather than the confirmatory or generative approach of computational social science. The emphasis on theory and modelling of computational social science is the key difference here, linking back to my earlier comment about the role of computational modelling and complexity science in the field.

Ed.: Finally, how successful do you think computational social scientists will be in identifying any underlying “social patterns” — i.e. would you agree that the Internet is a “Hadron Collider” for social science? Or is society fundamentally too chaotic and unpredictable?

David: As web scientists like to highlight, the Web (not the Internet, which is the technical infrastructure connecting computers) is the largest socio-technical artifact ever produced by humanity. Rather than a Hadron Collider, which is a tool for making experiments, I would say that the Web can be the Hubble telescope of social science: it lets us observe human behaviour at an amazing scale and resolution, capturing not only big data but also fast, long, deep, mixed, and weird data that we never imagined before.

While I doubt that we will be able to predict society in some sort of “psychohistory” manner, I think that the Web can help us to understand much more about ourselves, including our incentives, our feelings, and our health. That can be useful knowledge to make decisions in the future and to build a better world without the need to predict everything.

Read the full article: Garcia, D., Mavrodiev, P., Casati, D., and Schweitzer, F. (2017) Understanding Popularity, Reputation, and Social Influence in the Twitter Society. Policy & Internet 9 (3) doi:10.1002/poi3.151

David Garcia was talking to blog editor David Sutcliffe.

How can we encourage participation in online political deliberation?
https://ensr.oii.ox.ac.uk/how-can-we-encourage-participation-in-online-political-deliberation/ (Fri, 01 Jun 2018)

Political parties have been criticized for failing to link citizen preferences to political decision-making. But in an attempt to enhance policy representation, many political parties have established online platforms to allow discussion of policy issues and proposals, and to open up their decision-making processes. The Internet — and particularly the social web — seems to provide an obvious opportunity to strengthen intra-party democracy and mobilize passive party members. However, these mobilizing capacities are limited, and in most instances, participation has been low.

In their Policy & Internet article “Does the Internet Encourage Political Participation? Use of an Online Platform by Members of a German Political Party,” Katharina Gerl, Stefan Marschall, and Nadja Wilker examine the German Greens’ online collaboration platform to ask why only some party members and supporters use it. The platform aims to improve the inclusion of party supporters and members in the party’s opinion-formation and decision-making process, but it has failed to reach inactive members. Instead, those who have already been active in the party also use the online platform. It also seems that classical resources such as education and employment status do not (directly) explain differences in participation; instead, participation is motivated by process-related and ideological incentives.

We caught up with the authors to discuss their findings:

Ed.: You say “When it comes to explaining political online participation within parties, we face a conceptual and empirical void” .. can you explain briefly what the offline models are, and why they don’t work for the Internet age?

Katharina / Stefan / Nadja: According to Verba et al. (1995) the reasons for political non-participation can be boiled down to three factors: (1) citizens do not want to participate, (2) they cannot, (3) nobody asked them to. In terms of models, we can distinguish three perspectives: citizens need certain resources like education, information, time, and civic skills to participate (the resource model and civic voluntarism model). The social psychological model looks at the role of attitudes and political interest, which are supposed to increase participation. In addition to resources and attitudes, the general incentives model analyses how motives, costs, and benefits influence participation.

These models can be applied to online participation as well, but findings for the online context indicate that the mechanisms do not always work as they do offline. For example, age plays out differently for online participation. Generally, the models have to be specified for each participation context. This especially applies to the online context, as forms of online participation sometimes demand different resources, skills, or motivational factors. Therefore, we have to adapt and supplement the models with additional online factors like internet skills and internet sophistication.

Ed.: What’s the value to a political party of involving its members in policy discussion? (i.e. why go through the bother?)

Katharina / Stefan / Nadja: Broadly speaking, there are normative and rational reasons for that. At least for the German parties, intra-party democracy plays a crucial role. The involvement of members in policy discussion can serve as a means to strengthen the integration and legitimation power of a party. Additionally, the involvement of members can have a mobilizing effect for the party on the ground. This can positively influence the linkage between the party in central office, the party on the ground, and the societal base. Furthermore, member participation can be a way to react on dissatisfaction within a party.

Ed.: Are there any examples of successful “public deliberation” — i.e. is this maybe just a problem of getting disparate voices to usefully engage online, rather than a failure of political parties per se?

Katharina / Stefan / Nadja: This is definitely not unique to political parties. The problems we observe regarding online public deliberation in political parties also apply to other online participation platforms: political participation and especially public deliberation require time and effort for participants, so they will only be willing to engage if they feel they benefit from it. But the benefits of participation may remain unclear as public deliberation – by parties or other initiators – often takes place without a clear goal or a real say in decision-making for the participants. Initiators of public deliberation often fail to integrate processes of public deliberation into formal and meaningful decision-making procedures. This leads to disappointment for potential participants who might have different expectations concerning their role and scope of influence. There is a risk of a vicious circle and disappointed expectations on both sides.

Ed.: Based on your findings, what would you suggest that the Greens do in order to increase participation by their members on their platform?

Katharina / Stefan / Nadja: Our study shows that the members of the Greens are generally willing to participate online and appreciate this opportunity. However, the survey also revealed that the most important incentive for them is to have an influence on the party’s decision-making. We would suggest that the Greens create an actual cause for participation: set clear goals, and integrate participation into specific and relevant decisions. Participation should not be an end in itself!

Ed.: How far do political parties try to harness deliberation where it happens in the wild e.g. on social media, rather than trying to get people to use bespoke party channels? Or might social media users see this as takeover by the very “establishment politics” they might have abandoned, or be reacting against?

Katharina / Stefan / Nadja: Parties do not constrain their online activities to their own official platforms and channels but also try to develop strategies for influencing discourses in the wild. However, this works much better and has much more authenticity as well as credibility if it isn’t parties as abstract organizations but rather individual politicians such as members of parliament who engage in person on social media, for example by using Twitter.

Ed.: How far have political scientists understood the reasons behind the so-called “crisis of democracy”, and how to address it? And even if academics came up with “the answer” — what is the process for getting academic work and knowledge put into practice by political parties?

Katharina / Stefan / Nadja: The alleged “crisis of democracy” is primarily seen as a crisis of representation, in which the gap between political elites and citizens has widened drastically in recent years, giving room to populist movements and parties in many democracies. Our impression is that, facing the rise of populism in many countries, politicians have become more and more attentive to discussions and findings in political science, which have been addressing these linkage problems for years. But perhaps this is like shutting the stable door after the horse has bolted.

Read the full article: Gerl, K., Marschall, S., and Wilker, N. (2016) Does the Internet Encourage Political Participation? Use of an Online Platform by Members of a German Political Party. Policy & Internet doi:10.1002/poi3.149

Katharina Gerl, Stefan Marschall, and Nadja Wilker were talking to blog editor David Sutcliffe.

Making crowdsourcing work as a space for democratic deliberation
https://ensr.oii.ox.ac.uk/making-crowdsourcing-work-as-a-space-for-democratic-deliberation/ (Sat, 26 May 2018)

There are many instances of crowdsourcing in both local and national governance across the world, as governments implement crowdsourcing as part of their open government practices aimed at fostering civic engagement and knowledge discovery for policies. But is crowdsourcing conducive to deliberation among citizens, or is it essentially just a consulting mechanism for information gathering? Second, if it is conducive to deliberation, what kind of deliberation is it? (And is it democratic?) Third, how representative are the online deliberative exchanges of the wishes and priorities of the larger population?

In their Policy & Internet article “Crowdsourced Deliberation: The Case of the Law on Off-Road Traffic in Finland”, Tanja Aitamurto and Hélène Landemore examine a partially crowdsourced reform of the Finnish off-road traffic law. The aim of the process was to search for knowledge and ideas from the crowd, to enhance people’s understanding of the law, and to increase the perception of the policy’s legitimacy. The participants could propose ideas on the platform, vote others’ ideas up or down, and comment.

The authors find that despite the lack of explicit incentives for deliberation in the crowdsourced process, crowdsourcing indeed functioned as a space for democratic deliberation; that is, an exchange of arguments among participants characterized by a degree of freedom, equality, and inclusiveness. An important finding, in particular, is that despite the lack of statistical representativeness among the participants, the deliberative exchanges reflected a diversity of viewpoints and opinions, tempering to a degree the worry about the bias likely introduced by the self-selected nature of citizen participation.

They introduce the term “crowdsourced deliberation” to mean the deliberation that happens (intentionally or unintentionally) in crowdsourcing, even when the primary aim is to gather knowledge rather than to generate deliberation. In their assessment, crowdsourcing in the Finnish experiment was conducive to some degree of democratic deliberation, even though, strikingly, the process was not designed for it.

We caught up with the authors to discuss their findings:

Ed.: There’s a lot of discussion currently about “filter bubbles” (and indeed fake news) damaging public deliberation. Do you think collaborative crowdsourced efforts (that include things like Wikipedia) help at all more generally, or .. are we all damned to our individual echo chambers?

Tanja and Hélène: Deliberation, whether taking place within a crowdsourced policymaking process or in another context, has a positive impact on society, when the participants exchange knowledge and arguments. While all deliberative processes are, to a certain extent, their own microcosms, there is typically at least some cross-cutting exposure of opinions and perspectives among the crowd. The more diverse the participant crowd is and the larger the number of participants, the more likely there is diversity also in the opinions, preventing strictly siloed echo chambers.

Moreover, it all comes down to design and incentives in the end. In our crowdsourcing platform we did not particularly try to attract a cross-cutting section of the population, so there was a risk of having only a relatively homogenous population self-selecting into the process, which is what happened to a degree, demographically at least (over 90% of our participants were educated male professionals). In terms of ideas, though, the pool was much more diverse than the demography would have suggested, and techniques we used (like clustering) helped maintain the visibility (to the researchers) of the minority views.
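As a sketch of how clustering can keep minority views visible (an assumed technique with invented proposals, not the authors’ actual pipeline), the submitted ideas can be grouped by textual similarity, so that small clusters stand out instead of being buried under the dominant theme:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

ideas = [  # hypothetical crowdsourced proposals on the off-road traffic law
    "require landowner permission for snowmobile routes",
    "snowmobile routes should need the landowner's consent",
    "lower the minimum age for driving off-road vehicles",
    "keep the current age limits for off-road driving",
    "ban off-road traffic in nature conservation areas",
]

X = TfidfVectorizer().fit_transform(ideas)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for cluster in sorted(set(labels)):
    members = [ideas[i] for i, l in enumerate(labels) if l == cluster]
    print(f"cluster {cluster} (n={len(members)}): {members}")
```

Even if one theme dominates numerically, the smaller clusters remain visible as distinct groups rather than being averaged away.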

That said, if what you are after is maximal openness and cross-cutting exposure, nothing beats random selection, like that used in mini-publics of all kinds, from citizens’ juries to deliberative polls to citizens’ assemblies… That’s what Facebook and Twitter should use in order to break the filter bubbles in which people lock themselves: algorithms that randomize the content of our newsfeed and expose us to a vast range of opinions, rather than algorithms that maximize similarity with what we already like.

But for us the goal was different, and so our design was different. Our goal was to gather knowledge and ideas, and for this self-selection (the sort also at play in Wikipedia) is better than random selection: whereas with random selection you shut the door on most people, with a crowdsourcing platform you leave the door open to anyone who can self-identify as having a relevant form of knowledge and has the motivation to participate. The remarkable thing in our case is that even though we didn’t design the process for democratic deliberation, it occurred anyway, between the cracks of the design so to speak.

Ed.: I suppose crowdsourcing won’t work unless there is useful cooperation: do you think these successful relationships self-select on a platform, or do things perhaps work precisely because people may NOT be discussing other, divisive things (like immigration) when working together on something apparently unrelated, like an off-road law?

Tanja and Hélène: There are varying degrees of collaboration in crowdsourcing. In crowdsourced policymaking, the crowd does not typically collaborate on drafting the law (unlike the crowd in Wikipedia writing); rather, they respond to the crowdsourcer’s (in this case, the government’s) prompts. In this type of crowdsourcing, which was the case in the crowdsourced off-road traffic law reform, crowd members don’t need to collaborate with each other in order for the process to achieve its goal of finding new knowledge. The crowd can, of course, decide not to collaborate with the government and not answer the prompts, or start sabotaging the process.

The degree and success of collaboration will depend on the design and the goals of your experiment. In our case, crowdsourcing might have worked even without collaboration because our goal was to gather knowledge and information, which can be done by harvesting the contributions of the individual members of the crowd without them interacting with each other. But if what you are after is co-creation or deliberation, then yes you need to create the background conditions and incentives for cooperation.

Cooperation may require bracketing some sensitive topics, or else learning to disagree in respectful ways. Deliberation, and more broadly cooperation, are social skills — human technologies you might say — that we still don’t know how to use very well. This comes in part from the fact that our school systems do not teach those skills, focused as they are on promoting individual rather than collaborative success, and creating an ecosystem of zero-sum competition between students. In the real world there is almost nothing you can do all by yourself, and we would be much better off nurturing collaborative skills and the art, or technology, of deliberation.

Ed.: Have there been any other examples in Finland — i.e. is crowdsourcing (and deliberation) something that is seen as useful and successful by the government?

Tanja and Hélène: Yes, there have been several crowdsourced policymaking processes in Finland. One is a crowdsourced Limited Liability Housing Company Law reform, organized by the Ministry of Justice in the Finnish government. We examined the quality of deliberation in that case, and the findings show that the quality of deliberation, as measured by the Discourse Quality Index, was pretty good.

Read the full article: Aitamurto, T. and Landemore, H. (2016) Crowdsourced Deliberation: The Case of the Law on Off-Road Traffic in Finland. Policy & Internet 8 (2) doi:10.1002/poi3.115.


Tanja Aitamurto and Hélène Landemore were talking to blog editor David Sutcliffe.

Habermas by design: designing public deliberation into online platforms
https://ensr.oii.ox.ac.uk/habermas-by-design-designing-public-deliberation-into-online-platforms/ (Thu, 03 May 2018)

Advocates of deliberative democracy have always hoped that the Internet would provide the means for an improved public sphere. But what particular platform features should we look to, to promote deliberative debate online? In their Policy & Internet article “Design Matters! An Empirical Analysis of Online Deliberation on Different News Platforms”, Katharina Esau, Dennis Friess, and Christiane Eilders show how differences in the design of various news platforms result in significant variation in the quality of deliberation, measured as rationality, reciprocity, respect, and constructiveness.

The empirical findings of their comparative analysis across three types of news platforms broadly support the assumption that platform design affects the level of deliberative quality of user comments. Deliberation was most likely to be found in news fora, which are of course specifically designed to initiate user discussions. News websites showed a lower level of deliberative quality, with Facebook coming last in terms of meeting deliberative design criteria and sustaining deliberation. However, while Facebook performed poorly in terms of overall level of deliberative quality, it did promote a high degree of general engagement among users.
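As a stylized illustration of how such a comparison can be operationalized (the coding scheme and numbers below are invented, not the authors’ instrument or results), each comment can be hand-coded 0/1 on the four criteria and the codes averaged per platform:

```python
from statistics import mean

# Hypothetical codes per comment: (rationality, reciprocity, respect, constructiveness)
coded = {
    "news_forum": [(1, 1, 1, 0), (1, 0, 1, 1), (1, 1, 1, 1)],
    "news_site":  [(1, 0, 1, 0), (0, 0, 1, 1)],
    "facebook":   [(0, 0, 1, 0), (0, 1, 0, 0)],
}

criteria = ("rationality", "reciprocity", "respect", "constructiveness")
for platform, rows in coded.items():
    scores = {c: mean(r[i] for r in rows) for i, c in enumerate(criteria)}
    print(platform, {c: round(s, 2) for c, s in scores.items()},
          f"overall={mean(scores.values()):.2f}")
```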

The study’s findings suggest that deliberative discourse in the virtual public sphere of the Internet is indeed possible, which is good news for advocates of deliberative theory. However, it will only be possible by carefully considering how platforms function and how they are designed. Some may argue that the “power of design” (shaped by organizers like media companies) contradicts the basic idea of open debate amongst equals, where the only necessary force is Habermas’s “forceless force of the better argument”. These advocates of an utterly free virtual public sphere may be disappointed, given it’s clear that deliberation is only likely to emerge if the platform is designed in a particular way.

We caught up with the authors to discuss their findings:

Ed: Just briefly: what design features did you find helped support public deliberation, i.e. reasoned, reciprocal, respectful, constructive discussion?

Katharina / Dennis / Christiane: There are several design features which are known to influence online deliberation. However, in this study we particularly focus on moderation, asynchronous discussion, clear topic definition, and the availability of information, which we have found to have a positive influence on the quality of online deliberation.

Ed.: I associate “Internet as a deliberative space” with Habermas, but have never read him: what’s the short version of what he thinks about “the public sphere” — and how the Internet might support this?

Katharina / Dennis / Christiane: Well, Habermas describes the public sphere as a space where free and equal people discuss topics of public import in a specific way. The respectful exchange of rational reasons is crucial in this normative ideal. Due to its open architecture, the Internet has often been presented as providing the infrastructure for large scale deliberation processes. However, Habermas himself is very skeptical as to whether online spaces support his ideas on deliberation. Ironically, he is one of the most influential authors in online deliberation scholarship.

Ed.: What do advocates of the Internet as a “deliberation space” hope for — simply that people will feel part of a social space / community if they can like things or comment on them (and see similar viewpoints); or that it will result in actual rational debate, and people changing their minds to “better” viewpoints, whatever they may be? I can personally see a value for the former, but I can’t imagine the latter ever working, i.e. given people basically don’t change?

Katharina / Dennis / Christiane: We think that both hopes are present in the current debate, and we partly agree with your perception that changing minds seems to be difficult. But we may also be facing some methodological or empirical issues here, because a change of mind is not an easy thing to measure. We know from other studies that deliberation can indeed cause changes of opinion. However, most of this probably takes place within the individual’s mind. Robert E. Goodin has called this process “deliberation within”, and it is not accessible through content analysis. People do not articulate “Oh, thanks for this argument, I have changed my mind”, but they probably take something away from online discussions which makes them more open minded.

Ed.: Does Wikipedia provide an example where strangers have (oddly!) come together to create something of genuine value — but maybe only because they’re actually making a specific public good? Is the basic problem of the idea of the “Internet supporting public discourse” that this is just too aimless an activity, with no obvious individual or collective benefit?

Katharina / Dennis / Christiane: We think Wikipedia is a very particular case. However, we can learn from this case that the collective goal plays a very important role for the quality of contributions. We know from empirical research that if people have the intention of contributing to something meaningful, discussion quality is significantly higher than in online spaces without that desire to have an impact.

Ed.: I wonder: isn’t Twitter the place where “deliberation” now takes place? How does it fit into, or inform, the deliberation literature, which I am assuming has largely focused on things like discussion fora?

Katharina / Dennis / Christiane: This depends on the definition of the term “deliberation”. We would argue that the limitation to 280 characters is probably not the best design feature for meaningful deliberation. However, we may have to think about deliberation in less complex contexts in order to reach more people, though this is a polarizing debate.

Ed.: You say that “outsourcing discussions to social networking sites such as Facebook is not advisable due to the low level of deliberative quality compared to other news platforms”. Facebook has now decided that instead of “connecting the world” it’s going to “bring people closer together” — what would you recommend that they do to support this, in terms of the design of the interactive (or deliberative) features of the platform?

Katharina / Dennis / Christiane: This is a difficult one! We think that the quality of deliberation on Facebook would strongly benefit from moderators, who should be more present on the platform to structure the discussions. By this we mean not only professional moderators but also participative forms of moderation, which could be encouraged by mechanisms that support such behaviour.

Read the full article: Katharina Esau, Dennis Friess, and Christiane Eilders (2017) Design Matters! An Empirical Analysis of Online Deliberation on Different News Platforms. Policy & Internet 9 (3) 321-342.

Katharina (@kathaesa), Dennis, and Christiane were talking to blog editor David Sutcliffe.

Censorship or rumour management? How Weibo constructs “truth” around crisis events https://ensr.oii.ox.ac.uk/censorship-or-rumour-management-how-weibo-constructs-truth-around-crisis-events/ Tue, 03 Oct 2017 08:48:50 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4350 As social media become increasingly important as a source of news and information for citizens, there is a growing concern over the impacts of social media platforms on information quality — as evidenced by the furore over the impact of “fake news”. Driven in part by the apparently substantial impact of social media on the outcomes of Brexit and the US Presidential election, various attempts have been made to hold social media platforms to account for presiding over misinformation, with recent efforts to improve fact-checking.

There is a large and growing body of research examining rumour management on social media platforms. However, most of these studies treat it as a technical matter, and little attention has been paid to the social and political aspects of rumour. In their Policy & Internet article “How Social Media Construct ‘Truth’ Around Crisis Events: Weibo’s Rumor Management Strategies after the 2015 Tianjin Blasts”, Jing Zeng, Chung-hong Chan and King-wa Fu examine the content moderation strategies of Sina Weibo, China’s largest microblogging platform, in regulating discussion of rumours following the 2015 Tianjin blasts.

Studying rumour communication in relation to the manipulation of social media platforms is particularly important in the context of China. In China, Internet companies are licensed by the state, and their businesses must therefore be compliant with Chinese law and collaborate with the government in monitoring and censoring politically sensitive topics. Given most Chinese citizens rely heavily on Chinese social media services as alternative information sources or as grassroots “truth”, the anti-rumour policies have raised widespread concern over the implications for China’s online sphere. As there is virtually no transparency in rumour management on Chinese social media, it is an important task for researchers to investigate how Internet platforms engage with rumour content and any associated impact on public discussion.

We caught up with the authors to discuss their findings:

Ed.: “Fake news” is currently a very hot issue, with Twitter and Facebook both exploring mechanisms to try to combat it. On the flip-side we have state-sponsored propaganda now suddenly very visible (e.g. Russia), in an attempt to reduce trust, destabilise institutions, and inject rumour into the public sphere. What is the difference between rumour, propaganda and fake news; and how do they play out online in China?

Jing / Chung-hong / King-wa: The definition of rumour is very fuzzy, and it is very common to see ‘rumour’ being used interchangeably with other related concepts. Our study drew the definition of rumour from the fields of sociology and social psychology, wherein this concept has been most thoroughly articulated.

Rumour is a form of unverified information circulated in uncertain circumstances. The major difference between rumour and propaganda lies in their functions. Rumour sharing is a social practice of sense-making, therefore it functions to help people make meaning of an uncertain situation. In contrast, the concept of propaganda is more political. Propaganda is a form of information strategically used to mobilise political support for a political force.

Fake news is a new buzz word and works closely with another buzz term – post-truth. There is no established and widely accepted definition of fake news, and its true meaning(s) should be understood with respect to specific contexts. For example, Donald Trump’s use of “fake news” in his tweets aims to attack a few media outlets that have reported unfavourable stories about him, whereas ungrounded and speculative “fake news” is created and widely circulated on social media. If we simply understand fake news as a form of fabricated news, I would argue that fake news can operate as rumour, propaganda, or both.

It is worth pointing out that, in the Chinese contexts, rumour may not always be fake and propaganda is not necessarily bad. As pointed out by different scholars, rumour functions as a social protest against the authoritarian state’s information control. And in the Chinese language, the Mandarin term Xuanchuan (‘propaganda’) does not always have the same negative connotation as does its English counterpart.

Ed.: You mention previous research finding that the “Chinese government’s propaganda and censorship policies were mainly used by the authoritarian regime to prevent collective action and to maintain social stability” — is that what you found as well? i.e. that criticism of the Government is tolerated, but not organised protest?

Jing / Chung-hong / King-wa: This study examined rumour communication around the 2015 Tianjin blasts, so our analyses did not directly address Weibo users’ attempts to organise protest. However, regarding the Chinese government’s response to Weibo users’ criticism of its handling of the crisis, our study suggested that some criticisms of the government were tolerated. For example, messages about local government officials’ mishandling of the crisis were not heavily censored. Instead, what we found seems to confirm that social stability is of paramount importance for the ruling regime, and thus online censorship was used as a means to maintain social stability. This explains Weibo’s decision to silence the discussions of the assault on a CNN reporter, the chaotic aftermath of the blasts, and the local media’s reluctance to broadcast the blasts.

Ed.: What are people’s responses to obvious government attempts to censor or head-off online rumour, e.g. by deleting posts or issuing statements? And are people generally supportive of efforts to have a “clean, rumour-free Internet”, or cynical about the ultimate intentions or effects of censorship?

Jing / Chung-hong / King-wa: From our time series analysis, we found different responses from netizens depending on the topic, but we could not find a consistent pattern of a chilling effect. Basically, the Weibo rumour management strategies, whether deleting posts or refuting them, usually stimulate more public interest. At least as shown in our data, netizens are not supportive of those censorship efforts, and somehow end up posting more rumour messages as a counter-reaction.
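A minimal version of this kind of time series check is an interrupted time-series regression: fit a baseline trend in daily post volume on a rumour topic, then test for a level shift and a trend change at the moment the platform intervenes. The counts and intervention day below are invented; the article’s analysis of real Weibo data is considerably more elaborate.

```python
# Interrupted time-series sketch: does rumour-topic post volume change
# after a platform intervention (deletion or refutation)? Data invented.
import numpy as np
import statsmodels.api as sm

daily_posts = np.array([120, 135, 150, 160, 900, 600, 700, 820, 760, 690])
intervention_day = 4  # hypothetical day the platform began refuting the rumour

t = np.arange(len(daily_posts))
after = (t >= intervention_day).astype(int)
X = sm.add_constant(np.column_stack([t, after, after * (t - intervention_day)]))

fit = sm.OLS(daily_posts, X).fit()
# Parameters: intercept, baseline trend, level shift at the intervention,
# and post-intervention trend change. A positive level shift matches the
# "counter-reaction" the authors describe.
print(fit.params)
```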

Ed.: Is online rumour particularly a feature of contemporary Chinese society — or do you think that’s just a human thing (we’ve certainly seen lots of lying in the Brexit and Trump campaigns)? How might rumour relate more generally to levels of trust in institutions, and the presence of a strong, free press?

Jing / Chung-hong / King-wa: Online rumour is common in China, but it can be pervasive in any country where the use of digital technologies for communication is prevalent. Rumour sharing is a human thing, yes, you can say that. But it is more accurate to say that it is a societally constructed thing. As mentioned earlier, rumour is a social practice of collective sense-making under uncertain circumstances.

Levels of public trust in governmental organisations and the media can directly impact rumour circulation, and rumour-debunking efforts. When there is a lack of public trust in official sources of information, it opens up room for rumour circulation. Likewise, when the authorities have low credibility, the official rumour debunking efforts can backfire, because the public may think the authorities are trying to hide something. This might explain what we observed in our study.

Ed.: I guess we live in interesting times; Theresa May now wants to control the Internet, Trump is attacking the very institution of the press, social media companies are under pressure to accept responsibility for the content they host. What can we learn from the Chinese case, of a very sophisticated system focused on social control and stability?

Jing / Chung-hong / King-wa: The most important implication of this study is that the most sophisticated rumour control mechanism can only be developed on a good understanding of the social roots of rumour. As our study shows, without solving the more fundamental social cause of rumour, rumour debunking efforts can backfire.


Read the full article: Jing Zeng, Chung-hong Chan and King-wa Fu (2017) How Social Media Construct ‘Truth’ Around Crisis Events: Weibo’s Rumor Management Strategies after the 2015 Tianjin Blasts. Policy & Internet 9 (3) 297-320. DOI: 10.1002/poi3.155

Jing Zeng, Chung-hong Chan and King-wa Fu were talking to blog editor David Sutcliffe.

Does Internet voting offer a solution to declining electoral turnout? https://ensr.oii.ox.ac.uk/does-internet-voting-offer-a-solution-to-declining-electoral-turnout/ Tue, 19 Sep 2017 09:27:57 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4379 E-voting has been discussed as one possible remedy for the continuing decline in turnout in Western democracies. In their Policy & Internet article “Could Internet Voting Halt Declining Electoral Turnout? New Evidence that e-Voting is Habit-forming”, Mihkel Solvak and Kristjan Vassil examine the degree to which e-voting is more habit-forming than paper voting. Their findings indicate that while e-voting doesn’t seem to raise turnout, it might at least arrest its continuing decline. And any technology capable of stabilizing turnout is worth exploring.

Using cross-sectional survey data from five e-enabled elections in Estonia — a country with a decade’s experience of nationwide remote Internet voting — the authors show e-voting to be strongly persistent among voters, with clear evidence of habit formation. While a technological fix probably won’t address the underlying reasons for low turnout, it could help stop further decline by making voting easier for those who are more likely to turn out. Arresting turnout decline by keeping those who participate participating might be one realistic goal that e-voting is able to achieve.

We caught up with the authors to discuss their findings:

Ed.: There seems to be a general trend of declining electoral turnouts worldwide. Is there any form of consensus (based on actual data) on why voting rates are falling?

Mihkel / Kristjan: A consensus in terms of a single major source of turnout decline that the data points to worldwide is clearly lacking. There is however more of an agreement as to why certain regions are experiencing a comparatively steeper decline. Disenchantment with democracy and an overall disappointment in politics is the number one reason usually listed when discussing lower and declining turnout levels in new democracies.

While the same issues are nowadays also listed for older established democracies, there is no hard comparative evidence for it. We do know that the level of interest in and engagement with politics has declined across the board in Western Europe when compared to the 1960-70s, but this doesn’t count as disenchantment, and the clear decline in turnout levels in established democracies started a couple of decades later, in the early 1990s.

Given that turnout levels are still widely different depending on the country, the overall worldwide decline is probably a combination of the addition of new democracies with low and more-rapidly declining turnout levels, and a plethora of country-specific reasons in older democracies that are experiencing a somewhat less steep decline in turnout.

Ed.: Is the worry about voting decline really about “falling representation” per se, or that it might be symptomatic of deeper problems with the political ecosystem, i.e. fewer people choosing politics as a career, less involvement in local politics, less civic engagement (etc.). In other words — is falling voting (per se) even the main problem?

Mihkel / Kristjan: We can only agree; it clearly is a symptom of deeper problems. Although high turnout is a good thing, low turnout is not necessarily a problem as people have the freedom not to participate and not to be interested in politics. It becomes a problem when low turnout leads to a lack of legitimacy of the representative body and consequently also of the whole process of representation. And as you rightly point out, real problems start much earlier and at a lower level than voting in parliamentary elections. The paradox is that the technology we have examined in our article — remote internet voting — clearly can’t address these fundamental problems.

Ed.: I’m assuming the Estonian voters were voting remotely online (rather than electronically in a booth), i.e. in their own time, at their convenience? Are you basically testing the effect of offering a more convenient voting format? (And finding that format to be habit-forming?).

Mihkel / Kristjan: Yes. One of the reasons we examined Internet voting from this angle was the apparent paradox of every third vote being cast online, yet only a minute increase in turnout. A few other countries also experimenting with electronic voting have seen no tangible differences in turnout levels. The explanation is of course that it is a convenience voting method that makes voting simpler for people who are already quite likely to vote — now they simply use a more convenient option to do so. But what we noted in our article was a clearly higher share of electronic voters who turned out more consistently over different elections in comparison to voters voting on paper, and even when they didn’t show traits that usually correlate with electronic voting, like living further away from polling stations. So convenience did not seem to tell the whole story, even though it might have been one of the original reasons why electronic voting was picked up.
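The persistence finding can be illustrated with a very simple comparison: among those who voted in election t, what share voted again in election t+1, split by the mode used at t? The records below are invented placeholders; the article draws on Estonian survey data across five e-enabled elections and controls for far more than this sketch does.

```python
# Sketch of the persistence comparison behind the habit-formation claim.
# The panel records are invented for illustration.
import pandas as pd

panel = pd.DataFrame({
    "mode_t":   ["e", "e", "e", "paper", "paper", "paper", "e", "paper"],
    "voted_t1": [1,   1,   1,   1,       0,       1,       1,   0],
})

persistence = panel.groupby("mode_t")["voted_t1"].mean()
print(persistence)  # a higher rate for "e" is the habit-formation signature
```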

Ed.: Presumably with remote online voting, it’s possible to send targeted advertising to voters (via email and social media), with links to vote, i.e. making it more likely people will vote in the moment, in response to whatever issues happen to be salient at the time. How does online campaigning (and targeting) change once you introduce online voting?

Mihkel / Kristjan: Theoretically, parties should be able to lock voters in more easily by advertising links to the voting solution in their online campaigns; as in banners saying “vote for me and you can do it directly here (linked)”. In the Estonian case, however, there is an informal agreement to refrain from doing that, in order to safeguard the neutrality of online voting. Trust in online voting is paramount, even more so than is the case with paper voting, so it probably is a good idea to try to ensure that people trust the online voting solution to be controlled by a neutral state agent tasked with conducting the elections, in order to avoid any possible associations between certain parties and the voting environment (which linking directly to the voting mechanism might cause). That can never be 100% ensured though, so online campaigns coupled with online voting can make it harder for election authorities to convey the image of impartiality of their procedures.

As for voting in the moment, I don’t see online voting as substantially more susceptible to this than other voting modes, given that last-minute developments can influence voters voting on paper as well. I think the latest US and French presidential elections are a case in point. Some argue that the immediate developments and revelations in the Clinton email scandal investigation a couple of weeks before voting day turned the result. In the French case, however, the hacking and release of Macron’s campaign communications immediately before voting day didn’t play a role in the outcome. Voting in the moment will happen or not regardless of the voting mode being used.

Ed.: What do you think the barriers are to greater roll-out of online voting: presumably there are security worries, i.e. over election hacking and lack of a paper trail? (and maybe also worries about the possibility of coercive voting, if it doesn’t take place alone in a booth?)

Mihkel / Kristjan: The number one barrier to greater roll-out remains security worries about hacking. Given that people cannot observe electronic voting (i.e. how their vote arrives at the voting authorities), the role of trust becomes more central than for paper voting. And trust can be eroded easily by floating rumours, even without technically compromising voting systems. The solution is to introduce verifiability into the system, akin to a physical ballot in the case of paper voting, but this makes online voting even more technologically complex.
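To gesture at what “verifiability” can mean in practice: one building block is a commitment-style receipt, where the voter gets a hash they can later check against a public bulletin board. The sketch below is a toy illustration of that single idea; it is not a description of Estonia’s system, nor a complete or coercion-resistant voting protocol (real schemes need much more, such as encryption and mixnets).

```python
# Toy illustration of one verifiability building block: a hash receipt the
# voter can check against a public bulletin board. NOT a real voting scheme.
import hashlib
import secrets

bulletin_board: list[str] = []  # published list of receipt hashes

def cast_ballot(choice: str):
    """Record a ballot and hand the voter a receipt (hash of choice + nonce)."""
    nonce = secrets.token_hex(16)  # keeps the receipt from revealing the vote
    receipt = hashlib.sha256(f"{choice}|{nonce}".encode()).hexdigest()
    bulletin_board.append(receipt)
    return receipt, nonce

def verify(receipt: str) -> bool:
    """Voter checks that their receipt appears on the public bulletin board."""
    return receipt in bulletin_board

r, n = cast_ballot("candidate A")
print(verify(r))  # True if the ballot was recorded as cast
```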

A lot of research is being put into verifiable electronic voting systems to meet very strict security requirements. The funny thing, however, is that the fears holding back wider online voting are not really being raised for paper voting, even though they should be. At a certain stage of the process all paper votes become bits of information in an information system, as local polling stations enter or report them into the computer systems that are used to aggregate the votes and determine the seat distribution. No election is fully paper-based anymore.

Vote coercion problems of course cannot be ruled out, and coercion is by definition more likely when the voting authorities don’t exercise control over the immediate voting environment. I think countries that suffer from such problems shouldn’t introduce a system that might exacerbate them even more. But again, most countries allow for multiple voting modes that differ in the degree of neutrality and control exercised by the election authority. Absentee ballots and postal voting (which is very widespread in some countries, like Switzerland) are as vulnerable to voter coercion as remote Internet voting. Online voting is simply one mode of voting — maintaining a healthy mix of voting modes is probably the best solution to ensure that elections are not compromised.

Ed.: I guess declining turnout is probably a problem that is too big and complex to be understood or “fixed” — but how would you go about addressing it, if asked to do so..?

Mihkel / Kristjan: We fully agree — the technology of online voting will not fix low turnout as it doesn’t address the underlying problem. It simply makes voting somewhat more convenient. But voting is not difficult in the first place — with weekend voting, postal voting and absentee ballots; just to name a few things that already ease participation.

There are technologies that have a revolutionary effect (i.e. that alter impact and that are truly innovatory) and then there are small technological fixes that provide for a simpler and more pleasurable existence. Online voting is not revolutionary; it does not give a new experience of participation, it is simply one slightly more convenient mode of voting and for that a very worthwhile thing. And I think this is the maximum that can be done and that is within our control when it comes to influencing turnout. Small incremental fixes to a large multifaceted problem.

Read the full article: Mihkel Solvak and Kristjan Vassil (2017) Could Internet Voting Halt Declining Electoral Turnout? New Evidence that e-Voting is Habit-forming. Policy & Internet. DOI: 10.1002/poi3.160
Mihkel Solvak and Kristjan Vassil were talking to blog editor David Sutcliffe.
Digital platforms are governing systems — so it’s time we examined them in more detail https://ensr.oii.ox.ac.uk/digital-platforms-are-governing-systems-so-its-time-we-examined-them-in-more-detail/ Tue, 29 Aug 2017 09:49:29 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4346 Digital platforms are not just software-based media, they are governing systems that control, interact, and accumulate. As surfaces on which social action takes place, digital platforms mediate — and to a considerable extent, dictate — economic relationships and social action. By automating market exchanges they solidify relationships into material infrastructure, lend a degree of immutability and traceability to engagements, and render what previously would have been informal exchanges into much more formalized rules.

In his Policy & Internet article “Platform Logic: An Interdisciplinary Approach to the Platform-based Economy”, Jonas Andersson Schwarz argues that digital platforms enact a twofold logic of micro-level technocentric control and macro-level geopolitical domination, while supporting a range of generative outcomes between the two levels. Technology isn’t ‘neutral’, and what designers want may clash with what users want: so it’s important that we take a multi-perspective view of the role of digital platforms in contemporary society. For example, if we only consider the technical, we’ll notice modularity, compatibility, compliance, flexibility, mutual subsistence, and cross-subsidization. By contrast, if we consider ownership and organizational control, we’ll observe issues of consolidation, privatization, enclosure, financialization and protectionism.

When focusing on local interactions (e.g. with users), the digital nature of platforms is seen to strongly determine structure; essentially representing an absolute or totalitarian form of control. When we focus on geopolitical power arrangements in the “platform society”, patterns can be observed that are worryingly suggestive of market dominance, colonization, and consolidation. Concerns have been expressed that these (overwhelmingly US-biased) platform giants are not only enacting hegemony, but are on a road to “usurpation through tech — a worry that these companies could grow so large and become so deeply entrenched in world economies that they could effectively make their own laws”.

We caught up with Jonas to discuss his findings:

Ed.: You say that there are lots of different ways of considering “platforms”: what (briefly) are some of these different approaches, and why should they be linked up a bit? Certainly the conference your paper was presented at (“IPP2016: The Platform Society”) seemed to have struck an incredibly rich seam in this topic, and I think showed the value of approaching an issue like digital platforms from multiple disciplinary angles.

Jonas: In my article I’ve chosen to exclusively theorize *digital* platforms, which of course narrows down the meaning of the concept, to begin with. There are different interpretations as to what actually constitutes a digital platform. There has to be an element of proprietary control over the surface on which interaction takes place, for example. While being ubiquitous digital tools, free software and open protocols need not necessarily be considered platforms, while proprietary operating systems should.

Within contemporary media studies there is considerable divergence as to whether one should define so-called over-the-top streaming services as platforms or not. Netflix, for example: in a strict technical sense, it’s not a platform for self-publishing and sharing in the way that YouTube is—but, in an economic sense, Netflix definitely enacts a multi-sided market, which is one of the key components of what a platform does, economically speaking. Since platforms crystallize economic relationships into material infrastructure, conceptual conflation of this kind is unavoidable—different scholars tend to put different emphasis on different things.

Hence, when it comes to normative concerns, there are numerous approaches, ranging from largely apolitical computer science and design management studies, brandishing a largely optimistic view where blithe conceptions of innovation and generativity are emphasized, to critical approaches in political economy, where things like market dominance and consolidation are emphasized.

In my article, I try to relate to both of these schools of thought, by noting that they each are normative — albeit in vastly different ways — and by noting that not only do they each have somewhat different focus, they actually bring different research objects to the table: Usually, “efficacy” in purely technical interaction design is something altogether different than “efficacy” in matters of societal power relations, for example. While both notions can be said to be true, their respective validity might differ, depending on which matter of concern we are dealing with in each respective inquiry.

Ed.: You note in your article that platforms have a “twofold logic of micro-level technocentric control and macro-level geopolitical domination” .. which sounds quite a lot like what government does. Do you think “platform as government” is a useful way to think about this, i.e. are there any analogies?

Jonas: Sure, especially if we understand how platforms enact governance in really quite rigid forms. Platforms literally transform market relations into infrastructure. Compared to informal or spontaneous social structures, where there’s a lot of elasticity and ambiguity — put simply, giving-and-taking — automated digital infrastructure operates by unambiguous implementations of computer code. As Lawrence Lessig and others have argued, the perhaps most dangerous aspect of this is when digital infrastructures implement highly centralized modes of governance, often literally only having one point of command-and-control. The platform owner flicks a switch, and then certain listings and settings are allowed or disallowed, and so on…

This should worry any liberal, since it is a mode of governance that is totalitarian by nature; it runs counter to any democratic, liberal notion of spontaneous, emergent civic action. Funnily, a lot of Silicon Valley ideology appears to be indebted to theorists like Friedrich von Hayek, who observed a calculative rationality emerging out of heterogeneous, spontaneous market activity — but at the same time, Hayek’s call to arms was in itself a reaction to central planning of the very kind that I think digital platforms, when designed in too rigid a way, risk erecting.

Ed.: Is there a sense (in hindsight) that these platforms are basically the logical outcome of the ruthless pursuit of market efficiency, i.e. enabled by digital technologies? But is there also a danger that they could lock out equitable development and innovation if they become too powerful (e.g. leading to worries about market concentration and anti-trust issues)? At one point you ask: “Why is society collectively acquiescing to this development?” .. why do you think that is?

Jonas: The governance aspect above rests on a kind of managerialist fantasy of perfect calculative rationality that is conferred upon the platform as an allegedly neutral agent or intermediary; scholars like Frank Pasquale have begun to unravel some of the rather dodgy ideology underpinning this informational idealism, or “dataism,” as José van Dijck calls it. However, it’s important to note how much of this risk for overly rigid structures comes down to sheer design implementation; I truly believe there is scope for more democratically adaptive, benign platforms, but that can only be achieved either through real incentives at the design stage (e.g. Wikipedia, and the ways in which its core business idea involves quality control by design), or through ex-post regulation, forcing platform owners to consider certain societally desirable consequences.

Ed.: A lot of this discussion seems to be based on control. Is there a general theory of “control” — i.e. are these companies creating systems of user management and control that follow similar conceptual / theoretical lines, or just doing “what seems right” to them in their own particular contexts?

Jonas: Down the stack, there is always a binary logic of control at play in any digital infrastructure. Still, on a higher level in the stack, as more complexity is added, we should expect to see more non-linear, adaptive functionality that can handle complexity and context. And where computational logic falls short, we should demand tolerable degrees of human moderation, more than there is now, to be sure. Regulators are going this way when it comes to things like Facebook and hate speech, and I think there is considerable consumer demand for it, as when disputes arise on Airbnb and similar markets.

Ed.: What do you think are the main worries with the way things are going with these mega-platforms, i.e. the things that policy-makers should hopefully be concentrating on, and looking out for?

Jonas: Policymakers are beginning to realize the unexpected synergies that big data gives rise to. As The Economist recently pointed out, once you control portable smartphones, you’ll have instant geopositioning data on a massive scale — you’ll want to own and control map services because you’ll then also have data on car traffic in real time, which means you’d be likely to have the transportation market cornered, self-driving cars especially… If one takes an agnostic, heterodox view of companies like Alphabet, some of their far-flung projects actually begin to make sense, if synergy is taken into consideration. For automated systems, the more detailed the data becomes, the better the system will perform; vast pools of data get to act as protective moats.

One solution that The Economist suggests, and that has been championed for years by internet veteran Doc Searls, is to press for vastly increased transparency in terms of user data, so that individuals can improve their own sovereignty, control their relationships with platform companies, and thereby collectively demand that the companies in question disclose the value of this data — which would, by extension, improve signalling of the actual value of the company itself. If today’s platform companies are reluctant to do this, is that because it would perhaps reveal some of them to be less valuable than they are held out to be?

Another potentially useful, proactive measure, that I describe in my article, is the establishment of vital competitors or supplements to the services that so many of us have gotten used to being provided for by platform giants. Instead of Facebook monopolizing identity management online, which sadly seems to have become the norm in some countries, look to the Scandinavian example of BankID, which is a platform service run by a regional bank consortium, offering a much safer and more nationally controllable identity management solution.

Alternative platform services like these could be built by private companies as well as state-funded ones; alongside privately owned consortia of this kind, it would be interesting to see innovation within the public service remit, exploring how that concept could be re-thought in an era of platform capitalism.


Read the full article: Jonas Andersson Schwarz (2017) Platform Logic: An Interdisciplinary Approach to the Platform-based Economy. Policy & Internet DOI: 10.1002/poi3.159.

Jonas Andersson Schwarz was talking to blog editor David Sutcliffe.

Open government policies are spreading across Europe — but what are the expected benefits? https://ensr.oii.ox.ac.uk/open-government-policies-are-spreading-across-europe-but-what-are-the-expected-benefits/ Mon, 17 Jul 2017 08:34:07 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4272 Open government policies are spreading across Europe, challenging previous models of the public sector, and defining new forms of relationship between government, citizens, and digital technologies. In their Policy & Internet article “Why Choose Open Government? Motivations for the Adoption of Open Government Policies in Four European Countries,” Emiliana De Blasio and Donatella Selva present a qualitative analysis of policy documents from France, Italy, Spain, and the UK, in order to map out the different meanings of open government, and how it is framed by different national governments.

As a policy agenda, open government can be thought of as involving four variables: transparency, participation, collaboration, and digital technologies in democratic processes. Although the variables are all interpreted in different ways, participation, collaboration, and digital technology provide the greatest challenge to government, given they imply a major restructuring of public administration, whereas transparency goals (i.e., the disclosure of open data and the provision of monitoring tools) do not. Indeed, transparency is mentioned in the earliest accounts of open government from the 1950s.

The authors show the emergence of competing models of open government in Europe, with transparency and digital technologies being the most prominent issues in open government, and participation and collaboration being less considered and implemented. The standard model of open government seems to stress innovation and openness, and occasionally of public–private collaboration, but fails to achieve open decision making, with the policy-making process typically rooted in existing mechanisms. However, the authors also see the emergence of a policy framework within which democratic innovations can develop, testament to the vibrancy of the relationship between citizens and the public administration in contemporary European democracies.

We caught up with the authors to discuss their findings:

Ed.: Would you say there are more similarities than differences between these countries’ approaches and expectations for open government? What were your main findings (briefly)?

Emiliana / Donatella: We can imagine the four European countries (France, Italy, Spain and the UK) as positioned on a continuum between a participatory frame and an economic/innovation frame: on one side, French policies focus on open government in order to strengthen and innovate the tradition of débat public; on the other, the roots of the UK’s open government are in cost-efficiency, accountability and transparency arguments. Between those two poles, Italian and Spanish policies situate open government in the context of a massive reform of the public sector, in order to reduce the administrative burden and to restore citizen trust in institutions. Two years after we wrote the article, we can observe that both in Italy and Spain something has changed, and participation has regained attention as a public policy issue.

Ed.: How much does policy around open data change according to who’s in power? (Obama and Trump clearly have very different ideas about the value of opening up government..). Or do civil services tend to smooth out any ideological differences around openness and transparency, even as parties enter and leave power?

Emiliana / Donatella: The case of open data is quite peculiar: it is one of the few policy issues directly addressed by the European Union Commission, and now by the transnational agreement on the G8 Open Data Charter, and for this reason we could say there is a homogenising trend. Moreover, opening up data is an ongoing process — started at least eight years ago — that will be too difficult for any new government to stop. As for openness and transparency in general, Cameron (and now May), Hollande, Monti (and then Renzi) and Rajoy’s governments, all wrote policies with a strong emphasis on innovation and openness as the key for a better future.

In fact, we observed that at the national level, the rhetoric of innovation and openness is bipartisan, and not dependent on political orientation — although the concrete policy instruments and implementation strategies might differ. It is also for this reason that governments tend to remain in the “comfort zone” of transparency and public-private partnerships: they still evoke a change in the relationship between the public sector and civil society, but they don’t actually address this change.

Still, we should highlight that at the regional and local levels open data, transparency and participation policies are mostly promoted by liberal and/or left-leaning administrations.

Ed.: Your results for France (i.e. almost no mention of the digital economy, growth, or reform of public services) are basically the opposite of Macron’s (winning) platform of innovation and reform. Did Macron identify a problem in France; and might you expect a change as he takes control?

Emiliana / Donatella: Macron’s electoral programme is based on what he already did while in charge at the Ministry of Economy: he pursued a French digital agenda willing to attract foreign investments, to create digital productive hubs (the French Tech), and innovate the whole economy. Interestingly, however, he did not frame those policies under the umbrella of open government, preferring to speak about “modernisation”. The importance given by Macron to innovation in the economy and public sector finds some antecedents in the policies we analysed: the issue of “modernisation” was prominent and we expect it will be even more, now that he has gained the presidency.

Ed.: In your article you analyse policy documents, i.e. texts that set out hopes and intentions. But is there any sense of how much practical effect these have: particularly given how expensive it is to open up data? You note “the Spanish and Italian governments are especially focused on restoring trust in institutions, compensating for scandals, corruption, and a general distrust which is typical in Southern Europe” .. and yet the current Spanish government is still being rocked by corruption scandals.

Emiliana / Donatella: The efficacy of any kind of policy can vary depending on many factors — such as the internal political context, international constraints, economic resources, and the clarity of policy instruments. In addition, we should consider that at the national level, very few policies have an immediate consequence on citizens’ everyday lives. This is surely one of the worst problems of open government: on the one side, it is a policy agenda promoted top-down, by international and/or national institutions; on the other, it fails to engage local communities in a purposeful dialogue. As such, open government policies appear to be self-reflective acts by governments, as paradoxical as this might be.

Ed.: Despite terrible, terrible things like the Trump administration’s apparent deletion of climate data, do you see a general trend towards increased datafication, accountability, and efficiency (perhaps even driven by industry, as well as NGOs)? Or are public administrations far too subject to political currents and individual whim?

Emiliana / Donatella: As we face turbulent times, it would be very risky to assert that tomorrow’s world will be more open than today’s. But even if we observe some interruptions, the principles of open democracy and open government have colonised public agendas: as we have tried to stress in our article, openness, participation, collaboration and innovation can have different meanings and degrees, but they succeeded in acquiring the status of policy issues.

And as you rightly point out, the way towards accountability and openness is not a public sector’s prerogative any more: many actors from civil society and industry have already mobilised in order to influence government agendas, public opinion, and to inform citizens. As the first open government policies start to produce practical effects on people’s everyday lives, we might expect that public awareness will rise, and that no individual will be able to ignore it.

Ed.: And does the EU have any supra-national influence, in terms of promoting general principles of openness, transparency etc.? Or is it strictly left to individual countries to open up (if they want), and in whatever direction they like? I would have thought the EU would be the ideal force to promote rational technocratic things like open government?

Emiliana / Donatella: The EU has the power of stressing some policy issues, and letting some others be “forgotten”. The complex legislative procedures of the EU, together with the trans-national conflictuality, produce policies with different degrees of enforcement. Generally speaking, some EU policies have a direct influence on national laws, whereas some others don’t, leaving with national governments the decision of whether or not to act. In the case of open government, we see that the EU has been particularly influential in setting the Digital Agenda for 2020 and now the Sustainable Future Agenda for 2030; in both documents, Europe encourages Member States to dialogue and collaborate with private actors and civil society, in order to achieve some objectives of economic development.

At the moment, initiatives like the Open Government Partnership — which runs outside the EU competence and involves many European countries — are tying up governments in trans-national networks converging on a set of principles and methods. Because of that Partnership, for example, countries like Italy and Spain have experimented with the first national co-drafting procedures.

Read the full article: De Blasio, E. and Selva, D. (2016) Why Choose Open Government? Motivations for the Adoption of Open Government Policies in Four European Countries. Policy & Internet 8 (3). DOI: 10.1002/poi3.118.


Emiliana De Blasio and Donatella Selva were talking to blog editor David Sutcliffe.

Does Twitter now set the news agenda? https://ensr.oii.ox.ac.uk/does-twitter-now-set-the-news-agenda/ Mon, 10 Jul 2017 08:30:28 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4176 The information provided in the traditional media is of fundamental importance for the policy-making process, signalling which issues are gaining traction, which are falling out of favour, and introducing entirely new problems for the public to digest. But the monopoly of the traditional media as a vehicle for disseminating information about the policy agenda is being superseded by social media, with Twitter in particular used by politicians to influence traditional news content.

In their Policy & Internet article, “Politicians and the Policy Agenda: Does Use of Twitter by the U.S. Congress Direct New York Times Content?” Matthew A. Shapiro and Libby Hemphill examine the extent to which the traditional media is influenced by politicians’ Twitter posts. They draw on indexing theory, which states that media coverage and framing of key policy issues will tend to track elite debate. To understand why the newspaper covers an issue, daily New York Times content is modelled as a function of the previous day’s Times coverage of each policy issue area, as well as the previous day’s Twitter posts by Democrats and Republicans about each of those issue areas.
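That model description translates naturally into a lagged regression: today’s Times attention to an issue as a function of yesterday’s attention and yesterday’s congressional tweets by party. The sketch below simulates placeholder data purely to show the shape of such a model; it is not the authors’ specification, dataset, or estimation strategy.

```python
# Sketch of a lagged "indexing" regression: daily NYT attention to one
# policy issue on its own one-day lag plus lagged tweet counts by party.
# All data here are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
days = 200
df = pd.DataFrame({
    "dem_tweets": rng.poisson(5, days).astype(float),
    "rep_tweets": rng.poisson(5, days).astype(float),
})
df["dem_lag"] = df["dem_tweets"].shift(1)
df["rep_lag"] = df["rep_tweets"].shift(1)
# Simulated coverage that partly tracks the previous day's tweets.
df["nyt_stories"] = (0.3 * df["dem_lag"].fillna(0)
                     + 0.2 * df["rep_lag"].fillna(0)
                     + rng.normal(0, 1, days))
df["nyt_lag"] = df["nyt_stories"].shift(1)

fit = smf.ols("nyt_stories ~ nyt_lag + dem_lag + rep_lag",
              data=df.dropna()).fit()
print(fit.params)  # significant tweet lags would indicate indexing of Twitter
```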

They ask to what extent the agenda-setting efforts of members of Congress are acknowledged by the traditional media; what advantages, if any, one party has over the other, measured by the traditional media’s increased attention; and whether there is any variance across different policy issue areas. They find that Twitter is a legitimate political communication vehicle for US officials, that journalists consider Twitter when crafting their coverage, and that Twitter-based announcements by members of Congress are a valid substitute for the traditional communiqué in journalism, particularly for issues related to immigration and marginalized groups, and issues related to the economy and health care.

We caught up with the authors to discuss their findings:

Ed.: Can you give a quick outline of media indexing theory? Does it basically say that the press reports whatever the elite are talking about? (i.e. that press coverage can be thought of as a simple index, which tracks the many conversations that make up elite debate).

Matthew: Indexing theory, in brief, states that the content of media reports reflects the degree to which elites – politicians and leaders in government in particular – are in agreement or disagreement. The greater the level of agreement or consensus among elites, the less news there is to report in terms of elite conflict. This is not to say that a consensus among elites is not newsworthy; indexing theory conveys how media reporting is a function of the multiple voices that exist when there is elite debate.

Ed.: You say Twitter seemed a valid measure of news indexing (i.e. coverage) for at least some topics. Could it be that the NYT isn’t following Twitter so much as Twitter (and the NYT) are both following something else, i.e. floor debates, releases, etc.?

Matthew: We can’t test whether the NYT is following Twitter rather than floor debates or press releases without collecting data for the latter. But if the House and Senate Press Galleries are indexing the news based on House and Senate debates, and if Twitter posts by members of Congress reflect the House and Senate discussions, we could still argue that Twitter remains significant, because there are no limits on the amount of discussion — i.e. the boundaries of the House and Senate floors no longer exist — and the media are increasingly reliant on politicians’ use of Twitter to communicate to the press. In any case, the existing research shows that journalists are increasingly relying on Twitter posts for updates from elites.

Ed.: I’m guessing that indexing theory only really works for non-partisan media that follow elite debates, like the NYT? Or does it also work for tabloids? And what about things like Breitbart (and its ilk) .. which I’m guessing appeals explicitly to a populist audience, rather than particularly caring what the elite are talking about?

Matthew: If a study similar to our was done to examine the indexing tendencies of tabloids, Breitbart, or a similar type of media source, the first step would be to determine what is being discussed regularly in these outlets. Assuming, for example, that there isn’t much discussion about marginalized groups in Breitbart, in the context of indexing theory it would not be relevant to examine the pool of congressional Twitter posts mentioning marginalized groups. Those posts are effectively off of Breitbart’s radar. But, generally, indexing theory breaks down if partisanship and bias drive the reporting.

Ed.: Is there any sense in which Trump’s “Twitter diplomacy” has overturned or rendered moot the recent literature on political uses of Twitter? We now have a case where a single (personal) Twitter account can upset the stock market — how does one theorise that?

Matthew: In terms of indexing theory, we could argue that Trump’s Twitter posts themselves generate a response from Democrats and Republicans in Congress and thus muddy the waters by conflating policy issues with other issues like his personality, ties to Russia, his fact-checking problems, etc. This is well beyond our focus in the article, but we speculate that Trump’s early-dawn use of Twitter is primarily for marketing, damage control, and deflection. There are really many different ways to study this phenomenon. One could, for example, examine the function of unfiltered news from politician to the public and compare it with the news that is simultaneously reported in the media. We would also be interested in understanding why Trump and politicians like Trump frame their Twitter posts the way they do, what effect these posts have on their devoted followers as well as their fence-sitting followers, and how this mobilizes Congress both online (i.e. on Twitter) and when discussing and voting on policy options on the Senate and House floors. These areas of research would all build upon rather than render moot the extant literature on the political uses of Twitter.

Ed.: Following on: how does Indexing theory deal with Trump’s populism (i.e. avowedly anti-Washington position), hatred and contempt of the media, and apparent aim of bypassing the mainstream press wherever possible: even ditching the press pool and favouring populist outlets over the NYT in press gaggles. Or is the media bigger than the President .. will indexing theory survive Trump?

Matthew: Indexing theory will of course survive Trump. What we are witnessing in the media, however, is an inability to limit gaper’s block, in the sense that the media focus on the more inflammatory and controversial aspects of Trump’s Twitter posts – unfortunately on a daily basis – rather than reporting the policy implications. The media have to report what is news, and presidential Twitter posts are now newsworthy, but we would argue that we are reaching a point where anything but the meat of the policy implications must be effectively filtered. Until we reach a point where the NYT ignores the inflammatory nature of Trump’s Twitter posts, it will be challenging to test indexing theory in the context of the policy agenda setting process.

Ed.: There are recent examples (Brexit, Trump) of the media apparently getting things wrong because they were following the elites and not “the forgotten” (or deplorable) .. who then voted in droves. Is there any sense in the media industry that it needs to rethink things a bit — i.e. that maybe the elite is not always going to be in control of events, or even be an accurate bellwether?

Matthew: This question highlights an omission from our article, namely that indexing theory marginalizes the role of non-elite voices. We agree that the media could do a better job reporting on certain things; for instance, relying extensively on weather vanes of public opinion that do not account for inaccurate self-reporting (i.e. people not accurately representing themselves when being polled about their support for Trump, Brexit, etc.) or understanding why disenfranchised voters might opt to stay home on Election Day. When it comes to setting the policy agenda, which is the focus of our article, we stand by indexing theory given our assumption that the policy process itself is typically directed from those holding power. On that point, and regardless of whether it is normatively appropriate, elites are accurate bellwethers of the policy agenda.

Read the full article: Shapiro, M.A. and Hemphill, L. (2017) Politicians and the Policy Agenda: Does Use of Twitter by the U.S. Congress Direct New York Times Content? Policy & Internet 9 (1) doi:10.1002/poi3.120.


Matthew A. Shapiro and Libby Hemphill were talking to blog editor David Sutcliffe.

How policy makers can extract meaningful public opinion data from social media to inform their actions https://ensr.oii.ox.ac.uk/extracting-meaningful-public-opinion-data-from-social-media-to-inform-policy-makers/ Fri, 07 Jul 2017 09:48:53 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4325 The role of social media in fostering the transparency of governments and strengthening the interaction between citizens and public administrations has been widely studied. Scholars have highlighted how online citizen-government and citizen-citizen interactions favour debates on social and political matters, and positively affect citizens’ interest in political processes, like elections, policy agenda setting, and policy implementation.

However, while top-down social media communication between public administrations and citizens has been widely examined, the bottom-up side of this interaction has been largely overlooked. In their Policy & Internet article “The ‘Social Side’ of Public Policy: Monitoring Online Public Opinion and Its Mobilization During the Policy Cycle,” Andrea Ceron and Fedra Negri aim to bridge the gap between knowledge and practice, by examining how the information available on social media can support the actions of politicians and bureaucrats along the policy cycle.

Policymakers, particularly politicians, have always been interested in knowing citizens’ preferences, in measuring their satisfaction and in receiving feedback on their activities. Using the technique of Supervised Aggregated Sentiment Analysis, the authors show that meaningful information on public services, programmes, and policies can be extracted from the unsolicited comments posted by social media users, particularly those posted on Twitter. They use this technique to extract and analyse citizen opinion on two major public policies (on labour market reform and school reform) that drove the agenda of the Matteo Renzi cabinet in Italy between 2014 and 2015.
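The core idea of aggregated sentiment analysis, in the tradition of Hopkins and King’s work from which these techniques descend, is to estimate the proportions of opinion categories in an unlabeled corpus directly, rather than classifying posts one by one and summing the labels. Here is a deliberately simplified sketch: the feature probabilities are invented, and a generic constrained least-squares solver stands in for the authors’ actual estimator.

```python
# Toy version of the aggregated-sentiment idea: recover category *shares*
# in an unlabeled corpus from feature frequencies, without classifying
# individual posts. All numbers are invented for illustration.
import numpy as np
from scipy.optimize import lsq_linear

# P(feature | category), estimated from a hand-coded training set.
# Rows = word features; columns = categories (pro / against / neutral).
P_feat_given_cat = np.array([
    [0.60, 0.05, 0.20],  # e.g. feature "support"
    [0.10, 0.70, 0.20],  # e.g. feature "oppose"
    [0.30, 0.25, 0.60],  # e.g. feature "reform"
])

# Observed feature frequencies in the unlabeled stream of posts.
P_feat = np.array([0.30, 0.35, 0.35])

# Solve P_feat = P_feat_given_cat @ P_cat with 0 <= P_cat <= 1, then
# renormalize so shares sum to one (a crude stand-in for the constrained
# estimation used in the real method).
res = lsq_linear(P_feat_given_cat, P_feat, bounds=(0, 1))
P_cat = res.x / res.x.sum()
print(dict(zip(["pro", "against", "neutral"], P_cat.round(2))))
```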

They show how online public opinion reacted to the different policy alternatives formulated and discussed during the adoption of the policies. They also demonstrate how social media analysis allows monitoring of the mobilization and de-mobilization processes of rival stakeholders in response to the various amendments adopted by the government, with results comparable to those of a survey and a public consultation that were undertaken by the government.

We caught up with the authors to discuss their findings:

Ed.: You say that this form of opinion monitoring and analysis is cheaper, faster and easier than (for example) representative surveys. That said, how commonly do governments harness this new form of opinion-monitoring (with the requirement for new data skills, as well as attitudes)? Do they recognise the value of it?

Andrea / Fedra: Governments are starting to pay attention to the world of social media. Just to give an idea, the Italian government has issued a call for survey data to be collected jointly with the results of social media analysis, with the two types of data provided in a common report. The report has not been publicly shared, suggesting that the cabinet considers such information highly valuable. VOICES from the blogs, a spin-off created by Stefano Iacus, Luigi Curini and Andrea Ceron (University of Milan), has been involved in this and, for sure, we can attest that in a couple of instances the government modified its actions in line with shifts in public opinion observed both through survey polls and sentiment analysis. This happened with the law on Civil Unions and with the abolition of the “voucher” (a flexible form of worker payment). So far these are just instances — although there are signs of enhanced responsiveness, particularly when online public opinion represents the core constituency of ruling parties, as the case of the school reform (discussed in the article) clearly indicates: teachers are in fact the core constituency of the Democratic Party.

Ed.: You mention that the natural language used by social media users evolves continuously and is sensitive to the topic discussed, resulting in error. The method you use involves scaling up a human-coded (i.e. accurate) ontology. Could you discuss how this might work in practice? Presumably humans would need to code the terms of interest first, as the method wouldn’t be able to pick up new issues (e.g. around a completely new phrase: say, “Bowling Green”?) automatically.

Andrea / Fedra: Gary King says that the best technology is human empowered. There are at least two great advantages in exploiting human coders. First, with our technique coders manage to get rid of noise better than any algorithm, as a single word can often be judged on topic or off topic only from the context and the rest of the sentence. Second, human coders can collect deeper information by mining the real opinions expressed in the online conversations. This sometimes allows them to detect, bottom-up, arguments that had been completely ignored ex-ante by scholars or analysts.

Ed.: There has been a lot of debate in the UK around “false balance”, e.g. the BBC giving equal coverage to climate deniers (despite being a tiny, unrepresentative, and uninformed minority), in an attempt at “impartiality”: how do you get round issues of non-representativeness in social media, when tracking — and more importantly, acting on — opinion?

Andrea / Fedra: Nowadays social media are a non-representative sample of a country’s population. However, the idea of representativeness linked to the concept of “public opinion” dates back to the early days of polling. Today, by contrast, online conversations often represent an “activated public opinion” comprising stakeholders who express their voices in an attempt to build wider support around their views. In this regard, social media data are interesting precisely because of their non-representativeness. A tiny group can speak loudly, and this voice can gain the support of an increasing number of people. If the activated public opinion acts as an “influencer”, this implies that social media analysis could anticipate trends and shifts in public opinion.

Ed.: As data becomes increasingly open and tractable (controlled by people like Google, Facebook, or monitored by e.g. GCHQ / NSA), and text-techniques become increasingly sophisticated: what is the extreme logical conclusion in terms of government being able to track opinion, say in 50 years, following the current trajectory? Or will the natural messiness of humans and language act as a natural upper limit on what is possible?

Andrea / Fedra: The purpose of scientific research, particularly applied research, is to improve our well-being and make our lives easier. For sure there could be issues linked with the privacy of our data and, in a sci-fi scenario, government and police will be able to read our minds — either to prevent crimes and terrorist attacks (as in the film Minority Report) or to detect, isolate and punish dissent. However, technology is not a standalone object, and we should not forget that there are humans behind it. Whether these humans are governments, activists or common citizens can certainly make a difference. If governments try to misuse technology, they will certainly meet a reaction from citizens — one that can be amplified precisely via this new technology.

Read the full article: Ceron, A. and Negri, F. (2016) The “Social Side” of Public Policy: Monitoring Online Public Opinion and Its Mobilization During the Policy Cycle. Policy & Internet 8 (2) DOI:10.1002/poi3.117


Andrea Ceron and Fedra Negri were talking to blog editor David Sutcliffe.

We should pay more attention to the role of gender in Islamist radicalization https://ensr.oii.ox.ac.uk/we-should-pay-more-attention-to-the-role-of-gender-in-islamist-radicalization/ Tue, 04 Jul 2017 08:54:39 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4249 One of the key current UK security issues is how to deal with British citizens returning from participation in ISIS in Syria and Iraq. Most of the hundreds fighting with ISIS were men and youths. But dozens of British women and girls also travelled to join Islamic State in Syria and Iraq. For some, online recruitment appeared to be an important part of their radicalization, and many took to the Internet to praise life in the new Caliphate once they arrived there. These cases raised concerns about female radicalization online, and put the issue of women, terrorism, and radicalization firmly on the policy agenda. This was not the first time such fears had been raised. In 2010, the university student Roshonara Choudhry stabbed her Member of Parliament, after watching YouTube videos of the radical cleric Anwar Al Awlaki. She is the first and only British woman so far convicted of a violent Islamist attack.

In her Policy & Internet article “The Case of Roshonara Choudhry: Implications for Theory on Online Radicalization, ISIS Women, and the Gendered Jihad”, Elizabeth Pearson explores how gender might have factored in Roshonara’s radicalization, in order to present an alternative to existing theoretical explanations. First, by precluding her from real-world engagement with Islamism on her own terms, gender limitations in the physical world might have pushed her to the Internet. There, a lack of religious knowledge made her particularly vulnerable to extremist ideology, a susceptibility only increased through Internet socialization and exposure to an active radical milieu. Finally, gender might have created a dissonance between her online and her multiple “real” gendered identities, resulting in violence.

As yet, there is no adequately proven link between online material and violent acts. But given the current reliance of terrorism research on the online environment, and the reliance of policy on terrorism research, the relationship between the virtual and offline domains must be better understood. So too must the process of “radicalization” — which still lacks clarity, and relies on theorizing that is rife with assumptions. Whatever the challenges, understanding how men and women become violent radicals, and the differences there might be between them, has never been more important.

We caught up with Elizabeth to discuss her findings:

Ed.: You note “the Internet has become increasingly attractive to many women extremists in recent years” — do these extremist views tend to be found on (general) social media or on dedicated websites? Presumably these sites are discoverable via fairly basic search?

Elizabeth: Yes and no. Much content is easily found online. ISIS has been very good at ‘colonizing’ popular social media platforms with supporters, and in particular, Twitter was for a period the dominant site. It was ideal as it allowed ISIS fans to find one another, share material, and build networks and communities of support. In the past 18 months Twitter has made a concerted – and largely successful – effort to ‘take down’ or suspend accounts. This may simply have pushed support elsewhere. We know that Telegram is now an important channel for information, for example. Private groups, the dark web and hidden net resources exist alongside open source material on sites such as Facebook, familiar to everyone. Given the illegality of much of this content, there has been huge pressure on companies to respond. Still, there is criticism from bodies such as the Home Affairs Select Committee that they are not responding quickly or efficiently enough.

Ed.: This case seemed to represent a collision not just of “violent jihadists vs the West” but also “Salafi-Jihadists vs women” (as well as “Western assumptions of Muslim assumptions of acceptable roles for women”) .. were these the main tensions at play here?

Elizabeth: One of the key aspects of Roshonara’s violence was that it was transgressive. Violent Jihadist groups tend towards conservatism regarding female roles. Although there is no theological reason why women should not participate in the defensive Jihad, they are not encouraged to do so. ISIS has worked hard in its propaganda to keep female roles domestic – yet ideologically so. Roshonara appears to have absorbed Al Awlaki’s messaging regarding the injustices faced by Muslims, but only acted when she saw a video by Abdullah Azzam, a key scholar for Al Qaeda supporters, which she understood as justifying female violence. Hatred of Western foreign policy – and her MP’s support for the intervention in Iraq – appeared to be the motivation for her attack; a belief that women could also fight is what prompted her to carry it out herself.

Ed.: Does this struggle tend to be seen as a political struggle about land and nationhood; or a supranational religious struggle — or both? (with the added complication of Isis conflating nation and religion..)

Elizabeth: Nobody yet understands exactly why people ‘radicalize’. It’s almost impossible to profile violent radicals beyond saying they tend to be mainly male – and as we know, that is not a hard and fast rule either. What we can say is that there are complex factors, and a variety of recurrent themes cited by violent actors, and found in propaganda and messaging. One narrative is about political struggle on behalf of Muslims, who face injustice, particularly from the West. ISIS has made this struggle about the domination of land and nationhood, a development of Al Qaeda’s message. Religion is also important to this. Despite different levels of knowledge of Islam, supporters of the violent Jihad share commitment to battle as justified in the Quran. They believe that Islam is the way, the only way, and they find in their faith an answer to global issues, and whatever is happening personally to them. It is not possible, in my view, to ignore the religious component declared in this struggle. But there are other factors too. That’s what makes this so difficult and complex.

Ed.: You say that Roshonara “did not follow the path of radicalization set out in theory”. How so? But also .. how important and grounded is this “theory” in the practice of counter-radicalization? And what do exceptions like Roshonara Choudhry signify?

Elizabeth: Theory — based on empirical evidence — suggests that violence is a male preserve. Violent Jihadist groups also generally restrict their violence to men, and men only. Theory also tells us that actors rarely carry out violence alone. Belonging is an important part of the violent Jihad, and ‘entrance’ to violence is generally through people you know: friends, family, acquaintances. Even where we have seen young women travel to join ISIS, for example, this has tended to be facilitated through friends, online contacts, or family. Roshonara, as a woman acting alone in this time before ISIS, is therefore quite unusual. She signifies – through her somewhat unique case – just how transgressive female violence is, and just how unusual solitary action is. She also calls into question the role of the internet. The internet alone is not usually sufficient for radicalization; offline contacts matter. In her case there remain questions about what other contacts may have influenced her violence.

I’m not entirely sure how joined up counter-radicalization practices and radicalization theory are. The Prevent strategy aside, there are many different approaches, in the UK alone. The most successful that I have seen are due to committed individuals who know the communities they are based in and are trusted by them. It is relationships that seem to count, above all else.

Ed.: Do you think her case is an interesting outlier (a “lone wolf” as people commented at the time), or do you think there’s a need for more attention to be paid to gender (and women) in this area, either as potential threats, or solutions?

Elizabeth: Roshonara is a young woman, still in jail for her crime. As I wrote this piece I thought of her as a student at King’s College London, as I am, and I found it therefore all the more affecting that she did what she did. There is a connection through that shared space. So it’s important for me to think of her in human terms, in terms of what her life was like, who her friends were, what her preoccupations were and how she managed, or did not manage, her academic success, her transition to a different identity from the one her parents came from. She is interesting to me because of this, and because she is an outlier. She is an outlier who reveals certain truths about what gender means in the violent Jihad. That means women, yes, but also men, ideas about masculinity, male and female roles. I don’t think we should think of young Muslim people as either ‘threats’ or ‘solutions’. These are not the only possibilities. We should think about society, and how gender works within it, and within particular communities within it.

Ed.: And is gender specifically “relevant” to consider when it comes to Islamic radicalization, or do you see similar gender dynamics across all forms of political and religious extremism?

Elizabeth: My current PhD research considers the relationship between the violent Jihad and the counter-Jihad – cumulative extremism. To me, gender matters in all study. It’s not really anything special or extra, it’s just a recognition that if you are looking at groups you need to take into account the different ways that men and women are affected. To me that seems quite basic, because otherwise you are not really seeing a whole picture. Conservative gender dynamics are certainly also at work in some nationalist groups. The protection of women, the function of women as representative of the honour or dishonour of a group or nation – these matter to groups and ideologies beyond the violent Jihad. However, the counter-Jihad is in other ways progressive, for example promoting narratives of protecting gay rights as well as women’s rights. So women for both need to be protected – but what they need to be protected from and how differs for each. What is important is that the role of women, and of gender, matters in consideration of any ‘extremism’, and indeed in politics more broadly.

Ed.: You’re currently doing research on Boko Haram — are you also looking at gender? And are there any commonalities with the British context you examined in this article?

Elizabeth: Boko Haram interests me because of the ways in which it has transgressed some of the most fundamental gender norms of the Jihad. Since 2014 it has carried out hundreds of suicide attacks using women and girls. This is highly unusual, and in fact unprecedented in terms of numbers. How this impacts on the group’s relationship with the international Jihad – and, since 2015, with ISIS, to whom Boko Haram’s leader pledged allegiance – is something I have been thinking about.

There are many local aspects of the Nigerian conflict that do not translate – poverty, the terrain, oral traditions of preaching, human rights violations, Sharia in northern Nigerian states, forced recruitment.. In gender terms however, the role of women, the honour/dishonour of women, and gender-based violence translate across contexts. In particular, women are frequently instrumentalized by movements for a greater cause. Perhaps the greatest similarity is the resistance to the imposition of Western norms, including gender norms, free-mixing between men and women and gender equality. This is a recurrent theme for violent Jihadists and their supporters across geography. They wish to protect the way of life they understand in the Quran, as they believe this is the word of God, and the only true word, superseding all man-made law.

Read the full article: Pearson, E. (2016) The Case of Roshonara Choudhry: Implications for Theory on Online Radicalization, ISIS Women, and the Gendered Jihad. Policy & Internet 8 (1) doi:10.1002/poi3.101.


Elizabeth Pearson was talking to blog editor David Sutcliffe.

What are the barriers to big data analytics in local government? https://ensr.oii.ox.ac.uk/what-are-the-barriers-to-big-data-analytics-in-local-government/ Wed, 28 Jun 2017 08:11:58 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4208 The concept of Big Data has become very popular over the last decade, with many large technology companies successfully building their business models around its exploitation. The UK’s public sector has tried to follow suit, with local governments in particular trying to introduce new models of service delivery based on the routine extraction of information from their own big data. These attempts have been hailed as the beginning of a new era for the public sector, with some commentators suggesting that it could help local governments transition toward a model of service delivery where the quantity and quality of commissioned services is underpinned by data intelligence on users and their current and future needs.

In their Policy & Internet article “Data Intelligence for Local Government? Assessing the Benefits and Barriers to Use of Big Data in the Public Sector“, Fola Malomo and Vania Sena examine the extent to which local governments in the UK are indeed using intelligence from big data, in light of the structural barriers they face when trying to exploit it. Their analysis suggests that the ambitions around the development of big data capabilities in local government are not reflected in actual use. Indeed, these methods have mostly been employed to develop new digital channels for service delivery, and even if the financial benefits of these initiatives are documented, very little is known about the benefits generated by them for the local communities.

While this is slowly changing as councils start to develop their big data capability, the overall impression gained from even a cursory overview is that the full potential of big data is yet to be exploited.

We caught up with the authors to discuss their findings:

Ed.: So what actually is “the full potential” that local government is supposed to be aiming for? What exactly is the promise of “big data” in this context?

Fola / Vania: Local governments seek, among other things, to improve service delivery. Big Data helps to increase the number of ways that local service providers can reach out to, and better the lives of, local inhabitants. In addition, the exploitation of Big Data allows them to better target the beneficiaries of their services and to emphasise early prevention, which may result in a reduction of delivery costs. Commissioners in a council need to understand the drivers of demand for services across different departments, and their connections: how services are connected to each other, and how changes in the provision of “upstream” services can affect “downstream” provision. Many local governments have reams of data (both hard and soft) on local inhabitants and businesses. Big Data can be used to improve services, increase quality of life and make doing business easier.

Ed.: I wonder: can the data available to a local authority even be considered to be “big data” — you mention that local government data tends to be complex, rather than “big and fast”, as in the industry understanding of “big data”. What sorts of data are we talking about?

Fola / Vania: Local governments hold data on individuals, companies, projects and other activities concerning the local community. Health data, including information on children and other at-risk individuals, forms a huge part of the data within local governments. We use the concept of the data ecosystem to talk about Big Data within local governments: the data ecosystem consists of different types of data, on different topics and units, which may be used for different purposes.

Complexity within the data is driven by its volume and by the large number of data sources. One must consider the fact that public agencies address needs from communities that cross the administrative boundaries of a single body. Also, the choice of data collection methodology and observation unit is driven by reporting requirements, which are influenced by central government. Lastly, data storage infrastructure may be designed to comply with reporting requirements rather than to link data across agencies; data is not necessarily produced to be merged. The data is not always “big and fast”, but it requires advanced storage and analytic tools to extract useful information that local areas can benefit from.

Ed.: Do you think local governments will ever have the capacity (budget, skill) to truly exploit “big data”? What were the three structural barriers you particularly identified?

Fola / Vania: Without funding there is no chance that local governments can fully exploit big data. With funding, local government can benefit from Big Data in a number of ways, though improved use of Big Data usually requires collaboration between agents. The three main structural barriers to the fruitful exploitation of big data by local governments are data access, ethical issues, and organisational change. In addition, skill gaps and under-investment in information technology have proved problematic.

Data access can be a problem if data exists in separate locations, with little communication between the organisations holding it and no easy way to move the data from one place to another. The main advantage of big data technologies is their ability to merge different types of data, mine them, and combine them for actionable insights. Nevertheless, while big data approaches assume that organisations can access all the data they need, this is not the case in the public sector. A uniform practice on what data can be shared locally has not yet emerged. Furthermore, there is no solution to the fact that data can span organisations that are not part of the public sector, and that may therefore be unwilling to share it with public bodies.

De-identifying personal data is another key requirement to fulfil before personal data can be shared under the terms of the Data Protection Act. This requirement is particularly relevant when merging small data sets, as individuals can easily be re-identified once the data linkage is completed. As a result, the only option left to facilitate the linkage of data sets containing personal information is to create a secure environment where data can be safely de-identified and then matched. Safe havens and trusted third parties have been developed exactly for this purpose. Data warehouses, where data from local governments and from other parts of the public sector can be matched and linked, have been developed as an intermediate solution to the lack of infrastructure for matching sensitive data.
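To make the safe haven idea concrete, here is a minimal sketch (an illustration under invented assumptions, not a description of any council's actual practice) of keyed-hash pseudonymisation: a secret key held only by the trusted third party turns direct identifiers into stable pseudonyms, so two agencies' records can be matched without raw names ever being exchanged.

```python
# Minimal sketch of keyed-hash pseudonymisation for record linkage in a
# safe haven. Key, field names, and records are invented; real linkage
# also requires careful normalisation and governance of the key.
import hashlib
import hmac

LINKAGE_KEY = b"secret-held-only-by-the-safe-haven"  # never given to agencies

def pseudonymise(name: str, dob: str) -> str:
    """Derive a stable pseudonym from normalised personal identifiers."""
    material = f"{name.strip().lower()}|{dob}".encode("utf-8")
    return hmac.new(LINKAGE_KEY, material, hashlib.sha256).hexdigest()

# Two agencies submit de-identified records; the safe haven links them.
housing = {"pid": pseudonymise("Jane Doe", "1980-01-31"), "arrears": True}
health = {"pid": pseudonymise("jane doe ", "1980-01-31"), "at_risk": True}
assert housing["pid"] == health["pid"]  # same person, linkable without names
```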

Due to the personal nature of the data, ethical issues arise concerning how to use information about individuals and whether persons should be identifiable. There is a huge debate on the ethical challenges posed by the routine extraction of information from Big Data. The extraction and manipulation of personal information cannot be easily reconciled with what is perceived to be ethically acceptable in this area. Additional ethical issues relate to the re-use of the output of specific predictive models for other purposes within the public sector. This issue is particularly relevant given that most predictive analytics algorithms only provide an estimate of the risk of an event.

Data usage is related to culture, and organisational change can be a medium- to longer-term process. As long as key stakeholders in the organisation accept that insights from data will inform service delivery, big data technologies can be used as levers to introduce changes in the way services are provided. Unfortunately, it is commonly believed that the deployment of big data technologies simply implies a change in the way data are interrogated and interpreted, and therefore should not have any bearing on the way internal processes are organised.

In addition, data usage can involve investment in information technology and training. It is well known that investment in IT has been very uneven between the private and public sectors, and within the private sector as well. Information and communications technology (ICT) budgets have grown across the private sector: the banking and financial services industry spends 8 percent of its total operating expenditure on ICT, while among local authorities ICT spending makes up only 3-6% of the total budget. Furthermore, successful deployment of Big Data technologies needs to be accompanied by the development of internal skills allowing the analysis and modelling of complex phenomena, which is essential to the development of a data-driven approach to decision making within local governments. However, local governments tend to lack these skills, and this gap may be exacerbated by high turnover in the sector. All this, in addition to the sector’s fragmentation in terms of IT provision, reinforces the structural silos that prevent local authorities from sharing and exploiting their data.

Ed.: And do you think these big data techniques will just sort-of seep in to local government, or that there will need to be a proper step-change in terms of skills and attitudes?

Fola / Vania: The benefits of data-driven analysis are being increasingly accepted. Whilst the techniques used might seem to be steadily accepted by local governments, in order to make a real and lasting improvement public bodies should ideally have a big data strategy in place to determine how they will use the data they have available to them. Attitudes can take time to change and the provision of information can help people become more willing to use Big Data in their work.

Ed.: I suppose one solution might for local councils to buy in the services of third-party specialist “big data for local government” providers, rather than trying to develop in-house capacity: do these providers exist? I imagine local government might have data that would be attractive to commercial companies, maybe as a profit-sharing data partnership?

Fola / Vania: The truth is that such providers do exist, and they always charge local governments. What is underestimated is the role that data centres can play in this arena. We are members of the Economic and Social Research Council-funded Business and Local Government Data Research Centre for Smart Analytics. This centre helps local councils use their big data better, by collating data and performing analysis that is of use to them. The centre also provides training to public officials, giving them tools to understand and use data better. It is a collaboration between the Universities of Essex, Kent and East Anglia and the London School of Economics, in which academics work closely with public officials to come up with solutions to problems facing local areas. In addition, commercial companies are interested in working with local government data. Working with third-party organisations is a good way to ease into the process of using Big Data solutions without having to make huge changes to one’s organisation.

Ed.: Finally — is there anything that central Government can do (assuming it isn’t already 100% occupied with Brexit) to help local governments develop their data analytic capacity?

Fola / Vania: Central government influences the environment in which local governments operate. While local councils make decisions over things such as how data is stored, central government can assist by removing some of the previously mentioned barriers to data usage. For example, government cuts are excessive and are making the sector very volatile, so financial help would be useful in this area. Moreover, data access and transfer are made easier by uniform data storage protocols. In addition, the public will have more confidence in providing data if there is transparency in its collection, usage and provision. Guidelines for the use of sensitive data should be agreed upon and publicised in order to improve the quality of the work. Central government can also help change the general culture of local governments and attitudes towards Big Data. For Big Data to work well for all, individuals, companies, local governments and central government should be well informed about the issues and able to effect change.

Read the full article: Malomo, F. and Sena, V. (2017) Data Intelligence for Local Government? Assessing the Benefits and Barriers to Use of Big Data in the Public Sector. Policy & Internet 9 (1) DOI: 10.1002/poi3.141.


Fola Malomo and Vania Sena were talking to blog editor David Sutcliffe.

What explains variation in online political engagement? https://ensr.oii.ox.ac.uk/what-explains-variation-in-online-political-engagement/ Wed, 21 Jun 2017 07:05:48 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4204
Sweden is a leader in terms of digitalization, but poorer municipalities struggle to find the resources to develop digital forms of politics. Image: Stockholm by Peter Tandlund (Flickr CC BY-NC-ND 2.0)

While much of the modern political process is now carried out digitally, ICTs have yet to bring democracies to their full utopian ideal. The drivers of involvement in digital politics from an individual perspective are well studied, but less attention has been paid to the supply-side of online engagement in politics. In his Policy & Internet article “Inequality in Local Digital Politics: How Different Preconditions for Citizen Engagement Can Be Explained,” Gustav Lidén examines the supply of channels for digital politics distributed by Swedish municipalities, in order to understand the drivers of variation in local online engagement.

He finds a positive trajectory for digital politics in Swedish municipalities, but with significant variation between municipalities when it comes to opportunities for engagement in local politics via their websites. These patterns are explained primarily by population size (digital politics is costly, and larger societies are probably better able to carry these costs), but also by economic conditions and education levels. He also finds that a lack of policies and unenthusiastic politicians create poor conditions for development, verifying previous findings that without citizen demand — and ambitious politicians — successful provision of channels for digital politics will be hard to achieve.

We caught up with Gustav to discuss his findings:

Ed.: I guess there must be a huge literature (also in development studies) on the interactions between connectivity, education, the economy, and supply and demand for digital government: and what the influencers are in each of these relationships. Not to mention causality.. I’m guessing “everything is important, but nothing is clear”: is that fair? And do you think any “general principles” explaining demand and supply of electronic government / democracy could ever be established, if they haven’t already?

Gustav: Although the literature in this field is becoming vast, the subfield that I am primarily engaged in, that is, the conditions for digital policy at the subnational level, has only recently attracted greater numbers of scholars. Even if predictors of these phenomena can be highly dependent on context, there are some circumstances that we can now regard as the ‘usual suspects’. Not surprisingly, resources of both economic and human capital appear to be important, irrespective of the empirical case. Population size also seems to be a key determinant, one that can influence these kinds of resources.

In terms of causality, few studies that I am familiar with have succeeded in examining the interplay of both demand for and supply of digital forms of politics. In my article I try to get closer to the causal chain by examining both structural predictors as well as adding qualitative material from two cases. This makes it possible to establish better precision on causal chains since it enables judgements on how structural conditions influence key stakeholders.

Ed.: You say government-citizen interactions in Sweden “are to a larger extent digital in larger and better-off societies, while ‘analog’ methods prevail in smaller and poorer ones.” Does it particularly matter whether things are digital or analog at municipal level: as long as they all have equal access to national-level things?

Gustav: I would say so, yes. However, this could vary in relation to the responsibilities of municipalities among different countries. The municipal sector in Sweden is significant. Its general costs represent about one quarter of the country’s GDP and the sector is responsible for important parts of the welfare sector. In addition to this, municipalities also represent the most natural arena for political engagement — the typical political career starts off in the local council. Great variation in digital politics among municipalities is therefore problematic — there is a risk of inequality between municipalities if citizens from one municipality face greater possibilities for information and participation while those residing in another are more restrained.

Ed.: Sweden has areas of very low population density: are paper / telephone channels cheaper for municipalities to deliver in these areas, or might that just be an excuse for any lack of enthusiasm? i.e. what sorts of geographical constraints does Sweden face?

Gustav: This is a general problem for a large proportion of Swedish municipalities. Thanks to government efforts, work on assuring high-speed internet connections (including in more sparsely populated areas) is under way. Yet in recent research, the importance of fast internet access for municipalities’ work with digital politics has been quite ambiguous. My guess, however, is that once the infrastructure is in place it will, sooner or later, be impossible for municipalities to refrain from working with more digital forms of politics.

Ed.: I guess a cliche of the Swedes (correct me if I’m wrong!) is that despite the welfare state / tradition of tolerance, they’re not particularly social — making it difficult, for example, for non-Swedes to integrate. How far do you think cultural / societal factors play a role in attempts to create “digital community,” in Sweden, or elsewhere?

Gustav: This cliche is perhaps most commonly related to the Swedish countryside. However, the case studies in my article illustrate a contrary image. Take the municipality of Gagnef, one of my two cases, as an example: informants there describe a vibrant civil society, with associations representing a great variety of sectors. One interesting finding, though, is that local engagement is channeled through these traditional forms and not particularly through digital media. Still, from a global perspective, Sweden is rightfully described as an international leader in terms of digitalization. This is perhaps most visible in the more urban parts of the country, even if there are many good examples from the countryside in which the technology is one way to counteract great distances and low population density.

Ed.: And what is the role of the central government in all this? i.e. should they (could they? do they?) provide encouragement and expertise in providing local-level digital services, particularly for the smaller and poorer districts?

Gustav: Due to the considerable autonomy of the municipalities, the government has not regulated how they work with this issue. However, it has encouraged and supported parts of this work, primarily when it comes to investment in technological infrastructure. My research does show that smaller and poorer municipalities have a hard time finding the resources for developing digital forms of politics. Local political leaders find it hard to prioritize these issues when there is an almost constant need for more resources for schools and elderly care. But this is hardly unique to Sweden. In a study of the local level in the US, Norris and Reddick show how lack of financial resources is the number one constraint on the development of digital services. I think that government regulation (i.e. forcing municipalities to provide specific digital channels) could lower inequalities between municipalities, but it would be unthinkable without additional government funding.

Ed.: Finally: do you see it as “inevitable” that everyone will eventually be online, or could pockets of analog government-citizen interaction persist basically indefinitely?

Gustav: Something of a countermovement opposing the digital society appears to exist in several societies. In general, I think we need to find a more balanced way to describe the consequences of digitalization. Hopefully, most people see both the value and the downsides of a digital society, but the debate tends to be dominated either by naïve optimists or by complete pessimists. Policy makers, though, need to start thinking about the inequalities related to this technology, and to pay more attention to its risks.

Read the full article: Lidén, G. (2016) Inequality in Local Digital Politics: How Different Preconditions for Citizen Engagement Can Be Explained. Policy & Internet 8 (3) doi:10.1002/poi3.122.


Gustav Lidén was talking to blog editor David Sutcliffe.

See his websites: https://www.miun.se/Personal/gustavliden/ and http://gustavliden.blogspot.se/

Could Voting Advice Applications force politicians to keep their manifesto promises? https://ensr.oii.ox.ac.uk/could-voting-advice-applications-force-politicians-to-keep-their-manifesto-promises/ Mon, 12 Jun 2017 09:00:43 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4199 In many countries, Voting Advice Applications (VAAs) have become an almost indispensable part of the electoral process, playing an important role in the campaigning activities of parties and candidates, forming an essential element of media coverage of elections, and being widely used by citizens.

These applications are based on the idea of issue and proximity voting — the parties and candidates recommended by VAAs are those with the highest number of matching positions on a number of political questions and issues. Many of these questions are much more specific and detailed than party programs and electoral platforms, and show the voters exactly what the party or candidates stand for and how they will vote in parliament once elected. In his Policy & Internet article “Do VAAs Encourage Issue Voting and Promissory Representation? Evidence From the Swiss Smartvote,” Andreas Ladner examines the extent to which VAAs alter the way voters perceive the meaning of elections, and encourage them to hold politicians to account for election promises.
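As a rough illustration of this proximity logic (a sketch only; real VAAs such as smartvote use richer question formats, weighting, and scoring than this), the recommendation reduces to measuring the distance between a user's answers and each candidate's recorded positions:

```python
# Minimal sketch of VAA-style proximity matching. The issues, candidates,
# positions, and -2..+2 agreement scale are all invented for illustration.
ISSUES = ["raise_taxes", "expand_childcare", "stricter_immigration"]

CANDIDATES = {
    "Candidate A": {"raise_taxes": 2, "expand_childcare": 1, "stricter_immigration": -2},
    "Candidate B": {"raise_taxes": -1, "expand_childcare": -2, "stricter_immigration": 2},
}

def match_score(user, candidate):
    """Higher is closer: negative sum of absolute distances across issues."""
    return -sum(abs(user[i] - candidate[i]) for i in ISSUES)

user_answers = {"raise_taxes": 1, "expand_childcare": 2, "stricter_immigration": -1}
ranking = sorted(CANDIDATES, key=lambda c: match_score(user_answers, CANDIDATES[c]), reverse=True)
print(ranking)  # best-matching candidate first
```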

His main hypothesis is that VAAs lead to “promissory representation” — where parties and candidates are elected for their promises and sanctioned by the electorate if they don’t keep them. He suggests that as these tools become more popular, the “delegate model” is likely to increase in popularity: i.e. one in which politicians are regarded as delegates voted into parliament to keep their promises, rather than being given a free mandate to act as they see fit (the “trustee model”).

We caught up with Andreas to discuss his findings:

Ed.: You found that issue-voters were more likely (than other voters) to say they would sanction a politician who broke their election promises. But also that issue voters are less politically engaged. So is this maybe a bit moot: i.e. if the people most likely to force the “delegate model” system are the least likely to enforce it?

Andreas: It perhaps looks a bit moot at first, but consider what happens if the less engaged are given the possibility to sanction politicians more easily, or by default. Sanctioning a politician who breaks an election promise is not per se a good thing: it depends on the reason why he or she broke it, on the situation, and on the promise. VAAs can easily provide information on the extent to which candidates keep their promises — and then it gets very easy to sanction them simply for that, without taking other arguments into consideration.

Ed.: Do voting advice applications work best in complex, multi-party political systems? (I’m not sure anyone would need one to distinguish between Trump / Clinton, for example?)

Andreas: Yes, I believe that in very complex systems – like the Swiss case, where voters not only vote for parties but also for up to 35 different candidates – VAAs are particularly useful, since they help to process a huge amount of information. If the choice is only between two parties or two candidates which are completely different, then VAAs are less helpful.

Ed.: I guess the recent elections / referendum I am most familiar with (US, UK, France) have been particularly lurid and nasty: but I guess VAAs rely on a certain quiet rationality to work as intended? How do you see your Swiss results (and Swiss elections, generally) comparing with these examples? Do VAAs not just get lost in the noise?

Andreas: The idea of VAAs is to help voters make better informed choices. This is, of course, opposed to decisions based on emotions. In Switzerland, elections are not of the utmost importance, due to specific features of our political system such as direct democracy and power sharing, but voters seem to appreciate the information provided by smartvote. Almost 20% of voters cast their vote after having consulted the website.

Ed.: Macron is a recent example of someone who clearly sought (and received) a general mandate, rather than presenting a detailed platform of promises. Is that unusual? He was criticised in his campaign for being “too vague,” but it clearly worked for him. What use are manifesto pledges in politics — as opposed to simply making clear to the electorate where you stand on the political spectrum?

Andreas: Good VAAs combine electoral promises on concrete issues as well as more general political positions. Voters can base their decisions on either of them, or on a combination of both of them. I am not arguing in favour of one or the other, but they clearly have different implications. The former is closer to the delegate model, the latter to the trustee model. I think good VAAs should make the differences clear and should even allow the voters to choose.

Ed.: I guess Trump is a contrasting example of someone whose campaign was all about promises (while also seeking a clear mandate to “make America great again”), but who has lied, and broken these (impossible) promises seemingly faster than people can keep track of them. Do you think his supporters care, though?

Andreas: His promises were too far away from what he could possibly keep. Quite a few of his voters, I believe, do not want them to be fully realized, but rather want the US to move a bit more in this direction.

Ed.: I suppose another example of an extremely successful quasi-pledge was the Brexit campaign’s obviously meaningless — but hugely successful — “We send the EU £350 million a week; let’s fund our NHS instead.” Not to sound depressing, but do promises actually mean anything? Is it the candidate / issue that matters (and the media response to that), or the actual pledges?

Andreas: I agree that the media play an important role, and not always in the direction they intend. I do not think that it was the £350 million a week which made the difference. It was much more a general discontent, and a situation which was not sufficiently explained and legitimized, that led to this unexpected decision. If you lose support for your policy, then it gets much easier for your opponents. It is difficult to imagine that you can build a majority on nothing.

Ed.: I’ve read all the articles in the Policy & Internet special issue on VAAs: one thing that struck me is that there’s lots of incomplete data, e.g. no knowledge of how people actually voted in the end (or would vote in future). What are the strengths and weaknesses of VAAs as a data source for political research?

Andreas: The quality of the data varies between countries and voting systems. We have a self-selection bias in the use of VAAs, and often also in the surveys conducted among the users. In general we don’t know how they voted, and we have to believe what they tell us. In many respects the data does not differ much from what we get from classic electoral studies, especially since they also encounter difficulties in reaching a representative sample. VAAs usually have much larger Ns on the side of the voters, generate more information about their political positions and preferences, and provide very interesting information about the candidates and parties.

Read the full article: Ladner, A. (2016) Do VAAs Encourage Issue Voting and Promissory Representation? Evidence From the Swiss Smartvote. Policy & Internet 8 (4). DOI: 10.1002/poi3.137.


Andreas Ladner was talking to blog editor David Sutcliffe.

Social media and the battle for perceptions of the U.S.–Mexico border https://ensr.oii.ox.ac.uk/social-media-and-the-battle-for-perceptions-of-the-u-s-mexico-border/ Wed, 07 Jun 2017 07:33:34 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4195 The US-Mexican border region is home to approximately 12 million people, and is the most-crossed international border in the world. Unlike the current physical border, the image people hold of “the border” is not firmly established, and can be modified. One way is via narratives (or stories), which are a powerful tool for gaining support for public policies. Politicians’ narratives about the border have historically been perpetuated by the traditional media, particularly when this allows them to publish sensational and attention grabbing news stories.

However, new social media, including YouTube, provide opportunities for less-mainstream narratives of cooperation. In their Policy & Internet article “Do New Media Support New Policy Narratives? The Social Construction of the U.S.–Mexico Border on YouTube”, Donna L. Lybecker, Mark K. McBeth, Maria A. Husmann, and Nicholas Pelikan find that YouTube videos about the U.S.–Mexico border focus (perhaps unsurprisingly) on mainstream, divisive issues such as security and violence, immigration, and drugs. However, the videos appear to construct more favourable perspectives of the border region than traditional media, with around half constructing a sympathetic view of the border, and the people associated with it.

The common perceptions of the border generally take two distinct forms. One holds the U.S.–Mexico border to be the location of a legal flow of economic trade worth $300 billion each year, a line which millions of people legally cross annually, the frontier of 100 years of peaceful coexistence between two countries, and the point of integration for the U.S.–Mexico relationship. An alternative perspective (particularly common since 9/11) focuses less on economic trade and legal crossing and more on undocumented immigration, violence and drug wars, and a U.S.-centric view of “us versus them”.

In order to garner public support for their “solutions” to these issues, politicians often define the border using one of these perspectives. Acceptance of the first view might well allow policymakers to find cooperative solutions to joint problems. Acceptance of the second creates a policy problem that is more value-laden than empirically based and that creates distrust and polarization among stakeholders and between the countries. The U.S.–Mexico border is clearly a complex region encompassing both positives and negatives — but understanding these narratives could have a real-world impact on policy along the border; possibly creating the greater cooperation we need to solve many of the urgent problems faced by border communities.

We caught up with the authors to discuss their findings:

Ed.: Who created the videos you studied: were they created by the public, or were they also produced by perhaps more progressive media outlets? i.e. were you able to disentangle the effect of the media in terms of these narratives?

Mark / Donna: For this study, we selected YouTube videos using the “relevance” filter: the videos were thus ordered by how closely they related to our topic and by how frequently they were viewed. With this selection method we captured videos produced by a variety of sources: some containing embedded material from mainstream media, others created by non-profit groups and public television, and still others produced by interested citizens or private groups. The non-profit and media groups more often discussed the beneficial elements of the border (trade, shared environmental protection, etc.), while individual citizens or groups tended to post the more emotional and narrative-driven videos, which were more likely to construct border residents as non-deserving.

Ed.: How influential do you think these videos are? In a world of extreme media concentration (where even the US President seems to get his news from Fox headlines and the 42 people he follows on Twitter) .. how significant is “home grown” content; which after all may have better, or at least more locally-representative, information than certain parts of the national media?

Mark / Donna: Today’s extreme media world supplies us with constant and fast-moving news. YouTube is part of the media mix, frequently mentioned as the second largest search engine on the web, and as such is influential. Media sources report that a large number of diverse people use YouTube, thus the videos encompass a broad swath of international, domestic and local issues. That said, as with most news sources today, some individuals gravitate to the stories that represent their point of view, and YouTube makes it possible for individuals to do just this. In other words, if a person perceives the US-Mexico border as a horrible place, they can use key words to search YouTube videos that represent that point of view.

However, we believe YouTube to be more influential than some other sources precisely because it encompasses diversity: even when searching using specific terms, a few videos in the search results will likely provide a different point of view. Furthermore, we did find some local, “home grown” content included in search results, again adding to the diversity presented to the individual watching YouTube, although we found less of it than initially expected. Overall, there is selectivity bias with YouTube, as with any type of media, but YouTube’s greater diversity of postings and viewers, and its broad distribution, may increase both exposure and influence.

Ed.: Your article was published pre-Trump. How do you think things might have changed post-election, particularly given the uncertainty over “the wall“ and NAFTA — and Trump’s rather strident narratives about each? Is it still a case of “negative traditional media; equivocal social media”?

Mark / Donna: Our guess is that anti-border forces are more prominent on YouTube since Trump’s election and inauguration. Unless there is an organized effort to counter discussion of “the wall” and produce positive constructions of the border, we expect that YouTube videos posted over the past few months lean more toward non-deserving constructions.

Ed.: How significant do you think social media is for news and politics generally, i.e. its influence in this information environment — compared with (say) the mainstream press and party-machines? I guess Trump’s disintermediated tweeting might have turned a few assumptions on their heads, in terms of the relation between news, social media and politics? Or is the media always going to be bigger than Trump / the President?

Mark / Donna: Social media, including YouTube and Twitter, is interactive and thus allows anyone to bypass traditional institutions. President Trump can bypass institutions of government, media institutions, even his own political party and staff and communicate directly with people via Twitter. Of course, there are advantages to that, including hearing views that differ from the “official lines,” but there are also pitfalls, such as minimized editing of comments.

We believe people see both the strengths and the weakness with social media, and thus often read news from both traditional media sources and social media. Traditional media is still powerful and connected to traditional institutions, thus, remains a substantial source of information for many people — although social media numbers are climbing, particularly with the President’s use of Twitter. Overall, both types of media influence politics, although we do not expect future presidents will necessarily emulate President Trump’s use of social media.

Ed.: Another thing we hear a lot about now is “filter bubbles” (and whether or not they’re a thing). YouTube filters viewing suggestions according to what you watch, but still presents a vast range of both good and mad content: how significant do you think YouTube (and the explosion of smartphone video) content is in today’s information / media environment? (And are filter bubbles really a thing..?)

Mark / Donna: Yeah, we think that the filter bubbles are real. Again, we think that social media has a lot of potential to provide new information to people (and still does); although currently social media is falling into the same selectivity bias that characterizes the traditional media. We encourage our students to use online technology to seek out diverse sources; sources that both mirror their opinions and that oppose their opinions. People in the US can access diverse sources on a daily basis, but they have to be willing to seek out perspectives that differ from their own view, perspectives other than their favoured news source.

The key is getting individuals to want to challenge themselves and to be open to cognitive dissonance as they read or watch material that differs from their belief systems. Technology is advanced but humans still suffer the cognitive limitations from which they have always suffered. The political system in the US, and likely other places, encourages it. The key is for individuals to be willing to listen to views unlike their own.

Read the full article: Lybecker, D.L., McBeth, M.K., Husmann, M.A, and Pelikan, N. (2015) Do New Media Support New Policy Narratives? The Social Construction of the U.S.–Mexico Border on YouTube. Policy & Internet 7 (4). DOI: 10.1002/poi3.94.


Mark McBeth and Donna Lybecker were talking to blog editor David Sutcliffe.

Using Open Government Data to predict sense of local community https://ensr.oii.ox.ac.uk/using-open-government-data-to-predict-sense-of-local-community/ Tue, 30 May 2017 09:31:59 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4137 Community-based approaches are widely employed in programmes that monitor and promote socioeconomic development. And building the “capacity” of a community — i.e. the ability of people to act individually or collectively to benefit the community — is key to these approaches. The various definitions of community capacity all agree that it comprises a number of dimensions — including opportunities and skills development, resource mobilization, leadership, participatory decision making, etc. — all of which can be measured in order to understand and monitor the implementation of community-based policy. However, measuring these dimensions (typically using surveys) is time consuming and expensive, and the absence of such measurements is reflected in a greater focus in the literature on describing the process of community capacity building, rather than on describing how it’s actually measured.

A cheaper way to measure these dimensions, for example by applying predictive algorithms to existing secondary data like socioeconomic characteristics, socio-demographics, and condition of housing stock, would certainly help policy makers gain a better understanding of local communities. In their Policy & Internet article “Predicting Sense of Community and Participation by Applying Machine Learning to Open Government Data”, Alessandro Piscopo, Ronald Siebes, and Lynda Hardman employ a machine-learning technique (“Random Forests”) to evaluate an estimate of community capacity derived from open government data, and determine the most important predictive variables.

The resulting models were found to be more accurate than those based on traditional statistics, demonstrating the feasibility of the Random Forests technique for this purpose: it copes well with small and nonlinear data sets, and it indicates how much each variable in the dataset contributes to predictive accuracy.
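
To make the approach concrete, the sketch below shows the general Random Forests workflow the article describes — predicting a survey-derived measure from open data and ranking the predictors — using scikit-learn. This is an illustration only: the file and column names are hypothetical, not the authors’ actual dataset or pipeline.

```python
# Minimal sketch (not the authors' code): predict a survey-derived
# community measure from open government variables with a Random
# Forest, then rank the variables by their contribution.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical file: open data joined to survey scores per neighbourhood.
df = pd.read_csv("neighbourhood_data.csv")
X = df[["median_age", "pct_intermediate_occupation",
        "ethnic_fragmentation", "store_accessibility"]]
y = df["sense_of_community"]  # survey-based target

model = RandomForestRegressor(n_estimators=500, random_state=0)
print("CV R^2:", cross_val_score(model, X, y, cv=5, scoring="r2").mean())

model.fit(X, y)
for name, imp in sorted(zip(X.columns, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```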

We caught up with the authors to discuss their findings:

Ed.: Just briefly: how did you do the study? Were you essentially trying to find which combinations of variables available in Open Government Data predicted “sense of community and participation” as already measured by surveys?

Authors: Our research stemmed from an observation about the measures of social characteristics available. These are generally obtained through expensive surveys, so we asked ourselves “how could we generate them in a more economical and efficient way?” In recent years, the UK government has openly released a wealth of datasets, which could be used for purposes other than those for which they had been created — in our case, providing measures of sense of community and participation. We started our work by consulting papers from the social science domain, to understand which factors were associated with sense of community and participation. Afterwards, we matched the factors most commonly mentioned in the literature with “actual” variables found in UK Open Government Data sources.

Ed.: You say “the most determinant variables in our models were only partially in agreement with the most influential factors for sense of community and participation according to the social science literature” — which were they, and how do you account for the discrepancy?

Authors: We observed two types of discrepancy. The first was the case of variables that had roughly the same level of importance in our models and in others previously developed, but with a different rank. For instance, median age was by far the most determinant variable in our model for sense of community. This variable was not ranked among the top five variables in the literature, although it was listed among the significant variables.

The second type of discrepancy concerned variables which were highly important in our models but not influential in others, or vice versa. An example is the socioeconomic status of residents of a neighbourhood, which appeared to have no effect on participation in prior studies, but was the top-ranking variable in our participation model (operationalised as the number of people in intermediate occupation).

We believe that there are multiple explanations for these phenomena, all of which deserve further investigation. First, highly determinant predictors in conventional statistical models have been proven to have little or no importance in ensemble algorithms, such as the one we used [1]. Second, factors influencing sense of community and civic participation may vary according to the context (e.g. different countries; see [3] about sense of community in China for an example). Finally, different methods may measure different aspects related to a socially meaningful concept, leading to different partial explanations.
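
To illustrate the first point, a hedged sketch of how such a comparison could be made: standardized OLS coefficients set against Random Forest permutation importances on the same data. This is hypothetical data with hypothetical variable names, not the study’s analysis; the point is only that the two rankings need not agree.

```python
# Hedged sketch: compare OLS coefficients with Random Forest permutation
# importances on the same (hypothetical) data. Rankings often diverge,
# which is the discrepancy discussed above.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import scale

df = pd.read_csv("neighbourhood_data.csv")
cols = ["median_age", "socioeconomic_status", "ethnic_fragmentation"]
X = scale(df[cols])  # standardize so OLS coefficients are comparable
y = df["participation"]

ols = LinearRegression().fit(X, y)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
perm = permutation_importance(rf, X, y, n_repeats=30, random_state=0)

for i, name in enumerate(cols):
    print(f"{name}: OLS coef {ols.coef_[i]:+.2f}, "
          f"RF permutation importance {perm.importances_mean[i]:.3f}")
```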

Ed.: What were the predictors for “lack of community” — i.e. what would a terrible community look like, according to your models?

Authors: Our work did not really focus on finding “good” and “bad” communities. However, we did notice some characteristics that were typical of communities with low sense of community or participation in our dataset. For example, sense of community had a strong negative correlation with work and store accessibility, with ethnic fragmentation, and with the number of people living in the UK for less than 10 years. On the other hand, it was positively correlated with the age of residents. Participation, instead, was negatively correlated with the household composition and occupation of a neighbourhood’s residents, whilst it was positively related to their level of education and weekly hours worked. Of course, these data would need to be interpreted by a social scientist, in order to properly contextualise and understand them.

Ed.: Do you see these techniques as being more useful to highlight issues and encourage discussion, or actually being used in planning? For example, I can see it might raise issues if machine-learning models “proved” that presence of immigrant populations, or neighbourhoods of mixed economic or ethnic backgrounds, were less cohesive than homogeneous ones (not sure if they are?).

Authors: How machine learning algorithms work is not always clear, even to specialists, and this has led some people to describe them as “black boxes”. We believe that models like those we developed can be extremely useful for challenging existing perspectives based on past data in the social science literature, e.g. they can be used to confirm or reject previous measures in the literature. Additionally, machine learning models can serve as indicators that can be consulted more frequently: they are cheaper to produce, so we can use them more often and see whether policies have actually worked.

Ed.: It’s great that existing data (in this case, Open Government Data) can be used, rather than collecting new data from scratch. In practice, how easy is it to repurpose this data and build models with it — including in countries where this data may be more difficult to access? And were there any variables you were interested in that you couldn’t access?

Authors: Identifying relevant datasets and getting hold of them was a lengthy process, even in the UK, where plenty of work has been done to make government data openly available. We had to retrieve many datasets from the pages of the government department that produced them, such as the Department for Work and Pensions or the Home Office, because we could not find them through the portal data.gov.uk. In addition, the Office for National Statistics (ONS) website was another very useful resource, which we used to get census data.

The hurdles encountered in gathering the data led us to recommend the development of methods able to retrieve datasets automatically from a list of sources and select the ones that provide the best results for predictive models of social dimensions.

Ed.: The OII has done some similar work, estimating the local geography of Internet use across Britain, combining survey and national census data. The researchers said the small-area estimation technique wasn’t being used routinely in government, despite its power. What do you think of their work and discussion, in relation to your own?

Authors: One of the issues we were faced with in our research was the absence of nationwide data about sense of community and participation at a neighbourhood level. The small area estimation approach used by Blank et al., 2017 [2] could provide a suitable solution to the issue. However, the estimates produced by their approach understandably incorporate a certain amount of error. In order to use estimated values as training data for predictive models of community measures it would be key to understand how this error would be propagated to the predicted values.

[1] Berk, R. (2006) An Introduction to Ensemble Methods for Data Analysis. Sociological Methods & Research 34 (3): 263–295.
[2] Blank, G., Graham, M., and Calvino, C. (2017) Local Geographies of Digital Inequality. Social Science Computer Review. DOI: 10.1177/0894439317693332.
[3] Xu, Q., Perkins, D.D., and Chow, J.C.C. (2010) Sense of community, neighboring, and social capital as predictors of local political participation in China. American Journal of Community Psychology 45 (3–4): 259–271.

Read the full article: Piscopo, A., Siebes, R., and Hardman, L. (2017) Predicting Sense of Community and Participation by Applying Machine Learning to Open Government Data. Policy & Internet 9 (1). DOI: 10.1002/poi3.145.


Alessandro Piscopo, Ronald Siebes, and Lynda Hardman were talking to blog editor David Sutcliffe.

Should adverts for social casino games be covered by gambling regulations? https://ensr.oii.ox.ac.uk/should-adverts-for-social-casino-games-be-covered-by-gambling-regulations/ Wed, 24 May 2017 07:05:19 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4108 Social casino gaming, which simulates gambling games on a social platform such as Facebook, is a nascent but rapidly growing industry — social casino game revenues grew 97 percent between 2012 and 2013, reaching a US$3.5 billion market size by the end of 2015. Unlike gambling played for real money, social casino games generally have no monetary prizes and are free-to-play, although they may include some optional monetized features. The size of the market and users’ demonstrated interest in gambling-themed activities mean that social casino gamers are an attractive market for many gambling operators, and several large international gambling companies have merged with social casino game operators.

Some operators consider the games to be a source of additional revenue in jurisdictions where online gambling is largely illegal, or a way to attract new customers to a land-based gambling venue. Hybrid models are also emerging, with the potential for tangible rewards for playing social casino games. This merging of gaming and gambling means that many previously established boundaries are becoming blurred, and at many points, the two are indistinguishable.

However, content analysis of game content and advertising can help researchers, industry, and policymakers better understand how the two entertainment forms overlap. In their Policy & Internet article “Gambling Games on Social Platforms: How Do Advertisements for Social Casino Games Target Young Adults?”, Brett Abarbanel, Sally M. Gainsbury, Daniel King, Nerilee Hing, and Paul H. Delfabbro undertake a content analysis of 115 social casino gaming advertisements captured by young adults during their regular Internet use. They find that the advertisements typically feature imagery likely to appeal to young adults, with message themes that glamorize and normalize gambling. Notably, nearly 90 percent of the advertisements contained no responsible or problem gambling language, despite the gambling-like content.

Gambling advertisements currently face much stricter restrictions on exposure and distribution than do social casino game advertisements, despite the latter containing much gambling-themed content designed to attract consumers. Given the receptivity of young people to messages that encourage gambling, the authors recommend that gaming companies embrace corporate social responsibility standards, including adding warning messages to advertisements for gambling-themed games. They hope that their qualitative research may complement existing quantitative findings, and facilitate discussions about appropriate policies for advertisements for social casino games and other gambling-themed games.

We caught up with Brett to discuss their findings:

Ed.: You say there are no policies related to the advertising of social casino games — why is this? And do you think this will change?

Brett: Social casino games are regulated under general consumer regulations, but there are no specific regulations for these types of games, and they do not fall under gambling regulation. Although several gambling regulatory bodies have considered these games, they are not classed as gambling activities because they do not require payment to play and their prizes have no monetary value. Where the games include branding for gambling companies or are considered advertising, they may fall under relevant legislation. Currently it is up to individual consumers to consider whether the games are suitable, including parents considering their children’s use of the games.

Ed.: Is there work on whether these sorts of games actually encourage gambling behaviour? As opposed to gambling behaviour simply pre-existing — i.e. people are either gamblers or not, susceptible or not.

Brett: We have conducted previous research showing that almost one-fifth of adults who played social casino games had gambled for money as a direct result of these games. Research also found that two-thirds of adolescents who had paid money to play social casino games had gambled directly as a result of these games. This builds on other international research suggesting that there is a pathway between games and gambling. For some people, the games are perceived to be a way to ‘try out’ or practice gambling without money, and most are motivated to gamble due to the possibility of winning real money. For some people with gambling problems, the games can trigger the urge to gamble, although for others, the games are used as a way to avoid gambling in an attempt to cut back. The pathway is complicated and needs further specific research, including longitudinal studies.

Ed.: Possibly a stupid question: you say social games are a huge and booming market, despite being basically free to play. Where does the revenue come from?

Brett: Not a stupid question at all! When something is free, of course it makes sense to question where the money comes from. The revenue in these business models comes from advertisements and players. The advertisement revenue model is similar to other revenue models, but the player revenue model, which is based largely on micropayments, is a major component of how these games make money. Players can typically play free, and micropayments are voluntary. However, when they run out of free chips, players have to wait to continue to play, or they can purchase additional chips.

The micropayments can also improve the game experience: for example, to obtain in-game items, get a temporary boost, add lives/strength/health to an avatar or game session, or unlock the next stage in the game. In social casino games, for example, micropayments can be made to acquire more virtual chips with which to play the slot game. Our research suggests that only a small fraction of the player base actually makes micropayments, and a smaller fraction of these pay very large amounts. Since many of these games are free to play, but one can pay to advance through the game in certain ways, they have colloquially been referred to as “freemium” games.

Ed.: I guess social media (like Facebook) are a gift to online gambling companies: i.e. being able to target (and A/B test) their adverts to particular population segments? Are there any studies on the intersection of social media, gambling and behavioural data / economics?

Brett: There is a reasonable cross-over between social casino game players and gamblers: our Australian research found that 25% of Internet gamblers and 5% of land-based gamblers used social casino games, and US studies show around one-third of social casino gamers visit land-based casinos. Many of the most popular and successful social casino games are owned by companies that also operate gambling, in venues and online. Some casino companies offer social casino games to continue to engage with customers when they are not in the venue, and may offer prizes that can be redeemed in venues. Games may also allow gambling companies to test how popular games will be before they put them in venues. That said, as most players do not pay to play social casino games, they may engage with these differently from gambling products.

Ed.: We’ve seen (with the “fake news” debate) social media companies claiming to simply be a conduit to others’ content, not content providers themselves. What do they say in terms of these social games: I’m assuming they would either claim that they aren’t gambling, or that they aren’t responsible for what people use social media for?

Brett: We don’t want to speak for the social media companies themselves, and they appear to leave quite a bit up to the game developers. Advertising standards have become more lax on gambling games. The example we give in our article is Google, which had a strict policy against advertisements for gambling-related content in the Google Play store but in February 2015 began beta testing advertisements for social casino games. In some markets where online gambling is restricted, online gambling sites offer ‘free’ social casino games that link to real money sites as a way to reach these markets.

Ed.: I guess this is just another example of the increasingly attention-demanding, seductive, sexualised, individually targeted, ubiquitous, behaviourally attuned, monetised environment we (and young children) find ourselves in. Do you think we should be paying attention to this trend (e.g. noticing the close link between social gaming and gambling) or do you think we’ll all just muddle along as we’ve always done? Is this disturbing, or simply people doing what they enjoy doing?

Brett: We should certainly be paying attention to this trend, but we don’t think the activity of social casino games is disturbing. A big part of the goal here is awareness, followed by conscious action. We would encourage companies to take more care in controlling who accesses their games and to whom their advertisements are targeted. As you note, David, advertising today is in such a highly targeted, specific state that we should, theoretically, be able to avoid marketing games to young kids. Companies should also certainly be mindful of the potential effect of cartoon games. We don’t automatically assign a sneaky, underhanded motive to the industry, but at the same time there is a percentage of the population that is at risk for gambling problems, and we don’t want to exacerbate the situation by inadvertently advertising to young people, who are more susceptible to this type of messaging.

Read the full article: Abarbanel, B., Gainsbury, S.M., King, D., Hing, N., and Delfabbro, P.H. (2017) Gambling Games on Social Platforms: How Do Advertisements for Social Casino Games Target Young Adults? Policy & Internet 9 (2). DOI: 10.1002/poi3.135.


Brett Abarbanel was talking to blog editor David Sutcliffe.

How useful are volunteer crisis-mappers in a humanitarian crisis? https://ensr.oii.ox.ac.uk/how-useful-are-volunteer-crisis-mappers-in-a-humanitarian-crisis/ Thu, 18 May 2017 09:11:30 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4129 User-generated content can provide a useful source of information during humanitarian crises like armed conflict or natural disasters. With the rise of interactive websites, social media, and online mapping tools, volunteer crisis mappers are now able to compile geographic data as a humanitarian crisis unfolds, allowing individuals across the world to organize as ad hoc groups to participate in data collection. Crisis mappers have created maps of earthquake damage and trapped victims, analyzed satellite imagery for signs of armed conflict, and cleaned Twitter data sets to uncover useful information about unfolding extreme weather events like typhoons.

Although these volunteers provide useful technical assistance to humanitarian efforts (e.g. when maps and records don’t exist or are lost), their lack of affiliation with “formal” actors, such as the United Nations, and the very fact that they are volunteers, leads some to view them as a dubious data source. Indeed, concerns have been raised about the quality of amateur mapping and data efforts, and the uses to which they are put. Most of these concerns assume that volunteers have no professional training. And herein lies the contradiction: by doing the work for free and of their own will, the volunteers make these efforts possible and innovative, but this is also why crisis mapping is doubted and questioned by experts.

By investigating crisis-mapping volunteers and organizations, Elizabeth Resor’s article “The Neo-Humanitarians: Assessing the Credibility of Organized Volunteer Crisis Mappers” published in Policy & Internet presents evidence of a more professional cadre of volunteers and a means to distinguish between different types of volunteer organizations. Given that these organizations now play an increasingly integrated role in humanitarian responses, it’s crucial that their differences are understood and that concerns about the volunteers are answered.

We caught up with Elizabeth to discuss her findings:

Ed.: We have seen from Citizen Science (and Wikipedia) that large crowds of non-professional volunteers can produce work of incredible value, if projects are set up right. Are the fears around non-professional crisis mappers valid? For example, is this an environment where everything “must be correct”, rather than “probably mostly correct”?

Elizabeth: Much of the fear around non-professional crisis mappers comes from a lack of understanding about who the volunteers are and why they are volunteering. As these questions are answered and professional humanitarian actors become more familiar with the concept of volunteer humanitarians, I think many of these fears are diminishing.

Due to the fast-paced and resource-constrained environments of humanitarian crises, traditional actors, like the UN, are used to working with “good enough” data, or data that are “probably mostly correct”. And as you point out, volunteers can often produce very high quality data. So when you combine these two facts, it stands to reason that volunteer crisis mappers can contribute necessary data that is most likely as good as (if not better than) the data that humanitarian actors are used to working with. Moreover, in my research I found that most of these volunteers are not amateurs in the full sense, because they come from related professional fields (such as GIS).

Ed.: I suppose one way of assuaging fears is to maybe set up an umbrella body of volunteer crisis mapping organisations, and maybe offer training opportunities and certification of output. But then I suppose you just end up as professionals. How blurry are the lines between useful-not useful / professional-amateur in crisis mapping?

Elizabeth: There is an umbrella group for volunteer organizations set up exactly for that reason! It’s called the Digital Humanitarian Network. At the time that I was researching this article, the DHN was very new and so I wasn’t able to ask if actors were more comfortable working with volunteers contacted through the DHN, but that would be an interesting issue to look into.

The two crisis mapping organizations I researched — the Standby Task Force and the GIS Corps — both offer training and some structure to volunteer work. They take very different approaches to the volunteer work — the Standby Task Force work can include very simple micro-tasks (like classifying photographs), whereas the GIS Corps generally provides quite specialised technical assistance (like GIS analysis). However, both of these kinds of tasks can produce useful and needed data in a crisis.

Ed.: Another article in the journal examined the effective take-over of a Russian crisis volunteer website by the government: by professionalising (and therefore controlling) the site and volunteers’ details, it could control who did and didn’t turn up in disaster areas, effectively keeping nonprofessionals out. How do humanitarian organisations view volunteer crisis mappers: as useful organizations to be worked with in parallel, or as something to be controlled?

Elizabeth: I have seen examples of humanitarian and international development agencies trying to lead or create crowdsourcing responses to crises (for example, USAID “Mapping to End Malaria“). I take this as a sign that these agencies understand the value in volunteer contributions — something they wouldn’t have understood without the initial examples created by those volunteers.

Still, humanitarian organizations are large bureaucracies, and even in a crisis they function as bureaucracies, while volunteer organizations take a nimble and flexible approach. This structural difference is part of the value that volunteers can offer humanitarian organizations, so I don’t believe that it would be in the best interest of the humanitarian organizations to completely co-opt or absorb the volunteer organizations.

Ed.: How does liability work? E.g. if crisis workers in a conflict zone are put in danger by their locations being revealed by well-meaning volunteers? Or mistakes being made on the ground because of incorrect data — perhaps injected by hostile actors to create confusion (thinking of our current environment of hybrid warfare..).

Elizabeth: Unfortunately, all humanitarian crises are dangerous and involve threats to “on the ground” response teams as well as affected communities. I’m not sure how liability is handled. Incorrect data or revealed locations might not be immediately traced back to the source of the problem (i.e. volunteers) and the first concern would be minimizing the harm, not penalizing the cause.

Still, this is the greatest challenge to volunteer crisis mapping that I see. Volunteers don’t want to cause more harm than good, and to do this they must understand the context of the crisis in which they are getting involved (even if it is remotely). This is where relationships with organizations “on the ground” are key. Also, while I found that most volunteers had experience related to GIS and/or data analysis, very few had experience in humanitarian work. This seems like an area where training can help volunteers understand the gravity of their work, to ensure that they take it seriously and do their best work.

Ed.: Finally, have you ever participated as a volunteer crisis mapper? And also: how do you think the phenomenon is evolving, and what do you think researchers ought to be looking at next?

Elizabeth: I haven’t participated in any active crises, although I’ve tried some of the tools and trainings to get a sense of the volunteer activities.

In terms of future research, you mentioned hybridized warfare and it would be interesting to see how this change in the location of a crisis (i.e. in online spaces as well as physical spaces) is changing the nature of volunteer responses. For example, how can many dispersed volunteers help monitor ISIS activity on YouTube and Twitter? Or are those tasks better suited for an algorithm? I would also be curious to see how the rise of isolationist politicians in Europe and the US has influenced volunteer crisis mapping. Has this caused more people to want to reach out and participate in international crises or is it making them more inward-looking? It’s certainly an interesting field to follow!

Read the full article: Resor, E. (2016) The Neo-Humanitarians: Assessing the Credibility of Organized Volunteer Crisis Mappers. Policy & Internet 8 (1). DOI: 10.1002/poi3.112.

Elizabeth Resor was talking to blog editor David Sutcliffe.

Is Left-Right still meaningful in politics? Or are we all just winners or losers of globalisation now? https://ensr.oii.ox.ac.uk/is-left-right-still-meaningful-in-politics-or-are-we-all-just-winners-or-losers-of-globalisation-now/ Tue, 16 May 2017 08:18:37 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4167 The Left–Right dimension — based on the traditional cleavage in society between capital and labor — is the most common way of conceptualizing ideological difference. But in an ever more globalized world, are the concepts of Left and Right still relevant? In recent years political scientists have increasingly come to talk of a two-dimensional politics in Europe, defined by an economic (Left–Right) dimension, and a cultural dimension that relates to voter and party positions on sociocultural issues.

In his Policy & Internet article “Cleavage Structures and Dimensions of Ideology in English Politics: Evidence From Voting Advice Application Data”, Jonathan Wheatley argues that the cleavage that exists in many European societies between “winners” and “losers” of globalization has engendered a new ideological dimension that pits “cosmopolitans” against “communitarians” and draws on cultural issues relating to identity, rather than economic issues.

He identifies latent dimensions from opinion data generated by two Voting Advice Applications deployed in England in 2014 and 2015 — finding that the political space in England is defined by two main ideological dimensions: an economic Left–Right dimension and a cultural communitarian–cosmopolitan dimension. While they co-vary to a significant degree, with economic rightists tending to be more communitarian and economic leftists tending to be more cosmopolitan, these tendencies do not always hold and the two dimensions should be considered as separate.
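
Methodologically, extracting such dimensions from opinion data is a dimension-reduction problem. As a rough illustration of the kind of analysis involved (a sketch only; the article’s own scaling technique may well differ, and the file and column names here are hypothetical), one could apply factor analysis to Likert-coded VAA responses:

```python
# Illustrative sketch only: extract two latent ideological dimensions
# from Likert-coded VAA responses (rows = users, columns = policy items).
import pandas as pd
from sklearn.decomposition import FactorAnalysis

responses = pd.read_csv("vaa_responses.csv")  # e.g. items coded -2..+2

fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
positions = fa.fit_transform(responses)  # each user's score on the 2 axes

# Loadings show which items define each dimension: economic items
# (tax, spending) would be expected to load on one axis, cultural
# items (immigration, EU) on the other.
loadings = pd.DataFrame(fa.components_.T,
                        index=responses.columns,
                        columns=["economic", "cultural"])
print(loadings.round(2))
```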

The identification of the communitarian–cosmopolitan dimension lends weight to the hypothesis of Kriesi et al. (2006) that politics is increasingly defined by a cleavage between “winners” and “losers” of globalization, with “losers” tending to adopt a position of cultural demarcation and to perceive “outsiders”, such as immigrants and the EU, as a threat. If an economic dimension pitting Left against Right (or labour against capital) defined the political arena in Europe in the twentieth century, maybe it is a cultural cleavage pitting cosmopolitans against communitarians that defines politics in the twenty-first.

We caught up with Jonathan to discuss his findings:

Ed.: The big thing that happened since your article was published was Brexit — so I guess the “communitarian–cosmopolitan” dimension (Trump!) makes obvious intuitive sense as a political cleavage plane. Will you be comparing your GE2015 VAA data with GE2017 data? And what might you expect to see?

Jonathan: Absolutely! We will be launching the WhoGetsMyVoteUK Voting Advice Application next week, a joint effort by three universities: Oxford Brookes University (where I am based), Queen Mary University of London, and the University of Bristol. This should provide extensive data that will allow us to conduct a longitudinal study: before and after Brexit.

Ed.: There was a lot of talk (for the first time) after Brexit of “the left behind” — I suppose partly corresponding to your “communitarians” — but that all seems to have died down. Of course they’re still there: is there any sense of how they will affect the upcoming election — particularly the “communitarian leftists”?

Jonathan: Well, this is the very group that Theresa May’s Conservative Party seems to be targeting. We should note that May has attempted to appeal directly to this group with her claim that “if you believe you’re a citizen of the world, you’re a citizen of nowhere”, made at the Tory Party Conference last autumn, and her assertion that “Liberalism and globalisation have left people behind”, made at the Lord Mayor’s banquet late last year. Her (at least superficially) economically leftist proposals during the election campaign to increase the living wage and statutory rights for family care and training, and to strengthen labour laws, together with her “hard Brexit” stance and confrontational rhetoric towards European leaders, seem specifically designed to appeal to this group. Many of these “communitarian leftists” have previously been tempted by UKIP, but the Conservatives seem to be winning the battle for their votes at the moment.

Ed.: Does the UK’s first-past-the-post system (resulting in a non-proportionally representative set of MPs) just hide what is happening underneath, i.e. I’m guessing a fairly constant, unchanging spectrum of political leanings? Presumably UKIP’s rise didn’t signify a lurch to the right: it was just an efficient way of labelling (for a while) people who were already there?

Jonathan: To a certain extent, yes. Superficially the UK has very much been a case of “business as usual” in terms of its party system, notwithstanding the (perhaps brief) emergence of UKIP as a significant force in around 2012. This can be contrasted with Sweden, Finland and the Netherlands, where populist right parties obtained significant representation in parliament. And UKIP may prove to be a temporary phenomenon. The first-past-the-post system provides more incentives for parties to reposition themselves to reflect the new reality than it does for new parties to emerge. In fact it is this repositioning, from an economically right-wing, mildly cosmopolitan party to an (outwardly) economically centrist, communitarian party, that seems to characterise the Tories today.

Ed.: Everything seems to be in a tremendous mess (parties imploding, Brexit horror, blackbox campaigning, the alt-right, uncertainty over tactical voting, “election hacking”) and pretty volatile. But are these exciting times for political scientists? Or are things too messy and the data (for example, on voting intentions as well as outcomes) too inaccessible to distinguish any grand patterns?

Jonathan: Exciting from a political science point of view; alarming from the point of view of a member of society.

Ed.: But talking of “grand patterns”: do you have any intuition why “the C20 might be about capital vs labour; the C21 about local vs global”? Is it simply the next obvious reaction to ever-faster technological development and economic concentration bumping against societal inertia, or something more complex and unpredictable?

Jonathan: Over generations, European societies gradually developed mechanisms of accountability to constrain their leaders and ensure they did not over-reach their powers. This is how democracy became consolidated. However, given that power is increasingly accruing to transnational and multinational corporations and networks that are beyond the reach of citizens operating in the national sphere, we must learn how to do this all over again on a global scale. Until we do so, globalisation will inevitably create “winners” and “losers” and will, I think, inevitably lead to more populism and upheaval.

Read the full article: Wheatley, J. (2016) Cleavage Structures and Dimensions of Ideology in English Politics: Evidence From Voting Advice Application Data. Policy & Internet 8 (4). DOI: 10.1002/poi3.129.

Jonathan Wheatley was talking to blog editor David Sutcliffe.

Has Internet policy had any effect on Internet penetration in Sub-Saharan Africa? https://ensr.oii.ox.ac.uk/has-internet-policy-had-any-effect-on-internet-penetration-in-sub-saharan-africa/ Wed, 10 May 2017 08:08:34 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4112 There is a consensus among researchers that ICT is an engine for growth, and it’s also considered by the OECD to be a part of fundamental infrastructure, like electricity and roads. The last decade has seen a rapid growth of Internet access across Africa, although it has not been evenly distributed. Some African countries have an Internet penetration of over 50 percent (such as the Seychelles and South Africa) whereas some resemble digital deserts, not even reaching two percent. Even more surprisingly, countries that are seemingly comparable in terms of economic development often show considerable differences in terms of Internet access (e.g., Kenya and Ghana).

Being excluded from the Internet economy has negative economic and social implications; it is therefore important for policymakers to ask how policy can bridge this inequality. But does policy actually have an effect on these differences? And if so, which specific policy variables? In their Policy & Internet article “Crossing the Digital Desert in Sub-Saharan Africa: Does Policy Matter?”, Robert Wentrup, Xiangxuan Xu, H. Richard Nakamura, and Patrik Ström address the dearth of research assessing the interplay between policy and Internet penetration by identifying Internet penetration-related policy variables and institutional constructs in Sub-Saharan Africa. Theirs is a first attempt to investigate whether Internet policy variables have any effect on Internet penetration in the region, and to shed light on which ones matter.

Based on a literature review and the available data, they examine four variables: (i) free flow of information (e.g. level of censorship); (ii) market concentration (i.e. whether or not internet provision is monopolistic); (iii) the activity level of the Universal Service Fund (a public policy promoted by some governments and international telecom organizations to address digital inclusion); and (iv) total tax on computer equipment, including import tariffs on personal computers. The results show that only the activity level of the USF and low total tax on computer equipment are significantly positively related to Internet penetration in Sub-Saharan Africa. Free flow of information and market concentration show no impact on Internet penetration. The latter could be attributed to underdeveloped competition in most Sub-Saharan countries.
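
As a rough illustration of the kind of cross-country analysis involved (a hedged sketch, not the authors’ actual model or data; the variable names are hypothetical placeholders), one could regress Internet penetration on the four policy variables plus a GDP control:

```python
# Illustrative sketch only: regress Internet penetration on the four
# policy variables plus a GDP control, one row per country.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ssa_internet_policy.csv")

model = smf.ols(
    "internet_penetration ~ usf_activity + equipment_tax"
    " + free_flow_info + market_concentration + log_gdp_per_capita",
    data=df,
).fit()

# Per the article's findings, only usf_activity (positive) and
# equipment_tax (negative) would be expected to show significant effects.
print(model.summary())
```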

The authors argue that unless states pursue an inclusive policy intended to enhance Internet access for all its citizens, there is a risk that the skewed pattern between the “haves” and the “have nots” will persist, or worse, be reinforced. They recommend that policymakers promote the policy instrument of Universal Service and USF and consider substituting tax on computer equipment with other tax revenues (i.e. introduce consumption-friendly incentives), and not to blindly trust the market’s invisible hand to fix inequality in Internet diffusion.

We caught up with Robert to discuss the findings:

Ed.: I would assume that Internet penetration is rising (or possibly even booming) across the continent, and that therefore things will eventually sort themselves out — is that generally true? Or is it already stalling / plateauing, leaving a lot of people with no apparent hope of ever getting online?

Robert: Yes, generally we see growth in Internet penetration across Africa. But it is very heterogeneous and unequal in character, and thus country-specific. Some rich African countries are doing quite well whereas others are lagging behind. We have also seen that Internet connectivity is vulnerable to the political situation. The recent shutdown of the Internet in Cameroon demonstrates this vulnerability.

Ed.: You mention that “determining the causality between Internet penetration and [various] factors is problematic” — i.e. that the relation between Internet use and economic growth is complex. This presumably makes it difficult for effective and sweeping policy “solutions” to have a clear effect. How has this affected Internet policy in the region, if at all?

Robert: On the one hand, if there is economic growth there will be money to invest in Internet infrastructure and devices; on the other hand, if there are investments in Internet infrastructure there will be economic growth. This resembles the chicken-and-egg problem. For many African countries, which lack large public investment funds and at the same time suffer from other, more pressing socio-economic challenges, it might be tricky to put effort into Internet policy issues. But there are some good examples of countries that have actually managed to do this, such as Kenya, where these efforts have led to positive effects on society and the economy as a whole. The local context, and a focus on instruments that disseminate Internet usage in unprivileged geographic areas and social groups (like the Universal Service Fund), are very important.

Ed.: How much of the low Internet penetration in large parts of Africa is simply due to large rural populations — and therefore something that will either never be resolved properly, or that will naturally resolve itself with the ongoing shift to urban centres?

Robert: We did not see a clear causal link between rural population and low Internet penetration at the country level, mainly because countries with a large rural population are often quite rich and thus also have money to invest in Internet infrastructure. Africa is very dependent on agriculture. Although the connectivity issue might be “self-resolved” to some degree by urban migration, other issues would emerge from such a shift, such as an increased socio-economic divide in urban areas. Hence, it is more effective to make sure that the Internet reaches rural areas at an early stage.

Ed.: And how much does domestic policy (around things like telecoms) get set internally, as opposed to externally? Presumably some things (e.g. the continent-wide cables required to connect Africa to the rest of the world) are easier to bring about if there is a strong / stable regional policy around regulation of markets and competition — whether organised internally, or influenced by outside governments and industry?

Robert: The influence of telecom ministries and telecom operators is strong, but of course they are affected by intra-regional organisations, private companies etc. In the past Africa has had difficulties in developing pan-regional trade and policies. But such initiatives are encouraged, not least in order to facilitate cost-sharing of large Internet-related investments.

Ed.: Leaving aside the question of causality, you mention the strong correlation between economic activity and Internet penetration: are there any African countries that buck this trend — at either end of the economic scale?

Robert: We have seen that Kenya and Nigeria have had quite impressive rates of Internet penetration in relation to GDP. Gabon on the other hand is a relatively rich African country, but with quite low Internet penetration.

Read the full article: Wentrup, R., Xu, X., Nakamura, H.R., and Ström, P. (2016) Crossing the Digital Desert in Sub-Saharan Africa: Does Policy Matter? Policy & Internet 8 (3). DOI: 10.1002/poi3.123.


Robert Wentrup was talking to blog editor David Sutcliffe.

We aren’t “rational actors” when it comes to privacy — and we need protecting https://ensr.oii.ox.ac.uk/we-arent-rational-actors-when-it-come-to-privacy-and-we-need-protecting/ Fri, 05 May 2017 08:00:56 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4100
We are increasingly exposed to new practices of data collection. Image by ijclark (Flickr CC BY 2.0).

As digital technologies and platforms are increasingly incorporated into our lives, we are exposed to new practices of data creation and collection — and there is evidence that American citizens are deeply concerned about the consequences of these practices. But despite these concerns, the public has not abandoned technologies that produce data and collect personal information. In fact, the popularity of technologies and services that reveal insights about our health, fitness, medical conditions, and family histories in exchange for extensive monitoring and tracking paints a picture of a public that is voluntarily offering itself up to increasingly invasive forms of surveillance.

This seeming inconsistency between intent and behaviour is routinely explained with reference to the “privacy paradox”. Advertisers, retailers, and others with a vested interest in avoiding the regulation of digital data collection have pointed to this so-called paradox as an argument against government intervention. By phrasing privacy as a choice between involvement in (or isolation from) various social and economic communities, they frame information disclosure as a strategic decision made by informed consumers. Indeed, discussions on digital privacy have been dominated by the idea of the “empowered consumer” or “privacy pragmatist” — an autonomous individual who makes informed decisions about the disclosure of their personal information.

But there is increasing evidence that “control” is a problematic framework through which to operationalize privacy. In her Policy & Internet article “From Privacy Pragmatist to Privacy Resigned: Challenging Narratives of Rational Choice in Digital Privacy Debates,” Nora A. Draper examines how the figure of the “privacy pragmatist” developed by the prominent privacy researcher Alan Westin has been used to frame privacy within a typology of personal preference — a framework that persists in academic, regulatory, and commercial discourses in the United States. Those in the pragmatist group are wary about the safety and security of their personal information, but make supposedly rational decisions about the conditions under which they are comfortable with disclosure, logically calculating the costs and benefits associated with information exchange.

Academic critiques of this model have tended to focus on the methodological and theoretical validity of the pragmatist framework; however, in light of two recent studies that suggest individuals are resigned to the loss of privacy online, this article argues for the need to examine a possibility that has been overlooked as a consequence of this focus on Westin’s typology of privacy preferences: that people have opted out of the discussion altogether. Considering a theory of resignation alters how the problem of privacy is framed and opens the door to alternative discussions around policy solutions.

We caught up with Nora to discuss her findings:

Ed.: How easy is it even to discuss privacy (and people’s “rational choices”), when we know so little about what data is collected about us through a vast number of individually innocuous channels — or the uses to which it is put?

Nora: This is a fundamental challenge in current discussions around privacy. There are steps that we can take as individuals that protect us from particular types of intrusion, but in an environment where seemingly benign data flows are used to understand and predict our behaviours, it is easy for personal privacy protection to feel like an uphill battle. In such an environment, it is increasingly important that we consider resigned inaction to be a rational choice.

Ed.: I’m not surprised that there will be people who basically give up in exhaustion, when faced with the job of managing their privacy (I mean, who actually reads the Google terms that pop up every so often?). Is there a danger that this lack of engagement with privacy will be normalised during a time that we should actually be paying more, not less, attention to it?

Nora: This feeling of powerlessness around our ability to secure opportunities for privacy has the potential to discourage individual or collective action around privacy. Anthropologists Peter Benson and Stuart Kirsch have described the cultivation of resignation as a strategy to discourage collective action against undesirable corporate practices. Whether or not these are deliberate efforts, the consequence of creating a nearly unnavigable privacy landscape is that people may accept undesirable practices as inevitable.

Ed.: I suppose another irony is the difficulty of getting people to care about something that nevertheless relates so fundamentally and intimately to themselves. How do we get privacy to seem more interesting and important to the general public?

Nora: People experience the threats of unwanted visibility very differently. For those who are used to the comfortable feeling of public invisibility — the types of anonymity we feel even in public spaces — the likelihood of an unwanted privacy breach can feel remote. This is one of the problems of thinking about privacy purely as a personal issue. When people internalize the idea that if they have done nothing wrong, they have no reason to be concerned about their privacy, it can become easy to dismiss violations when they happen to others. We can become comfortable with a narrative that if a person’s privacy has been violated, it’s likely because they failed to use the appropriate safeguards to protect their information.

This cultivation of a set of personal responsibilities around privacy is problematic not least because it has the potential to blame victims rather than those parties responsible for the privacy incursions. I believe there is real value in building empathy around this issue. Efforts to treat privacy as a community practice and, perhaps, a social obligation may encourage us to think about privacy as a collective rather than individual value.

Ed.: We have a forthcoming article that explores the privacy views of Facebook / Google (companies and employees), essentially pointing out that while the public may regard privacy as pertaining to whether or not companies collect information in the first place, the companies frame it as an issue of “control” — they collect it, but let users subsequently “control” what others see. Is this fundamental discrepancy (data collection vs control) something you recognise in the discussion?

Nora: The discursive and practical framing of privacy as a question of control brings together issues addressed in your previous two questions. By providing individuals with tools to manage particular aspects of their information, companies are able to cultivate an illusion of control. For example, we may feel empowered to determine who in our digital network has access to a particular posted image, but have little ability to determine how information related to that image — for example, its associated metadata or details on who likes, comments, or reposts it — is used.

The “control” framework further encourages us to think about privacy as an individual responsibility. For example, we may assume that unwanted visibility related to that image is the result of an individual’s failure to correctly manage their privacy settings. The reality is usually much more complicated than this assigning of individual blame allows for.

Ed.: How much of the privacy debate and policy making (in the States) is skewed by economic interests — i.e. holding that it’s necessary for the public to provide data in order to keep business competitive? And is the “Europe favours privacy, US favours industry” truism broadly true?

Nora: I don’t have a satisfactory answer to this question. There is evidence from past surveys I’ve done with colleagues that people in the United States are more alarmed by the collection and use of personal information by political parties than they are by similar corporate practices. Even that distinction, however, may be too simplistic. Political parties have an established history of using consumer information to segment and target particular audience groups for political purposes. We know that the U.S. government has required private companies to share information about consumers to assist in various surveillance efforts. Discussions about privacy in the U.S. are often framed in terms of tradeoffs with, for example, technological and economic innovation. This is, however, only one of the ways in which the value of privacy is undermined through the creation of false tradeoffs. Daniel Solove, for example, has written extensively on how efforts to frame privacy in opposition to safety encourages capitulation to transparency in the service of national security.

Ed.: There are some truly terrible US laws (e.g. the General Mining Act of 1872) that were developed for one purpose, but are now hugely exploitable. What is the situation for privacy? Is the law still largely fit for purpose, in a world of ubiquitous data collection? Or is reform necessary?

Nora: One example of such a law is the Electronic Communications Privacy Act (ECPA) of 1986. This law was written before many Americans had email accounts, but continues to influence the scope authorities have to access digital communications. One of the key issues in the ECPA is the differential protection for messages depending on when they were sent. The ECPA, which was written when emails would have been downloaded from a server onto a personal computer, treats emails stored for more than 180 days as “abandoned.” While messages received in the past 180 days cannot be accessed without a warrant, so-called abandoned messages require only a subpoena. Although there is some debate about whether subpoenas offer adequate privacy protections for messages stored on remote servers, the issue is that the time-based distinction created by the “180-day rule” makes little sense when access to cloud storage allows people to save messages indefinitely. Bipartisan efforts to introduce the Email Privacy Act, which would extend warrant protections to digital communication that is over 180 days old, have received wide support from those in the tech industry as well as from privacy advocacy groups.

Another challenge, which you alluded to in your first question, pertains to the regulation of algorithms and algorithmic decision-making. These technologies are often described as “black boxes” to reflect the difficulties in assessing how they work. While the consequences of algorithmic decision-making can be profound, the processes that lead to those decisions are often opaque. The result has been increased scholarly and regulatory attention on strategies to understand, evaluate, and regulate the processes by which algorithms make decisions about individuals.

Read the full article: Draper, N.A. (2017) From Privacy Pragmatist to Privacy Resigned: Challenging Narratives of Rational Choice in Digital Privacy Debates. Policy & Internet 9 (2). DOI: 10.1002/poi3.142.


Nora A. Draper was talking to blog editor David Sutcliffe.

How do we encourage greater public inclusion in Internet governance debates? https://ensr.oii.ox.ac.uk/how-do-we-encourage-greater-public-inclusion-in-internet-governance-debates/ Wed, 03 May 2017 08:00:17 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4095 The Internet is neither purely public nor private, but combines public and private networks, platforms, and interests. Given its complexity and global importance, there is clearly a public interest in how it is governed, and the role of the public in Internet governance debates is a critical issue for policymaking.

The current dominant mechanism for public inclusion is the multistakeholder approach, i.e. one that includes governments, industry and civil society in governance debates. Despite at times being used as a shorthand for public inclusion, multistakeholder governance is implemented in many different ways and has faced criticism, with some arguing that multistakeholder discussions serve as a cover for the growth of state dominance over the Web, and enable oligarchic domination of discourses that are ostensibly open and democratic.

In her Policy & Internet article “Searching for the Public in Internet Governance: Examining Infrastructures of Participation at NETmundial”, Sarah Myers West examines the role of the public in Internet governance debates, with reference to public inclusion at the 2014 Global Multistakeholder Meeting on the Future of Internet Governance (NETmundial). NETmundial emerged at a point when public legitimacy was a particular concern for the Internet governance community, so finding ways to include the rapidly growing and increasingly diverse group of stakeholders in the governance debate was especially important for the meeting’s success.

This is particularly significant as the Internet governance community faces problems of increasing complexity and diversity of views. The growth of the Internet has made the public central to Internet governance — but introduces problems around the growing number of stakeholders speaking different languages, with different technical backgrounds, and different perspectives on the future of the Internet.

However, the article suggests that rather than attempting to unify behind a single institution or achieve public consensus through a single deliberative forum, the Internet community may fragment further into multiple publics, redistributing into a more networked and “agonistic” model. This doesn’t quite reflect the model of the “public sphere” Habermas may have envisioned, but it may ultimately befit the network of networks it is forged around.

We caught up with Sarah to discuss her findings:

Ed.: You say governance debates involve two levels of contestation: firstly in how we define “the Internet community”, and secondly around the actual decision-making process. How do we even start defining what “the public” means?

Sarah: This is a really difficult question, and it’s really the one that drove me throughout my research. I think that observing examples of publics ‘in the wild’ — how they are actually constituted within the Internet governance space — is one entry point. As I found in the article, there are a number of different kinds of publics that have emerged over the history of the internet, some fairly structured and centralized and others more ad hoc and decentralized. There’s also a difference between the way public inclusion is described/structured and the way things work out in practice. But better understanding what kinds of publics actually exist is only the first step to analyzing deeper questions — about the workings of power on and through the Internet.

Ed.: I know Internet governance is important but haven’t the faintest idea who represents me (as a member of “the public”) in these debates. Are my interests represented by the UK Government? Europe? NGOs? Industry? Or by self-proclaimed “public representatives”?

Sarah: All of the above — and also, maybe, none of the above. There are a number of different kinds of stakeholders representing different constituencies on the Internet — at NETmundial, this was separated into Government, Business, Civil Society, Academia and the Technical Community. In reality, there are blurred boundaries around all these categories, and each of these groups could make claims about representing the public, though which aspects of the public interest they represent is worth a closer look.

Many Internet governance fora are constituted in a way that would also allow each of us to represent ourselves: at NETmundial, there was a lot of thought put into facilitating remote participation and bringing in questions from the Internet. But there are still barriers — it’s not the same as being in the room with decision makers, and the technical language that’s developed around Internet governance certainly makes these discussions hard for newcomers to follow.

Ed.: Is there a tension here between keeping a process fairly closed (and efficient) vs making it totally open and paralysed? And also between being completely democratic vs being run by people (engineers) who actually understand how the Internet works? i.e. what is the point of including “the public” (whatever that means) at a global level, instead of simply being represented by the governments we elect at a national (or European) level?

Sarah: There definitely is a tension there, and I think this is part of the reason why we see such different models of public inclusion in different kinds of forums. For starters, I’m not sure that, at present, there’s a forum that I can think of that is fully democratic. But I think there is still value in trying to be more democratic, and in placing the public at the centre of these discussions. As we’ve seen in the years following the Snowden revelations, the interests of state actors are not always aligned, and sometimes are completely at odds, with those of the public.

The involvement of civil society, academia and the technical community is really critical to counterbalancing these interests — but, as many civil society members remarked after NETmundial, this can be an uphill battle. Governments and corporations have an easier time in these kinds of forums identifying and advocating for a narrow set of interests and values, whereas civil society doesn’t always come in to these discussions with as clear a consensus. It can be a messy process.

Ed.: You say that “analyzing the infrastructure of public participation makes it possible to examine the functions of Internet governance processes at a deeper level.” Having done so, are you hopeful or cynical about “Internet governance” as it is currently done?

Sarah: I’m hopeful about the attentiveness to public inclusion exhibited at NETmundial — it really was a central part of the process and the organizers made a number of investments in ensuring it was as broadly accessible as possible. That said, I’m a bit critical of whether building technological infrastructure for inclusion on its own can overcome the real resource imbalances that affect who can participate in these kinds of forums. It’s probably going to require investments in both — there’s a danger that by focusing on the appearance of being democratic, these discussions can mask the underlying power discrepancies that inhibit deliberation on a level playing field.

Read the full article: West, S.M. (2017) Searching for the Public in Internet Governance: Examining Infrastructures of Participation at NETmundial. Policy & Internet 9 (2). doi:10.1002/poi3.143


Sarah Myers West was talking to blog editor David Sutcliffe.

Should citizens be allowed to vote on public budgets? https://ensr.oii.ox.ac.uk/should-citizens-be-allowed-to-vote-on-public-budgets/ Tue, 18 Apr 2017 09:26:01 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4089
Image: a youth occupation of Belo Horizonte to present and discuss different forms of occupation of urban space, by upsilon (Flickr CC BY-SA).

There is a general understanding that public decision-making could generate greater legitimacy for political decisions, greater trust in government action and a stronger sense of representation. One way of listening to citizens’ demands and improving their trust in politics is the creation of online communication channels whereby issues, problems, demands, and suggestions can be addressed. One example, participatory budgeting, is the process by which ordinary citizens are given the opportunity to make decisions regarding a municipal budget, by suggesting, discussing, and nominating projects that can be carried out within it. Considered to be a successful example of empowered democratic governance, participatory budgeting has spread among many cities in Brazil, and, after being recommended by the World Bank and UN-Habitat, has also been implemented in various cities worldwide.

The Policy & Internet article “Do Citizens Trust Electronic Participatory Budgeting? Public Expression in Online Forums as an Evaluation Method in Belo Horizonte” by Samuel A. R. Barros and Rafael C. Sampaio examines the feelings, emotions, narratives, and perceptions of political effectiveness and political representation shared in these forums. They discuss how online messages and feelings expressed through these channels can be used to assess public policies, as well as examining some of the consequences of ignoring them.

Recognized as one of the most successful e-democracy experiences in Brazil, Belo Horizonte’s electronic participatory budgeting platform was created in 2006 to allow citizens to deliberate and vote in online forums provided by the city hall. The initiative involved around 174,000 participants in 2006 and 124,000 in 2008. However, only 25,000 participants took part in the 2011 edition, indicating a significant loss of confidence in the process. It is a useful case to assess the reasons for success and failure of e-participation initiatives.

There is some consensus in the literature on participants’ need to feel that their contributions will be taken into consideration by those who promote initiatives and, ideally, that these contributions will have effects and practical consequences in the formulation of public policies. By offering an opportunity to participate, the municipality sought to improve perceptions of the quality of representation. Nonetheless, government failure to carry out the project chosen in 2008 and lack of confidence in the voting mechanism itself may have contributed to producing the opposite effect.

Moderators didn’t facilitate conversation or answer questions or demands. No indication was given as to whether these messages were being taken into consideration or even read, and the organizers never explained how or whether the messages would be used later on. In other words, the municipality took no responsibility for reading or evaluating the messages posted there. Thus, it seems to be an online forum that offers little or no citizen empowerment.
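To make the evaluation method concrete: using forum messages to assess a policy comes down to tallying the feelings expressed per edition of the process. Below is a minimal sketch assuming each message has already been hand-coded for the feeling it expresses; the rows and figures are hypothetical, not the authors' data or code.

```python
import pandas as pd

# Hypothetical input: one row per forum message, hand-coded as
# expressing a positive, negative, or neutral feeling.
messages = pd.DataFrame({
    "edition":   [2008, 2008, 2008, 2008, 2011, 2011, 2011, 2011],
    "sentiment": ["positive", "positive", "positive", "negative",
                  "negative", "negative", "positive", "negative"],
})

# Share of each feeling within each edition of the participatory budget,
# the kind of figure that can be compared across editions over time.
shares = (
    messages.groupby("edition")["sentiment"]
            .value_counts(normalize=True)
            .rename("share")
)
print(shares)
```

On a real corpus, the same aggregation would yield the proportion of positive and negative messages per edition, which can then be tracked over time as an indicator of participants' confidence in the process.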

We caught up with the authors to discuss their findings:

Ed.: You say that in 2008 62.5% of the messages expressed positive feelings, but in 2011, 59% expressed negative ones. Quite a drop! Is that all attributable to the deliberation not being run properly, i.e. to the danger of causing damage (i.e. fall in trust) by failing to deliver on promises?

Samuel + Rafael: That’s the million dollar question! The truth is: it’s hard to say. Nevertheless, our research does show some evidence of this. Most negative feelings were directly connected to this failure to deliver the previously approved work. As participatory budgeting processes are very connected to practical issues, this was probably the main reason for the drop we saw. We also indicate how the type of complaint changed significantly from one edition to another. For instance, in 2008 many people asked for small adjustments in each of the proposed works, while in 2011 they were complaining about the scope or even the relevance of the works.

Ed.: This particular example aside: is participatory budgeting generally successful? And does it tend to be genuinely participatory (and deliberative?), or more like: “do you want option A or B”?

Samuel + Rafael: That’s also a very good question. In Brazil’s case, most participatory budgeting exercises achieved good levels of participation and contributed to at least minor changes in the bureaucratic routines of public servants and officials. Thus, they can be considered successful. Of course, there are many cases of failure as well, since participatory budgeting can be hard to do properly and, as our article indicates, a single mistake can disrupt it for good.

Regarding the second question, we would say that it’s more about choosing what you want and what you can deliver as the public power. In actual fact, most participatory budgeting exercises are not as deliberative as everyone believes — they are more about bargaining and negotiation. Nevertheless, while the daily practice of participation may not be as deliberative as we may want it, it still achieves a lot of other benefits, such as keeping the community engaged and informed, letting people know more about how the budget works — and the negotiation itself may involve a certain amount of empathy and role-taking.

Ed.: Is there any evidence that crowdsourcing decisions of this sort leads to better ones? (or at least, more popular ones). Or was this mostly done just to “keep the people happy”?

Samuel + Rafael: We shouldn’t assume that every popular decision is necessarily better, or we’ll get trapped in the caveats. On the other hand, considering how our representative system was designed, how people feel powerless and how rulers are usually set apart from their constituents, we can easily support any real attempt to give the people a greater chance of speaking their minds and even deciding on things. If the process is well designed, if the managers (i.e. public servants and officials) are truly open to these inputs and if the public is informed, we can hope for better decisions.

Ed.: Is there any conflict here between “what is popular” and “what is right”? i.e. how much should be opened up to mass voting (with the inevitable skews and take-over by vested interests).

Samuel + Rafael: This is the “dark side” of participation that we mentioned before. We should not automatically consider participation to be good in and of itself. It can be misinformed, biased, and lead to worse and not better decisions. In particular, when people are not informed enough and the topic has not been discussed enough in the public sphere, we might end up with bad popular decisions. For instance, would Brexit have occurred with a different method of voting?

Let’s imagine several months of small scale discussions between citizens (i.e. minipublics) both face-to-face and in online deliberation spaces. Maybe these groups would reach the same decision, but at least all participants would feel more confident in their decisions, because they had enough information and were confronted with different points of view and arguments before voting. Thus, we believe that mass voting can be used for big decisions, but that there is a need for greater conversation and consensus before it.

Ed.: Is there any link between participatory budgeting and public scrutiny of public budgets (which can be tremendously corrupt, e.g. when it comes to building projects) — or does participatory budgeting tend to be viewed as something very different to oversight?

Samuel + Rafael: This is actually one of the benefits of participatory budgeting that goes beyond participation alone. It makes corruption and bribery harder to do. As there are more people discussing and monitoring the budget, the process itself needs to be more transparent and accountable. There are some studies that find a correlation between participatory budgeting and tax payment. The problem is that participatory budgeting tends to concern only a small amount of the budget, thus this public control does not reach the whole process. Still, it shows how public participation may lead to a series of benefits both for the public agents and the public itself.

Read the full article: Barros, S.A.R. and Sampaio, R.C. (2016) Do Citizens Trust Electronic Participatory Budgeting? Public Expression in Online Forums as an Evaluation Method in Belo Horizonte. Policy & Internet 8 (3). doi:10.1002/poi3.125


Samuel A. R. Barros and Rafael C. Sampaio were talking to blog editor David Sutcliffe.

Governments Want Citizens to Transact Online: And This Is How to Nudge Them There https://ensr.oii.ox.ac.uk/governments-want-citizens-to-transact-online-and-this-is-how-to-nudge-them-there/ Mon, 10 Apr 2017 10:08:33 +0000 http://blogs.oii.ox.ac.uk/policy/?p=4078
A randomized controlled trial that “nudged” users of a disability parking scheme to renew online showed a six percentage point increase in online renewals. Image: Wendell (Flickr).

In an era when most transactions occur online, it’s natural for public authorities to want the vast bulk of their contacts with citizens to occur through the Internet. But they also face a minority for whom paper and face-to-face interactions are still preferred or needed — leading to fears that efforts to move services online “by default” might reinforce or even encourage exclusion. Notwithstanding these fears, it might be possible to “nudge” citizens out of long-held habits by making online submission advantageous and other routes more difficult to use.

Behavioural public policy has been strongly advocated in recent years as a low-cost means to shift citizen behaviour, and has been used to reform many standard administrative processes in government. But how can we design non-obtrusive nudges to make users shift channels without them losing access to services? In their new Policy & Internet article “Nudges That Promote Channel Shift: A Randomized Evaluation of Messages to Encourage Citizens to Renew Benefits Online” Peter John and Toby Blume design and report a randomized controlled trial that encouraged users of a disability parking scheme to renew online.

They found that by simplifying messages and adding incentives (i.e. signalling the collective benefit of moving online), users were encouraged to switch from paper to online channels by about six percentage points. As a result of the intervention and ongoing efforts by the Council, virtually all the parking scheme users now renew online.

The finding that it’s possible to appeal to citizens’ willingness to act for collective benefit is encouraging. The results also support the more general literature showing that citizens’ use of online services is based on trust and confidence in public services, and that interventions should go with the grain of citizen preferences and norms.
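As an aside for the quantitatively minded: a channel-shift effect of this kind is typically assessed as a difference between two proportions. Below is a minimal sketch of a two-proportion z-test using only Python's standard library; the group sizes and renewal counts are invented to produce a roughly six-percentage-point gap, and are not the trial's actual data.

```python
from math import sqrt, erf

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Z-test for the difference between two independent proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)          # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided
    return p_a - p_b, z, p_value

# Invented counts: 30% of 1,000 treated users renewed online vs 24% of
# 1,000 controls, i.e. a six-percentage-point difference.
effect, z, p = two_proportion_ztest(300, 1000, 240, 1000)
print(f"effect = {effect:+.1%}, z = {z:.2f}, p = {p:.4f}")
```

With these invented counts the difference is comfortably significant (z is about 3.0), illustrating why even modest-sounding percentage-point shifts can be detected reliably in a well-powered randomised evaluation.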

We caught up with Peter John to discuss his findings, and the role of behavioural public policy in government:

Ed.: Is it fair to say that the real innovation of behavioural public policy isn’t so much Government trying to nudge us into doing things in subtle, unremarked ways, but actually using experimental techniques to provide some empirical backing for the success of any interventions? i.e. like industry has done for years with things like A/B testing?

Peter: There is some truth in this, but the late 2000s was a time when policy-makers got more interested in the findings of the behavioural sciences and in redesigning initiatives to incorporate behaviour insights. Randomised controlled trials (RCTs) have become more commonly used more generally across governments to test for the impact of public policies — these are better than A/B testing as randomisation protocols are followed and clear reports are made of the methods used. A/B testing can be dodgy — or at least, it is done in secret and we don’t know how good the methods used are. There is much better reporting of government RCTs.

Ed.: The UK Government’s much-discussed “Nudge Unit” was part-privatised a few years ago, and the Government now has to pay for its services: was this a signal of the tremendous commercial value of behavioural economics (and all the money to be made if you sell off bits of the Civil Service), or Government not really knowing what to do with it?

Peter: I think the language of privatisation is not quite right to describe what happened. The unit was spun out of government, but government still owns a share, with Nesta owning the other large portion. It is not a profit-making enterprise, but a non-profit one — the freedom allows it to access funds from foundations and other funders. The Behavioural Insights Team is still very much a public service organisation, even if it has got much bigger since moving out of direct control by Government. Where there are public funds involved there is scrutiny through ministers and other funders — it matters that people know about the nudges, and can hold policy-makers to account.

Ed.: You say that “interventions should go with the grain of citizen preferences and norms” to be successful. Which I suppose is a sort-of built-in ethical safeguard. But do you know of any behavioural pushes that make us go against our norms, or that might raise genuine ethical concerns, or calls for oversight?

Peter: I think some of the shaming experiments done on voter turnout are on the margins of what is ethically acceptable. I agree that the natural pragmatism and caution of public agencies helps them agree relatively low key interventions.

Ed.: Finally — having spent some time studying and thinking about Government nudges .. have you ever noticed or suspected that you might have been subjected to one, as a normal citizen? I mean: how ubiquitous are they in our public environment?

Peter: Indeed — a lot of our tax letters are part of an experiment. But it’s hard to tell of course, as making nudges non-obtrusive is one of the key things. It shouldn’t be a problem that I am part of an experiment of the kind I might commission.

Read the full article: John, P. and Blume, T. (2017) Nudges That Promote Channel Shift: A Randomized Evaluation of Messages to Encourage Citizens to Renew Benefits Online. Policy & Internet. DOI: 10.1002/poi3.148.


Peter John was talking to blog editor David Sutcliffe.

Controlling the crowd? Government and citizen interaction on emergency-response platforms https://ensr.oii.ox.ac.uk/controlling-the-crowd-government-and-citizen-interaction-on-emergency-response-platforms/ Mon, 07 Dec 2015 11:21:52 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3529 There is a great deal of interest in the use of crowdsourcing tools and practices in emergency situations. Gregory Asmolov‘s article Vertical Crowdsourcing in Russia: Balancing Governance of Crowds and State–Citizen Partnership in Emergency Situations (Policy and Internet 7,3) examines crowdsourcing of emergency response in Russia in the wake of the devastating forest fires of 2010. Interestingly, he argues that government involvement in these crowdsourcing efforts can actually be used to control and regulate volunteers from the top down — not just to “mobilize them”.

RUSSIA, NEAR RYAZAN – 8 MAY 2011: Piled-up wood in the forest one winter after the devastating 2010 forest fires in Russia. Image: Max Mayorov (Flickr).
My interest in the role of crowdsourcing tools and practices in emergency situations was triggered by my personal experience. In 2010 I was one of the co-founders of the Russian “Help Map” project, which facilitated volunteer-based response to wildfires in central Russia. When I was working on this project, I realized that a crowdsourcing platform can bring the participation of the citizen to a new level and transform sporadic initiatives by single citizens and groups into large-scale, relatively well coordinated operations. What was also important was that both the needs and the forms of participation required to address those needs were defined by the users themselves.

To some extent the citizen-based response filled the gap left by the lack of a sufficient response from the traditional institutions.[1] This suggests that the role of ICTs in disaster response should be examined within the political context of the power relationship between members of the public who use digital tools and the traditional institutions. My experience in 2010 was the first time I was able to see that, while we would expect that in the case of a natural disaster both the authorities and the citizens would be mostly concerned about the emergency, the actual situation might be different.

Apparently the emergence of independent, citizen-based collective action in response to a disaster was seen as a threat by the institutional actors. First, it was a threat to the image of these institutions, which didn’t want citizens to be portrayed as the leading responding actors. Second, any type of citizen-based collective action, even if not purely political, may be an issue of concern in authoritarian countries in particular. Accordingly, one can argue that, while citizens are struggling against a disaster, in some cases the traditional institutions may make substantial efforts to restrain and contain the action of citizens. In this light, the role of information technologies can include not only enhancing citizen engagement and increasing the efficiency of the response, but also controlling the digital crowd of potential volunteers.

The purpose of this paper was to conceptualize the tension between the role of ICTs in the engagement of the crowd and its resources, and the role of ICTs in controlling the resources of the crowd. The research suggests a theoretical and methodological framework that allows us to explore this tension. The paper focuses on an analysis of specific platforms and presents empirical data about the structure of the platforms, and interviews with developers and administrators of the platforms. This data is used in order to identify how tools of engagement are transformed into tools of control, and what major differences there are between platforms that seek to achieve these two goals. That said, obviously any platform can have properties of control and properties of engagement at the same time; however the proportion of these two types of elements can differ significantly.

One of the core issues for my research is how traditional actors respond to fast, bottom-up innovation by citizens.[2] On the one hand, the authorities try to restrict the empowerment of citizens by the new tools. On the other hand, the institutional actors also seek to innovate and develop new tools that can restore the balance of power that has been challenged by citizen-based innovation. The tension between using digital tools for the engagement of the crowd and for control of the crowd can be considered as one of the aspects of this dynamic.

That doesn’t mean that all state-backed platforms are created solely for the purpose of control. One can argue, however, that the development of digital tools that offer a mechanism of command and control over the resources of the crowd is prevalent among the projects that are supported by the authorities. This can also be approached as a means of using information technologies in order to include the digital crowd within the “vertical of power”, which is a top-down strategy of governance. That is why this paper seeks to conceptualize this phenomenon as “vertical crowdsourcing”.

The question of whether using a digital tool as a mechanism of control is intentional is to some extent secondary. What is important is that the analysis of platform structures relying on activity theory identifies a number of properties that allow us to argue that these tools are primarily tools of control. The conceptual framework introduced in the paper is used in order to follow the transformation of tools for the engagement of the crowd into tools of control over the crowd. That said, some of the interviews with the developers and administrators of the platforms may suggest the intentional nature of the development of tools of control, while crowd engagement is secondary.

[1] Asmolov G. “Natural Disasters and Alternative Modes of Governance: The Role of Social Networks and Crowdsourcing Platforms in Russia”, in Bits and Atoms: Information and Communication Technology in Areas of Limited Statehood, edited by Steven Livingston and Gregor Walter-Drop, Oxford University Press, 2013.

[2] Asmolov G., “Dynamics of innovation and the balance of power in Russia”, in State Power 2.0: Authoritarian Entrenchment and Political Engagement Worldwide, edited by Muzammil M. Hussain and Philip N. Howard, Ashgate, 2013.

Read the full article: Asmolov, G. (2015) Vertical Crowdsourcing in Russia: Balancing Governance of Crowds and State–Citizen Partnership in Emergency Situations. Policy and Internet 7,3: 292–318.


Gregory Asmolov is a PhD student at the LSE, where he is studying crowdsourcing and the emergence of spontaneous order in situations of limited statehood. He is examining the emerging collaborative power of ICT-enabled crowds in crisis situations, and aiming to investigate the topic drawing on evolutionary theories concerned with spontaneous action and the sustainability of voluntary networked organizations. He analyzes whether crowdsourcing practices can lead to the development of bottom-up online networked institutions and “peer-to-peer” governance.

Does crowdsourcing citizen initiatives affect attitudes towards democracy? https://ensr.oii.ox.ac.uk/does-crowdsourcing-of-citizen-initiatives-affect-attitudes-towards-democracy/ Sun, 22 Nov 2015 20:30:17 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3496 Crowdsourcing legislation is an example of a democratic innovation that gives citizens a say in the legislative process. In their Policy and Internet journal article ‘Does Crowdsourcing Legislation Increase Political Legitimacy? The Case of Avoin Ministeriö in Finland’, Henrik Serup Christensen, Maija Karjalainen and Laura Nurminen explore how involvement in citizens’ initiatives affects attitudes towards democracy. They find that crowdsourcing citizens’ initiatives can potentially strengthen political legitimacy, but both outcomes and procedures matter for the effects.

Crowdsourcing is a recent buzzword that describes efforts to use the Internet to mobilize online communities to achieve specific organizational goals. While crowdsourcing serves several purposes, the most interesting potential from a democratic perspective is the ability to crowdsource legislation. By giving citizens the means to affect the legislative process more directly, crowdsourcing legislation is an example of a democratic innovation that gives citizens a say in the legislative process. Recent years have witnessed a scholarly debate on whether such new forms of participatory governance can help cure democratic deficits such as the declining political legitimacy of the political system in the eyes of the citizenry. However, it is still not clear how taking part in crowdsourcing affects the political attitudes of the participants, and the potential impact of such democratic innovations therefore remains unclear.

In our study, we contribute to this research agenda by exploring how crowdsourcing citizens’ initiatives affected political attitudes in Finland. The non-binding Citizens’ Initiative instrument in Finland was introduced in spring 2012 to give citizens the chance to influence the agenda of the political decision making. In particular, we zoom in on people active on the Internet website Avoin Ministeriö (Open Ministry), which is a site based on the idea of crowdsourcing where users can draft citizens’ initiatives and deliberate on their contents. As is frequently the case for studies of crowdsourcing, we find that only a small portion of the users are actively involved in the crowdsourcing process. The option to deliberate on the website was used by about 7% of the users; the rest were only passive readers or supported initiatives made by others. Nevertheless, Avoin Ministeriö has been instrumental in creating support for several of the most successful initiatives during the period, showing that the website has been a key actor during the introductory phase of the Citizens’ initiative in Finland.

We study how developments in political attitudes were affected by outcome satisfaction and process satisfaction. Outcome satisfaction concerns whether the participants get their preferred outcome through their involvement, and this has been emphasized by proponents of direct democracy. Since citizens get involved to achieve a specific outcome, their evaluation of the experience hinges on whether or not they achieve this outcome. Process satisfaction, on the other hand, is more concerned with the perceived quality of decision making. According to this perspective, what matters is that participants find that their concerns are given due consideration. When people find the decision making to be fair and balanced, they may even accept not getting their preferred outcome. The relative importance of these two perspectives remains disputed in the literature.

The research design consisted of two surveys administered to the users of Avoin Ministeriö before and after the decision of the Finnish Parliament on the first citizens’ initiative, concerning a ban on the fur-farming industry in Finland. This allowed us to observe how involvement in the crowdsourcing process shaped developments in central political attitudes among the users of Avoin Ministeriö and what factors determined the developments in subjective political legitimacy. The first survey was conducted in fall 2012, when the initiators were gathering signatures in support of the initiative to ban fur-farming, while the second survey was conducted in summer 2013 when Parliament rejected the initiative. Altogether 421 persons filled in both surveys, and thus comprised the sample for the analyses.

The study yielded a number of interesting findings. First of all, those who were dissatisfied with Parliament rejecting the initiative experienced a significantly more negative development in political trust compared to those who did not explicitly support the initiative. This shows that the crowdsourcing process had a negative impact on political legitimacy among the initiative’s supporters, which is in line with previous contributions emphasizing the importance of outcome legitimacy. It is worth noting that this also affected trust in the Finnish President, even though he has no formal powers in relation to the Citizens’ Initiative in Finland. This shows that negative effects on political legitimacy could be more severe than just a temporary dissatisfaction with the political actors responsible for the decision.

Nevertheless, the outcome may not be the most important factor for determining developments in political legitimacy. Our second major finding indicated that those who were dissatisfied with the way Parliament handled the initiative also experienced more negative developments in political legitimacy compared to those who were satisfied. Furthermore, this effect was more pervasive than the effect for outcome satisfaction. This implies that the procedures for handling non-binding initiatives may play a strong role in citizens’ perceptions of representative institutions, which is in line with previous findings emphasising the importance of procedural aspects and evaluations for judging political authorities.
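The core comparison behind both findings can be sketched simply. Below is a minimal sketch assuming a two-wave panel with hypothetical column names (trust measured on a 0–10 scale before and after the parliamentary decision, plus a grouping for process satisfaction); the article's actual models are more elaborate than this.

```python
import pandas as pd

# Hypothetical two-wave panel: trust in Parliament measured before (t1)
# and after (t2) the decision, plus each respondent's satisfaction with
# how Parliament handled the initiative.
panel = pd.DataFrame({
    "process_satisfied": [True, True, False, False, False],
    "trust_t1":          [6, 5, 7, 6, 5],   # wave 1, 0-10 scale
    "trust_t2":          [6, 6, 4, 3, 4],   # wave 2, 0-10 scale
})

# Development in political trust: within-person change between waves,
# averaged separately for satisfied and dissatisfied respondents.
panel["trust_change"] = panel["trust_t2"] - panel["trust_t1"]
print(panel.groupby("process_satisfied")["trust_change"].mean())
```

A more negative mean change among the dissatisfied group is the pattern the authors report, for both outcome satisfaction and, more pervasively, process satisfaction.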

We conclude that there is a beneficial impact on political legitimacy if crowdsourced citizens’ initiatives have broad appeal so they can be passed in Parliament. However, it is important to note that positive effects on political legitimacy do not hinge on Parliament approving citizens’ initiatives. If the MPs invest time and resources in the careful, transparent and publicly justified handling of initiatives, possible negative effects of rejecting initiatives can be diminished. Citizens and activists may accept an unfavourable decision if the procedure by which it was reached seems fair and just. Finally, the results give reason to be hopeful about the role of crowdsourcing in restoring political legitimacy, since a majority of our respondents felt that the possibility of crowdsourcing citizens’ initiatives clearly improved Finnish democracy.

While all hopes may not have been fulfilled so far, crowdsourcing legislation therefore still has potential to help rebuild political legitimacy.

Read the full article: Christensen, H., Karjalainen, M., and Nurminen, L., (2015) Does Crowdsourcing Legislation Increase Political Legitimacy? The Case of Avoin Ministeriö in Finland. Policy and Internet 7 (1) 25–45.


Henrik Serup Christensen is Academy Research Fellow at SAMFORSK, Åbo Akademi University.

Maija Karjalainen is a PhD Candidate at the Department of Political Science and Contemporary History in the University of Turku, Finland.

Laura Nurminen is a Doctoral Candidate at the Department of Political and Economic Studies at Helsinki University, Finland.

Do Finland’s digitally crowdsourced laws show a way to resolve democracy’s “legitimacy crisis”? https://ensr.oii.ox.ac.uk/do-finlands-digitally-crowdsourced-laws-show-a-way-to-resolve-democracys-legitimacy-crisis/ Mon, 16 Nov 2015 12:29:29 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3475 There is much discussion about a perceived “legitimacy crisis” in democracy. In his article The Rise of the Mediating Citizen: Time, Space, and Citizenship in the Crowdsourcing of Finnish Legislation, Taneli Heikka (University of Jyväskylä) discusses the digitally crowdsourced law for same-sex marriage that was passed in Finland in 2014, analysing how the campaign used new digital tools and created practices that affect democratic citizenship and power making.

Ed: There is much discussion about a perceived “legitimacy crisis” in democracy. For example, less than half of the Finnish electorate under 40 choose to vote. In your article you argue that Finland’s 2012 Citizens’ Initiative Act aimed to address this problem by allowing for the crowdsourcing of ideas for new legislation. How common is this idea? (And indeed, how successful?)

Taneli: The idea that digital participation could counter the “legitimacy crisis” is a fairly common one. Digital utopians have nurtured that idea from the early years of the internet, and have often been disappointed. A couple of things stand out in the Finnish experiment that make it worth a closer look.

First, the digital crowdsourcing system with strong digital identification is a reliable and potentially viral campaigning tool. Most civic initiative systems I have encountered rely on manual or otherwise cumbersome, and less reliable, signature collection methods.

Second, in the Finnish model, initiatives that break the threshold of 50,000 names must be treated by Parliament in the same way as an initiative from a group of MPs. This gives the initiative constitutional and political weight.

Ed: The Act led to the passage of Finland’s first equal marriage law in 2014. In this case, online platforms were created for collecting signatures as well as drafting legislation. An NGO created a well-used platform, but it subsequently had to shut it down because it couldn’t afford the electronic signature system. Crowds are great, but not a silver bullet if something as prosaic as authentication is impossible. Where should the balance lie between NGOs and centrally funded services, i.e. government?

Taneli: The crucial thing in the success of a civic initiative system is whether it gives the people real power. This question is decided by the legal framework and constitutional basis of the initiative system. So, governments have a very important role in this early stage – designing a law for truly effective citizen initiatives.

When a framework for power-making is in place, service providers will emerge. Should the providers be public, private or third sector entities? I think that is defined by local political culture and history.

In the United States, the civic technology field is heavily funded by philanthropic foundations. There is an urge to make these tools commercially viable, though no one seems to have figured out the business model. In Europe there’s less philanthropic money, and in my experience experiments are more often government funded.

Both models have their pros and cons, but I’d like to see the two continents learning more from each other. American digital civic activists tell me enviously that the radically empowering Finnish model with a government-run service for crowdsourcing for law would be impossible in the US. In Europe, civic technologists say they wish they had the big foundations that Americans have.

Ed: But realistically, how useful is the input of non-lawyers in (technical) legislation drafting? And is there a critical threshold of people necessary to draft legislation?

Taneli: I believe that input is valuable from anyone who cares to invest some time in learning an issue. That said, having lawyers in the campaign team really helps. Writing legislation is a special skill. It’s a pity that the co-creation features in Finland’s Open Ministry website were shut down due to a lack of funding. In that model, help from lawyers could have been made more accessible for all campaign teams.

In terms of numbers, I don’t think the size of the group is an issue either way. A small group of skilled and committed people can do a lot in the drafting phase.

Ed: But can the drafting process become rather burdensome for contributors, given professional legislators will likely heavily rework, or even scrap, the text?

Taneli: Professional legislators will most likely rework the draft, and that is exactly what they are supposed to do. Initiating an idea, working on a draft, and collecting support for it are just phases in a complex process that continues in the parliament after the threshold of 50,000 signatures is reached. A well-written draft will make the legislators’ job easier, but it won’t replace them.

Ed: Do you think there’s a danger that crowdsourcing legislation might just end up reflecting the societal concerns of the web-savvy – or of campaigning and lobbying groups?

Taneli: That’s certainly a risk, but so far there is little evidence of it happening. The only initiative passed so far in Finland – the Equal Marriage Act – was supported by the majority of Finns and by the majority of political parties, too. The initiative system was used to bypass a political gridlock. The handful of initiatives that have reached the 50,000 signatures threshold and entered parliamentary proceedings represent a healthy variety of issues in the fields of education, crime and punishment, and health care. Most initiatives seem to echo the viewpoint of the ‘ordinary people’ instead of lobbies or traditional political and business interest groups.

Ed: You state in your article that the real-time nature of digital crowdsourcing appeals to a generation that likes and dislikes quickly; a generation that inhabits “the space of flows”. Is this a potential source of instability or chaos? And how can this rapid turnover of attention be harnessed efficiently so as to usefully contribute to a stable and democratic society?

Taneli: The Citizens’ Initiative Act in Finland is one fairly successful model to look at in terms of balancing stability and disruptive change. It is a radical law in its potential to empower the individual and affect real power-making. But it is by no means a shortcut to ‘legislation by a digital mob’, or anything of that sort. While the digital campaigning phase can be an explosive expression of the power of the people in the ‘time and space of flows’, the elected representatives retain the final say. Passing a law is still a tedious process, and often for good reasons.

Ed: You also write about the emergence of the “mediating citizen” – what do you mean by this?

Taneli: The starting point for developing the idea of the mediating citizen is Lance Bennett’s AC/DC theory, i.e. the dichotomy of the actualising and the dutiful citizen. The dutiful citizen is the traditional form of democratic citizenship – it values voting, following the mass media, and political parties. The actualising citizen, on the other hand, finds voting and parties less appealing, and prefers more flexible and individualised forms of political action, such as ad hoc campaigns and the use of interactive technology.

I find these models accurate but was not able to place in this duality the emerging typologies of civic action I observed in the Finnish case. What we see is understanding and respect for parliamentary institutions and their power, but also strong faith in one’s skills and capability to improve the system in creative, technologically savvy ways. I used the concept of the mediating citizen to describe an actor who is able to move between the previous typologies, mediating between them. In the Finnish example, creative tools were developed to feed initiatives in the traditional power-making system of the parliament.

Ed: Do you think Finland’s Citizens’ Initiative Act is a model for other governments to follow when addressing concerns about “democratic legitimacy”?

Taneli: It is an interesting model to look at. But unfortunately the ‘legitimacy crisis’ is probably too complex a problem to be solved by a single participation tool. What I’d really like to see is a wave of experimentation, both on-line and off-line, as well as cross-border learning from each other. And is that not what happened when the representative model spread, too?

Read the full article: Heikka, T., (2015) The Rise of the Mediating Citizen: Time, Space, and Citizenship in the Crowdsourcing of Finnish Legislation. Policy and Internet 7 (3) 268–291.


Taneli Heikka is a journalist, author, entrepreneur, and PhD student based in Washington.

Taneli Heikka was talking to Blog Editor Pamina Smith.

Crowdsourcing ideas as an emerging form of multistakeholder participation in Internet governance https://ensr.oii.ox.ac.uk/crowdsourcing-ideas-as-an-emerging-form-of-multistakeholder-participation-in-internet-governance/ Wed, 21 Oct 2015 11:59:56 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3445 What are the linkages between multistakeholder governance and crowdsourcing? Both are new — trendy, if you will — approaches to governance premised on the potential of collective wisdom, bringing together diverse groups in policy-shaping processes. Their interlinkage has remained underexplored so far. Our article recently published in Policy and Internet sought to investigate this in the context of Internet governance, in order to assess the extent to which crowdsourcing represents an emerging opportunity of participation in global public policymaking.

We examined two recent Internet governance initiatives which incorporated crowdsourcing with mixed results: the first one, the ICANN Strategy Panel on Multistakeholder Innovation, received only limited support from the online community; the second, NETmundial, had a significant number of online inputs from global stakeholders who had the opportunity to engage using a platform for political participation specifically set up for the drafting of the outcome document. The study builds on these two cases to evaluate how crowdsourcing was used as a form of public consultation aimed at bringing the online voice of the “undefined many” (as opposed to the “elected few”) into Internet governance processes.

From the two cases, it emerged that the design of the consultation processes conducted via crowdsourcing platforms is key to overcoming barriers to participation. For instance, in the NETmundial process, the ability to submit comments and participate remotely via www.netmundial.br attracted inputs from all over the world from the preparatory phase of the meeting onwards. In addition, substantial public engagement was obtained from the local community in the drafting of the outcome document, through a platform for political participation — www.participa.br — that gathered comments in Portuguese. In contrast, the outreach efforts of the ICANN Strategy Panel on Multistakeholder Innovation remained limited; the crowdsourcing platform it used only gathered input (exclusively in English) from a small group of people, insufficient to attribute to online public input a significant role in the reform of ICANN’s multistakeholder processes.

Second, questions around how crowdsourcing should and could be used effectively to enhance the legitimacy of decision-making processes in Internet governance remain unanswered. A proper institutional setting that recognizes a role for online multistakeholder participation is yet to be defined; in its absence, the initiatives we examined present a set of procedural limitations. For instance, in the NETmundial case, the Executive Multistakeholder Committee, in charge of drafting an outcome document to be discussed during the meeting based on the analysis of online contributions, favoured more “mainstream” and “uncontroversial” contributions. Additionally, online deliberation mechanisms for different propositions put forward by a High-Level Multistakeholder Committee, which commented on the initial draft, were not in place.

With regard to ICANN, online consultations have been used on a regular basis since its creation in 1998. Its target audience is the “ICANN community,” a group of stakeholders who volunteer their time and expertise to improve policy processes within the organization. Despite these efforts, initiatives such as the 2000 global election for the new At-Large Directors have revealed difficulties in reaching as broad an audience as intended. Our study discusses some of the obstacles to the implementation of this ambitious initiative, including limited information and awareness about the At-Large elections, and low Internet access and use in most developing countries, particularly in Africa and Latin America.

Third, there is a need for clear rules regarding the way in which contributions are evaluated in crowdsourcing efforts. When the deliberating body (or committee) is free to disregard inputs without providing any motivation, it triggers concerns about the broader transnational governance framework in which we operate, as there is no election of those few who end up determining which parts of the contributions should be reflected in the outcome document. To avoid the agency problem arising from the lack of accountability over the incorporation of inputs, it is important that crowdsourcing attempts pay particular attention to designing a clear and comprehensive assessment process.

The “wisdom of the crowd” has traditionally been explored in developing the Internet, yet it remains a contested ground when it comes to its governance. In multistakeholder set-ups, the diversity of voices and the collection of ideas and input from as many actors as possible — via online means — represent a desideratum, rather than a reality. In our exploration of empowerment through online crowdsourcing for institutional reform, we identify three fundamental preconditions: first, the existence of sufficient community interest, able to leverage wide expertise beyond a purely technical discussion; second, the existence of procedures for the collection and screening of inputs, streamlining certain ideas considered for implementation; and third, commitment to institutionalizing the procedures, especially by clearly defining the rules according to which feedback is incorporated and circumvention is avoided.

Read the full paper: Radu, R., Zingales, N. and Calandro, E. (2015), Crowdsourcing Ideas as an Emerging Form of Multistakeholder Participation in Internet Governance. Policy & Internet, 7: 362–382. doi: 10.1002/poi3.99


Roxana Radu is a PhD candidate in International Relations at the Graduate Institute of International and Development Studies in Geneva and a fellow at the Center for Media, Data and Society, Central European University (Budapest). Her current research explores the negotiation of internet policy-making in global and regional frameworks.

Nicolo Zingales is an assistant professor at Tilburg law school, a senior member of the Tilburg Law and Economics Center (TILEC), and a research associate of the Tilburg Institute for Law, Technology and Society (TILT). He researches various aspects of Internet governance and regulation, including multistakeholder processes, data-driven innovation and the role of online intermediaries.

Enrico Calandro (PhD) is a senior research fellow at Research ICT Africa, an ICT policy think-tank based in Cape Town. His academic research focuses on accessibility and affordability of ICT, broadband policy, and internet governance issues from an African perspective.

Crowdsourcing for public policy and government https://ensr.oii.ox.ac.uk/crowdsourcing-for-public-policy-and-government/ Thu, 27 Aug 2015 11:28:51 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3339 If elections were invented today, they would probably be referred to as “crowdsourcing the government.” First coined in a 2006 issue of Wired magazine (Howe, 2006), the term crowdsourcing has come to be applied loosely to a wide variety of situations where ideas, opinions, labor or something else is “sourced” in from a potentially large group of people. Whilst most commonly applied in business contexts, there is an increasing amount of buzz around applying crowdsourcing techniques in government and policy contexts as well (Brabham, 2013).

Though there is nothing qualitatively new about involving more people in government and policy processes, digital technologies in principle make it possible to increase the quantity of such involvement dramatically, by lowering the costs of participation (Margetts et al., 2015) and making it possible to tap into people’s free time (Shirky, 2010). This difference in quantity is arguably great enough to obtain a quality of its own. We can thus be justified in using the term “crowdsourcing for public policy and government” to refer to new digitally enabled ways of involving people in any aspect of democratic politics and government, not replacing but rather augmenting more traditional participation routes such as elections and referendums.

In this editorial, we will briefly highlight some of the key emerging issues in research on crowdsourcing for public policy and government. Our entry point into the discussion is a collection of research papers first presented at the Internet, Politics & Policy 2014 (IPP2014) conference organized by the Oxford Internet Institute (University of Oxford) and the Policy & Internet journal. The theme of this very successful conference—our third since the founding of the journal—was “crowdsourcing for politics and policy.” Out of almost 80 papers presented at the conference in September last year, 14 of the best have now been published as peer-reviewed articles in this journal, including five in this issue. A further handful of papers from the conference focusing on labor issues will be published in the next issue, but we can already now take stock of all the articles focusing on government, politics, and policy.

The growing interest in crowdsourcing for government and public policy must be understood in the context of the contemporary malaise of politics, which is being felt across the democratic world, but most of all in Europe. The problems with democracy have a long history, from the declining powers of parliamentary bodies when compared to the executive, to declining turnouts in elections, declining participation in mass parties, and declining trust in democratic institutions and politicians. But these problems have gained a new salience in the last five years, as the ongoing financial crisis has contributed to the rise of a range of new populist forces all across Europe, and to a fragmentation of the center ground. Furthermore, the poor accuracy of pre-election polls in recent elections in Israel and the UK has generated considerable debate over the usefulness and accuracy of the traditional way of knowing what the public is thinking: the sample survey.

Many place hopes on technological and institutional innovations such as crowdsourcing to show a way out of the brewing crisis of democratic politics and political science. One of the key attractions of crowdsourcing techniques to governments and grass roots movements alike is the legitimacy such techniques are expected to be able to generate. For example, crowdsourcing techniques have been applied to enable citizens to verify the legality and correctness of government decisions and outcomes. A well-known application is to ask citizens to audit large volumes of data on government spending, to uncover any malfeasance but also to increase citizens’ trust in the government (Maguire, 2011).

Articles emerging from the IPP2014 conference analyze other interesting and comparable applications. In an article titled “Population as Auditor of an Election Process in Honduras: The Case of the VotoSocial Crowdsourcing Platform,” Carlos Arias, Jorge Garcia and Alejandro Corpeño (2015) describe the use of crowdsourcing for auditing election results. Dieter Zinnbauer (2015) discusses the potentials and pitfalls of the use of crowdsourcing for some other types of auditing purposes, in “Crowdsourced Corruption Reporting: What Petrified Forests, Street Music, Bath Towels, and the Taxman Can Tell Us About the Prospects for Its Future.”

Besides allowing citizens to verify the outcome of a process, crowdsourcing can also be used to lend an air of inclusiveness and transparency to a process itself. This process legitimacy can then indirectly legitimate the outcome of the process as well. For example, crowdsourcing-style open processes have been used to collect policy ideas, gather support for difficult policy decisions, and even generate detailed spending plans through participatory budgeting (Wampler & Avritzer, 2004). Articles emerging from our conference further advance this line of research. Roxana Radu, Nicolo Zingales and Enrico Calandro (2015) examine the use of crowdsourcing to lend process legitimacy to Internet governance, in an article titled “Crowdsourcing Ideas as an Emerging Form of Multistakeholder Participation in Internet Governance.” Graham Smith, Robert C. Richards Jr. and John Gastil (2015) write about “The Potential of Participedia as a Crowdsourcing Tool for Comparative Analysis of Democratic Innovations.”

An interesting cautionary tale is presented by Henrik Serup Christensen, Maija Karjalainen and Laura Nurminen (2015) in “Does Crowdsourcing Legislation Increase Political Legitimacy? The Case of Avoin Ministeriö in Finland.” They show how a citizen initiative process ended up decreasing government legitimacy, after the government failed to implement the outcome of an initiative process that was perceived as highly legitimate by its supporters. Taneli Heikka (2015) further examines the implications of citizen initiative processes to the state–citizen relationship in “The Rise of the Mediating Citizen: Time, Space and Citizenship in the Crowdsourcing of Finnish Legislation.”

In many of the contributions that touch on the legitimating effects of crowdsourcing, one can sense a third, latent theme. Besides allowing outcomes to be audited and processes to be potentially more inclusive, crowdsourcing can also increase the perceived legitimacy of a government or policy process by lending an air of innovation and technological progress to the endeavor and those involved in it. This is most explicitly stated by Simo Hosio, Jorge Goncalves, Vassilis Kostakos and Jukka Riekki (2015) in “Crowdsourcing Public Opinion Using Urban Pervasive Technologies: Lessons From Real-Life Experiments in Oulu.” They describe how local government officials collaborating with the research team to test a new public screen-based polling system “expressed that the PR value boosted their public perception as a modern organization.” That some government crowdsourcing initiatives are at least in part motivated by such “crowdwashing” is hardly surprising, but it encourages us to retain a critical approach and analyze actual outcomes instead of accepting dominant discourses about the nature and effects of crowdsourcing at face value.

For instance, we must continue to examine the actual size, composition, internal structures and motivations of the supposed “crowds” that make use of online platforms. Articles emerging from our conference that contributed towards this aim include “Event Prediction With Learning Algorithms—A Study of Events Surrounding the Egyptian Revolution of 2011 on the Basis of Micro Blog Data” by Benedikt Boecking, Margeret Hall and Jeff Schneider (2015) and “Cyber Hate Speech on Twitter: An Application of Machine Classification and Statistical Modeling for Policy and Decision Making” by Pete Burnap and Matthew L. Williams (2015). Anatoliy Gruzd and Ksenia Tsyganova won a best paper award at the IPP2014 conference for an article published in this journal as “Information Wars and Online Activism During the 2013/2014 Crisis in Ukraine: Examining the Social Structures of Pro- and Anti-Maidan Groups.” These articles challenge the notion that crowdsourcing contributors are simply sets of independent individuals neatly representative of a larger population, and instead highlight the clusters, networks, and power structures inherent within them. This has implications for the democratic legitimacy of some of the more naive crowdsourcing initiatives.

One of the most original articles to emerge out of IPP2014 turns the concept of crowdsourcing for public policy and government on its head. While most research has focused on crowdsourcing’s empowering effects (or lack thereof), Gregory Asmolov (2015) analyses crowdsourcing as a form of social control. In an article titled “Vertical Crowdsourcing in Russia: Balancing Governance of Crowds and State–Citizen Partnership in Emergency Situations,” Asmolov draws on empirical evidence and theorists such as Foucault to show how crowdsourcing platforms can be used to institutionalize volunteer resources in order to align them with state objectives and prevent independent collective action. An article by Jorge Goncalves, Yong Liu, Bin Xiao, Saad Chaudhry, Simo Hosio and Vassilis Kostakos (2015) provides a less nefarious example of strategic use of online platforms to further government objectives, under the title “Increasing the Reach of Government Social Media: A Case Study in Modeling Government–Citizen Interaction on Facebook.”

Articles emerging from the conference also include two review articles that provide useful overviews of the field from different perspectives. “A Systematic Review of Online Deliberation Research” by Dennis Friess and Christiane Eilders (2015) takes stock of the use of digital technologies as public spheres. “The Fundamentals of Policy Crowdsourcing” by John Prpić, Araz Taeihagh and James Melton (2015) situates a broad variety of crowdsourcing literature into the context of a public policy cycle framework.

It has been extremely satisfying to follow the progress of these papers from initial conference submissions to high-quality journal articles, and to see that the final product not only advances the state of the art, but also provides certain new and critical perspectives on crowdsourcing. These perspectives will no doubt provoke responses, and Policy & Internet continues to welcome high-quality submissions dealing with crowdsourcing for public policy, government, and beyond.

Read the full editorial: Vili Lehdonvirta and Jonathan Bright (2015) Crowdsourcing for Public Policy and Government. Editorial. Policy & Internet 7 (3) 263–267.

References

Arias, C.R., Garcia, J. and Corpeño, A. (2015) Population as Auditor of an Election Process in Honduras: The Case of the VotoSocial Crowdsourcing Platform. Policy & Internet 7 (2) 185–202.

Asmolov, G. (2015) Vertical Crowdsourcing in Russia: Balancing Governance of Crowds and State–Citizen Partnership in Emergency Situations. Policy & Internet 7 (3).

Brabham, D. C. (2013) Citizen E-Participation in Urban Governance: Crowdsourcing and Collaborative Creativity. IGI Global.

Boecking, B., Hall, M. and Schneider, J. (2015) Event Prediction With Learning Algorithms—A Study of Events Surrounding the Egyptian Revolution of 2011 on the Basis of Micro Blog Data. Policy & Internet 7 (2) 159–184.

Burnap, P. and Williams, M.L. (2015) Cyber Hate Speech on Twitter: An Application of Machine Classification and Statistical Modeling for Policy and Decision Making. Policy & Internet 7 (2) 223–242.

Christensen, H.S., Karjalainen, M. and Nurminen, L. (2015) Does Crowdsourcing Legislation Increase Political Legitimacy? The Case of Avoin Ministeriö in Finland. Policy & Internet 7 (1) 25–45.

Friess, D. and Eilders, C. (2015) A Systematic Review of Online Deliberation Research. Policy & Internet 7 (3).

Goncalves, J., Liu, Y., Xiao, B., Chaudhry, S., Hosio, S. and Kostakos, V. (2015) Increasing the Reach of Government Social Media: A Case Study in Modeling Government–Citizen Interaction on Facebook. Policy & Internet 7 (1) 80–102.

Gruzd, A. and Tsyganova, K. (2015) Information Wars and Online Activism During the 2013/2014 Crisis in Ukraine: Examining the Social Structures of Pro- and Anti-Maidan Groups. Policy & Internet 7 (2) 121–158.

Heikka, T. (2015) The Rise of the Mediating Citizen: Time, Space and Citizenship in the Crowdsourcing of Finnish Legislation. Policy & Internet 7 (3).

Hosio, S., Goncalves, J., Kostakos, V. and Riekki, J. (2015) Crowdsourcing Public Opinion Using Urban Pervasive Technologies: Lessons From Real-Life Experiments in Oulu. Policy & Internet 7 (2) 203–222.

Howe, J. (2006) The Rise of Crowdsourcing. Wired 14 (6).

Maguire, S. (2011) Can Data Deliver Better Government? Political Quarterly 82 (4) 522–525.

Margetts, H., John, P., Hale, S. and Yasseri, T. (2015) Political Turbulence: How Social Media Shape Collective Action. Princeton University Press.

Prpić, J., Taeihagh, A. and Melton, J. (2015) The Fundamentals of Policy Crowdsourcing. Policy & Internet 7 (3).

Radu, R., Zingales, N. and Calandro, E. (2015) Crowdsourcing Ideas as an Emerging Form of Multistakeholder Participation in Internet Governance. Policy & Internet 7 (3).

Shirky, C. (2010) Cognitive Surplus: How Technology Makes Consumers into Collaborators. Penguin Publishing Group.

Smith, G., Richards R.C. Jr. and Gastil, J. (2015) The Potential of Participedia as a Crowdsourcing Tool for Comparative Analysis of Democratic Innovations. Policy & Internet 7 (2) 243–262.

Wampler, B., & Avritzer, L. (2004). Participatory publics: civil society and new institutions in democratic Brazil. Comparative Politics, 36(3), 291–312.

Zinnbauer, D. (2015) Crowdsourced Corruption Reporting: What Petrified Forests, Street Music, Bath Towels, and the Taxman Can Tell Us About the Prospects for Its Future. Policy & Internet 7 (1) 1–24.

Political polarization on social media: do birds of a feather flock together on Twitter? https://ensr.oii.ox.ac.uk/political-polarization-on-social-media-do-birds-of-a-feather-flock-together-on-twitter/ Tue, 05 May 2015 09:53:58 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3254 Twitter has exploded in recent years, now boasting half a billion registered users. Like blogs and the world’s largest social networking platform, Facebook, Twitter has actively been used for political discourse during the past few elections in the US, Canada, and elsewhere but it differs from them in a number of significant ways. Twitter’s connections tend to be less about strong social relationships (such as those between close friends or family members), and more about connecting with people for the purposes of commenting and information sharing. Twitter also provides a steady torrent of updates and resources from individuals, celebrities, media outlets, and any other organization seeking to inform the world as to its views and actions.

This may well make Twitter particularly well suited to political debate and activity. Yet important questions emerge in terms of the patterns of conduct and engagement. Chief among them: are users mainly seeking to reinforce their own viewpoints and link with likeminded persons, or is there a basis for wider and more thoughtful exposure to a variety of perspectives that may improve the collective intelligence of the citizenry as a result?

Conflict and Polarization

Political polarization often occurs in a so-called ‘echo chamber’ environment, in which individuals are exposed only to information and communities that support their own viewpoints, while ignoring opposing perspectives and insights. In such isolating and self-reinforcing conditions, ideas can become more engrained and extreme due to lack of contact with contradictory views and the exchanges that could ensue as a result.

On the web, political polarization has been found among political blogs, for instance. American researchers have found that liberal and conservative bloggers in the US tend to link to other bloggers who share their political ideology. For Kingwell, a prominent Canadian philosopher, the resulting dynamic is one that can be characterized by a decline in civility and a lessening ability for political compromise to take hold. He laments the emergence of a ‘shout doctrine’ that corrodes the civic and political culture, in the sense that divisions are accentuated and compromise becomes more elusive.

Such a dynamic is not the result of social media alone – rather, it reflects for some the impacts of the Internet generally, and the specific manner in which social media can lend itself to broadcasting and sensationalism rather than reasoned debate and exchange. Traditional media and journalistic organizations have thus come under growing pressure to respond in kind, driven less by a patient and persistent presentation of all sides of an issue and more by near-instantaneous reporting online. In a manner akin to Kingwell’s view, one prominent television news journalist in the US, Ted Koppel, has lamented this new media environment as a danger to the republic.

Nonetheless, the research is far from conclusive as to whether the Internet increases political polarization. Some studies have found that among casual acquaintances (such as those that can typically be observed on Twitter), it is common to observe connections across ideological boundaries. In one such study, funded by the Pew Internet and American Life Project and the National Science Foundation, findings suggest that people who often visit websites that support their ideological orientation also visit websites that support divergent political views. As a result, greater sensitivity and empathy for alternative viewpoints could potentially ensue, improving the likelihood of political compromise – even on a modest scale that would otherwise not have been achievable without this heightened awareness and debate.

Early Evidence from Canada

The 2011 federal election in Canada was dubbed by some observers in the media as the country’s first ‘social media election’ – as platforms such as Facebook and Twitter became prominent sources of information for growing segments of the citizenry, and ever more strategic tools for political parties in terms of fundraising, messaging, and mobilizing voters. In examining Twitter traffic, our own intention was to ascertain the extent to which polarization or cross-pollination was occurring across the portion of the electorate making use of this micro-blogging platform.

We gathered nearly 6000 tweets pertaining to the federal election made by just under 1500 people during a three-day period in the week preceding election day (this time period was chosen because it was late enough in the campaign for people to have an informed opinion, but still early enough for them to be persuaded as to how they should vote). Once the tweets were retrieved, we used social network analysis and content analysis to analyze patterns of exchange and messaging content in depth.
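
To make the network side of this analysis concrete, the following is a minimal sketch, not the authors’ actual pipeline: the users, edges, and party labels are invented for illustration. It builds a directed graph of reply and mention interactions between users labelled by party support, and computes the share of interactions that cross party lines:

```python
# Minimal sketch: measuring cross-ideological interaction on Twitter.
# The users, edges and party labels below are invented for illustration.
import networkx as nx

# Each edge is one tweet in which the first user replies to or mentions
# the second; each user carries an (inferred) party affiliation.
edges = [("ann", "ben"), ("ben", "cal"), ("cal", "ann"),
         ("dee", "ann"), ("ben", "dee"), ("eve", "cal")]
party = {"ann": "NDP", "ben": "LPC", "cal": "NDP",
         "dee": "CPC", "eve": "LPC"}

G = nx.DiGraph()
G.add_edges_from(edges)

cross = sum(1 for u, v in G.edges() if party[u] != party[v])
print(f"Cross-ideological share: {cross / G.number_of_edges():.0%}")
```

The same labelled graph could then feed the kind of content analysis described here, for example by tagging each cross-party edge as agreeable or hostile.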

We found that overall people do tend to cluster around shared political views on Twitter. Supporters of each of the four major political parties identified in the study were more likely to tweet to other supporters of the same affiliation (this was particularly true of the ruling Conservatives, the most inwardly networked of the four major political parties). Nevertheless, in a significant number of cases (36% of all interactions) we also observed a cross-ideological discourse, especially among supporters of the two most prominent left-of-centre parties, the New Democratic Party (NDP) and the Liberal Party of Canada (LPC). The cross-ideological interactions among supporters of left-leaning parties tended to be agreeable in nature, but often at the expense of the party in power, the Conservative Party of Canada (CPC). Members from the NDP and Liberal formations were also more likely to share general information and updates about the election, as well as to debate various issues around their party platforms with each other.

By contrast, interactions between parties that are ideologically distant tended to carry a tone of conflict: nearly 40% of tweets between left-leaning parties and the Conservatives were hostile. Such negative interactions between supporters of different parties have been shown to reduce enthusiasm about political campaigns in general, potentially widening the cleavage between highly engaged partisans and less affiliated citizens who may view such forms of aggressive and divisive politics as distasteful.

For Twitter sceptics, one concern is that the short length of Twitter messages does not allow for meaningful and in-depth discussions around complex political issues. While it is certainly true that expression within 140 characters is limited, one third of tweets between supporters of different parties included links to external sources such as news stories, blog posts, or YouTube videos. Such indirect sourcing can thereby constitute a means of expanding dialogue and debate.

Accordingly, although it is common to view Twitter as largely a platform for self-expression via short tweets, there may be a wider collective dimension for both users and the population at large, as a steady stream of individual viewpoints and referenced sources drives learning and additional exchange. If these exchanges happen across partisan boundaries, they can contribute to greater collective awareness and learning for the citizenry at large.

As the next federal election approaches in 2015 – with younger voters gravitating online, especially via mobile devices, and with traditional polling increasingly under siege as less reliable than in the past – all major parties will undoubtedly devote more energy and resources to social media strategies including, perhaps most prominently, an effective usage of Twitter.

Partisan Politics versus Politics 2.0

In a still-nascent era likely to be shaped by the rise of social media and a more participative Internet on the one hand, and the explosion of ‘big data’ on the other hand, the prominence of Twitter in shaping political discourse seems destined to heighten. Our preliminary analysis suggests an important cleavage between traditional political processes and parties – and wider dynamics of political learning and exchange across a changing society that is more fluid in its political values and affiliations.

Within existing democratic structures, Twitter is viewed by political parties as primarily a platform for messaging and branding, thereby mobilizing members with shared viewpoints and attacking opposing interests. Our own analysis of Canadian electoral tweets both amongst partisans and across party lines underscores this point. The nexus between partisan operatives and new media formations will prove to be an increasingly strategic dimension to campaigning going forward.

More broadly, however, Twitter is a source of information, expression, and mobilization across a myriad of actors and formations that may not align well with traditional partisan organizations and identities. Social movements arising during the Arab Spring, amongst Mexican youth during that country’s most recent federal elections, and most recently in Ukraine are cases in point. Across these wider societal dimensions – especially consequential in newly emerging democracies – the tremendous potential of platforms such as Twitter may well lie in facilitating new and much more open forms of democratic engagement that challenge our traditional constructs.

In sum, we are witnessing the inception of new forms of what can be dubbed ‘Politics 2.0’, a movement of both opportunities and challenges likely to play out differently across democracies at various stages of socio-economic, political, and digital development. Whether Twitter and other similar social media platforms enable inclusive and expansionary learning, or instead engrain divisive polarized exchange, has yet to be determined. What is clear, however, is that on Twitter, in some instances, birds of a feather do flock together, as they do on political blogs. But in other instances, Twitter can play an important role in fostering cross-party communication in the online political arena.

Read the full article: Gruzd, A., and Roy, J. (2014) Investigating Political Polarization on Twitter: A Canadian Perspective. Policy and Internet 6 (1) 28-48.

Also read: Gruzd, A. and Tsyganova, K. Information wars and online activism during the 2013/2014 crisis in Ukraine: Examining the social structures of Pro- and Anti-Maidan groups. Policy and Internet. Early View April 2015: DOI: 10.1002/poi3.91


Anatoliy Gruzd is Associate Professor in the Ted Rogers School of Management and Director of the Social Media Lab at Ryerson University, Canada. Jeffrey Roy is Professor in the School of Public Administration at Dalhousie University’s Faculty of Management. His most recent book was published in 2013 by Springer: From Machinery to Mobility: Government and Democracy in a Participative Age.

Tracing our every move: Big data and multi-method research https://ensr.oii.ox.ac.uk/tracing-our-every-move-big-data-and-multi-method-research/ Thu, 30 Apr 2015 09:32:55 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3210
There is a lot of excitement about ‘big data’, but the potential for innovative work on social and cultural topics far outstrips current data collection and analysis techniques. Image by IBM Deutschland.
Using anything digital always creates a trace. The more digital ‘things’ we interact with, from our smart phones to our programmable coffee pots, the more traces we create. When collected together, these traces become big data. These collections of traces can become so large that they are difficult to store, access and analyze with today’s hardware and software. But as a social scientist I’m interested in how this kind of information might illuminate something new about societies, communities, and how we interact with one another, rather than in the engineering challenges.

Social scientists are just beginning to grapple with the technical, ethical, and methodological challenges that stand in the way of this promised enlightenment. Most of us are not trained to write database queries or regular expressions, or even to communicate effectively with those who are trained. Ethical questions arise with informed consent when new analytics are created. Even a data scientist could not know the full implications of consenting to data collection that may be analyzed with currently unknown techniques. Furthermore, social scientists tend to specialize in a particular type of data and analysis, surveys or experiments and inferential statistics, interviews and discourse analysis, participant observation and ethnomethodology, and so on. Collaborating across these lines is often difficult, particularly between quantitative and qualitative approaches. Researchers in these areas tend to ask different questions and accept different kinds of answers as valid.

Yet trace data does not fit into the quantitative/qualitative binary. The trace of a tweet includes textual information, often with links or images, and metadata about who sent it, when, and sometimes where they were. The traces of web browsing are also largely textual, with some audio/visual elements. The quantity of these textual traces often necessitates some kind of initial quantitative filtering, but it doesn’t determine the questions or approach.
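
As a rough illustration of what that initial quantitative filtering might look like in practice, here is a small sketch; the file layout, column names, and keyword patterns are all invented for illustration rather than drawn from any particular study:

```python
# Sketch of an initial quantitative filter over textual trace data:
# keep only browsing records whose URL matches election-related terms.
# The CSV layout (timestamp, url) and the patterns are invented.
import csv
import re

ELECTION_TERMS = re.compile(r"election|candidate|ballot|polling", re.I)

def election_traces(path: str):
    """Yield (timestamp, url) pairs for election-related page visits."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if ELECTION_TERMS.search(row["url"]):
                yield row["timestamp"], row["url"]

# Example: count one participant's election-related visits.
# print(sum(1 for _ in election_traces("participant_42_browsing.csv")))
```

The point is that such filtering reduces the volume of traces to something tractable without committing the researcher to a quantitative or qualitative mode of analysis.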

The challenges are important to understand and address because the promise of new insight into social life is real. Large-scale patterns become possible to detect: for example, according to one study of mobile phone location data, one’s future location is 93% predictable (Song, Qu, Blumm & Barabási, 2010), despite great variation in individual patterns. This new finding opens up further possibilities for comparison and for understanding the context of these patterns. Are locations more or less predictable among people with different socio-economic circumstances? What are the key differences between the most and least predictable?
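
For readers curious where a single number like “93% predictable” comes from: as I understand the approach in Song et al., it rests on estimating the entropy rate of each person’s location sequence and then applying Fano’s inequality, which caps the accuracy of any predictor. The sketch below solves that bound numerically; the entropy rate and location count are illustrative inputs, not the paper’s measured values:

```python
# Hedged sketch of the entropy-based predictability bound in Song et al.
# (2010), as I understand it. Given an entropy rate S (bits per step) of
# a location sequence over N distinct places, Fano's inequality caps any
# predictor's accuracy at Pi_max, the solution of
#   S = H(Pi) + (1 - Pi) * log2(N - 1),  with H the binary entropy.
# The inputs below are illustrative, not the paper's measurements.
import math

def binary_entropy(p: float) -> float:
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def max_predictability(entropy_rate: float, n_locations: int) -> float:
    """Solve S = H(pi) + (1 - pi) * log2(N - 1) for pi by bisection."""
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(100):
        mid = (lo + hi) / 2
        rhs = binary_entropy(mid) + (1 - mid) * math.log2(n_locations - 1)
        if rhs > entropy_rate:
            lo = mid  # bound still above S, so the solution lies to the right
        else:
            hi = mid
    return (lo + hi) / 2

# A fairly regular person: low entropy rate over ~46 visited locations.
print(round(max_predictability(entropy_rate=0.8, n_locations=46), 2))  # ~0.92
```

On the entropy rates Song et al. actually measured, this style of bound averages out at roughly 93% across their population.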

Computational social science is often associated with large-scale studies of anonymized users such as the phone location study mentioned above, or participation traces of those who contribute to online discussions. Studies that focus on limited information about a large number of people are only one type, which I call horizontal trace data. Other studies that work in collaboration with informed participants can add context and depth by asking for multiple forms of trace data and involving participants in interpreting them — what I call the vertical trace data approach.

In my doctoral dissertation I took the vertical approach to examining political information gathering during an election, collecting participants’ web browsing data with their informed consent and interviewing them in person about the context (Menchen-Trevino 2012). I found that access to websites with political information was associated with self-reported political interest, but access to election-specific pages was not. The most active election-specific browsing came from those who were undecided on election day, while many of those with high political interest had already decided whom to vote for before the campaign began. This is just one example of how digging further into such data can reveal that what is true for larger categories (political information in general) may not hold, and can in fact be misleading, for smaller domains (election-specific browsing). Vertical trace data collection is difficult, but it should be an important component of the project of computational social science.
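
As a toy illustration of this vertical approach (the participants, counts, and column names are invented), one can join per-participant browsing counts, gathered with consent, to a survey self-report and then inspect the two associations separately:

```python
# Toy sketch of a vertical trace data analysis: join consented browsing
# counts to survey self-reports and compare two associations. All values
# and column names are invented for illustration.
import pandas as pd

traces = pd.DataFrame({
    "participant": ["p1", "p2", "p3", "p4", "p5"],
    "political_visits": [40, 5, 22, 60, 12],  # political pages overall
    "election_visits": [2, 9, 1, 3, 8],       # election-specific pages
})
survey = pd.DataFrame({
    "participant": ["p1", "p2", "p3", "p4", "p5"],
    "political_interest": [5, 2, 4, 5, 3],    # self-report, 1-5 scale
})

merged = traces.merge(survey, on="participant")
print(merged[["political_visits", "election_visits"]]
      .corrwith(merged["political_interest"]))
```

In this invented data, as in the dissertation’s finding, overall political browsing tracks self-reported interest while election-specific browsing does not; the interviews are what allow a researcher to make sense of that divergence.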

Read the full article: Menchen-Trevino, E. (2013) Collecting vertical trace data: Big possibilities and big challenges for multi-method research. Policy and Internet 5 (3) 328-339.

References

Menchen-Trevino, E. (2013) Collecting vertical trace data: Big possibilities and big challenges for multi-method research. Policy and Internet 5 (3) 328–339.

Menchen-Trevino, E. (2012) Partisans and Dropouts?: News Filtering in the Contemporary Media Environment. Northwestern University, Evanston, Illinois.

Song, C., Qu, Z., Blumm, N., & Barabási, A.-L. (2010) Limits of Predictability in Human Mobility. Science 327 (5968) 1018–1021.


Erica Menchen-Trevino is an Assistant Professor at Erasmus University Rotterdam in the Media & Communication department. She researches and teaches on topics of political communication and new media, as well as research methods (quantitative, qualitative and mixed).

How do the mass media affect levels of trust in government? https://ensr.oii.ox.ac.uk/how-do-the-mass-media-affect-levels-of-trust-in-government/ Wed, 04 Mar 2015 16:33:45 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3157
The South Korean Government and the Seoul Metropolitan Government have gone to great lengths to enhance their openness, using many different ICTs. Seoul at night by jonasginter.
Ed: You examine the influence of citizens’ use of online mass media on levels of trust in government. In brief, what did you find?

Greg: As I explain in the article, there is a common belief that mass media outlets, and especially online mass media outlets, often portray government in a negative light in an effort to pique the interest of readers. This tendency of media outlets to engage in ‘bureaucracy bashing’ is thought, in turn, to detract from the public’s support for their government. The basic assumption underpinning this relationship is that the more negative information about government there is, the more negative public opinion becomes. However, in my analyses, I found evidence of a positive indirect relationship between citizens’ use of online mass media outlets and their levels of trust in government. Interestingly, however, the more frequently citizens used online mass media outlets for information about their government, the weaker this association became. These findings challenge the conventional wisdom that greater exposure to mass media outlets will result in more negative perceptions of the public sector.

Ed: So you find that that the particular positive or negative spin of the actual message may not be as important as the individuals’ sense that they are aware of the activities of the public sector. That’s presumably good news — both for government, and for efforts to ‘open it up’?

Greg: Yes, I think it can be. However, a few important caveats apply. First, the positive relationship between online mass media use and perceptions of government tapers off as respondents make more frequent use of online mass media outlets. In the study, I interpreted this to mean that exposure to mass media had less of an influence upon those who were more aware of public affairs, and more of an influence upon those who were less aware of public affairs. There is therefore something of a diminishing-returns aspect to this relationship. Second, this study was not able to account for the valence (i.e., how positive or negative the information is) of the information respondents were exposed to when using online mass media. While some attempts were made to control for valence by adding different control variables, further research drawing upon experimental research designs would be useful in substantiating the relationship between the valence of information disseminated by mass media outlets and citizens’ perceptions of their government.

Ed: Do you think governments are aware of this relationship — ie that an indirect effect of being more open and present in the media, might be increased citizen trust — and that they are responding accordingly?

Greg: I think that there is a general idea that more communication is better than less communication. However, at the same time there is a lot of evidence to suggest that some of the more complex aspects of the relationship between openness and trust in government go unaccounted for in current attempts by public sector organizations to become more open and transparent. As a result, this tool that public organizations have at their disposal is not being used as effectively as it could be, and in some instances is being used in ways that are counterproductive, actually decreasing citizen trust in government. Therefore, in order for governments to translate greater openness into greater trust in government, more refined applications are necessary.

Ed: I know there are various initiatives in the UK — open government data / FoIs / departmental social media channels etc. — aimed at a general opening up of government processes. How open is the Korean government? Is a greater openness something they might adopt (or are adopting?) as part of a general aim to have a more informed and involved — and therefore hopefully more trusting — citizenry?

Greg: The South Korean Government and the Seoul Metropolitan Government have gone to great lengths to enhance their openness. Their strategy has made use of different ICTs, such as e-government websites, social media accounts, non-emergency call centers, and smart phone apps. As a result, many now say that attempts by the Korean Government to become more open are more advanced than in many other parts of the developed world. However, the persistent issue in South Korea, as elsewhere, is whether these attempts are having the intended impact. A lot of empirical research has found, for example, that various attempts at becoming more open by many governments around the world have fallen short of creating a more informed and involved citizenry.

Ed: Finally — is there much empirical work or data in this area?

Greg: While there is a lot of excellent empirical research from the field of political science that has examined how mass media use relates to citizens’ perceptions of politicians, political preferences, or their levels of political knowledge, this topic has received almost no attention at all in public management/administration. This lack of discussion is surprising, given mass media has long served as a key means of enhancing the transparency and accountability of public organizations.

Read the full article: Porumbescu, G. (2013) Assessing the Link Between Online Mass Media and Trust in Government: Evidence From Seoul, South Korea. Policy & Internet 5 (4) 418-443.


Greg Porumbescu was talking to blog editor David Sutcliffe.

Gregory Porumbescu is an Assistant Professor at the Northern Illinois University Department of Public Administration. His research interests primarily relate to public sector applications of information and communications technology, transparency and accountability, and citizens’ perceptions of public service provision.

Don’t knock clickivism: it represents the political participation aspirations of the modern citizen https://ensr.oii.ox.ac.uk/dont-knock-clickivism-it-represents-the-political-participation-aspirations-of-the-modern-citizen/ Sun, 01 Mar 2015 10:44:49 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3140
Following a furious public backlash in 2011, the UK government abandoned plans to sell off 258,000 hectares of state-owned woodland. The public forest campaign by 38 Degrees gathered over half a million signatures.
How do we define political participation? What does it mean to say an action is ‘political’? Is an action only ‘political’ if it takes place in the mainstream political arena; involving government, politicians or voting? Or is political participation something that we find in the most unassuming of places, in sports, home and work? This question, ‘what is politics?’, is one that political scientists seem to have a lot of trouble dealing with, and with good reason. If we use an arena definition of politics, then we marginalise the politics of the everyday; the forms of participation and expression that develop between the cracks, through need and ingenuity. However, if we broaden our approach so as to adopt what is usually termed a process definition, then everything can become political. The problem here is that saying that everything is political is akin to saying nothing is political, and that doesn’t help anyone.

Over the years, this debate has plodded steadily along, with scholars on both ends of the spectrum fighting furiously to establish a working understanding. Then, the Internet came along and drew up new battle lines. The Internet is at its best when it provides a home for the disenfranchised, an environment where like-minded individuals can wipe free the dust of societal disassociation and connect and share content. However, the Internet brought with it a shift in power, particularly in how individuals conceptualised society and their role within it. The Internet, in addition to this role, provided a plethora of new and customisable modes of political participation. From the outset, a lot of these new forms of engagement were extensions of existing forms, broadening the everyday citizen’s participatory repertoire. There was a move from voting to e-voting, petitions to e-petitions, face-to-face communities to online communities; the Internet took what was already there and streamlined it, removing those pesky elements of time, space and identity.

Yet, as the Internet continues to develop, and we move into the ultra-heightened communicative landscape of the social web, new and unique forms of political participation take root, drawing upon those customisable environments and organic cyber migrations. The most prominent of these is clicktivism, sometimes also, unfairly, referred to as slacktivism. Clicktivism takes the fundamental features of browsing culture and turns them into a means of political expression. Quite simply, clicktivism refers to the simplification of online participatory processes: one-click online petitions, content sharing, social buttons (e.g. Facebook’s ‘Like’ button) etc.

For the most part, clicktivism is seen in derogatory terms, with the idea that the streamlining of online processes has created a societal disposition towards feel-good, ‘easy’ activism. From this perspective, clicktivism is a lazy or overly-convenient alternative to the effort and legitimacy of traditional engagement. Here, individuals engaging in clicktivism may derive some sense of moral gratification from their actions, but clicktivism’s capacity to incite genuine political change is severely limited. Some would go so far as to say that clicktivism has a negative impact on democratic systems, as it undermines an individual’s desire and need to participate in traditional forms of engagement; those established modes which mainstream political scholars understand as the backbone of a healthy, functioning democracy.

This idea that clicktivism isn’t ‘legitimate’ activism is fuelled by a general lack of understanding about what clicktivism actually involves. As a recent development in observed political action, clicktivism has received its fair share of attention in the political participation literature. However, for the most part, this literature has done a poor job of actually defining clicktivism. As such, clicktivism is not so much a contested notion as an ill-defined one. The extant work continues to describe clicktivism in broad terms, failing to effectively establish what it does, and does not, involve. Indeed, as highlighted, the mainstream political participation literature saw clicktivism not as a specific form of online action, but rather as a limited and unimportant mode of online engagement.

However, to disregard emerging forms of engagement such as clicktivism because they are at odds with long-held notions of what constitutes meaningful ‘political’ engagement is a misguided and dangerous road to travel. Here, it is important that we acknowledge that a political act, even if it requires limited effort, has relevance for the individual, and, as such, carries worth. And this is where we see clicktivism challenging these traditional notions of political participation. To date, we have looked at clicktivism through an outdated lens; an approach rooted in traditional notions of democracy. However, the Internet has fundamentally changed how people understand politics, and, consequently, it is forcing us to broaden our understanding of the ‘political’, and of what constitutes political participation.

The Internet, in no small part, has created a more reflexive political citizen, one who has been given the tools to express dissatisfaction throughout all facets of their life, not just those tied to the political arena. Collective action underpinned by a developed ideology has been replaced by project-oriented identities and connective action. Here, an individual’s desire to engage does not derive from the collective action frames of political parties, but rather from the individual’s self-evaluation of a project’s worth and their personal action frames.

Simply put, people now pick and choose what projects they participate in and feel little generalized commitment to continued involvement. And it is clicktivism which is leading the vanguard here. Clicktivism, as an impulsive, non-committed online political gesture, which can be easily replicated and that does not require any specialized knowledge, is shaped by, and reinforces, this change. It affords the project-oriented individual an efficient means of political participation, without the hassles involved with traditional engagement.

This is not to say, however, that clicktivism serves the same functions as traditional forms. Indeed, much more work is needed to understand the impact and effect that clicktivist techniques can have on social movements and political issues. However, and this is the most important point, clicktivism is forcing us to reconsider what we define as political participation. It does not overtly engage with the political arena, but provides avenues through which to do so. It does not incite genuine political change, but it makes people feel as if they are contributing. It does not politicize issues, but it fuels discursive practices. It may not function in the same way as traditional forms of engagement, but it represents the political participation aspirations of the modern citizen. Clicktivism has been bridging the dualism between the traditional and contemporary forms of political participation, and in its place establishing a participatory duality.

Clicktivism, and similar contemporary forms of engagement, are challenging how we understand political participation, and to ignore them because of what they don’t embody, rather than what they do, is to move forward with eyes closed.

Read the full article: Halupka, M. (2014) Clicktivism: A Systematic Heuristic. Policy and Internet 6 (2) 115-132.


Max Halupka is a PhD candidate at the ANZOG Institute for Governance, University of Canberra. His research interests include youth political participation, e-activism, online engagement, hacktivism, and fluid participatory structures.

Does a market-approach to online privacy protection result in better protection for users? https://ensr.oii.ox.ac.uk/does-a-market-approach-to-online-privacy-protection-result-in-better-protection-for-users/ Wed, 25 Feb 2015 11:21:42 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3123 Ed: You examined the voluntary provision by commercial sites of information privacy protection and control under the self-regulatory policy of the U.S. Federal Trade Commission (FTC). In brief, what did you find?

Yong Jin: First, because we rely on the Internet to perform almost all types of transactions, how personal privacy is protected is perhaps one of the most important issues we face in this digital age. There are many important findings: the most significant one is that the more popular sites did not necessarily provide better privacy control features for users than sites that were randomly selected. This is surprising, because one might expect “the more popular, the better privacy protection” — a sort of marketplace magic that automatically solves the issue of personal privacy online. This was not the case at all: the popular sites, despite having more resources, did not provide better privacy protection. Of course, the Internet in general is a malleable medium. This means that commercial sites can design, modify, or easily manipulate user interfaces to maximize the ease with which users can protect their personal privacy. The fact that this is not really happening for commercial websites in the U.S. is not only alarming, but also suggests that commercial forces may not have a strong incentive to provide privacy protection.

Ed: Your sample included websites oriented toward young users and sensitive data relating to health and finance: what did you find for them?

Yong Jin: Because the sample size for these websites was limited, caution is needed in interpreting the results. But what is clear is that just because the websites deal with health or financial data, they did not seem to be better at providing more privacy protection. To me, this should raise enormous concerns from those who use the Internet for health information seeking or financial data. The finding should also inform and urge policymakers to ask whether the current non-intervention policy (regarding commercial websites in the U.S.) is effective, when no consideration is given for the different privacy needs in different commercial sectors.

Ed: How do your findings compare with the first investigation into these matters by the FTC in 1998?

Yong Jin: This is a very interesting question. In fact, at least as far as the findings from this study are concerned, it seems that no clear improvement has been made in almost two decades. Of course, the picture is somewhat complicated. On the one hand, we see (on the surface) that websites have a lot more interactive features. But this does not necessarily mean improvement, because when it comes to actually informing users of what features are available for their privacy control and protection, they still tend to perform poorly. Note that today’s privacy policies are longer and are likely to carry more pages and information, which makes it even more difficult for users to understand what options they do have. I think informing people about what they can actually do is harder, but is getting more important in today’s online environment.

Ed: Is this just another example of a US market-led vs European regulation-led approach to a particular problem? Or is the situation more complicated?

Yong Jin: The answer is yes and no. Yes, because a US market-led approach clearly presents no strong statutory ground to mandate privacy protection on commercial websites. But also no, because even in the EU there is no regulatory mandate for websites to have certain interface protections concerning how users should be informed about their personal data, and how they interact with websites to control its use. The difference lies more in the fundamental principle of the “opt-in” EU approach. Although the “opt-in” approach is stronger than the “opt-out” approach taken in the U.S., it does not require websites to have certain interface-design aspects that are optimized for users’ data control. In other words, to me, the reality of EU regulation (despite its robust policy approach) will not necessarily be rosier than in the U.S., because commercial websites in the EU context also operate under the same incentive of personal data collection and use. Ultimately, this is an empirical question that will require further studies. Interestingly, the next frontier of this debate will be on privacy in mobile platforms – and useful information concerning this can be found at the OII’s project to develop ethical privacy guidelines for mobile connectivity measurements.

Ed: Awareness of issues around personal data protection is pretty prominent in Europe — witness the recent European Court of Justice ruling about the ‘Right to Forget’ — how prominent is this awareness in the States? Who’s interested in / pushing / discussing these issues?

Yong Jin: The general public in the U.S. has an enormous concern for personal data privacy, since Edward Snowden’s 2013 revelations of extensive government surveillance activities. Yet my sense is that public awareness concerning data collection and surveillance by commercial companies has not yet reached the same level. Certainly, issues such as the “Right to Forget” are being discussed among only a small circle of scholars, website operators, journalists, and policymakers, and I see the general public mostly remains left out of this discussion. In fact, a number of U.S. scholars have recently begun to weigh the pros and cons of a “Right to Forget” in terms of the public’s right to know vs the individual’s right to privacy. Given the strong tradition of freedom of speech, however, I highly doubt that U.S. policymakers will have a serious interest in pushing a similar type of approach in the foreseeable future.

My own work on privacy awareness, digital literacy, and behavior online suggests that public interest and demand for strong legislation such as a “Right to Forget” is a long shot, especially in the context of commercial websites.

Ed: Given privacy policies are notoriously awful to deal with (and are therefore generally unread) — what is the solution? You say the situation doesn’t seem to have improved in almost two decades, and that some aspects — such as readability of policies — might actually have become worse: is this just ‘the way things are always going to be’, or are privacy policies something that realistically can and should be addressed across the board, not just for a few sites?

Yong Jin: A great question, and I see no easy answer! I actually pondered a similar question when I conducted this study. I wonder: “Are there any viable solutions for online privacy protection when commercial websites are so desperate to use personal data?” My short answer is No. And I do think the problem will persist if the current regulatory contours in the U.S. continue. This means that there is a need for appropriate policy intervention that is not entirely dependent on market-based solutions.

My longer answer would be that realistically, to solve the notoriously difficult privacy problems on the Internet, we will need multiple approaches — which means a combination of appropriate regulatory forces from all the entities involved: regulatory mandates (government), user awareness and literacy (public), commercial firms and websites (market), and interface design (technology). For instance, it is plausible that a certain level of readability could be required of the privacy policy statements of all websites targeting children or teenagers. Of course, this will only function alongside appropriate organizational behaviors, users’ awareness and interest in privacy, etc. In my article I put a particular emphasis on the role of the government (particularly in the U.S.), where the industry often ‘captures’ the regulatory agencies. The issue is quite complicated, because for privacy protection it is not just the FTC but also Congress that must act to empower the FTC in its jurisdiction. The apparent lack of improvement over the years since the FTC took over online privacy regulation in the mid 1990s reflects this gridlock in legislative dynamics — as much as it reflects the commercial imperative for personal data collection and use.

I made a similar argument for multiple approaches to solving privacy problems in my article Offline Status, Online Status: Reproduction of Social Categories in Personal Information Skill and Knowledge, and related, excellent discussions can be found in Information Privacy in Cyberspace Transactions (by Jerry Kang), and Exploring Identity and Identification in Cyberspace, by Oscar Gandy.

Read the full article: Park, Y.J. (2014) A Broken System of Self-Regulation of Privacy Online? Surveillance, Control, and Limits of User Features in U.S. Websites. Policy & Internet 6 (4) 360-376.


Yong Jin Park was taking to blog editor David Sutcliffe.

Yong Jin Park is an Associate Professor at the School of Communications, Howard University. His research interests center on social and policy implications of new technologies; current projects examine various dimensions of digital privacy.

Will digital innovation disintermediate banking — and can regulatory frameworks keep up? https://ensr.oii.ox.ac.uk/will-digital-innovation-disintermediate-banking-and-can-regulatory-frameworks-keep-up/ Thu, 19 Feb 2015 12:11:45 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3114
Many of Europe’s economies are hampered by a waning number of innovations, partially attributable to the European financial system’s aversion to funding innovative enterprises and initiatives. Image by MPD01605.
Innovation doesn’t just fall from the sky. It’s not distributed proportionately or randomly around the world or within countries, or found disproportionately where there is the least regulation, or in exact linear correlation with the percentage of GDP spent on R&D. Innovation arises in cities and countries and, perhaps most importantly of all, in the greatest proportion in ecosystems or clusters. Many of Europe’s economies are hampered by a waning number of innovations, partially attributable to the European financial system’s aversion to funding innovative enterprises and initiatives. Specifically, Europe’s innovation finance ecosystem lacks the necessary scale, plurality, and appetite for risk to drive investments in long-term initiatives aiming to produce a disruptive new technology. Such long-term investments are taking place more in the rising economies of Asia than in Europe.

While these problems could be addressed by new approaches and technologies for financing dynamism in Europe’s economies, financing of (potentially risky) innovation could also be held back by financial regulation that focuses on stability, avoiding forum shopping (i.e., looking for the most permissive regulatory environment), and preventing fraud, to the exclusion of other interests, particularly innovation and renewal. But the role of finance in enabling the development and implementation of new ideas is vital — an economy’s dynamism depends on innovative competitors challenging, and if successful, replacing complacent players in the markets.

However, newcomers obviously need capital to grow. As a reaction to the markets having priced risk too low before the financial crisis, risk is now being priced too high in Europe, starving the innovation efforts of private financing at a time when much public funding has suffered from austerity measures. Of course, complementary (non-bank) sources of finance can also help fund entrepreneurship, and without that petrol of money, the engine of the new technology economy will likely stall.

The Internet has made it possible to fund innovation in new ways like crowd funding — an innovation in finance itself — and there is no reason to think that financial institutions should be immune to disruptive innovation produced by new entrants that offer completely novel ways of saving, insuring, loaning, transferring and investing money. New approaches such as crowdfunding and other financial technology (aka “FinTech”) initiatives could provide depth and a plurality of perspectives, in order to foster innovation in financial services and in the European economy as a whole.

The time has come to integrate these financial technologies into the overall financial frameworks in a manner that does not neuter their creativity, or lower their potential to revitalize the economy. There are potential synergies with macro-prudential policies focused on mitigating systemic risk and ensuring the stability of financial systems. These platforms have great potential for cross-border lending and investment and could help to remedy the retreat of bank capital behind national borders since the financial crisis. It is time for a new perspective grounded in an “innovation-friendly” philosophy and regulatory approach to emerge.

Crowdfunding is a newcomer to the financial industry, and as such, actions (such as complex and burdensome regulatory frameworks or high levels of guaranteed compensation for losses) that could close it down or raise high barriers of entry should be avoided. Competition in the interests of the consumer and of entrepreneurs looking for funding should be encouraged. Regulators should be ready to step in if abuses do, or threaten to, arise while leaving space for new ideas around crowdfunding to gain traction rapidly, without being overburdened by regulatory requirements at an early stage.

The interests of both “financing innovation” and “innovation in the financial sector” also coincide in the FinTech entrepreneurial community. Schumpeter wrote in 1942: “[the] process of Creative Destruction is the essential fact about capitalism. It is what capitalism consists in and what every capitalist concern has got to live in.” Keeping with this theme of creative destruction, the financial sector is one seen by banking sector analysts and commentators as being particularly ripe for disruptive innovation, given its current profits and lax competition. Technology-driven disintermediation of many financial services is on the cards, for example, in financial advice, lending, investing, trading, virtual currencies and risk management.

The UK’s Financial Conduct Authority’s regulatory dialogues with FinTech developers, which provide legal clarity on the status of their new initiatives, are an example of good practice, as regulation in this highly monitored sector is potentially a serious barrier to entry and new innovation. The FCA also proactively addresses enabling innovation with Project Innovate, an initiative to assist both start-ups and established businesses in implementing innovative ideas in the financial services markets through an Incubator and Innovation Hub.

By its nature, FinTech is a sector that can benefit, and benefit from, the EU’s Digital Single Market, and make Europe a sectoral global leader in this field. In evaluating possible future FinTech regulation, we need to ensure an optimal regulatory framework and specific rules. The innovation principle I discuss in my article should be part of an approach ensuring not only that regulation is clear and proportional — so that innovators can easily comply — but also that we are ready, when justified, to adapt regulation to enable innovations. Furthermore, any regulatory approaches should be “future-proofed” and should not lock in today’s existing technologies, business models or processes.

Read the full article: Zilgalvis, P. (2014) The Need for an Innovation Principle in Regulatory Impact Assessment: The Case of Finance and Innovation in Europe. Policy and Internet 6 (4) 377–392.


Pēteris Zilgalvis, J.D. is a Senior Member of St Antony’s College, University of Oxford, and an Associate of its Political Economy of Financial Markets Programme. In 2013-14 he was a Senior EU Fellow at St Antony’s. He is also currently Head of Unit for eHealth and Well Being, DG CONNECT, European Commission.

Will China’s new national search engine, ChinaSo, fare better than “The Little Search Engine that Couldn’t”? https://ensr.oii.ox.ac.uk/will-chinas-new-national-search-engine-chinaso-fare-better-than-the-little-search-engine-that-couldnt/ Tue, 10 Feb 2015 10:55:45 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3084
State search engine ChinaSo launched in March 2014 following indifferent performance from the previous state-run search engine Jike. Its long-term impact on China’s search market and users remains unclear.

When Jike, the Chinese state-run search engine, launched in 2011, its efforts received a mixed response. The Chinese government pulled out all the stops to promote it, including placing Deng Yaping, one of China’s most successful athletes, at the helm. Jike strategically branded itself as friendly, high-tech, and patriotic to appeal to national pride, competition, and trust. It also signaled a serious attempt by a powerful authoritarian state to nationalize the Internet within its territory, and to extend its influence in the digital sphere. However, plagued by technological inferiority, management deficiencies, financial woes and user indifference, Jike failed in terms of user adoption, pointing to the limits of state influence in the marketplace.

Users and critics remain skeptical of state-run search engines. While some news outlets referred to Jike as “the little search engine that couldn’t,” China’s propaganda apparatus was busy at work rebranding, recalibrating, and reimagining its efforts. The result? The search engine formerly known as Jike has now morphed into a new enterprise known as “ChinaSo”. This transformation is not new — Jike originally launched in 2010 under the name Goso, rebranding itself as Jike a year later. The March 2014 unveiling of ChinaSo was the result of the merging of the two state-run search engines Jike and Panguso.

Only time will tell if this new (ad)venture will prove more fruitful. However, several things are worthy of note here. First, despite repeated trials, the Chinese state has not given up on its efforts to expand its digital toolbox and weave a ‘China Wide Web’. Rather, state media have pooled their resources to place their collective, strategic bets. The merging of Jike and Panguso into ChinaSo was backed by several state media giants, including People’s Daily, Xinhua News Agency, and China Central Television. Branded explicitly as “China Search: Authoritative National Search,” ChinaSo reinforces a sense of national identity. How does it perform? ChinaSo now ranks 225th in China and 2,139th globally (Alexa.com, 8 February 2015), up from Jike’s ranking of 376th in China and 3,174th globally, which we last recorded in May 2013. While ChinaSo’s rankings have improved over time, a low adoption rate continues to haunt the state search engine. Compared to China’s homegrown commercial search giant Baidu, which ranks first in China and fifth globally (Alexa.com, 8 February 2015), ChinaSo has a long way to go.

Second, in terms of design, ChinaSo has adopted a mixture of general and vertical search to increase its appeal to a wide range of potential users. Its general search, similar to Google’s and Baidu’s, allows users to query through a search box and receive results in a combination of text, image, and video formats, drawn from the index of information that ChinaSo archives, ranks, and presents to users. In addition, ChinaSo incorporates vertical search focusing on a wide range of categories such as transportation, investment, education and technology, health, food, tourism, shopping, real estate and cars, and sports and entertainment. Interestingly, ChinaSo also guides searchers by highlighting “top search topics today” as users place their cursor in the search box. Currently, various “anti-corruption” entries appear prominently, corresponding to the central government’s high-profile anti-corruption campaigns. Given the opaqueness of search engine operation, it is unclear whether the “top searches” are ChinaSo’s editorial choices or search terms based on user queries. We suspect ChinaSo strategically prioritizes this list to direct user attention.

Third, besides improved functionality that enhances ChinaSo’s priming and agenda-setting abilities, it continues to practice (as did Jike) sophisticated information filtering and presentation. For instance, a search for “New York Times” returns not a single result directing users to the paper’s website — which is banned in China. Instead, on the first page of results, ChinaSo directs users to several Chinese online encyclopedia entries for the New York Times, stock information for the NYT, and sanctioned news stories relating to the NYT that have appeared in such official media outlets as People’s Net, China Daily, and Global Times. All information appears in Chinese, which has acted as a natural barrier to the average Chinese user who seeks information outside China. Although Chinese-language versions of foreign news organizations such as NYT Chinese, WSJ Chinese, and BBC Chinese exist, they are invariably blocked in China.

Last, ChinaSo’s long-term impact on China’s search market and users remains unclear. While many believe ChinaSo to be a “waste of taxpayer money” due to its persistent inability to carve out market share against commercial competitors, others are willing to give it a shot, especially with regard to queries for official policies and statements, remarking that “[there] is nothing wrong with creating a state-run search engine service” and that ChinaSo’s results are better than those of its commercial counterparts. It seems that users either do not care about, or remain largely unaware of, the surveillance capacities of search engines. Although recent scholarship (for instance here and here) has started to probe the Chinese notion and practices of privacy in social networking sites, no research has been conducted with regard to search-related privacy concerns in the Chinese context.

The idea of a state-sponsored search engine is not new, however. As early as 2005, a few European countries proposed a Euro-centric search engine, “Project Quaero”, to compete against Google and Yahoo! in what was perceived to be the “threat of Anglo-Saxon cultural imperialism.” In the post-Snowden world, not only are powerful authoritarian countries—China, Russia, Iran, and Turkey—interested in building their own national search engines, but democratic countries like Germany and Brazil have also condemned the U.S. government and vowed to create their own “national Internets.”

The changing international political landscape compels researchers, policy makers and the public to re-evaluate previous assumptions of internationalism, and instead confront the reality of the Internet as an extension of state power and national identity. In the near future, the “return of the state”, reflected in various trends to re-nationalize communication networks, will likely go hand in hand with social, economic and cultural changes that cross national and international borders. ChinaSo is part and parcel of the “geopolitical turn” in policy and Internet studies that should command more scholarly and public attention.

Read the full article: Jiang, M. & Okamoto, K. (2014) National identity, ideological apparatus, or panopticon? A case study of the Chinese national search engine Jike. Policy and Internet 6 (1) 89-107.

Min Jiang is an Associate Professor in the Department of Communication Studies, UNC Charlotte. Kristen Okamoto is a Ph.D. student in the School of Communication Studies, Ohio University.

]]>
Why does the Open Government Data agenda face such barriers? https://ensr.oii.ox.ac.uk/why-does-the-open-government-data-agenda-face-such-barriers/ Mon, 26 Jan 2015 11:03:19 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3068
Advocates hope that opening government data will increase government transparency, catalyse economic growth, and address social and environmental challenges. Image by the UK’s Open Data Institute.

Advocates of Open Government Data (OGD) – that is, data produced or commissioned by government or government-controlled entities that can be freely used, reused and redistributed by anyone – talk about the potential of such data to increase government transparency, catalyse economic growth, address social and environmental challenges and boost democratic participation. This heady mix of potential benefits has proved persuasive to the UK Government (and governments around the world). Over the past decade, since the emergence of the OGD agenda, the UK Government has invested extensively in making more of its data open. This investment has included £10 million to establish the Open Data Institute and a £7.5 million fund to support public bodies in overcoming technical barriers to releasing open data.

Yet the transformative impacts claimed by OGD advocates, in government as well as NGOs such as the Open Knowledge Foundation, still seem a rather distant possibility. Even the more modest goal of integrating the creation and use of OGD into the mainstream practices of government, businesses and citizens remains to be achieved. In my recent article Barriers to the Open Government Data Agenda: Taking a Multi-Level Perspective (Policy & Internet 6:3) I reflect upon the barriers preventing the OGD agenda from making a breakthrough into the mainstream. These reflections centre on five key findings of a survey exploring where key stakeholders within the UK OGD community perceive barriers to the OGD agenda. The key messages from the UK OGD community are that:

1. Barriers to the OGD agenda are perceived to be widespread 

Unsurprisingly, given the relatively limited impact of OGD to date, my research shows that barriers to the OGD agenda are perceived to be widespread and numerous in the UK’s OGD community. What I find rather more surprising is the expectation, amongst policy makers, that these barriers ought to simply melt away when exposed to the OGD agenda’s transparently obvious value and virtue. Given that a breakthrough of the OGD agenda will in fact require changes across the complex socio-technical structures of government and society, many teething problems should be expected, and considerable work will be required to overcome them.

2. Barriers on the demand side are of great concern

Members of the UK OGD community are particularly concerned about the wide range of demand-side barriers, including the low level of demand for OGD across civil society and the public and private sectors. These concerns are likely to have arisen as a legacy of the OGD community’s focus on the supply of OGD (such as public spending, prescription and geospatial data), which has often led the community to overlook the need to nurture initiatives that make use of OGD: for example, innovators such as Carbon Culture, who use OGD to address environmental challenges.

Adopting a strategic approach to supporting niches of OGD use could help overcome some of the demand-side barriers. For example, such an approach could foster the social learning required to overcome barriers relating to the practices and business models of data users. Whilst there are encouraging signs that the UK’s Open Data Institute (a UK Government-supported not-for-profit organisation seeking to catalyse the use of open data) is supporting OGD use in the private sector, there remains a significant opportunity to improve the support offered to potential OGD users across civil society. It is also important to recognise that increasing the support for OGD users is not guaranteed to result in increased demand. Rather the possibility remains that demand for OGD is limited for many other reasons – including the possibility that the majority of businesses, citizens and community organisations find OGD of very little value.

3. The structures of government continue to act as barriers

Members of the UK OGD community are also concerned that major barriers remain on the supply side, particularly in the form of the established structures and institutions of government. For example, barriers were perceived in the forms of the risk-averse cultures of government organisations and the ad hoc funding of OGD initiatives. Although resilient, these structures are dynamic, so proponents of OGD need to be aware of emerging ‘windows of opportunity’ as they open up. Such opportunities may take the form of tensions within the structures of government (e.g. where restrictions on data sharing between different parts of government present an opportunity for OGD to create efficiency savings); and external pressures on government (e.g. the pressure to transition to a low carbon economy could create opportunities for OGD initiatives and demand for OGD).

4. There are major challenges to mobilising resources to support the open government data agenda

The research results also showed that members of the UK’s OGD community see mobilising the resources required to support the OGD agenda as a major challenge. Concerns around securing funding are predictably prominent, but concerns also extend to developing the skills and knowledge required to use OGD across civil society, government and the private sector. These challenges are likely to persist whilst the post-financial crisis narrative of public deficit reduction through public spending reduction dominates the political agenda. This leaves OGD advocates to consider the politics and ethics of calling for investment in OGD initiatives, whilst spending reductions elsewhere are leading to the degradation of public service provision to vulnerable and socially excluded individuals.

5. The nature of some barriers remains contentious within the OGD community

OGD is often presented by advocates as a neutral, apolitical public good. However, my research highlights the important role that values and politics play in how individuals within the OGD community perceive the agenda and the barriers it faces. For example, there are considerable differences of opinion, within the OGD community, on whether or not a private sector focus on exploiting financial value from OGD is crowding out the creation of social and environmental value. So benefits may arise from advocates being more open about the values and politics that underpin and shape the agenda. At the same time, OGD-related policy and practice could create further opportunities for social learning that brings together the diverse values and perspectives that coexist within the OGD community.

Having considered the wide range of barriers to the breakthrough of the OGD agenda, and some approaches to overcoming these barriers, these discussions need setting in a broader political context. If the agenda does indeed make a breakthrough into the mainstream, it remains unclear what form this will take. Will the OGD agenda make a breakthrough by conforming with, and reinforcing, prevailing neoliberal interests? Or will the agenda stretch the fabric of government, the economy and society, and transform the relationship between citizens and the state?

Read the full article: Martin, C. (2014) Barriers to the Open Government Data Agenda: Taking a Multi-Level Perspective. Policy & Internet 6 (3) 217-240.

]]>
Past and Emerging Themes in Policy and Internet Studies https://ensr.oii.ox.ac.uk/past-and-emerging-themes-in-policy-and-internet-studies/ Mon, 12 May 2014 09:24:59 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2673
We can’t understand, analyze or make public policy without understanding the technological, social and economic shifts associated with the Internet. Image from the (post-PRISM) “Stop Watching Us” Berlin Demonstration (2013) by mw238.

In the journal’s inaugural issue, founding Editor-in-Chief Helen Margetts outlined what are essentially two central premises behind Policy & Internet’s launch. The first is that “we cannot understand, analyze or make public policy without understanding the technological, social and economic shifts associated with the Internet” (Margetts 2009, 1). It is simply not possible to consider public policy today without some regard for the intertwining of information technologies with everyday life and society. The second premise is that the rise of the Internet is associated with shifts in how policy itself is made. In particular, she proposed that impacts of Internet adoption would be felt in the tools through which policies are effected, and the values that policy processes embody.

The purpose of the Policy and Internet journal was to take up these two challenges: the public policy implications of Internet-related social change, and Internet-related changes in policy processes themselves. In recognition of the inherently multi-disciplinary nature of policy research, the journal is designed to act as a meeting place for all kinds of disciplinary and methodological approaches. Helen predicted that methodological approaches based on large-scale transactional data, network analysis, and experimentation would turn out to be particularly important for policy and Internet studies. Driving the advancement of these methods was therefore the journal’s third purpose. Today, the journal has reached a significant milestone: over one hundred high-quality peer-reviewed articles published. This seems an opportune moment to take stock of what kind of research we have published in practice, and see how it stacks up against the original vision.

At the most general level, the journal’s articles fall into three broad categories: the Internet and public policy (48 articles), the Internet and policy processes (51 articles), and discussion of novel methodologies (10 articles). The first of these categories, “the Internet and public policy,” can be further broken down into a number of subcategories. One of the most prominent of these streams is fundamental rights in a mediated society (11 articles), which focuses particularly on privacy and freedom of expression. Related streams are children and child protection (six articles), copyright and piracy (five articles), and general e-commerce regulation (six articles), including taxation. A recently emerged stream in the journal is hate speech and cybersecurity (four articles). Of course, an enduring research stream is Internet governance, or the regulation of technical infrastructures and economic institutions that constitute the material basis of the Internet (seven articles). In recent years, the research agenda in this stream has been influenced by national policy debates around broadband market competition and network neutrality (Hahn and Singer 2013). Another enduring stream deals with the Internet and public health (eight articles).

Looking specifically at “the Internet and policy processes” category, the largest stream is e-participation, or the role of the Internet in engaging citizens in national and local government policy processes, through methods such as online deliberation, petition platforms, and voting advice applications (18 articles). Two other streams are e-government, or the use of Internet technologies for government service provision (seven articles), and e-politics, or the use of the Internet in mainstream politics, such as election campaigning and communications of the political elite (nine articles). Another stream that has gained pace during recent years is online collective action, or the role of the Internet in activism, ‘clicktivism,’ and protest campaigns (16 articles). Last year the journal published a special issue on online collective action (Calderaro and Kavada 2013), and the next forthcoming issue includes an invited article on digital civics by Ethan Zuckerman, director of MIT’s Center for Civic Media, with commentary from prominent scholars of Internet activism. A trajectory discernible in this stream over the years is a movement from discussing mere potentials towards analyzing real impacts—including critical analyses of the sometimes inflated expectations and “democracy bubbles” created by digital media (Shulman 2009; Karpf 2010; Bryer 2011).

The final category, discussion of novel methodologies, consists of articles that develop, analyze, and reflect critically on methodological innovations in policy and Internet studies. Empirical articles published in the journal have made use of a wide range of conventional and novel research methods, from interviews and surveys to automated content analysis and advanced network analysis methods. But of those articles where methodology is the topic rather than merely the tool, the majority deal with so-called “big data,” or the use of large-scale transactional data sources in research, commerce, and evidence-based public policy (nine articles). The journal recently devoted a special issue to the potentials and pitfalls of big data for public policy (Margetts and Sutcliffe 2013), based on selected contributions to the journal’s 2012 big data conference: Big Data, Big Challenges? In general, the notion of data science and public policy is a growing research theme.

This brief analysis suggests that research published in the journal over the last five years has indeed followed the broad contours of the original vision. The two challenges, namely policy implications of Internet-related social change and Internet-related changes in policy processes, have both been addressed. In particular, research has addressed the implications of the Internet’s increasing role in social and political life. The journal has also furthered the development of new methodologies, especially the use of online network analysis techniques and large-scale transactional data sources (aka ‘big data’).

As expected, authors from a wide range of disciplines have contributed their perspectives to the journal, and engaged with other disciplines, while retaining the rigor of their own specialisms. The geographic scope of the contributions has been truly global, with authors and research contexts from six continents. I am also pleased to note that a characteristic common to all the published articles is polish; this is no doubt in part due to the high level of editorial support that the journal is able to afford to authors, including copyediting. The justifications for the journal’s establishment five years ago have clearly been borne out, so that the journal now performs an important function in fostering and bringing together research on the public policy implications of an increasingly Internet-mediated society.

And what of my own research interests as an editor? In the inaugural editorial, Helen Margetts highlighted work, finance, exchange, and economic themes in general as being among the prominent areas of Internet-related social change that are likely to have significant future policy implications. I think for the most part, these implications remain to be addressed, and this is an area that the journal can encourage authors to tackle better. As an editor, I will work to direct attention to this opportunity, and welcome manuscript submissions on all aspects of Internet-enabled economic change and its policy implications. This work will be kickstarted by the journal’s 2014 conference (26-27 September), which this year focuses on crowdsourcing and online labor.

Our published articles will continue to be highlighted here in the journal’s blog. Launched last year, we believe this blog will help to expand the reach and impact of research published in Policy and Internet to the wider academic and practitioner communities, promote discussion, and increase authors’ citations. After all, publication is only the start of an article’s public life: we want people reading, debating, citing, and offering responses to the research that we, and our excellent reviewers, feel is important, and worth publishing.

Read the full editorial: Lehdonvirta, V. (2014) Past and Emerging Themes in Policy and Internet Studies. Policy & Internet 6 (2) 109-114.

References

Bryer, T.A. (2011) Online Public Engagement in the Obama Administration: Building a Democracy Bubble? Policy & Internet 3 (4).

Calderaro, A. and Kavada, A. (2013) Challenges and Opportunities of Online Collective Action for Policy Change. Policy & Internet 5 (1).

Hahn, R. and Singer, H. (2013) Is the U.S. Government’s Internet Policy Broken? Policy & Internet 5 (3) 340-363.

Karpf, D. (2010) Online Political Mobilization from the Advocacy Group’s Perspective: Looking Beyond Clicktivism. Policy & Internet 2 (4) 7-41.

Margetts, H. (2009) The Internet and Public Policy. Policy & Internet 1 (1).

Margetts, H. and Sutcliffe, D. (2013) Addressing the Policy Challenges and Opportunities of ‘Big Data.’ Policy & Internet 5 (2) 139-146.

Shulman, S.W. (2009) The Case Against Mass E-mails: Perverse Incentives and Low Quality Public Participation in U.S. Federal Rulemaking. Policy & Internet 1 (1) 23-53.

]]>
Unpacking patient trust in the “who” and the “how” of Internet-based health records https://ensr.oii.ox.ac.uk/unpacking-patient-trust-in-the-who-and-the-how-of-internet-based-health-records/ Mon, 03 Mar 2014 08:50:54 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2615 In an attempt to reduce costs and improve quality, digital health records are permeating health systems all over the world. Internet-based access to them creates new opportunities for access and sharing – while at the same time causing nightmares for many patients: medical data floating around freely in the cloud, unprotected from strangers, being abused to target and discriminate against people without their knowledge?

Individuals often have little knowledge of the actual risks, and single instances of breaches are exaggerated in the media. Key to successful adoption of Internet-based health records, however, is how much trust a patient places in the technology: trust that data will be properly secured from inadvertent leakage, and trust that it will not be accessed by unauthorised strangers.

Situated in this context, my own research has taken a closer look at the structural and institutional factors influencing patient trust in Internet-based health records. Utilising a survey and interviews, the research has looked specifically at Germany – a very suitable environment for this question, given the wide range of actors in its health system and its reputation as a “hard-line privacy country”. Germany has struggled for years with the introduction of smart cards linked to centralised Electronic Health Records, not only changing the system’s design features over several iterations, but also battling negative press coverage about data security.

The first element to this question of patient trust is the “who”: that is, does it make a difference whether the health record is maintained by either a medical or a non-medical entity, and whether the entity is public or private? I found that patients clearly expressed a higher trust in medical operators, evidence of a certain “halo effect” surrounding medical professionals and organisations driven by patient faith in their good intentions. This overrode the concern that medical operators might be less adept at securing the data than (for example) most non-medical IT firms. The distinction between public and private operators is much more blurry in patients’ perception. However, there was a sense among the interviewees that a stronger concern about misuse was related to a preference for public entities who would “not intentionally give data to others”, while data theft concerns resulted in a preference for private operators – as opposed to public institutions who might just “shrug their shoulders and finger-point at subordinate levels”.

Equally important to the question of “who” manages the data may be the “how”: that is, is the patient’s ability to access and control their health-record content perceived as trust enhancing? While the general finding of this research is that having the opportunity to both access and control their records helps to build patient trust, an often overlooked (and discomforting) factor is that easy access for the patient may also mean easy access for the rest of the family. In the words of one interviewee: “For example, you have Alzheimer’s disease or dementia. You don’t want everyone around you to know. They will say ‘show us your health record online’, and then talk to doctors about you – just going over your head.” Nevertheless, for most people I surveyed, having access and control of records was perceived as trust enhancing.

At the same time, a striking survey finding is how greater access and control of records can be less trust-enhancing for those with lower Internet experience, confidence, and breadth of use: as one older interviewee put it – “I am sceptical because I am not good at these Internet things. My husband can help me, but somehow it is not really worth this effort.” The quote reveals one of the facets of digital divides, and additionally highlights the relevance of life-stage in the discussion. Older participants see the benefits of sharing data (if it means avoiding unnecessary repetition of routine examinations) and are less concerned about outsider access, while younger people are more apprehensive of the risk of medical data falling into the wrong hands. An older participant summarised this very effectively: “If I was 30 years younger and at the beginning of my professional career or my family life, it would be causing more concern for me than now”. Finally, this reinforces the importance of legal regulations and security audits ensuring a general level of protection – even if the patient chooses not to be (or cannot be) directly involved in the management of their data.

Interestingly, the research also uncovered what is known as the certainty trough: not only are those with low online affinity highly suspicious of Internet-based health records – the experts are as well! The wider the range of activities a user engages in online, the higher their suspicion of Internet-based health records. This confirms the notion that with more knowledge and more intense engagement with the Internet, we tend to become more aware of the risks – and lose trust in the technology and what the protections might actually be worth.

Finally, it is clear that the “who” and the “how” are interrelated, as a low degree of trust goes hand in hand with a desire for control. For a generally less trustworthy operator, access to records is not sufficient to inspire patient trust. While access improves knowledge and may allow for legal steps to change what is stored online, few people make use of this possibility; only direct control of what is stored online helps to compensate for a general suspicion about the operator. It is noteworthy here that there is a discrepancy between how much importance people place on having control, and how much they actually use it, but in the end, trust is a subjective concept that doesn’t necessarily reflect actual privacy and security.

The results of this research provide valuable insights for the further development of Internet-based health records. In short: to gain patient trust, the operator should ideally be of a medical nature and should allow the patients to get involved in how their health records are maintained. Moreover, policy initiatives designed to increase the Internet and health literacy of the public are crucial in reaching all parts of the population, as is an underlying legal and regulatory framework within which any Internet-based health record should be embedded.


Read the full paper: Rauer, U. (2012) Patient Trust in Internet-based Health Records: An Analysis Across Operator Types and Levels of Patient Involvement in Germany. Policy and Internet 4 (2).

]]>
The challenges of government use of cloud services for public service delivery https://ensr.oii.ox.ac.uk/challenges-government-use-cloud-services-public-service-delivery/ Mon, 24 Feb 2014 08:50:15 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2584
Cloud services are not meant to recognize national frontiers, but to thrive on economies of scale and scope globally — presenting particular challenges to government. Image by NASA Goddard Photo and Video

Ed: You open your recent Policy and Internet article by noting that “the modern treasury of public institutions is where the wealth of public information is stored and processed” … what are the challenges of government use of cloud services?

Kristina: The public sector is a very large user of information technology but data handling policies, vendor accreditation and procurement often predate the era of cloud computing. Governments first have to put in place new internal policies to ensure the security and integrity of their information assets residing in the cloud. Through this process governments are discovering that their traditional notions of control are challenged because cloud services are virtual, dynamic, and operate across borders.

One central concern of those governments that are leading in the public sector’s migration to cloud computing is how to retain unconditional sovereignty over their data — after all, public sector information embodies the past, the present, and the future of a country. The ability to govern presupposes command and control over government information to the extent necessary to deliver public services, protect citizens’ personal data and to ensure the integrity of the state, among other considerations. One could even assert that in today’s interconnected world national sovereignty is conditional upon adequate data sovereignty.

Ed: A basic question: if a country’s health records (in the cloud) temporarily reside on / are processed on commercial servers in a different country: who is liable for the integrity and protection of that data, and under whose legal scheme? ie can a country actually technically lose sovereignty over its data?

Kristina: There is always one line of responsibility flowing from the contract with the cloud service provider. However, when these health records cross borders they are effectively governed under a third country’s jurisdiction where disclosure authorities vis-à-vis the cloud service provider can likely be invoked. In some situations the geographical whereabouts of the public health records is not even that important because certain countries’ legislation has extra-territorial reach and it suffices that the cloud service provider is under an obligation to turn over data in its custody. In both situations countries’ exclusive sovereignty over public sector information would be contested. And service providers may find themselves in a Catch-22 when they have to decide on a legitimate course of action.

Ed: Is there a sense of how many government services are currently hosted “in the cloud”; and have there been any known problems so far about access and jurisdiction?

Kristina: The US has published some targets but otherwise we have no sense of the magnitude of government cloud computing. It is certainly an ever growing phenomenon in leading countries, for example both the US Federal Cloud Computing Strategy and the United Kingdom’s G-Cloud Framework leverage public sector cloud migration with a cloud-first strategy and they operate government application stores where public authorities can self-provision themselves with cloud-based IT services. Until now, the issues of access and jurisdiction have primarily been discussed in terms of risk (as I showed in my article) with governments adopting strategies to keep their public records within national territory, even if they are residing on a cloud service.

Ed: Is there anything about the cloud that is actually functionally novel; ie that calls for new regulation at national or international level, beyond existing data legislation?

Kristina: Cloud services are not meant to recognize national frontiers, but to thrive on economies of scale and scope globally. The legal risks arising from its transnationality won’t be solved by more legislation at the national level; even if this is a pragmatic solution, the resurrection of territoriality in cloud service contracts with the government conflicts with scalability. My article explores various avenues at the international level, for example extending diplomatic immunity, international agreements for cross-border data transfers, and reliance on mutual legal assistance treaties but in my opinion they do not satisfyingly restore a country’s quest for data sovereignty in the cloud context. In the EU a regional approach could be feasible and I am very much drawn by the idea of a European cloud environment where common information assurance principles prevail — also curtailing individual member states’ disclosure authorities.

Ed: As the economies of scale of cloud services kick in, do you think we will see increasing commercialisation of public record storing and processing (with a possible further erosion of national sovereignty)?

Kristina: Where governments have the capability they adopt a differentiated, risk-based approach corresponding to the information’s security classification: data in the public domain or with low security markings are suitable for cloud services without further restrictions. Data with medium security markings may still be processed on cloud services but are often confined to the national territory. Beyond this threshold, i.e. for sensitive and classified information, cloud services are not an option, judging from analysis of the emerging practice in the U.S., the UK, Canada and Australia. What we will increasingly see is IT-outsourcing that is labelled “cloud” despite not meeting the specifications of a true cloud service. Some governments are more inclined to introduce dedicated private “clouds” that are not fully scalable, in other words central data centres. For a vast number of countries, including developing ones, the options are further limited because there is no local cloud infrastructure and/or the public sector cannot afford to contract a dedicated government cloud. In this situation I could imagine an increasing reliance on transnational cloud services, with all the attendant pros and cons.

Ed: How do these sovereignty / jurisdiction / data protection questions relate to the revelations around the NSA’s PRISM surveillance programme?

Kristina: It only confirms that disclosure authorities are extensively used for intelligence gathering and that legal risks have to be taken as seriously as technical vulnerabilities. As a consequence of the Snowden revelations it is quite likely that the sensitivity of governments (as well as private sector organizations) to the impact of foreign jurisdictions will become even more pronounced. For example, there are reports estimating that the lack of trust in US-based cloud services is bound to affect the industry’s growth.

Ed: Could this usher in a whole new industry of ‘guaranteed’ national clouds..? ie how is the industry responding to these worries?

Kristina: This is already happening; in particular, European and Asian players are being very vocal in terms of marketing their regional or national cloud offerings as compatible with specific jurisdiction or national data protection frameworks.

Ed: And finally, who do you think is driving the debate about sovereignty and cloud services: government or industry?

Kristina: In the Western world it is government, with its special security needs and buying power, to which industry is responsive. As a nascent technology, cloud services nonetheless thrive on business with governments because it opens new markets where in-house IT services previously dominated in the public sector.


Read the full paper: Irion, K. (2012) Government Cloud Computing and National Data Sovereignty. Policy and Internet 4 (3/4) 40–71.

Kristina Irion was talking to blog editor David Sutcliffe.

]]>
Mapping collective public opinion in the Russian blogosphere https://ensr.oii.ox.ac.uk/mapping-collective-public-opinion-in-the-russian-blogosphere/ Mon, 10 Feb 2014 11:30:05 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2372
Widely reported as fraudulent, the 2011 Russian Parliamentary elections provoked mass street protest action by tens of thousands of people in Moscow and cities and towns across Russia. Image by Nikolai Vassiliev.

Blogs are becoming increasingly important for agenda setting and the formation of collective public opinion on a wide range of issues. In countries like Russia, where the Internet is not technically filtered but the traditional media is tightly controlled by the state, they may be particularly important. The Russian-language blogosphere comprises about 85 million blogs – far beyond the capacity of any government to control – and the Russian search engine Yandex, with its blog rating service, serves as an important reference point for Russia’s educated public in its search for authoritative and independent sources of information. The blogosphere is thereby able to function as a mass medium of “public opinion” and also to exercise influence.

One topic that was particularly salient over the period we studied concerned the Russian Parliamentary elections of December 2011. Widely reported as fraudulent, they provoked immediate and mass street protest action by tens of thousands of people in Moscow and cities and towns across Russia, as well as corresponding activity in the blogosphere. Protesters made effective use of the Internet to organize a movement that demanded cancellation of the parliamentary election results, and the holding of new and fair elections. These protests continued until the following summer, gaining widespread national and international attention.

Most of the political and social discussion blogged in Russia is hosted on the blog platform LiveJournal. Some of these bloggers can claim a certain amount of influence; the top thirty bloggers have over 20,000 “friends” each, representing a good circulation for the average Russian newspaper. Part of the blogosphere may thereby resemble the traditional media; the deeper into the long tail of average bloggers, however, the more it functions as pure public opinion. This “top list” effect may be particularly important in societies (like Russia’s) where popularity lists exert a visible influence on bloggers’ competitive behavior and on public perceptions of their significance. Given the influence of these top bloggers, it may be claimed that, like the traditional media, they act as filters of issues to be thought about, and as definers of their relative importance and salience.

Gauging public opinion is of obvious interest to governments and politicians, and opinion polls are widely used to do this, but they have been consistently criticized for the imposition of agendas on respondents by pollsters, producing artefacts. Indeed, the public opinion literature has tended to regard opinion as something to be “extracted” by pollsters, which inevitably pre-structures the output. This literature doesn’t consider that public opinion might also exist in the form of natural language texts, such as blog posts, that have not been pre-structured by external observers.

There are two basic ways to detect topics in natural language texts: the first is manual coding of texts (ie by traditional content analysis), and the other involves rapidly developing techniques of automatic topic modeling or text clustering. The media studies literature has relied heavily on traditional content analysis; however, these studies are inevitably limited by the volume of data a person can physically process, given there may be hundreds of issues and opinions to track — LiveJournal’s 2.8 million blog accounts, for example, generate 90,000 posts daily.

For large text collections, therefore, only the second approach is feasible. In our article we explored how methods for topic modeling developed in computer science may be applied to social science questions – such as how to efficiently track public opinion on particular (and evolving) issues across entire populations. Specifically, we demonstrate how automated topic modeling can identify public agendas, their composition, structure, the relative salience of different topics, and their evolution over time without prior knowledge of the issues being discussed and written about. This automated “discovery” of issues in texts involves division of texts into topically — or more precisely, lexically — similar groups that can later be interpreted and labeled by researchers. Although this approach has limitations in tackling subtle meanings and links, experiments where automated results have been checked against human coding show over 90 percent accuracy.
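
To make this concrete, below is a minimal sketch of automated topic discovery using latent Dirichlet allocation (LDA) with scikit-learn in Python. The sample posts, the choice of two topics, and the parameter settings are illustrative assumptions rather than the configuration used in the study.

```python
# A minimal sketch of automated topic discovery with LDA (scikit-learn).
# The sample posts, topic count, and preprocessing are illustrative
# assumptions, not the setup used in the study described above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "protesters gathered in moscow demanding fair elections",
    "observers reported ballot stuffing at polling stations",
    "new traffic regulations announced for city drivers",
    "road repairs cause delays on major highways",
]

# Turn the raw posts into a document-term matrix.
vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(posts)

# Fit a two-topic model; real studies tune the topic count empirically.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

# Print the most heavily weighted words per topic for human labeling.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:5]]
    print(f"topic {i}: {', '.join(top)}")
```

In real applications the model would be fitted to millions of posts with many more topics, and the resulting word lists would then be interpreted and labeled by researchers, as described above.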

The computer science literature is flooded with methodological papers on automatic analysis of big textual data. While these methods can’t entirely replace manual work with texts, they can help reduce it to the most meaningful and representative areas of the textual space they help to map, and are the only means to monitor agendas and attitudes across multiple sources, over long periods and at scale. They can also help solve problems of insufficient and biased sampling, when entire populations become available for analysis. Due to their recentness, as well as their mathematical and computational complexity, these approaches are rarely applied by social scientists, and to our knowledge, topic modeling has not previously been applied for the extraction of agendas from blogs in any social science research.

The natural extension of automated topic or issue extraction involves sentiment mining and analysis; as González-Bailón, Kaltenbrunner, and Banchs (2012) have pointed out, public opinion doesn’t just involve specific issues, but also encompasses the state of public emotion about these issues, including attitudes and preferences. This involves extracting opinions on the issues/agendas that are thought to be present in the texts, usually by dividing sentences into positive and negative. These techniques are based on human-coded dictionaries of emotive words, on algorithmic construction of sentiment dictionaries, or on machine learning techniques.
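
The dictionary-based variant of sentiment analysis mentioned above can be sketched in a few lines of Python. The tiny lexicon here is an invented stand-in; production systems rely on large human-coded dictionaries or trained classifiers.

```python
# A minimal sketch of dictionary-based sentiment scoring. The toy
# lexicon is an illustrative assumption, not a real sentiment dictionary.
POSITIVE = {"fair", "honest", "hope", "support"}
NEGATIVE = {"fraud", "corrupt", "protest", "anger"}

def sentiment_score(sentence: str) -> int:
    """Count positive minus negative lexicon hits in a sentence."""
    words = sentence.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("citizens hope for fair elections"))  # prints 2
print(sentiment_score("anger over fraud at the polls"))     # prints -2
```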

Both topic modeling and sentiment analysis techniques are required to effectively monitor self-generated public opinion. When methods for tracking attitudes complement methods to build topic structures, a rich and powerful map of self-generated public opinion can be drawn. Of course this mapping can’t completely replace opinion polls; rather, it’s a new way of learning what people are thinking and talking about; a method that makes the vast amounts of user-generated content about society – such as the 65 million blogs that make up the Russian blogosphere — available for social and policy analysis.

Naturally, this approach to public opinion and attitudes is not free of limitations. First, the dataset is only representative of the self-selected population of those who have authored the texts, not of the whole population. Second, like regular polled public opinion, online public opinion only covers those attitudes that bloggers are willing to share in public. Furthermore, there is still a long way to go before the relevant instruments become mature, and this will demand the efforts of the whole research community: computer scientists and social scientists alike.

Read the full paper: Koltsova, O. and Koltcov, S. (2013) Mapping the public agenda with topic modeling: The case of the Russian LiveJournal. Policy and Internet 5 (2) 207–227.

Also read on this blog: Can text mining help handle the data deluge in public policy analysis? by Aude Bicquelet.

References

González-Bailón, S., Kaltenbrunner, A. and Banchs, R.E. (2012) Emotions, Public Opinion and U.S. Presidential Approval Rates: A 5-Year Analysis of Online Political Discussions. Human Communication Research 38 (2) 121–43.

]]>
Exploring variation in parental concerns about online safety issues https://ensr.oii.ox.ac.uk/exploring-variation-parental-concerns-about-online-safety-issues/ Thu, 14 Nov 2013 08:29:42 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1208 Ed: You’ve spent a great deal of time studying the way that children and young people use the Internet, much of which focuses on the positive experiences that result. Why do you think this is so under-represented in public debate?

boyd / Hargittai: The public has many myths about young people’s use of technology. This is often perpetuated by media coverage that focuses on the extremes. Salacious negative headlines often capture people’s attention, even if the practices or incidents described are outliers and do not represent the majority’s experiences. While focusing on extremely negative and horrific incidents is a great way to attract attention and get readers, it does a disservice to young people, their parents, and ultimately society as a whole.

As researchers, we believe that it’s important to understand the nuances of what people experience when they engage with technology. Thus, we are interested in gaining a better understanding of their everyday practices — both the good and the bad. Our goal is to introduce research that can help contextualize socio-technical practices and provide insight into the diversity of viewpoints and perspectives that shape young people’s use of technology.

Ed: Your paper suggests we need a more granular understanding of how parental concerns relating to the Internet can vary across different groups. Why is this important? What are the main policy implications of this research?

boyd / Hargittai: Parents are often seen as the target of policy interventions. Many lawmakers imagine that they’re designing laws to help empower parents, but when you ask them to explain which parents they are empowering, it becomes clear that there’s an imagined parent that is not always representative of the diverse views and perspectives of all parents. We’re not opposed to laws that enable parents to protect their children, but we’re concerned whenever a class of people, especially a class as large as “parents,” is viewed as homogeneous.

Parents have different and often conflicting views about what’s best for their children or for children writ large. This creates a significant challenge for designing interventions that are meant to be beneficial and applicable to a diverse group of people. What’s beneficial or desirable to one may not be positively received by another. More importantly, what’s helpful to one group of parents may not actually benefit parents or youth as a whole. As a result, we think it’s important to start interrogating assumptions that underpin technology policy interventions so that policymakers have a better understanding of how their decisions affect whom they’re hoping to reach.

Ed: What did your study reveal, and in particular, where do you see the greatest differences in attitudes arising? Did it reveal anything unexpected?

boyd / Hargittai: The most significant take-away from our research is that there are significant demographic differences in concerns about young people. Some of the differences are not particularly surprising. For example, parents of children who have been exposed to pornography or violent content, or who have bullied or been bullied, have greater concern that this will happen to their child. Yet, other factors may be more surprising. For example, we found significant racial and ethnic differences in how parents approach these topics. Black, Hispanic, and Asian parents are much more concerned about at least some of the online safety measures than Whites, even when controlling for socioeconomic factors and previous experiences.

While differences in cultural experiences may help explain some of these findings, our results raise serious questions as to the underlying processes and reasons for these discrepancies. Are these parents more concerned because they have a higher level of distrust for technology? Because they feel as though there are fewer societal protections for their children? Because they feel less empowered as parents? We don’t know. Still, our findings challenge policy-makers to think about the diversity of perspectives their law-making should address. And when they enact laws, they should be attentive to how those interventions are received. Just because parents of colour are more concerned does not mean that an intervention intended to empower them will do so. Like many other research projects, this study results in as many — if not more — questions than it answers.

Ed: Are parents worrying about the right things? For example, you point out that ‘stranger danger’ registers the highest level of concern from most parents, yet this is a relatively rare occurrence. Bullying is much more common, yet not such a source of concern. Do we need to do more to educate parents about risks, opportunities and coping?

boyd / Hargittai: Parental fear is a contested issue among scholars and for good reason. In many ways, it’s a philosophical issue. Should parents worry more about frequent but low-consequence issues? Or should they concern themselves more with the possibility of rare but devastating incidents? How much fear is too much fear? Fear is an understandable response to danger, but left unchecked, it can become an irrational response to perceived but unlikely risks. Fear can prevent injury, but too much fear can result in a form of protectionism that itself can be harmful. Most parents want to protect their children from harm but few think about the consequences of smothering their children in their efforts to keep them safe. All too often, in erring on the side of caution, we escalate a societal tendency to become overprotective, limiting our children’s opportunities to explore, learn, be creative and mature. Finding the right balance is very tricky.

People tend to fear things that they don’t understand. New technologies are often terrifying because they are foreign. And so parents are reasonably concerned when they see their children using tools that confound them. One of the best antidotes to fear is knowledge. Although this is outside of the scope of this paper, we strongly recommend that parents take the time to learn about the tools that their children are using, ideally by discussing them with their children. The more that parents can understand the technological choices and decisions made by their children, the more that parents can help them navigate the risks and challenges that they do face, online and off.

Ed: On the whole, it seems that parents whose children have had negative experiences online are more likely to say they are concerned, which seems highly appropriate. But we also have evidence from other studies that many parents are unaware of such experiences, and also that children who are more vulnerable offline, may be more vulnerable online too. Is there anything in your research to suggest that certain groups of parents aren’t worrying enough?

boyd / Hargittai: As researchers, we regularly use different methodologies and different analytical angles to get at various research questions. Each approach has its strengths and weaknesses, insights and blind spots. In this project, we surveyed parents, which allows us to get at their perspective, but it limits our ability to understand what they do not know or will not admit. Over the course of our careers, we’ve also surveyed and interviewed numerous youth and young adults, parents and other adults who’ve worked with youth. In particular, danah has spent a lot of time working with at-risk youth who are especially vulnerable. Unfortunately, what she’s learned in the process — and what numerous survey studies have shown — is that those who are facing some of the most negative experiences do not necessarily have positive home life experiences. Many youth face parents who are absent, addicts, or abusive; these are the youth who are most likely to be physically, psychologically, or socially harmed, online and offline.

In this study, we took parents at face value, assuming that parents are good actors with positive intentions. It is important to recognise, however, that this cannot be taken for granted. As with all studies, our findings are limited because of the methodological approach we took. We have no way of knowing whether or not these parents are paying attention, let alone whether or not their relationship to their children is unhealthy.

Although the issues of abuse and neglect are outside of the scope of this particular paper, these have significant policy implications. Empowering well-intended parents is generally a good thing, but empowering abusive parents can create unintended consequences for youth. This is an area where much more research is needed because it’s important to understand when and how empowering parents can actually put youth at risk in different ways.

Ed: What gaps remain in our understanding of parental attitudes towards online risks?

boyd / Hargittai: As noted above, our paper assumes well-intentioned parenting on behalf of caretakers. A study could explore online attitudes in the context of more information about people’s general parenting practices. Regarding our findings about attitudinal differences by race and ethnicity, much remains to be done. While existing literature alludes to some reasons as to why we might observe these variations, it would be helpful to see additional research aiming to uncover the sources of these discrepancies. It would be fruitful to gain a better understanding of what influences parental attitudes about children’s use of technology in the first place. What role do mainstream media, parents’ own experiences with technology, their personal networks, and other factors play in this process?

Another line of inquiry could explore how parental concerns influence rules aimed at children about technology uses and how such rules affect youth adoption and use of digital media. The latter is a question that Eszter is addressing in a forthcoming paper with Sabrina Connell, although that study does not include data on parental attitudes, only rules. Including details about parental concerns in future studies would allow more nuanced investigation of the above questions. Finally, much is needed to understand the impact that policy interventions in this space have on parents, youth, and communities. Even the most well-intentioned policy may inadvertently cause harm. It is important that all policy interventions are monitored and assessed as to both their efficacy and secondary effects.


Read the full paper: boyd, d., and Hargittai, E. (2013) Connected and Concerned: Exploring Variation in Parental Concerns About Online Safety Issues. Policy and Internet 5 (3).

danah boyd and Eszter Hargittai were talking to blog editor David Sutcliffe.

]]>
Can text mining help handle the data deluge in public policy analysis? https://ensr.oii.ox.ac.uk/can-text-mining-help-handle-data-deluge-public-policy-analysis/ Sun, 27 Oct 2013 12:29:01 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2273 Policy makers today must contend with two inescapable phenomena. On the one hand, there has been a major shift in the policies of governments concerning participatory governance – that is, engaged, collaborative, and community-focused public policy. At the same time, a significant proportion of government activities have now moved online, bringing about “a change to the whole information environment within which government operates” (Margetts 2009, 6).

Indeed, the Internet has become the main medium of interaction between government and citizens, and numerous websites offer opportunities for online democratic participation. The Hansard Society, for instance, regularly runs e-consultations on behalf of UK parliamentary select committees. For example, e-consultations have been run on the Climate Change Bill (2007), the Human Tissue and Embryo Bill (2007), and on domestic violence and forced marriage (2008). Councils and boroughs also regularly invite citizens to take part in online consultations on issues affecting their area. The London Borough of Hammersmith and Fulham, for example, recently asked its residents for their views on Sex Entertainment Venues and Sex Establishment Licensing policy.

However, citizen participation poses certain challenges for the design and analysis of public policy. In particular, governments and organizations must demonstrate that all opinions expressed through participatory exercises have been duly considered and carefully weighted before decisions are reached. One method for partly automating the interpretation of the large quantities of online content typically produced by public consultations is text mining. Software products currently available range from those primarily used in qualitative research (with functions for tagging, indexing, and classification) to those offering more quantitative and statistical tools, such as word frequency and cluster analysis (more information on text mining tools can be found at the National Centre for Text Mining).

While these methods have certainly attracted criticism and skepticism in terms of the interpretability of the output, they offer four important advantages for the analyst: namely categorization, data reduction, visualization, and speed.

1. Categorization. When analyzing the results of consultation exercises, analysts and policymakers must make sense of the high volume of disparate responses they receive; text mining supports the structuring of large amounts of this qualitative, discursive data into predefined or naturally occurring categories by storage and retrieval of sentence segments, indexing, and cross-referencing. Analysis of sentence segments from respondents with similar demographics (eg age) or opinions can itself be valuable, for example in the construction of descriptive typologies of respondents.

2. Data Reduction. Data reduction techniques include stemming (reduction of a word to its root form), combining of synonyms, and removal of non-informative “tool” or stop words (a minimal sketch of these operations follows this list). Hierarchical classifications, cluster analysis, and correspondence analysis methods allow the further reduction of texts to their structural components, highlighting the distinctive points of view associated with particular groups of respondents.

3. Visualization. Important points and interrelationships are easy to miss when reading by eye, and the rapid generation of visual overviews of responses (eg dendrograms, 3D scatter plots, heat maps) makes large and complex datasets easier to comprehend in terms of identifying the main points of view and dimensions of a public debate.

4. Speed. Speed depends on whether a special dictionary or vocabulary needs to be compiled for the analysis, and on the amount of coding required. Coding is usually relatively fast and straightforward, and the succinct overview of responses provided by these methods can reduce the time needed to analyse consultation responses.
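To make steps 1 and 2 concrete, here is a minimal sketch of stop-word removal and suffix stemming in Python. The stop-word list and suffix rules are illustrative assumptions, not the vocabulary of any particular product; production stemmers (such as the Porter stemmer) apply far subtler rules.

```python
from collections import Counter

STOP_WORDS = {"the", "a", "of", "to", "and", "in", "that", "is"}  # illustrative
SUFFIXES = ("ness", "ing", "ed", "s")  # crude rules, for demonstration only

def stem(word: str) -> str:
    """Strip the first matching suffix; deliberately naive."""
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def reduce_response(text: str) -> Counter:
    """Tokenise one consultation response, drop stop words, stem, and count."""
    tokens = [w.strip(".,;!?").lower() for w in text.split()]
    return Counter(stem(w) for w in tokens if w and w not in STOP_WORDS)

print(reduce_response("The licensing of venues is affecting the residents"))
# Counter({'licens': 1, 'venue': 1, 'affect': 1, 'resident': 1})
```

The deliberate crudeness of the suffix rules also previews the stemming pitfalls discussed below: rules that are too aggressive will happily conflate semantically unrelated words.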

Despite the above advantages of automated approaches to consultation analysis, text mining methods present several limitations. Automatic classification of responses runs the risk of missing or miscategorising distinctive or marginal points of view if sentence segments are too short, or if they rely on a rare vocabulary. Stemming can also generate problems if important semantic variations are overlooked (eg lumping together ‘ill+ness’, ‘ill+defined’, and ‘ill+ustration’). Other issues applicable to public e-consultation analysis include the danger that analysts distance themselves from the data, especially when converting words to numbers. This is quite apart from the issues of inter-coder reliability and data preparation, missing data, and insensitivity to figurative language, meaning and context, which can also result in misclassification when not human-verified.

However, when responding to criticisms of specific tools, we need to remember that different text mining methods are complementary, not mutually exclusive. No single solution to the analysis of qualitative and quantitative data is likely to exist; at the very least, exploratory techniques provide a useful first step that can be followed by a theory-testing model, or by triangulation exercises to confirm results obtained by other methods.

Apart from these technical issues, policy makers and analysts employing text mining methods for e-consultation analysis must also consider certain ethical issues in addition to those of informed consent, privacy, and confidentiality. First (of relevance to academics), respondents may not expect to end up as research subjects. They may simply be expecting to participate in a general consultation exercise, interacting exclusively with public officials and not indirectly with an analyst post hoc; much less ending up as a specific, traceable data point.

This has been a particularly delicate issue for healthcare professionals. Sharf (1999, 247) describes various negative experiences of following up online postings: one woman, on being contacted by a researcher seeking consent to gain insights from breast cancer patients about their personal experiences, accused the researcher of behaving voyeuristically and “taking advantage of people in distress.” Statistical interpretation of responses also presents its own issues, particularly if analyses are to be returned or made accessible to respondents.

Respondents might also be confused about or disagree with text mining as a method applied to their answers; indeed, it could be perceived as dehumanizing – reducing personal opinions and arguments to statistical data points. In a public consultation, respondents might feel somewhat betrayed that their views and opinions eventually result in just a dot on a correspondence analysis with no immediate, apparent meaning or import, at least in lay terms. Obviously the consultation organizer needs to outline clearly and precisely how qualitative responses can be collated into a quantifiable account of a sample population’s views.

This is an important point; in order to reduce both technical and ethical risks, researchers should ensure that their methodology combines both qualitative and quantitative analyses. While many text mining techniques provide useful statistical output, the UK Government’s prescribed Code of Practice on public consultation is quite explicit on the topic: “The focus should be on the evidence given by consultees to back up their arguments. Analyzing consultation responses is primarily a qualitative rather than a quantitative exercise” (2008, 12). This suggests that the perennial debate between quantitative and qualitative methodologists needs to be updated and better resolved.

References

Margetts, H. 2009. “The Internet and Public Policy.” Policy & Internet 1 (1).

Sharf, B. 1999. “Beyond Netiquette: The Ethics of Doing Naturalistic Discourse Research on the Internet.” In Doing Internet Research, ed. S. Jones, London: Sage.


Read the full paper: Bicquelet, A., and Weale, A. (2011) Coping with the Cornucopia: Can Text Mining Help Handle the Data Deluge in Public Policy Analysis? Policy & Internet 3 (4).

Dr Aude Bicquelet is a Fellow in LSE’s Department of Methodology. Her main research interests include computer-assisted analysis, Text Mining methods, comparative politics and public policy. She has published a number of journal articles in these areas and is the author of a forthcoming book, “Textual Analysis” (Sage Benchmarks in Social Research Methods, in press).

Predicting elections on Twitter: a different way of thinking about the data https://ensr.oii.ox.ac.uk/predicting-elections-on-twitter-a-different-way-of-thinking-about-the-data/ Sun, 04 Aug 2013 11:43:52 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1498
GOP presidential nominee Mitt Romney, centre, waving to crowd, after delivering his acceptance speech on the final night of the 2012 Republican National Convention. Image by NewsHour.

Recently, there has been a lot of interest in the potential of social media as a means to understand public opinion. Driven by an interest in the potential of so-called “big data”, this development has been fuelled by a number of trends. Governments have been keen to create techniques for what they term “horizon scanning”, which broadly means searching for the indications of emerging crises (such as runs on banks or emerging natural disasters) online, and reacting before the problem really develops. Governments around the world are already committing massive resources to developing these techniques. In the private sector, big companies’ interest in brand management has fitted neatly with the potential of social media monitoring. A number of specialised consultancies now claim to be able to monitor and quantify reactions to products, interactions or bad publicity in real time.

It should therefore come as little surprise that, like other research methods before them, these new techniques are now crossing over into the competitive political space. Social media monitoring, which in theory can extract information from tweets and Facebook posts and quantify positive and negative public reactions to people, policies and events, has an obvious utility for politicians seeking office. Broadly, the process works like this: vast datasets relating to an election, often running into millions of items, are gathered from social media sites such as Twitter. These data are then analysed using natural language processing software, which automatically identifies qualities relating to candidates or policies and attributes a positive or negative sentiment to each item. Finally, these sentiments and other properties mined from the text are totalised, to produce an overall figure for public reaction on social media.
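As a rough illustration of this totalisation step, the sketch below scores tweets against a small sentiment lexicon and sums the results for one candidate. The word lists, scoring rule, and example tweets are all assumptions made for demonstration; commercial monitoring tools rely on far richer language models.

```python
POSITIVE = {"win", "great", "strong", "support", "hope"}  # illustrative lexicon
NEGATIVE = {"lose", "weak", "fail", "scandal", "bad"}

def sentiment(tweet: str) -> int:
    """Score one tweet: +1 per positive word, -1 per negative word."""
    words = tweet.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def totalise(tweets: list[str], candidate: str) -> int:
    """Aggregate sentiment over all tweets mentioning the candidate."""
    relevant = (t for t in tweets if candidate.lower() in t.lower())
    return sum(sentiment(t) for t in relevant)

tweets = ["Romney looks strong after the debate", "Bad night for Romney"]
print(totalise(tweets, "Romney"))  # one positive hit, one negative hit -> 0
```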

These techniques have already been employed by the mainstream media to report on the 2010 British general election (when the country had its first leaders’ debate, an event ripe for this kind of research) and also in the 2012 US presidential election. This growing prominence led my co-author Mike Jensen of the University of Canberra and me to ask: exactly how useful are these techniques for predicting election results? In order to answer this question, we carried out a study on the Republican nomination contest in 2012, focused on the Iowa Caucus and Super Tuesday. Our findings are published in the current issue of Policy and Internet.

There are definite merits to this endeavour. US candidate selection contests are notoriously hard to predict with traditional public opinion measurement methods. This is because of the unusual and unpredictable make-up of the electorate. Voters are likely (to greater or lesser degrees depending on circumstances in a particular contest and election laws in the state concerned) to share a broadly similar outlook, so the electorate is harder for pollsters to model. Turnout can also vary greatly from one cycle to the next, adding an additional layer of unpredictability to the proceedings.

However, as any professional opinion pollster will quickly tell you, there is a big problem with trying to predict elections using social media: the people who use it are simply not like the rest of the population. In the case of the US, research from Pew suggests that only 16 per cent of internet users use Twitter, and while that figure goes up to 27 per cent of those aged 18-29, only 2 per cent of over 65s use the site. The proportion of the electorate that votes within those categories, however, is the inverse: over 65s vote at a relatively high rate compared to the 18-29 cohort. Furthermore, given that we know (from research such as Matthew Hindman’s The Myth of Digital Democracy) that only a very small proportion of people online actually create content about politics, those who comment on elections become an even more unusual subset of the population.

Thus (and I can say this as someone who does use social media to talk about politics!) we are looking at an unrepresentative sub-set (those interested in politics) of an unrepresentative sub-set (those using social media) of the population. This is hardly a good omen for election prediction, which relies on modelling the voting population as closely as possible. As such, it seems foolish to suggest that a simple aggregation of individual preferences can be equated with voting intentions.

However, in our article we suggest a different way of thinking about social media data, more akin to James Surowiecki’s idea of The Wisdom of Crowds. The idea here is that citizens commenting on social media should not be treated like voters, but rather as commentators, seeking to understand and predict emerging political dynamics. As such, the method we operationalized was more akin to an electoral prediction market, such as the Iowa Electronic Markets, than a traditional opinion poll.

We looked for two things in our dataset: sudden changes in the number of mentions of a particular candidate, and words indicating momentum for a particular candidate, such as “surge”. Both measures turned out to have predictive value. The former tracked Rick Santorum’s sudden surge in the Iowa caucus well, although it also tended to disproportionately emphasise less successful candidates, such as Michele Bachmann. The latter picked up the Santorum surge without generating false positives, a finding certainly worth further investigation.
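Both measures are simple enough to sketch in code. In the sketch below, the spike threshold and the momentum vocabulary are illustrative assumptions, not the parameters used in our study.

```python
MOMENTUM_WORDS = {"surge", "surging", "momentum", "rising"}  # assumed list

def mention_spikes(daily_counts: list[int], threshold: float = 2.0) -> list[bool]:
    """Flag days on which mentions jump to `threshold` times the previous day."""
    return [
        prev > 0 and curr / prev >= threshold
        for prev, curr in zip(daily_counts, daily_counts[1:])
    ]

def momentum_mentions(tweets: list[str], candidate: str) -> int:
    """Count tweets naming the candidate alongside momentum vocabulary."""
    return sum(
        1 for t in tweets
        if candidate.lower() in t.lower()
        and MOMENTUM_WORDS & set(t.lower().split())
    )

print(mention_spikes([100, 120, 400]))                             # [False, True]
print(momentum_mentions(["Santorum surge in Iowa!"], "Santorum"))  # 1
```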

Our aim in the paper was to present new ways of thinking about election prediction through social media, going beyond the paradigm established by the dominance of opinion polling. Our results indicate that there may be some value in this approach.


Read the full paper: Michael J. Jensen and Nick Anstead (2013) Psephological investigations: Tweets, votes, and unknown unknowns in the Republican nomination process. Policy and Internet 5 (2) 161–182.

Dr Nick Anstead was appointed as a Lecturer in the LSE’s Department of Media and Communication in September 2010, with a focus on Political Communication. His research focuses on the relationship between existing political institutions and new media, covering such topics as the impact of the Internet on politics and government (especially e-campaigning), electoral competition and political campaigns, the history and future development of political parties, and political mobilisation and encouraging participation in civil society.

Dr Michael Jensen is a Research Fellow at the ANZSOG Institute for Governance (ANZSIG), University of Canberra. His research spans the subdisciplines of political communication, social movements, political participation, and political campaigning and elections. In the last few years, he has worked particularly with the analysis of social media data and other digital artefacts, contributing to the emerging field of computational social science.

How effective is online blocking of illegal child sexual content? https://ensr.oii.ox.ac.uk/how-effective-is-online-blocking-of-illegal-child-sexual-content/ Fri, 28 Jun 2013 09:30:18 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1576
The recent announcement by ‘Anonymous Belgium’ that they would ‘liberate the Belgian Web’ on 15 July 2013 in response to blocking of websites by the Belgian government was revealed to be a promotional stunt by a commercial law firm wanting to protest non-transparent blocking of online content.

Ed: European legislation introduced in 2011 requires Member States to ensure the prompt removal of child pornography websites hosted in their territory and to endeavour to obtain the removal of such websites hosted outside; leaving open the option to block access by users within their own territory. What is problematic about this blocking?

Authors: From a technical point of view, all possible blocking methods that could be used by Member States are ineffective as they can all be circumvented very easily. The use of widely available technologies (like encryption or proxy servers) or tiny changes in computer configurations (for instance the choice of DNS-server), that may also be used for better performance or the enhancement of security or privacy, enable circumvention of blocking methods. Another problem arises from the fact that this legislation only targets website content while offenders often use other technologies such as peer-to-peer systems, newsgroups or email.

Ed: Many of these blocking activities stem from European efforts to combat child pornography, but you suggest that child protection may be used as a way to add other types of content to lists of blocked sites – notably those that purportedly violate copyright. Can you explain how this “mission creep” is occurring, and what the risks are?

Authors: Combating child pornography and child abuse is a universal and legitimate concern. With regard to this subject there is a worldwide consensus that action must be undertaken in order to punish abusers and protect children. Blocking measures are usually advocated on the basis of the argument that access to these images must be prevented, hence avoiding that users stumble upon child pornography inadvertently. Whereas this seems reasonable with regard to this particular type of content, in some countries governments increasingly use blocking mechanisms for other ‘illegal’ content, such as gambling or copyright-infringing content, often in a very non-transparent way, without clear or established procedures.

It is, in our view, especially important at a time when governments do not hesitate to carry out secret online surveillance of citizens without any transparency or accountability, that any interference with online content must be clearly prescribed by law, have a legitimate aim and, most importantly, be proportional and not go beyond what is necessary to achieve that aim. In addition, the role of private actors, such as ISPs, search engine companies or social networks, must be very carefully considered. It must be clear that decisions about which content or behaviours are illegal and/or harmful must be taken or at least be surveyed by the judicial power in a democratic society.

Ed: You suggest that removal of websites at their source (mostly in the US and Canada) is a more effective means of stopping the distribution of child pornography — but that European law enforcement has often been insufficiently committed to such action. Why is this? And how easy are cross-jurisdictional efforts to tackle this sort of content?

Authors: The blocking of websites, although an ineffective method of making content inaccessible, is a quick way to be seen to take action against the appearance of unwanted material on the Internet. The removal of content, on the other hand, requires the identification not only of those responsible for hosting the content but, more importantly, of the actual perpetrators. This is of course a more intrusive and lengthy process, for which law enforcement agencies currently lack resources.

Moreover, these agencies may indeed run into obstacles related to territorial jurisdiction and difficult international cooperation. However, prioritising and investing in actual removal of content, even though not feasible in certain circumstances, will ensure that child sexual abuse images do not further circulate, and, hence, that the risk of repetitive re-victimization of abused children is reduced.


Read the full paper: Karel Demeyer, Eva Lievens and Jos Dumortier (2012) Blocking and Removing Illegal Child Sexual Content: Analysis from a Technical and Legal Perspective. Policy and Internet 4 (3-4).

Karel Demeyer, Eva Lievens and Jos Dumortier were talking to blog editor Heather Ford.

Presenting the moral imperative: effective storytelling strategies by online campaigning organisations https://ensr.oii.ox.ac.uk/presenting-the-moral-imperitive-storytelling-strategies-by-online-campaigning-organisations/ Tue, 25 Jun 2013 09:45:35 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1309 Online campaigning organisations are on the rise. They have captured the imagination of citizens and scholars alike with their ability to use rapid response tactics to engage with public policy debate and mobilize citizens. Early on Andrew Chadwick (2007) labeled these new campaign organisations as ‘hybrids’: using both online and offline political action strategies, as well as intentionally switching repertoires to sometimes act like a mass mobilisation social movement, and other times like an insider interest group.

These online campaigning organisations run multi-issue agendas, are geographically decentralized, and run sophisticated media strategies. The best known of these are MoveOn in the US, the internationally focused Avaaz, and GetUp! in Australia. However, new online campaigning organisations are emerging all the time, more often than not with direct lineage, through former staff and similar tactics, to this first wave. These newer organisations include the UK-based 38 Degrees; SumOfUs, which works on consumer issues to hold corporations accountable; and Change.Org, a for-profit organisation that hosts and develops petitions for grassroots groups.

Existing civil society organisations are also being challenged to fundamentally change their approach, to move political tactics and communications online, and to grow their member lists. David Karpf (2012) has branded this the “MoveOn Effect”: the success of online campaigning organisations like MoveOn has fundamentally changed and disrupted the advocacy organisation scene. But how has this shift occurred? How have these new organisations succeeded in being both innovative and politically successful?

One increasingly common answer is to focus on how they have developed low-threshold online tactics that reduce the risk to participants. These include issue- and campaign-specific online petitions, letter writing, emails, donating money, and boycotts. The other answer is to focus more closely on the discursive tactics these organisations use in their campaigns, based on a shared commitment to a storytelling strategy and the practical realisation of a ‘theory of change’: that is, to ask how campaigns produce successful stories that follow a concrete theory of why taking action inevitably leads to a desired result.

Storytelling is a device for explaining politics and a campaign via “cause and effect relations, through its sequencing of events, rather than by appeals to standards of logic and proof” (Polletta et al. 2011, 111). These campaign stories characteristically have a plot and identifiable characters, and a beginning and middle to the story, but the recipient of the story can create, or rather act out, the end. Framing is important to understanding social movement action, but a narrative- or storytelling-driven analysis focuses more on how language or rhetoric is used, and reveals the underlying “common sense” and emotional frames used in online campaigns’ delivery of messages (Polletta 2009). Polletta et al. (2011, 122) suggest that activists have been successful against better-resourced and influential opponents and elites when they “sometimes but not always, have been able to exploit popular associations of narrative with people over power, and moral urgency over technical rationality”.

We have identified four stages of storytelling that need to occur for a campaign to be successful:

  1. An emotional identification with the issue by the story recipient, to mobilize participation
  2. A shared sense of community on the issue, to build solidarity (‘people over power’)
  3. Moral urgency for action (rather than technical persuasion), to resolve the issue and create social change
  4. Securing public and political support by neutralising counter-movements.

The new online campaigning organisations all prioritise a storytelling approach in their campaigns, using it to build their own autobiographical story and to differentiate what they do from ‘politics as usual’, characterised as party-based, adversarial politics. Harvard scholar and organising practitioner Marshall Ganz’s ideas on the practice of storytelling underpin the philosophy of the New Organizing Institute, which provides training for increasing numbers of online activists. Having received training, these professional campaigners informally join the network of established and emerging ‘theory of change’ organisations such as MoveOn, AVAAZ, Organising for America, the Progressive Change Campaign Committee, SumOfUs, and so on.

GetUp! is a member of this network, has over 600,000 members in Australia, and has conducted high-profile public policy campaigns on issues as diverse as mental health, electoral law, same-sex marriage, and climate change. GetUp!’s communications strategy tries to use storytelling to reorient Australian political debate — and the nature of politics itself — in an affective way. And underpinning all their political tactics is the construction of effective online campaign stories. GetUp! has used stories to help citizens, and to a lesser extent, decision-makers, identify with an issue, build community, and act in recognition of the moral urgency for political change.

Yet despite GetUp!’s commitment to a storytelling technique, it does not always work: these organisations rarely publicise their failed campaigns, or those that do not even get past the initial email ‘ask’. It is important to look at how campaigns unfold to see how storytelling develops, and also to judge whether it succeeds. This moves the analysis onto an organisation’s whole campaign, rather than studying only decontextualised emails or online petitions. In contrasting two campaigns in depth, we judged one, on mental health policy, to be a success in meeting the four storytelling criteria, and the other, on climate change policy (which was promoted externally as a success), to actually be a storytelling failure.

The mental health story was able to build solidarity and emotional identification around families and friends of those with illness (not sufferers themselves) after celebrity experts launched the campaign to bring awareness to and increase funding for mental health. Mental health was presented by GetUp! as a purely moral dilemma, with very little mention by any opponents of the economic implications of policy reform. In the end the policy was changed, an extra $2.2 billion of funding for mental health was announced in the 2011 Federal Budget, and the Australian Prime Minister appeared with GetUp! in an online video to make the funding announcement.

GetUp’s climate change storytelling, however, failed on all four criteria. Despite national policy change taking place similar to what they had advocated, GetUp!’s climate change campaign did not achieve the level of member or public mobilisation achieved by their mental health campaign. GetUp! used partisan, adversarial tactics that can be partly attributed to climate change becoming an increasingly polarised issue in Australian political debate. This was particularly the case as the oppositional counter-movement successfully reframed climate change as solely an economic issue, focusing on the imposition of an expensive new tax. This story defeated GetUp’s moral urgency story, and their attempt to create ‘people-power’ mobilised for a shared environmental concern.

Why is thinking about this important? For a few reasons. It helps us to see online tactics within the context of a broader political campaign, and challenges us to think about how to judge both successful mobilisation and political influence of hybrid online campaign organisations. Yet it also points to the limitations of an affective approach based on moral urgency alone. Technical persuasion and, more often than not, economic reality still matter for both mobilisation and political change.

References

Chadwick, Andrew (2007) “Digital Network Repertoires and Organizational Hybridity” Political Communication, 24 (3): 283-301.

Karpf, David (2012) The MoveOn Effect: The unexpected transformation of American political advocacy. Oxford: Oxford University Press.

Polletta, Francesca (2009) “Storytelling in social movements” in Culture, Social Movements and Protest ed. Hank Johnston Surrey: Ashgate, 33-54.

Polletta, Francesca, Pang Ching, Bobby Chen, Beth Gharrity Gardner, and Alice Motes (2011) “The sociology of storytelling” Annual Review of Sociology, 37: 109–30.


Read the full paper: Vromen, A. and Coleman, W. (2013) Online Campaigning Organizations and Storytelling Strategies: GetUp! in Australia. Policy and Internet 5 (1).

The global fight over copyright control: Is David beating Goliath at his own game? https://ensr.oii.ox.ac.uk/the-global-fight-over-copyright-control-is-david-beating-goliath-at-his-own-game/ Mon, 10 Jun 2013 11:27:11 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1262
Anti-HADOPI march in Paris, 2009. Image by kurto.

In the past few years, many governments have attempted to curb online “piracy” by enforcing harsher copyright control upon Internet users. This trend is now well documented in the academic literature, as with Jon Bright and José Agustina‘s or Sebastian Haunss‘ recent reviews of such developments.

However, as the digital copyright control bills of the 21st century reached parliamentary floors, several of them failed to pass. Many of these legislative failures, such as the postponement of the SOPA and PIPA bills in the United States, succeeded in mobilizing large audiences and received widespread media coverage.

Writing about these bills and the related events that led to the demise of the similarly intentioned Anti-Counterfeiting Trade Agreement (ACTA), Susan Sell, a seasoned analyst of intellectual property enforcement, points to the transnational coalition of Internet users at the heart of these outcomes. As she puts it:

In key respects, this is a David and Goliath story in which relatively weak activists were able to achieve surprising success against the strong.

That analogy also appears in our recently published article in Policy & Internet, which focuses on the groups that fought several digital copyright control bills as they went through the European and French parliaments in 2007-2009 — most notably the EU “Telecoms Package” and the French “HADOPI” laws.

Like Susan Sell, our analysis shows “David” civil society groups formed by socially and technically skilled activists disrupting the work of “Goliath” coalitions of powerful actors that had previously been successful at converting the interests of the so-called “creative industries” into copyright law.

To explain this process, we stress the importance of digital environments for providing contenders of copyright reform with a robust discursive opportunity structure — a space in which activist groups could defend and diffuse alternative understandings and practices of copyright control and telecommunication reform.

These counter-frames and practices refer to the Internet as a public good, and make openness, sharing and creativity central features of the new digital economy. They also require that copyright control and telecom regulation respect basic principles of democratic life, such as the right to access information.

Once put into the public space by skilled activists from the free software community and beyond, this discourse chimed with a larger audience, which eventually led many European and French parliamentarians to oppose “graduated response” and “three-strikes” initiatives that threatened Internet users with Internet access termination for successive copyright infringement. The reforms that we studied had different legal outcomes, thereby reflecting the current state of copyright regulation.

In our analysis, we say a lot more about the kind of skills that we briefly allude to here, such as political coding abilities to support political and legal analysis. We also draw on previous work by Andrew Chadwick to forge the concept of digital network repertoires of contention, by which we mean the tactical use of digital communication to mobilize individuals into loose protest groups.

This part of our research sheds light on how “David” ended up beating “Goliath”, with activists relying on their technical skills and high levels of digital literacy to overcome the logic of collective action and to counterbalance their comparatively weak economic resources.

However, as we write in our paper, David does not systematically beat Goliath over copyright control and telecom regulation. The “three-strikes” or “graduated response” approach to unauthorized file-sharing, where Internet users are monitored and sanctioned if suspected of digital “piracy”, is still very much alive.

France is an interesting case in point, as it pioneered this scheme under Nicolas Sarkozy’s presidency. Although the current left-wing government seems determined to dismantle the “HADOPI” body set up by its predecessor, which has proven largely ineffective in curbing online copyright infringement, it has not renounced the monitoring and sanctioning of illegal file-sharing.

Furthermore, as both our case studies illustrate, online collective action had to be complemented by offline lobbying and alliances with like-minded parliamentary actors, consumer groups and businesses to work effectively. The extent to which activism has actually gone ‘digital’ therefore requires some nuance.

Finally, as we stress in our article and as Yana observes in her literature review on Internet content regulation in liberal democracies, further comparative work is needed to assess whether the “Davids” of Internet activism are beating the “Goliaths” in the global fight over online file-sharing and copyright control.

We therefore hope that our article will incite other researchers to study the social groups that compete over intellectual property lawmaking. The legislative landscape is rife with reforms of copyright law and telecom regulation, and the conflicts that they generate carry important lessons for Internet politics scholars.

References

Breindl, Y. and Briatte, F. (2013) Digital Protest Skills and Online Activism Against Copyright Reform in France and the European Union. Policy and Internet 5 (1) 27-55.

Breindl, Y. (2013) Internet content regulation in liberal democracies. A literature review. Working Papers on Digital Humanities, Institut für Politikwissenschaft der Georg-August-Universität Göttingen.

Bright, J. and Agustina, J.R. (2013) Mediating Surveillance: The Developing Landscape of European Online Copyright Enforcement. Journal of Contemporary European Research 9 (1).

Chadwick, A. (2007) Digital Network Repertoires and Organizational Hybridity. Political Communication 24 (3).

Haunss, S. (2013) Conflicts in the Knowledge Society: The Contentious Politics of Intellectual Property. Cambridge Intellectual Property and Information Law (No. 20), Cambridge University Press.

Sell, S.K. (2013) Revenge of the “Nerds”: Collective Action against Intellectual Property Maximalism in the Global Information Age. International Studies Review 15 (1) 67-85.


Read the full paper: Yana Breindl and François Briatte (2013) Digital Protest Skills and Online Activism Against Copyright Reform in France and the European Union. Policy and Internet 5 (1) 27-55.

How accessible are online legislative data archives to political scientists? https://ensr.oii.ox.ac.uk/how-accessible-are-online-legislative-data-archives-to-political-scientists/ Mon, 03 Jun 2013 12:07:40 +0000 http://blogs.oii.ox.ac.uk/policy/?p=654
A view inside the House chamber of the Utah State Legislature. Image by deltaMike.

Public demands for transparency in the political process have long been a central feature of American democracy, and recent technological improvements have considerably facilitated the ability of state governments to respond to such public pressures. With online legislative archives, state legislatures can make available a large number of public documents. In addition to meeting the demands of interest groups, activists, and the public at large, these websites enable researchers to conduct single-state studies, cross-state comparisons, and longitudinal analysis.

While online legislative archives are, in theory, rich sources of information that save researchers valuable time as they gather data across the states, in practice, government agencies are rarely completely transparent, often do not provide clear instructions for accessing the information they store, seldom use standardized norms, and can overlook user needs. These obstacles to state politics research are longstanding: Malcolm Jewell noted almost three decades ago the need for “a much more comprehensive and systematic collection and analysis of comparative state political data.” While the growing availability of online legislative resources helps to address the first problem of collection, the limitations of search and retrieval functions remind us that the latter remains a challenge.

The fifty state legislative websites are quite different; few of them are intuitive or adequately transparent, and there is no standardized or systematic process to retrieve data. For many states, it is not possible to identify issue-specific bills that are introduced and/or passed during a specific period of time, let alone the sponsors or committees, without reading the full text of each bill. For researchers who are interested in certain time periods, policy areas, committees, or sponsors, the inability to set filters or immediately see relevant results limits their ability to efficiently collect data.

Frustrated by the obstacles we faced in undertaking a study of state-level immigration legislation before and after September 11, 2001, we decided instead to evaluate each state legislative website — a “state of the states” analysis — to help scholars who need to understand the limitations of the online legislative resources they may want to use. We evaluated three main dimensions on an eleven-point scale: (1) the number of searchable years; (2) the keyword search filters; and (3) the information available on the immediate results pages. The number of searchable sessions is crucial for researchers interested in longitudinal studies, before/after comparisons, other time-related analyses, and the activity of specific legislators across multiple years. The “search interface” helps researchers to define, filter, and narrow the scope of the bills—a particularly important feature when keywords can generate hundreds of possibilities. The “results interface” allows researchers to determine if a given bill is relevant to a research project.
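The index itself can be thought of as a simple additive rubric. The sketch below shows one plausible way to combine the three dimensions into an eleven-point score; the per-dimension caps are our own illustrative assumptions here, not the published coding scheme.

```python
def score_site(searchable_years: int, search_filters: int, results_fields: int) -> int:
    """Combine the three usability dimensions into a single 0-11 score.
    The caps below are assumed for illustration only."""
    years_pts = min(searchable_years, 4)    # archival depth
    filter_pts = min(search_filters, 4)     # search interface
    results_pts = min(results_fields, 3)    # results interface
    return years_pts + filter_pts + results_pts

print(score_site(searchable_years=10, search_filters=2, results_fields=1))  # 7
```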

Our paper builds on the work of other scholars and organizations interested in state policy. To help begin a centralized space for data collection, Kevin Smith and Scott Granberg-Rademacker publicly invited “researchers to submit descriptions of data sources that were likely to be of interest to state politics and policy scholars,” calling for “centralized, comprehensive, and reliable datasets” that are easy to download and manipulate. In this spirit, Jason Sorens, Fait Muedini, and William Ruger introduced a free database that offered a comprehensive set of variables involving over 170 public policies at the state and local levels in order to “reduce reduplication of scholarly effort.” The National Conference of State Legislatures (NCSL) provides links to state legislatures, bill lists, constitutions, reports, and statutes for all fifty states. The State Legislative History Research Guides compiled by the Indiana University Law School also include links to legislative and historical resources for the states, such as the Legislative Reference Library of Texas. However, to our knowledge, no existing resource assesses usability across all state websites.

So, what did we find during our assessment of the state websites? In general, we observed that the archival records as well as the search and results functions leave considerable room for improvement. The maximum possible score was 11 in each year, and the average was 3.87 in 2008 and 4.25 in 2010. For researchers interested in certain time periods, policy areas, committees, or sponsors, the inability to set filters, immediately see relevant results, and access past legislative sessions limits their ability to complete projects in a timely manner (or at all). We also found a great deal of variation in site features, content, and navigation. Greater standardization would improve access to information about state policymaking by researchers and the general public—although some legislators may well see benefits to opacity.

While we noted some progress over the study period, not all change was positive. By 2010, two states had scored 10 points (no state scored the full 11), fewer states had very low scores, and the average score rose slightly from 3.87 to 4.25 (out of 11). This suggests slow but steady improvement, and the provision of a baseline of support for researchers. However, a quarter of the states showed score drops over the study period, for the most part reflecting the adoption of “Powered by Google” search tools that used only keywords, and some in a very limited manner. If the latter becomes a trend, we could see websites becoming less, not more, user friendly in the future.

In addition, our index may serve as a proxy variable for state government transparency. While the website scores were not statistically associated with Robert Erikson, Gerald Wright, and John McIver’s measure of state ideology, there may nevertheless be promise for future research along these lines; additional transparency determinants worth testing include legislative professionalism and social capital. Moving forward, the states might consider creating a working group to share ideas and best practices, perhaps through an organization like the National Conference of State Legislatures, rather than the national government, as some states might resist leadership from D.C. on federalist grounds.

Helen Margetts (2009) has noted that “The Internet has the capacity to provide both too much (which poses challenges to analysis) and too little data (which requires innovation to fill the gaps).” It is notable, and sometimes frustrating, that state legislative websites illustrate both dynamics. As datasets come online at an increasing rate, it is also easy to forget that websites can vary in terms of user friendliness, hierarchical structure, search terms and functions, terminology, and navigability — causing unanticipated methodological and data capture problems (i.e. headaches) to scholars working in this area.


Read the full paper: Taofang Huang, David Leal, B.J. Lee, and Jill Strube (2012) Assessing the Online Legislative Resources of the American States. Policy and Internet 4 (3-4).

Online collective action and policy change: new special issue from Policy and Internet https://ensr.oii.ox.ac.uk/online-collective-action-and-policy-change-new-special-issue-from-policy-and-internet/ Mon, 18 Mar 2013 14:22:51 +0000 http://blogs.oii.ox.ac.uk/policy/?p=854 The Internet has multiplied the platforms available to influence public opinion and policy making. It has also provided citizens with a greater capacity for coordination and mobilisation, which can strengthen their voice and representation in the policy agenda. As waves of protest sweep both authoritarian regimes and liberal democracies, this rapidly developing field calls for more detailed enquiry. However, research exploring the relationship between online mobilisation and policy change is still limited. This special issue of ‘Policy and Internet’ addresses this gap through a variety of perspectives. Contributions to this issue view the Internet both as a tool that allows citizens to influence policy making, and as an object of new policies and regulations, such as data retention, privacy, and copyright laws, around which citizens are mobilising. Together, these articles offer a comprehensive empirical account of the interface between online collective action and policy making.

Within this framework, the first article in this issue, “Networked Collective Action and the Institutionalized Policy Debate: Bringing Cyberactivism to the Policy Arena?” by Stefania Milan and Arne Hintz (2013), looks at the Internet as both a tool of collective action and an object of policy. The authors provide a comprehensive overview of how computer-mediated communication creates not only new forms of organisational structure for collective action, but also new contentious policy fields. By focusing on what the authors define as ‘techie activists,’ Milan and Hintz explore how new grassroots actors participate in policy debates around the governance of the Internet at different levels. This article provides empirical evidence for what Kriesi (1995) describes as “windows of opportunity” for collective action to contribute to the policy debate around this new space of contentious politics. Milan and Hintz demonstrate how this has happened from the first World Summit on the Information Society (WSIS) in 2003 to more recent debates about Internet regulation.

Yana Breindl and François Briatte’s (2013) article “Digital Protest Skills and Online Activism Against Copyright Reform in France and the European Union” complements Milan and Hintz’s analysis by looking at how the regulation of copyright issues opens up new spaces of contentious politics. The authors compare how online and offline initiatives and campaigns in France around the “Droit d’Auteur et les Droits Voisins dans la Société de l’Information” (DADVSI) and “Haute Autorité pour la diffusion des œuvres et la protection des droits sur Internet” (HADOPI) laws, and in Europe around the Telecoms Package Reform, have contributed to the deliberations within the EU Parliament. They thus add to the rich debate on the contentious issues of intellectual property rights, demonstrating how collective action contributes to this debate at the European level.

The remaining articles in this special issue focus more on the online tactics and strategies of collective actors and the opportunities opened by the Internet for them to influence policy makers. In her article, “Activism and the Online Mediation Opportunity Structure: Attempts to Impact Global Climate Change Policies?” Julie Uldam (2013) discusses the tactics used by London-based environmental activists to influence policy making during the 17th UN climate conference (COP17) in 2011. Based on ethnographic research, Uldam traces the relationship between online modes of action and problem identification and demands. She also discusses the differences between radical and reformist activists in both their preferences for online action and their attitudes towards policy makers. Drawing on Cammaerts’ (2012) framework of the mediation opportunity structure, Uldam shows that radical activists preferred online tactics that aimed at disrupting the conference, since they viewed COP17 as representative of an unjust system. However, their lack of technical skills and resources prevented them from disrupting the conference in the virtual realm. Reformist activists, on the other hand, considered COP17 as a legitimate adversary, and attempted to influence its politics mainly through the diffusion of alternative information online.

The article by Ariadne Vromen and William Coleman (2013), “Online Campaigning Organizations and Storytelling Strategies: GetUp! in Australia,” also investigates a climate change campaign but shifts the focus to the new ‘hybrid’ collective actors, who use the Internet extensively for campaigning. Based on a case study of GetUp!, Vromen and Coleman examine the storytelling strategies employed by the organisation in two separate campaigns, one around climate change, the other around mental health. The authors investigate the factors that led one campaign to be successful and the other to have limited resonance. They also skilfully highlight the difficulties encountered by new collective actors in gaining legitimacy and influencing policy making. In this respect, GetUp! used storytelling to set itself apart from traditional party-based politics and to emphasise its identity as an organiser and representative of grassroots communities, rather than as an insider lobbyist or disruptive protestor.

Romain Badouard and Laurence Monnoyer-Smith (2013), in their article “Hyperlinks as Political Resources: The European Commission Confronted with Online Activism,” explore some of the more structured ways in which citizens use online tools to engage with policy makers. They investigate the political opportunities offered by the e-participation and e-government platforms of the European Commission for activists wishing to make their voice heard in the European policy making sphere. They focus particularly on strategic uses of web technical resources and hyperlinks, which allow citizens to refine their proposals and thus increase their influence on European policy.

Finally, Jo Bates’ (2013) article “The Domestication of Open Government Data Advocacy in the UK: A Neo-Gramscian Analysis” provides a pertinent framework for understanding the policy challenges posed by the issue of open data. The digitisation of data offers new opportunities for increasing transparency, traditionally considered a fundamental public good. By focusing on the Open Government Data initiative in the UK, Bates explores the policy challenges generated by increasing transparency via new Internet platforms, applying the established theoretical instruments of Gramscian ‘Trasformismo.’ This article frames the open data debate in terms consistent with the literature on collective action, and provides empirical evidence as to how citizens have taken an active role in the debate on this issue, thereby challenging the policy debate on public transparency.

Taken together, these articles advance our understanding of the interface between online collective action and policy making. They introduce innovative theoretical frameworks and provide empirical evidence around the new forms of collective action, tactics, and contentious politics linked with the emergence of the Internet. If, as Melucci (1996) argues, contemporary social movements are sensors of new challenges within current societies, they can be an enriching resource for the policy debate arena. Gaining a better understanding of how the Internet might strengthen this process is a valuable line of enquiry.

Read the full article at: Calderaro, A., and Kavada, A. (2013) “Challenges and Opportunities of Online Collective Action for Policy Change“, Policy and Internet 5(1).

Twitter: @AnastasiaKavada / @andreacalderaro
Web: Anastasia’s Personal Page / Andrea’s Personal Page

References

Badouard, R., and Monnoyer-Smith, L. 2013. Hyperlinks as Political Resources: The European Commission Confronted with Online Activism. Policy and Internet 5(1).

Bates, J. 2013. The Domestication of Open Government Data Advocacy in the UK: A Neo-Gramscian Analysis. Policy and Internet 5(1).

Breindl, Y., and Briatte, F. 2013. Digital Protest Skills and Online Activism Against Copyright Reform in France and the European Union. Policy and Internet 5(1).

Cammaerts, Bart. 2012. “Protest Logics and the Mediation Opportunity Structure.” European Journal of Communication 27(2): 117–134.

Kriesi, Hanspeter. 1995. “The Political Opportunity Structure of New Social Movements: Its Impact on Their Mobilization.” In The Politics of Social Protest, eds. J. C. Jenkins and B. Klandermans. London: UCL Press, pp. 167–198.

Melucci, Alberto. 1996. Challenging Codes: Collective Action in the Information Age. Cambridge: Cambridge University Press.

Milan, S., and Hintz, A. 2013. Networked Collective Action and the Institutionalized Policy Debate: Bringing Cyberactivism to the Policy Arena? Policy and Internet 5(1).

Uldam, J. 2013. Activism and the Online Mediation Opportunity Structure: Attempts to Impact Global Climate Change Policies? Policy and Internet 5(1).

Vromen, A., and Coleman, W. 2013. Online Campaigning Organizations and Storytelling Strategies: GetUp! in Australia. Policy and Internet 5(1).

Did Libyan crisis mapping create usable military intelligence? https://ensr.oii.ox.ac.uk/did-libyan-crisis-mapping-create-usable-military-intelligence/ Thu, 14 Mar 2013 10:45:22 +0000 http://blogs.oii.ox.ac.uk/policy/?p=817 The Middle East has recently witnessed a series of popular uprisings against autocratic rulers. In mid-January 2011, Tunisian President Zine El Abidine Ben Ali fled his country, and just four weeks later, protesters overthrew the regime of Egyptian President Hosni Mubarak. Yemen’s government was also overthrown in 2011, and Morocco, Jordan, and Oman saw significant governmental reforms leading, if only modestly, toward the implementation of additional civil liberties.

Protesters in Libya called for their own ‘day of rage’ on February 17, 2011, marked by violent protests in several major cities, including the capital, Tripoli. As they transformed from ‘protestors’ to ‘Opposition forces’, they began pushing information onto Twitter, Facebook, and YouTube, reporting their firsthand experiences of what had turned into a civil war virtually overnight. The evolving humanitarian crisis prompted the United Nations to request the creation of the Libya Crisis Map, which was made public on March 6, 2011. Other, more focused crisis maps followed, and were widely distributed on Twitter.

While the map was initially populated with humanitarian information pulled from the media and online social networks, as the imposition of an internationally enforced No Fly Zone (NFZ) over Libya became imminent, information began to appear on it that appeared to be of a tactical military nature. While many people continued to contribute conventional humanitarian information to the map, the sudden shift toward information that could aid international military intervention was unmistakable.

How useful was this information, though? Agencies in the U.S. Intelligence Community convert raw data into usable information (incorporated into finished intelligence) by utilizing some form of the Intelligence Process. As outlined in the U.S. military’s joint intelligence manual, this consists of six interrelated steps all centered on a specific mission. It is interesting that many Twitter users, though perhaps unaware of the intelligence process, replicated each step during the Libyan civil war, producing finished intelligence adequate for consumption by NATO commanders and rebel leadership.

It was clear from the beginning of the Libyan civil war that very few people knew exactly what was happening on the ground. Even NATO, according to one of the organization’s spokesmen, lacked the ground-level informants necessary to get a full picture of the situation in Libya. There is no public information about the extent to which military commanders used information from crisis maps during the Libyan civil war. According to one NATO official, “Any military campaign relies on something that we call ‘fused information’. So we will take information from every source we can… We’ll get information from open source on the internet, we’ll get Twitter, you name any source of media and our fusion centre will deliver all of that into useable intelligence.”

The data in these crisis maps came from a variety of sources, including journalists, official press releases, and civilians on the ground who updated blogs and/or maintained telephone contact. The @feb17voices Twitter feed (translated into English and used to support the creation of The Guardian’s and the UN’s Libya Crisis Map) included accounts of live phone calls from people on the ground in areas where the Internet was blocked, and where there was little or no media coverage. Twitter users began compiling data and information; they tweeted and retweeted data they collected, information they filtered and processed, and their own requests for specific data and clarifications.

Information from various Twitter feeds was then published in detailed maps of major events that contained information pertinent to military and humanitarian operations. For example, as fighting intensified, @LibyaMap’s updates began to provide a general picture of the battlefield, including specific, sourced intelligence about the progress of fighting, humanitarian and supply needs, and the success of some NATO missions. Although it did not explicitly state its purpose as spreading mission-relevant intelligence, the nature of the information renders alternative motivations highly unlikely.

Interestingly, the Twitter users featured in a June 2011 Guardian article had already explicitly expressed their intention of affecting military outcomes in Libya by providing NATO forces with specific geographical coordinates for targeting Qadhafi regime forces. We could speculate at this point about the extent to which the Intelligence Community might have guided Twitter users to participate in the intelligence process: while NATO and the Libyan Opposition issued no explicit intelligence requirements to the public, both tweeted stories about social network users trying to help NATO, likely leading their online supporters to draw their own conclusions.

It appears from similar maps created during the ongoing uprisings in Syria that the creation of finished intelligence products by crisis mappers may become a regular occurrence. Future study should focus on determining the motivations of mappers for collecting, processing, and distributing intelligence, particularly as a better understanding of their motivations could inform research on the ethics of crisis mapping. It is reasonable to believe that some (or possibly many) crisis mappers would be averse to their efforts being used by military commanders to target “enemy” forces and infrastructure.

Indeed, some are already questioning the direction of crisis mapping in the absence of professional oversight (Global Brief 2011): “[If] crisis mappers do not develop a set of best practices and shared ethical standards, they will not only lose the trust of the populations that they seek to serve and the policymakers that they seek to influence, but (…) they could unwittingly increase the number of civilians being hurt, arrested or even killed without knowing that they are in fact doing so.”


Read the full paper: Stottlemyre, S., and Stottlemyre, S. (2012) Crisis Mapping Intelligence Information During the Libyan Civil War: An Exploratory Case Study. Policy and Internet 4 (3-4).

Uncovering the structure of online child exploitation networks https://ensr.oii.ox.ac.uk/uncovering-the-structure-of-online-child-exploitation-networks/ https://ensr.oii.ox.ac.uk/uncovering-the-structure-of-online-child-exploitation-networks/#comments Thu, 07 Feb 2013 10:11:17 +0000 http://blogs.oii.ox.ac.uk/policy/?p=661 The Internet has provided the social, individual, and technological circumstances needed for child pornography to flourish. Sex offenders have been able to utilize the Internet for dissemination of child pornographic content, for social networking with other pedophiles through chatrooms and newsgroups, and for sexual communication with children. A 2009 United Nations report estimates that there are more than four million websites containing child pornography, with 35 percent of them depicting serious sexual assault [1]. Even if this and other reports exaggerate the true prevalence of such websites by a wide margin, the fact remains that they are pervasive on the World Wide Web.

Despite large investments of law enforcement resources, online child exploitation is nowhere near under control, and while there are numerous technological products to aid in finding child pornography online, they still require substantial human intervention. Nevertheless, steps can be taken to further automate these searches, reducing the amount of content police officers have to examine and increasing the time they can spend investigating individuals.

While law enforcement agencies will aim for maximum disruption of online child exploitation networks by targeting the most connected players, there is a general lack of research on the structural nature of these networks; something we aimed to address in our study, by developing a method to extract child exploitation networks, map their structure, and analyze their content. Our custom-written Child Exploitation Network Extractor (CENE) automatically crawls the Web from a user-specified seed page, collecting information about the pages it visits by recursively following the links out of the page; the result of the crawl is a network structure containing information about the content of the websites, and the linkages between them [2].
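
To make the approach concrete, here is a minimal sketch (in Python) of the kind of breadth-first link crawl described above. It is an illustration under stated assumptions, not the CENE tool itself: the function and parameter names are hypothetical, and the real extractor also records page content, not just links.

```python
# A minimal, hypothetical sketch of a breadth-first web crawler: it records
# each visited page and its outbound links as an adjacency map (a directed
# link network). Not the authors' CENE tool; names are illustrative only.
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl(seed_url: str, max_pages: int = 100) -> dict:
    """Return an adjacency map: page URL -> set of outbound link URLs."""
    graph = {}
    queue = deque([seed_url])
    while queue and len(graph) < max_pages:
        url = queue.popleft()
        if url in graph:
            continue  # already visited
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue  # skip unreachable pages
        # Resolve relative links against the current page's URL.
        links = {
            urljoin(url, anchor["href"])
            for anchor in BeautifulSoup(html, "html.parser").find_all("a", href=True)
        }
        graph[url] = links
        queue.extend(links - graph.keys())
    return graph
```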

We chose ten websites as starting points for the crawls; four were selected from a list of known child pornography websites, while the other six were selected and verified through Google searches using child pornography search terms. To guide the network extraction process we defined a set of 63 keywords, which included words commonly used by the Royal Canadian Mounted Police to find illegal content, most of them code words used by pedophiles. Websites included in the analysis had to contain at least seven of the 63 unique keywords on a given web page; manual verification showed us that seven keywords distinguished well between child exploitation web pages and regular web pages. Ten sports networks were analyzed as a control.
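
As a rough illustration of that inclusion rule, the filter below keeps a page only if it contains at least seven of the pre-defined keywords. This is a hedged sketch: the placeholder terms stand in for the real 63-keyword list, which is of course not reproducible here.

```python
# Hypothetical sketch of the seven-of-63 inclusion rule described above.
# The placeholder terms below stand in for the study's actual keyword list.
PLACEHOLDER_KEYWORDS = {"term01", "term02", "term03"}  # stand-ins only

def is_candidate_page(page_text: str, keywords=PLACEHOLDER_KEYWORDS,
                      threshold: int = 7) -> bool:
    """True if the page contains at least `threshold` distinct keywords."""
    words = set(page_text.lower().split())
    return len(keywords & words) >= threshold
```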

The web crawler proved able to identify child exploitation websites correctly, with a clear difference found in the hardcore content hosted by child exploitation and non-child exploitation websites. Our results further suggest that a ‘network capital’ measure — which takes into account network connectivity, as well as severity of content — could aid in identifying the key players within online child exploitation networks. These websites are the main concern of law enforcement agencies, making the web crawler a time-saving tool in target prioritization exercises. Interestingly, while one might assume that website owners would find ways to avoid detection by a web crawler of the type we have used, these websites — despite the fact that much of the content is illegal — turned out to be easy to find. This fits with previous research finding that only 20-25 percent of online child pornography arrestees used sophisticated tools for hiding illegal content [3,4].
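
The paper’s exact ‘network capital’ formula is not reproduced here, but the general idea, weighting a site’s connectivity by the severity of the content it hosts, can be sketched as follows (the function name and score ranges are assumptions for illustration):

```python
# A hedged sketch of a "network capital"-style score: a site's connectivity
# weighted by the severity of its content. An illustration of the general
# idea only, not the paper's exact measure.
import networkx as nx

def network_capital(graph: nx.DiGraph, severity: dict) -> dict:
    """`severity` maps each site URL to an assumed score in [0, 1]."""
    centrality = nx.degree_centrality(graph)  # normalized in- plus out-degree
    return {
        site: centrality.get(site, 0.0) * severity.get(site, 0.0)
        for site in graph.nodes
    }
```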

As mentioned earlier, the huge amount of content found on the Internet means that the likelihood of eradicating the problem of online child exploitation is nil. As the decentralized nature of the Internet makes combating child exploitation difficult, it becomes all the more important to introduce new methods to address it. Social network analysis measurements in general can be of great assistance to law enforcement investigating all forms of online crime, including online child exploitation. By creating a web crawler that reduces the number of hours officers need to spend examining possible child pornography websites and determining whom to target, we believe we have arrived at a method to maximize the current efforts of law enforcement. An automated process has the added benefit of helping to keep officers in the department longer, as they would not be subjected to as much traumatic content.

There are still areas for further research, the first step being to refine the web crawler further. Despite being a considerable improvement over a manual analysis of 300,000 web pages, it could be improved to allow for efficient analysis of larger networks, bringing us closer to the true size of the full online child exploitation network, and also, we expect, to some of the more hidden (e.g., password/membership protected) websites. This does not negate the value of researching publicly accessible websites, given that they are likely to serve as starting locations for most individuals.

Much of the law enforcement effort to date has focused on investigating images, the primary reason being that databases of hash values (used to authenticate content) exist for images, but not for videos. Our web crawler did not distinguish between images, but utilizing known hash values would help improve the validity of our severity measurement. Although it would be naïve to suggest that online child exploitation can be completely eradicated, the sorts of social network analysis methods described in our study provide a means of understanding the structure (and therefore the key vulnerabilities) of online networks, in turn greatly improving the effectiveness of law enforcement.
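
By way of illustration, hash-based matching of the sort described here is straightforward in principle; the sketch below checks files against a set of known digests. The empty hash set and the choice of SHA-256 are assumptions for this sketch, since real systems draw on vetted law-enforcement databases whose hash functions may differ.

```python
# Purely illustrative: flag files whose digest appears in a database of
# known hash values. The empty set below is a placeholder; in practice the
# hashes come from a vetted law-enforcement database, and the hash function
# used there may differ (SHA-256 is an assumption for this sketch).
import hashlib
from pathlib import Path

KNOWN_HASHES: set = set()  # placeholder for a vetted hash database

def matches_known_content(path: Path) -> bool:
    """True if the file's SHA-256 digest is in the known-hash set."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_HASHES
```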

[1] Engeler, E. 2009. UN Expert: Child Porn on Internet Increases. The Associated Press, September 16.

[2] Westlake, B.G., Bouchard, M., and Frank, R. 2012. Finding the Key Players in Online Child Exploitation Networks. Policy and Internet 3 (2).

[3] Carr, J. 2004. Child Abuse, Child Pornography and the Internet. London: NCH.

[4] Wolak, J., D. Finkelhor, and K.J. Mitchell. 2005. “Child Pornography Possessors Arrested in Internet-Related Crimes: Findings from the National Juvenile Online Victimization Study (NCMEC 06–05–023).” Alexandria, VA: National Center for Missing and Exploited Children.


Read the full paper: Westlake, B.G., Bouchard, M., and Frank, R. 2012. Finding the Key Players in Online Child Exploitation Networks. Policy and Internet 3 (2).

Searching for a “Plan B”: young adults’ strategies for finding information about emergency contraception online https://ensr.oii.ox.ac.uk/searching-for-a-plan-b-young-adults-strategies-for-finding-information-about-emergency-contraception-online/ Mon, 15 Oct 2012 15:57:26 +0000 http://blogs.oii.ox.ac.uk/policy/?p=365 People increasingly turn to the Internet for health information, with 80 percent of U.S. Internet users (59 percent of adults) having used the Web for this purpose. However, because there is so much health content online, users may find it difficult to locate reliable information quickly. Research has also shown that websites hosting information about the most controversial topics, including emergency contraceptive pills (ECPs), contain a great number of inaccuracies. While the Internet is a potentially valuable source of information about sexual health topics for young adults, difficulty in searching and in evaluating credibility may prevent them from finding useful information in time.

Emergency contraception has long been heralded as a “second chance” for women to prevent pregnancy after unprotected intercourse. However, the commercial promotion and use of ECPs has been a highly contentious issue in the United States, a fact that has had a significant impact on legislative action and accessibility. Due to their limited window of effectiveness and given that people do not tend to obtain them until the moment when they are needed urgently, it is essential for people to be able to find accurate information about ECPs as quickly as possible.

Our study investigated empirically how over 200 young college students (18-19 years old) at two college campuses in the Midwestern United States searched for and evaluated information about emergency contraception. They were given the hypothetical scenario: “You are at home in the middle of summer. A friend calls you frantically on a Friday at midnight. The condom broke while she was with her boyfriend. What can she do to prevent pregnancy? Remember, neither of you is on campus. She lives in South Bend, Indiana.” All of the students had considerable experience with using the Internet.

Worryingly, a third of the participants, after looking for information online, were unable to conclude that the friend should seek out ECPs. Less than half gave what we consider the ideal response: to have the friend purchase ECPs over the counter at a pharmacy. Some participants suggested such solutions as “wait it out,” “adoption,” “visit a gynecologist” (in the incorrect location), and purchasing another condom. Three percent of respondents came to no conclusion at all.

While adolescents often claim to be confident in searching for information online, their searches tend to be unsystematic, and few students made a concerted effort to verify the information they found. The presence of a dot-org domain name was sometimes cited as a measure of credibility: “Cause it’s like a government issued kind of website,” noted one participant. While it’s encouraging that students are aware of different top-level domain names, it’s alarming that their knowledge of what they signify can be wrong: dot-org sites are no more sanctioned than dot-com sites and thus should not be considered a signal of credibility.

Another student assumed that “the main website” for the morning-after pill was morningafterpill.org, which happens to be sponsored by the American Life League, a pro-life organization. The website includes articles with titles such as “Emergency Contraception: the Truth, the Whole Truth, and Nothing but the Truth,” as well as advocacy by medical professionals matching the perspectives of the American Life League. This demonstrates how people and organizations with a particular agenda can spread any type of information, in this case erroneous health information, to the public.

Overall, the findings suggest that despite the information theoretically available on the Web about emergency contraception, even young adults with considerable online experience may not be able to find it in a time of need. Many respondents were uncertain how to begin looking for information; some did not immediately consider the Internet as a primary source for it. An important policy implication of this study is that it is problematic to assume that just because content exists online, it is easily within the reach of all users. In particular, it is a mistake to think that just because young people grew up with digital media, they are universally savvy at finding and evaluating Web content.

Given the importance of finding credible and accurate health-related content, it is important to understand the strategies people use to find information so that obstacles can be addressed. Rather than taking such know-how for granted, educational institutions should think about incorporating related content into their curricula. Additionally, related services should be offered at establishments such as public libraries, so that they reach those not enrolled in school.

In some cases particular search terms determined whether people found the right information: providers of content about emergency contraception need to be aware of this. The study also raises questions about search engine practices. While search engine companies seem to take pride in letting their algorithms sort out the ranking of search results, is it ideal or responsible to leave content important to people’s health in the hands of automated processes that are open to manipulation?

Algorithms themselves are not neutral: they embody many decisions taken by their creators. Yet the idea of “algorithm literacy” is not a topic taken up in educational curricula or public conversations.

The “IPP2012: Big Data, Big Challenges” conference explores the new research frontiers opened up by big data .. as well as its limitations https://ensr.oii.ox.ac.uk/the-ipp2012-big-data-big-challenges-conference-explores-the-new-research-frontiers-opened-up-by-big-data-as-well-as-its-limitations/ Mon, 24 Sep 2012 10:50:20 +0000 http://blogs.oii.ox.ac.uk/policy/?p=447 Recent years have seen an increasing buzz around how ‘Big Data’ can uncover patterns of human behaviour and help predict social trends. Most social activities today leave digital imprints that can be collected and stored in the form of large datasets of transactional data. Access to this data presents powerful and often unanticipated opportunities for researchers and policy makers to generate new, precise, and rapid insights into economic, social and political practices and processes, as well as to tackle longstanding problems that have hitherto been impossible to address, such as how political movements like the ‘Arab Spring’ and Occupy originate and spread.

Opening comments from convenor, Helen Margetts.
While big data can allow the design of efficient and realistic policy and administrative change, it also brings ethical challenges (for example, when it is used for probabilistic policy-making), raising issues of justice, equity, and privacy. It also presents clear methodological and technical difficulties: big data generation and analysis require expertise and skills that can be a particular challenge for governmental organizations, given their dubious record on the guardianship of large-scale datasets, the management of large technology-based projects, and capacity to innovate. It is these opportunities and challenges that were addressed by the recent conference “Internet, Politics, Policy 2012: Big Data, Big Challenges?” organised by the Oxford Internet Institute (University of Oxford) on behalf of the OII-edited academic journal Policy and Internet. Over the two days of paper and poster presentations and discussion it explored the new research frontiers opened up by big data as well as its limitations, serving as a forum to encourage discussion across disciplinary boundaries on how to exploit this data to inform policy debates and advance social science research.

Duncan Watts (Keynote Speaker).
The conference was organised along three tracks: “Policy,” “Politics,” and “Data + Methods” (see the programme), with panels focusing on the impact of big data on (for example) political campaigning, collective action and political dissent, sentiment analysis, prediction of large-scale social movements, government, public policy, social networks, data visualisation, and privacy. Webcasts are now available of the keynote talks given by Nigel Shadbolt (University of Southampton and Open Data Institute) and Duncan Watts (Microsoft Research). A webcast is also available of the opening plenary panel, which set the scene for the conference, discussing the potential and challenges of big data for public policy-making, with participation from Helen Margetts (OII), Lance Bennett (University of Washington, Seattle), Theo Bertram (UK Policy Manager, Google), and Patrick McSharry (Mathematical Institute, University of Oxford), chaired by Victoria Nash (OII).

Poster Prize Winner Shawn Walker (left) and Paper Prize Winner Jonathan Bright (right) with IPP2012 convenors Sandra Gonzalez-Bailon (left) and Helen Margetts (right).
The evening receptions were held in the Ashmolean Museum (allowing us to project exciting data visualisations onto its shiny white walls) and the University’s Natural History Museum, which provided a rather more fossil-focused ambience. We are very pleased to note that the “Best Paper” winners were Thomas Chadefaux (ETH Zurich) for his paper: Early Warning Signals for War in the News, and Jonathan Bright (EUI) for his paper: The Dynamics of Parliamentary Discourse in the UK: 1936-2011. The Google-sponsored “Best Poster” prize winners were Shawn Walker (University of Washington) for his poster (with Joe Eckert, Jeff Hemsley, Robert Mason, and Karine Nahon): SoMe Tools for Social Media Research, and Giovanni Grasso (University of Oxford) for his poster (with Tim Furche, Georg Gottlob, and Christian Schallhart): OXPath: Everyone can Automate the Web!

Many of the conference papers are available on the conference website; the conference special issue on big data will be published in the journal Policy and Internet in 2013.

eHealth: what is needed at the policy level? New special issue from Policy and Internet https://ensr.oii.ox.ac.uk/ehealth-what-is-needed-at-the-policy-level/ Thu, 24 May 2012 16:36:23 +0000 http://blogs.oii.ox.ac.uk/policy/?p=399 The explosive growth of the Internet and its omnipresence in people’s daily lives has facilitated a shift in information seeking on health, with the Internet now a key information source for the general public, patients, and health professionals. The Internet also has obvious potential to drive major changes in the organization and delivery of health services, and many initiatives are harnessing technology to support user empowerment. For example, current health reforms in England are leading to a fragmented, marketized National Health Service (NHS), where competitive choice designed to drive quality improvement and efficiency savings is informed by transparency and patient experiences, and with the notion of an empowered health consumer at its centre.

Is this aim of achieving user empowerment realistic? In their examination of health queries submitted to the NHS Direct online enquiry service, John Powell and Sharon Boden find that while patient empowerment does occur in the use of online health services, it is constrained and context dependent. Policymakers wishing to promote greater choice and control among health system users should therefore take account of the limits to empowerment as well as barriers to participation. The Dutch government’s online public national health and care portal similarly aims to facilitate consumer decision-making and to increase transparency and accountability, thereby improving the quality of care and the functioning of health markets. Interestingly, Hans Ossebaard, Lisette van Gemert-Pijnen and Erwin Seydel find the influence of the Dutch portal on choice behavior, awareness, and empowerment of users to actually be small.

The Internet is often discussed in terms of empowering (or even endangering) patients through broadening of access to medical and health-related information, but there is evidence that concerns about serious negative effects of using the Internet for health information may be ill-founded. The cancer patients in the study by Alison Chapple, Julie Evans and Sue Ziebland gave few examples of harm from using the Internet or of damage caused to their relationships with health professionals. While policy makers have tended to focus on regulating the factual content of online information, in this study it was actually the consequences of stumbling on factually correct (but unwelcome) information that most concerned the patients and families; good practice guidelines for health information may therefore need to pay more attention to website design and user routing, as well as to the accuracy of content.

Policy makers and health professionals should also acknowledge the often highly individual strategies people use to access health information online, and understand how these practices are shaped by technology — the study by Astrid Mager found that the way people collected and evaluated online information about chronic diseases was shaped by search engines as much as by their individual medical preferences.

Many people still lack the necessary skills to navigate online content effectively. Eszter Hargittai and Heather Young examined the experiences of a diverse group of young adults looking for information about emergency contraception online, finding that the majority of the study group could not identify the most efficient way of acquiring emergency contraception in a time of need. Given the increasing trend for people to turn to the Internet for health information, users must possess the necessary skills to make effective and efficient use of it; an important component of this may concern educational efforts to help people better navigate the Web. Improving general eHealth literacy is one of several recommendations by Maria De Jesus and Chenyang Xiao, who examined how Hispanic adults in the United States search for health information online. They report a striking language divide, with the English proficiency of the user largely predicting online health information-seeking behavior.

Last, but no less important, is the policy challenge of addressing patient trust. The study by Ulrike Rauer on the structural and institutional factors that influence patient trust in Internet-based health records found that while patients typically considered medical operators to be more trustworthy than non-medical ones, there was no evidence of a “public–private” divide; patients perceived physicians and private health insurance providers to be more trustworthy than the government and corporations. Patient involvement, in terms of access to and control over their records, was also found to be trust enhancing.

A lack of policy measures is a common barrier to the success of eHealth initiatives; it is therefore essential to develop measures that facilitate the adoption of initiatives and that demonstrate their success through improvements in services and in the health status of the population. The articles presented in this special issue of Policy & Internet provide the sort of evidence-based insight that is urgently needed to help shape these policy measures. The empirical research and perspectives gathered here will make a valuable contribution to future efforts in this area.

Public policy responses to cybercrime: a new special issue from Policy and Internet https://ensr.oii.ox.ac.uk/public-policy-responses-to-cybercrime-a-new-special-issue-from-policy-and-internet/ Wed, 01 Jun 2011 11:38:35 +0000 http://blogs.oii.ox.ac.uk/policy/?p=645 Cybercrime is just one of many significant and challenging issues of ethics and public policy raised by the Internet. It has policy implications for both national and supra-national legislation, involving, as it may, attacks against the integrity, authenticity, and confidentiality of information systems; content-related crimes; and “traditional” crimes committed using networked technologies.

While ‘cybercrime’ can be used to describe a wide range of undesirable conduct facilitated by networked technologies, it is not a legal term of art, and many so-called cybercrimes (such as cyber-rape and the virtual vandalism of virtual worlds) are not necessarily crimes as far as the criminal law is concerned. This can give rise to novel situations in which outcomes that feel instinctively wrong do not give rise to criminal liability. Emily Finch discusses the tragic case of Bernard Gilbert: a man whose argument over a disputed parking space led to his death, a police officer having disclosed Gilbert’s home address to his assailant. The officer was charged simply with the offence of disclosing personal data; the particular consequences of such disclosure being immaterial under English criminal law. Finch argues that this is unsatisfactory: as more personal information is gathered and available online, the greater the potential risk to the individual from its unauthorized disclosure. She advocates a two-tier structure for liability in the event that disclosure results in harm.

Clearly, current criminal law often struggles to deal with behaviours that were either not technologically possible at the time that the law was made, or were not within the contemplation of the legislature. Conversely, criminal liability might arise unexpectedly. Sandra Schmitz and Lawrence Siry explore the curious overlap between the nature and motivation of sexting and the possibility that it might fall foul of child pornography laws. They argue that the law as it stands is questionable, as the nature and content of ‘sext’ messages is generally at odds with conceptualizations of child pornography. Given that laws designed to protect children may also be used to criminalize unremarkable adolescent behaviour, this is an example of where the law should clearly be changed.

Simone van der Hof and Bert-Jaap Koops also address young people’s mobile Internet usage, considering the boundaries between freedom and autonomy on the one hand and control and repression on the other. They argue that the criminal law is yet again somewhat inadequate, as its overuse can lead to adolescents relying on legal protection rather than taking more proactive responsibility for their own online safety. The role of policy here should be to stimulate digital literacy in the first instance and, for the graver risks, to foster a co-regulatory regime by the Internet industry; criminalization should be used only as a last resort. However, the technology itself can also be used to assist law enforcement agencies in policing the Internet, for example in identifying and targeting online child exploitation networks. Bryce G. Westlake, Martin Bouchard, and Richard Frank demonstrate the use of an automated tool to provide greater efficiency in target prioritization for policing authorities, also opening the policy discussion surrounding the desirability of augmenting traditional police procedures with automated software agents.

As well as harm to individuals, misuse of the Internet can also cause immense commercial damage to the creative industries by unlawful copyright infringement. The empirical study by Jonathan Basamanowicz and Martin Bouchard on the policing of copyright piracy rings suggests that increased regulation will only encourage offenders to adapt their behaviour into something less amenable to control. They argue that an effective response must consider the motivations and modus operandi of the offenders, and propose a situational crime prevention framework which may, at the very least, curtail their activities. Finally, again from an economic perspective, Michael R. Hammock considers the economics of website security seals and analyses whether market forces are controlling privacy and security adequately, concluding that unilateral regulation may actually harm those consumers who might not care whether or not they are protected, but who will have to bear the indirect cost in the form of a premium for increased protection.

The spectrum of policy responses can therefore be seen as existing along a continuum, with “top-down” responses originating from state agencies at one extreme, to bottom-up responses originating from private institutions at the other. Malcolm Shore, Yi Du, and Sherali Zeadally apply this public–private model to the regulation of cyber-attacks on critical national infrastructures. They consider the benefits and limitations of the various public and private initiatives designed to implement a national cybersecurity strategy for New Zealand and propose a model of assured public–private partnership based on incentivized adoption as the most effective way forward.

The articles in this special issue show that there can be no single simple policy-based solution to cybercrime, but suggest instead that there is a role for policymakers to introduce multi-tiered responses, involving a mix of the law, education, industry responsibility, and technology. Responses will also often require cooperation between law enforcement organizations, international coordination, and cooperation between the public and private spheres. It is clear that an effective response to cybercrime must therefore be greater than the sum of its parts, and should evolve as part of the diffuse governance network which results from the complex, yet natural, tensions between law, society, and the Internet.

Internet, Politics, Policy 2010: Wrap-Up https://ensr.oii.ox.ac.uk/internet-politics-policy-2010-wrap-up/ Fri, 17 Sep 2010 19:46:31 +0000 http://blogs.oii.ox.ac.uk/policy/?p=91 Our two-day conference is just about to come to an end with an evening reception at Oxford’s Ashmolean Museum (you can have a live view through OII’s very own webcam…). Its aim was to assess the Internet’s impact on politics and policy. The presentations approached this challenge from a number of different angles, and we would like to encourage everyone to browse the archive of papers on the conference website to get a comprehensive overview of much of the cutting-edge research currently taking place in many different parts of the world.

The submissions to this conference allowed us to set up very topical panels in which the different papers fitted together rather well. Helen Margetts, the convenor, highlighted in her summary just how much discussion and informed exchange had been going on within these panels. But a conference is more than the collection of papers delivered; it is just as much about the social gathering of people who share similar interests, and the conference schedule tried to accommodate this by offering many coffee breaks to encourage more informal exchange. It is a testimony to the success of this strategy that the majority of people very much welcomed the idea of holding a similar conference in two years’ time, details of which are yet to be confirmed.

Great thanks to everybody who helped to make this conference happen, in particular OII’s dedicated support staff such as journal editor David Sutcliffe and events manager Tim Davies.
