social media – The Policy and Internet Blog
Understanding public policy online
https://ensr.oii.ox.ac.uk

Censorship or rumour management? How Weibo constructs “truth” around crisis events
https://ensr.oii.ox.ac.uk/censorship-or-rumour-management-how-weibo-constructs-truth-around-crisis-events/ (Tue, 03 Oct 2017)

As social media become increasingly important as a source of news and information for citizens, there is growing concern over the impact of social media platforms on information quality — as evidenced by the furore over “fake news”. Driven in part by the apparently substantial impact of social media on the outcomes of Brexit and the US Presidential election, various attempts have been made to hold social media platforms to account for presiding over misinformation, including recent efforts to improve fact-checking.

There is a large and growing body of research examining rumour management on social media platforms. However, most of these studies treat it as a technical matter, and little attention has been paid to the social and political aspects of rumour. In their Policy & Internet article “How Social Media Construct ‘Truth’ Around Crisis Events: Weibo’s Rumor Management Strategies after the 2015 Tianjin Blasts“, Jing Zeng, Chung-hong Chan and King-wa Fu examine the content moderation strategies of Sina Weibo, China’s largest microblogging platform, in regulating discussion of rumours following the 2015 Tianjin blasts.

Studying rumour communication in relation to the manipulation of social media platforms is particularly important in the context of China. In China, Internet companies are licensed by the state, and their businesses must therefore be compliant with Chinese law and collaborate with the government in monitoring and censoring politically sensitive topics. Given that most Chinese citizens rely heavily on Chinese social media services as alternative information sources or as grassroots “truth”, the anti-rumour policies have raised widespread concern over the implications for China’s online sphere. As there is virtually no transparency in rumour management on Chinese social media, it is an important task for researchers to investigate how Internet platforms engage with rumour content and any associated impact on public discussion.

We caught up with the authors to discuss their findings:

Ed.: “Fake news” is currently a very hot issue, with Twitter and Facebook both exploring mechanisms to try to combat it. On the flip-side we have state-sponsored propaganda now suddenly very visible (e.g. Russia), in an attempt to reduce trust, destabilise institutions, and inject rumour into the public sphere. What is the difference between rumour, propaganda and fake news; and how do they play out online in China?

Jing / Chung-hong / King-wa: The definition of rumour is very fuzzy, and it is very common to see ‘rumour’ being used interchangeably with other related concepts. Our study drew the definition of rumour from the fields of sociology and social psychology, wherein this concept has been most thoroughly articulated.

Rumour is a form of unverified information circulated in uncertain circumstances. The major difference between rumour and propaganda lies in their functions. Rumour sharing is a social practice of sense-making, therefore it functions to help people make meaning of an uncertain situation. In contrast, the concept of propaganda is more political. Propaganda is a form of information strategically used to mobilise political support for a political force.

Fake news is a new buzzword and works closely with another buzz term – post-truth. There is no established and widely accepted definition of fake news, and its true meaning(s) should be understood with respect to specific contexts. For example, Donald Trump’s use of “fake news” in his tweets aims to attack a few media outlets that have reported unfavourable stories about him, whereas ungrounded and speculative “fake news” is created by the public and widely circulated on social media. If we simply understand fake news as a form of fabricated news, I would argue that fake news can operate as rumour, propaganda, or both.

It is worth pointing out that, in the Chinese context, rumour may not always be fake and propaganda is not necessarily bad. As various scholars have pointed out, rumour can function as a form of social protest against the authoritarian state’s information control. And in the Chinese language, the Mandarin term Xuanchuan (‘propaganda’) does not always carry the negative connotation of its English counterpart.

Ed.: You mention previous research finding that the “Chinese government’s propaganda and censorship policies were mainly used by the authoritarian regime to prevent collective action and to maintain social stability” — is that what you found as well? i.e. that criticism of the Government is tolerated, but not organised protest?

Jing / Chung-hong / King-wa: This study examined rumour communication around the 2015 Tianjin blasts, so our analyses did not directly address Weibo users’ attempts to organise protest. However, regarding the Chinese government’s response to Weibo users’ criticism of its handling of the crisis, our study suggested that some criticisms of the government were tolerated. For example, messages about local government officials’ mishandling of the crisis were not heavily censored. Instead, what we found seems to confirm that social stability is of paramount importance for the ruling regime, and that online censorship was used as a means to maintain it. This explains Weibo’s decision to silence discussion of the assault on a CNN reporter, the chaotic aftermath of the blasts, and the local media’s reluctance to broadcast the blasts.

Ed.: What are people’s responses to obvious government attempts to censor or head-off online rumour, e.g. by deleting posts or issuing statements? And are people generally supportive of efforts to have a “clean, rumour-free Internet”, or cynical about the ultimate intentions or effects of censorship?

Jing / Chung-hong / King-wa: From our time series analysis, we found different responses from netizens depending on the topic, but we could not find a consistent pattern of a chilling effect. Basically, Weibo’s rumour management strategies, whether deleting posts or refuting them, usually stimulated more public interest. At least as shown in our data, netizens were not supportive of those censorship efforts and somehow ended up posting more rumour-related messages as a counter-reaction.

Ed.: Is online rumour particularly a feature of contemporary Chinese society — or do you think that’s just a human thing (we’ve certainly seen lots of lying in the Brexit and Trump campaigns)? How might rumour relate more generally to levels of trust in institutions, and the presence of a strong, free press?

Jing / Chung-hong / King-wa: Online rumour is common in China, but it can be pervasive in any country where digital communication technologies are widely used. Rumour sharing is a human thing, yes, you can say that. But it is more accurate to say it is a socially constructed thing. As mentioned earlier, rumour is a social practice of collective sense-making under uncertain circumstances.

Levels of public trust in governmental organisations and the media can directly impact rumour circulation, and rumour-debunking efforts. When there is a lack of public trust in official sources of information, it opens up room for rumour circulation. Likewise, when the authorities have low credibility, the official rumour debunking efforts can backfire, because the public may think the authorities are trying to hide something. This might explain what we observed in our study.

Ed.: I guess we live in interesting times; Theresa May now wants to control the Internet, Trump is attacking the very institution of the press, social media companies are under pressure to accept responsibility for the content they host. What can we learn from the Chinese case, of a very sophisticated system focused on social control and stability?

Jing / Chung-hong / King-wa: The most important implication of this study is that the most sophisticated rumour control mechanism can only be developed on a good understanding of the social roots of rumour. As our study shows, without solving the more fundamental social cause of rumour, rumour debunking efforts can backfire.


Read the full article: Jing Zeng, Chung-hong Chan and King-wa Fu (2017) How Social Media Construct ‘Truth’ Around Crisis Events: Weibo’s Rumor Management Strategies after the 2015 Tianjin Blasts. Policy & Internet 9 (3) 297-320. DOI: 10.1002/poi3.155

Jing Zeng, Chung-hong Chan and King-wa Fu were talking to blog editor David Sutcliffe.

How policy makers can extract meaningful public opinion data from social media to inform their actions
https://ensr.oii.ox.ac.uk/extracting-meaningful-public-opinion-data-from-social-media-to-inform-policy-makers/ (Fri, 07 Jul 2017)

The role of social media in fostering the transparency of governments and strengthening the interaction between citizens and public administrations has been widely studied. Scholars have highlighted how online citizen-government and citizen-citizen interactions favour debates on social and political matters, and positively affect citizens’ interest in political processes, like elections, policy agenda setting, and policy implementation.

However, while top-down social media communication between public administrations and citizens has been widely examined, the bottom-up side of this interaction has been largely overlooked. In their Policy & Internet article “The ‘Social Side’ of Public Policy: Monitoring Online Public Opinion and Its Mobilization During the Policy Cycle,” Andrea Ceron and Fedra Negri aim to bridge the gap between knowledge and practice, by examining how the information available on social media can support the actions of politicians and bureaucrats along the policy cycle.

Policymakers, particularly politicians, have always been interested in knowing citizens’ preferences, in measuring their satisfaction and in receiving feedback on their activities. Using the technique of Supervised Aggregated Sentiment Analysis, the authors show that meaningful information on public services, programmes, and policies can be extracted from the unsolicited comments posted by social media users, particularly those posted on Twitter. They use this technique to extract and analyse citizen opinion on two major public policies (on labour market reform and school reform) that drove the agenda of the Matteo Renzi cabinet in Italy between 2014 and 2015.
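Supervised aggregated sentiment analysis builds on the idea (developed by Hopkins and King) that for policy purposes we only need the *proportions* of opinion categories in a corpus, not a correct label for every individual post: estimate the feature distribution within each category from a hand-coded sample, measure the feature distribution in the full corpus, and solve for the category mix that best reconciles the two. The sketch below is ours, not the authors’ code: it simplifies the method to individual features rather than word-stem profiles, and the function name and synthetic data are purely illustrative.

```python
import numpy as np

def estimate_category_proportions(X_labeled, y_labeled, X_unlabeled, n_categories):
    """Estimate aggregate opinion-category proportions (simplified sketch).

    X_labeled, X_unlabeled: binary document-term matrices (docs x features).
    y_labeled: integer category for each hand-coded document.
    Returns the estimated share of each category in the unlabeled corpus.
    """
    # P(feature | category), estimated from the hand-coded sample
    F = np.vstack([
        X_labeled[y_labeled == c].mean(axis=0) for c in range(n_categories)
    ]).T  # shape: features x categories

    # P(feature) observed in the unlabeled corpus
    p = X_unlabeled.mean(axis=0)

    # Solve F @ theta ~= p for the category mix theta, then
    # project onto the probability simplex (non-negative, sums to 1)
    theta, *_ = np.linalg.lstsq(F, p, rcond=None)
    theta = np.clip(theta, 0.0, None)
    return theta / theta.sum()

# Toy example: two categories with distinct vocabularies; the unlabeled
# corpus is 75% category 0, which the estimator should recover.
X_lab = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])
y_lab = np.array([0, 0, 1, 1])
X_unl = np.array([[1, 0], [1, 0], [1, 0], [0, 1]])
shares = estimate_category_proportions(X_lab, y_lab, X_unl, 2)
```

The design point the article leans on is that this aggregate estimator stays accurate even when any per-document classifier would be noisy, which is what makes it cheap enough to track opinion continuously across a policy cycle.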

They show how online public opinion reacted to the different policy alternatives formulated and discussed during the adoption of the policies. They also demonstrate how social media analysis allows monitoring of the mobilization and de-mobilization processes of rival stakeholders in response to the various amendments adopted by the government, with results comparable to those of a survey and a public consultation that were undertaken by the government.

We caught up with the authors to discuss their findings:

Ed.: You say that this form of opinion monitoring and analysis is cheaper, faster and easier than (for example) representative surveys. That said, how commonly do governments harness this new form of opinion-monitoring (with the requirement for new data skills, as well as attitudes)? Do they recognise the value of it?

Andrea / Fedra: Governments are starting to pay attention to the world of social media. Just to give an idea, the Italian government has issued a call to collect survey data jointly with the results of social media analysis, with the two types of data provided in a common report. The report has not been publicly shared, suggesting that the cabinet considers such information highly valuable. VOICES from the blogs, a spin-off created by Stefano Iacus, Luigi Curini and Andrea Ceron (University of Milan), has been involved in this and, for sure, we can attest that in a couple of instances the government modified its actions in line with shifts in public opinion observed both through survey polls and sentiment analysis. This happened with the law on Civil Unions and with the abolition of the “voucher” (a flexible form of worker payment). So far these are just instances — although there are signs of enhanced responsiveness, particularly when online public opinion represents the core constituency of the ruling parties, as the case of the school reform (discussed in the article) clearly indicates: teachers are in fact the core constituency of the Democratic Party.

Ed.: You mention that the natural language used by social media users evolves continuously and is sensitive to the discussed topic: resulting in error. The method you use involves scaling up of a human-coded (=accurate) ontology. Could you discuss how this might work in practice? Presumably humans would need to code the terms of interest first, as it wouldn’t be able to pick up new issues (e.g. around a completely new word: say, “Bowling Green”?) automatically.

Andrea / Fedra: Gary King says that the best technology is human empowered. There are at least two great advantages in using human coders. First, with our technique coders manage to get rid of noise better than any algorithm, as a single word can often be judged in-topic or off-topic only on the basis of the context and the rest of the sentence. Second, human coders can collect deeper information by mining the real opinions expressed in online conversations. This sometimes allows them to detect, bottom-up, arguments that were completely ignored ex ante by scholars or analysts.

Ed.: There has been a lot of debate in the UK around “false balance”, e.g. the BBC giving equal coverage to climate deniers (despite being a tiny, unrepresentative, and uninformed minority), in an attempt at “impartiality”: how do you get round issues of non-representativeness in social media, when tracking — and more importantly, acting on — opinion?

Andrea / Fedra: Nowadays social media are a non-representative sample of a country’s population. However, the idea of representativeness linked to the concept of “public opinion” dates back to the early days of polling. Today, by contrast, online conversations often represent an “activated public opinion” comprising stakeholders who express their voices in an attempt to build wider support around their views. In this regard, social media data are interesting precisely due to their non-representativeness. A tiny group can speak loudly and this voice can gain the support of an increasing number of people. If the activated public opinion acts as an “influencer”, this implies that social media analysis could anticipate trends and shifts in public opinion.

Ed.: As data becomes increasingly open and tractable (controlled by people like Google, Facebook, or monitored by e.g. GCHQ / NSA), and text-techniques become increasingly sophisticated: what is the extreme logical conclusion in terms of government being able to track opinion, say in 50 years, following the current trajectory? Or will the natural messiness of humans and language act as a natural upper limit on what is possible?

Andrea / Fedra: The purpose of scientific research, particularly applied research, is to improve our well-being and make our lives easier. For sure there could be issues linked to the privacy of our data and, in a sci-fi scenario, government and police would be able to read our minds — either to prevent crimes and terrorist attacks (as in the film Minority Report) or to detect, isolate and punish dissent. However, technology is not a standalone object, and we should not forget that there are humans behind it. Whether these humans are governments, activists or ordinary citizens can certainly make a difference. If governments try to misuse technology, they will certainly meet a reaction from citizens — which can be amplified precisely via this new technology.

Read the full article: Ceron, A. and Negri, F. (2016) The “Social Side” of Public Policy: Monitoring Online Public Opinion and Its Mobilization During the Policy Cycle. Policy & Internet 8 (2). DOI: 10.1002/poi3.117


Andrea Ceron and Fedra Negri were talking to blog editor David Sutcliffe.

Social media and the battle for perceptions of the U.S.–Mexico border
https://ensr.oii.ox.ac.uk/social-media-and-the-battle-for-perceptions-of-the-u-s-mexico-border/ (Wed, 07 Jun 2017)

The U.S.–Mexico border region is home to approximately 12 million people, and is the most-crossed international border in the world. Unlike the physical border itself, the image people hold of “the border” is not firmly established, and can be modified. One way is via narratives (or stories), which are a powerful tool for gaining support for public policies. Politicians’ narratives about the border have historically been perpetuated by the traditional media, particularly when this allows them to publish sensational and attention-grabbing news stories.

However, new social media, including YouTube, provide opportunities for less-mainstream narratives of cooperation. In their Policy & Internet article “Do New Media Support New Policy Narratives? The Social Construction of the U.S.–Mexico Border on YouTube”, Donna L. Lybecker, Mark K. McBeth, Maria A. Husmann, and Nicholas Pelikan find that YouTube videos about the U.S.–Mexico border focus (perhaps unsurprisingly) on mainstream, divisive issues such as security and violence, immigration, and drugs. However, the videos appear to construct more favourable perspectives of the border region than traditional media, with around half constructing a sympathetic view of the border, and the people associated with it.

The common perceptions of the border generally take two distinct forms. One holds the U.S.–Mexico border to be the location of a legal flow of some $300 billion in trade each year, a line that millions of people legally cross annually, the frontier of 100 years of peaceful coexistence between two countries, and the point of integration for the U.S.–Mexico relationship. An alternative perspective (particularly common since 9/11) focuses less on economic trade and legal crossing and more on undocumented immigration, violence and drug wars, and a U.S.-centric view of “us versus them”.

In order to garner public support for their “solutions” to these issues, politicians often define the border using one of these perspectives. Acceptance of the first view might well allow policymakers to find cooperative solutions to joint problems. Acceptance of the second creates a policy problem that is more value-laden than empirically based and that creates distrust and polarization among stakeholders and between the countries. The U.S.–Mexico border is clearly a complex region encompassing both positives and negatives — but understanding these narratives could have a real-world impact on policy along the border; possibly creating the greater cooperation we need to solve many of the urgent problems faced by border communities.

We caught up with the authors to discuss their findings:

Ed.: Who created the videos you studied: were they created by the public, or were they also produced by perhaps more progressive media outlets? i.e. were you able to disentangle the effect of the media in terms of these narratives?

Mark / Donna: For this study, we studied YouTube videos, using the “relevance” filter. Thus, the videos were ordered by most related to our topic and by most frequently viewed. With this selection method we captured videos produced by a variety of sources; some that contained embedded videos from mainstream media, others created by non-profit groups and public television groups, but also videos produced by interested citizens or private groups. The non-profit and media groups more often discuss the beneficial elements of the border (trade, shared environmental protection, etc.), while individual citizens or groups tended to post the more emotional and narrative-driven videos more likely to construct the border residents in a non-deserving sense.

Ed.: How influential do you think these videos are? In a world of extreme media concentration (where even the US President seems to get his news from Fox headlines and the 42 people he follows on Twitter), how significant is “home grown” content, which after all may have better, or at least more locally representative, information than certain parts of the national media?

Mark / Donna: Today’s extreme media world supplies us with constant and fast-moving news. YouTube is part of the media mix, frequently mentioned as the second largest search engine on the web, and as such is influential. Media sources report that a large number of diverse people use YouTube, thus the videos encompass a broad swath of international, domestic and local issues. That said, as with most news sources today, some individuals gravitate to the stories that represent their point of view, and YouTube makes it possible for individuals to do just this. In other words, if a person perceives the US-Mexico border as a horrible place, they can use key words to search YouTube videos that represent that point of view.

However, we believe YouTube to be more influential than some other sources precisely because it encompasses diversity; thus, even when searching using specific terms, there will likely be a few videos included in the search results that provide a different point of view. Furthermore, we did find some local, “home grown” content included in search results, again adding to the diversity presented to the individual watching YouTube, although we found less “home grown” content than initially expected. Overall, there is selectivity bias with YouTube, as with any type of media, but YouTube’s greater diversity of postings and viewers and broad distribution may increase both exposure and influence.

Ed.: Your article was published pre-Trump. How do you think things might have changed post-election, particularly given the uncertainty over “the wall” and NAFTA — and Trump’s rather strident narratives about each? Is it still a case of “negative traditional media; equivocal social media”?

Mark / Donna: Our guess is that anti-border forces are more prominent on YouTube since Trump’s election and inauguration. Unless there is an organized effort to counter discussion of “the wall” and produce positive constructions of the border, we expect that YouTube videos posted over the past few months lean more toward non-deserving constructions.

Ed.: How significant do you think social media is for news and politics generally, i.e. its influence in this information environment — compared with (say) the mainstream press and party-machines? I guess Trump’s disintermediated tweeting might have turned a few assumptions on their heads, in terms of the relation between news, social media and politics? Or is the media always going to be bigger than Trump / the President?

Mark / Donna: Social media, including YouTube and Twitter, is interactive and thus allows anyone to bypass traditional institutions. President Trump can bypass institutions of government, media institutions, even his own political party and staff and communicate directly with people via Twitter. Of course, there are advantages to that, including hearing views that differ from the “official lines,” but there are also pitfalls, such as minimized editing of comments.

We believe people see both the strengths and the weakness with social media, and thus often read news from both traditional media sources and social media. Traditional media is still powerful and connected to traditional institutions, thus, remains a substantial source of information for many people — although social media numbers are climbing, particularly with the President’s use of Twitter. Overall, both types of media influence politics, although we do not expect future presidents will necessarily emulate President Trump’s use of social media.

Ed.: Another thing we hear a lot about now is “filter bubbles” (and whether or not they’re a thing). YouTube filters viewing suggestions according to what you watch, but still presents a vast range of both good and mad content: how significant do you think YouTube (and the explosion of smartphone video) content is in today’s information / media environment? (And are filter bubbles really a thing..?)

Mark / Donna: Yeah, we think that the filter bubbles are real. Again, we think that social media has a lot of potential to provide new information to people (and still does), although currently social media is falling into the same selectivity bias that characterizes the traditional media. We encourage our students to use online technology to seek out diverse sources: sources that mirror their opinions and sources that oppose their opinions. People in the US can access diverse sources on a daily basis, but they have to be willing to seek out perspectives that differ from their own view, perspectives other than their favoured news source.

The key is getting individuals to want to challenge themselves and to be open to cognitive dissonance as they read or watch material that differs from their belief systems. Technology is advanced but humans still suffer the cognitive limitations from which they have always suffered. The political system in the US, and likely other places, encourages it. The key is for individuals to be willing to listen to views unlike their own.

Read the full article: Lybecker, D.L., McBeth, M.K., Husmann, M.A, and Pelikan, N. (2015) Do New Media Support New Policy Narratives? The Social Construction of the U.S.–Mexico Border on YouTube. Policy & Internet 7 (4). DOI: 10.1002/poi3.94.


Mark McBeth and Donna Lybecker were talking to blog editor David Sutcliffe.

We aren’t “rational actors” when it comes to privacy — and we need protecting
https://ensr.oii.ox.ac.uk/we-arent-rational-actors-when-it-come-to-privacy-and-we-need-protecting/ (Fri, 05 May 2017)
We are increasingly exposed to new practices of data collection. Image by ijclark (Flickr CC BY 2.0).

As digital technologies and platforms are increasingly incorporated into our lives, we are exposed to new practices of data creation and collection — and there is evidence that American citizens are deeply concerned about the consequences of these practices. But despite these concerns, the public has not abandoned technologies that produce data and collect personal information. In fact, the popularity of technologies and services that reveal insights about our health, fitness, medical conditions, and family histories in exchange for extensive monitoring and tracking paints a picture of a public that is voluntarily offering itself up to increasingly invasive forms of surveillance.

This seeming inconsistency between intent and behaviour is routinely explained with reference to the “privacy paradox”. Advertisers, retailers, and others with a vested interest in avoiding the regulation of digital data collection have pointed to this so-called paradox as an argument against government intervention. By phrasing privacy as a choice between involvement in (or isolation from) various social and economic communities, they frame information disclosure as a strategic decision made by informed consumers. Indeed, discussions on digital privacy have been dominated by the idea of the “empowered consumer” or “privacy pragmatist” — an autonomous individual who makes informed decisions about the disclosure of their personal information.

But there is increasing evidence that “control” is a problematic framework through which to operationalize privacy. In her Policy & Internet article “From Privacy Pragmatist to Privacy Resigned: Challenging Narratives of Rational Choice in Digital Privacy Debates,” Nora A. Draper examines how the figure of the “privacy pragmatist” developed by the prominent privacy researcher Alan Westin has been used to frame privacy within a typology of personal preference — a framework that persists in academic, regulatory, and commercial discourses in the United States. Those in the pragmatist group are wary about the safety and security of their personal information, but make supposedly rational decisions about the conditions under which they are comfortable with disclosure, logically calculating the costs and benefits associated with information exchange.

Academic critiques of this model have tended to focus on the methodological and theoretical validity of the pragmatist framework; however, in light of two recent studies that suggest individuals are resigned to the loss of privacy online, this article argues for the need to examine a possibility that has been overlooked as a consequence of this focus on Westin’s typology of privacy preferences: that people have opted out of the discussion altogether. Considering a theory of resignation alters how the problem of privacy is framed and opens the door to alternative discussions around policy solutions.

We caught up with Nora to discuss her findings:

Ed.: How easy is it even to discuss privacy (and people’s “rational choices”), when we know so little about what data is collected about us through a vast number of individually innocuous channels — or the uses to which it is put?

Nora: This is a fundamental challenge in current discussions around privacy. There are steps that we can take as individuals that protect us from particular types of intrusion, but in an environment where seemingly benign data flows are used to understand and predict our behaviours, it is easy for personal privacy protection to feel like an uphill battle. In such an environment, it is increasingly important that we consider resigned inaction to be a rational choice.

Ed.: I’m not surprised that there will be people who basically give up in exhaustion, when faced with the job of managing their privacy (I mean, who actually reads the Google terms that pop up every so often?). Is there a danger that this lack of engagement with privacy will be normalised during a time that we should actually be paying more, not less, attention to it?

Nora: This feeling of powerlessness around our ability to secure opportunities for privacy has the potential to discourage individual or collective action around privacy. Anthropologists Peter Benson and Stuart Kirsch have described the cultivation of resignation as a strategy to discourage collective action against undesirable corporate practices. Whether or not these are deliberate efforts, the consequences of creating a nearly unnavigable privacy landscape is that people may accept undesirable practices as inevitable.

Ed.: I suppose another irony is the difficulty of getting people to care about something that nevertheless relates so fundamentally and intimately to themselves. How do we get privacy to seem more interesting and important to the general public?

Nora: People experience the threats of unwanted visibility very differently. For those who are used to the comfortable feeling of public invisibility — the types of anonymity we feel even in public spaces — the likelihood of an unwanted privacy breach can feel remote. This is one of the problems of thinking about privacy purely as a personal issue. When people internalize the idea that if they have done nothing wrong, they have no reason to be concerned about their privacy, it can become easy to dismiss violations when they happen to others. We can become comfortable with a narrative that if a person’s privacy has been violated, it’s likely because they failed to use the appropriate safeguards to protect their information.

This cultivation of a set of personal responsibilities around privacy is problematic not least because it has the potential to blame victims rather than those parties responsible for the privacy incursions. I believe there is real value in building empathy around this issue. Efforts to treat privacy as a community practice and, perhaps, a social obligation may encourage us to think about privacy as a collective rather than individual value.

Ed.: We have a forthcoming article that explores the privacy views of Facebook / Google (companies and employees), essentially pointing out that while the public may regard privacy as pertaining to whether or not companies collect information in the first place, the companies frame it as an issue of “control” — they collect it, but let users subsequently “control” what others see. Is this fundamental discrepancy (data collection vs control) something you recognise in the discussion?

Nora: The discursive and practical framing of privacy as a question of control brings together issues addressed in your previous two questions. By providing individuals with tools to manage particular aspects of their information, companies are able to cultivate an illusion of control. For example, we may feel empowered to determine who in our digital network has access to a particular posted image, but little ability to determine how information related to that image — for example, its associated metadata or details on who likes, comments, or reposts it — is used.

The “control” framework further encourages us to think about privacy as an individual responsibility. For example, we may assume that unwanted visibility related to that image is the result of an individual’s failure to correctly manage their privacy settings. The reality is usually much more complicated than this assigning of individual blame allows for.

Ed.: How much of the privacy debate and policy making (in the States) is skewed by economic interests — i.e. holding that it’s necessary for the public to provide data in order to keep business competitive? And is the “Europe favours privacy, US favours industry” truism broadly true?

Nora: I don’t have a satisfactory answer to this question. There is evidence from past surveys I’ve done with colleagues that people in the United States are more alarmed by the collection and use of personal information by political parties than they are by similar corporate practices. Even that distinction, however, may be too simplistic. Political parties have an established history of using consumer information to segment and target particular audience groups for political purposes. We know that the U.S. government has required private companies to share information about consumers to assist in various surveillance efforts. Discussions about privacy in the U.S. are often framed in terms of tradeoffs with, for example, technological and economic innovation. This is, however, only one of the ways in which the value of privacy is undermined through the creation of false tradeoffs. Daniel Solove, for example, has written extensively on how efforts to frame privacy in opposition to safety encourages capitulation to transparency in the service of national security.

Ed.: There are some truly terrible US laws (e.g. the General Mining Act of 1872) that were developed for one purpose, but are now hugely exploitable. What is the situation for privacy? Is the law still largely fit for purpose, in a world of ubiquitous data collection? Or is reform necessary?

Nora: One example of such a law is the Electronic Communication Privacy Act (ECPA) of 1986. This law was written before many Americans had email accounts, but continues to influence the scope authorities have to access digital communications. One of the key issues in the ECPA is the differential protection for messages depending on when they were sent. The ECPA, which was written when emails would have been downloaded from a server onto a personal computer, treats emails stored for more than 180 days as “abandoned.” While messages received in the past 180 days cannot be accessed without a warrant, so-called abandoned messages require only a subpoena. Although there is some debate about whether subpoenas offer adequate privacy protections for messages stored on remote servers, the issue is that the time-based distinction created by the “180-day rule” makes little sense when access to cloud storage allows people to save messages indefinitely. Bipartisan efforts to introduce the Email Privacy Act, which would extend warrant protections to digital communication that is over 180 days old, have received wide support from those in the tech industry as well as from privacy advocacy groups.

Another challenge, which you alluded to in your first question, pertains to the regulation of algorithms and algorithmic decision-making. These technologies are often described as “black boxes” to reflect the difficulties in assessing how they work. While the consequences of algorithmic decision-making can be profound, the processes that lead to those decisions are often opaque. The result has been increased scholarly and regulatory attention on strategies to understand, evaluate, and regulate the processes by which algorithms make decisions about individuals.

Read the full article: Draper, N.A. (2017) From Privacy Pragmatist to Privacy Resigned: Challenging Narratives of Rational Choice in Digital Privacy Debates. Policy & Internet 9 (2). doi:10.1002/poi3.142.


Nora A. Draper was talking to blog editor David Sutcliffe.

]]>
Five Pieces You Should Probably Read On: Fake News and Filter Bubbles https://ensr.oii.ox.ac.uk/five-pieces-you-should-probably-read-on-fake-news-and-filter-bubbles/ Fri, 27 Jan 2017 10:08:39 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3940 This is the second post in a series that will uncover great writing by faculty and students at the Oxford Internet Institute, things you should probably know, and things that deserve to be brought out for another viewing. This week: Fake News and Filter Bubbles!

Fake news, post-truth, “alternative facts”, filter bubbles — this is the news and media environment we apparently now inhabit, and that has formed the fabric and backdrop of Brexit (“£350 million a week”) and Trump (“This was the largest audience to ever witness an inauguration — period”). Do social media divide us, hide us from each other? Are you particularly aware of what content is personalised for you, what it is you’re not seeing? How much can we do with machine-automated or crowd-sourced verification of facts? And are things really any worse now than when Bacon complained in 1620 about the false notions that “are now in possession of the human understanding, and have taken deep root therein”?

 

1. Bernie Hogan: How Facebook divides us [Times Literary Supplement]

27 October 2016 / 1000 words / 5 minutes

“Filter bubbles can create an increasingly fractured population, such as the one developing in America. For the many people shocked by the result of the British EU referendum, we can also partially blame filter bubbles: Facebook literally filters our friends’ views that are least palatable to us, yielding a doctored account of their personalities.”

Bernie Hogan says it’s time Facebook considered ways to use the information it has about us to bring us together across political, ideological and cultural lines, rather than hide us from each other or push us into polarized and hostile camps. He says it’s not only possible for Facebook to help mitigate the issues of filter bubbles and context collapse; it’s imperative, and it’s surprisingly simple.

 

2. Luciano Floridi: Fake news and a 400-year-old problem: we need to resolve the ‘post-truth’ crisis [the Guardian]

29 November 2016 / 1000 words / 5 minutes

“The internet age made big promises to us: a new period of hope and opportunity, connection and empathy, expression and democracy. Yet the digital medium has aged badly because we allowed it to grow chaotically and carelessly, lowering our guard against the deterioration and pollution of our infosphere. […] some of the costs of misinformation may be hard to reverse, especially when confidence and trust are undermined. The tech industry can and must do better to ensure the internet meets its potential to support individuals’ wellbeing and social good.”

The Internet echo chamber satiates our appetite for pleasant lies and reassuring falsehoods, and has become the defining challenge of the 21st century, says Luciano Floridi. So far, the strategy for technology companies has been to deal with the ethical impact of their products retrospectively, but this is not good enough, he says. We need to shape and guide the future of the digital, and stop making it up as we go along. It is time to work on an innovative blueprint for a better kind of infosphere.

 

3. Philip Howard: Facebook and Twitter’s real sin goes beyond spreading fake news

3 January 2017 / 1000 words / 5 minutes

“With the data at their disposal and the platforms they maintain, social media companies could raise standards for civility by refusing to accept ad revenue for placing fake news. They could let others audit and understand the algorithms that determine who sees what on a platform. Just as important, they could be the platforms for doing better opinion, exit and deliberative polling.”

Only Facebook and Twitter know how pervasive fabricated news stories and misinformation campaigns have become during referendums and elections, says Philip Howard — and allowing fake news and computational propaganda to target specific voters is an act against democratic values. But in a time of weakening polling systems, withholding data about public opinion is actually their major crime against democracy, he says.

 

4. Brent Mittelstadt: Should there be a better accounting of the algorithms that choose our news for us?

7 December 2016 / 1800 words / 8 minutes

“Transparency is often treated as the solution, but merely opening up algorithms to public and individual scrutiny will not in itself solve the problem. Information about the functionality and effects of personalisation must be meaningful to users if anything is going to be accomplished. At a minimum, users of personalisation systems should be given more information about their blind spots, about the types of information they are not seeing, or where they lie on the map of values or criteria used by the system to tailor content to users.”

A central ideal of democracy is that political discourse should allow a fair and critical exchange of ideas and values. But political discourse is unavoidably mediated by the mechanisms and technologies we use to communicate and receive information, says Brent Mittelstadt. And content personalization systems and the algorithms they rely upon create a new type of curated media that can undermine the fairness and quality of political discourse.

 

5. Heather Ford: Verification of crowd-sourced information: is this ‘crowd wisdom’ or machine wisdom?

19 November 2013 / 1400 words / 6 minutes

“A key question being asked in the design of future verification mechanisms is the extent to which verification work should be done by humans or non-humans (machines). Here, verification is not a binary categorisation, but rather there is a spectrum between human and non-human verification work, and indeed, projects like Ushahidi, Wikipedia and Galaxy Zoo have all developed different verification mechanisms.”

‘Human’ verification, a process of checking whether a particular report meets a group’s truth standards, is an acutely social process, says Heather Ford. If code is law and if other aspects in addition to code determine how we can act in the world, it is important that we understand the context in which code is deployed. Verification is a practice that determines how we can trust information coming from a variety of sources — only by illuminating such practices and the variety of impacts that code can have in different environments can we begin to understand how code regulates our actions in crowdsourcing environments.

 

.. and just to prove we’re capable of understanding and acknowledging and assimilating multiple viewpoints on complex things, here’s Helen Margetts, with a different slant on filter bubbles: “Even if political echo chambers were as efficient as some seem to think, there is little evidence that this is what actually shapes election results. After all, by definition echo chambers preach to the converted. It is the undecided people who (for example) the Leave and Trump campaigns needed to reach. And from the research, it looks like they managed to do just that.”

 

The Authors

Bernie Hogan is a Research Fellow at the OII; his research interests lie at the intersection of social networks and media convergence.

Luciano Floridi is the OII’s Professor of Philosophy and Ethics of Information. His research areas are the philosophy of information, information and computer ethics, and the philosophy of technology.

Philip Howard is the OII’s Professor of Internet Studies. He investigates the impact of digital media on political life around the world.

Brent Mittelstadt is an OII Postdoc. His research interests include the ethics of information handled by medical ICT, theoretical developments in discourse and virtue ethics, and epistemology of information.

Heather Ford completed her doctorate at the OII, where she studied how Wikipedia editors write history as it happens. She is now a University Academic Fellow in Digital Methods at the University of Leeds. Her forthcoming book “Fact Factories: Wikipedia’s Quest for the Sum of All Human Knowledge” will be published by MIT Press.

Helen Margetts is the OII’s Director, and Professor of Society and the Internet. She specialises in digital era government, politics and public policy, and data science and experimental methods. Her most recent book is Political Turbulence (Princeton).

 

Coming up! .. It’s the economy, stupid / Augmented reality and ambient fun / The platform economy / Power and development / Internet past and future / Government / Labour rights / The disconnected / Ethics / Staying critical

]]>
Five Pieces You Should Probably Read On: The US Election https://ensr.oii.ox.ac.uk/five-pieces-you-should-probably-read-on-the-us-election/ Fri, 20 Jan 2017 12:22:18 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3927 This is the first post in a series that will uncover great writing by faculty and students at the Oxford Internet Institute, things you should probably know, and things that deserve to be brought out for another viewing. This week: The US Election.

This was probably the nastiest Presidential election in recent memory: awash with Twitter bots and scandal, polarisation and filter bubbles, accusations of interference by Russia and the Director of the FBI, and another shock result. We have written about electoral prediction elsewhere: instead, here are five pieces that consider the interaction of social media and democracy — the problems, but also potential ways forward.

 

1. James Williams: The Clickbait Candidate

10 October 2016 / 2700 words / 13 minutes

“Trump is very straightforwardly an embodiment of the dynamics of clickbait: he is the logical product (though not endpoint) in the political domain of a media environment designed to invite, and indeed incentivize, relentless competition for our attention […] Like clickbait or outrage cascades, Donald Trump is merely the sort of informational packet our media environment is designed to select for.”

James Williams says that now is probably the time to have that societal conversation about the design ethics of the attention economy — because in our current media environment, attention trumps everything.

 

2. Sam Woolley, Philip Howard: Bots Unite to Automate the Presidential Election [Wired]

15 May 2016 / 850 words / 4 minutes

“Donald Trump understands minority communities. Just ask Pepe Luis Lopez, Francisco Palma, and Alberto Contreras […] each tweeted in support of Trump after his victory in the Nevada caucuses earlier this year. The problem is, Pepe, Francisco, and Alberto aren’t people. They’re bots.”

It’s no surprise that automated spam accounts (or bots) are creeping into election politics, say Sam Woolley and Philip Howard. Demanding bot transparency would at least help clean up social media — which, for better or worse, is increasingly where presidents get elected.

 

3. Phil Howard: Is Social Media Killing Democracy?

15 November 2016 / 1100 words / 5 minutes

“This is the big year for computational propaganda — using immense data sets to manipulate public opinion over social media. Both the Brexit referendum and US election have revealed the limits of modern democracy, and social media platforms are currently setting those limits […] these technologies permit too much fake news, encourage our herding instincts, and aren’t expected to provide public goods.”

Phil Howard discusses ways to address fake news, audit social algorithms, and deal with social media’s “moral pass” — social media is damaging democracy, he says, but can also be used to save it.

 

4. Helen Margetts: Don’t Shoot the Messenger! What part did social media play in the 2016 US election?

15 November 2016 / 600 words / 3 minutes

“Rather than seeing social media solely as the means by which Trump ensnared his presidential goal, we should appreciate how they can provide a wealth of valuable data to understand the anger and despair that the polls missed, and to analyse political behaviour and opinion in the times ahead.”

New social information and visibility brings change to social behaviour, says Helen Margetts — ushering in political turbulence and unpredictability. Social media made visible what could have remained a country’s dark secret (hatred of women, rampant racism, etc.), but it will also underpin any radical counter-movement that emerges in the future.

 

5. Helen Margetts: Of course social media is transforming politics. But it’s not to blame for Brexit and Trump

9 January 2017 / 1700 words / 8 minutes

“Even if political echo chambers were as efficient as some seem to think, there is little evidence that this is what actually shapes election results. After all, by definition echo chambers preach to the converted. It is the undecided people who (for example) the Leave and Trump campaigns needed to reach. And from the research, it looks like they managed to do just that.”

Politics is a lot messier in the social media era than it used to be, says Helen Margetts, but rather than blaming social media for undermining democracy, we should be thinking about how we can improve the (inevitably major) part that it plays.

 

The Authors

James Williams is an OII doctoral candidate, studying the ethics of attention and persuasion in technology design.

Sam Woolley is a Research Assistant on the OII’s Computational Propaganda project; he is interested in political bots, and the intersection of political communication and automation.

Philip Howard is the OII’s Professor of Internet Studies and PI of the Computational Propaganda project. He investigates the impact of digital media on political life around the world.

Helen Margetts is the OII’s Director, and Professor of Society and the Internet. She specialises in digital era government, politics and public policy, and data science and experimental methods. Her most recent book is Political Turbulence (Princeton).

 

Coming up .. Fake news and filter bubbles / It’s the economy, stupid / Augmented reality and ambient fun / The platform economy / Power and development / Internet past and future / Government / Labour rights / The disconnected / Ethics / Staying critical

#5OIIPieces

]]>
Of course social media is transforming politics. But it’s not to blame for Brexit and Trump https://ensr.oii.ox.ac.uk/of-course-social-media-is-transforming-politics-but-its-not-to-blame-for-brexit-and-trump/ Mon, 09 Jan 2017 10:24:58 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3909 After Brexit and the election of Donald Trump, 2016 will be remembered as the year of cataclysmic democratic events on both sides of the Atlantic. Social media has been implicated in the wave of populism that led to both these developments.

Attention has focused on echo chambers, with many arguing that social media users exist in ideological filter bubbles, narrowly focused on their own preferences, prey to fake news and political bots, reinforcing polarization and leading voters to turn away from the mainstream. Mark Zuckerberg has responded with the strange claim that his company (built on $5 billion of advertising revenue) does not influence people’s decisions.

So what role did social media play in the political events of 2016?

Political turbulence and the new populism

There is no doubt that social media has brought change to politics. From the waves of protest and unrest in response to the 2008 financial crisis, to the Arab spring of 2011, there has been a generalized feeling that political mobilization is on the rise, and that social media had something to do with it.

Our book investigating the relationship between social media and collective action, Political Turbulence, focuses on how social media allows new, “tiny acts” of political participation (liking, tweeting, viewing, following, signing petitions and so on), which turn social movement theory around. Rather than identifying with issues, forming collective identity and then acting to support the interests of that identity – or voting for a political party that supports it – in a social media world, people act first, and think about it, or identify with others later, if at all.

These tiny acts of participation can scale up to large-scale mobilizations, such as demonstrations, protests or campaigns for policy change. But they almost always don’t. The overwhelming majority (99.99%) of petitions to the UK or US governments fail to get the 100,000 signatures required for a parliamentary debate (UK) or an official response (US).

The very few that succeed do so very quickly on a massive scale (petitions challenging the Brexit and Trump votes immediately shot above 4 million signatures, to become the largest petitions in history), but without the normal organizational or institutional trappings of a social or political movement, such as leaders or political parties – the reason why so many of the Arab Spring revolutions proved disappointing.

This explosive rise, non-normal distribution and lack of organization that characterizes contemporary politics can explain why many political developments of our time seem to come from nowhere. It can help to understand the shock waves of support that brought us the Italian Five Star Movement, Podemos in Spain, Jeremy Corbyn, Bernie Sanders, and most recently Brexit and Trump – all of which have campaigned against the “establishment” and challenged traditional political institutions to breaking point.

Each successive mobilization has made people believe that challengers from outside the mainstream are viable – and that is in part what has brought us unlikely results on both sides of the Atlantic. But it doesn’t explain everything.

We’ve had waves of populism before – long before social media (indeed many have made parallels between the politics of 2016 and that of the 1930s). While claims that social media feeds are the biggest threat to democracy, leading to the “disintegration of the general will” and “polarization that drives populism” abound, hard evidence is more difficult to find.

The myth of the echo chamber

The mechanism that is most often offered for this state of events is the existence of echo chambers or filter bubbles. The argument goes that first social media platforms feed people the news that is closest to their own ideological standpoint (estimated from their previous patterns of consumption) and second, that people create their own personalized information environments through their online behaviour, selecting friends and news sources that back up their world view.

Once in these ideological bubbles, people are prey to fake news and political bots that further reinforce their views. So, some argue, social media reinforces people’s current views and acts as a polarizing force on politics, meaning that “random exposure to content is gone from our diets of news and information”.

Really? Is exposure less random than before? Surely the most perfect echo chamber would be the one occupied by someone who only read the Daily Mail in the 1930s – with little possibility of other news – or someone who just watches Fox News? Can our new habitat on social media really be as closed off as these environments, when our digital networks are so very much larger and more heterogeneous than anything we’ve had before?

Research suggests not. A recent large-scale survey (of 50,000 news consumers in 26 countries) shows how those who do not use social media on average come across news from significantly fewer different online sources than those who do. Social media users, it found, receive an additional “boost” in the number of news sources they use each week, even if they are not actually trying to consume more news. These findings are reinforced by an analysis of Facebook data, where 8.8 billion posts, likes and comments were posted during the US election.

Recent research published in Science shows that algorithms play less of a role in exposure to attitude-challenging content than individuals’ own choices and that “on average more than 20% of an individual’s Facebook friends who report an ideological affiliation are from the opposing party”, meaning that social media exposes individuals to at least some ideologically cross-cutting viewpoints: “24% of the hard content shared by liberals’ friends is cross-cutting, compared to 35% for conservatives” (the equivalent figures would be 40% and 45% if random).

In fact, companies have no incentive to create hermetically sealed echo chambers (as I have heard one commentator claim). Most social media content is not about politics (sorry guys) – most of that $5 billion advertising revenue does not come from political organizations. So any incentives that companies have to create echo chambers – for the purposes of targeted advertising, for example – are most likely to relate to lifestyle choices or entertainment preferences, rather than political attitudes.

And where filter bubbles do exist they are constantly shifting and sliding – easily punctured by a trending cross-issue item (anybody looking at #Election2016 shortly before polling day would have seen a rich mix of views, while having little doubt about Trump’s impending victory).

And of course, even if political echo chambers were as efficient as some seem to think, there is little evidence that this is what actually shapes election results. After all, by definition echo chambers preach to the converted. It is the undecided people who (for example) the Leave and Trump campaigns needed to reach.

And from the research, it looks like they managed to do just that. A barrage of evidence suggests that such advertising was effective in the 2015 UK general election (where the Conservatives spent 10 times as much as Labour on Facebook advertising), in the EU referendum (where the Leave campaign also focused on paid Facebook ads) and in the presidential election, where Facebook advertising has been credited for Trump’s victory, while the Clinton campaign focused on TV ads. And of course, advanced advertising techniques might actually focus on those undecided voters from their conversations. This is not the bottom-up political mobilization that fired off support for Podemos or Bernie Sanders. It is massive top-down advertising dollars.

Ironically, however, these huge top-down political advertising campaigns share some of the same characteristics as the bottom-up movements discussed above, particularly their lack of sustainability. Former New York Governor Mario Cuomo’s dictum that candidates “campaign in poetry and govern in prose” may need an update. Barack Obama’s innovative campaigns of online social networks, micro-donations and matching support were miraculous, but the extent to which he developed digital government or data-driven policy-making in office was disappointing. “Campaign digitally, govern in analogue” might be the new mantra.

Chaotic pluralism

Politics is a lot messier in the social media era than it used to be – whether something takes off and succeeds in gaining critical mass is far more random than it appears to be from a casual glance, where we see only those that succeed.

In Political Turbulence, we wanted to identify the model of democracy that best encapsulates politics intertwined with social media. The dynamics we observed seem to be leading us to a model of “chaotic pluralism”, characterized by diversity and heterogeneity – similar to early pluralist models – but also by non-linearity and high interconnectivity, making liberal democracies far more disorganized, unstable and unpredictable than the architects of pluralist political thought ever envisaged.

Perhaps rather than blaming social media for undermining democracy, we should be thinking about how we can improve the (inevitably major) part that it plays.

Within chaotic pluralism, there is an urgent need for redesigning democratic institutions that can accommodate new forms of political engagement, and respond to the discontent, inequalities and feelings of exclusion – even anger and alienation – that are at the root of the new populism. We should be using social media to listen to (rather than merely talk at) the expression of these public sentiments, and not just at election time.

Many political institutions – for example, the British Labour Party, the US Republican Party, and the first-past-the-post electoral system shared by both countries – are in crisis, precisely because they have become so far removed from the concerns and needs of citizens. Redesign will need to include social media platforms themselves, which have rapidly become established as institutions of democracy and will be at the heart of any democratic revival.

As these platforms finally start to admit to being media companies (rather than tech companies), we will need to demand human intervention and transparency over algorithms that determine trending news; factchecking (where Google took the lead); algorithms that detect fake news; and possibly even “public interest” bots to counteract the rise of computational propaganda.

Meanwhile, the only thing we can really predict with certainty is that unpredictable things will happen and that social media will be part of our political future.

Discussing the echoes of the 1930s in today’s politics, the Wall Street Journal points out how Roosevelt managed to steer between the extremes of left and right because he knew that “public sentiments of anger and alienation aren’t to be belittled or dismissed, for their causes can be legitimate and their consequences powerful”. The path through populism and polarization may involve using the opportunity that social media presents to listen, understand and respond to these sentiments.

This piece draws on research from Political Turbulence: How Social Media Shape Collective Action (Princeton University Press, 2016), by Helen Margetts, Peter John, Scott Hale and Taha Yasseri.

It is cross-posted from the World Economic Forum, where it was first published on 22 December 2016.

]]>
Should there be a better accounting of the algorithms that choose our news for us? https://ensr.oii.ox.ac.uk/should-there-be-a-better-accounting-of-the-algorithms-that-choose-our-news-for-us/ Wed, 07 Dec 2016 14:44:31 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3875 A central ideal of democracy is that political discourse should allow a fair and critical exchange of ideas and values. But political discourse is unavoidably mediated by the mechanisms and technologies we use to communicate and receive information — and content personalization systems (think search engines, social media feeds and targeted advertising), and the algorithms they rely upon, create a new type of curated media that can undermine the fairness and quality of political discourse.

A new article by Brent Mittelstadt explores the challenges of enforcing a political right to transparency in content personalization systems. First, he explains the value of transparency to political discourse and suggests how content personalization systems undermine the open exchange of ideas and evidence among participants: at a minimum, personalization systems can undermine political discourse by curbing the diversity of ideas that participants encounter. Second, he explores work on the detection of discrimination in algorithmic decision making, including techniques of algorithmic auditing that service providers can employ to detect political bias. Third, he identifies several factors that inhibit auditing and thus indicate reasonable limitations on the ethical duties incurred by service providers — content personalization systems can function opaquely and be resistant to auditing because of poor accessibility and interpretability of decision-making frameworks. Finally, Brent concludes with reflections on the need for regulation of content personalization systems.

He notes that no matter how auditing is pursued, standards to detect evidence of political bias in personalized content are urgently required. Methods are needed to routinely and consistently assign political value labels to content delivered by personalization systems. This is perhaps the most pressing area for future work—to develop practical methods for algorithmic auditing.

The right to transparency in political discourse may seem unusual and farfetched. However, standards already set by the U.S. Federal Communications Commission’s fairness doctrine — no longer in force — and the British Broadcasting Corporation’s fairness principle both demonstrate the importance of the idealized version of political discourse described here. Both precedents promote balance in public political discourse by setting standards for the delivery of politically relevant content. Whether it is appropriate to hold service providers that use content personalization systems to a similar standard remains a crucial question.

Read the full article: Mittelstadt, B. (2016) Auditing for Transparency in Content Personalization Systems. International Journal of Communication 10(2016), 4991–5002.

We caught up with Brent to explore the broader implications of the study:

Ed: We basically accept that the tabloids will be filled with gross bias, populism and lies (in order to sell copy) — and editorial decisions are not generally transparent to us. In terms of their impact on the democratic process, what is the difference between the editorial boardroom and a personalising social media algorithm?

Brent: There are a number of differences. First, although not necessarily transparent to the public, one hopes that editorial boardrooms are at least transparent to those within the news organisations. Editors can discuss and debate the tone and factual accuracy of their stories, explain their reasoning to one another, reflect upon the impact of their decisions on their readers, and generally have a fair debate about the merits and weaknesses of particular content.

This is not the case for a personalising social media algorithm; those working with the algorithm inside a social media company are often unable to explain why the algorithm is functioning in a particular way, or determined a particular story or topic to be ‘trending’ or displayed to particular users, while others are not. It is also far more difficult to ‘fact check’ algorithmically curated news; a news item can be widely disseminated merely by many users posting or interacting with it, without any purposeful dissemination or fact checking by the platform provider.

Another big difference is the degree to which users can be aware of the bias of the stories they are reading. Whereas a reader of The Daily Mail or The Guardian will have some idea of the values of the paper, the same cannot be said of platforms offering algorithmically curated news and information. The platform can be neutral insofar as it disseminates news items and information reflecting a range of values and political viewpoints. A user will encounter items reflecting her particular values (or, more accurately, her history of interactions with the platform and the values inferred from them), but these values, and their impact on her exposure to alternative viewpoints, may not be apparent to the user.

Ed: And how is content “personalisation” different to content filtering (e.g. as we see with the Great Firewall of China) that people get very worked up about? Should we be more worried about personalisation?

Brent: Personalisation and filtering are essentially the same mechanism; information is tailored to a user or users according to some prevailing criteria. One difference is whether content is merely infeasible to access, or technically inaccessible. Content of all types will typically still be accessible in principle when personalisation is used, but the user will have to make an effort to access content that is not recommended or otherwise given special attention. Filtering systems, in contrast, will impose technical measures to make particular content inaccessible from a particular device or geographical area.

Another difference is the source of the criteria used to set the visibility of different types of content. In the case of personalisation, these criteria are typically based on the user’s (inferred) interests, values, past behaviours and explicit requests. Critically, these values are not necessarily apparent to the user. For filtering, criteria are typically externally determined by a third party, often a government. Some types of information are set off limits, according to the prevailing values of the third party. It is the imposition of external values, which limits the capacity of users to access content of their choosing, that often causes an outcry against filtering and censorship.

Importantly, the two mechanisms do not necessarily differ in terms of the transparency of the limiting factors or rules to users. In some cases, such as the recently proposed ban in the UK of adult websites that do not provide meaningful age verification mechanisms, the criteria that determine whether sites are off limits will be publicly known at a general level. In other cases, and especially with personalisation, the user inside the ‘filter bubble’ will be unaware of the rules that determine whether content is (in)accessible. And it is not always the case that the platform provider intentionally keeps these rules secret. Rather, the personalisation algorithms and background analytics that determine the rules can be too complex, inaccessible or poorly understood even by the provider to give the user any meaningful insight.

Ed: Where are these algorithms developed: are they basically all proprietary? i.e. how would you gain oversight of massively valuable and commercially sensitive intellectual property?

Brent: Personalisation algorithms tend to be proprietary, and thus are not normally open to public scrutiny in any meaningful sense. In one sense this is understandable; personalisation algorithms are valuable intellectual property. At the same time the lack of transparency is a problem, as personalisation fundamentally affects how users encounter and digest information on any number of topics. As recently argued, it may be the case that personalisation of news impacts on political and democratic processes. Existing regulatory mechanisms have not been successful in opening up the ‘black box’ so to speak.

It can be argued, however, that legal requirements should be adopted to require these algorithms to be open to public scrutiny due to the fundamental way they shape our consumption of news and information. Oversight can take a number of forms. As I argue in the article, algorithmic auditing is one promising route, performed both internally by the companies themselves, and externally by a government agency or researchers. A good starting point would be for the companies developing and deploying these algorithms to extend their cooperation with researchers, thereby allowing a third party to examine the effects these systems are having on political discourse, and society more broadly.

Ed: By “algorithm audit” — do you mean examining the code and inferring what the outcome might be in terms of bias, or checking the outcome (presumably statistically) and inferring that the algorithm must be introducing bias somewhere? And is it even possible to meaningfully audit personalisation algorithms, when they might rely on vast amounts of unpredictable user feedback to train the system?

Brent: Algorithm auditing can mean both of these things, and more. Audit studies are a tool already in use, whereby human participants introduce different inputs into a system, and examine the effect on the system’s outputs. Similar methods have long been used to detect discriminatory hiring practices, for instance. Code audits are another possibility, but are generally prohibitive due to problems of access and complexity. Also, even if you can access and understand the code of an algorithm, that tells you little about how the algorithm performs in practice when given certain input data. Both the algorithm and input data would need to be audited.

Alternatively, auditing can assess just the outputs of the algorithm; recent work to design mechanisms to detect disparate impact and discrimination, particularly in the Fairness, Accountability and Transparency in Machine Learning (FAT-ML) community, is a great example of this type of auditing. Algorithms can also be designed to attempt to prevent or detect discrimination and other harms as they occur. These methods are as much about the operation of the algorithm as they are about the nature of the training and input data, which may itself be biased. In short, auditing is very difficult, but there are promising avenues of research and development. Once we have reliable auditing methods, the next major challenge will be to tailor them to specific sectors; a one-size-fits-all approach to auditing is not on the cards.
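To make the output-auditing idea concrete, here is a minimal sketch in Python. The ranking system, the political-leaning labels and the surfaced/suppressed outcomes are all hypothetical stand-ins; the only substantive element is the impact-ratio calculation, with the “four-fifths rule” from the disparate impact literature used as a rough threshold.

```python
# A minimal sketch of an output-only audit study: probe an opaque
# content-ranking system with comparable items that differ only in a
# (hypothetical) political-leaning label, then compare outcome rates.

def audit_disparate_impact(outcomes_a, outcomes_b):
    """Return the ratio of positive-outcome rates between two groups.

    A ratio well below 1.0 (the disparate impact literature often cites
    the 'four-fifths rule', i.e. below 0.8) is treated as prima facie
    evidence of disparate impact.
    """
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    if max(rate_a, rate_b) == 0:
        return 1.0  # neither group gets positive outcomes; no disparity
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical audit data: 1 = story surfaced to the user, 0 = suppressed.
left_leaning = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]   # 8 of 10 surfaced
right_leaning = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]  # 4 of 10 surfaced

ratio = audit_disparate_impact(left_leaning, right_leaning)
print(f"impact ratio: {ratio:.2f}")  # 0.50, below the 0.8 threshold
```

This is the simplest possible version of the approach: it says nothing about why the disparity arises (which is where code and input-data audits come in), only that the system’s outputs treat the two groups unevenly.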

Ed: Do you think this is a real problem for our democracy? And what is the solution if so?

Brent: It’s difficult to say, in part because access and data to study the effects of personalisation systems are hard to come by. It is one thing to prove that personalisation is occurring on a particular platform, or to show that users are systematically displayed content reflecting a narrow range of values or interests. It is quite another to prove that these effects are having an overall harmful effect on democracy. Digesting information is one of the most basic elements of social and political life, so any mechanism that fundamentally changes how information is encountered should be subject to serious and sustained scrutiny.

Assuming personalisation actually harms democracy or political discourse, mitigating its effects is quite a different issue. Transparency is often treated as the solution, but merely opening up algorithms to public and individual scrutiny will not in itself solve the problem. Information about the functionality and effects of personalisation must be meaningful to users if anything is going to be accomplished.

At a minimum, users of personalisation systems should be given more information about their blind spots, about the types of information they are not seeing, or where they lie on the map of values or criteria used by the system to tailor content to users. A promising step would be proactively giving the user some idea of what the system thinks it knows about them, or how they are being classified or profiled, without the user first needing to ask.


Brent Mittelstadt was talking to blog editor David Sutcliffe.

]]>
Is Social Media Killing Democracy? https://ensr.oii.ox.ac.uk/is-social-media-killing-democracy/ Tue, 15 Nov 2016 08:46:10 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3850 Donald Trump in Reno, Nevada, by Darron Birgenheier (Flickr).

This is the big year for computational propaganda — using immense data sets to manipulate public opinion over social media. Both the Brexit referendum and US election have revealed the limits of modern democracy, and social media platforms are currently setting those limits.

Platforms like Twitter and Facebook now provide a structure for our political lives. We’ve always relied on many kinds of sources for our political news and information. Family, friends, news organizations, charismatic politicians certainly predate the internet. But whereas those are sources of information, social media now provides the structure for political conversation. And the problem is that these technologies permit too much fake news, encourage our herding instincts, and aren’t expected to provide public goods.

First, social algorithms allow fake news stories from untrustworthy sources to spread like wildfire over networks of family and friends. Many of us just assume that there is a modicum of truth-in-advertising. We expect this from advertisements for commercial goods and services, but not from politicians and political parties. Occasionally a political actor gets punished for betraying the public trust through their misinformation campaigns. But in the United States “political speech” is completely free from reasonable public oversight, and in most other countries the media organizations and public offices for watching politicians are legally constrained, poorly financed, or themselves untrustworthy. Research demonstrates that during the campaigns for Brexit and the U.S. presidency, large volumes of fake news stories, false factoids, and absurd claims were passed over social media networks, often by Twitter’s highly automated accounts and Facebook’s algorithms.

Second, social media algorithms provide very real structure to what political scientists often call “elective affinity” or “selective exposure”. When offered the choice of who to spend time with or which organizations to trust, we prefer to strengthen our ties to the people and organizations we already know and like. When offered a choice of news stories, we prefer to read about the issues we already care about, from pundits and news outlets we’ve enjoyed in the past. Random exposure to content is gone from our diets of news and information. The problem is not that we have constructed our own community silos — humans will always do that. The problem is that social media networks take away the random exposure to new, high-quality information.

This is not a technological problem. We are social beings and so we will naturally look for ways to socialize, and we will use technology to socialize each other. But technology could be part of the solution. A not-so-radical redesign might occasionally expose us to new sources of information, or warn us when our own social networks are getting too bounded.

The third problem is that technology companies, including Facebook and Twitter, have been given a “moral pass” on the obligations we hold journalists and civil society groups to.

In most democracies, the public opinion and exit polling systems have been broken for a decade. Many social scientists now find that big data, especially network data, does a better job of revealing public preferences than traditional random-digit-dial systems. So Facebook actually got a moral pass twice this year. Their data on public opinion would certainly have informed the Brexit debate, and their data on voter preferences would certainly have informed public conversation during the US election.

Facebook has run several experiments now, published in scholarly journals, demonstrating that they have the ability to accurately anticipate and measure social trends. Whereas journalists and social scientists feel an obligation to openly analyze and discuss public preferences, we do not expect this of Facebook. The network effects that clearly were unmeasured by pollsters were almost certainly observable to Facebook. When it comes to news and information about politics, or public preferences on important social questions, Facebook has a moral obligation to share data and prevent computational propaganda. The Brexit referendum and US election have taught us that Twitter and Facebook are now media companies. Their engineering decisions are effectively editorial decisions, and we need to expect more openness about how their algorithms work. And we should expect them to deliberate about their editorial decisions.

There are some ways to fix these problems. Opaque software algorithms shape what people find in their news feeds. We’ve all noticed fake news stories (often called clickbait), and while these can be an entertaining part of using the internet, it is bad when they are used to manipulate public opinion. These algorithms work as “bots” on social media platforms like Twitter, where they were used in both the Brexit and US presidential campaigns to aggressively advance the case for leaving Europe and the case for electing Trump. Similar algorithms work behind the scenes on Facebook, where they govern what content from your social networks actually gets your attention.

So the first way to strengthen democratic practices is for academics, journalists, policy makers and the interested public to audit social media algorithms. Was Hillary Clinton really replaced by an alien in the final weeks of the 2016 campaign? We all need to be able to see who wrote this story, whether or not it is true, and how it was spread. Most important, Facebook should not allow such stories to be presented as news, much less spread. If they take ad revenue for promoting political misinformation, they should face the same regulatory punishments that a broadcaster would face for doing such a public disservice.

The second problem is a social one that can be exacerbated by information technologies. This means it can also be mitigated by technologies. Introducing random news stories and ensuring exposure to high quality information would be a simple — and healthy — algorithmic adjustment to social media platforms. The third problem could be resolved with moral leadership from within social media firms, but a little public policy oversight from elections officials and media watchdogs would help. Did Facebook see that journalists and pollsters were wrong about public preferences? Facebook should have told us if so, and shared that data.

Social media platforms have provided a structure for spreading around fake news, we users tend to trust our friends and family, and we don’t hold media technology firms accountable for degrading our public conversations. The next big thing for technology evolution is the Internet of Things, which will generate massive amounts of data that will further harden these structures. Is social media damaging democracy? Yes, but we can also use social media to save democracy.

]]>
Don’t Shoot the Messenger! What part did social media play in the 2016 US election? https://ensr.oii.ox.ac.uk/dont-shoot-the-messenger-what-part-did-social-media-play-in-2016-us-election/ Tue, 15 Nov 2016 07:57:44 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3854
Young activists gather at Lafayette Park, preparing for a march to the U.S. Capitol in protest at the presidential campaign of presumptive Republican nominee Donald J. Trump. By Stephen Melkisethian (Flickr).

Commentators have been quick to ‘blame social media’ for ‘ruining’ the 2016 election in putting Mr Donald Trump in the White House. Just as was the case in the campaign for Brexit, people argue that social media has driven us to a ‘post-truth’ world of polarisation and echo chambers.

Is this really the case? At first glance, the ingredients of the Trump victory — as for Brexit — seem remarkably traditional. The Trump campaign spent more on physical souvenirs than on field data, more on Make America Great Again hats (made in China) than on polling. The Daily Mail characterisation of judges as Enemies of the People after their ruling that the triggering of Article 50 must be discussed in parliament seemed reminiscent of the 1930s. Likewise, US crowds chanting ‘Lock her up’, like lynch mobs, seemed like ghastly reminders of a pre-democratic era.

Clearly social media were a big part of the 2016 election, used heavily by the candidates themselves, and generating 8.8 billion posts, likes and comments on Facebook alone. Social media also make visible what in an earlier era could remain a country’s dark secret — hatred of women (through death and rape threats and trolling of female politicians in both the UK and US), and rampant racism.

This visibility, society’s new self-awareness, brings change to political behaviour. Social media provide social information about what other people are doing: viewing, following, liking, sharing, tweeting, joining, supporting and so on. This social information is the driver behind the political turbulence that characterises politics today. Those rustbelt Democrats feeling abandoned by the system saw on social media that they were not alone — that other people felt the same way, and that Trump was viable as a candidate. For a woman drawn towards the Trump agenda but feeling tentative, the hashtag #WomenForTrump could reassure her that there were like-minded people she could identify with. Decades of social science research shows information about the behaviour of others influences how groups behave and now it is driving the unpredictability of politics, bringing us Trump, Brexit, Corbyn, Sanders and unexpected political mobilisation across the world.

These are not echo chambers. As recent research shows, people are exposed to cross-cutting discourse on social media, across ever larger and more heterogeneous social networks. While the hypothetical #WomenForTrump tweeter or Facebook user will see like-minded behaviour, she will also see a peppering of social information showing people using opposing hashtags like #ImWithHer, or (post-election) #StillWithHer. It could be argued that a better example of an ‘echo chamber’ would be a regular Daily Mail reader or someone who only watched Fox News.

The mainstream media loved Trump: his controversial road-crash views sold their newspapers and advertising. Social media take us out of that world. They are relatively neutral in their stance on content, giving no particular priority to extreme or offensive views: on their platforms, the numbers are what matter.

Rather than seeing social media solely as the means by which Trump ensnared his presidential goal, we should appreciate how they can provide a wealth of valuable data to understand the anger and despair that the polls missed, and to analyse political behaviour and opinion in the times ahead. Social media can also shine the light of transparency on the workings of a Trump administration, as they did on his campaign. They will be critical for building networks of solidarity to confront the intolerance, sexism and racism stirred up during this bruising campaign. And social media will underpin any radical counter-movement that emerges in the coming years.


Helen Margetts is the author of Political Turbulence: How Social Media Shape Collective Action and thanks her co-authors Peter John, Scott Hale and Taha Yasseri.

]]>
Rethinking Digital Media and Political Change https://ensr.oii.ox.ac.uk/rethinking-digital-media-and-political-change/ Tue, 23 Aug 2016 14:52:07 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3824
Did Twitter lead to Donald Trump’s rise and success to date in the American campaign for the presidency? Image: Gage Skidmore (Flickr)
What are the dangers or new opportunities of digital media? One of the major debates in relation to digital media in the United States has been whether they contribute to political polarization. I argue in a new paper (Rethinking Digital Media and Political Change) that Twitter led to Donald Trump’s rise and success to date in the American campaign for the presidency. There is plenty of evidence to show that Trump received a disproportionate amount of attention on Twitter, which in turn generated a disproportionate amount of attention in the mainstream media. The strong correlation between the two suggests that Trump was able to bypass the gatekeepers of the traditional media.

A second ingredient in his success has been populism, which rails against dominant political elites (including the Republican party) and the ‘biased’ media. Populism also rests on the notion of an ‘authentic’ people — by implication excluding ‘others’ such as immigrants and foreign powers like the Chinese — to whom the leader appeals directly. The paper makes parallels with the strength of the Sweden Democrats, an anti-immigrant party which, in a similar way, has been able to appeal to its following via social media and online newspapers, again bypassing mainstream media with its populist message.

There is a difference, however: in the US, commercial media compete for audience share, so Trump’s controversial tweets have been eagerly embraced by journalists seeking high viewership and readership ratings. In Sweden, where public media dominate and there is far less of the ‘horserace’ style of American politics, the Sweden Democrats have been more locked out of the mainstream media and of politics. In short, Twitter plus populism has led to Trump. I argue that dominating the mediated attention space is crucial. How this story ends will be partly known in November. But whatever the outcome, it is already clear that the role of the media in politics, and how they can be circumvented by new media, requires fundamental rethinking.


Ralph Schroeder is Professor and director of the Master’s degree in Social Science of the Internet at the Oxford Internet Institute. Before coming to Oxford University, he was Professor in the School of Technology Management and Economics at Chalmers University in Gothenburg (Sweden). Recent books include Rethinking Science, Technology and Social Change (Stanford University Press, 2007) and, co-authored with Eric T. Meyer, Knowledge Machines: Digital Transformations of the Sciences and Humanities (MIT Press 2015).

]]>
Brexit, voting, and political turbulence https://ensr.oii.ox.ac.uk/brexit-voting-and-political-turbulence/ Thu, 18 Aug 2016 14:23:20 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3819 Cross-posted from the Princeton University Press blog. The authors of Political Turbulence discuss how the explosive rise, non-normal distribution and lack of organization that characterizes contemporary politics as a chaotic system, can explain why many political mobilizations of our times seem to come from nowhere.


On 23rd June 2016, a majority of the British public voted in a referendum on whether to leave the European Union. The Leave or so-called #Brexit option was victorious, with a margin of 52% to 48% across the country, although Scotland, Northern Ireland, London and some towns voted to remain. The result was a shock to both leave and remain supporters alike. US readers might note that when the polls closed, the odds on futures markets of Brexit (15%) were longer than those of Trump being elected President.

Political scientists are reeling with the sheer volume of politics that has been packed into the month after the result. From the Prime Minister’s morning-after resignation on 24th June, the country was mired in political chaos, with almost every political institution challenged and under question in the aftermath of the vote, including both the Conservative and Labour parties and the existence of the United Kingdom itself, given Scotland’s resistance to leaving the EU. The eventual formation of a government under a new prime minister, Theresa May, has brought some stability. But she was not elected and her government has a tiny majority of only 12 Members of Parliament. A cartoon by Matt in the Telegraph on July 2nd (which would work for almost any day) showed two students, one of them saying ‘I’m studying politics. The course covers the period from 8am on Thursday to lunchtime on Friday.’

All these events – the campaigns to remain or leave, the post-referendum turmoil, resignations, sackings and appointments – were played out on social media; the speed of change and the unpredictability of events being far too great for conventional media to keep pace. So our book, Political Turbulence: How Social Media Shape Collective Action, can provide a way to think about the past weeks. The book focuses on how social media allow new, ‘tiny acts’ of political participation (liking, tweeting, viewing, following, signing petitions and so on), which turn social movement theory around. Rather than identifying with issues, forming collective identity and then acting to support the interests of that identity – or voting for a political party that supports it – in a social media world, people act first, and think about it, or identify with others later – if at all.

These tiny acts of participation can scale up to large-scale mobilizations, such as demonstrations, protests or petitions for policy change. These mobilizations normally fail – 99.9% of petitions to the UK or US governments fail to get the 100,000 signatures required for a parliamentary debate (UK) or an official response (US). The very few that succeed usually do so very quickly on a massive scale, but without the normal organizational or institutional trappings of a social or political movement, such as leaders or political parties. When Brazilian President Dilma Rousseff asked to speak to the leaders of the mass demonstrations against the government in 2014 organised entirely on social media with an explicit rejection of party politics, she was told ‘there are no leaders’.

The explosive rise, non-normal distribution and lack of organization that characterize contemporary politics as a chaotic system can explain why many political mobilizations of our times seem to come from nowhere. In the US and the UK, these characteristics help us understand the shock waves of support that brought Bernie Sanders, Donald Trump, Jeremy Corbyn (elected leader of the Labour party in 2015) and Brexit itself, all of which have so strongly challenged traditional political institutions. In both countries, the two largest political parties are creaking to breaking point in their efforts to accommodate these phenomena.

The unpredicted support for Brexit by over half of voters in the UK referendum illustrates these characteristics of the movements we model in the book, with their resistance to traditional forms of organization. Voters were courted by political institutions from all sides – the government, all the political parties apart from UKIP, the Bank of England, international organizations, foreign governments, the US President himself and the ‘Remain’ or StrongerIn campaign convened by the Conservative, Labour and smaller parties. Virtually every authoritative source of information supported Remain. Yet people were resistant to aligning themselves with any of them. Experts, facts and leaders of any kind were all rejected by the rising swell of support for the Leave side. Famously, Michael Gove, one of the key Leave campaigners, said ‘we have had enough of experts’. According to YouGov polls, over two-thirds of Conservative voters in 2015 voted to Leave in 2016, as did over one third of Labour and Liberal Democrat voters.

Instead, people turned to a few key claims promulgated by the two Leave campaigns: Vote Leave (with key Conservative Brexiteers such as Boris Johnson, Michael Gove and Liam Fox) and Leave.EU, dominated by UKIP and its leader Nigel Farage, and bankrolled by the aptly named billionaire Arron Banks. This side dominated social media in driving home its simple (if largely untrue) claims and its anti-establishment, anti-elitist message (although all were part of the upper echelons of both establishment and elite). Key memes included the claim (painted on the side of a bus) that the UK gave £350m a week to the EU which could instead be spent on the NHS; the likelihood that Turkey would soon join the EU; and an image showing floods of migrants entering the UK via Europe. Banks brought in staff from his own insurance companies and political campaign firms (such as Goddard Gunster), and Leave.EU created a massive database of Leave supporters to employ targeted advertising on social media.

While Remain represented the status quo and a known entity, Leave was flexible enough to sell itself as anything to anyone. Leave campaigners would often criticize the Government but then offer no specific policy alternatives, stating: ‘we are a campaign not a government.’ This ability for people to coalesce around a movement for a variety of different (and sometimes conflicting) reasons is a hallmark of the social media-based campaigns that characterize Political Turbulence. Some voters and campaigners argued that voting Leave would allow the UK to be more global and accept more immigrants from non-EU countries. In contrast, racism and anti-immigration sentiment were key reasons for other voters. Desire for sovereignty and independence, responses to austerity and economic inequality and hostility to the elites in London and the South East have all figured in the torrent of post-Brexit analysis. These alternative faces of Leave were exploited to gain votes for ‘change,’ but the exact change sought by any two voters could be very different.

The movement’s organization illustrates what we have observed in recent political turbulence – as in Brazil, Hong Kong and Egypt: a complete rejection of mainstream political parties and institutions and an absence of leaders in any conventional sense. There is little evidence that the leading lights of the Leave campaigns were seen as prospective leaders. There was no outcry from the Leave side when they seemed to melt away after the vote, no mourning over Michael Gove’s complete fall from grace when the government was formed – nor even joy at Boris Johnson’s appointment as Foreign Secretary. Rather, the Leave campaigns acted like advertising campaigns, driving their points home to all corners of the online and offline worlds but without a clear public face. After the result, it transpired that there was no plan, no policy proposals, no exit strategy proposed by either campaign. The Vote Leave campaign was seemingly paralyzed by shock after the vote (they tried to delete their whole site, now reluctantly and partially restored with the lie on the side of the bus toned down to £50 million), pickled forever after 23rd June. Meanwhile, Theresa May, a reluctant Remain supporter and an absent figure during the referendum itself, emerged as the only viable leader after the event, in the same way as (in a very different context) the Muslim Brotherhood, as the only viable organization, were able to assume power after the first Egyptian revolution.

In contrast, the Leave.EU website remains highly active, possibly poised for the rebirth of UKIP as a radical populist far-right party on the European model, as Arron Banks has proposed. UKIP was formed around this single policy – of leaving the EU – and will struggle to find policy purpose post-Brexit. A new party, armed with Banks’ huge resources and a massive database of Leave supporters and their social media affiliations – supporters possibly disenchanted by the slow progress of Brexit and disaffected with the traditional parties – might be a political winner on the new landscape.

The act of voting in the referendum will define people’s political identity for the foreseeable future, shaping the way they vote in any forthcoming election. The entire political system is being redrawn around this single issue, and whichever organizational grouping can ride the wave will win. The one thing we can predict for our political future is that it will be unpredictable.

 

]]>
How big data is breathing new life into the smart cities concept https://ensr.oii.ox.ac.uk/how-big-data-is-breathing-new-life-into-the-smart-cities-concept/ Thu, 23 Jul 2015 09:57:10 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3297 “Big data” is a growing area of interest for public policy makers: for example, it was highlighted in UK Chancellor George Osborne’s recent budget speech as a major means of improving efficiency in public service delivery. While big data can apply to government at every level, the majority of innovation is currently being driven by local government, especially cities, who perhaps have greater flexibility and room to experiment and who are constantly on a drive to improve service delivery without increasing budgets.

Work on big data for cities is increasingly incorporated under the rubric of “smart cities”. The smart city is an old(ish) idea: give urban policymakers real time information on a whole variety of indicators about their city (from traffic and pollution to park usage and waste bin collection) and they will be able to improve decision making and optimise service delivery. But the initial vision, which mostly centred around adding sensors and RFID tags to objects around the city so that they would be able to communicate, has thus far remained unrealised (big up-front investment needs and the requirements of IPv6 are perhaps the most obvious reasons for this).

The rise of big data – large, heterogeneous datasets generated by the increasing digitisation of social life – has however breathed new life into the smart cities concept. If all the cars have GPS devices, all the people have mobile phones, and all opinions are expressed on social media, then do we really need the city to be smart at all? Instead, policymakers can simply extract what they need from a sea of data which is already around them. And indeed, data from mobile phone operators has already been used for traffic optimisation, Oyster card data has been used to plan London Underground service interruptions, sewage data has been used to estimate population levels … the examples go on.

However, at the moment these examples remain largely anecdotal, driven forward by a few cities rather than adopted worldwide. The big data driven smart city faces considerable challenges if it is to become a default means of policymaking rather than a conversation piece. Getting access to the right data; correcting for biases and inaccuracies (not everyone has a GPS, phone, or expresses themselves on social media); and communicating it all to executives remain key concerns. Furthermore, especially in a context of tight budgets, most local governments cannot afford to experiment with new techniques which may not pay off instantly.

This is the context of two current OII projects in the smart cities field: UrbanData2Decide (2014-2016) and NEXUS (2015-2017). UrbanData2Decide joins together a consortium of European universities, each working with a local city partner, to explore how local government problems can be resolved with urban generated data. In Oxford, we are looking at how open mapping data can be used to estimate alcohol availability; how website analytics can be used to estimate service disruption; and how internal administrative data and social media data can be used to estimate population levels. The best concepts will be built into an application which allows decision makers to access them in real time.

NEXUS builds on this work. A collaborative partnership with BT, it will look at how social media data and some internal BT data can be used to estimate people movement and traffic patterns around the city, joining these data into network visualisations which are then displayed to policymakers in a data visualisation application. Both projects fill an important gap by allowing city officials to experiment with data driven solutions, providing proof of concepts and showing what works and what doesn’t. Increasing academic-government partnerships in this way has real potential to drive forward the field and turn the smart city vision into a reality.


OII Research Fellow Jonathan Bright is a political scientist specialising in computational and ‘big data’ approaches to the social sciences. His major interest concerns studying how people get information about the political process, and how this is changing in the internet era.

]]>
Digital Disconnect: Parties, Pollsters and Political Analysis in #GE2015 https://ensr.oii.ox.ac.uk/digital-disconnect-parties-pollsters-and-political-analysis-in-ge2015/ Mon, 11 May 2015 15:16:16 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3268 We undertook some live analysis of social media data over the night of the 2015 UK General Election. See more photos from the OII's election night party, or read about the data hack
The Oxford Internet Institute undertook some live analysis of social media data over the night of the 2015 UK General Election. See more photos from the OII’s election night party, or read about the data hack

Counts of public Facebook posts mentioning any of the party leaders’ surnames. Data generated by social media can be used to understand political behaviour and institutions on an ongoing basis.

‘Congratulations to my friend @Messina2012 on his role in the resounding Conservative victory in Britain’ tweeted David Axelrod, campaign advisor to Miliband, to his former colleague Jim Messina, Cameron’s strategy adviser, on May 8th. The former was Obama’s communications director and the latter campaign manager of Obama’s 2012 campaign. Along with other consultants, advisors and large-scale data management platforms from Obama’s hugely successful digital campaigns, the Conservative and Labour parties used an arsenal of social media and digital tools to interact with voters throughout, as did all the parties competing for seats in the 2015 election.

The parties ran very different kinds of digital campaigns. The Conservatives used advanced data science techniques borrowed from the US campaigns to understand how their policy announcements were being received and to target groups of individuals. They spent ten times as much as Labour on Facebook, using ads targeted at Facebook users according to their activities on the platform, geo-location and demographics. This was a top-down strategy that involved working out what was happening on social media and responding with targeted advertising, particularly for marginal seats. It was supplemented by the mainstream media, such as the Telegraph, which contacted its database of readers and subscribers to services such as Telegraph Money, urging them to vote Conservative. As Andrew Cooper tweeted after the election, ‘Big data, micro-targeting and social media campaigns just thrashed “5 million conversations” and “community organizing”’.

He has a point. Labour took a different approach to social media. Widely acknowledged to have the most boots on the real ground, knocking on doors, they took a similar ‘ground war’ approach to social media in local campaigns. Our own analysis at the Oxford Internet Institute shows that of the 450K tweets sent by candidates of the six largest parties in the month leading up to the general election, Labour party candidates sent over 120,000 while the Conservatives sent only 80,000, no more than the Greens and not much more than UKIP. But Labour’s greater number of tweets was no more productive in terms of impact, whether measured in mentions generated or, indeed, in the final result.

Both parties’ campaigns were tightly controlled. Ostensibly, Labour generated far more bottom-up activity from supporters using social media, through memes like #votecameron out, #milibrand (responding to Miliband’s interview with Russell Brand), and what Miliband himself termed the most unlikely cult of the 21st century in his resignation speech, #milifandom, none of which came directly from Central Office. These produced peaks of activity on Twitter that at some points exceeded even discussion of the election itself on the semi-official #GE2015 used by the parties, as the figure below shows. But the party remained aloof from these conversations, fearful of mainstream media mockery.

The Brand interview was agreed to out of desperation and can have made little difference to the vote (partly because Brand endorsed Miliband only after the deadline for voter registration: young voters suddenly overcome by an enthusiasm for participatory democracy after Brand’s public volte face on the utility of voting will have remained disenfranchised). But engaging with the swathes of young people who spend increasing amounts of their time on social media is a strategy for engagement that all parties ought to consider. YouTubers like PewDiePie have tens of millions of subscribers and billions of video views – their videos may seem unbelievably silly to many, but it is here that a good chunk of the next generation of voters are to be found.

Use of emergent hashtags on Twitter during the 2015 General Election. Volumes are estimates based on a 10% sample with the exception of #ge2015, which reflects the exact value. All data from Datasift.

Only one of the leaders had a presence on social media that managed anything like the personal touch and universal reach that Obama achieved in 2008 and 2012 based on sustained engagement with social media – Nicola Sturgeon. The SNP’s use of social media, developed in last September’s referendum on Scottish independence, had spawned a whole army of digital activists. All SNP candidates started the campaign with a Twitter account. When we look at the 650 local campaigns waged across the country, by far the most productive, in the sense of generating mentions, were the SNP’s: 100 tweets from SNP local candidates generated ten times more mentions (1,000) than 100 tweets from (for example) the Liberal Democrats.
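The ‘productivity’ comparison here reduces to a simple mentions-per-tweet ratio. As a minimal sketch in Python (using the illustrative figures quoted above; the function name is ours, not part of the OII’s actual analysis code):

```python
# Mentions-per-tweet "productivity" of local campaigns, using the
# illustrative figures quoted in the text (a sketch, not the OII's
# actual analysis pipeline).
def mentions_per_tweet(tweets: int, mentions: int) -> float:
    """Average number of mentions generated per tweet sent."""
    return mentions / tweets

snp = mentions_per_tweet(tweets=100, mentions=1000)
lib_dem = mentions_per_tweet(tweets=100, mentions=100)

print(f"SNP: {snp:.1f} mentions per tweet")           # 10.0
print(f"Lib Dems: {lib_dem:.1f} mentions per tweet")  # 1.0
print(f"SNP productivity ratio: {snp / lib_dem:.0f}x")  # 10x
```

On this measure the SNP’s local campaigns were ten times as productive per tweet as the Liberal Democrats’, which is the comparison made in the text.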

Scottish Labour’s failure to engage with the Scottish people in this way illustrates how difficult it is to suddenly develop relationships on social media – followers on all platforms are built up over years, not in the short space of a campaign. In strong contrast, advertising on these platforms as the Conservatives did is instantaneous, and based on the data science understanding (through advertising algorithms) of the platform itself. It doesn’t require huge databases of supporters – it doesn’t build up relationships between the party and supporters – indeed, they may remain anonymous to the party. It’s quick, dirty and effective.

The pollsters’ terrible night

So neither of the two largest parties really did anything with social media, or the huge databases of interactions that their platforms will have generated, to build long-running engagement with the electorate. The campaigns were disconnected from their supporters, from their grass roots.

But the differing use of social media by the parties could lend a clue to why the opinion polls throughout the campaign got it so wrong, underestimating the Conservative lead by an average of five per cent. The social media data that may be gathered from this or any campaign is a valuable source of information about what the parties are doing, how they are being received, and what people are thinking or talking about in this important space – where so many people spend so much of their time. Of course, it is difficult to read from the outside; Andrew Cooper labeled the Conservatives’ campaign of big data to identify undecided voters, and micro-targeting on social media, as ‘silent and invisible’ and it seems to have been so to the polls.

Many voters were undecided until the last minute, or decided not to vote, which is impossible to predict with polls (bar the exit poll) – but possibly observable on social media, such as the spikes in attention to UKIP on Wikipedia towards the end of the campaign, which may have signaled their impressive share of the vote. As Jim Messina put it to MSNBC News following up on his May 8th tweet that UK (and US) polling was ‘completely broken’ – ‘people communicate in different ways now’, arguing that the Miliband campaign had tried to go back to the 1970s.

Surveys, such as polls, give a (hopefully) representative picture of what people think they might do. Social media data provide an (unrepresentative) picture of what people really said or did. Long-running opinion surveys (such as the Ipsos MORI Issues Index) can monitor the hopes and fears of the electorate in between elections, but attention tends to focus on the huge barrage of opinion polls at election time – which are geared entirely at predicting the election result, and which do not contribute to more general understanding of voters. In contrast, social media are a good way to track rapid bursts in mobilization or support, which reflect immediately on social media platforms – and could also be developed to illustrate more long running trends, such as unpopular policies or failing services.

As opinion surveys face more and more challenges, there is surely good reason to supplement them with social media data, which reflect what people are really thinking on an ongoing basis – more like a video than the irregular snapshots taken by polls. As leading pollster João Francisco Meira, director of Vox Populi in Brazil (which is doing innovative work in using social media data to understand public opinion), put it in conversation with one of the authors in April: ‘we have spent so long trying to hear what people are saying – now they are crying out to be heard, every day’. It is a question of pollsters working out how to listen.

Political big data

Analysts of political behaviour – academics as well as pollsters — need to pay attention to this data. At the OII we gathered large quantities of data from Facebook, Twitter, Wikipedia and YouTube in the lead-up to the election campaign, including mentions of all candidates (as did Demos’s Centre for the Analysis of Social Media). Using this data we will be able, for example, to work out the relationship between local social media campaigns and the parties’ share of the vote, as well as modeling the relationship between social media presence and turnout.

We can already see that the story of the local campaigns varied enormously – while at the start of the campaign some candidates were probably requesting new passwords for their rusty Twitter accounts, some already had an ongoing relationship with their constituents (or potential constituents), which they could build on during the campaign. One of the candidates to take over the Labour party leadership, Chuka Umunna, joined Twitter in April 2009 and now has 100K followers, which will be useful in the forthcoming leadership contest.

Election results inject data into a research field that lacks ‘big data’. Data hungry political scientists will analyse these data in every way imaginable for the next five years. But data in between elections, for example relating to democratic or civic engagement or political mobilization, have traditionally been woefully scarce in our discipline. Analysis of the social media campaigns in #GE2015 will start to provide a foundation to understand patterns and trends in voting behaviour, particularly when linked to other sources of data, such as the actual constituency-level voting results and even discredited polls — which may yet yield insight, even having failed to achieve their predictive aims. As the OII’s Jonathan Bright and Taha Yasseri have argued, we need ‘a theory-informed model to drive social media predictions, that is based on an understanding of how the data is generated and hence enables us to correct for certain biases’.

A political data science

Parties, pollsters and political analysts should all be thinking about these digital disconnects in #GE2015, rather than burying them with their hopes for this election. As I argued in a previous post, let’s use data generated by social media to understand political behaviour and institutions on an ongoing basis. Let’s find a way of incorporating social media analysis into polling models, for example by linking survey datasets to big data of this kind. The more such activity moves beyond the election campaign itself, the more useful social media data will be in tracking the underlying trends and patterns in political behavior.

And for the parties, these kinds of ways of understanding and interacting with voters need to be institutionalized in party structures, from top to bottom. On 8th May, the VP of a policy think-tank tweeted to both Axelrod and Messina ‘Gentlemen, welcome back to America. Let’s win the next one on this side of the pond’. The UK parties are on their own now. We must hope they use the time to build an ongoing dialogue with citizens and voters, learning from the success of the new online interest group barons, such as 38 Degrees and Avaaz, by treating all internet contacts as ‘members’ and interacting with them on a regular basis. Don’t wait until 2020!


Helen Margetts is the Director of the OII, and Professor of Society and the Internet. She is a political scientist specialising in digital era governance and politics, investigating political behaviour, digital government and government-citizen interactions in the age of the internet, social media and big data. She has published over a hundred books, articles and major research reports in this area, including Political Turbulence: How Social Media Shape Collective Action (with Peter John, Scott Hale and Taha Yasseri, 2015).

Scott A. Hale is a Data Scientist at the OII. He develops and applies techniques from computer science to research questions in the social sciences. He is particularly interested in the area of human-computer interaction and the spread of information between speakers of different languages online and the roles of bilingual Internet users. He is also interested in collective action and politics more generally.

]]>
How do the mass media affect levels of trust in government? https://ensr.oii.ox.ac.uk/how-do-the-mass-media-affect-levels-of-trust-in-government/ Wed, 04 Mar 2015 16:33:45 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3157
The South Korean Government, as well as the Seoul Metropolitan Government, have gone to great lengths to enhance their openness, using many different ICTs. Seoul at night by jonasginter.
Ed: You examine the influence of citizens’ use of online mass media on levels of trust in government. In brief, what did you find?

Greg: As I explain in the article, there is a common belief that mass media outlets, and especially online mass media outlets, often portray government in a negative light in an effort to pique the interest of readers. This tendency of media outlets to engage in ‘bureaucracy bashing’ is thought, in turn, to detract from the public’s support for their government. The basic assumption underpinning this relationship is that the more negative information on government there is, the more negative public opinion. However, in my analyses, I found evidence of a positive indirect relationship between citizens’ use of online mass media outlets and their levels of trust in government. Interestingly, however, the more frequently citizens used online mass media outlets for information about their government, the weaker this association became. These findings challenge conventional wisdom that suggests greater exposure to mass media outlets will result in more negative perceptions of the public sector.

Ed: So you find that that the particular positive or negative spin of the actual message may not be as important as the individuals’ sense that they are aware of the activities of the public sector. That’s presumably good news — both for government, and for efforts to ‘open it up’?

Greg: Yes, I think it can be. However, a few important caveats apply. First, the positive relationship between online mass media use and perceptions of government tapers off as respondents made more frequent use of online mass media outlets. In the study, I interpreted this to mean that exposure to mass media had less of an influence upon those who were more aware of public affairs, and more of an influence upon those who were less aware of public affairs. Therefore, there is something of a diminishing returns aspect to this relationship. Second, this study was not able to account for the valence (i.e. how positive or negative the information is) of information respondents were exposed to when using online mass media. While some attempts were made to control for valence by adding different control variables, further research drawing upon experimental research designs would be useful in substantiating the relationship between the valence of information disseminated by mass media outlets and citizens’ perceptions of their government.

Ed: Do you think governments are aware of this relationship — ie that an indirect effect of being more open and present in the media, might be increased citizen trust — and that they are responding accordingly?

Greg: I think that there is a general idea that more communication is better than less communication. However, at the same time there is a lot of evidence to suggest that some of the more complex aspects of the relationship between openness and trust in government go unaccounted for in current attempts by public sector organizations to become more open and transparent. As a result, this tool that public organizations have at their disposal is not being used as effectively as it could be, and in some instances is being used in ways that are counterproductive – that is, actually decreasing citizen trust in government. Therefore, in order for governments to translate greater openness into greater trust in government, more refined applications are necessary.

Ed: I know there are various initiatives in the UK — open government data / FoIs / departmental social media channels etc. — aimed at a general opening up of government processes. How open is the Korean government? Is a greater openness something they might adopt (or are adopting?) as part of a general aim to have a more informed and involved — and therefore hopefully more trusting — citizenry?

Greg: The South Korean Government, as well as the Seoul Metropolitan Government have gone to great lengths to enhance their openness. Their strategy has made use of different ICTs, such as e-government websites, social media accounts, non-emergency call centers, and smart phone apps. As a result, many now say that attempts by the Korean Government to become more open are more advanced than in many other areas of the developed world. However, the persistent issue in South Korea, as elsewhere, is whether these attempts are having the intended impact. A lot of empirical research has found, for example, that various attempts at becoming more open by many governments around the world have fallen short of creating a more informed and involved citizenry.

Ed: Finally — is there much empirical work or data in this area?

Greg: While there is a lot of excellent empirical research from the field of political science that has examined how mass media use relates to citizens’ perceptions of politicians, political preferences, or their levels of political knowledge, this topic has received almost no attention at all in public management/administration. This lack of discussion is surprising, given mass media has long served as a key means of enhancing the transparency and accountability of public organizations.

Read the full article: Porumbescu, G. (2013) Assessing the Link Between Online Mass Media and Trust in Government: Evidence From Seoul, South Korea. Policy & Internet 5 (4) 418-443.


Greg Porumbescu was talking to blog editor David Sutcliffe.

Gregory Porumbescu is an Assistant Professor at the Northern Illinois University Department of Public Administration. His research interests primarily relate to public sector applications of information and communications technology, transparency and accountability, and citizens’ perceptions of public service provision.

]]>
Young people are the most likely to take action to protect their privacy on social networking sites https://ensr.oii.ox.ac.uk/young-people-are-the-most-likely-to-take-action-to-protect-their-privacy-on-social-networking-sites/ Thu, 14 Aug 2014 07:33:49 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2694
A pretty good idea of what not to do on a social media site. Image by Sean MacEntee.

Standing on a stage in San Francisco in early 2010, Facebook founder Mark Zuckerberg, partly responding to the site’s decision to change the privacy settings of its 350 million users, announced that as Internet users had become more comfortable sharing information online, privacy was no longer a “social norm”. Of course, he had an obvious commercial interest in relaxing norms surrounding online privacy, but this attitude has nevertheless been widely echoed in the popular media. Young people are supposed to be sharing their private lives online — and providing huge amounts of data for commercial and government entities — because they don’t fully understand the implications of the public nature of the Internet.

There has actually been little systematic research on the privacy behaviour of different age groups in online settings. But there is certainly evidence of a growing (general) concern about online privacy (Marwick et al., 2010), with a 2013 Pew study finding that 50 percent of Internet users were worried about the information available about them online, up from 30 percent in 2009. Following the recent revelations about the NSA’s surveillance activities, a Washington Post-ABC poll reported 40 percent of its U.S. respondents as saying that it was more important to protect citizens’ privacy even if it limited the ability of the government to investigate terrorist threats. But what of young people, specifically? Do they really care less about their online privacy than older users?

Privacy concerns an individual’s ability to control what personal information about them is disclosed, to whom, when, and under what circumstances. We present different versions of ourselves to different audiences, and the expectations and norms of the particular audience (or context) will determine what personal information is presented or kept hidden. This highlights a fundamental problem with privacy in some SNSs: that of ‘context collapse’ (Marwick and boyd 2011). This describes what happens when audiences that are normally kept separate offline (such as employers and family) collapse into a single online context: such as a single Facebook account or Twitter channel. This could lead to problems when actions that are appropriate in one context are seen by members of another audience; consider, for example, the US high school teacher who was forced to resign after a parent complained about a Facebook photo of her holding a glass of wine while on holiday in Europe.

SNSs are particularly useful for investigating how people handle privacy. Their tendency to collapse the “circles of social life” may prompt users to reflect more about their online privacy (particularly if they have been primed by media coverage of people losing their jobs, going to prison, etc. as a result of injudicious postings). However, despite SNSs being an incredibly useful source of information about online behaviour practices, few articles in the large body of literature on online privacy draw on systematically collected data, and the results published so far are probably best described as conflicting (see the literature review in the full paper). Furthermore, they often use convenience samples of college students, meaning they are unable to adequately address either age effects, or potentially related variables such as education and income. These ambiguities certainly provide fertile ground for additional research; particularly research based on empirical data.

The OII’s own Oxford Internet Surveys (OxIS) collect data on British Internet users and non-users through nationally representative random samples of more than 2,000 individuals aged 14 and older, surveyed face-to-face. One of the (many) things we are interested in is online privacy behaviour, which we measure by asking respondents who have an SNS profile: “Thinking about all the social network sites you use, … on average how often do you check or change your privacy settings?” In addition to the demographic factors we collect about respondents (age, sex, location, education, income etc.), we can construct various non-demographic measures that might have a bearing on this question, such as: comfort revealing personal data; bad experiences online; concern with negative experiences; number of SNSs used; and self-reported ability using the Internet.

So are young people completely unconcerned about their privacy online, gaily granting access to everything to everyone? Well, in a word, no. We actually find a clear inverse relationship: almost 95% of 14-17-year-olds have checked or changed their SNS privacy settings, with the percentage steadily dropping to 32.5% of respondents aged 65 and over. The strength of this effect is remarkable: between the oldest and youngest the difference is over 62 percentage points, and we find little difference in the pattern between the 2013 and 2011 surveys. This immediately suggests that the common assumption that young people don’t care about — and won’t act on — privacy concerns is probably wrong.
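The comparison behind these figures is straightforward to reproduce. The sketch below computes, from survey microdata, the share of SNS users in each age band who have ever checked or changed their privacy settings. The microdata and band definitions here are invented purely to mirror the reported 95% and 32.5% figures; they are not the OxIS data.

```python
from collections import defaultdict

def privacy_rate_by_age_band(respondents, bands):
    """Share of SNS users in each age band who have checked or changed
    their privacy settings. `respondents` is an iterable of (age, acted)
    pairs; `bands` is a list of (lo, hi, label) tuples."""
    counts = defaultdict(lambda: [0, 0])  # label -> [acted, total]
    for age, acted in respondents:
        for lo, hi, label in bands:
            if lo <= age <= hi:
                counts[label][0] += int(acted)
                counts[label][1] += 1
                break
    return {label: acted / total for label, (acted, total) in counts.items()}

# Invented microdata chosen to reproduce the reported gradient
sample = [(15, True)] * 19 + [(15, False)] * 1 + \
         [(70, True)] * 13 + [(70, False)] * 27
bands = [(14, 17, "14-17"), (65, 120, "65+")]
rates = privacy_rate_by_age_band(sample, bands)
print(rates)  # {'14-17': 0.95, '65+': 0.325}
```

The same tabulation, run per survey wave, is what lets the paper compare the 2011 and 2013 patterns directly.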


Comparing our own data with recent nationally representative surveys from Australia (OAIC 2013) and the US (Pew 2013) we see an amazing similarity: young people are more, not less, likely to have taken action to protect the privacy of their personal information on social networking sites than older people. We find that this age effect remains significant even after controlling for other demographic variables (such as education). And none of the five non-demographic variables changes the age effect either (see the paper for the full data, analysis and modelling). The age effect appears to be real.

So in short, and contrary to the prevailing discourse, we do not find young people to be apathetic when it comes to online privacy. Barnes (2006) outlined the original ‘privacy paradox’ by arguing that “adults are concerned about invasion of privacy, while teens freely give up personal information (…) because often teens are not aware of the public nature of the Internet.” This may once have been true, but it is certainly not the case today.

Existing theories are unable to explain why young people are more likely to act to protect privacy, but maybe the answer lies in the broad, fundamental characteristics of social life. It is social structure that creates context: people know each other based around shared life stages, experiences and purposes. Every person is the centre of many social circles, and different circles have different norms for what is acceptable behaviour, and thus for what is made public or kept private. If we think of privacy as a sort of meta-norm that arises between groups rather than within groups, it provides a way to smooth out some of the inevitable conflicts of the varied contexts of modern social life.

This might help explain why young people are particularly concerned about their online privacy. At a time when they’re leaving their families and establishing their own identities, they will often be doing activities in one circle (e.g. friends) that they do not want known in other circles (e.g. potential employers or parents). As an individual enters the work force, starts to pay taxes, and develops friendships and relationships farther from the home, the number of social circles increases, increasing the potential for conflicting privacy norms. Of course, while privacy may still be a strong social norm, it may not be in the interest of the SNS provider to cater for its differentiated nature.

The real paradox is that these sites have become so embedded in users’ social lives that maintaining those lives requires disclosing information on them, despite the significant privacy risk of doing so, and despite the often inadequate controls for meeting users’ diverse and complex privacy needs.

Read the full paper: Blank, G., Bolsover, G., and Dubois, E. (2014) A New Privacy Paradox: Young people and privacy on social network sites. Prepared for the Annual Meeting of the American Sociological Association, 16-19 August 2014, San Francisco, California.

References

Barnes, S. B. (2006). A privacy paradox: Social networking in the United States. First Monday,11(9).

Marwick, A. E., Murgia-Diaz, D., & Palfrey, J. G. (2010). Youth, Privacy and Reputation (Literature Review). SSRN Scholarly Paper No. ID 1588163. Rochester, NY: Social Science Research Network.

Marwick, A. E., & boyd, D. (2011). I tweet honestly, I tweet passionately: Twitter users, context collapse, and the imagined audience. New Media & Society, 13(1), 114–133. doi:10.1177/1461444810365313


Grant Blank is a Survey Research Fellow at the OII. He is a sociologist who studies the social and cultural impact of the Internet and other new communication media.

]]>
Mapping collective public opinion in the Russian blogosphere https://ensr.oii.ox.ac.uk/mapping-collective-public-opinion-in-the-russian-blogosphere/ Mon, 10 Feb 2014 11:30:05 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2372
Widely reported as fraudulent, the 2011 Russian Parliamentary elections provoked mass street protest action by tens of thousands of people in Moscow and cities and towns across Russia. Image by Nikolai Vassiliev.

Blogs are becoming increasingly important for agenda setting and formation of collective public opinion on a wide range of issues. In countries like Russia where the Internet is not technically filtered, but where the traditional media is tightly controlled by the state, they may be particularly important. The Russian-language blogosphere counts about 85 million blogs – an amount far beyond the capacities of any government to control – and the Russian search engine Yandex, with its blog rating service, serves as an important reference point for Russia’s educated public in its search for authoritative and independent sources of information. The blogosphere is thereby able to function as a mass medium of “public opinion” and also to exercise influence.

One topic that was particularly salient over the period we studied concerned the Russian Parliamentary elections of December 2011. Widely reported as fraudulent, they provoked immediate and mass street protest action by tens of thousands of people in Moscow and cities and towns across Russia, as well as corresponding activity in the blogosphere. Protesters made effective use of the Internet to organize a movement that demanded cancellation of the parliamentary election results, and the holding of new and fair elections. These protests continued until the following summer, gaining widespread national and international attention.

Most of the political and social discussion blogged in Russia is hosted on the blog platform LiveJournal. Some of these bloggers can claim a certain amount of influence; the top thirty bloggers have over 20,000 “friends” each, representing a good circulation for the average Russian newspaper. Part of the blogosphere may thereby resemble the traditional media; the deeper into the long tail of average bloggers, however, the more it functions as pure public opinion. This “top list” effect may be particularly important in societies (like Russia’s) where popularity lists exert a visible influence on bloggers’ competitive behaviour and on public perceptions of their significance. Given the influence of these top bloggers, it may be claimed that, like the traditional media, they act as filters of issues to be thought about, and as definers of their relative importance and salience.

Gauging public opinion is of obvious interest to governments and politicians, and opinion polls are widely used to do this, but they have been consistently criticized for the imposition of agendas on respondents by pollsters, producing artefacts. Indeed, the public opinion literature has tended to regard opinion as something to be “extracted” by pollsters, which inevitably pre-structures the output. This literature doesn’t consider that public opinion might also exist in the form of natural language texts, such as blog posts, that have not been pre-structured by external observers.

There are two basic ways to detect topics in natural language texts: the first is manual coding of texts (ie by traditional content analysis), and the other involves rapidly developing techniques of automatic topic modeling or text clustering. The media studies literature has relied heavily on traditional content analysis; however, these studies are inevitably limited by the volume of data a person can physically process, given there may be hundreds of issues and opinions to track — LiveJournal’s 2.8 million blog accounts, for example, generate 90,000 posts daily.

For large text collections, therefore, only the second approach is feasible. In our article we explored how methods for topic modeling developed in computer science may be applied to social science questions – such as how to efficiently track public opinion on particular (and evolving) issues across entire populations. Specifically, we demonstrate how automated topic modeling can identify public agendas, their composition, structure, the relative salience of different topics, and their evolution over time without prior knowledge of the issues being discussed and written about. This automated “discovery” of issues in texts involves division of texts into topically — or more precisely, lexically — similar groups that can later be interpreted and labeled by researchers. Although this approach has limitations in tackling subtle meanings and links, experiments where automated results have been checked against human coding show over 90 percent accuracy.

The computer science literature is flooded with methodological papers on automatic analysis of big textual data. While these methods can’t entirely replace manual work with texts, they can help reduce it to the most meaningful and representative areas of the textual space they help to map, and are the only means to monitor agendas and attitudes across multiple sources, over long periods and at scale. They can also help solve problems of insufficient and biased sampling, when entire populations become available for analysis. Due to their novelty, as well as their mathematical and computational complexity, these approaches are rarely applied by social scientists, and to our knowledge, topic modeling has not previously been applied for the extraction of agendas from blogs in any social science research.

The natural extension of automated topic or issue extraction involves sentiment mining and analysis; as Gonzalez-Bailon, Kaltenbrunner, and Banches (2012) have pointed out, public opinion doesn’t just involve specific issues, but also encompasses the state of public emotion about these issues, including attitudes and preferences. This involves extracting opinions on the issues/agendas that are thought to be present in the texts, usually by dividing sentences into positive and negative. These techniques are based on human-coded dictionaries of emotive words, on algorithmic construction of sentiment dictionaries, or on machine learning techniques.
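A minimal sketch of the first, dictionary-based approach described above: score a sentence by counting hits against hand-coded lists of emotive words. The tiny lexicon here is invented; real sentiment dictionaries contain thousands of scored entries.

```python
# Invented toy lexicon for illustration only
POSITIVE = {"fair", "hope", "support", "good", "free"}
NEGATIVE = {"fraud", "protest", "corrupt", "bad", "arrest"}

def sentiment_score(text):
    """Net sentiment under a hand-coded lexicon: +1 per positive word,
    -1 per negative word, normalised by the number of emotive words
    found; returns 0.0 if the text contains no emotive words."""
    tokens = text.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / (pos + neg) if pos + neg else 0.0

print(sentiment_score("fraud and protest over a fair vote"))  # ≈ -0.33
```

Algorithmic dictionary construction and machine-learning classifiers replace the fixed word lists, but the output (a per-text polarity score) is of the same shape.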

Both topic modeling and sentiment analysis techniques are required to effectively monitor self-generated public opinion. When methods for tracking attitudes complement methods to build topic structures, a rich and powerful map of self-generated public opinion can be drawn. Of course this mapping can’t completely replace opinion polls; rather, it’s a new way of learning what people are thinking and talking about; a method that makes the vast amounts of user-generated content about society – such as the 65 million blogs that make up the Russian blogosphere — available for social and policy analysis.

Naturally, this approach to public opinion and attitudes is not free of limitations. First, the dataset is only representative of the self-selected population of those who have authored the texts, not of the whole population. Second, like regular polled public opinion, online public opinion only covers those attitudes that bloggers are willing to share in public. Furthermore, there is still a long way to go before the relevant instruments become mature, and this will demand the efforts of the whole research community: computer scientists and social scientists alike.

Read the full paper: Olessia Koltsova and Sergei Koltcov (2013) Mapping the public agenda with topic modeling: The case of the Russian livejournal. Policy and Internet 5 (2) 207–227.

Also read on this blog: Can text mining help handle the data deluge in public policy analysis? by Aude Bicquelet.

References

González-Bailón, S., A. Kaltenbrunner, and R.E. Banches. 2012. “Emotions, Public Opinion and U.S. Presidential Approval Rates: A 5 Year Analysis of Online Political Discussions,” Human Communication Research 38 (2): 121–43.

]]>
The physics of social science: using big data for real-time predictive modelling https://ensr.oii.ox.ac.uk/physics-of-social-science-using-big-data-for-real-time-predictive-modelling/ Thu, 21 Nov 2013 09:49:27 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2320 Ed: You are interested in analysis of big data to understand human dynamics; how much work is being done in terms of real-time predictive modelling using these data?

Taha: The socially generated transactional data that we call “big data” have been available only very recently; the amount of data we now produce about human activities in a year is comparable to the amount that used to be produced in decades (or centuries). And this is all due to recent advancements in ICTs. Despite the short period for which big data have been available, their use in different sectors including academia and business has been significant. However, in many cases, the use of big data is limited to monitoring and post hoc analysis of different patterns. Predictive models have rarely been used in combination with big data. Nevertheless, there are very interesting examples of using big data to make predictions about disease outbreaks, financial moves in the markets, social interactions based on human mobility patterns, election results, etc.

Ed: What were the advantages of using Wikipedia as a data source for your study — as opposed to Twitter, blogs, Facebook or traditional media, etc.?

Taha: Our results have shown that the predictive power of Wikipedia page view and edit data outperforms similar box-office prediction models based on Twitter data. This can partially be explained by considering the different nature of Wikipedia compared to social media sites. Wikipedia is now the number one source of online information, and Wikipedia article page view statistics show how much Internet users have been interested in knowing about a specific movie. And the edit counts — even more importantly — indicate the level of interest of the editors in sharing their knowledge about the movies with others. Both indicators are much stronger than what you could measure on Twitter, which is mainly the reaction of the users after watching or reading about the movie. The cost of participation in Wikipedia’s editorial process makes the activity data more revealing about the potential popularity of the movies.

Another advantage is the sheer availability of Wikipedia data. Twitter streams, by comparison, are limited in both size and time. Gathering Facebook data is also problematic, whereas all the Wikipedia editorial activities and page views are recorded in full detail — and made publicly available.

Ed: Could you briefly describe your method and model?

Taha: We retrieved two sets of data from Wikipedia, the editorial activity and the page views relating to our set of 312 movies. The former indicates the popularity of the movie among the Wikipedia editors and the latter among Wikipedia readers. We then defined different measures based on these two data streams (eg number of edits, number of unique editors, etc.). In the next step we combined these data into a linear model that assumes the more popular the movie is, the larger the size of these parameters. However, this model needs both training and calibration. We calibrated the model based on the IMDb data on the financial success of a set of ‘training’ movies. After calibration, we applied the model to a set of “test” movies and (luckily) saw that the model worked very well in predicting the financial success of the test movies.
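The paper's actual model combines several Wikipedia measures; as a hedged illustration of the calibrate-then-predict idea only, here is a one-variable ordinary least squares fit. The 'training' activity counts and grosses below are invented numbers, not the study's data.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical activity measure (page views, thousands) vs box-office
# takings ($M) for four 'training' movies
train_views = [10, 40, 80, 120]
train_gross = [5, 21, 39, 61]
a, b = fit_linear(train_views, train_gross)

# Apply the calibrated model to a 'test' movie with 60k page views
print(round(a * 60 + b, 1))  # 30.2
```

Calibration here just means fitting `a` and `b` on movies whose takings are already known; prediction is then a single evaluation of the line for each test movie.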

Ed: What were the most significant variables in terms of predictive power; and did you use any content or sentiment analysis?

Taha: The nice thing about this method is that you don’t need to perform any content or sentiment analysis. We deal only with volumes of activities and their evolution over time. The parameter that correlated best with financial success (and which was therefore the best predictor) was the number of page views. I can easily imagine that these days if someone wants to go to watch a movie, they most likely turn to the Internet and make a quick search. Thanks to Google, Wikipedia is going to be among the top results and it’s very likely that the click will go to the Wikipedia article about the movie. I think that’s why the page views correlate to the box office takings so significantly.

Ed: Presumably people are picking up on signals, ie Wikipedia is acting like an aggregator and normaliser of disparate environmental signals — what do you think these signals might be, in terms of box office success? ie is it ultimately driven by the studio media machine?

Taha: This is a very difficult question to answer. There are numerous factors that make a movie (or a product in general) popular. Studio marketing strategies definitely play an important role, but the quality of the movie, the collective mood of the public, herding effects, and many other hidden variables are involved as well. I hope our research serves as a first step in studying popularity in a quantitative framework, letting us answer such questions. To fully understand a system the first thing you need is a tool to monitor and observe it very well quantitatively. In this research we have shown that (for example) Wikipedia is a nice window and useful tool to observe and measure popularity and its dynamics; hopefully leading to a deep understanding of the underlying mechanisms as well.

Ed: Is there similar work / approaches to what you have done in this study?

Taha: There have been other projects using socially generated data to make predictions on the popularity of movies or movement in financial markets, however to the best of my knowledge, it’s been the first time that Wikipedia data have been used to feed the models. We were positively surprised when we observed that these data have stronger predictive power than previously examined datasets.

Ed: If you have essentially shown that ‘interest on Wikipedia’ tracks ‘real-world interest’ (ie box office receipts), can this be applied to other things? eg attention to legislation, political scandal, environmental issues, humanitarian issues: ie Wikipedia as “public opinion monitor”?

Taha: I think so. Now I’m running two other projects using a similar approach; one to predict election outcomes and the other one to do opinion mining about the new policies implemented by governing bodies. In the case of elections, we have observed very strong correlations between changes in the information-seeking rates of the general public and the number of ballots cast. And in the case of new policies, I think Wikipedia could be of great help in understanding the level of public interest in searching for accurate information about the policies, and how this interest is satisfied by the information provided online. And more interestingly, how this changes over time as the new policy is fully implemented.

Ed: Do you think there are / will be practical applications of using social media platforms for prediction, or is the data too variable?

Taha: Although the availability and popularity of social media are recent phenomena, I’m sure that social media data are already being used by different bodies for predictions in various areas. We have seen very nice examples of using these data to predict disease outbreaks or the arrival of earthquake waves. The future of this field is very promising, considering both the advancements in the methodologies and also the increase in popularity and use of social media worldwide.

Ed: How practical would it be to generate real-time processing of this data — rather than analysing databases post hoc?

Taha: Data collection and analysis could be done instantly. However the challenge would be the calibration. Human societies and social systems — similarly to most complex systems — are non-stationary. That means any statistical property of the system is subject to abrupt and dramatic changes. That makes it a bit challenging to use a stationary model to describe a continuously changing system. However, one could use a class of adaptive models or Bayesian models which could modify themselves as the system evolves and more data are available. All these could be done in real time, and that’s the exciting part of the method.
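One of the simplest adaptive estimators of the kind described here is an exponentially weighted moving average, which keeps tracking a signal whose statistics drift. This toy example is not from the paper; the regime-shift stream and the smoothing factor are invented to show the re-convergence behaviour:

```python
def ewma(stream, alpha=0.3):
    """Exponentially weighted moving average: an adaptive estimate that
    discounts old observations, so it keeps tracking a non-stationary
    signal instead of assuming fixed statistics."""
    estimate = None
    out = []
    for x in stream:
        estimate = x if estimate is None else alpha * x + (1 - alpha) * estimate
        out.append(estimate)
    return out

# A regime shift mid-stream: a fixed (stationary) mean would lag badly,
# while the adaptive estimate re-converges within a few steps
vals = ewma([1.0] * 5 + [10.0] * 5)
print([round(v, 2) for v in vals])
# [1.0, 1.0, 1.0, 1.0, 1.0, 3.7, 5.59, 6.91, 7.84, 8.49]
```

Bayesian filters generalise the same idea: the update step re-weights the model as each new observation arrives, which is what makes real-time operation possible.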

Ed: As a physicist, what are you learning in a social science department? And what does a physicist bring to social science and the study of human systems?

Taha: Looking at complicated phenomena in a simple way is the art of physics. As Einstein said, a physicist always tries to “make things as simple as possible, but not simpler”. And that works very well in describing natural phenomena, ranging from sub-atomic interactions all the way to cosmology. However, studying social systems with the tools of natural sciences can be very challenging, and sometimes too much simplification makes it very difficult to understand the real underlying mechanisms. Working with social scientists, I’m learning a lot about the importance of the individual attributes of (and variation between) the elements of the systems under study, outliers, self-awareness, ethical issues related to data, agency and self-adaptation, and many other details that are mostly overlooked when a physicist studies a social system.

At the same time, I try to contribute the methodological approaches and quantitative skills that physicists have gained during two centuries of studying complex systems. I think statistical physics is an amazing example where statistical techniques can be used to describe the macro-scale collective behaviour of billions and billions of atoms with a single formula. I should admit here that humans are way more complicated than atoms — but the dialogue between natural scientists and social scientists could eventually lead to multi-scale models which could help us to gain a quantitative understanding of social systems, thereby facilitating accurate predictions of social phenomena.

Ed: What database would you like access to, if you could access anything?

Taha: I daydream about the database of search queries from all the Internet users worldwide at the individual level. These data are being collected continuously by search engines and technically could be accessed, but due to privacy policy issues it’s impossible to get hold of them, even for research purposes. This is another difference between social systems and natural systems. An atom never gets upset being watched through a microscope all the time, but working on social systems and human-related data requires a lot of care with respect to privacy and ethics.

Read the full paper: Mestyán, M., Yasseri, T., and Kertész, J. (2013) Early Prediction of Movie Box Office Success based on Wikipedia Activity Big Data. PLoS ONE 8 (8) e71226.


Taha Yasseri was talking to blog editor David Sutcliffe.

Taha Yasseri is the Big Data Research Officer at the OII. Prior to coming to the OII, he spent two years as a Postdoctoral Researcher at the Budapest University of Technology and Economics, working on the socio-physical aspects of the community of Wikipedia editors, focusing on conflict and editorial wars, along with Big Data analysis to understand human dynamics, language complexity, and popularity spread. He has interests in analysis of Big Data to understand human dynamics, government-society interactions, mass collaboration, and opinion dynamics.

]]>
Verification of crowd-sourced information: is this ‘crowd wisdom’ or machine wisdom? https://ensr.oii.ox.ac.uk/verification-of-crowd-sourced-information-is-this-crowd-wisdom-or-machine-wisdom/ Tue, 19 Nov 2013 09:00:41 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1528
‘Code’ or ‘law’? Image from an Ushahidi development meetup by afropicmusing.

In ‘Code and Other Laws of Cyberspace’, Lawrence Lessig (2006) writes that computer code (or what he calls ‘West Coast code’) can have the same regulatory effect as the laws and legal code developed in Washington D.C., so-called ‘East Coast code’. Computer code impacts on a person’s behaviour by virtue of its essentially restrictive architecture: on some websites you must enter a password before you gain access, in other places you can enter unidentified. The problem with computer code, Lessig argues, is that it is invisible, and that it makes it easy to regulate people’s behaviour directly and often without recourse.

For example, fair use provisions in US copyright law enable certain uses of copyrighted works, such as copying for research or teaching purposes. However the architecture of many online publishing systems heavily regulates what one can do with an e-book: how many times it can be transferred to another device, how many times it can be printed, whether it can be moved to a different format – activities that have been unregulated until now, or that are enabled by the law but effectively ‘closed off’ by code. In this case code works to reshape behaviour, upsetting the balance between the rights of copyright holders and the rights of the public to access works to support values like education and innovation.

Working as an ethnographic researcher for Ushahidi, the non-profit technology company that makes tools for people to crowdsource crisis information, has made me acutely aware of the many ways in which ‘code’ can become ‘law’. During my time at Ushahidi, I studied the practices that people were using to verify reports by people affected by a variety of events – from earthquakes to elections, from floods to bomb blasts. I then compared these processes with those followed by Wikipedians when editing articles about breaking news events. In order to understand how to best design architecture to enable particular behaviour, it becomes important to understand how such behaviour actually occurs in practice.

In addition to the impact of code on the behaviour of users, norms, the market and laws also play a role. By interviewing both the users and designers of crowdsourcing tools I soon realized that ‘human’ verification, a process of checking whether a particular report meets a group’s truth standards, is an acutely social process. It involves negotiation between different narratives of what happened and why; identifying the sources of information and assessing their reputation among groups who are considered important users of that information; and identifying gatekeeping and fact checking processes where the source is a group or institution, amongst other factors.

One disjuncture between verification ‘practice’ and the architecture of the verification code developed by Ushahidi for users was that verification categories were set as a default feature, whereas some users of the platform wanted the verification process to be invisible to external users. Items would show up as being ‘unverified’ unless they had been explicitly marked as ‘verified’, thus confusing users about whether the item was unverified because the team hadn’t yet verified it, or whether it was unverified because it had been found to be inaccurate. Some user groups wanted to be able to turn off such features when they could not take responsibility for data verification. In the case of the Christchurch Recovery Map, set up by volunteers in the aftermath of the 2011 New Zealand earthquake, the government officials working with the team wanted to be able to turn off such features: they were concerned that they could not ensure that reports were indeed verified, and that having the category show up (as ‘unverified’ until ‘verified’) implied that they were engaged in some kind of verification process.

The existence of a default verification category impacted on the Christchurch Recovery Map group’s ability to gain support from multiple stakeholders, including the government, but this feature of the platform’s architecture did not have the same effect in other places and at other times. For other users like the original Ushahidi Kenya team who worked to collate instances of violence after the Kenyan elections in 2007/08, this detailed verification workflow was essential to counter the misinformation and rumour that dogged those events. As Ushahidi’s use cases have diversified – from reporting death and damage during natural disasters to political events including elections, civil war and revolutions – the architecture of Ushahidi’s code base has needed to expand. Ushahidi has recognised that code plays a defining role in the experience of verification practices, but also that code’s impact will not be the same at all times, and in all circumstances. This is why it invested in research about user diversity in a bid to understand the contexts in which code runs, and how these contexts result in a variety of different impacts.

A key question being asked in the design of future verification mechanisms is the extent to which verification work should be done by humans or non-humans (machines). Here, verification is not a binary categorisation, but rather there is a spectrum between human and non-human verification work, and indeed, projects like Ushahidi, Wikipedia and Galaxy Zoo have all developed different verification mechanisms. Wikipedia uses a set of policies and practices about how content should be added and reviewed, such as the use of ‘citation needed’ tags for information that sounds controversial and that should be backed up by a reliable source. Galaxy Zoo uses an algorithm to detect whether certain contributions are accurate by comparing them to the same work by other volunteers.
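Galaxy Zoo's actual pipeline is more elaborate than this, but the underlying machine-side idea (accept a classification once enough independent volunteers agree on it) can be sketched as a quorum check. The 75% quorum and the galaxy labels below are invented for illustration:

```python
from collections import Counter

def machine_verify(labels, quorum=0.75):
    """Accept a crowd classification when a sufficient share of
    independent volunteer labels agree; return None otherwise."""
    if not labels:
        return None
    label, count = Counter(labels).most_common(1)[0]
    return label if count / len(labels) >= quorum else None

print(machine_verify(["spiral"] * 9 + ["elliptical"]))          # 'spiral'
print(machine_verify(["spiral", "elliptical"] * 2))             # None
```

Items that fail the quorum are exactly the ones that get escalated to human judgement, which is where a project positions itself on the human/non-human spectrum.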

Ushahidi leaves it up to individual deployers of their tools and platform to make decisions about verification policies and practices, and is going to be designing new defaults to accommodate this variety of use. In parallel, Veri.ly, a project by Patrick Meier (formerly of Ushahidi) with the organisations Masdar and QCRI, is responding to the large amounts of unverified and often contradictory information that appears on social media following natural disasters by enabling social media users to collectively evaluate the credibility of rapidly crowdsourced evidence. The project was inspired by MIT’s winning entry to DARPA’s ‘Red Balloon Challenge’, which was intended to highlight social networking’s potential to solve widely distributed, time-sensitive problems, in this case by correctly identifying the GPS coordinates of 10 balloons suspended at fixed, undisclosed locations across the US. The winning MIT team crowdsourced the problem by using a monetary incentive structure, promising $2,000 to the first person who submitted the correct coordinates for a single balloon, $1,000 to the person who invited that person to the challenge, $500 to the person who invited the inviter, and so on. The system quickly took root, spawning geographically broad, dense branches of connections. After eight hours and 52 minutes, the MIT team identified the correct coordinates for all 10 balloons.
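The incentive chain above is easy to reason about quantitatively: each level of the invitation tree receives half the level below it, so MIT's total liability per balloon is bounded by twice the top prize ($4,000) no matter how long the chain grows. A small sketch (the $1 cut-off is my assumption, not part of the original scheme):

```python
def balloon_payouts(top_prize=2000.0, depth=None):
    """Payout chain in MIT's recursive incentive scheme: the finder
    receives `top_prize`, their inviter half that, and so on up the
    invitation chain. Geometric halving bounds the total cost per
    balloon at 2 * top_prize."""
    payouts, prize = [], top_prize
    while prize >= 1 and (depth is None or len(payouts) < depth):
        payouts.append(prize)
        prize /= 2
    return payouts

chain = balloon_payouts(depth=4)
print(chain, sum(chain))  # [2000.0, 1000.0, 500.0, 250.0] 3750.0
```

The geometric bound is the design's key property: rewarding recruiters makes the search tree spread fast while keeping the cost per balloon capped.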

Veri.ly aims to apply MIT’s approach to the process of rapidly collecting and evaluating critical evidence during disasters: “Instead of looking for weather balloons across an entire country in less than 9 hours, we hope Veri.ly will facilitate the crowdsourced collection of multimedia evidence for individual disasters in under 9 minutes.” It is still unclear how (or whether) Verily will be able to reproduce the same incentive structure, but a bigger question lies around the scale and spread of social media in the majority of countries where humanitarian assistance is needed. The majority of Ushahidi or Crowdmap installations are, for example, still “small data” projects, with many focused on areas that still require offline verification procedures (such as calling volunteers or paid staff who are stationed across a country, as was the case in Sudan [3]). In these cases – where the social media presence may be insignificant — a team’s ability to achieve a strong local presence will define the quality of verification practices, and consequently the level of trust accorded to their project.

If code is law and if other aspects in addition to code determine how we can act in the world, it is important to understand the context in which code is deployed. Verification is a practice that determines how we can trust information coming from a variety of sources. Only by illuminating such practices and the variety of impacts that code can have in different environments can we begin to understand how code regulates our actions in crowdsourcing environments.

For more on Ushahidi verification practices and the management of sources on Wikipedia during breaking news events, see:

[1] Ford, H. (2012) Wikipedia Sources: Managing Sources in Rapidly Evolving Global News Articles on the English Wikipedia. SSRN Electronic Journal. doi:10.2139/ssrn.2127204

[2] Ford, H. (2012) Crowd Wisdom. Index on Censorship 41(4), 33–39. doi:10.1177/0306422012465800

[3] Ford, H. (2011) Verifying information from the crowd. Ushahidi.


Heather Ford has worked as a researcher, activist, journalist, educator and strategist in the fields of online collaboration, intellectual property reform, information privacy and open source software in South Africa, the United Kingdom and the United States. She is currently a DPhil student at the OII, where she is studying how Wikipedia editors write history as it happens in a format that is unprecedented in the history of encyclopedias. Before this, she worked as an ethnographer for Ushahidi. Read Heather’s blog.

For more on the Christchurch Earthquake, and the role of digital humanities in preserving the digital record of its impact see: Preserving the digital record of major natural disasters: the CEISMIC Canterbury Earthquakes Digital Archive project on this blog.

]]>
Exploring variation in parental concerns about online safety issues https://ensr.oii.ox.ac.uk/exploring-variation-parental-concerns-about-online-safety-issues/ Thu, 14 Nov 2013 08:29:42 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1208 Ed: You’ve spent a great deal of time studying the way that children and young people use the Internet, much of which focuses on the positive experiences that result. Why do you think this is so under-represented in public debate?

boyd / Hargittai: The public has many myths about young people’s use of technology. This is often perpetuated by media coverage that focuses on the extremes. Salacious negative headlines often capture people’s attention, even if the practices or incidents described are outliers and do not represent the majority’s experiences. While focusing on extremely negative and horrific incidents is a great way to attract attention and get readers, it does a disservice to young people, their parents, and ultimately society as a whole.

As researchers, we believe that it’s important to understand the nuances of what people experience when they engage with technology. Thus, we are interested in gaining a better understanding of their everyday practices — both the good and the bad. Our goal is to introduce research that can help contextualize socio-technical practices and provide insight into the diversity of viewpoints and perspectives that shape young people’s use of technology.

Ed: Your paper suggests we need a more granular understanding of how parental concerns relating to the Internet can vary across different groups. Why is this important? What are the main policy implications of this research?

boyd / Hargittai: Parents are often seen as the target of policy interventions. Many lawmakers imagine that they’re designing laws to help empower parents, but when you ask them to explain which parents they are empowering, it becomes clear that there’s an imagined parent that is not always representative of the diverse views and perspectives of all parents. We’re not opposed to laws that enable parents to protect their children, but we’re concerned whenever a class of people, especially a class as large as “parents,” is viewed as homogenous.

Parents have different and often conflicting views about what’s best for their children or for children writ large. This creates a significant challenge for designing interventions that are meant to be beneficial and applicable to a diverse group of people. What’s beneficial or desirable to one may not be positively received by another. More importantly, what’s helpful to one group of parents may not actually benefit parents or youth as a whole. As a result, we think it’s important to start interrogating assumptions that underpin technology policy interventions so that policymakers have a better understanding of how their decisions affect whom they’re hoping to reach.

Ed: What did your study reveal, and in particular, where do you see the greatest differences in attitudes arising? Did it reveal anything unexpected?

boyd / Hargittai: The most significant take-away from our research is that there are significant demographic differences in concerns about young people. Some of the differences are not particularly surprising. For example, parents of children who have been exposed to pornography or violent content, or who have bullied or been bullied, have greater concern that this will happen to their child. Yet, other factors may be more surprising. For example, we found significant racial and ethnic differences in how parents approach these topics. Black, Hispanic, and Asian parents are much more concerned about at least some of the online safety measures than Whites, even when controlling for socioeconomic factors and previous experiences.

While differences in cultural experiences may help explain some of these findings, our results raise serious questions as to the underlying processes and reasons for these discrepancies. Are these parents more concerned because they have a higher level of distrust for technology? Because they feel as though there are fewer societal protections for their children? Because they feel less empowered as parents? We don’t know. Still, our findings challenge policy-makers to think about the diversity of perspectives their law-making should address. And when they enact laws, they should be attentive to how those interventions are received. Just because parents of colour are more concerned does not mean that an intervention intended to empower them will do so. Like many other research projects, this study results in as many — if not more — questions than it answers.

Ed: Are parents worrying about the right things? For example, you point out that ‘stranger danger’ registers the highest level of concern from most parents, yet this is a relatively rare occurrence. Bullying is much more common, yet not such a source of concern. Do we need to do more to educate parents about risks, opportunities and coping?

boyd / Hargittai: Parental fear is a contested issue among scholars and for good reason. In many ways, it’s a philosophical issue. Should parents worry more about frequent but low-consequence issues? Or should they concern themselves more with the possibility of rare but devastating incidents? How much fear is too much fear? Fear is an understandable response to danger, but left unchecked, it can become an irrational response to perceived but unlikely risks. Fear can prevent injury, but too much fear can result in a form of protectionism that itself can be harmful. Most parents want to protect their children from harm but few think about the consequences of smothering their children in their efforts to keep them safe. All too often, in erring on the side of caution, we escalate a societal tendency to become overprotective, limiting our children’s opportunities to explore, learn, be creative and mature. Finding the right balance is very tricky.

People tend to fear things that they don’t understand. New technologies are often terrifying because they are foreign. And so parents are reasonably concerned when they see their children using tools that confound them. One of the best antidotes to fear is knowledge. Although this is outside of the scope of this paper, we strongly recommend that parents take the time to learn about the tools that their children are using, ideally by discussing them with their children. The more that parents can understand the technological choices and decisions made by their children, the more that parents can help them navigate the risks and challenges that they do face, online and off.

Ed: On the whole, it seems that parents whose children have had negative experiences online are more likely to say they are concerned, which seems highly appropriate. But we also have evidence from other studies that many parents are unaware of such experiences, and also that children who are more vulnerable offline, may be more vulnerable online too. Is there anything in your research to suggest that certain groups of parents aren’t worrying enough?

boyd / Hargittai: As researchers, we regularly use different methodologies and different analytical angles to get at various research questions. Each approach has its strengths and weaknesses, insights and blind spots. In this project, we surveyed parents, which allows us to get at their perspective, but it limits our ability to understand what they do not know or will not admit. Over the course of our careers, we’ve also surveyed and interviewed numerous youth and young adults, parents and other adults who’ve worked with youth. In particular, danah has spent a lot of time working with at-risk youth who are especially vulnerable. Unfortunately, what she’s learned in the process — and what numerous survey studies have shown — is that those who are facing some of the most negative experiences do not necessarily have positive home life experiences. Many youth face parents who are absent, addicts, or abusive; these are the youth who are most likely to be physically, psychologically, or socially harmed, online and offline.

In this study, we took parents at face value, assuming that parents are good actors with positive intentions. It is important to recognise, however, that this cannot be taken for granted. As with all studies, our findings are limited because of the methodological approach we took. We have no way of knowing whether or not these parents are paying attention, let alone whether or not their relationship to their children is unhealthy.

Although the issues of abuse and neglect are outside of the scope of this particular paper, these have significant policy implications. Empowering well-intended parents is generally a good thing, but empowering abusive parents can create unintended consequences for youth. This is an area where much more research is needed because it’s important to understand when and how empowering parents can actually put youth at risk in different ways.

Ed: What gaps remain in our understanding of parental attitudes towards online risks?

boyd / Hargittai: As noted above, our paper assumes well-intentioned parenting on behalf of caretakers. A study could explore online attitudes in the context of more information about people’s general parenting practices. Regarding our findings about attitudinal differences by race and ethnicity, much remains to be done. While existing literature alludes to some reasons as to why we might observe these variations, it would be helpful to see additional research aiming to uncover the sources of these discrepancies. It would be fruitful to gain a better understanding of what influences parental attitudes about children’s use of technology in the first place. What role do mainstream media, parents’ own experiences with technology, their personal networks, and other factors play in this process?

Another line of inquiry could explore how parental concerns influence rules aimed at children about technology uses and how such rules affect youth adoption and use of digital media. The latter is a question that Eszter is addressing in a forthcoming paper with Sabrina Connell, although that study does not include data on parental attitudes, only rules. Including details about parental concerns in future studies would allow more nuanced investigation of the above questions. Finally, much is needed to understand the impact that policy interventions in this space have on parents, youth, and communities. Even the most well-intentioned policy may inadvertently cause harm. It is important that all policy interventions are monitored and assessed as to both their efficacy and secondary effects.


Read the full paper: boyd, d., and Hargittai, E. (2013) Connected and Concerned: Exploring Variation in Parental Concerns About Online Safety Issues. Policy and Internet 5 (3).

danah boyd and Eszter Hargittai were talking to blog editor David Sutcliffe.

]]>
Can Twitter provide an early warning function for the next pandemic? https://ensr.oii.ox.ac.uk/can-twitter-provide-an-early-warning-function-for-the-next-flu-pandemic/ Mon, 14 Oct 2013 08:00:41 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1241
Communication of risk in any public health emergency is a complex task for healthcare agencies; a task made more challenging when citizens are bombarded with online information. Mexico City, 2009. Image by Eneas.


Ed: Could you briefly outline your study?

Patty: We investigated the role of Twitter during the 2009 swine flu pandemic from two perspectives. Firstly, we demonstrated the ability of the social network to detect an upcoming spike in an epidemic before the official surveillance systems – up to a week earlier in the UK and up to 2-3 weeks earlier in the US – by investigating users who “self-diagnosed” by posting tweets such as “I have flu / swine flu”. Secondly, we illustrated how online resources reporting the WHO declaration of a “pandemic” on 11 June 2009 were propagated through Twitter during the 24 hours after the official announcement [1,2,3].

Ed: Disease control agencies already routinely follow media sources; are public health agencies  aware of social media as another valuable source of information?

Patty:  Social media are providing an invaluable real-time data signal complementing well-established epidemic intelligence (EI) systems monitoring online media, such as MedISys and GPHIN. While traditional surveillance systems will remain the pillars of public health, online media monitoring has added an important early-warning function, with social media bringing  additional benefits to epidemic intelligence: virtually real-time information available in the public domain that is contributed by users themselves, thus not relying on the editorial policies of media agencies.

Public health agencies (such as the European Centre for Disease Prevention and Control) are interested in social media early warning systems, but more research is required to develop robust social media monitoring solutions that are ready to be integrated with agencies’ EI services.

Ed: How difficult is this data to process? Eg: is this a full sample, processed in real-time?

Patty: No, obtaining all Twitter search query results is not possible. In our 2009 pilot study we accessed data from Twitter through the search API, querying the database every minute (with the number of results limited to 100 tweets). Currently, only 1% of the ‘Firehose’ (the massive real-time stream of all public tweets) is made available through the streaming API. Searches have to be performed in real-time, as historical Twitter data are normally available only through paid services. Twitter analytics methods are diverse; in our study, we used frequency calculations, developed algorithms for geo-location and automatic spam and duplication detection, and applied time series analysis and cross-correlation with surveillance data [1,2,3].
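As a rough illustration of the cross-correlation step, the sketch below finds the lag at which a weekly tweet-count series best matches an official surveillance series. The toy numbers and function names are invented for illustration; they are not the study's data or code:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def best_lead(tweet_counts, case_counts, max_lag=4):
    """Lag (in weeks) at which tweet volume best correlates with
    surveillance counts; a positive lag means tweets lead."""
    return max(range(max_lag + 1),
               key=lambda lag: pearson(tweet_counts[:len(tweet_counts) - lag],
                                       case_counts[lag:]))

# Toy series: tweet volume peaks one week before reported cases.
tweets = [2, 5, 12, 30, 22, 10, 4]
cases = [1, 2, 5, 12, 30, 22, 10]
```

Here `best_lead(tweets, cases)` returns 1, reflecting the one-week lead the tweet signal has over the surveillance curve in the toy data.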

Ed: What’s the relationship between traditional and social media in terms of diffusion of health information? Do you have a sense that one may be driving the other?

Patty: This is a fundamental question: does media coverage of a certain topic cause buzz on social media, or does social media discussion cause a media frenzy? This was particularly important to investigate for the 2009 swine flu pandemic, which attracted unprecedented media interest. While it could be assumed that disease cases preceded media coverage, or that media discussion sparked public interest causing Twitter debate, neither proved to be the case in our experiment. On some days media coverage of flu was higher, and on others Twitter discussion was higher; but the peaks seemed synchronized, happening on the same days.

Ed: In terms of communicating accurate information, does the Internet make the job easier or more difficult for health authorities?

Patty: The communication of risk in any public health emergency is a complex task for government and healthcare agencies; this task is made more challenging when citizens are bombarded with online information from a variety of sources that vary in accuracy. This has become even more challenging with the increase in users accessing health-related information on their mobile phones (17% in 2010 and 31% in 2012, according to the US Pew Internet study).

Our findings from analyzing Twitter reaction to online media coverage of the WHO declaration of swine flu as a “pandemic” (stage 6) on 11 June 2009, which unquestionably was the most media-covered event during the 2009 epidemic, indicated that Twitter does favour reputable sources (such as the BBC, which was by far the most popular) but also that bogus information can still leak into the network.

Ed: What differences do you see between traditional and social media, in terms of eg bias / error rate of public health-related information?

Patty: Fully understanding the quality of media coverage of health topics such as the 2009 swine flu pandemic in terms of bias and medical accuracy would require a qualitative study (for example, one conducted by Duncan in the EU [4]). However, the main role of social media, and in particular Twitter with its 140-character limit, is to disseminate media coverage by propagating links rather than to create primary health information about a particular event. In our study, around 65% of the tweets analysed contained a link.

Ed: Google flu trends (which monitors user search terms to estimate worldwide flu activity) has been around a couple of years: where is that going? And how useful is it?

Patty: Search companies such as Google have demonstrated that online search queries for keywords relating to flu and its symptoms can serve as a proxy for the number of individuals who are sick (Google Flu Trends). However, in 2013 the system “drastically overestimated peak flu levels”, as reported by Nature. Most importantly, unlike Twitter, Google search queries remain proprietary and are therefore not useful for research or the construction of non-commercial applications.

Ed: What are implications of social media monitoring for countries that may want to suppress information about potential pandemics?

Patty: Event-based surveillance and the monitoring of social media for epidemic intelligence are particularly important in countries with sub-optimal surveillance systems and those lacking the capacity for outbreak preparedness and response. User-generated information on social media is also particularly important in countries with limited freedom of the press, or those that actively try to suppress information about potential outbreaks.

Ed: Would it be possible with this data to follow spread geographically, ie from point sources, or is population movement too complex to allow this sort of modelling?

Patty: Spatio-temporal modelling is technically possible, as tweets are time-stamped and there is support for geo-tagging. However, the location of all tweets can’t be precisely identified; early warning systems will improve in accuracy as geo-tagging of user-generated content becomes widespread. Mathematical modelling of the spread of diseases and of population movements are very topical research challenges (undertaken, for example, by Colizza et al. [5]), but modelling social media user behaviour during health emergencies to provide a robust baseline for early disease detection remains a challenge.

Ed: A strength of monitoring social media is that it follows what people do already (eg search / Tweet / update statuses). Are there any mobile / SNS apps to support collection of epidemic health data? eg a sort of ‘how are you feeling now’ app?

Patty: The strength of early warning systems using social media is exactly in the ability to piggy-back on existing users’ behaviour rather than having to recruit participants. However, there are a growing number of participatory surveillance systems that ask users to provide their symptoms (web-based systems such as Flusurvey in the UK, and “Flu Near You” in the US, which also exists as a mobile app). While interest in self-reporting systems is growing, challenges include their reliability, user recruitment and long-term retention, and integration with public health services; these remain open research questions for the future. There is also potential for public health services to use social media in both directions: providing information over the networks rather than only collecting user-generated content. Social media could be used to provide evidence-based advice and personalized health information directly to affected citizens where and when they need it, thus effectively engaging them in the active management of their health.

References

[1.] M Szomszor, P Kostkova, C St Louis: Twitter Informatics: Tracking and Understanding Public Reaction during the 2009 Swine Flu Pandemics, IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology 2011, WI-IAT, Vol. 1, pp.320-323.

[2.]  Szomszor, M., Kostkova, P., de Quincey, E. (2010). #swineflu: Twitter Predicts Swine Flu Outbreak in 2009. M Szomszor, P Kostkova (Eds.): ehealth 2010, Springer Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering LNICST 69, pages 18-26, 2011.

[3.] Ed de Quincey, Patty Kostkova Early Warning and Outbreak Detection Using Social Networking Websites: the Potential of Twitter, P Kostkova (Ed.): ehealth 2009, Springer Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering LNICST 27, pages 21-24, 2010.

[4.] B Duncan. How the Media Reported the First Day of the Pandemic (H1N1) 2009: Results of EU-wide Media Analysis. Eurosurveillance, Vol 14, Issue 30, July 2009

[5.] Colizza V, Barrat A, Barthelemy M, Valleron AJ, Vespignani A (2007) Modeling the worldwide spread of pandemic influenza: Baseline case and containment interventions. PLoS Med 4(1): e13. doi:10.1371/journal.pmed.0040013

Further information on this project and related activities, can be found at: BMJ-funded scientific film: http://www.youtube.com/watch?v=_JNogEk-pnM ; Can Twitter predict disease outbreaks? http://www.bmj.com/content/344/bmj.e2353 ; 1st International Workshop on Public Health in the Digital Age: Social Media, Crowdsourcing and Participatory Systems (PHDA 2013): http://www.digitalhealth.ws/ ; Social networks and big data meet public health @ WWW 2013: http://www2013.org/2013/04/25/social-networks-and-big-data-meet-public-health/


Patty Kostkova was talking to blog editor David Sutcliffe.

Dr Patty Kostkova is a Principal Research Associate in eHealth at the Department of Computer Science, University College London (UCL) and held a Research Scientist post at the ISI Foundation in Italy. Until 2012, she was the Head of the City eHealth Research Centre (CeRC) at City University, London, a thriving multidisciplinary research centre with expertise in computer science, information science and public health. In recent years, she was appointed a consultant at WHO responsible for the design and development of information systems for international surveillance.

Researchers who were instrumental in this project include Ed de Quincey, Martin Szomszor and Connie St Louis.

]]>
Predicting elections on Twitter: a different way of thinking about the data https://ensr.oii.ox.ac.uk/predicting-elections-on-twitter-a-different-way-of-thinking-about-the-data/ Sun, 04 Aug 2013 11:43:52 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1498
GOP presidential nominee Mitt Romney, centre, waving to crowd, after delivering his acceptance speech on the final night of the 2012 Republican National Convention. Image by NewsHour.

Recently, there has been a lot of interest in the potential of social media as a means to understand public opinion. Driven by an interest in the potential of so-called “big data”, this development has been fuelled by a number of trends. Governments have been keen to create techniques for what they term “horizon scanning”, which broadly means searching for the indications of emerging crises (such as runs on banks or emerging natural disasters) online, and reacting before the problem really develops. Governments around the world are already committing massive resources to developing these techniques. In the private sector, big companies’ interest in brand management has fitted neatly with the potential of social media monitoring. A number of specialised consultancies now claim to be able to monitor and quantify reactions to products, interactions or bad publicity in real time.

It should therefore come as little surprise that, like other research methods before, these new techniques are now crossing over into the competitive political space. Social media monitoring, which in theory can extract information from tweets and Facebook posts and quantify positive and negative public reactions to people, policies and events, has an obvious utility for politicians seeking office. Broadly, the process works like this: vast datasets relating to an election, often running into millions of items, are gathered from social media sites such as Twitter. These data are then analysed using natural language processing software, which automatically identifies qualities relating to candidates or policies and attributes a positive or negative sentiment to each item. Finally, these sentiments and other properties mined from the text are totalised, to produce an overall figure for public reaction on social media.
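In outline, the totalisation step works like the sketch below. This is purely illustrative: real monitoring systems use trained natural language processing models rather than keyword lists, and the lexicon, tweets, and candidate names here are all made up:

```python
# Illustrative keyword lexicon; real systems use trained NLP models.
POSITIVE = {"great", "win", "strong", "love"}
NEGATIVE = {"weak", "lose", "scandal", "fail"}

def score(tweet):
    """Crude per-tweet sentiment: positive minus negative keywords."""
    words = set(tweet.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def net_sentiment(tweets, candidate):
    """Totalise sentiment across every tweet mentioning the candidate."""
    return sum(score(t) for t in tweets if candidate.lower() in t.lower())

tweets = [
    "Great debate performance by Candidate A, strong answers",
    "Candidate A looked weak on the economy",
    "Candidate B will win this, love the energy",
]
```

With these invented tweets, `net_sentiment(tweets, "Candidate A")` nets the two positive keywords against the one negative, giving an overall score of 1.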

These techniques have already been employed by the mainstream media to report on the 2010 British general election (when the country had its first leaders debate, an event ripe for this kind of research) and also in the 2012 US presidential election. This growing prominence led my co-author Mike Jensen of the University of Canberra and myself to question: exactly how useful are these techniques for predicting election results? In order to answer this question, we carried out a study on the Republican nomination contest in 2012, focused on the Iowa Caucus and Super Tuesday. Our findings are published in the current issue of Policy and Internet.

There are definite merits to this endeavour. US candidate selection contests are notoriously hard to predict with traditional public opinion measurement methods. This is because of the unusual and unpredictable make-up of the electorate. Voters are likely (to greater or lesser degrees depending on circumstances in a particular contest and election laws in the state concerned) to share a broadly similar outlook, so the electorate is harder for pollsters to model. Turnout can also vary greatly from one cycle to the next, adding an additional layer of unpredictability to the proceedings.

However, as any professional opinion pollster will quickly tell you, there is a big problem with trying to predict elections using social media. The people who use it are simply not like the rest of the population. In the case of the US, research from Pew suggests that only 16 per cent of internet users use Twitter, and while that figure goes up to 27 per cent of those aged 18-29, only 2 per cent of over 65s use the site. The proportion of the electorate voting within those categories, however, is the inverse: over 65s vote at a relatively high rate compared to the 18-29 cohort. Furthermore, given that we know (from research such as Matthew Hindman’s The Myth of Digital Democracy) that only a very small proportion of people online actually create content on politics, those who are commenting on elections become an even more unusual subset of the population.

Thus (and I can say this as someone who does use social media to talk about politics!) we are looking at an unrepresentative sub-set (those interested in politics) of an unrepresentative sub-set (those using social media) of the population. This is hardly a good omen for election prediction, which relies on modelling the voting population as closely as possible. As such, it seems foolish to suggest that a simple accumulation of individual preferences can be equated with voting intentions.

However, in our article we suggest a different way of thinking about social media data, more akin to James Surowiecki’s idea of The Wisdom of Crowds. The idea here is that citizens commenting on social media should not be treated like voters, but rather as commentators, seeking to understand and predict emerging political dynamics. As such, the method we operationalized was more akin to an electoral prediction market, such as the Iowa Electronic Markets, than a traditional opinion poll.

We looked for two things in our dataset: sudden changes in the number of mentions of a particular candidate, and words that indicated momentum for a particular candidate, such as “surge”. Our ultimate finding was that the latter turned out to be a strong predictor. We found that the former measure had a good relationship with Rick Santorum’s sudden surge in the Iowa caucus, although it also tended to disproportionately emphasise many of the less successful candidates, such as Michele Bachmann. The latter method, on the other hand, picked up the Santorum surge without generating false positives, a finding certainly worth further investigation.
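The first measure, detecting sudden changes in mention counts, can be sketched as a simple comparison against a recent baseline. The window and threshold values below are illustrative, not those used in the paper:

```python
def detect_surge(daily_mentions, window=3, threshold=2.0):
    """Flag days where mentions reach at least `threshold` times
    the mean of the preceding `window` days."""
    surges = []
    for i in range(window, len(daily_mentions)):
        baseline = sum(daily_mentions[i - window:i]) / window
        if baseline > 0 and daily_mentions[i] >= threshold * baseline:
            surges.append(i)  # record the index of the surge day
    return surges

# Toy series: a candidate's mention count jumps on day 5.
mentions = [40, 45, 42, 44, 43, 180, 150]
```

On the toy series, `detect_surge(mentions)` flags only day 5, where mentions jump to roughly four times the trailing baseline; day 6 stays high but is not a fresh jump relative to its own window.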

Our aim in the paper was to present new ways of thinking about election prediction through social media, going beyond the paradigm established by the dominance of opinion polling. Our results indicate that there may be some value in this approach.


Read the full paper: Michael J. Jensen and Nick Anstead (2013) Psephological investigations: Tweets, votes, and unknown unknowns in the Republican nomination process. Policy and Internet 5 (2) 161–182.

Dr Nick Anstead was appointed as a Lecturer in the LSE’s Department of Media and Communication in September 2010, with a focus on Political Communication. His research focuses on the relationship between existing political institutions and new media, covering such topics as the impact of the Internet on politics and government (especially e-campaigning), electoral competition and political campaigns, the history and future development of political parties, and political mobilisation and encouraging participation in civil society.

Dr Michael Jensen is a Research Fellow at the ANZSOG Institute for Governance (ANZSIG), University of Canberra. His research spans the subdisciplines of political communication, social movements, political participation, and political campaigning and elections. In the last few years, he has worked particularly with the analysis of social media data and other digital artefacts, contributing to the emerging field of computational social science.

]]>
Investigating the structure and connectivity of online global protest networks https://ensr.oii.ox.ac.uk/investigating-the-structure-and-connectivity-of-online-global-protest-networks/ Mon, 10 Jun 2013 12:04:26 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1275 How have online technologies reconfigured collective action? It is often assumed that the rise of social networking tools, accompanied by the mass adoption of mobile devices, has strengthened the impact and broadened the reach of today’s political protests. Enabling massive self-communication allows protesters to write their own interpretation of events – free from a mass media often seen as adversarial – and emerging protests may also benefit from the cheaper, faster transmission of information and more effective mobilization made possible by online tools such as Twitter.

The new networks of political protest, which harness these new online technologies, are often described in theoretical terms as being ‘fluid’ and ‘horizontal’, in contrast to the rigid and hierarchical structure of earlier protest organisation. Yet such theoretical assumptions have seldom been tested empirically. This new language of networks may be useful as a shorthand to describe protest dynamics, but does it accurately reflect how protest networks mediate communication and coordinate support?

The global protests against austerity and inequality which took place on May 12, 2012 provide an interesting case study to test the structure and strength of a transnational online protest movement. The ‘indignados’ movement emerged as a response to the Spanish government’s politics of austerity in the aftermath of the global financial crisis. The movement flared in May 2011, when hundreds of thousands of protesters marched in Spanish cities, and many set up camps ahead of municipal elections a week later.

These protests contributed to the emergence of the worldwide Occupy movement. After the original plan to occupy New York City’s financial district mobilised thousands of protesters in September 2011, the movement spread to other cities in the US and worldwide, including London and Frankfurt, before winding down as the camp sites were dismantled weeks later. Interest in these movements was revived, however, as the first anniversary of the ‘indignados’ protests approached in May 2012.

To test whether the fluidity, horizontality and connectivity often claimed for online protest networks hold true in reality, tweets referencing these protest movements during May 2012 were collected. These tweets were then classified as relating either to the ‘indignados’ or Occupy movement, using hashtags as a proxy for content. Many tweets, however, contained hashtags relevant to both movements, creating bridges across the two streams of information. The users behind those bridges acted as information ‘brokers’, and are fundamentally important to the global connectivity of the two movements: they joined the two streams of information and their audiences on Twitter. Once all the tweets were classified by content and author, it emerged that around 6.5% of all users had posted at least one message relevant to both movements, using hashtags from the two sides jointly.
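The classification step described above can be sketched in a few lines of Python. Everything here is illustrative — the hashtag lists, usernames, and data structure are invented for the example, not taken from the study's actual data or code:

```python
# Hypothetical hashtag sets for the two movements (not the study's actual lists).
INDIGNADOS_TAGS = {"#12m", "#15m", "#spanishrevolution"}
OCCUPY_TAGS = {"#ows", "#occupy", "#occupywallstreet"}

def classify(hashtags):
    """Return the set of movements a tweet's hashtags point to."""
    movements = set()
    if hashtags & INDIGNADOS_TAGS:
        movements.add("indignados")
    if hashtags & OCCUPY_TAGS:
        movements.add("occupy")
    return movements

def find_brokers(tweets):
    """Users with at least one tweet whose hashtags span both movements."""
    return {user for user, tags in tweets if len(classify(tags)) == 2}

# Made-up (user, hashtags) pairs standing in for the collected tweets.
tweets = [
    ("ana", {"#15m", "#ows"}),   # bridges both streams in a single tweet
    ("ben", {"#occupy"}),
    ("carla", {"#12m"}),
]
print(find_brokers(tweets))  # {'ana'}
```

On real data the same logic would run over millions of tweets, with the broker share (here 1 of 3 users; around 6.5% in the study) falling out of the final set size.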

Analysis of the Twitter data shows that this small minority of ‘brokers’ plays an important role in connecting users who would otherwise belong to disconnected parts of the network. Brokers are significantly more active in contributing messages and more visible in the stream of information, being retweeted and mentioned more often than other users. They also help to hold the global network together and improve its connectivity: in a simulation, the removal of brokers fragmented the network faster than the removal of random users at the same rate.
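The removal simulation can be illustrated with a toy network — not the study's data or code — of two dense clusters (one per movement) joined only by a pair of broker nodes. Deleting the brokers splits the network in two, while deleting the same number of random members barely dents it:

```python
import random
from collections import deque

def largest_component(adj, removed):
    """Size of the largest connected component after removing a node set."""
    nodes = set(adj) - removed
    best, seen = 0, set()
    for start in nodes:
        if start in seen:
            continue
        size, queue = 0, deque([start])
        seen.add(start)
        while queue:  # breadth-first search over surviving nodes
            n = queue.popleft()
            size += 1
            for m in adj[n]:
                if m in nodes and m not in seen:
                    seen.add(m)
                    queue.append(m)
        best = max(best, size)
    return best

# Two 5-node cliques joined only via broker nodes 10 and 11.
adj = {i: set() for i in range(12)}
def link(a, b):
    adj[a].add(b); adj[b].add(a)
for cluster in (range(0, 5), range(5, 10)):
    for a in cluster:
        for b in cluster:
            if a < b:
                link(a, b)
link(0, 10); link(10, 5)   # broker 10 bridges the clusters
link(1, 11); link(11, 6)   # broker 11 bridges the clusters

print(largest_component(adj, removed=set()))     # 12: fully connected
print(largest_component(adj, removed={10, 11}))  # 5: brokers gone, network splits
random.seed(0)
print(largest_component(adj, removed=set(random.sample(range(10), 2))))
```

Removing two random cluster members typically leaves a single component of ten nodes, whereas removing the two brokers halves the largest component — the fragility the article describes.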

What does this tell us about global networks of protest? Firstly, it is clear that global networks are more vulnerable and fragile than is often assumed. Only a small percentage of users disseminate information across transnational divides, and if any of these users cease to perform this role, they are difficult to replace immediately, thus limiting the assumed fluidity of such networks. The decentralised nature of online networks, with no central authority imposing order or even suggesting a common strategy, makes the role of ‘brokers’ all the more vital to the survival of networks which cross national borders.

Secondly, the central role performed by brokers suggests that global networks of online protest lack the ‘horizontal’ structure that is often described in the literature. Talking about horizontal structures can be useful as shorthand to refer to decentralised organisations, but not to analyse the process by which these organisations materialise in communication networks. The distribution of users in those networks reveals a strong hierarchy in terms of connections and the ability to communicate effectively.

Future research into online networks, then, should keep in mind that the language of protest networks in the digital age, particularly terms like horizontality and fluidity, does not necessarily stand up to empirical scrutiny. The study of contentious politics in the digital age should be evaluated, first and foremost, through the lens of what protesters actually reveal through their actions.


Read the paper: Sandra Gonzalez-Bailon and Ning Wang (2013) The Bridges and Brokers of Global Campaigns in the Context of Social Media.

]]>
Crowdsourcing translation during crisis situations: are ‘real voices’ being excluded from the decisions and policies it supports? https://ensr.oii.ox.ac.uk/crowdsourcing-translation-during-crisis-situations-are-real-voices-being-excluded-from-the-decisions-and-policies-it-supports/ Tue, 07 May 2013 08:58:47 +0000 http://blogs.oii.ox.ac.uk/policy/?p=957 As revolution spread across North Africa and the Middle East in 2011, participants and observers of the events were keen to engage via social media. However, saturation by Arabic-language content demanded a new translation strategy for those outside the region to follow the information flows — and for those inside to reach beyond their domestic audience. Crowdsourcing was seen as the most efficient strategy in terms of cost and time to meet the demand, and translation applications that harnessed volunteers across the internet were integrated with nearly every type of ICT project. For example, as Steve Stottlemyre has already mentioned on this blog, translation played a part in tools like the Libya Crisis Map, and was essential for harnessing tweets from the region’s ‘voices on the ground.’

If you have ever worried about media bias, then you should really worry about the impact of translation. Before the revolutions, translation software for Egyptian Arabic was almost non-existent. Few translation applications could handle the different Arabic dialects, and few developers could supply the coding labour and capital to build something that could contend with internet blackouts. Google’s Speak to Tweet became the dominant application used in the Egyptian uprisings, delivering one homogenised source of information that fed the other sources. In 2011, this collaboration helped circumvent the problem of Internet connectivity in Egypt by allowing cellphone users to call their tweet into a voicemail to be transcribed and translated. A crowd of volunteers working via Twitter enhanced the translation of Egyptian Arabic after the tweets were first transcribed by a Mechanical Turk application trained on an initial 10 hours of speech.

The unintended consequence of these crowdsourcing applications was that when the material crossed the language barrier into English, it often became inaccessible to the original contributors. Individuals on the ground essentially ceded authorship to crowds of untrained volunteer translators who stripped the information of context, and then plotted it in categories and on maps without feedback from original sources. Controlling the application meant controlling the information flow, the lens through which the revolutions were conveyed to the outside world.

This flawed system prevented the original sources (e.g. in Libya) from interacting with the information that directly related to their own life-threatening situation, while the information became an unsound basis for decision-making by international actors. As Stottlemyre describes, ceding authorship was sometimes an intentional strategy, but also one imposed by the nature of the language/power imbalance and the failure of the translation applications and the associated projects to incorporate feedback loops or more two-way communication.

The after-action report for the Libya Crisis Map project, commissioned by UN OCHA, offers some insight into the disenfranchisement of sources from the decision-making process once they had provided information for the end product: the crisis map. In the final ‘best practices’ section reviewing the outcomes, the Standby Task Force, which created the map, described decision-makers and sources, but did not consider or mention the sources’ access to decision-making, to the map, or to any mechanism by which they could feed back into the decision-making chain. In essence, Libyans were not seen as part of the user group of the product they had helped create.

How exactly do translation and crowdsourcing shape our understanding of complex developing crises, or influence subsequent policy decisions? The SMS polling initiative launched by Al Jazeera English in collaboration with Ushahidi, a prominent crowdsourcing platform, illustrates the most common process of visualising crisis information: translation, categorisation, and mapping. In December 2011, Al Jazeera launched Somalia Speaks, with the aim of giving a voice to the people of Somalia and sharing a picture of how violence was impacting everyday lives. The two organisations have since repeated the project in Mali, to share opinions about the military intervention in the north. While Al Jazeera is a news organisation, not a research institute or a government actor, it plays an important role in informing electorates who can put political pressure on governments involved in the conflict. Furthermore, this same type of technology is being used on the ground to gather information in crisis situations at the governmental and UN levels.

A call for translators in the diaspora, particularly Somali student groups, was issued online, and phones were distributed on the ground throughout Somalia so multiple users could participate. The volunteers translated the SMSs and categorized the content as either political, social, or economic. The results were color-coded and aggregated on a map.


The stated goal of the project was to give a voice to the Somali people, but the Somalis who participated had no say in how their voices were categorized or depicted on the map. The SMS poll asked an open question:

How has the Somalia conflict affected your life?

In one response example:

The Bosaso Market fire has affected me. It happened on Saturday.

The response was categorized as ‘social.’ But why didn’t the fact that the violence happened in a market, an economic centre, warrant an ‘economic’ categorization? There was no guidance for maintaining consistency among the translators, nor any indication of how the information would be used later. It was these categories chosen by the translators, represented as bright colorful circles on the map, which were speaking to the world, not the Somalis — whose voices had been lost in a crowdsourcing application designed around a language barrier. The primary sources could not suggest another category that better suited the intentions of their responses, nor did they understand the role categories would play in representing and visualizing their responses to the English-language audience.

Somalia Crisis Map

An 8 December 2011 comment on the Ushahidi blog described in compelling terms how language and control over information flow impact the power balance during a conflict:

A—-, My friend received the message from you on his phone. The question says “tell us how is conflict affecting your life” and “include your name of location”. You did not tell him that his name will be told to the world. People in Somalia understand that sms is between just two people. Many people do not even understand the internet. The warlords have money and many contacts. They understand the internet. They will look at this and they will look at who is complaining. Can you protect them? I think this project is not for the people of Somalia. It is for the media like Al Jazeera and Ushahidi. You are not from here. You are not helping. It is better that you stay out.

Ushahidi director Patrick Meier, responded to the comment:

Patrick: Dear A—-, I completely share your concern and already mentioned this exact issue to Al Jazeera a few hours ago. I’m sure they’ll fix the issue as soon as they get my message. Note that the question that was sent out does *not* request people to share their names, only the name of their general location. Al Jazeera is careful to map the general location and *not* the exact location. Finally, Al Jazeera has full editorial control over this project, not Ushahidi.

As of 14 January 2012, there were still names featured on the Al Jazeera English website.

The danger is that these categories — economic, political, social — become the framework for aid donations and policy endeavors; the application frames the discussion rather than the words of the Somalis. The simplistic categories become the entry point for policy-makers and citizens alike to understand and become involved with translated material. But decisions and policies developed from the translated information are less connected to ‘real voices’ than we would like to believe.

Developing technologies so that Somalis or Libyans — or any group sharing information via translation — can themselves direct the flow of information about the future of their country should be the goal, rather than perpetuating their simplification into clients or victims waiting to be given a voice.

]]>
Why do (some) political protest mobilisations succeed? https://ensr.oii.ox.ac.uk/why-do-some-political-protest-mobilisations-succeed/ Fri, 19 Apr 2013 13:40:55 +0000 http://blogs.oii.ox.ac.uk/policy/?p=909 The communication technologies once used by rebels and protesters to gain global visibility now look burdensome and dated: much separates the once-futuristic-looking image of Subcomandante Marcos posing in the Chiapas jungle draped in electronic gear (1994) from the uprisings of the 2011 Egyptian revolution. While the only practical platform for amplifying a message was once provided by organisations, the rise of the Internet means that cross-national networks are now reachable by individuals—who are able to bypass organisations, ditch membership dues, and embrace self-organization. As social media and mobile applications increasingly blur the distinction between public and private, ordinary citizens are becoming crucial nodes in the contemporary protest network.

The personal networks that are the main channels of information flow in sites such as Facebook, Twitter and LinkedIn mean that we don’t need to actively seek out particular information; it can be served to us with no more effort than that of maintaining a connection with our contacts. News, opinions, and calls for justice are now shared and forwarded by our friends—and their friends—in a constant churn of information, all attached to familiar names and faces. Given we are more likely to pass on information if the source belongs to our social circle, this has had an important impact on the information environment within which protest movements are initiated and develop.

Mobile connectivity is also important for understanding contemporary protest, given that the ubiquitous streams of synchronous information we access anywhere are shortening our reaction times. This is important, as the evolution of mass recruitments—whether they result in flash mobilisations, slow burns, or simply damp squibs—can only be properly understood if we have a handle on the distribution of reaction times within a population. The increasing integration of the mainstream media into our personal networks is also important, given that online networks (and independent platforms like Indymedia) are not the clear-cut alternative to corporate media they once were. We can now write on the walls or feeds of mainstream media outlets, creating two-way communication channels and public discussion.

Online petitions have also transformed political protest; lower information diffusion costs mean that support (and signatures) can be scaled up much faster. These petitions provide a mine of information for researchers interested in what makes protests succeed or fail. The study of cascading behaviour in online networks suggests that most chain reactions fail quickly, and most petitions don’t gather that much attention anyway. While large cascades tend to start at the core of networks, network centrality is not always a guarantor of success.

So what does a successful cascade look like? Work by Duncan Watts has shown that the vast majority of cascades are small and simple, terminating within one degree of an initial adopting ‘seed.’ Research has also shown that adoptions resulting from chains of referrals are extremely rare; even for the largest cascades observed, the bulk of adoptions often took place within one degree of a few dominant individuals. Conversely, research on the spreading dynamics of a petition organised in opposition to the 2002-2003 Iraq war showed a narrow but very deep tree-like distribution, progressing through many steps and complex paths. The depth and narrowness of the observed diffusion tree meant that it was fragile — and easily broken at any of the levels required for further distribution. Chain reactions are only successful with the right alignment of factors, and this becomes more likely as more attempts are launched. The rise of social media means that there are now more attempts.
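The cascade shapes contrasted above can be quantified by the size and depth of the referral tree. A minimal sketch, using made-up referral data rather than any of the studies' actual datasets: each adopter records who referred them, and the tree's depth distinguishes shallow broadcast cascades from deep, chain-like ones such as the anti-war petition.

```python
def cascade_stats(parents, root):
    """parents maps each adopter to its referrer; root is the initial seed."""
    children = {}
    for child, parent in parents.items():
        children.setdefault(parent, []).append(child)

    def depth(node):
        # Depth of the subtree rooted at node (a leaf has depth 1).
        kids = children.get(node, [])
        return 1 + max((depth(k) for k in kids), default=0)

    return {"size": 1 + len(parents), "depth": depth(root)}

# Shallow broadcast: everyone adopts directly from the seed.
broadcast = {u: "seed" for u in ["a", "b", "c", "d", "e"]}
# Narrow, deep chain: one long referral path, fragile at every link.
chain = {"a": "seed", "b": "a", "c": "b", "d": "c", "e": "d"}

print(cascade_stats(broadcast, "seed"))  # {'size': 6, 'depth': 2}
print(cascade_stats(chain, "seed"))      # {'size': 6, 'depth': 6}
```

Both cascades reach the same six people, but removing any single intermediate adopter from the chain cuts off everyone below them, while the broadcast cascade loses only that one node.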

One consequence of these—very recent—developments is the blurring of the public and the private. A significant portion of political information shared online travels through networks that are not necessarily political, but that can be activated for political purposes as circumstances arise. Online protest networks are decentralised structures that pull together local sources of information and create efficient channels for a potentially global diffusion, but they replicate the recruitment dynamics that operated in social networks prior to the emergence of the Internet.

The wave of protests seen in 2011—including the Arab Spring, the Spanish Indignados, and the Global Occupy Campaign—reflects this global interdependence of localised, personal networks, with protest movements emerging spontaneously from the individual actions of many thousands (or millions) of networked users. Political protest movements are seldom stable and fixed organisational structures, and online networks are inherently suited to channeling this fluid commitment and identity. However, systematic research to uncover the bridges and precise network mechanisms that facilitate cross-border diffusion is still lacking. Decentralized networks facilitate mobilisations of unprecedented reach and speed—but are actually not very good at maintaining momentum, or creating particularly stable structures. For this, traditional organisations are still relevant, even while they struggle to maintain a critical mass.

The general failure of traditional organisations to harness the power of these personal networks results from their complex structure, which complicates any attempts at prediction, planning, and engineering. Mobilization paths are difficult to predict because they depend on the right alignment of conditions on different levels—from the local information contexts of individuals who initiate or sustain diffusion chains, to the global assembly of separate diffusion branches. The networked chain reactions that result as people jump onto bandwagons follow complex paths; furthermore, the cumulative effects of these individual actions within the network are not linear, due to feedback mechanisms that can cause sudden changes and flips in mobilisation dynamics, such as exponential growth.

Of course, protest movements are not created by social media technologies; they provide just one mechanism by which a movement can emerge, given the right social, economic, and historical circumstances. We therefore need to focus less on the specific technologies and more on how they are used if we are to explain why most mobilisations fail, but some succeed. Technology is just a part of the story—and today’s Twitter accounts will soon look as dated as the electronic gizmos used by the Zapatistas in the Chiapas jungle.

]]>
Online collective action and policy change: new special issue from Policy and Internet https://ensr.oii.ox.ac.uk/online-collective-action-and-policy-change-new-special-issue-from-policy-and-internet/ Mon, 18 Mar 2013 14:22:51 +0000 http://blogs.oii.ox.ac.uk/policy/?p=854 The Internet has multiplied the platforms available to influence public opinion and policy making. It has also provided citizens with a greater capacity for coordination and mobilisation, which can strengthen their voice and representation in the policy agenda. As waves of protest sweep both authoritarian regimes and liberal democracies, this rapidly developing field calls for more detailed enquiry. However, research exploring the relationship between online mobilisation and policy change is still limited. This special issue of ‘Policy and Internet’ addresses this gap through a variety of perspectives. Contributions to this issue view the Internet both as a tool that allows citizens to influence policy making, and as an object of new policies and regulations, such as data retention, privacy, and copyright laws, around which citizens are mobilising. Together, these articles offer a comprehensive empirical account of the interface between online collective action and policy making.

Within this framework, the first article in this issue, “Networked Collective Action and the Institutionalized Policy Debate: Bringing Cyberactivism to the Policy Arena?” by Stefania Milan and Arne Hintz (2013), looks at the Internet as both a tool of collective action and an object of policy. The authors provide a comprehensive overview of how computer-mediated communication creates not only new forms of organisational structure for collective action, but also new contentious policy fields. By focusing on what the authors define as ‘techie activists,’ Milan and Hintz explore how new grassroots actors participate in policy debates around the governance of the Internet at different levels. This article provides empirical evidence for what Kriesi (1995) defines as “windows of opportunity” for collective action to contribute to the policy debate around this new space of contentious politics. Milan and Hintz demonstrate how this has happened from the first World Summit on the Information Society (WSIS) in 2003 to more recent debates about Internet regulation.

Yana Breindl and François Briatte’s (2013) article “Digital Protest Skills and Online Activism Against Copyright Reform in France and the European Union” complements Milan and Hintz’s analysis by looking at how the regulation of copyright issues opens up new spaces of contentious politics. The authors compare how online and offline initiatives and campaigns in France around the “Droit d’Auteur et les Droits Voisins dans la Société de l’Information” (DADVSI) and “Haute Autorité pour la diffusion des œuvres et la protection des droits sur Internet” (HADOPI) laws, and in Europe around the Telecoms Package Reform, have contributed to the deliberations within the EU Parliament. They thus add to the rich debate on the contentious issues of intellectual property rights, demonstrating how collective action contributes to this debate at the European level.

The remaining articles in this special issue focus more on the online tactics and strategies of collective actors and the opportunities opened by the Internet for them to influence policy makers. In her article, “Activism and The Online Mediation Opportunity Structure: Attempts to Impact Global Climate Change Policies?” Julie Uldam (2013) discusses the tactics used by London-based environmental activists to influence policy making during the 17th UN climate conference (COP17) in 2011. Based on ethnographic research, Uldam traces the relationship between online modes of action and problem identification and demands. She also discusses the differences between radical and reformist activists in both their preferences for online action and their attitudes towards policy makers. Drawing on Cammaerts’ (2012) framework of the mediation opportunity structure, Uldam shows that radical activists preferred online tactics that aimed at disrupting the conference, since they viewed COP17 as representative of an unjust system. However, their lack of technical skills and resources prevented them from disrupting the conference in the virtual realm. Reformist activists, on the other hand, considered COP17 as a legitimate adversary, and attempted to influence its politics mainly through the diffusion of alternative information online.

The article by Ariadne Vromen and William Coleman (2013) “Online Campaigning Organizations and Storytelling Strategies: GetUp! Australia,” also investigates a climate change campaign but shifts the focus to the new ‘hybrid’ collective actors, who use the Internet extensively for campaigning. Based on a case study of GetUp!, Vromen and Coleman examine the storytelling strategies employed by the organisation in two separate campaigns, one around climate change, the other around mental health. The authors investigate the factors that led one campaign to be successful and the other to have limited resonance. They also skilfully highlight the difficulties encountered by new collective actors to gain legitimacy and influence policy making. In this respect, GetUp! used storytelling to set itself apart from traditional party-based politics and to emphasise its identity as an organiser and representative of grassroots communities, rather than as an insider lobbyist or disruptive protestor.

Romain Badouard and Laurence Monnoyer-Smith (2013), in their article “Hyperlinks as Political Resources: The European Commission Confronted with Online Activism,” explore some of the more structured ways in which citizens use online tools to engage with policy makers. They investigate the political opportunities offered by the e-participation and e-government platforms of the European Commission for activists wishing to make their voice heard in the European policy making sphere. They focus particularly on strategic uses of web technical resources and hyperlinks, which allows citizens to refine their proposals and thus increase their influence on European policy.

Finally, Jo Bates’ (2013) article “The Domestication of Open Government Data Advocacy in the UK: A Neo-Gramscian Analysis” provides a pertinent framework that facilitates our understanding of the policy challenges posed by the issue of open data. The digitisation of data offers new opportunities for increasing transparency, traditionally considered a fundamental public good. By focusing on the Open Government Data initiative in the UK, Bates explores the policy challenges generated by increasing transparency via new Internet platforms, applying the established theoretical instruments of Gramscian ‘Trasformismo.’ This article frames the open data debate in terms consistent with the literature on collective action, and provides empirical evidence as to how citizens have taken an active role in the debate on this issue, thereby challenging the policy debate on public transparency.

Taken together, these articles advance our understanding of the interface between online collective action and policy making. They introduce innovative theoretical frameworks and provide empirical evidence around the new forms of collective action, tactics, and contentious politics linked with the emergence of the Internet. If, as Melucci (1996) argues, contemporary social movements are sensors of new challenges within current societies, they can be an enriching resource for the policy debate arena. Gaining a better understanding of how the Internet might strengthen this process is a valuable line of enquiry.

Read the full article at: Calderaro, A. and Kavada A., (2013) “Challenges and Opportunities of Online Collective Action for Policy Change“, Policy and Internet 5(1).

Twitter: @AnastasiaKavada / @andreacalderaro
Web: Anastasia’s Personal Page / Andrea’s Personal Page

References

Badouard, R., and Monnoyer-Smith, L. 2013. Hyperlinks as Political Resources: The European Commission Confronted with Online Activism. Policy and Internet 5(1).

Bates, J. 2013. The Domestication of Open Government Data Advocacy in the UK: A Neo-Gramscian Analysis. Policy and Internet 5(1).

Breindl, Y., and Briatte, F. 2013. Digital Protest Skills and Online Activism Against Copyright Reform in France and the European Union. Policy and Internet 5(1).

Cammaerts, Bart. 2012. “Protest Logics and the Mediation Opportunity Structure.” European Journal of Communication 27(2): 117–134.

Kriesi, Hanspeter. 1995. “The Political Opportunity Structure of New Social Movements: Its Impact on Their Mobilization.” In The Politics of Social Protest, eds. J. Jenkins and B. Klandermans. London: UCL Press, pp. 167–198.

Melucci, Alberto. 1996. Challenging Codes: Collective Action in the Information Age. Cambridge: Cambridge University Press.

Milan, S., and Hintz, A. 2013. Networked Collective Action and the Institutionalized Policy Debate: Bringing Cyberactivism to the Policy Arena? Policy and Internet 5(1).

Uldam, J. 2013. Activism and the Online Mediation Opportunity Structure: Attempts to Impact Global Climate Change Policies? Policy and Internet 5(1).

Vromen, A., and Coleman, W. 2013. Online Campaigning Organizations and Storytelling Strategies: GetUp! in Australia. Policy and Internet 5(1).

]]>
Did Libyan crisis mapping create usable military intelligence? https://ensr.oii.ox.ac.uk/did-libyan-crisis-mapping-create-usable-military-intelligence/ Thu, 14 Mar 2013 10:45:22 +0000 http://blogs.oii.ox.ac.uk/policy/?p=817 The Middle East has recently witnessed a series of popular uprisings against autocratic rulers. In mid-January 2011, Tunisian President Zine El Abidine Ben Ali fled his country, and just four weeks later, protesters overthrew the regime of Egyptian President Hosni Mubarak. Yemen’s government was also overthrown in 2011, and Morocco, Jordan, and Oman saw significant governmental reforms leading, if only modestly, toward the implementation of additional civil liberties.

Protesters in Libya called for their own ‘day of rage’ on February 17, 2011, marked by violent protests in several major cities, including the capital, Tripoli. As they transformed from ‘protesters’ into ‘opposition forces’, they began pushing information onto Twitter, Facebook, and YouTube, reporting their firsthand experiences of what had turned into a civil war virtually overnight. The evolving humanitarian crisis prompted the United Nations to request the creation of the Libya Crisis Map, which was made public on March 6, 2011. Other, more focused crisis maps followed, and were widely distributed on Twitter.

While the map was initially populated with humanitarian information pulled from the media and online social networks, as the imposition of an internationally enforced No Fly Zone (NFZ) over Libya became imminent, information of an apparently tactical military nature began to appear on it. While many people continued to contribute conventional humanitarian information to the map, the sudden shift toward information that could aid international military intervention was unmistakable.

How useful was this information, though? Agencies in the U.S. Intelligence Community convert raw data into usable information (incorporated into finished intelligence) by utilising some form of the Intelligence Process. As outlined in the U.S. military’s joint intelligence manual, this consists of six interrelated steps, all centered on a specific mission. It is interesting that many Twitter users, though perhaps unaware of the intelligence process, replicated each step during the Libyan civil war, producing finished intelligence adequate for consumption by NATO commanders and rebel leadership.

It was clear from the beginning of the Libyan civil war that very few people knew exactly what was happening on the ground. Even NATO, according to one of the organization’s spokesmen, lacked the ground-level informants necessary to get a full picture of the situation in Libya. There is no public information about the extent to which military commanders used information from crisis maps during the Libyan civil war. According to one NATO official, “Any military campaign relies on something that we call ‘fused information’. So we will take information from every source we can… We’ll get information from open source on the internet, we’ll get Twitter, you name any source of media and our fusion centre will deliver all of that into useable intelligence.”

The data in these crisis maps came from a variety of sources, including journalists, official press releases, and civilians on the ground who updated blogs and/or maintained telephone contact. The @feb17voices Twitter feed (translated into English and used to support the creation of The Guardian’s and the UN’s Libya Crisis Map) included accounts of live phone calls from people on the ground in areas where the Internet was blocked, and where there was little or no media coverage. Twitter users began compiling data and information; they tweeted and retweeted data they collected, information they filtered and processed, and their own requests for specific data and clarifications.

Information from various Twitter feeds was then published in detailed maps of major events that contained information pertinent to military and humanitarian operations. For example, as fighting intensified, @LibyaMap’s updates began to provide a general picture of the battlefield, including specific, sourced intelligence about the progress of fighting, humanitarian and supply needs, and the success of some NATO missions. Although it did not explicitly state its purpose as spreading mission-relevant intelligence, the nature of the information rendered alternative motivations highly unlikely.

Interestingly, the Twitter users featured in a June 2011 article by the Guardian had already explicitly expressed their intention of affecting military outcomes in Libya by providing NATO forces with specific geographical coordinates to target Qadhafi regime forces. We could speculate at this point about the extent to which the Intelligence Community might have guided Twitter users to participate in the intelligence process; while NATO and the Libyan Opposition issued no explicit intelligence requirements to the public, both tweeted stories about social network users trying to help NATO, likely leading their online supporters to draw their own conclusions.

It appears from similar maps created during the ongoing uprisings in Syria that the creation of finished intelligence products by crisis mappers may become a regular occurrence. Future study should focus on determining the motivations of mappers for collecting, processing, and distributing intelligence, particularly as a better understanding of their motivations could inform research on the ethics of crisis mapping. It is reasonable to believe that some (or possibly many) crisis mappers would be averse to their efforts being used by military commanders to target “enemy” forces and infrastructure.

Indeed, some are already questioning the direction of crisis mapping in the absence of professional oversight (Global Brief 2011): “[If] crisis mappers do not develop a set of best practices and shared ethical standards, they will not only lose the trust of the populations that they seek to serve and the policymakers that they seek to influence, but (…) they could unwittingly increase the number of civilians being hurt, arrested or even killed without knowing that they are in fact doing so.”


Read the full paper: Stottlemyre, S., and Stottlemyre, S. (2012) Crisis Mapping Intelligence Information During the Libyan Civil War: An Exploratory Case Study. Policy and Internet 4 (3-4).

]]>
Papers on Policy, Activism, Government and Representation: New Issue of Policy and Internet https://ensr.oii.ox.ac.uk/issue-34/ Wed, 16 Jan 2013 21:40:43 +0000 http://blogs.oii.ox.ac.uk/policy/?p=667 We are pleased to present the combined third and fourth issue of Volume 4 of Policy and Internet. It contains eleven articles, each of which investigates the relationship between Internet-based applications and data and the policy process. The papers have been grouped into the broad themes of policy, government, representation, and activism.

POLICY: In December 2011, the European Parliament Directive on Combating the Sexual Abuse and Sexual Exploitation of Children and Child Pornography was adopted. The directive’s much-debated Article 25 requires Member States to ensure the prompt removal of child pornography websites hosted in their territory and to endeavor to obtain the removal of such websites hosted outside their territory. Member States are also given the option to block access to such websites for users within their territory. Both policy choices have been highly controversial; Karel Demeyer, Eva Lievens, and Jos Dumortier analyse the technical and legal means of blocking and removing illegal child sexual content from the Internet, clarifying the advantages and drawbacks of the various policy options.

Another issue of jurisdiction surrounds government use of cloud services. While cloud services promise to render government service delivery more effective and efficient, they are also potentially stateless, triggering government concern over data sovereignty. Kristina Irion explores these issues, tracing the evolution of individual national strategies and international policy on data sovereignty. She concludes that data sovereignty presents national governments with a legal risk that can’t be addressed through technology or contractual arrangements alone, and recommends that governments retain sovereignty over their information.

While the Internet allows unprecedented freedom of expression, it also facilitates anonymity and facelessness, increasing the possibility of damage caused by harmful online behavior, including online bullying. Myoung-Jin Lee, Yu Jung Choi, and Setbyol Choi investigate the discourse surrounding the introduction of the Korean Government’s “Verification of Identity” policy, which aimed to foster a more responsible Internet culture by mandating registration of a user’s real identity before allowing them to post to online message boards. The authors find that although arguments about restrictions on freedom of expression continue, the policy has maintained public support in Korea.

A different theoretical approach to another controversial topic is offered by Sameer Hinduja, who applies Actor-Network Theory (ANT) to the phenomenon of music piracy, arguing that we should pay attention not only to the social aspects, but also to the technical, economic, political, organizational, and contextual aspects of piracy. He argues that each of these components merits attention and response by law enforcers if progress is to be made in understanding and responding to digital piracy.

GOVERNMENT: While many governments have been lauded for their success in the online delivery of services, fewer have been successful in employing the Internet for more democratic purposes. Tamara A. Small asks whether the Canadian government — with its well-established e-government strategy — fits the pattern of service delivery oriented (rather than democracy oriented) e-government. Based on a content analysis of Government of Canada tweets, she finds that they do indeed tend to focus on service delivery, and shows how nominal a commitment the Canadian government has made to the more interactive and conversational qualities of Twitter.

While political scientists have greatly benefitted from the increasing availability of online legislative data, data collections and search capabilities are not comprehensive, nor are they comparable across the different U.S. states. David L. Leal, Taofang Huang, Byung-Jae Lee, and Jill Strube review the availability and limitations of state online legislative resources in facilitating political research. They discuss levels of capacity and access, track changes over time, and suggest that their usability index could be used as an independent variable by researchers seeking to measure the transparency of state legislatures.

REPRESENTATION: An ongoing theme in the study of elected representatives is how they present themselves to their constituents in order to enhance their re-election prospects. Royce Koop and Alex Marland compare presentation of self by Canadian Members of Parliament on parliamentary websites and in the older medium of parliamentary newsletters. They find that MPs are likely to present themselves as outsiders on their websites, that this differs from patterns observed in newsletters, and that party affiliation plays an important role in shaping self-presentation online.

Many strategic, structural and individual factors can explain the use of online campaigning in elections; based on candidate surveys, Julia Metag and Frank Marcinkowski show that strategic and structural variables, such as party membership or the perceived share of undecided voters, do most to explain online campaigning. Internet-related perceptions are explanatory in a few cases: if candidates think that other candidates campaign online, they feel obliged to use online media during the election campaign.

ACTIVISM: Mainstream opinion at the time of the protests of the “Arab Spring” – and the earlier Iranian “Twitter Revolution” – was that use of social media would significantly affect the outcome of revolutionary collective action. Throughout the Libyan Civil War, Twitter users took the initiative to collect and process data for use in the rebellion against the Qadhafi regime, including map overlays depicting the situation on the ground. In an exploratory case study on crisis mapping of intelligence information, Steve Stottlemyre and Sonia Stottlemyre investigate whether the information collected and disseminated by Twitter users during the Libyan civil war met the minimum requirements to be considered tactical military intelligence.

Philipp S. Mueller and Sophie van Huellen focus on the 2009 post-election protests in Teheran in their analysis of the effect of many-to-many media on power structures in society. They offer two analytical approaches as possible ways to frame the complex interplay of media and revolutionary politics. While social media raised international awareness by transforming the agenda-setting process of the Western mass media, the authors conclude that, given the inability of protesters to overthrow the regime, a change in the “media-scape” does not automatically imply a changed “power-scape.”

A different theoretical approach is offered by Mark K. McBeth, Elizabeth A. Shanahan, Molly C. Arrandale Anderson, and Barbara Rose, who look at how interest groups increasingly turn to new media such as YouTube as tools for indirect lobbying, allowing them to enter into and have influence on public policy debates through wide dissemination of their policy preferences. They explore the use of policy narratives in new media, using a Narrative Policy Framework to analyze YouTube videos posted by the Buffalo Field Campaign, an environmental activist group.

]]>
UK teenagers without the Internet are ‘educationally disadvantaged’ https://ensr.oii.ox.ac.uk/uk-teenagers-without-the-internet-are-educationally-disadvantaged/ Sat, 22 Dec 2012 12:23:36 +0000 http://blogs.oii.ox.ac.uk/policy/?p=707 A major in-depth study examining how teenagers in the UK are using the internet and mobile devices says the benefits of using such technologies far outweigh any perceived risks. The findings are based on a large-scale study of more than 1,000 randomly selected households in the UK, coupled with regular face-to-face interviews with more than 200 teenagers and their families between 2008 and 2011.

While the study reflects a high level of parental anxiety about the potential of social networking sites to distract their offspring, and shows that some parents despair at their children’s tendency to multitask on mobile devices, the research by Oxford University’s Department of Education and Oxford Internet Institute concludes that there are substantial educational advantages in teenagers being able to access the internet at home.

Teenagers who do not have access to the internet in their home have a strong sense of being ‘educationally disadvantaged’, warns the study. At the time of the study, the researchers estimated that around 10 per cent of the teenagers were without online connectivity at home, with most of this group living in poorer households. While recent figures from the Office for National Statistics suggest this dropped to five per cent in 2012, the researchers say that still leaves around 300,000 children without internet access in their homes.

The researchers’ interviews with teenagers reveal that they felt shut out of their peer group socially and also disadvantaged in their studies, as so much of the college or school work set for them to do at home required online research or preparation. One teenager, whose parents had separated, explained that he would ring his father, who had internet access, and ask him to send the materials he needed through the post.

Researcher Dr Rebecca Eynon commented: ‘While it’s difficult to state a precise figure for teenagers without access to the internet at home, the fact remains that in the UK, there is something like 300,000 young people who do not – and that’s a significant number. Behind the statistics, our qualitative research shows that these disconnected young people are clearly missing out both educationally and socially.’

In an interview with a researcher, one 14-year old boy said: ‘We get coursework now in Year 9 to see what groups we’re going to go in Year 10. And people with internet, they can get higher marks because they can like research on the internet … my friends are probably on it [MSN] all the day every day. And like they talk about it in school, what happened on MSN.’

Another teenager, aged 15, commented: ‘It was bell gone and I have a lot of things that I could write and I was angry that I haven’t got a computer because I might finish it at home when I’ve got lots of time to do it. But because when I’m at school I need to do it very fast.’

Strikingly, this study contradicts claims that others have made about the potential risks of such technologies adversely affecting the ability of teenagers to concentrate on serious study. The researchers, Dr Chris Davies and Dr Rebecca Eynon, found no evidence to support this claim. Furthermore, their study concludes that the internet has opened up far more opportunities for young people to do their learning at home.

Dr Davies said: ‘Parental anxiety about how teenagers might use the very technologies that they have bought their own children at considerable expense is leading some to discourage their children from becoming confident users. The evidence, based on the survey and hundreds of interviews, shows that parents have tended to focus on the negative side – especially the distracting effects of social networking sites – without always seeing the positive use that their children often make of being online.’

Teenagers’ experiences of the social networking site Facebook appear to be mixed, says the study. Although some regarded Facebook as an integral part of their social life, others were concerned about the number of arguments that had escalated due to others wading in as a result of comments and photographs being posted.

The age at which teenagers first used Facebook was found to fall over the three-year period, from around 16 years old in 2008 to 12 or 13 years old by 2011. Interviews reveal that even the very youngest teenagers, including those not particularly interested, felt under some peer pressure to join. But the study also suggests that the popularity of Facebook is waning, with teenagers now exploring other forms of social networking.

Dr Davies commented: ‘There is no steady state of teenage technology use – fashions and trends are constantly shifting, and things change very rapidly when they do change.’

The research was part funded by Becta, the British Educational Communications and Technology Agency, a non-departmental public body formed under the last Labour government. The study findings are contained in a new book entitled, Teenagers and Technology, published by Routledge in November 2012.

]]>
New issue of Policy and Internet (2,2) https://ensr.oii.ox.ac.uk/new-issue-of-policy-and-internet-22/ Thu, 19 Aug 2010 12:17:12 +0000 http://blogs.oii.ox.ac.uk/policy/?p=128 Welcome to the second issue of Policy & Internet for 2010! We are pleased to present six articles which investigate the role of the Internet in a wide range of policy processes and sectors: agenda setting in online and traditional media; environmental policy networks; online deliberation on climate change; data protection and privacy; net neutrality; and digital inclusion/exclusion. You may access any of the articles below at no charge.

Helen Margetts: Editorial

Ben Sayre, Leticia Bode, Dhavan Shah, Dave Wilcox, and Chirag Shah: Agenda Setting in a Digital Age: Tracking Attention to California Proposition 8 in Social Media, Online News and Conventional News

Kathleen McNutt and Adam Wellstead: Virtual Policy Networks in Forestry and Climate Change in the U.S. and Canada: Government Nodality, Internationalization and Actor Complexity

Julien Talpin and Stéphanie Wojcik: Deliberating Environmental Policy Issues: Comparing the Learning Potential of Online and Face-To-Face Discussions on Climate Change

Andrew A. Adams, Kiyoshi Murata, and Yohko Orito: The Development of Japanese Data Protection

Scott Jordan: The Application of Net Neutrality to Wireless Networks Based on Network Architecture

Alison Powell, Amelia Bryne, and Dharma Dailey: The Essential Internet: Digital Exclusion in Low-Income American Communities

]]>