Jonathan Bright – The Policy and Internet Blog (https://ensr.oii.ox.ac.uk): Understanding public policy online

Can we predict electoral outcomes from Wikipedia traffic?
https://ensr.oii.ox.ac.uk/can-we-predict-electoral-outcomes-from-wikipedia-traffic/
Tue, 06 Dec 2016

As digital technologies become increasingly integrated into the fabric of social life, their ability to generate large amounts of information about the opinions and activities of the population increases. The opportunities in this area are enormous: predictions based on socially generated data are much cheaper than conventional opinion polling, offer the potential to avoid classic biases inherent in asking people to report their opinions and behaviour, and can deliver results much more quickly and be updated more rapidly.

In their article published in EPJ Data Science, Taha Yasseri and Jonathan Bright develop a theoretically informed prediction of election results from socially generated data combined with an understanding of the social processes through which the data are generated. They can thereby explore the predictive power of socially generated data while enhancing theory about the relationship between socially generated data and real world outcomes. Their particular focus is on the readership statistics of politically relevant Wikipedia articles (such as those of individual political parties) in the time period just before an election.

By applying these methods to a variety of different European countries in the context of the 2009 and 2014 European Parliament elections, they first show that the relative change in the number of page views to the general Wikipedia page on the election can offer a reasonable estimate of the relative change in election turnout at the country level. This supports the idea that increases in online information seeking at election time are driven by voters who are considering voting.

Second, they show that a theoretically informed model based on previous national results, Wikipedia page views, news media mentions, and basic information about the political party in question can offer a good prediction of the overall vote share of the party in question. Third, they present a model for predicting change in vote share (i.e., voters swinging towards and away from a party), showing that Wikipedia page-view data provide an important increase in predictive power in this context.

This relationship is exaggerated in the case of newer parties — consistent with the idea that voters don’t seek information uniformly about all parties at election time. Rather, they behave like ‘cognitive misers’, being more likely to seek information on new political parties with which they do not have previous experience and being more likely to seek information only when they are actually changing the way they vote.

In contrast, there was no evidence of a ‘media effect’: there was little correlation between news media mentions and overall Wikipedia traffic patterns. Indeed, the news media and Wikipedia appeared to be biased towards different things: with the news favouring incumbent parties, and Wikipedia favouring new ones.

Read the full article: Yasseri, T. and Bright, J. (2016) Wikipedia traffic data and electoral prediction: towards theoretically informed models. EPJ Data Science. 5 (1).

We caught up with the authors to explore the implications of the work.

Ed: Wikipedia represents a vast amount of not just content, but also user behaviour data. How did you access the page view stats — but also: is anyone building dynamic visualisations of Wikipedia data in real time?

Taha and Jonathan: Wikipedia makes its page view data available for free (in the same way as it makes all of its information available!). You can find the data here, along with some visualisations.
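For readers who want to pull these counts themselves, the Wikimedia Pageviews REST API exposes per-article daily totals. Note that this API only covers July 2015 onwards, so the 2009/2014 study periods predate it (older counts come from Wikimedia's raw dump files). A minimal sketch, with an example article and date range of our own choosing:

```python
# Sketch of fetching daily page-view counts from the public Wikimedia
# Pageviews REST API. Article title and dates below are just examples.
import json
import urllib.parse
import urllib.request

API = "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article"

def pageviews_url(article, start, end, project="en.wikipedia"):
    """Build the REST URL for daily views of one article (dates as YYYYMMDD)."""
    title = urllib.parse.quote(article.replace(" ", "_"), safe="")
    return f"{API}/{project}/all-access/all-agents/{title}/daily/{start}/{end}"

def fetch_daily_views(article, start, end):
    """Return {YYYYMMDD: views}; requires network access."""
    with urllib.request.urlopen(pageviews_url(article, start, end)) as resp:
        items = json.load(resp)["items"]
    return {item["timestamp"][:8]: item["views"] for item in items}

print(pageviews_url("2019 European Parliament election", "20190501", "20190526"))
```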

Ed: Why did you use Wikipedia data to examine election prediction rather than (I suppose the more fashionable) Twitter? How do they compare as data sources?

Taha and Jonathan: One of the big problems with using Twitter to predict things like elections is that contributing on social media is a very public act, and people are quite conscious of this. For example, some parties are seen as unfashionable, so people might not make their voting choice explicit. Hence social media might overall seem to be saying one thing when people are actually thinking another.

By contrast, looking for information online on a website like Wikipedia is an essentially private activity, so there aren’t these social biases. In other words, on Wikipedia we have direct access to transactional data on what people do, rather than what they say or prefer to say.

Ed: How did these results and findings compare with the social media analysis done as part of our UK General Election 2015 Election Night Data Hack? (long title..)

Taha and Jonathan: The GE2015 data hack looked at individual politicians. We found that having a Wikipedia page is becoming increasingly important — over 40% of Labour and Conservative Party candidates had an individual Wikipedia page. We also found that this was highly correlated with Twitter presence — being more active on one network also made you more likely to be active on the other one. And we found some initial evidence that social media reaction was correlated with votes, though there is a lot more work to do here!

Ed: Can you see digital social data analysis replacing (or maybe just complementing) opinion polling in any meaningful way? And what problems would need to be addressed before that happened: e.g. around representative sampling, data cleaning, and weeding out bots?

Taha and Jonathan: Most political pundits are starting to look at a range of indicators of popularity — for example, not just voting intention, but also ratings of leadership competence, economic performance, etc. We can see good potential for social data to become part of this range of popularity indicators. However, we don’t think it will replace polling just yet; the use of social media is limited to certain demographics. Also, the data collected from social media are often very shallow, not allowing for validation. In the case of Wikipedia, for example, we only know how many times each page is viewed, but we don’t know by how many people and from where.

Ed: You do a lot of research with Wikipedia data — has that made you reflect on your own use of Wikipedia?

Taha and Jonathan: It’s interesting to think about this activity of getting direct information about politicians — it’s essentially a new activity, something you couldn’t do in the pre-digital age. I know that I personally [Jonathan] use it to find out things about politicians and political parties — it would be interesting to know more about why other people are using it as well. This could have a lot of impacts. One thing Wikipedia has is a really long memory, in a way that other means of getting information on politicians (such as newspapers) perhaps don’t. We could start to see this type of thing becoming more important in electoral politics.

[Taha] … Since my research has been mostly focused on Wikipedia edit wars between human and bot editors, I have naturally become more cautious about the information I find on Wikipedia. When it comes to sensitive topics, such as politics, Wikipedia is a good point to start, but not a great point to end the search!


Taha Yasseri and Jonathan Bright were talking to blog editor David Sutcliffe.

Crowdsourcing for public policy and government
https://ensr.oii.ox.ac.uk/crowdsourcing-for-public-policy-and-government/
Thu, 27 Aug 2015

If elections were invented today, they would probably be referred to as “crowdsourcing the government.” First coined in a 2006 issue of Wired magazine (Howe, 2006), the term crowdsourcing has come to be applied loosely to a wide variety of situations where ideas, opinions, labor or something else is “sourced” in from a potentially large group of people. Whilst most commonly applied in business contexts, there is an increasing amount of buzz around applying crowdsourcing techniques in government and policy contexts as well (Brabham, 2013).

Though there is nothing qualitatively new about involving more people in government and policy processes, digital technologies in principle make it possible to increase the quantity of such involvement dramatically, by lowering the costs of participation (Margetts et al., 2015) and making it possible to tap into people’s free time (Shirky, 2010). This difference in quantity is arguably great enough to obtain a quality of its own. We can thus be justified in using the term “crowdsourcing for public policy and government” to refer to new digitally enabled ways of involving people in any aspect of democratic politics and government, not replacing but rather augmenting more traditional participation routes such as elections and referendums.

In this editorial, we will briefly highlight some of the key emerging issues in research on crowdsourcing for public policy and government. Our entry point into the discussion is a collection of research papers first presented at the Internet, Politics & Policy 2014 (IPP2014) conference organized by the Oxford Internet Institute (University of Oxford) and the Policy & Internet journal. The theme of this very successful conference—our third since the founding of the journal—was “crowdsourcing for politics and policy.” Out of almost 80 papers presented at the conference in September last year, 14 of the best have now been published as peer-reviewed articles in this journal, including five in this issue. A further handful of papers from the conference focusing on labor issues will be published in the next issue, but we can already take stock of all the articles focusing on government, politics, and policy.

The growing interest in crowdsourcing for government and public policy must be understood in the context of the contemporary malaise of politics, which is being felt across the democratic world, but most of all in Europe. The problems with democracy have a long history, from the declining powers of parliamentary bodies when compared to the executive; to declining turnouts in elections, declining participation in mass parties, and declining trust in democratic institutions and politicians. But these problems have gained a new salience in the last five years, as the ongoing financial crisis has contributed to the rise of a range of new populist forces all across Europe, and to a fragmentation of the center ground. Furthermore, the poor accuracy of pre-election polls in recent elections in Israel and the UK has generated considerable debate over the usefulness and accuracy of the traditional way of knowing what the public is thinking: the sample survey.

Many pin their hopes on technological and institutional innovations such as crowdsourcing to show a way out of the brewing crisis of democratic politics and political science. One of the key attractions of crowdsourcing techniques to governments and grass roots movements alike is the legitimacy such techniques are expected to be able to generate. For example, crowdsourcing techniques have been applied to enable citizens to verify the legality and correctness of government decisions and outcomes. A well-known application is to ask citizens to audit large volumes of data on government spending, to uncover any malfeasance but also to increase citizens’ trust in the government (Maguire, 2011).

Articles emerging from the IPP2014 conference analyze other interesting and comparable applications. In an article titled “Population as Auditor of an Election Process in Honduras: The Case of the VotoSocial Crowdsourcing Platform,” Carlos Arias, Jorge Garcia and Alejandro Corpeño (2015) describe the use of crowdsourcing for auditing election results. Dieter Zinnbauer (2015) discusses the potentials and pitfalls of the use of crowdsourcing for some other types of auditing purposes, in “Crowdsourced Corruption Reporting: What Petrified Forests, Street Music, Bath Towels, and the Taxman Can Tell Us About the Prospects for Its Future.”

Besides allowing citizens to verify the outcome of a process, crowdsourcing can also be used to lend an air of inclusiveness and transparency to a process itself. This process legitimacy can then indirectly legitimate the outcome of the process as well. For example, crowdsourcing-style open processes have been used to collect policy ideas, gather support for difficult policy decisions, and even generate detailed spending plans through participatory budgeting (Wampler & Avritzer, 2004). Articles emerging from our conference further advance this line of research. Roxana Radu, Nicolo Zingales and Enrico Calandro (2015) examine the use of crowdsourcing to lend process legitimacy to Internet governance, in an article titled “Crowdsourcing Ideas as an Emerging Form of Multistakeholder Participation in Internet Governance.” Graham Smith, Robert C. Richards Jr. and John Gastil (2015) write about “The Potential of Participedia as a Crowdsourcing Tool for Comparative Analysis of Democratic Innovations.”

An interesting cautionary tale is presented by Henrik Serup Christensen, Maija Karjalainen and Laura Nurminen (2015) in “Does Crowdsourcing Legislation Increase Political Legitimacy? The Case of Avoin Ministeriö in Finland.” They show how a citizen initiative process ended up decreasing government legitimacy, after the government failed to implement the outcome of an initiative process that was perceived as highly legitimate by its supporters. Taneli Heikka (2015) further examines the implications of citizen initiative processes to the state–citizen relationship in “The Rise of the Mediating Citizen: Time, Space and Citizenship in the Crowdsourcing of Finnish Legislation.”

In many of the contributions that touch on the legitimating effects of crowdsourcing, one can sense a third, latent theme. Besides allowing outcomes to be audited and processes to be potentially more inclusive, crowdsourcing can also increase the perceived legitimacy of a government or policy process by lending an air of innovation and technological progress to the endeavor and those involved in it. This is most explicitly stated by Simo Hosio, Jorge Goncalves, Vassilis Kostakos and Jukka Riekki (2015) in “Crowdsourcing Public Opinion Using Urban Pervasive Technologies: Lessons From Real-Life Experiments in Oulu.” They describe how local government officials collaborating with the research team to test a new public-screen-based polling system “expressed that the PR value boosted their public perception as a modern organization.” That some government crowdsourcing initiatives are at least in part motivated by such “crowdwashing” is hardly surprising, but it encourages us to retain a critical approach and analyze actual outcomes instead of accepting dominant discourses about the nature and effects of crowdsourcing at face value.

For instance, we must continue to examine the actual size, composition, internal structures and motivations of the supposed “crowds” that make use of online platforms. Articles emerging from our conference that contributed towards this aim include “Event Prediction With Learning Algorithms—A Study of Events Surrounding the Egyptian Revolution of 2011 on the Basis of Micro Blog Data” by Benedikt Boecking, Margeret Hall and Jeff Schneider (2015) and “Cyber Hate Speech on Twitter: An Application of Machine Classification and Statistical Modeling for Policy and Decision Making” by Pete Burnap and Matthew L. Williams (2015). Anatoliy Gruzd and Ksenia Tsyganova won a best paper award at the IPP2014 conference for an article published in this journal as “Information Wars and Online Activism During the 2013/2014 Crisis in Ukraine: Examining the Social Structures of Pro- and Anti-Maidan Groups.” These articles can be used to challenge the notion that crowdsourcing contributors are simply sets of independent individuals who are neatly representative of a larger population, and instead highlight the clusters, networks, and power structures inherent within them. This has implications for the democratic legitimacy of some of the more naive crowdsourcing initiatives.

One of the most original articles to emerge out of IPP2014 turns the concept of crowdsourcing for public policy and government on its head. While most research has focused on crowdsourcing’s empowering effects (or lack thereof), Gregory Asmolov (2015) analyses crowdsourcing as a form of social control. In an article titled “Vertical Crowdsourcing in Russia: Balancing Governance of Crowds and State–Citizen Partnership in Emergency Situations,” Asmolov draws on empirical evidence and theorists such as Foucault to show how crowdsourcing platforms can be used to institutionalize volunteer resources in order to align them with state objectives and prevent independent collective action. An article by Jorge Goncalves, Yong Liu, Bin Xiao, Saad Chaudhry, Simo Hosio and Vassilis Kostakos (2015) provides a less nefarious example of strategic use of online platforms to further government objectives, under the title “Increasing the Reach of Government Social Media: A Case Study in Modeling Government–Citizen Interaction on Facebook.”

Articles emerging from the conference also include two review articles that provide useful overviews of the field from different perspectives. “A Systematic Review of Online Deliberation Research” by Dennis Friess and Christiane Eilders (2015) takes stock of the use of digital technologies as public spheres. “The Fundamentals of Policy Crowdsourcing” by John Prpić, Araz Taeihagh and James Melton (2015) situates a broad variety of crowdsourcing literature into the context of a public policy cycle framework.

It has been extremely satisfying to follow the progress of these papers from initial conference submissions to high-quality journal articles, and to see that the final product not only advances the state of the art, but also provides certain new and critical perspectives on crowdsourcing. These perspectives will no doubt provoke responses, and Policy & Internet continues to welcome high-quality submissions dealing with crowdsourcing for public policy, government, and beyond.

Read the full editorial: Vili Lehdonvirta and Jonathan Bright (2015) Crowdsourcing for Public Policy and Government. Editorial. Policy & Internet 7 (3) 263–267.

References

Arias, C.R., Garcia, J. and Corpeño, A. (2015) Population as Auditor of an Election Process in Honduras: The Case of the VotoSocial Crowdsourcing Platform. Policy & Internet 7 (2) 185–202.

Asmolov, G. (2015) Vertical Crowdsourcing in Russia: Balancing Governance of Crowds and State–Citizen Partnership in Emergency Situations. Policy & Internet 7 (3).

Brabham, D. C. (2013). Citizen E-Participation in Urban Governance: Crowdsourcing and Collaborative Creativity. IGI Global.

Boecking, B., Hall, M. and Schneider, J. (2015) Event Prediction With Learning Algorithms—A Study of Events Surrounding the Egyptian Revolution of 2011 on the Basis of Micro Blog Data. Policy & Internet 7 (2) 159–184.

Burnap P. and Williams, M.L. (2015) Cyber Hate Speech on Twitter: An Application of Machine Classification and Statistical Modeling for Policy and Decision Making. Policy & Internet 7 (2) 223–242.

Christensen, H.S., Karjalainen, M. and Nurminen, L. (2015) Does Crowdsourcing Legislation Increase Political Legitimacy? The Case of Avoin Ministeriö in Finland. Policy & Internet 7 (1) 25–45.

Friess, D. and Eilders, C. (2015) A Systematic Review of Online Deliberation Research. Policy & Internet 7 (3).

Goncalves, J., Liu, Y., Xiao, B., Chaudhry, S., Hosio, S. and Kostakos, V. (2015) Increasing the Reach of Government Social Media: A Case Study in Modeling Government–Citizen Interaction on Facebook. Policy & Internet 7 (1) 80–102.

Gruzd, A. and Tsyganova, K. (2015) Information Wars and Online Activism During the 2013/2014 Crisis in Ukraine: Examining the Social Structures of Pro- and Anti-Maidan Groups. Policy & Internet 7 (2) 121–158.

Heikka, T. (2015) The Rise of the Mediating Citizen: Time, Space and Citizenship in the Crowdsourcing of Finnish Legislation. Policy & Internet 7 (3).

Hosio, S., Goncalves, J., Kostakos, V. and Riekki, J. (2015) Crowdsourcing Public Opinion Using Urban Pervasive Technologies: Lessons From Real-Life Experiments in Oulu. Policy & Internet 7 (2) 203–222.

Howe, J. (2006). The Rise of Crowdsourcing. Wired, June 2006.

Maguire, S. (2011). Can Data Deliver Better Government? Political Quarterly, 82(4), 522–525.

Margetts, H., John, P., Hale, S., & Yasseri, T. (2015). Political Turbulence: How Social Media Shape Collective Action. Princeton University Press.

Prpić, J., Taeihagh, A. and Melton, J. (2015) The Fundamentals of Policy Crowdsourcing. Policy & Internet 7 (3).

Radu, R., Zingales, N. and Calandro, E. (2015) Crowdsourcing Ideas as an Emerging Form of Multistakeholder Participation in Internet Governance. Policy & Internet 7 (3).

Shirky, C. (2010). Cognitive Surplus: How Technology Makes Consumers into Collaborators. Penguin Publishing Group.

Smith, G., Richards R.C. Jr. and Gastil, J. (2015) The Potential of Participedia as a Crowdsourcing Tool for Comparative Analysis of Democratic Innovations. Policy & Internet 7 (2) 243–262.

Wampler, B., & Avritzer, L. (2004). Participatory publics: civil society and new institutions in democratic Brazil. Comparative Politics, 36(3), 291–312.

Zinnbauer, D. (2015) Crowdsourced Corruption Reporting: What Petrified Forests, Street Music, Bath Towels, and the Taxman Can Tell Us About the Prospects for Its Future. Policy & Internet 7 (1) 1–24.

How big data is breathing new life into the smart cities concept
https://ensr.oii.ox.ac.uk/how-big-data-is-breathing-new-life-into-the-smart-cities-concept/
Thu, 23 Jul 2015

“Big data” is a growing area of interest for public policy makers: for example, it was highlighted in UK Chancellor George Osborne’s recent budget speech as a major means of improving efficiency in public service delivery. While big data can apply to government at every level, the majority of innovation is currently being driven by local government, especially cities, who perhaps have greater flexibility and room to experiment and who are constantly on a drive to improve service delivery without increasing budgets.

Work on big data for cities is increasingly incorporated under the rubric of “smart cities”. The smart city is an old(ish) idea: give urban policymakers real time information on a whole variety of indicators about their city (from traffic and pollution to park usage and waste bin collection) and they will be able to improve decision making and optimise service delivery. But the initial vision, which mostly centred around adding sensors and RFID tags to objects around the city so that they would be able to communicate, has thus far remained unrealised (big up-front investment needs and the requirements of IPv6 are perhaps the most obvious reasons for this).

The rise of big data – large, heterogeneous datasets generated by the increasing digitisation of social life – has however breathed new life into the smart cities concept. If all the cars have GPS devices, all the people have mobile phones, and all opinions are expressed on social media, then do we really need the city to be smart at all? Instead, policymakers can simply extract what they need from a sea of data which is already around them. And indeed, data from mobile phone operators has already been used for traffic optimisation, Oyster card data has been used to plan London Underground service interruptions, sewage data has been used to estimate population levels … the examples go on.

However, at the moment these examples remain largely anecdotal, driven forward by a few cities rather than adopted worldwide. The big data driven smart city faces considerable challenges if it is to become a default means of policymaking rather than a conversation piece. Getting access to the right data; correcting for biases and inaccuracies (not everyone has a GPS, phone, or expresses themselves on social media); and communicating it all to executives remain key concerns. Furthermore, especially in a context of tight budgets, most local governments cannot afford to experiment with new techniques which may not pay off instantly.

This is the context of two current OII projects in the smart cities field: UrbanData2Decide (2014-2016) and NEXUS (2015-2017). UrbanData2Decide joins together a consortium of European universities, each working with a local city partner, to explore how local government problems can be resolved with urban generated data. In Oxford, we are looking at how open mapping data can be used to estimate alcohol availability; how website analytics can be used to estimate service disruption; and how internal administrative data and social media data can be used to estimate population levels. The best concepts will be built into an application that allows decision makers to access them in real time.

NEXUS builds on this work. A collaborative partnership with BT, it will look at how social media data and some internal BT data can be used to estimate people movement and traffic patterns around the city, joining these data into network visualisations which are then displayed to policymakers in a data visualisation application. Both projects fill an important gap by allowing city officials to experiment with data driven solutions, providing proof of concepts and showing what works and what doesn’t. Increasing academic-government partnerships in this way has real potential to drive forward the field and turn the smart city vision into a reality.


OII Research Fellow Jonathan Bright is a political scientist specialising in computational and ‘big data’ approaches to the social sciences. His major interest concerns studying how people get information about the political process, and how this is changing in the internet era.

The life and death of political news: using online data to measure the impact of the audience agenda
https://ensr.oii.ox.ac.uk/the-life-and-death-of-political-news-using-online-data-to-measure-the-impact-of-the-audience-agenda/
Tue, 09 Sep 2014
Image of the Telegraph’s state of the art “hub and spoke” newsroom layout by David Sim.
The political agenda has always been shaped by what the news media decide to publish — through their ability to broadcast to large, loyal audiences in a sustained manner, news editors have the ability to shape ‘political reality’ by deciding what is important to report. Traditionally, journalists pass potential stories to their editors; editors then choose which stories to publish. However, with the increasing importance of online news, editors must now decide not only what to publish and where, but how long it should remain prominent and visible to the audience on the front page of the news website.

The question of how much influence the audience has in these decisions has always been ambiguous. While in theory we might expect journalists to be attentive to readers, journalism has also been characterized as a profession with a “deliberate…ignorance of audience wants” (Anderson, 2011). This ‘anti-populism’ is still often portrayed as an important journalistic virtue, in the context of telling people what they need to hear, rather than what they want to hear. Recently, however, attention has been turning to the potential impact that online audience metrics are having on journalism’s “deliberate ignorance”. Online publishing provides a huge amount of information to editors about visitor numbers, visit frequency, and what visitors choose to read and how long they spend reading it. Online editors now have detailed information about which articles are popular almost as soon as they are published, with these statistics frequently displayed prominently in the newsroom.

The rise of audience metrics has created concern both within the journalistic profession and academia, as part of a broader set of concerns about the way journalism is changing online. Many have expressed concern about a ‘culture of click’, whereby important but unexciting stories make way for more attention grabbing pieces, and editorial judgments are overridden by traffic statistics. At a time when media business models are under great strain, the incentives to follow the audience are obvious, particularly when business models increasingly rely on revenue from online traffic and advertising. The consequences for the broader agenda-setting function of the news media could be significant: more prolific or earlier readers might play a disproportionate role in helping to select content; particular social classes or groupings that read news online less frequently might find their issues being subtly shifted down the agenda.

The extent to which such a populist influence exists has attracted little empirical research. Many ethnographic studies have shown that audience metrics are being captured in online newsrooms, with anecdotal evidence for the influence of traffic statistics on an article’s lifetime (Anderson, 2011; MacGregor, 2007). However, many editors have emphasised that popularity is not a major determining factor (MacGregor, 2007), and that news values remain significant in terms of the placement of news articles.

In order to assess the possible influence of audience metrics on decisions made by political news editors, we undertook a systematic, large-scale study of the relationship between readership statistics and article lifetime. We examined the news cycles of five major UK news outlets (the BBC, the Daily Telegraph, the Guardian, the Daily Mail and the Mirror) over a period of six weeks, capturing their front pages every 15 minutes, resulting in over 20,000 front-page captures and more than 40,000 individual articles. We measured article readership by capturing information from the BBC’s “most read” list of news articles (twelve percent of the articles were featured at some point on the ‘most read’ list, with a median time to achieving this status of two hours, and an average article life of 15 hours on the front page). Using the Cox Proportional Hazards model (which allowed us to quantify the impact of an article’s appearance on the ‘most read’ list on its chance of survival), we asked whether an article’s being listed in a ‘most read’ column affected the length of time it remained on the front page.

We found that ‘most read’ articles had, on average, a 26% lower chance of being removed from the front page than equivalent articles which were not on the most read list, providing support for the idea that online editors are influenced by readership statistics. In addition to assessing the general impact of readership statistics, we also wanted to see whether this effect differs between ‘political’ and ‘entertainment’ news. Research on participatory journalism has suggested that online editors might be more willing to allow audience participation in areas of soft news such as entertainment, arts, sports, etc. We find a small amount of evidence for this claim, though the difference between the two categories was very slight.
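As a quick gloss on these numbers: the Cox model expresses a covariate’s effect as a hazard ratio, exp(β), so a 26% lower chance of removal corresponds to a hazard ratio of roughly 0.74 on the ‘most read’ indicator. A toy calculation (illustrative only, not the paper’s fitted coefficient):

```python
# How a Cox proportional-hazards coefficient maps onto a statement like
# "26% lower chance of removal". Illustrative numbers, not the paper's fit.
import math

def hazard_ratio(beta: float) -> float:
    """Multiplicative effect on the removal hazard of a one-unit covariate change."""
    return math.exp(beta)

# A 26% reduction in the removal hazard corresponds to this coefficient:
beta_most_read = math.log(0.74)

hr = hazard_ratio(beta_most_read)
print(f"beta = {beta_most_read:.3f}, hazard ratio = {hr:.2f}")
print(f"removal hazard reduced by {(1 - hr):.0%} while on the 'most read' list")
```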

Finally, we wanted to assess whether there is a ‘quality’ / ‘tabloid’ split. Part of the definition of tabloid-style journalism lies precisely in its willingness to follow the demands of its audience. However, we found the audience ‘effect’ (surprisingly) to be most obvious in the quality papers. For tabloids, ‘most read’ status actually had a slightly negative effect on article lifetime. We wouldn’t argue that tabloid editors actively reject the wishes of their audience; however, we can say that these editors are no more likely to follow their audience than the typical ‘quality’ editor, and in fact may be less so. We do not have a clear explanation for this difference, though we could speculate that, as tabloid publications are already more tuned in to the wishes of their audience, the appearance of readership statistics makes less practical difference to the overall product. However, it may also simply be the case that the online environment is slowly producing new journalistic practices for which the tabloid / quality distinction will be less useful.

So on the basis of our study, we can say that high-traffic articles do in fact spend longer in the spotlight than ones that attract less readership: audience readership does have a measurable impact on the lifespan of political news. The audience is no longer the unknown quantity it was in offline journalism: it appears to have a clear impact on journalistic practice. The question that remains, however, is whether this constitutes evidence of a new ‘populism’ in journalism; or whether it represents (as editors themselves have argued) the simple striking of a balance between audience demands and news values.

Read the full article: Bright, J., and Nicholls, T. (2014) The Life and Death of Political News: Measuring the Impact of the Audience Agenda Using Online Data. Social Science Computer Review 32 (2) 170-181.

References

Anderson, C. W. (2011) Between creative and quantified audiences: Web metrics and changing patterns of newswork in local US newsrooms. Journalism 12 (5) 550-566.

MacGregor, P. (2007) Tracking the Online Audience. Journalism Studies 8 (2) 280-298.


OII Research Fellow Jonathan Bright is a political scientist specialising in computational and ‘big data’ approaches to the social sciences. His major interest concerns studying how people get information about the political process, and how this is changing in the internet era.

Tom Nicholls is a doctoral student at the Oxford Internet Institute. His research interests include the impact of technology on citizen/government relationships, the Internet’s implications for public management and models of electronic public service delivery.
