Helen Margetts – The Policy and Internet Blog (https://ensr.oii.ox.ac.uk) – Understanding public policy online

Five reasons ‘technological solutions’ are a distraction from the Irish border problem
https://ensr.oii.ox.ac.uk/five-reasons-technological-solutions-are-a-distraction-from-the-irish-border-problem/ – Thu, 21 Feb 2019

In this post, Helen Margetts, Cosmina Dorobantu, Florian Ostmann, and Christina Hitrova discuss the focus on ‘technological solutions’ in the context of the Irish border debate – arguing that it is becoming a red herring and a distraction from the political choices ahead. They write:

 

Technology is increasingly touted as an alternative to the Irish backstop, especially in light of the government’s difficulty in finding a Brexit strategy that can command a majority in the House of Commons. As academics, we have been following the debate around the role of technology in monitoring the border with interest, but also scepticism and frustration. Technology can foster government innovation in countless ways, and digital technologies in particular have the potential to transform the way in which government makes policy and designs public services. Yet, in the context of the Irish border debate, the focus on ‘technological solutions’ is becoming a red herring and distracts from the political choices ahead. Technology cannot solve the Irish border problem and it is time to face the facts.

1: Technology cannot ensure a ‘frictionless border’

Any legal or regulatory restrictions on the movement of goods or people between the UK and the Republic of Ireland post-Brexit will make border-related friction inevitable. Setting the restrictions is a matter of political agreement. Technology can help enforce the legal or regulatory restrictions, but it cannot prevent the introduction of friction compared to the status quo. For example, technology may speed up documentation, processing, and inspections, but it cannot eliminate the need for these procedures, and these procedures will impose new burdens on those undergoing them.

2: There will be a need for new infrastructure at or near the border

Technology may make it possible for some checks to be carried out away from the border. For example, machine learning algorithms can assist in identifying suspicious vehicles and police forces can stop and inspect them away from the border. Regardless of where the relevant inspections are carried out, however, there will be a need for new infrastructure at or near the border, such as camera systems that record the identity of the vehicles crossing the frontier. The amount of new infrastructure needed will depend on how strict the UK and the EU decide to be in enforcing restrictions on the movement of goods and people. At a minimum, cameras will have to be installed at the border. Stricter enforcement regimes will require additional infrastructure such as sensors, scanners, boom barriers or gates.

3: ‘Frictionless’ solutions are in direct conflict with the Brexit goal to ‘take back control’ over borders

There is a fundamental conflict between the goals of minimising friction and enforcing compliance. For example, friction for Irish and UK citizens travelling across the Irish border could be reduced by a system that allows passenger vehicles registered within the Common Travel Area to cross the border freely. This approach, however, would make it difficult to monitor whether registered vehicles are used to facilitate unauthorised movements of people or goods across the border. More generally, the more effective the border management system is in detecting and preventing non-compliant movements of goods or people across the border, the more friction there will be.

4: Technology has known imperfections

Many of the ‘technological solutions’ that have been proposed as ways to minimise friction have blind spots when it comes to monitoring and enforcing compliance – a fact quietly acknowledged through comments about the solutions’ ‘dependence on trust’. Automated licence plate recognition systems, for example, can easily be tricked by using stolen or falsified number plates. Probabilistic algorithmic tools to identify the ‘high risk’ vehicles selected for inspections will fail to identify some cases of non-compliance. Technological tools may lead to improvements over risk-based approaches that rely on human judgment alone, but they cannot, on their own, monitor the border reliably.

5: Government will struggle to develop the relevant technological tools

Suggestions that the border controversy may find a last-minute solution by relying on technology seem dangerously detached from the realities of large-scale technology projects, especially in the public sector. In addition to considerable expertise and financial investments, such projects need time, a resource that is quickly running out as March 29 draws closer. The history of government technology projects is littered with examples of failures to meet expectations, enormous cost overruns, and troubled relationships with computer services providers.

A recent example is the mobile phone app meant to facilitate the registration of the 3.7 million EU nationals living in the UK, which does not work on iPhones. Private companies will be keen to sell technological solutions to the backstop problem, with firms like Fujitsu and GSM already signalling their interest in addressing this technological challenge. Under time pressure, government will struggle to evaluate the feasibility of the technological solutions proposed by these private providers, negotiate a favourable contract, and ensure that the resulting technology is fit for purpose.

Technological tools can help implement customs rules, but they cannot fill the current political vacuum. The design, development, and implementation of border management tools require regulatory clarity—prior knowledge of the rules whose monitoring and enforcement the technical tools are meant to support. What these rules will be for the UK-Ireland border following Brexit is a political question. The recent focus on ‘technological solutions’, rather than informing the debate around this question, seems to have served as a strategy for avoiding substantive engagement with it. It is time for government to accept that technology cannot solve the Irish border problem and move on to find real, feasible alternatives.

Authors:

Professor Helen Margetts, Professor of Society and the Internet, Oxford Internet Institute, University of Oxford; Director of the Public Policy Programme, The Alan Turing Institute

Dr Cosmina Dorobantu, Research Associate, Oxford Internet Institute, University of Oxford; Deputy Director of the Public Policy Programme, The Alan Turing Institute

Dr Florian Ostmann, Policy Fellow, Public Policy Programme, The Alan Turing Institute

Christina Hitrova, Digital Ethics Research Assistant, Public Policy Programme, The Alan Turing Institute

Disclaimer: The views expressed in this article are those of the listed members of The Alan Turing Institute’s Public Policy Programme in their individual academic capacities, and do not represent a formal view of the Institute.

Stormzy 1: The Sun 0 — Three Reasons Why #GE2017 Was the Real Social Media Election
https://ensr.oii.ox.ac.uk/stormzy-1-the-sun-0-three-reasons-why-ge2017-was-the-real-social-media-election/ – Thu, 15 Jun 2017

After its initial appearance as a cynical but safe device by Theresa May to ratchet up the Conservative majority, the UK general election of 2017 turned out to be one of the most exciting and unexpected of all time. One of the many things for which it will be remembered is as the first election where it was the social media campaigns that really made the difference to the relative fortunes of the parties, rather than traditional media. And it could be the first election where the right-wing tabloids finally ceded their influence to new media – their power over politics broken, according to some.

Social media have been part of the UK electoral landscape for a while. In 2015, many of us attributed the Conservative success in part to their massive expenditure on targeted Facebook advertising, 10 times more than Labour, whose ‘bottom-up’ Twitter campaign seemed mainly to have preached to the converted. Social media advertising was used more successfully by Leave.EU than Remain in the referendum (although some of us cautioned against blaming social media for Brexit). But in both these campaigns, the relentless attack of the tabloid press was able to strike at the heart of the Labour and Remain campaigns and was widely credited with having influenced the result, as in so many elections from the 1930s onwards.

However, in 2017 Labour’s campaign was widely regarded as having made a huge positive difference to the party’s share of the vote – unexpectedly rising by 10 percentage points on 2015 – in the face of a typically sustained and vicious attack by the Daily Mail, the Sun and the Daily Express. Why? There are (at least) three reasons.

First, increased turnout of young people is widely regarded to have driven Labour’s improved share of the vote – and young people do not, in general, read newspapers, not even online. Instead, they spend increasing proportions of their time on social media platforms on mobile phones, particularly Instagram (with 10 million UK users, mostly under 30) and Snapchat (used by half of 18-34 year olds), both mobile-first platforms. On these platforms, although they may see individual stories that are shared or appear on their phone’s news portal, they may not even see the front page headlines that used to make politicians shake.

Meanwhile, what people do pay attention to and share on these platforms are videos and music, so popular artists amass huge followings. Some of the most popular came out in favour of Labour under the umbrella hashtag #Grime4Corbyn, with artists like Stormzy, JME (whose Facebook interview with Corbyn was viewed 2.5 million times) and Skepta with over a million followers on Instagram alone.

A leaflet from Croydon – pointing out that ‘Even your Dad has more Facebook friends’ than the 2015 vote difference between Conservative and Labour, and showing Stormzy saying ‘Vote Labour!’ – was shared millions of times. Obviously we don’t know how much difference these endorsements made, but by sharing videos and images, they certainly spread the idea of voting for Corbyn across huge social networks.

Second, Labour have overtaken the Tories in reaching out across the social platforms used by young people, with an incredibly efficient advertising strategy. There is no doubt that in 2017 the Conservatives ran a relentless campaign of anti-Corbyn attack ads on Facebook and Instagram. But for the Conservatives, social media are just for elections. Labour, by contrast, have been using these channels for two years now – Corbyn has been active on Snapchat since becoming Labour leader in 2015 (when some of us were surprised to hear our teenage offspring announcing brightly ‘I’m friends with Jeremy Corbyn on Snapchat’).

That means that by the time of the election, Corbyn and various fiercely pro-Labour online-only news outlets like the Canary had acquired a huge following among this demographic, meaning they did not have to pay for ads. And if you have followers to spread your message, you can be very efficient with advertising spend. While the Conservatives spent more than £1m on direct advertising with Facebook etc., nearly 10 million people watched pro-Labour videos on Facebook that cost less than £2K to make. Furthermore, there is some evidence that the relentless negativity of the Conservative advertising campaign particularly put young people off. After all, the advertising guidelines for Instagram advise ‘Images should tell a story/be inspirational’!

On the day before the election, the Daily Mail ran a front-page headline ‘Apologists for Terror’, with a photo of Diane Abbott alongside Corbyn and John McDonnell. But that morning Labour announced that Abbott was standing aside due to illness. The paper circulating around the networks and sitting on news-stands was already out of date. Digital natives are used to real-time information; they are never going to be swayed by something so clearly past its sell-by date.

Likewise, the Sun’s election-day image – a grotesque picture of Jeremy Corbyn in a dustbin under the headline ‘Corbinned’ – was photoshopped before the first editions landed, with an equally grotesque photograph of May taking his place in the dustbin. The parody won’t have reached the same audience, perhaps, but it will have reached a lot of people.

It will be a long time before we can really assess the influence of social media in the 2017 election, and some things we may never know. That is because all the data that would allow us to do so is held by the platforms themselves – Facebook, Instagram, Snapchat and so on. That is a crucial issue for the future of our democracy, already bringing calls for some transparency in political advertising both by social media platforms and the parties themselves. Under current conditions the Electoral Commission is incapable of regulating election advertising effectively, or judging (for example) how much national parties spend on targeted advertising locally. This is something that urgently needs addressing in the coming months, especially given Britain’s current penchant for elections.

The secret and often dark world of personalized political advertising on social media, where strong undercurrents of support remain hidden to the outside world, is one reason why polls fail to predict election results: the true picture emerges only after the election has taken place. Having the data to understand the social media election would also explain some of the volatility in elections these days, as explored in our book Political Turbulence: How Social Media Shape Collective Action. By investigating large-scale data on political activity, my co-authors and I showed that social media are injecting the same sort of instability into politics as they have into cultural markets, where most artists gain no traction at all but a (probably unpredictable) few become massively popular – the young singer Ed Sheeran’s ‘Shape of You’ has been streamed one billion times on Spotify alone.

In 2017, Stormzy and co. provided a more direct link between political and music markets, and this kind of development will ensure that politics in the age of social media will remain turbulent and unpredictable. We can’t claim to have predicted Labour’s unexpected success in this election, but we can claim to have foreseen that it couldn’t be predicted.

Of course social media is transforming politics. But it’s not to blame for Brexit and Trump
https://ensr.oii.ox.ac.uk/of-course-social-media-is-transforming-politics-but-its-not-to-blame-for-brexit-and-trump/ – Mon, 09 Jan 2017

After Brexit and the election of Donald Trump, 2016 will be remembered as the year of cataclysmic democratic events on both sides of the Atlantic. Social media has been implicated in the wave of populism that led to both these developments.

Attention has focused on echo chambers, with many arguing that social media users exist in ideological filter bubbles, narrowly focused on their own preferences, prey to fake news and political bots, reinforcing polarization and leading voters to turn away from the mainstream. Mark Zuckerberg has responded with the strange claim that his company (built on $5 billion of advertising revenue) does not influence people’s decisions.

So what role did social media play in the political events of 2016?

Political turbulence and the new populism

There is no doubt that social media has brought change to politics. From the waves of protest and unrest in response to the 2008 financial crisis, to the Arab Spring of 2011, there has been a generalized feeling that political mobilization is on the rise, and that social media has had something to do with it.

Our book investigating the relationship between social media and collective action, Political Turbulence, focuses on how social media allows new, “tiny acts” of political participation (liking, tweeting, viewing, following, signing petitions and so on), which turn social movement theory around. Rather than identifying with issues, forming collective identity and then acting to support the interests of that identity – or voting for a political party that supports it – in a social media world, people act first, and think about it, or identify with others later, if at all.

These tiny acts of participation can scale up to large-scale mobilizations, such as demonstrations, protests or campaigns for policy change. But they almost never do. The overwhelming majority (99.9%) of petitions to the UK or US governments fail to get the 100,000 signatures required for a parliamentary debate (UK) or an official response (US).
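To make the shape of that attrition concrete, here is a minimal simulation sketch (not from the book – every parameter is an invented assumption, chosen only to produce a heavy-tailed outcome): each petition grows multiplicatively, so early luck compounds as social information attracts further signatures, and we count how many cross the 100,000-signature threshold.

```python
import random

THRESHOLD = 100_000  # signatures needed for a debate (UK) or official response (US)

def simulate_petition(days=30):
    """Toy model of one petition: daily growth is multiplicative, so early
    luck compounds (people sign what they see others signing). Parameters
    are illustrative assumptions, not estimates from real petition data."""
    signatures = 10.0  # a handful of initial 'tiny acts'
    for _ in range(days):
        # heavy-tailed daily growth factor with median 1 (i.e. no growth)
        signatures *= random.lognormvariate(0.0, 0.55)
    return signatures

random.seed(42)
outcomes = [simulate_petition() for _ in range(100_000)]
successes = sum(s >= THRESHOLD for s in outcomes)
print(f"{successes} of {len(outcomes):,} simulated petitions "
      f"({100 * successes / len(outcomes):.2f}%) reached {THRESHOLD:,} signatures")
```

Under these made-up parameters, most runs wither at a few dozen signatures while roughly one in a thousand explodes past the threshold – the same qualitative pattern as the 99.9% failure rate described above.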

The very few that succeed do so very quickly on a massive scale (petitions challenging the Brexit and Trump votes immediately shot above 4 million signatures, to become the largest petitions in history), but without the normal organizational or institutional trappings of a social or political movement, such as leaders or political parties – the reason why so many of the Arab Spring revolutions proved disappointing.

This explosive rise, non-normal distribution and lack of organization that characterizes contemporary politics can explain why many political developments of our time seem to come from nowhere. It can help to understand the shock waves of support that brought us the Italian Five Star Movement, Podemos in Spain, Jeremy Corbyn, Bernie Sanders, and most recently Brexit and Trump – all of which have campaigned against the “establishment” and challenged traditional political institutions to breaking point.

Each successive mobilization has made people believe that challengers from outside the mainstream are viable – and that is in part what has brought us unlikely results on both sides of the Atlantic. But it doesn’t explain everything.

We’ve had waves of populism before – long before social media (indeed many have drawn parallels between the politics of 2016 and that of the 1930s). Claims abound that social media feeds are the biggest threat to democracy, leading to the “disintegration of the general will” and “polarization that drives populism” – but hard evidence is much more difficult to find.

The myth of the echo chamber

The mechanism that is most often offered for this state of affairs is the existence of echo chambers or filter bubbles. The argument goes that, first, social media platforms feed people the news that is closest to their own ideological standpoint (estimated from their previous patterns of consumption) and, second, that people create their own personalized information environments through their online behaviour, selecting friends and news sources that back up their world view.

Once in these ideological bubbles, people are prey to fake news and political bots that further reinforce their views. So, some argue, social media reinforces people’s current views and acts as a polarizing force on politics, meaning that “random exposure to content is gone from our diets of news and information”.
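To pin down what is being claimed, here is a deliberately crude sketch of that feedback loop (illustrative only – it models no real platform’s ranking system, and every number is an arbitrary assumption): a feed ranks items by closeness to the user’s estimated standpoint, and the user’s choices then feed back into that estimate.

```python
import random

random.seed(1)

user_history = [0.0]  # items consumed so far, on a -1..+1 ideology axis

for _ in range(20):
    # a fresh pool of items spanning the whole ideological spectrum
    pool = [random.uniform(-1, 1) for _ in range(50)]
    # the platform estimates the user's standpoint from past consumption...
    estimate = sum(user_history) / len(user_history)
    # ...and serves the five items closest to that estimate
    feed = sorted(pool, key=lambda item: abs(item - estimate))[:5]
    # the user picks something from the personalised feed
    user_history.append(random.choice(feed))

spread = max(user_history) - min(user_history)
print(f"ideological spread of consumed items: {spread:.2f} (unfiltered pool spans ~2.0)")
```

In this toy world the spread of what the user sees collapses, because selection and estimation reinforce each other. The question is whether real platforms and real users behave anything like this.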

Really? Is exposure less random than before? Surely the most perfect echo chamber would be the one occupied by someone who only read the Daily Mail in the 1930s – with little possibility of other news – or someone who just watches Fox News? Can our new habitat on social media really be as closed off as these environments, when our digital networks are so very much larger and more heterogeneous than anything we’ve had before?

Research suggests not. A recent large-scale survey (of 50,000 news consumers in 26 countries) shows how those who do not use social media on average come across news from significantly fewer different online sources than those who do. Social media users, it found, receive an additional “boost” in the number of news sources they use each week, even if they are not actually trying to consume more news. These findings are reinforced by an analysis of Facebook data, where 8.8 billion posts, likes and comments were posted during the course of the US election.

Recent research published in Science shows that algorithms play less of a role in exposure to attitude-challenging content than individuals’ own choices and that “on average more than 20% of an individual’s Facebook friends who report an ideological affiliation are from the opposing party”, meaning that social media exposes individuals to at least some ideologically cross-cutting viewpoints: “24% of the hard content shared by liberals’ friends is cross-cutting, compared to 35% for conservatives” (the equivalent figures would be 40% and 45% if random).

In fact, companies have no incentive to create hermetically sealed echo chambers (as I have heard one commentator claim). Most social media content is not about politics (sorry guys) – most of that $5 billion advertising revenue does not come from political organizations. So any incentives that companies have to create echo chambers – for the purposes of targeted advertising, for example – are most likely to relate to lifestyle choices or entertainment preferences, rather than political attitudes.

And where filter bubbles do exist they are constantly shifting and sliding – easily punctured by a trending cross-issue item (anybody looking at #Election2016 shortly before polling day would have seen a rich mix of views, while having little doubt about Trump’s impending victory).

And of course, even if political echo chambers were as efficient as some seem to think, there is little evidence that this is what actually shapes election results. After all, by definition echo chambers preach to the converted. It is the undecided people whom (for example) the Leave and Trump campaigns needed to reach.

And from the research, it looks like they managed to do just that. A barrage of evidence suggests that such advertising was effective in the 2015 UK general election (where the Conservatives spent 10 times as much as Labour on Facebook advertising), in the EU referendum (where the Leave campaign also focused on paid Facebook ads) and in the presidential election, where Facebook advertising has been credited with Trump’s victory, while the Clinton campaign focused on TV ads. And of course, advanced advertising techniques might actually identify those undecided voters from their conversations. This is not the bottom-up political mobilization that fired off support for Podemos or Bernie Sanders. It is massive top-down advertising dollars.

Ironically, however, these huge top-down political advertising campaigns have some of the same characteristics as the bottom-up movements discussed above – particularly their lack of sustainability. Former New York Governor Mario Cuomo’s dictum that candidates “campaign in poetry and govern in prose” may need an update. Barack Obama’s innovative campaigns built on online social networks, micro-donations and matching support were miraculous, but the extent to which he developed digital government or data-driven policy-making in office was disappointing. ‘Campaign digitally, govern in analogue’ might be the new mantra.

Chaotic pluralism

Politics is a lot messier in the social media era than it used to be – whether something takes off and succeeds in gaining critical mass is far more random than it appears to be from a casual glance, where we see only those that succeed.

In Political Turbulence, we wanted to identify the model of democracy that best encapsulates politics intertwined with social media. The dynamics we observed seem to be leading us to a model of “chaotic pluralism”, characterized by diversity and heterogeneity – similar to early pluralist models – but also by non-linearity and high interconnectivity, making liberal democracies far more disorganized, unstable and unpredictable than the architects of pluralist political thought ever envisaged.

Perhaps rather than blaming social media for undermining democracy, we should be thinking about how we can improve the (inevitably major) part that it plays.

Within chaotic pluralism, there is an urgent need for redesigning democratic institutions that can accommodate new forms of political engagement, and respond to the discontent, inequalities and feelings of exclusion – even anger and alienation – that are at the root of the new populism. We should be using social media to listen to (rather than merely talk at) the expression of these public sentiments, and not just at election time.

Many political institutions – for example, the British Labour Party, the US Republican Party, and the first-past-the-post electoral system shared by both countries – are in crisis, precisely because they have become so far removed from the concerns and needs of citizens. Redesign will need to include social media platforms themselves, which have rapidly become established as institutions of democracy and will be at the heart of any democratic revival.

As these platforms finally start to admit to being media companies (rather than tech companies), we will need to demand human intervention and transparency over algorithms that determine trending news; factchecking (where Google took the lead); algorithms that detect fake news; and possibly even “public interest” bots to counteract the rise of computational propaganda.

Meanwhile, the only thing we can really predict with certainty is that unpredictable things will happen and that social media will be part of our political future.

Discussing the echoes of the 1930s in today’s politics, the Wall Street Journal points out how Roosevelt managed to steer between the extremes of left and right because he knew that “public sentiments of anger and alienation aren’t to be belittled or dismissed, for their causes can be legitimate and their consequences powerful”. The path through populism and polarization may involve using the opportunity that social media presents to listen, understand and respond to these sentiments.

This piece draws on research from Political Turbulence: How Social Media Shape Collective Action (Princeton University Press, 2016), by Helen Margetts, Peter John, Scott Hale and Taha Yasseri.

It is cross-posted from the World Economic Forum, where it was first published on 22 December 2016.

Don’t Shoot the Messenger! What part did social media play in the 2016 US election?
https://ensr.oii.ox.ac.uk/dont-shoot-the-messenger-what-part-did-social-media-play-in-2016-us-election/ – Tue, 15 Nov 2016
Young activists gather at Lafayette Park, preparing for a march to the U.S. Capitol in protest at the presidential campaign of presumptive Republican nominee Donald J. Trump. By Stephen Melkisethian (Flickr).

Commentators have been quick to ‘blame social media’ for ‘ruining’ the 2016 election by putting Mr Donald Trump in the White House. Just as was the case in the campaign for Brexit, people argue that social media has driven us to a ‘post-truth’ world of polarisation and echo chambers.

Is this really the case? At first glance, the ingredients of the Trump victory – as for Brexit – seem remarkably traditional. The Trump campaign spent more on physical souvenirs than on field data, more on Make America Great Again hats (made in China) than on polling. The Daily Mail’s characterisation of judges as ‘Enemies of the People’ after their ruling that the triggering of Article 50 must be discussed in parliament seemed reminiscent of the 1930s. Likewise, US crowds chanting ‘Lock her up’, like lynch mobs, seemed like ghastly reminders of a pre-democratic era.

Clearly social media were a big part of the 2016 election, used heavily by the candidates themselves, and generating 8.8 billion posts, likes and comments on Facebook alone. Social media also make visible what in an earlier era could remain a country’s dark secret – hatred of women (through death and rape threats and trolling of female politicians in both the UK and US), and rampant racism.

This visibility, society’s new self-awareness, brings change to political behaviour. Social media provide social information about what other people are doing: viewing, following, liking, sharing, tweeting, joining, supporting and so on. This social information is the driver behind the political turbulence that characterises politics today. Those rustbelt Democrats feeling abandoned by the system saw on social media that they were not alone – that other people felt the same way, and that Trump was viable as a candidate. For a woman drawn towards the Trump agenda but feeling tentative, the hashtag #WomenForTrump could reassure her that there were like-minded people she could identify with. Decades of social science research show that information about the behaviour of others influences how groups behave; now it is driving the unpredictability of politics, bringing us Trump, Brexit, Corbyn, Sanders and unexpected political mobilisation across the world.

These are not echo chambers. As recent research shows, people are exposed to cross-cutting discourse on social media, across ever larger and more heterogeneous social networks. While the hypothetical #WomenForTrump tweeter or Facebook user will see like-minded behaviour, she will also see a peppering of social information showing people using opposing hashtags like #ImWithHer, or (post-election) #StillWithHer. It could be argued that a better example of an ‘echo chamber’ would be a regular Daily Mail reader or someone who only watched Fox News.

The mainstream media loved Trump: his controversial road-crash views sold their newspapers and advertising. Social media take us out of that world. They are relatively neutral in their stance on content, giving no particular priority to extreme or offensive views – on their platforms, the numbers are what matter.

Rather than seeing social media solely as the means by which Trump secured the presidency, we should appreciate how they can provide a wealth of valuable data to understand the anger and despair that the polls missed, and to analyse political behaviour and opinion in the times ahead. Social media can also shine the light of transparency on the workings of a Trump administration, as they did on his campaign. They will be critical for building networks of solidarity to confront the intolerance, sexism and racism stirred up during this bruising campaign. And social media will underpin any radical counter-movement that emerges in the coming years.


Helen Margetts is the author of Political Turbulence: How Social Media Shape Collective Action and thanks her co-authors Peter John, Scott Hale and Taha Yasseri.

Brexit, voting, and political turbulence
https://ensr.oii.ox.ac.uk/brexit-voting-and-political-turbulence/ – Thu, 18 Aug 2016

Cross-posted from the Princeton University Press blog. The authors of Political Turbulence discuss how the explosive rise, non-normal distribution and lack of organization that characterize contemporary politics as a chaotic system can explain why many political mobilizations of our times seem to come from nowhere.


On 23rd June 2016, the British public voted in a referendum on whether to leave the European Union. The Leave (or so-called #Brexit) option was victorious, with a margin of 52% to 48% across the country, although Scotland, Northern Ireland, London and some towns voted to remain. The result was a shock to leave and remain supporters alike. US readers might note that when the polls closed, the odds on futures markets of Brexit (15%) were longer than those of Trump being elected President.

Political scientists are reeling with the sheer volume of politics that has been packed into the month after the result. From the Prime Minister’s morning-after resignation on 24th June, the country was mired in political chaos, with almost every political institution challenged and under question in the aftermath of the vote, including both the Conservative and Labour parties and the existence of the United Kingdom itself, given Scotland’s resistance to leaving the EU. The eventual formation of a government under a new prime minister, Theresa May, has brought some stability. But she was not elected, and her government has a tiny majority of only 12 Members of Parliament. A cartoon by Matt in the Telegraph on July 2nd (which would work for almost any day) showed two students, one of them saying ‘I’m studying politics. The course covers the period from 8am on Thursday to lunchtime on Friday.’

All these events – the campaigns to remain or leave, the post-referendum turmoil, resignations, sackings and appointments – were played out on social media; the speed of change and the unpredictability of events being far too great for conventional media to keep pace. So our book, Political Turbulence: How Social Media Shape Collective Action, can provide a way to think about the past weeks. The book focuses on how social media allow new, ‘tiny acts’ of political participation (liking, tweeting, viewing, following, signing petitions and so on), which turn social movement theory around. Rather than identifying with issues, forming collective identity and then acting to support the interests of that identity – or voting for a political party that supports it – in a social media world, people act first, and think about it, or identify with others later – if at all.

These tiny acts of participation can scale up to large-scale mobilizations, such as demonstrations, protests or petitions for policy change. These mobilizations normally fail – 99.9% of petitions to the UK or US governments fail to get the 100,000 signatures required for a parliamentary debate (UK) or an official response (US). The very few that succeed usually do so very quickly on a massive scale, but without the normal organizational or institutional trappings of a social or political movement, such as leaders or political parties. When Brazilian President Dilma Rousseff asked to speak to the leaders of the 2014 mass demonstrations against the government – organised entirely on social media, with an explicit rejection of party politics – she was told ‘there are no leaders’.

This explosive rise, non-normal distribution and lack of organization that characterizes contemporary politics as a chaotic system can explain why many political mobilizations of our times seem to come from nowhere. In the US and the UK it can help to understand the shock waves of support that brought Bernie Sanders, Donald Trump, Jeremy Corbyn (elected leader of the Labour party in 2015) and Brexit itself, all of which have so strongly challenged traditional political institutions. In both countries, the two largest political parties are creaking to breaking point in their efforts to accommodate these phenomena.

The unpredicted support for Brexit by over half of voters in the UK referendum illustrates these characteristics of the movements we model in the book, with their resistance to traditional forms of organization. Voters were courted by political institutions from all sides – the government, all the political parties apart from UKIP, the Bank of England, international organizations, foreign governments, the US President himself and the ‘Remain’ or StrongerIn campaign convened by Conservative, Labour and the smaller parties. Virtually every authoritative source of information supported Remain. Yet people were resistant to aligning themselves with any of them. Experts, facts, leaders of any kind were all rejected by the rising swell of support for the Leave side. Famously, Michael Gove, one of the key Leave campaigners, said ‘we have had enough of experts’. According to YouGov polls, over two-thirds of Conservative voters in 2015 voted to Leave in 2016, as did over one-third of Labour and Liberal Democrat voters.

Instead, people turned to a few key claims promulgated by the two Leave campaigns: Vote Leave (with key Conservative Brexiteers such as Boris Johnson, Michael Gove and Liam Fox) and Leave.EU, dominated by UKIP and its leader Nigel Farage and bankrolled by the aptly named billionaire Arron Banks. This side dominated social media in driving home their simple (if largely untrue) claims and anti-establishment, anti-elitist message (although all were part of the upper echelons of both establishment and elite). Key memes included the claim (painted on the side of a bus) that the UK gave £350m a week to the EU which could instead be spent on the NHS; the likelihood that Turkey would soon join the EU; and an image showing floods of migrants entering the UK via Europe. Banks brought in staff from his own insurance companies and political campaign firms (such as Goddard Gunster) and Leave.EU created a massive database of leave supporters to employ targeted advertising on social media.

While Remain represented the status quo and a known entity, Leave was flexible enough to sell itself as anything to anyone. Leave campaigners would often criticize the Government but then not offer specific policy alternatives, stating, ‘we are a campaign not a government.’ This ability for people to coalesce around a movement for a variety of different (and sometimes conflicting) reasons is a hallmark of the social-media-based campaigns that characterize Political Turbulence. Some voters and campaigners argued that voting Leave would allow the UK to be more global and accept more immigrants from non-EU countries. In contrast, racism and anti-immigration sentiment were key reasons for other voters. Desire for sovereignty and independence, responses to austerity and economic inequality, and hostility to the elites in London and the South East have all figured in the torrent of post-Brexit analysis. These alternative faces of Leave were exploited to gain votes for ‘change,’ but the exact change sought by any two voters could be very different.

The movement’s organization illustrates what we have observed in recent political turbulence – as in Brazil, Hong Kong and Egypt: a complete rejection of mainstream political parties and institutions and an absence of leaders in any conventional sense. There is little evidence that the leading lights of the Leave campaigns were seen as prospective leaders. There was no outcry from the Leave side when they seemed to melt away after the vote, no mourning over Michael Gove’s complete fall from grace when the government was formed – nor even joy at Boris Johnson’s appointment as Foreign Secretary. Rather, the Leave campaigns acted like advertising campaigns, driving their points home to all corners of the online and offline worlds but without a clear public face. After the result, it transpired that there was no plan, no policy proposals, no exit strategy proposed by either campaign. The Vote Leave campaign was seemingly paralyzed by shock after the vote (they tried to delete their whole site, now reluctantly and partially restored with the lie on the side of the bus toned down to £50 million), pickled forever after 23rd June. Meanwhile, Theresa May, a reluctant Remain supporter and an absent figure during the referendum itself, emerged as the only viable leader after the event, in the same way as (in a very different context) the Muslim Brotherhood, as the only viable organization, were able to assume power after the first Egyptian revolution.

In contrast, the Leave.EU website remains highly active, possibly poised for the rebirth of UKIP as a radical populist far-right party on the European model, as Arron Banks has proposed. UKIP was formed around this single policy – of leaving the EU – and will struggle to find policy purpose, post-Brexit. A new party with Banks’ huge resources and a massive database of Leave supporters and their social media affiliations – people possibly disenchanted by the slow progress of Brexit and disaffected by the traditional parties – might be a political winner on the new landscape.

The act of voting in the referendum will define people’s political identity for the foreseeable future, shaping the way they vote in any forthcoming election. The entire political system is being redrawn around this single issue, and whichever organizational grouping can ride the wave will win. The one thing we can predict for our political future is that it will be unpredictable.

 

Back to the bad old days, as civil service infighting threatens UK’s only hope for digital government
https://ensr.oii.ox.ac.uk/back-to-the-bad-old-days-as-civil-service-infighting-threatens-uks-only-hope-for-digital-government/ – Wed, 10 Aug 2016

Technology and the public sector have rarely been happy bedfellows in the UK, where every government technology project seems doomed to arrive late, underperform and come in over budget. The Government Digital Service (GDS) was created to drag the civil service into the 21st century, making services “digital by default”, cheaper, faster, and easier to use. It quickly won accolades for its approach and early cost savings.

But then its leadership departed, not once or twice but three times – the latter two within the last few months. The largest government departments have begun to reassert their authority over GDS expert advice, and digital government looks likely to be dragged back towards the deeply dysfunctional old ways of doing things. GDS isn’t perfect, but to erase the progress it has put in place would be a terrible loss.

The UK government’s use of technology has previously lagged far behind that of other countries. Low usage of digital services rendered them expensive and inefficient. Digital operations were often handicapped by complex networks of legacy systems, some dating right back to the 1970s. The development of the long-promised “digital era governance” was mired in a series of mega contracts: huge in terms of cost, scope and timescale, bigger than any attempted by other governments worldwide, and to be delivered by the same handful of giant global computer consulting firms that rarely saw any challenge to their grip on public contracts. Departmental silos ensured there were no economies of scale, shared services failed, and the Treasury negotiated with 24 departments individually for their IT expenditure.

Some commentators (including this one) were a little sceptical on first encountering GDS. We had seen it before: the Office of the e-Envoy set up by Tony Blair in 1999, superseded by the E-government Unit (2004-7), and then Directgov until 2010.

Successes and failures

In many ways GDS has been a success story, with former prime minister David Cameron calling it one of the “great unsung triumphs of the last parliament”, with claimed cost savings of £1.7 billion. The Treasury negotiates with GDS, rather than with 24 departments, and GDS has been involved in every hiring decision for senior digital staff, raising the quality of digital expertise.

The building blocks of the GDS’ promised “government as a platform” approach have appeared: Verify, a federated identity system that doesn’t rely on ID cards or centralised identity databases; Govpay, which makes it easier to make payments to the government; and Notify, which allows government agencies to keep citizens informed of progress on services.

GDS tackled the overweening power of the huge firms that have dominated government IT in the past, and has given smaller departments and agencies the confidence to undertake some projects themselves, bringing expertise back in-house, embracing open source, and washing away some of the taint of failure from previous government IT projects.

There has even been a procession of visitors from overseas coming to investigate, and imitations have sprung up across the world, from the US to Australia.

But elsewhere GDS has really only chipped away at monolithic government IT. For example, GDS and the Department for Work and Pensions failed to work together on Universal Credit. Instead, the huge Pathfinder system that underpinned the Universal Credit trial phase was supplied by HP, Accenture, IBM and BT and ran into serious trouble at a cost of hundreds of millions of pounds. The department is now building a new system in parallel, with GDS advice, that will largely replace it.

The big systems integrators are still waiting in the wings, poised to renew their influence in government. Francis Maude, who as cabinet minister created GDS, recently admitted that if GDS had undertaken faster and more wholesale reform of legacy systems, it wouldn’t be under threat now.

The risks of centralisation

An issue GDS never tackled is one that has existed right from the start: is it an army, or is it a band of mercenaries working in other departments? Should GDS be at the centre, building and providing systems, or should it just help others to do so, building their expertise? GDS has done both, but the emphasis has been on the former, most evident through putting the government portal GOV.UK at the centre of public services.

Heading down a centralised route was always risky, as the National Audit Office observed of its forerunner direct.gov in 2007. Many departments resented the centralisation of GOV.UK, and the removal of their departmental websites, but it’s likely they’re used to it now, even relieved that it’s no longer their problem. But a staff of 700 with a budget of £112m (from 2015) was always going to look vulnerable to budget cuts.

Return of the Big Beasts

If GDS is diminished or disbanded, any hope of creating effective digital government faces two threats.

A land-grab from the biggest departments – HMRC, DWP and the Home Office, all critics of the GDS – is one possibility. There are already signs of a purge of the digital chiefs put in place by GDS, despite the National Audit Office citing continuity of leadership as critical. This looks like permanent secretaries in the civil service reasserting control over their departments’ digital operations – which will inevitably bring a return to siloed thinking and siloed data, completely at odds with the idea of government as a platform. While the big beasts can walk alone, without GDS the smaller agencies will struggle.

The other threat is the big companies, poised in the wings to renew their influence on government should GDS controls on contract size be removed. It has already begun: the ATLAS consortium led by HP has won two Ministry of Defence contracts worth £1.5 billion since founding GDS chief Mike Bracken resigned.

It’s hard to see how government as a platform can be taken forward without expertise and capacity at the centre – no single department would have the incentive to do so. Canada’s former chief information officer recently attributed Canada’s decline as a world leader in digital government to the removal of funds dedicated to allowing departmental silos to work together. Even as the UN declares the UK to be the global leader for implementing e-government, unless the GDS can re-establish itself the UK may find the foundations it has created swept away – at a time when using digital services to do more with less is needed more than ever.


This was first posted on The Conversation.

Alan Turing Institute and OII: Summit on Data Science for Government and Policy Making
https://ensr.oii.ox.ac.uk/alan-turing-institute-and-oii-summit-on-data-science-for-government-and-policy-making/ – Tue, 31 May 2016

The benefits of big data and data science for the private sector are well recognised. So far, considerably less attention has been paid to the power and potential of the growing field of data science for policy-making and public services. On Monday 14th March 2016 the Oxford Internet Institute (OII) and the Alan Turing Institute (ATI) hosted a Summit on Data Science for Government and Policy Making, funded by the EPSRC. Leading policy makers, data scientists and academics came together to discuss how the ATI and government could work together to develop data science for the public good. The convenors of the Summit, Professors Helen Margetts (OII) and Tom Melham (Computer Science), report on the day’s proceedings.

The Alan Turing Institute will build on the UK’s existing academic strengths in the analysis and application of big data and algorithm research to place the UK at the forefront of world-wide research in data science. The University of Oxford is one of five university partners, and the OII is the only partnering department in the social sciences. The aim of the summit on Data Science for Government and Policy-Making was to understand how government can make better use of big data and the ATI – with the academic partners in listening mode.

We hoped that the participants would bring forward their own stories, hopes and fears regarding data science for the public good. Crucially, we wanted to work out a roadmap for how different stakeholders can work together on the distinct challenges facing government, as opposed to commercial organisations. At the same time, data science research and development has much to gain from the policy-making community. Some of the things that government does – collect tax from the whole population, or give money away at scale, or possess the legitimate use of force – it does by virtue of being government. So the sources of data and some of the data science challenges that public agencies face are unique and tackling them could put government working with researchers at the forefront of data science innovation.

During the Summit a range of stakeholders provided insight from their distinctive perspectives: the Government Chief Scientific Advisor, Sir Mark Walport; the Deputy Director of the ATI, Patrick Wolfe; the National Statistician and Director of ONS, John Pullinger; and the Director of Data at the Government Digital Service, Paul Maltby. Representatives of frontline departments recounted how algorithmic decision-making is already bringing predictive capacity into operational business, improving efficiency and effectiveness.

Discussion revolved around the challenges of how to build core capability in data science across government, rather than outsourcing it (as happened in an earlier era with information technology) or confining it to a data science profession. Some delegates talked of being in the ‘foothills’ of data science. The scale, heterogeneity and complexity of some government departments currently work against data science innovation, particularly when larger departments can operate thousands of databases, creating legacy barriers to interoperability. Out-dated policies can work against data science methodologies. Attendees repeatedly voiced concerns about sharing data across government departments, in some cases because of limitations of legal protections; in others because people were unsure what they can and cannot do.

The potential power of data science creates an urgent need for discussion of ethics. Delegates and speakers repeatedly affirmed the importance of an ethical framework and for thought leadership in this area, so that ethics is ‘part of the science’. The clear emergent option was a national Council for Data Ethics (along the lines of the Nuffield Council for Bioethics) convened by the ATI, as recommended in the recent Science and Technology parliamentary committee report The big data dilemma and the government response. Luciano Floridi (OII’s professor of the philosophy and ethics of information) warned that we cannot reduce ethics to mere compliance. Ethical problems do not normally have a single straightforward ‘right’ answer, but require dialogue and thought and extend far beyond individual privacy. There was consensus that the UK has the potential to provide global thought leadership and to set the standard for the rest of Europe. It was announced during the Summit that an ATI Working Group on the Ethics of Data Science has been confirmed, to take these issues forward.

So what happens now?

Throughout the Summit there were calls from policy makers for more data science leadership. We hope that the ATI will be instrumental in providing this, and an interface both between government, business and academia, and between separate Government departments. This Summit showed just how much real demand – and enthusiasm – there is from policy makers to develop data science methods and harness the power of big data. No-one wants to repeat with data science the history of government information technology – where in the 1950s and 60s, government led the way as an innovator, but has struggled to maintain this position ever since. We hope that the ATI can act to prevent the same fate for data science and provide both thought leadership and the ‘time and space’ (as one delegate put it) for policy-makers to work with the Institute to develop data science for the public good.

So since the Summit, in response to the clear need that emerged from the discussion and other conversations with stakeholders, the ATI has been designing a Policy Innovation Unit, with the aim of working with government departments on ‘data science for public good’ issues. Activities could include:

  • Secondments at the ATI for data scientists from government
  • Short term projects in government departments for ATI doctoral students and postdoctoral researchers
  • Developing ATI as an accredited data facility for public data, as suggested in the current Cabinet Office consultation on better use of data in government
  • ATI pilot policy projects, using government data
  • Policy symposia focused on specific issues and challenges
  • ATI representation in regular meetings at the senior level (for example, between Chief Scientific Advisors, the Cabinet Office, the Office for National Statistics, GO-Science).
  • ATI acting as an interface between public and private sectors, for example through knowledge exchange and the exploitation of non-government sources as well as government data
  • ATI offering a trusted space, time and a forum for formulating questions and developing solutions that tackle public policy problems and push forward the frontiers of data science
  • ATI as a source of cross-fertilization of expertise between departments
  • Reviewing the data science landscape in a department or agency, identifying feedback loops – or lack thereof – between policy-makers, analysts and front-line staff, and identifying possibilities for an ‘intelligent centre’ model through strategic development of expertise.

The Summit, and a series of Whitehall Roundtables convened by GO-Science which led up to it, have initiated a nascent network of stakeholders across government, which we aim to build on and develop over the coming months. If you are interested in being part of this, please do get in touch with us.

Helen Margetts, Oxford Internet Institute, University of Oxford (director@oii.ox.ac.uk)

Tom Melham, Department of Computer Science, University of Oxford

Digital Disconnect: Parties, Pollsters and Political Analysis in #GE2015
https://ensr.oii.ox.ac.uk/digital-disconnect-parties-pollsters-and-political-analysis-in-ge2015/ – Mon, 11 May 2015
The Oxford Internet Institute undertook some live analysis of social media data over the night of the 2015 UK General Election. See more photos from the OII’s election night party, or read about the data hack

Counts of public Facebook posts mentioning any of the party leaders’ surnames. Data generated by social media can be used to understand political behaviour and institutions on an ongoing basis.

‘Congratulations to my friend @Messina2012 on his role in the resounding Conservative victory in Britain’ tweeted David Axelrod, campaign advisor to Miliband, to his former colleague Jim Messina, Cameron’s strategy adviser, on May 8th. The former was Obama’s communications director and the latter campaign manager of Obama’s 2012 campaign. Along with other consultants and advisors and large-scale data management platforms from Obama’s hugely successful digital campaigns, the Conservatives and Labour used an arsenal of social media and digital tools to interact with voters throughout, as did all the parties competing for seats in the 2015 election.

The parties ran very different kinds of digital campaigns. The Conservatives used advanced data science techniques borrowed from the US campaigns to understand how their policy announcements were being received and to target groups of individuals. They spent ten times as much as Labour on Facebook, using ads targeted at Facebook users according to their activities on the platform, geo-location and demographics. This was a top-down strategy that involved working out what was happening on social media and responding with targeted advertising, particularly for marginal seats. It was supplemented by the mainstream media, such as the Telegraph, which contacted its database of readers and subscribers to services such as Telegraph Money, urging them to vote Conservative. As Andrew Cooper tweeted after the election, ‘Big data, micro-targeting and social media campaigns just thrashed “5 million conversations” and “community organizing”’.

He has a point. Labour took a different approach to social media. Widely acknowledged to have the most boots on the real ground, knocking on doors, they took a similar ‘ground war’ approach to social media in local campaigns. Our own analysis at the Oxford Internet Institute shows that of the 450K tweets sent by candidates of the six largest parties in the month leading up to the general election, Labour party candidates sent over 120,000 while the Conservatives sent only 80,000 – no more than the Greens and not much more than UKIP. But the greater number of Labour tweets was no more productive in terms of impact, measured in mentions generated (and, indeed, in the final result).
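For concreteness, the comparison above comes down to a simple per-party aggregation. Here is a minimal sketch of that computation in Python, assuming a hypothetical file of candidate tweets – the file and column names are illustrative, not the schema of our actual dataset:

```python
# Minimal sketch: per-party tweet volume and 'productivity', i.e. the
# mentions each tweet generates. File and column names are illustrative.
import pandas as pd

tweets = pd.read_csv("candidate_tweets.csv")  # one row per candidate tweet

per_party = tweets.groupby("party").agg(
    tweets_sent=("tweet_id", "count"),
    mentions_generated=("mentions", "sum"),
)
per_party["mentions_per_tweet"] = (
    per_party["mentions_generated"] / per_party["tweets_sent"]
)
print(per_party.sort_values("mentions_per_tweet", ascending=False))
```

On numbers like those above, Labour tops the volume column while other parties top the productivity column.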

Both parties’ campaigns were tightly controlled. Ostensibly, Labour generated far more bottom-up activity from supporters using social media, through memes like #votecameronout, #milibrand (responding to Miliband’s interview with Russell Brand), and what Miliband himself termed the most unlikely cult of the 21st century in his resignation speech, #milifandom – none of which came directly from Central Office. These produced peaks of activity on Twitter that at some points exceeded even discussion of the election itself on the semi-official #GE2015 used by the parties, as the figure below shows. But the party remained aloof from these conversations, fearful of mainstream media mockery.

The Brand interview was agreed to out of desperation and can have made little difference to the vote (partly because Brand endorsed Miliband only after the deadline for voter registration: young voters suddenly overcome by an enthusiasm for participatory democracy after Brand’s public volte-face on the utility of voting will have remained disenfranchised). But engaging with the swathes of young people who spend increasing amounts of their time on social media is a strategy for engagement that all parties ought to consider. YouTubers like PewDiePie have tens of millions of subscribers and billions of video views – their videos may seem unbelievably silly to many, but it is here that a good chunk of the next generation of voters are to be found.

Use of emergent hashtags on Twitter during the 2015 General Election. Volumes are estimates based on a 10% sample with the exception of #ge2015, which reflects the exact value. All data from Datasift.
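The scaling behind those estimates is simple, but it carries sampling error. A minimal sketch, assuming the 10% sample is a simple random one (the count used here is a placeholder, not a value behind the chart):

```python
# Minimal sketch: scaling a hashtag count observed in a 10% sample up to
# an estimated full volume, with a rough binomial standard error.
sample_rate = 0.10
sampled_count = 12_000  # placeholder: tweets seen in the 10% sample

estimated_total = sampled_count / sample_rate
# Observed count ~ Binomial(N, p) with p = 0.10, so for N_hat = count / p
# the standard error is sqrt(count * (1 - p)) / p.
se = (sampled_count * (1 - sample_rate)) ** 0.5 / sample_rate
print(f"~{estimated_total:,.0f} tweets (±{1.96 * se:,.0f} at 95%)")
```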

Only one of the leaders had a presence on social media that managed anything like the personal touch and universal reach that Obama achieved in 2008 and 2012 based on sustained engagement with social media – Nicola Sturgeon. The SNP’s use of social media, developed in last September’s referendum on Scottish independence, had spawned a whole army of digital activists. All SNP candidates started the campaign with a Twitter account. When we look at the 650 local campaigns waged across the country, by far the most productive in the sense of generating mentions was the SNP: 100 tweets from SNP local candidates generated ten times more mentions (1,000) than 100 tweets from (for example) the Liberal Democrats.

Scottish Labour’s failure to engage with Scottish voters in this kind of way illustrates how difficult it is to develop relationships on social media suddenly – followers on all platforms are built up over years, not in the short space of a campaign. In strong contrast, advertising on these platforms, as the Conservatives did, is instantaneous, and based on the data science understanding (through advertising algorithms) of the platform itself. It doesn’t require huge databases of supporters – and it doesn’t build up relationships between the party and supporters, who may indeed remain anonymous to the party. It’s quick, dirty and effective.

The pollsters’ terrible night

So neither of the two largest parties really did anything with social media, or the huge databases of interactions that their platforms will have generated, to build long-running engagement with the electorate. The campaigns were disconnected from their supporters, from their grass roots.

But the differing use of social media by the parties could offer a clue as to why the opinion polls throughout the campaign got it so wrong, underestimating the Conservative lead by an average of five per cent. The social media data that may be gathered from this or any campaign is a valuable source of information about what the parties are doing, how they are being received, and what people are thinking or talking about in this important space – where so many people spend so much of their time. Of course, it is difficult to read from the outside; Andrew Cooper labelled the Conservatives’ campaign of big data to identify undecided voters, and micro-targeting on social media, as ‘silent and invisible’, and it seems to have been just that to the polls.

Many voters were undecided until the last minute, or decided not to vote, which is impossible to predict with polls (bar the exit poll) – but possibly observable on social media, such as the spikes in attention to UKIP on Wikipedia towards the end of the campaign, which may have signalled their impressive share of the vote. As Jim Messina put it to MSNBC News, following up on his May 8th tweet that UK (and US) polling was ‘completely broken’: ‘people communicate in different ways now’, arguing that the Miliband campaign had tried to go back to the 1970s.
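Spotting spikes of this kind is straightforward once a daily attention series exists. A minimal sketch using a rolling z-score – the CSV of daily pageviews is hypothetical, standing in for whatever source the traffic data comes from:

```python
# Minimal sketch: flag days where attention jumps well above the recent
# baseline, using a 14-day rolling mean and standard deviation.
import pandas as pd

views = pd.read_csv("ukip_wikipedia_daily.csv", parse_dates=["date"],
                    index_col="date")["views"]

baseline = views.rolling(14).mean()
spread = views.rolling(14).std()
z = (views - baseline) / spread

print(views[z > 3])  # days more than 3 sd above the recent baseline
```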

Surveys – such as polls – give a (hopefully) representative picture of what people think they might do. Social media data provide an (unrepresentative) picture of what people really said or did. Long-running opinion surveys (such as the Ipsos MORI Issues Index) can monitor the hopes and fears of the electorate in between elections, but attention tends to focus on the huge barrage of opinion polls at election time – which are geared entirely towards predicting the election result, and which do not contribute to a more general understanding of voters. In contrast, social media are a good way to track rapid bursts in mobilization or support, which show up immediately on social media platforms – and they could also be developed to illustrate longer-running trends, such as unpopular policies or failing services.

As opinion surveys face more and more challenges, there is surely good reason to supplement them with social media data, which reflect what people are really thinking on an ongoing basis – more like a video than the irregular snapshots taken by polls. As João Francisco Meira, a leading pollster and director of Vox Populi in Brazil (which is doing innovative work in using social media data to understand public opinion), put it in conversation with one of the authors in April: ‘we have spent so long trying to hear what people are saying – now they are crying out to be heard, every day’. It is a question of pollsters working out how to listen.

Political big data

Analysts of political behaviour – academics as well as pollsters – need to pay attention to this data. At the OII we gathered large quantities of data from Facebook, Twitter, Wikipedia and YouTube in the lead-up to the election campaign, including mentions of all candidates (as did Demos’s Centre for the Analysis of Social Media). Using this data we will be able, for example, to work out the relationship between local social media campaigns and the parties’ share of the vote, as well as to model the relationship between social media presence and turnout.
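As an indication of what such a model could look like, here is a minimal regression sketch – the file and variable names are placeholders, and a serious analysis would add controls such as incumbency and past vote share:

```python
# Minimal sketch: relating constituency-level social media activity to
# vote share with OLS. Variable names are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("constituency_campaigns.csv")
# assumed columns: party, vote_share, candidate_tweets, mentions

model = smf.ols(
    "vote_share ~ candidate_tweets + mentions + C(party)", data=df
).fit()
print(model.summary())
```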

We can already see that the story of the local campaigns varied enormously – while at the start of the campaign some candidates were probably requesting new passwords for their rusty Twitter accounts, others already had an ongoing relationship with their constituents (or potential constituents), which they could build on during the campaign. One of the candidates for the Labour party leadership, Chuka Umunna, joined Twitter in April 2009 and now has 100K followers, which will be useful in the forthcoming leadership contest.

Election results inject data into a research field that lacks ‘big data’. Data-hungry political scientists will analyse these data in every way imaginable for the next five years. But data in between elections, for example relating to democratic or civic engagement or political mobilization, have traditionally been woefully scarce in our discipline. Analysis of the social media campaigns in #GE2015 will start to provide a foundation for understanding patterns and trends in voting behaviour, particularly when linked to other sources of data, such as the actual constituency-level voting results and even the discredited polls – which may yet yield insight, even having failed to achieve their predictive aims. As the OII’s Jonathan Bright and Taha Yasseri have argued, we need ‘a theory-informed model to drive social media predictions, that is based on an understanding of how the data is generated and hence enables us to correct for certain biases’.

A political data science

Parties, pollsters and political analysts should all be thinking about these digital disconnects in #GE2015, rather than burying them along with their hopes for this election. As I argued in a previous post, let’s use data generated by social media to understand political behaviour and institutions on an ongoing basis. Let’s find a way of incorporating social media analysis into polling models, for example by linking survey datasets to big data of this kind. The more such activity moves beyond the election campaign itself, the more useful social media data will be in tracking the underlying trends and patterns in political behaviour.
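One simple form such linking could take is a join between survey microdata and constituency-level social media aggregates. A minimal sketch, with all file and column names assumed for illustration:

```python
# Minimal sketch: attach constituency-level social media measures to
# survey respondents, so both can enter the same polling model.
import pandas as pd

survey = pd.read_csv("survey_respondents.csv")         # one row per respondent
social = pd.read_csv("constituency_social_media.csv")  # aggregates per seat

linked = survey.merge(social, on="constituency_id", how="left")
# e.g. does local social media buzz vary with stated vote intention?
print(linked.groupby("vote_intention")["mentions_per_capita"].mean())
```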

And for the parties, these ways of understanding and interacting with voters need to be institutionalised in party structures, from top to bottom. On 8th May, the VP of a policy think-tank tweeted to both Axelrod and Messina ‘Gentlemen, welcome back to America. Let’s win the next one on this side of the pond’. The UK parties are on their own now. We must hope they use the time to build an ongoing dialogue with citizens and voters, learning from the success of the new online interest group barons, such as 38 Degrees and Avaaz, by treating all internet contacts as ‘members’ and interacting with them on a regular basis. Don’t wait until 2020!


Helen Margetts is the Director of the OII, and Professor of Society and the Internet. She is a political scientist specialising in digital era governance and politics, investigating political behaviour, digital government and government-citizen interactions in the age of the internet, social media and big data. She has published over a hundred books, articles and major research reports in this area, including Political Turbulence: How Social Media Shape Collective Action (with Peter John, Scott Hale and Taha Yasseri, 2015).

Scott A. Hale is a Data Scientist at the OII. He develops and applies techniques from computer science to research questions in the social sciences. He is particularly interested in the area of human-computer interaction and the spread of information between speakers of different languages online and the roles of bilingual Internet users. He is also interested in collective action and politics more generally.

Technological innovation and disruption was a big theme of the WEF 2014 in Davos: but where was government? https://ensr.oii.ox.ac.uk/technological-innovation-disruption-was-big-theme-wef-2014-davos-but-where-was-government/ Thu, 30 Jan 2014 11:23:09 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2536
The World Economic Forum engages business, political, academic and other leaders of society to shape global, regional and industry agendas. Image by World Economic Forum.

Last week, I was at the World Economic Forum in Davos, the first time that the Oxford Internet Institute has been represented there. Being closeted in a Swiss ski resort with 2,500 of the great, the good and the super-rich provided me with a good chance to see what the global elite are thinking about technological change and its role in ‘The Reshaping of the World: Consequences for Society, Politics and Business’, the stated focus of the WEF Annual Meeting in 2014.

What follows are those impressions that relate to public policy and the internet, and they reflect only my own experience there. Outside the official programme there are whole hierarchies of breakfasts, lunches, dinners and other events, most of which a newcomer to Davos finds difficult to discover, and some of which require one to be at least a president of a small to medium-sized state — or Matt Damon.

There was much talk of hyperconnectivity, spirals of innovation, S-curves and exponential growth of technological diffusion, digitalization and disruption. As you might expect, the pace of these was emphasized most by those participants from the technology industry. The future of work in the face of leaps forward in robotics was a key theme, drawing on the new book by Erik Brynjolfsson and Andrew McAfee, The Second Machine Age: Work, Progress and Prosperity in a Time of Brilliant Technologies, which is just out in the US. There were several sessions on digital health and the eventual fruition of decades of pilots in telehealth (a banned term now, apparently), as applications based on mobile technologies start to be used more widely. Indeed, all delegates were presented with a ‘Jawbone’ bracelet which tracks the wearer’s exercise and sleep patterns (7,801 steps so far today). And of course there was much talk about the possibilities afforded by big data, if not quite as much as I expected.

The University of Oxford was represented in an ‘Ideas Lab’, convened by the Oxford Martin School on Data, Machines and the Human Factor. This format involves each presenter talking for five minutes in front of their 15 selected images rolling at 20 seconds each, with no control over the timing (described by the designer of the format before the session as ‘waterboarding for academics’, due to the conciseness and brevity required — and I can vouch for that). It was striking how much synergy there was in the presentations by the health engineer Lionel Tarassenko (talking about developments in digital healthcare in the home), the astrophysicist Chris Lintott (on crowdsourcing of science) and myself talking about collective action and mobilization in the social media age. We were all talking about the new possibilities that the internet and social media afford for citizens to contribute to healthcare, scientific knowledge and political change. Indeed, I was surprised that the topics of collective action and civic engagement, probably not traditional concerns of Davos, attracted widespread interest, including a session on ‘The New Citizen’ with the founders of Avaaz.

Of course there was some discussion of the Snowden revelations of the data crawling activities of the US NSA and UK GCHQ, and the privacy implications. A dinner on ‘the Digital Me’ generated an interesting discussion on privacy in the age of social media, reflecting a growing and welcome (to me anyway) pragmatism with respect to issues often hotly contested. As one participant put it, in an age of imperfect, partial information, we become used to the idea that what we read on Facebook is often, through its relation to the past, irrelevant to the present time and not to be taken into consideration when (for example) considering whether to offer someone a job. The wonderful danah boyd gave some insight from her new book It’s Complicated: the social lives of networked teens, from which emerged a discussion of a ‘taxonomy of privacy’ and the importance of considering the use to which data is put, as opposed to just the possession or collection of the data – although this could be dangerous ground, in the light of the Snowden revelations.

There was more talk of the future than the past. I participated in one dinner discussion of the topic of ‘Rethinking Living’ in 50 years’ time, a timespan challenged by Google Chairman Eric Schmidt’s argument earlier in the day that five years was an ‘infinite’ amount of time in the current speed of technological innovation. The after-dinner discussion was surprisingly fun, and at my table at least we found ourselves drawn back to the past, wondering if the rise of preventative health care and the new localism that connectivity affords might look like a return to the pre-industrial age. When it came to the summing up and drawing out the implications for government, I was struck how most elements of any trajectory of change exposed a growing disconnect between citizens, or business, on the one hand – and government on the other.

This was the one topic that for me was notably absent from WEF 2014; the nature of government in this rapidly changing world, in spite of the three pillars — politics, society, and business — of the theme of the conference noted above. At one lunch convened by McKinsey that was particularly ebullient regarding the ceaseless pace of technological change, I pointed out that government was only at the beginning of the S-curve, or perhaps that such a curve had no relevance for government. Another delegate asked how the assembled audience might help government to manage better here, and another pointed out that globally, we were investing less and less in government at a time when it needed more resources, including far higher remuneration for top officials. But the panellists were less enthusiastic to pick up on these points.

As I have discussed previously on this blog and elsewhere, we are in an era where governments struggle to innovate technologically or to incorporate social media into organizational processes, where digital services lag far behind those of business, and where the big data revolution is passing government by (apart from open data, which is not the same thing as big data – see my Guardian blog post on this issue). Pockets of innovation like the UK Government Digital Service push for government-wide change, but we are still seeing major policy initiatives such as Obama’s healthcare plans in the US or Universal Credit in the UK founder on technological grounds. Yet there were remarkably few delegates at the WEF representing the executive arm of government, particularly for the UK. So on the relationship between government and citizens in an age of rapid technological change, it was citizens – rather than governments – and, of course, business (given the predominance of CEOs) that received the attention of this high-powered crowd.

At the end of the ‘Rethinking Living’ dinner, a participant from another table said to me that in contrast to the participants from the technology industry, he thought 50 years was a rather short time horizon. As a landscape architect, designing with trees that take 30 years to grow, he had no problem imagining how things would look on this timescale. It occurred to me that there could be an analogy here with government, which likewise could take this kind of timescale to catch up with the technological revolution. But by that time technology will have moved on, and it may be that governments cannot afford such a relaxed pace in catching up with their citizens and the business world. Perhaps this should be a key theme for future forums.


Helen Margetts is the Director of the OII, and Professor of Society and the Internet. She is a political scientist specialising in digital era governance and politics.

Five recommendations for maximising the relevance of social science research for public policy-making in the big data era https://ensr.oii.ox.ac.uk/five-recommendations-for-maximising-the-relevance-of-social-science-research-for-public-policy-making-in-the-big-data-era/ Mon, 04 Nov 2013 10:30:30 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2196 As I discussed in a previous post on the promises and threats of big data for public policy-making, public policy-making has entered a period of dramatic change. Widespread use of digital technologies, the Internet and social media means citizens and governments leave digital traces that can be harvested to generate big data. This increasingly rich data environment poses both promises and threats to policy-makers.

So how can social scientists help policy-makers in this changed environment, ensuring that social science research remains relevant? Social scientists have a good record of policy influence – indeed, in the UK, a better one than other academic fields, including medicine, as recent research from the LSE Public Policy Group has shown. Big data holds major promise for social science, which should enable us to extend that record further. We have access to a cornucopia of data of a kind which is more like that traditionally associated with so-called ‘hard’ science. Rather than being dependent on surveys, the traditional data staple of empirical social science, social media such as Wikipedia, Twitter, Facebook, and Google Search present us with the opportunity to scrape, generate, analyse and archive comparative data of unprecedented quantity. For example, at the OII over the last four years we have been generating a dataset of all petition signing in the UK and US, which contains the joining rate (updated every hour) for the 30,000 petitions created in the last three years. As a political scientist, I am very excited by this kind of data (up to now, we have had big data like this only for voting, and that only at election time), which will allow us to create a complete ecology of petition signing, one of the more popular acts of political participation in the UK. Likewise, we can look at the entire transaction history of online organizations like Wikipedia, or map the link structure of government’s online presence.
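For a flavour of how such a dataset is assembled, here is a minimal collection sketch. The endpoint shown is hypothetical – the real e-petitions sites expose their own (and changing) JSON interfaces – but the hourly-polling logic is the essence of it:

```python
# Minimal sketch: record a petition's signature count once an hour.
# The URL and 'signature_count' field are hypothetical placeholders.
import time
import requests

PETITION_URL = "https://petitions.example.gov/api/petitions/12345.json"

while True:
    data = requests.get(PETITION_URL, timeout=30).json()
    with open("signatures.csv", "a") as f:
        f.write(f"{int(time.time())},{data['signature_count']}\n")
    time.sleep(3600)  # wait an hour before polling again
```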

But big data holds threats for social scientists too. The technological challenge is ever present. To generate their own big data, researchers and students must learn to code, and for some that is an alien skill. At the OII we run a course on Digital Social Research that all our postgraduate students can take; but not all social science departments could either provide such a course, or persuade their postgraduate students that they needed it. Ours, who study the social science of the Internet, are obviously predisposed to do so. And big data analysis requires multi-disciplinary expertise. Our research team working on petitions data includes a computer scientist (Scott Hale), a physicist (Taha Yasseri) and a political scientist (myself). I can’t imagine doing this sort of research without such technical expertise, and as a multi-disciplinary department we are (reasonably) free to recruit this type of research faculty. But not all social science departments can promise a research career for computer scientists, or physicists, or any of the other disciplinary specialists that might be needed to tackle big data problems.

Five Recommendations for Social Scientists

So, how can social scientists overcome these challenges, and thereby be in a good position to help policy-makers tackle their own barriers to making the most of the possibilities afforded by big data? Here are five recommendations:

1. Accept that multi-disciplinary research teams are going to become the norm for social science research, extending beyond social science disciplines into the life sciences, mathematics, physics, and engineering. At Policy and Internet’s 2012 Big Data conference, the keynote speaker Duncan Watts (physicist turned sociologist) called for a ‘dating agency’ for engineers and social scientists – with the former providing the technological expertise, and the latter identifying the important research questions. We need to make sure that forums exist where social scientists and technologists meet and discuss big data research at the earliest stages, so that research projects and programmes incorporate the core competencies of both.

2. We need to provide the normative and ethical basis for policy decisions in the big data era. That means bringing normative political theorists and philosophers of information into our research teams. The government has committed £65 million to big data research funding, but it seems likely that any successful research proposals will have a strong ethics component embedded in the research programme, rather than ethics as an add-on or afterthought.

3. Training in data science. Many leading US universities are now admitting undergraduates to data science courses, but these courses lack social science input. Of the 20 US masters courses in big data analytics compiled by Information Week, nearly all are offered by computer science or informatics departments. Social science research training needs to incorporate the coding and analysis skills that these courses provide, but with a social science focus. If we as social scientists leave the training to computer scientists, we will find that the new cadre of data scientists tends to leave out social science concerns or questions.

4. Bringing policy-makers and academic researchers together to tackle the challenges that big data presents. Last month the OII and Policy and Internet convened a workshop at Harvard on Responsible Research Agendas for Public Policy in the Big Data Era, which included leading academic researchers in the government and big data field, and government officials from the Census Bureau, the Federal Reserve Board, the Bureau of Labor Statistics, and the Office of Management and Budget (OMB). The discussions revealed that there is a continual procession of major events on big data in Washington DC (usually with a corporate or scientific research focus) to which US federal officials are invited, but also that few are really dedicated to tackling the distinctive issues facing government agencies such as those represented around the table.

5. Taking forward theoretical development in social science, incorporating big data insights. I recently spoke at the Oxford Analytica Global Horizons conference, in a session on Big Data. One of the few policy-makers in the audience (in proportion to corporate representatives) asked the panel ‘where is the theory?’ As social scientists, we need to respond to that question, and fast.


This post is based on discussions at the workshop on Responsible Research Agendas for Public Policy in the Era of Big Data, and at the Political Studies Association event Why Universities Matter: How Academic Social Science Contributes to Public Policy Impact, held at the LSE on 26 September 2013.

Helen Margetts is the Director of the OII, and Professor of Society and the Internet. She is a political scientist specialising in e-government and digital era governance and politics, investigating the nature and implications of relationships between governments, citizens and the Internet and related digital technologies in the UK and internationally.

The promises and threats of big data for public policy-making https://ensr.oii.ox.ac.uk/promises-threats-big-data-for-public-policy-making/ Mon, 28 Oct 2013 15:07:29 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2299 The environment in which public policy is made has entered a period of dramatic change. Widespread use of digital technologies, the Internet and social media means both citizens and governments leave digital traces that can be harvested to generate big data. Policy-making takes place in an increasingly rich data environment, which poses both promises and threats to policy-makers.

On the promise side, such data offers a chance for policy-making and implementation to be more citizen-focused, taking account of citizens’ needs, preferences and actual experience of public services, as recorded on social media platforms. As citizens express policy opinions on social networking sites such as Twitter and Facebook; rate or rank services or agencies on government applications such as NHS Choices; or enter discussions on the burgeoning range of social enterprise and NGO sites, such as Mumsnet, 38 degrees and patientopinion.org, they generate a whole range of data that government agencies might harvest to good use. Policy-makers also have access to a huge range of data on citizens’ actual behaviour, as recorded digitally whenever citizens interact with government administration or undertake some act of civic engagement, such as signing a petition.

Data mined from social media or administrative operations in this way also enables government agencies to monitor – and improve – their own performance, for example through log usage data of their own electronic presence or transactions recorded on internal information systems, which are increasingly interlinked. And they can use data from social media for self-improvement, by understanding what people are saying about government, and which policies, services or providers are attracting negative opinions and complaints, enabling identification of a failing school, hospital or contractor, for example. They can solicit such data via their own sites, or those of social enterprises. And they can find out what people are concerned about or looking for, from the Google Search API or Google Trends, which record the search patterns of a huge proportion of internet users.
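As one illustration of the Google Trends route, the unofficial pytrends package can pull interest series programmatically. A minimal sketch – the library is a third-party wrapper whose API may change, and the query term is just an example:

```python
# Minimal sketch: weekly Google Trends interest in a service-related
# query for the UK, via the unofficial pytrends wrapper.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-GB", tz=0)
pytrends.build_payload(["renew passport"], geo="GB", timeframe="today 12-m")
interest = pytrends.interest_over_time()
print(interest.tail())
```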

As for threats, big data is technologically challenging for government, particularly those governments which have always struggled with large-scale information systems and technology projects. The UK government has long been a world leader in this regard and recent events have only consolidated its reputation. Governments have long suffered from information technology skill shortages and the complex skill sets required for big data analytics pose a particularly acute challenge. Even in the corporate sector, over a third of respondents to a recent survey of business technology professionals cited ‘Big data expertise is scarce and expensive’ as their primary concern about using big data software.

And there are particular cultural barriers to government in using social media, with the informal style and blurring of organizational and public-private boundaries which they engender. And gathering data from social media presents legal challenges, as companies like Facebook place barriers to the crawling and scraping of their sites.

More importantly, big data presents new moral and ethical dilemmas to policy-makers. For example, it is possible to carry out probabilistic policy-making, where policy is made on the basis of what a small segment of individuals will probably do, rather than what they have done. Predictive policing has had some success, particularly in California, where robberies declined by a quarter after use of the ‘PredPol’ policing software, but can lead to a “feedback loop of injustice” as one privacy advocacy group put it, as policing resources are targeted at increasingly small socio-economic groups. What responsibility does the state have to devote disproportionately more – or less – resources to the education of those school pupils who are, probabilistically, almost certain to drop out of secondary education? Such challenges are greater for governments than corporations. We (reasonably) happily trade privacy to allow Tesco and Facebook to use our data on the basis that it will improve their products, but if government tries to use social media to understand citizens and improve its own performance, will it be accused of spying on its citizenry in order to quash potential resistance?
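To make ‘probabilistic policy-making’ concrete, here is a minimal sketch of the kind of risk-scoring model involved, trained on synthetic data – the features and threshold are invented, and any real application raises exactly the ethical questions above:

```python
# Minimal sketch: a logistic regression that assigns each pupil a
# probability of dropping out, then flags those above a threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))  # stand-ins for attendance, grades, etc.
y = (X @ np.array([1.5, -1.0, 0.8]) + rng.normal(size=1000) > 1).astype(int)

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]  # estimated P(dropout) per pupil
print("pupils flagged above 0.9:", int((risk > 0.9).sum()))
```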

And of course there is an image problem for government in this field – discussion of big data and government puts the word ‘big’ dangerously close to the word ‘government’, and that is an unpopular combination. Policy-makers’ responses to Snowden’s revelations of the US Prism and UK Tempora programmes have done nothing to improve this image, with their focus on the use of big data to track down individuals and groups involved in acts of terrorism and criminality – rather than on anything to make policy-making better, or on using the wealth of information that these programmes collect for the public good.

However, policy-makers have no choice but to tackle some of these challenges. Big data has been the hottest trend in the corporate world for some years now, and commentators from IBM to the New Yorker are starting to talk about the big data ‘backlash’. Government has been far slower to recognize the advantages for policy-making and services. But in some policy sectors, big data poses very fundamental questions which call for an answer: how should governments conduct a census, or produce labour statistics, for example, in the age of big data? Policy-makers will need to move fast to beat the backlash.


This post is based on discussions at the workshop on Responsible Research Agendas for Public Policy in the Era of Big Data.

Helen Margetts is the Director of the OII, and Professor of Society and the Internet. She is a political scientist specialising in digital era governance and politics.

Experiments are the most exciting thing on the UK public policy horizon https://ensr.oii.ox.ac.uk/experiments-are-the-most-exciting-thing-on-the-uk-public-policy-horizon/ Thu, 28 Feb 2013 10:20:29 +0000 http://blogs.oii.ox.ac.uk/policy/?p=392
What makes people join political actions? Iraq War protesters crowd Trafalgar Square in February 2007. Image by DavidMartynHunt.
Experiments – or more technically, Randomised Controlled Trials – are the most exciting thing on the UK public policy horizon. In 2010, the incoming Coalition Government set up the Behavioural Insights Team in the Cabinet Office to find innovative and cost-effective (cheap) ways to change people’s behaviour. Since then the team have run a number of exciting experiments with remarkable success, particularly in terms of encouraging organ donation and timely payment of taxes. With Bad Science author Ben Goldacre, they have now published a Guide to RCTs, and plenty more experiments are planned.
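For readers unfamiliar with the mechanics, the core logic of an RCT is random assignment followed by a comparison of group means. A minimal sketch on simulated data – the outcome and effect size are invented, purely to show the shape of the analysis:

```python
# Minimal sketch of an RCT analysis: randomise, then compare means.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 2000
treated = rng.random(n) < 0.5  # coin-flip assignment to treatment
# simulated outcome (e.g. a payment score) with a small treatment effect
outcome = 0.30 + 0.05 * treated + rng.normal(0, 0.5, n)

effect = outcome[treated].mean() - outcome[~treated].mean()
t, p = stats.ttest_ind(outcome[treated], outcome[~treated])
print(f"estimated effect: {effect:.3f} (p = {p:.3f})")
```

Randomisation is what licenses the causal reading of the difference in means; everything else is bookkeeping.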

This sudden enthusiasm for experiments in the UK government is very exciting. The Behavioural Insights Team is the first of its kind in the world – in the US there are few experiments at federal level, although there have been a few well-publicised ones at local level – and the UK government has previously been rather scared of the concept, there being a number of cultural barriers to the very word ‘experiment’ in British government. Experiments came to the fore in the previous Administration’s Mindspace document. But what made them popular for public policy may well have been the 2008 book Nudge by Thaler and Sunstein, which shows that by knowing how people think, it is possible to design choice environments that make it “easier for people to choose what is best for themselves, their families, and their society.” Since then, the political scientist Peter John has published Nudge, Nudge, Think, Think, which has received positive coverage in The Economist (The use of behavioural economics in public policy shows promise) and the Financial Times (Nudge, nudge. Think, think. Say no more …), and has been reviewed by the LSE Review of Books (Nudge, Nudge, Think, Think: experimenting with ways to change civic behaviour).

But there is one thing missing here. Very few of these experiments use manipulation of information environments on the internet as a way to change people’s behaviour. The Internet seems to hold enormous promise for ‘Nudging’ by redesigning ‘choice environments’, yet Thaler and Sunstein’s book hardly mentions it, and none of the BIT’s experiments so far have used the Internet, although a new experiment looking at ways of encouraging court attendees to pay fines is based on text messages.

So, at the Oxford Internet Institute we are doing something about that. At OxLab, an experimental laboratory for the social sciences run by the OII and the Saïd Business School, we are running online experiments to test the impact of various online platforms on people’s behaviour. For example, two reports for the UK National Audit Office – Government on the Internet (2007) and Communicating with Customers (2009) – carried out by a joint OII-LSE team used experiments to see how people search for and find government-related information on the internet. Further experiments investigated the impact of various types of social influence, particularly social information about the behaviour of others and visibility (as opposed to anonymity), on the propensity of people to participate politically.

And the OII-edited journal Policy and Internet has been a good venue for experimentalists to publicise their work. So, Stephan Grimmelikhuijsen’s paper Transparency of Public Decision-Making: Towards Trust in Local Government? (Policy & Internet 2010; 2:1) reports an experiment to see if transparency (relating to decision-making by local government) actually leads to higher levels of trust. Interestingly, his results indicated that participants exposed to more information (in this case, full council minutes) were significantly more negative regarding the perceived competence of the council compared to those who did not access all the available information. Additionally, participants who received restricted information about the minutes thought the council was less honest compared to those who did not read them.
