The Policy and Internet Blog – Understanding public policy online (https://ensr.oii.ox.ac.uk)

Open government policies are spreading across Europe — but what are the expected benefits?
https://ensr.oii.ox.ac.uk/open-government-policies-are-spreading-across-europe-but-what-are-the-expected-benefits/ (17 Jul 2017)

Open government policies are spreading across Europe, challenging previous models of the public sector, and defining new forms of relationship between government, citizens, and digital technologies. In their Policy & Internet article “Why Choose Open Government? Motivations for the Adoption of Open Government Policies in Four European Countries,” Emiliana De Blasio and Donatella Selva present a qualitative analysis of policy documents from France, Italy, Spain, and the UK, in order to map out the different meanings of open government, and how it is framed by different national governments.

As a policy agenda, open government can be thought of as involving four variables: transparency, participation, collaboration, and digital technologies in democratic processes. Although the variables are all interpreted in different ways, participation, collaboration, and digital technology provide the greatest challenge to government, given they imply a major restructuring of public administration, whereas transparency goals (i.e., the disclosure of open data and the provision of monitoring tools) do not. Indeed, transparency is mentioned in the earliest accounts of open government from the 1950s.

The authors show the emergence of competing models of open government in Europe, with transparency and digital technologies being the most prominent issues in open government, and participation and collaboration being less considered and implemented. The standard model of open government seems to stress innovation and openness, and occasionally public–private collaboration, but fails to achieve open decision making, with the policy-making process typically rooted in existing mechanisms. However, the authors also see the emergence of a policy framework within which democratic innovations can develop, a testament to the vibrancy of the relationship between citizens and the public administration in contemporary European democracies.

We caught up with the authors to discuss their findings:

Ed.: Would you say there are more similarities than differences between these countries’ approaches and expectations for open government? What were your main findings (briefly)?

Emiliana / Donatella: We can imagine the four European countries (France, Italy, Spain and the UK) as positioned on a continuum between a participatory frame and an economic/innovation frame: on the one side, we could observe that French policies focus on open government in order to strengthen and innovate the tradition of débat public; on the opposite side, the roots of the UK’s open government are in cost-efficiency, accountability and transparency arguments. Between those two poles, Italian and Spanish policies situate open government in the context of a massive reform of the public sector, in order to reduce the administrative burden and to restore citizen trust in institutions. Two years after we wrote the article, we can observe that both in Italy and Spain something has changed, and participation has regained attention as a public policy issue.

Ed.: How much does policy around open data change according to who’s in power? (Obama and Trump clearly have very different ideas about the value of opening up government..). Or do civil services tend to smooth out any ideological differences around openness and transparency, even as parties enter and leave power?

Emiliana / Donatella: The case of open data is quite peculiar: it is one of the few policy issues directly addressed by the European Union Commission, and now by the transnational agreement on the G8 Open Data Charter, and for this reason we could say there is a homogenising trend. Moreover, opening up data is an ongoing process — started at least eight years ago — that will be too difficult for any new government to stop. As for openness and transparency in general, Cameron (and now May), Hollande, Monti (and then Renzi) and Rajoy’s governments all wrote policies with a strong emphasis on innovation and openness as the key to a better future.

In fact, we observed that at the national level, the rhetoric of innovation and openness is bipartisan, and not dependent on political orientation — although the concrete policy instruments and implementation strategies might differ. It is also for this reason that governments tend to remain in the “comfort zone” of transparency and public-private partnerships: they still evoke a change in the relationship between the public sector and civil society, but they don’t actually address this change.

Still, we should highlight that at the regional and local levels open data, transparency and participation policies are mostly promoted by liberal and/or left-leaning administrations.

Ed.: Your results for France (i.e. almost no mention of the digital economy, growth, or reform of public services) are basically the opposite of Macron’s (winning) platform of innovation and reform. Did Macron identify a problem in France, and might you expect a change as he takes control?

Emiliana / Donatella: Macron’s electoral programme is based on what he already did while in charge at the Ministry of Economy: he pursued a French digital agenda willing to attract foreign investments, to create digital productive hubs (the French Tech), and innovate the whole economy. Interestingly, however, he did not frame those policies under the umbrella of open government, preferring to speak about “modernisation”. The importance given by Macron to innovation in the economy and public sector finds some antecedents in the policies we analysed: the issue of “modernisation” was prominent and we expect it will be even more, now that he has gained the presidency.

Ed.: In your article you analyse policy documents, i.e. texts that set out hopes and intentions. But is there any sense of how much practical effect these have: particularly given how expensive it is to open up data? You note “the Spanish and Italian governments are especially focused on restoring trust in institutions, compensating for scandals, corruption, and a general distrust which is typical in Southern Europe” .. and yet the current Spanish government is still being rocked by corruption scandals.

Emiliana / Donatella: The efficacy of any kind of policy can vary depending on many factors — such as internal political context, international constraints, economic resources, and clarity of policy instruments. In addition, we should consider that at the national level, very few policies have an immediate consequence on citizens’ everyday lives. This is surely one of the worst problems of open government: on the one side, it is a policy agenda promoted in a top-down perspective — from international and/or national institutions; and on the other side, it fails to engage local communities in a purposeful dialogue. As such, open government policies appear to be self-reflective acts by governments, as paradoxical as this might be.

Ed.: Despite terrible, terrible things like the Trump administration’s apparent deletion of climate data, do you see a general trend towards increased datafication, accountability, and efficiency (perhaps even driven by industry, as well as NGOs)? Or are public administrations far too subject to political currents and individual whim?

Emiliana / Donatella: As we face turbulent times, it would be very risky to assert that tomorrow’s world will be more open than today’s. But even if we observe some interruptions, the principles of open democracy and open government have colonised public agendas: as we have tried to stress in our article, openness, participation, collaboration and innovation can have different meanings and degrees, but they succeeded in acquiring the status of policy issues.

And as you rightly point out, the way towards accountability and openness is not a public sector’s prerogative any more: many actors from civil society and industry have already mobilised in order to influence government agendas, public opinion, and to inform citizens. As the first open government policies start to produce practical effects on people’s everyday lives, we might expect that public awareness will rise, and that no individual will be able to ignore it.

Ed.: And does the EU have any supra-national influence, in terms of promoting general principles of openness, transparency etc.? Or is it strictly left to individual countries to open up (if they want), and in whatever direction they like? I would have thought the EU would be the ideal force to promote rational technocratic things like open government?

Emiliana / Donatella: The EU has the power of stressing some policy issues, and letting some others be “forgotten”. The complex legislative procedures of the EU, together with transnational conflict, produce policies with different degrees of enforcement. Generally speaking, some EU policies have a direct influence on national laws, whereas others don’t, leaving the decision of whether or not to act with national governments. In the case of open government, we see that the EU has been particularly influential in setting the Digital Agenda for 2020 and now the Sustainable Future Agenda for 2030; in both documents, Europe encourages Member States to dialogue and collaborate with private actors and civil society, in order to achieve some objectives of economic development.

At the moment, initiatives like the Open Government Partnership — which runs outside the EU competence and involves many European countries — are tying up governments in trans-national networks converging on a set of principles and methods. Because of that Partnership, for example, countries like Italy and Spain have experimented with the first national co-drafting procedures.

Read the full article: De Blasio, E. and Selva, D. (2016) Why Choose Open Government? Motivations for the Adoption of Open Government Policies in Four European Countries. Policy & Internet 8 (3). DOI: 10.1002/poi3.118.


Emiliana De Blasio and Donatella Selva were talking to blog editor David Sutcliffe.

Back to the bad old days, as civil service infighting threatens UK’s only hope for digital government
https://ensr.oii.ox.ac.uk/back-to-the-bad-old-days-as-civil-service-infighting-threatens-uks-only-hope-for-digital-government/ (10 Aug 2016)

Technology and the public sector have rarely been happy bedfellows in the UK, where every government technology project seems doomed to arrive late, underperform, and come in over budget. The Government Digital Service (GDS) was created to drag the civil service into the 21st century, making services “digital by default”, cheaper, faster, and easier to use. It quickly won accolades for its approach and early cost savings.

But then its leadership departed, not once or twice but three times – the last two within the last few months. The largest government departments have begun to reassert their authority over GDS expert advice, and digital government looks likely to be dragged back towards the deeply dysfunctional old ways of doing things. GDS isn’t perfect, but to erase the progress it has put in place would be a terrible loss.

The UK government’s use of technology has long lagged far behind that of other countries. Low usage of digital services rendered them expensive and inefficient. Digital operations were often handicapped by complex networks of legacy systems, some dating right back to the 1970s. The development of the long-promised “digital era governance” was mired in a series of mega contracts: huge in terms of cost, scope and timescale, bigger than any attempted by other governments worldwide, and to be delivered by the same handful of giant global computer consulting firms that rarely saw any challenge to their grip on public contracts. Departmental silos ensured there were no economies of scale, shared services failed, and the Treasury negotiated with 24 departments individually for their IT expenditure.

Some commentators (including this one) were a little sceptical on our first encounter with GDS. We had seen it before: the Office of the e-Envoy set up by Tony Blair in 1999, superseded by the E-government Unit (2004-7), and then Directgov until 2010.

Successes and failures

In many ways GDS has been a success story, with former prime minister David Cameron calling it one of the “great unsung triumphs of the last parliament”, with claimed cost savings of £1.7 billion. The Treasury now negotiates with GDS rather than with 24 departments, and GDS has been involved in every hiring decision for senior digital staff, raising the quality of digital expertise.

The building blocks of GDS’s promised “government as a platform” approach have appeared: Verify, a federated identity system that doesn’t rely on ID cards or centralised identity databases; Govpay, which makes it easier to make payments to the government; and Notify, which allows government agencies to keep citizens informed of progress on services.
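To make the “platform” idea concrete, here is a minimal sketch of how a departmental service might use Notify to keep a citizen informed, via the notifications-python-client library that GOV.UK Notify publishes. The API key, template ID, phone number, and personalisation fields below are placeholder assumptions for illustration, not details taken from GDS:

```python
# pip install notifications-python-client
from notifications_python_client.notifications import NotificationsAPIClient

# Placeholder credentials: a real service obtains an API key and
# pre-approved message templates from its Notify service account.
API_KEY = "replace-with-your-notify-api-key"
TEMPLATE_ID = "replace-with-a-template-uuid"

client = NotificationsAPIClient(API_KEY)

# Send an SMS telling a citizen their application has moved forward.
response = client.send_sms_notification(
    phone_number="+447700900000",  # reserved drama-range test number
    template_id=TEMPLATE_ID,
    personalisation={"application_ref": "ABC-1234", "status": "approved"},
)
print(response["id"])  # Notify returns a notification ID for auditing
```

The design point is that the department never builds its own SMS or email pipeline: it supplies only a template and the personalisation data, and the shared platform handles delivery.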

GDS tackled the overweening power of the huge firms that have dominated government IT in the past, and has given smaller departments and agencies the confidence to undertake some projects themselves, bringing expertise back in-house, embracing open source, and washing away some of the taint of failure from previous government IT projects.

There has even been a procession of visitors from overseas coming to investigate, and imitations have sprung up across the world, from the US to Australia.

But elsewhere GDS has really only chipped away at monolithic government IT. For example, GDS and the Department for Work and Pensions failed to work together on Universal Credit. Instead, the huge Pathfinder system that underpinned the Universal Credit trial phase was supplied by HP, Accenture, IBM and BT, and ran into serious trouble at a cost of hundreds of millions of pounds. The department is now building a new system in parallel, with GDS advice, that will largely replace it.

The big systems integrators are still waiting in the wings, poised to renew their influence in government. Francis Maude, who as cabinet minister created GDS, recently admitted that if GDS had undertaken faster and more wholesale reform of legacy systems, it wouldn’t be under threat now.

The risks of centralisation

An issue GDS never tackled is one that has existed right from the start: is it an army, or is it a band of mercenaries working in other departments? Should GDS be at the centre, building and providing systems, or should it just help others to do so, building their expertise? GDS has done both, but the emphasis has been on the former, most evident through putting the government portal GOV.UK at the centre of public services.

Heading down a centralised route was always risky, as the National Audit Office observed of its forerunner Directgov in 2007. Many departments resented the centralisation of GOV.UK, and the removal of their departmental websites, but it’s likely they’re used to it now, even relieved that it’s no longer their problem. But a staff of 700 with a budget of £112m (from 2015) was always going to look vulnerable to budget cuts.

Return of the Big Beasts

If GDS is diminished or disbanded, any hope of creating effective digital government faces two threats.

A land-grab from the biggest departments – HMRC, DWP and the Home Office, all critics of the GDS – is one possibility. There are already signs of a purge of the digital chiefs put in place by GDS, despite the National Audit Office citing continuity of leadership as critical. This looks like permanent secretaries in the civil service reasserting control over their departments’ digital operations – which will inevitably bring a return to siloed thinking and siloed data, completely at odds with the idea of government as a platform. While the big beasts can walk alone, without GDS the smaller agencies will struggle.

The other threat is the big companies, poised in the wings to renew their influence on government should GDS controls on contract size be removed. It has already begun: the ATLAS consortium led by HP has won two Ministry of Defence contracts worth £1.5 billion since founding GDS chief Mike Bracken resigned.

It’s hard to see how government as a platform can be taken forward without expertise and capacity at the centre – no single department would have the incentive to do so. Canada’s former chief information officer recently attributed Canada’s decline as a world leader in digital government to the removal of funds dedicated to allowing departmental silos to work together. Even as the UN declares the UK to be the global leader for implementing e-government, unless the GDS can re-establish itself the UK may find the foundations it has created swept away – at a time when using digital services to do more with less is needed more than ever.


This was first posted on The Conversation.

Alan Turing Institute and OII: Summit on Data Science for Government and Policy Making
https://ensr.oii.ox.ac.uk/alan-turing-institute-and-oii-summit-on-data-science-for-government-and-policy-making/ (31 May 2016)

The benefits of big data and data science for the private sector are well recognised. So far, considerably less attention has been paid to the power and potential of the growing field of data science for policy-making and public services. On Monday 14th March 2016 the Oxford Internet Institute (OII) and the Alan Turing Institute (ATI) hosted a Summit on Data Science for Government and Policy Making, funded by the EPSRC. Leading policy makers, data scientists and academics came together to discuss how the ATI and government could work together to develop data science for the public good. The convenors of the Summit, Professors Helen Margetts (OII) and Tom Melham (Computer Science), report on the day’s proceedings.

The Alan Turing Institute will build on the UK’s existing academic strengths in the analysis and application of big data and algorithm research to place the UK at the forefront of world-wide research in data science. The University of Oxford is one of five university partners, and the OII is the only partnering department in the social sciences. The aim of the summit on Data Science for Government and Policy-Making was to understand how government can make better use of big data and the ATI – with the academic partners in listening mode.

We hoped that the participants would bring forward their own stories, hopes and fears regarding data science for the public good. Crucially, we wanted to work out a roadmap for how different stakeholders can work together on the distinct challenges facing government, as opposed to commercial organisations. At the same time, data science research and development has much to gain from the policy-making community. Some of the things that government does – collect tax from the whole population, or give money away at scale, or possess the legitimate use of force – it does by virtue of being government. So the sources of data and some of the data science challenges that public agencies face are unique and tackling them could put government working with researchers at the forefront of data science innovation.

During the Summit a range of stakeholders provided insight from their distinctive perspectives: the Government Chief Scientific Advisor, Sir Mark Walport; the Deputy Director of the ATI, Patrick Wolfe; the National Statistician and Director of ONS, John Pullinger; and the Director of Data at the Government Digital Service, Paul Maltby. Representatives of frontline departments recounted how algorithmic decision-making is already bringing predictive capacity into operational business, improving efficiency and effectiveness.

Discussion revolved around the challenges of how to build core capability in data science across government, rather than outsourcing it (as happened in an earlier era with information technology) or confining it to a data science profession. Some delegates talked of being in the ‘foothills’ of data science. The scale, heterogeneity and complexity of some government departments currently work against data science innovation, particularly when larger departments can operate thousands of databases, creating legacy barriers to interoperability. Out-dated policies can work against data science methodologies. Attendees repeatedly voiced concerns about sharing data across government departments: in some cases because of the limitations of legal protections; in others because people are unsure what they can and cannot do.

The potential power of data science creates an urgent need for discussion of ethics. Delegates and speakers repeatedly affirmed the importance of an ethical framework and of thought leadership in this area, so that ethics is ‘part of the science’. The clear emergent option was a national Council for Data Ethics (along the lines of the Nuffield Council on Bioethics) convened by the ATI, as recommended in the recent Science and Technology parliamentary committee report The big data dilemma and the government response to it. Luciano Floridi (OII’s professor of the philosophy and ethics of information) warned that we cannot reduce ethics to mere compliance. Ethical problems do not normally have a single straightforward ‘right’ answer, but require dialogue and thought, and extend far beyond individual privacy. There was consensus that the UK has the potential to provide global thought leadership and to set the standard for the rest of Europe. It was announced during the Summit that an ATI Working Group on the Ethics of Data Science has been confirmed, to take these issues forward.

So what happens now?

Throughout the Summit there were calls from policy makers for more data science leadership. We hope that the ATI will be instrumental in providing this, and an interface both between government, business and academia, and between separate Government departments. This Summit showed just how much real demand – and enthusiasm – there is from policy makers to develop data science methods and harness the power of big data. No-one wants to repeat with data science the history of government information technology – where in the 1950s and 60s, government led the way as an innovator, but has struggled to maintain this position ever since. We hope that the ATI can act to prevent the same fate for data science and provide both thought leadership and the ‘time and space’ (as one delegate put it) for policy-makers to work with the Institute to develop data science for the public good.

Since the Summit, in response to the clear need that emerged from the discussion and other conversations with stakeholders, the ATI has been designing a Policy Innovation Unit, with the aim of working with government departments on ‘data science for public good’ issues. Activities could include:

  • Secondments at the ATI for data scientists from government
  • Short term projects in government departments for ATI doctoral students and postdoctoral researchers
  • Developing ATI as an accredited data facility for public data, as suggested in the current Cabinet Office consultation on better use of data in government
  • ATI pilot policy projects, using government data
  • Policy symposia focused on specific issues and challenges
  • ATI representation in regular meetings at the senior level (for example, between Chief Scientific Advisors, the Cabinet Office, the Office for National Statistics, and GO-Science)
  • ATI acting as an interface between public and private sectors, for example through knowledge exchange and the exploitation of non-government sources as well as government data
  • ATI offering a trusted space, time and a forum for formulating questions and developing solutions that tackle public policy problems and push forward the frontiers of data science
  • ATI as a source of cross-fertilization of expertise between departments
  • Reviewing the data science landscape in a department or agency, identifying feedback loops – or lack thereof – between policy-makers, analysts, and front-line staff, and identifying possibilities for an ‘intelligent centre’ model through strategic development of expertise

The Summit, and a series of Whitehall Roundtables convened by GO-Science which led up to it, have initiated a nascent network of stakeholders across government, which we aim to build on and develop over the coming months. If you are interested in being part of this, please do get in touch with us.

Helen Margetts, Oxford Internet Institute, University of Oxford (director@oii.ox.ac.uk)

Tom Melham, Department of Computer Science, University of Oxford

Exploring the Ethics of Monitoring Online Extremism
https://ensr.oii.ox.ac.uk/exploring-the-ethics-of-monitoring-online-extremism/ (23 Mar 2016)

(Part 2 of 2) The Internet serves not only as a breeding ground for extremism, but also offers myriad data streams which potentially hold great value to law enforcement. The report by the OII’s Ian Brown and Josh Cowls for the VOX-Pol project, Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material, explores the complexities of policing the web for extremist material, and its implications for security, privacy and human rights. In the second of a two-part post, Josh Cowls and Ian Brown discuss the report with blog editor Bertie Vidgen. Read the first post.

Surveillance in NYC’s financial district. Photo by Jonathan McIntosh (flickr).

Ed: Josh, political science has long posed a distinction between public spaces and private ones. Yet it seems like many platforms on the Internet, such as Facebook, cannot really be categorized in such terms. If this is correct, what does it mean for how we should police and govern the Internet?

Josh: I think that is right – many online spaces are neither public nor private. This is also an issue for some privacy legal frameworks (especially in the US). A lot of the covenants and agreements were written forty or fifty years ago, long before anyone had really thought about the Internet. That has now forced governments, societies and parliaments to adapt these existing rights and protocols for the online sphere. I think that we have some fairly clear laws about the use of human intelligence sources, and police law in the offline sphere. The interesting question is how we can take that online. How can the pre-existing standards, like the requirement that procedures are necessary and proportionate, or the ‘right to appeal’, be incorporated into online spaces? In some cases there are direct analogies. In other cases there needs to be some re-writing of the rule book to try to figure out what we mean. And, of course, it is difficult because the internet itself is always changing!

Ed: So do you think that concepts like proportionality and justification need to be updated for online spaces?

Josh: I think that at a very basic level they are still useful. People know what we mean when we talk about something being necessary and proportionate, and about the importance of having oversight. I think we also have a good idea about what it means to be non-discriminatory when applying the law, though this is one of those areas that can quickly get quite tricky. Consider the use of online data sources to identify people. On the one hand, the Internet is ‘blind’ in that it does not automatically codify social demographics. In this sense it is not possible to profile people in the same way that we can offline. On the other hand, it is in some ways the complete opposite. It is very easy to directly, and often invisibly, create really firm systems of discrimination – and, most problematically, to do so opaquely.

This is particularly challenging when we are dealing with extremism because, as we pointed out in the report, extremists are generally pretty unremarkable in terms of demographics. It perhaps used to be true that extremists were more likely to be poor or to have had challenging upbringings, but many of the people going to fight for the Islamic State are middle class. So we have fewer demographic pointers to latch onto when trying to find these people. Of course, insofar as there are identifiers they won’t be released by the government. The real problem for society is that there isn’t very much openness and transparency about these processes.

Ed: Governments are increasingly working with the private sector to gain access to different types of information about the public. For example, in Australia a Telecommunications bill was recently passed which requires all telecommunication companies to keep the metadata – though not the content data – of communications for two years. A lot of people opposed the Bill because metadata is still very informative, and as such there are some clear concerns about privacy. Similar concerns have been expressed in the UK about an Investigatory Powers Bill that would require new Internet Connection Records about customers’ online activities. How much do you think private corporations should protect people’s data? And how much should concepts like proportionality apply to them?

Ian: To me the distinction between metadata and content data is fairly meaningless. For example, often just knowing when and who someone called and for how long can tell you everything you need to know! You don’t have to see the content of the call. There are a lot of examples like this which highlight the slightly ludicrous nature of distinguishing between metadata and content data. It is all data. As has been said by former US CIA and NSA Director Gen. Michael Hayden, “we kill people based on metadata.”

One issue that we identified in the report is the increased onus on companies to monitor online spaces, and all of the legal entanglements that come from this given that companies might not be based in the same country as the users. One of our interviewees called this new international situation a ‘very different ballgame’. Working out how to deal with problematic online content is incredibly difficult, and some huge issues of freedom of speech are bound up in this. On the one hand, there is a government-led approach where we use the law to take down content. On the other hand is a broader approach, whereby social networks voluntarily take down objectionable content even if it is permissible under the law. This causes much more serious problems for human rights and the rule of law.

Read the full report: Brown, I. and Cowls, J. (2015) Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material. VOX-Pol Publications.


Ian Brown is Professor of Information Security and Privacy at the OII. His research is focused on surveillance, privacy-enhancing technologies, and Internet regulation.

Josh Cowls is a student and researcher based at MIT, working to understand the impact of technology on politics, communication and the media.

Josh and Ian were talking to Blog Editor Bertie Vidgen.

Assessing the Ethics and Politics of Policing the Internet for Extremist Material
https://ensr.oii.ox.ac.uk/assessing-the-ethics-and-politics-of-policing-the-internet-for-extremist-material/ (18 Feb 2016)

The Internet serves not only as a breeding ground for extremism, but also offers myriad data streams which potentially hold great value to law enforcement. The report by the OII’s Ian Brown and Josh Cowls for the VOX-Pol project, Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material, explores the complexities of policing the web for extremist material, and its implications for security, privacy and human rights. Josh Cowls discusses the report with blog editor Bertie Vidgen.*

*please note that the views given here do not necessarily reflect the content of the report, or those of the lead author, Ian Brown.

In terms of counter-speech there are different roles for government, civil society, and industry. Image by Miguel Discart (Flickr).


Ed: Josh, could you let us know the purpose of the report, outline some of the key findings, and tell us how you went about researching the topic?

Josh: Sure. In the report we take a step back from the ground-level question of ‘what are the police doing?’ and instead ask, ‘what are the ethical and political boundaries, rationale and justifications for policing the web for these kinds of activity?’ We used an international human rights framework as an ethical and legal basis to understand what is being done. We also tried to further the debate by clarifying a few things: what has already been done by law enforcement, and, really crucially, what the perspectives are of all those involved, including lawmakers, law enforcers, technology companies, academia and many others.

We derived the insights in the report from a series of workshops, one of which was held as part of the EU-funded VOX-Pol network. The workshops involved participants who were quite high up in law enforcement, the intelligence agencies, the tech industry, civil society, and academia. We followed these up with interviews with other individuals in similar positions and conducted background policy research.

Ed: You highlight that many extremist groups (such as ISIS) are making really significant use of online platforms to organize, radicalize people, and communicate their messages.

Josh: Absolutely. A large part of our initial interest when writing the report lay in finding out more about the role of the Internet in facilitating the organization, coordination, recruitment and inspiration of violent extremism. The impact of this has been felt very recently in Paris and Beirut, and many other places worldwide. This report pre-dates these most recent developments, but was written in the context of these sorts of events.

Given the Internet is so embedded in our social lives, I think it would have been surprising if political extremist activity hadn’t gone online as well. Of course, the Internet is a very powerful tool and in the wrong hands it can be a very destructive force. But other research, separate from this report, has found that the Internet is not usually people’s first point of contact with extremism: more often than not that actually happens offline through people you know in the wider world. Nonetheless it can definitely serve as an incubator of extremism and can serve to inspire further attacks.

Ed: In the report you identify different groups in society that are affected by, and affecting, issues of extremism, privacy, and governance – including civil society, academics, large corporations and governments.

Josh: Yes, in the later stages of the report we do divide society into these groups, and offer some perspectives on what they do, and what they think about counter-extremism. For example, in terms of counter-speech there are different roles for government, civil society, and industry. There is this idea that ISIS are really good at social media, and that that is how they are powering a lot of their support; but one of the people that we spoke to said that it is not the case that ISIS are really good, it is just that governments are really bad!

We shouldn’t ask government to participate in the social network: bureaucracies often struggle to be really flexible and nimble players on social media. In contrast, civil society groups tend to be more engaged with communities and know how to “speak the language” of those who might be vulnerable to radicalization. As such they can enter that dialogue in a much more informed and effective way.

The other tension, or paradigm, that we offer in this report is the distinction between whether people are ‘at risk’ or ‘a risk’. What we try to point to is that people can go from one to the other. They start by being ‘at risk’ of radicalization, but if they do get radicalized and become a violent threat to society, which only happens in the minority of cases, then they become ‘a risk’. Engaging with people who are ‘at risk’ highlights the importance of having respect and dialogue with communities that are often the first to be lambasted when things go wrong, but which seldom get all the help they need, or the credit when they get it right. We argue that civil society is particularly suited for being part of this process.

Ed: It seems like the things that people do or say online can only really be understood in terms of the context. But often we don’t have enough information, and it can be very hard to just look at something and say ‘This is definitely extremist material that is going to incite someone to commit terrorist or violent acts’.

Josh: Yes, I think you’re right. In the report we try to take what is a very complicated concept – extremist material – and divide it into more manageable chunks of meaning. We talk about three hierarchical levels. The degree of legal consensus over whether content should be banned decreases as it gets less extreme. The first level we identified was straight up provocation and hate speech. Hate speech legislation has been part of the law for a long time. You can’t incite racial hatred, you can’t incite people to crimes, and you can’t promote terrorism. Most countries in Europe have laws against these things.

The second level is the glorification and justification of terrorism. This is usually more post-hoc as by definition if you are glorifying something it has already happened. You may well be inspiring future actions, but that relationship between the act of violence and the speech act is different than with provocation. Nevertheless, some countries, such as Spain and France, have pushed hard on criminalising this. The third level is non-violent extremist material. This is the most contentious level, as there is very little consensus about what types of material should be called ‘extremist’ even though they are non-violent. One of the interviewees that we spoke to said that often it is hard to distinguish between someone who is just being friendly and someone who is really trying to persuade or groom someone to go to Syria. It is really hard to put this into a legal framework with the level of clarity that the law demands.

There is a proportionality question here. When should something be considered specifically illegal? And, then, if an illegal act has been committed what should the appropriate response be? This is bound to be very different in different situations.

Ed: Do you think that there are any immediate or practical steps that governments can take to improve the current situation? And do you think that there any ethical concerns which are not being paid sufficient attention?

Josh: In the report we raised a few concerns about existing government responses. There are lots of things beside privacy that could be seen as fundamental human rights and that are being encroached upon. Freedom of association and assembly is a really interesting one. We might not have the same reverence for a Facebook event plan or discussion group as we would a protest in a town hall, but of course they are fundamentally pretty similar.

The wider danger here is the issue of mission creep. Once you have systems in place that can do potentially very powerful analytical investigatory things then there is a risk that we could just keep extending them. If something can help us fight terrorism then should we use it to fight drug trafficking and violent crime more generally? It feels to me like there is a technical-military-industrial complex mentality in government where if you build the systems then you just want to use them. In the same way that CCTV cameras record you irrespective of whether or not you commit a violent crime or shoplift, we need to ask whether the same panoptical systems of surveillance should be extended to the Internet. Now, to a large extent they are already there. But what should we train the torchlight on next?

This takes us back to the importance of having necessary, proportionate, and independently authorized processes. When you drill down into how rights like privacy should be balanced with security, it gets really complicated. But the basic process-driven things that we identified in the report are far simpler: if we accept that governments have the right to take certain actions in the name of security, then, no matter how important or life-saving those actions are, there are still protocols that governments must follow. We really wanted to infuse these issues into the debate through the report.

Read the full report: Brown, I. and Cowls, J. (2015) Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material. VOX-Pol Publications.


Josh Cowls is a student and researcher based at MIT, working to understand the impact of technology on politics, communication and the media.

Josh Cowls was talking to Blog Editor Bertie Vidgen.

Do Finland’s digitally crowdsourced laws show a way to resolve democracy’s “legitimacy crisis”?
https://ensr.oii.ox.ac.uk/do-finlands-digitally-crowdsourced-laws-show-a-way-to-resolve-democracys-legitimacy-crisis/ (16 Nov 2015)

There is much discussion about a perceived “legitimacy crisis” in democracy. In his article The Rise of the Mediating Citizen: Time, Space, and Citizenship in the Crowdsourcing of Finnish Legislation, Taneli Heikka (University of Jyväskylä) discusses the digitally crowdsourced law for same-sex marriage that was passed in Finland in 2014, analysing how the campaign used new digital tools and created practices that affect democratic citizenship and power making.

Ed: There is much discussion about a perceived “legitimacy crisis” in democracy. For example, less than half of the Finnish electorate under 40 choose to vote. In your article you argue that Finland’s 2012 Citizens’ Initiative Act aimed to address this problem by allowing for the crowdsourcing of ideas for new legislation. How common is this idea? (And indeed, how successful?)

Taneli: The idea that digital participation could counter the “legitimacy crisis” is a fairly common one. Digital utopians have nurtured that idea from the early years of the internet, and have often been disappointed. A couple of things stand out in the Finnish experiment that make it worth a closer look.

First, the digital crowdsourcing system with strong digital identification is a reliable and potentially viral campaigning tool. Most civic initiative systems I have encountered rely on manual or otherwise cumbersome, and less reliable, signature collection methods.

Second, in the Finnish model, initiatives that pass the threshold of 50,000 signatures must be treated by the Parliament in the same way as an initiative from a group of MPs. This gives the initiative constitutional and political weight.

Ed: The Act led to the passage of Finland’s first equal marriage law in 2014. In this case, online platforms were created for collecting signatures as well as drafting legislation. An NGO created a well-used platform, but it subsequently had to shut it down because it couldn’t afford the electronic signature system. Crowds are great, but not a silver bullet if something as prosaic as authentication is impossible. Where should the balance lie between NGOs and centrally funded services, i.e. government?

Taneli: The crucial thing in the success of a civic initiative system is whether it gives the people real power. This question is decided by the legal framework and constitutional basis of the initiative system. So, governments have a very important role in this early stage – designing a law for truly effective citizen initiatives.

When a framework for power-making is in place, service providers will emerge. Should the providers be public, private or third sector entities? I think that is defined by local political culture and history.

In the United States, the civic technology field is heavily funded by philanthropic foundations. There is an urge to make these tools commercially viable, though no one seems to have figured out the business model. In Europe there’s less philanthropic money, and in my experience experiments are more often government funded.

Both models have their pros and cons, but I’d like to see the two continents learning more from each other. American digital civic activists tell me enviously that the radically empowering Finnish model with a government-run service for crowdsourcing for law would be impossible in the US. In Europe, civic technologists say they wish they had the big foundations that Americans have.

Ed: But realistically, how useful is the input of non-lawyers in (technical) legislation drafting? And is there a critical threshold of people necessary to draft legislation?

Taneli: I believe that input is valuable from anyone who cares to invest some time in learning an issue. That said, having lawyers in the campaign team really helps. Writing legislation is a special skill. It’s a pity that the co-creation features in Finland’s Open Ministry website were shut down due to a lack of funding. In that model, help from lawyers could have been made more accessible for all campaign teams.

In terms of numbers, I don’t think the size of the group is an issue either way. A small group of skilled and committed people can do a lot in the drafting phase.

Ed: But can the drafting process become rather burdensome for contributors, given professional legislators will likely heavily rework, or even scrap, the text?

Taneli: Professional legislators will most likely rework the draft, and that is exactly what they are supposed to do. Initiating an idea, working on a draft, and collecting support for it are just phases in a complex process that continues in the parliament after the threshold of 50,000 signatures is reached. A well-written draft will make the legislators’ job easier, but it won’t replace them.

Ed: Do you think there’s a danger that crowdsourcing legislation might just end up reflecting the societal concerns of the web-savvy – or of campaigning and lobbying groups?

Taneli: That’s certainly a risk, but so far there is little evidence of it happening. The only initiative passed so far in Finland – the Equal Marriage Act – was supported by the majority of Finns and by the majority of political parties, too. The initiative system was used to bypass a political gridlock. The handful of initiatives that have reached the 50,000 signatures threshold and entered parliamentary proceedings represent a healthy variety of issues in the fields of education, crime and punishment, and health care. Most initiatives seem to echo the viewpoint of the ‘ordinary people’ instead of lobbies or traditional political and business interest groups.

Ed: You state in your article that the real-time nature of digital crowdsourcing appeals to a generation that likes and dislikes quickly; a generation that inhabits “the space of flows”. Is this a potential source of instability or chaos? And how can this rapid turnover of attention be harnessed efficiently so as to usefully contribute to a stable and democratic society?

Taneli: The Citizens’ Initiative Act in Finland is one fairly successful model to look at in terms of balancing stability and disruptive change. It is a radical law in its potential to empower the individual and affect real power-making. But it is by no means a shortcut to ‘legislation by a digital mob’, or anything of that sort. While the digital campaigning phase can be an explosive expression of the power of the people in the ‘time and space of flows’, the elected representatives retain the final say. Passing a law is still a tedious process, and often for good reasons.

Ed: You also write about the emergence of the “mediating citizen” – what do you mean by this?

Taneli: The starting point for developing the idea of the mediating citizen is Lance Bennett’s AC/DC theory, i.e. the dichotomy of the actualising and the dutiful citizen. The dutiful citizen is the traditional form of democratic citizenship – it values voting, following the mass media, and political parties. The actualising citizen, on the other hand, finds voting and parties less appealing, and prefers more flexible and individualised forms of political action, such as ad hoc campaigns and the use of interactive technology.

I find these models accurate, but I was not able to place the emerging typologies of civic action I observed in the Finnish case within this duality. What we see is understanding of and respect for parliamentary institutions and their power, but also strong faith in one’s skills and capability to improve the system in creative, technologically savvy ways. I used the concept of the mediating citizen to describe an actor who is able to move between the previous typologies, mediating between them. In the Finnish example, creative tools were developed to feed initiatives into the traditional power-making system of the parliament.

Ed: Do you think Finland’s Citizens Initiative Act is a model for other governments to follow when addressing concerns about “democratic legitimacy”?

Taneli: It is an interesting model to look at. But unfortunately the ‘legitimacy crisis’ is probably too complex a problem to be solved by a single participation tool. What I’d really like to see is a wave of experimentation, both on-line and off-line, as well as cross-border learning from each other. And is that not what happened when the representative model spread, too?

Read the full article: Heikka, T. (2015) The Rise of the Mediating Citizen: Time, Space, and Citizenship in the Crowdsourcing of Finnish Legislation. Policy & Internet 7 (3) 268–291.


Taneli Heikka is a journalist, author, entrepreneur, and PhD student based in Washington.

Taneli Heikka was talking to Blog Editor Pamina Smith.

Assessing crowdsourcing technologies to collect public opinion around an urban renovation project
https://ensr.oii.ox.ac.uk/assessing-crowdsourcing-technologies-to-collect-public-opinion-around-an-urban-renovation-project/ (9 Nov 2015)

Ed: Given the “crisis in democratic accountability”, methods to increase citizen participation are in demand. To this end, your team developed some interactive crowdsourcing technologies to collect public opinion around an urban renovation project in Oulu, Finland. What form did the consultation take, and how did you assess its impact?

Simo: Over the years we’ve deployed various types of interactive interfaces on a network of public displays. In this case it was basically a network of interactive screens deployed in downtown Oulu, next to where a renovation project was happening that we wanted to collect feedback about. We deployed an app on the screens that allowed people to type feedback directly on the screens (via an on-screen soft keyboard), and to submit feedback to city authorities via SMS, Twitter and email. We also had a smiley-based “rating” system there, which people could use to leave quick feedback about certain aspects of the renovation project.
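The interview doesn’t reproduce the prototype’s code, but as a rough sketch of the architecture Simo describes (several input channels funnelled into a single feedback stream for the city), something like the following captures the idea. All class and field names here are hypothetical illustrations, not the actual Oulu deployment:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class Feedback:
    """One normalised feedback record, whatever channel it came from."""
    channel: str                  # "screen", "sms", "twitter", or "email"
    text: str
    smiley: Optional[int] = None  # optional quick rating, e.g. 1 (sad) to 5 (happy)
    received: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

class FeedbackCollector:
    """Funnels channel-specific submissions into one queue for the city."""

    def __init__(self) -> None:
        self.queue: List[Feedback] = []

    def submit(self, channel: str, text: str,
               smiley: Optional[int] = None) -> None:
        # By the time an SMS gateway, a Twitter search, or a mail inbox
        # hands a message over, it is just a (channel, text) pair.
        self.queue.append(Feedback(channel, text.strip(), smiley))

collector = FeedbackCollector()
collector.submit("screen", "The detour signs near the site are confusing", smiley=2)
collector.submit("sms", "Please keep the cycle path open during the works")
```

The point of normalising early is that moderation and analysis downstream can treat all channels identically, while still keeping the channel label for comparisons like the SMS-versus-screen noise difference discussed below.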

We ourselves could not, and did not even want to, assess the impact — that’s why we did this in partnership with the city authorities. Then, together with the city folks we could better evaluate if what we were doing had any real-world value whatsoever. And, as we discuss, in the end it did!

Ed: How did you go about encouraging citizens to engage with touch screen technologies in a public space — particularly the non-digitally literate, or maybe people who are just a bit shy about participating?

Simo: Actually, the whole point was that we did not deliberately encourage them by advertising the deployment or by “forcing” anyone to use it. Quite to the contrary: we wanted to see if people voluntarily used it, and the technologies that are an integral part of the city itself. This is kind of the future vision of urban computing, anyway. The screens had been there for years already, and what we wanted to see is if people find this type of service on their own when exploring the screens, and if they take the opportunity to then give feedback using them. The screens hosted a variety of other applications as well: games, news, etc., so it was interesting to also gauge how appealing the idea of public civic feedback is in comparison to everything else that was being offered.

Ed: You mention that using SMS to provide citizen feedback was effective in filtering out noise since it required a minimal payment from citizens — but it also created an initial barrier to participation. How do you increase the quality of feedback without placing citizens on different-level playing fields from the outset — particularly where technology is concerned?

Simo: Yes, SMS really worked well in lowering the amount of irrelevant commentary and complete nonsense. And it is true that SMS already introduces a cost, and even if the cost is minuscule, it’s still a cost to the citizen — and just voicing one’s opinions should of course be free. So there’s no correct answer here — if the channel is public and publicly accessible to anyone, there will be a lot of noisy input. In such cases moderation is a heavy task, and to this end we have been exploring crowdsourcing as well. We can make the community moderate itself. First, we need to identify the users who are genuinely concerned or interested in the issues being explored, and then funnel those users to moderate the discussion / output. It is a win-win situation — the people who want to get involved are empowered to moderate the commentary from others, for implicit rewards.

Ed: For this experiment on citizen feedback in an urban space, your team assembled the world’s largest public display network, which was available for research purposes 24/7. In deploying this valuable research tool, how did you guarantee the privacy of the participants involved, given that some might not want to be seen submitting very negative comments? (e.g. might a form of social pressure be the cause of relatively low participation in the study?)

Simo: The display network was not built only for this experiment, but we have run hundreds of experiments on it, and have written close to a hundred academic papers about them. So, the overarching research focus, really, is on how we can benefit citizens using the network. Over the years we have been able to systematically study issues such as social pressure, group use, effects of the public space, or, one might say, “stage”, etc. And yes, social pressure does have a big effect, and here allowing people to participate via e.g. SMS or email helps a lot. That way the users won’t be seen sending the input directly.

Group use is another thing: in groups people don’t feel pressure from the “outside world” so much and are willing to interact with our applications (such as the one documented in this work), but, again, it affects the feedback quality. Groups don’t necessarily tell the truth as they aim for consensus. So the individual, and very important, opinions may not become heard. Ultimately, this is all just part of the game we must deal with, and the real question becomes how to minimize those negative effects that the public space introduces. The positives are clear: everyone can participate, easily, in the heart of the city, and whenever they want.

Ed: Despite the low participation, you still believe that the experimental results are valuable. What did you learn?

Simo: The question in a way already reveals the first important point: people are just not as interested in these “civic” things as they might claim in interviews and pre-studies. When we deploy a civic feedback prototype as the “only option” on a public gizmo (a display, some kind of new tech piece, etc.), people out of curiosity use it. Now, in our case, we just deploy it “as is”, as part of the city infrastructure for people to use if, and only if, they want to use it. So, the prototype competes for attention against smartphones, other applications on the displays, the cluttered city itself… everything!

When one reads many academic papers on interactive civic engagement prototypes, the assumptions are set very high in the discussion: “we got this much participation in this short time”, etc., but that’s not the entire truth. Leave the thing there for months and see if it still interests people! We have done the same: deployed a prototype for three days, gotten tons of interaction, published it, and learned only afterwards that “oh, maybe we were a bit optimistic with the efficiency” when use suddenly dropped to a minimum. It’s just not that easy, and applications require frequent updates to keep user interest longitudinally.

Also, the radical differences in the feedback channels were surprising, but we already talked about that a bit earlier.

Ed: Your team collaborated with local officials, which is obviously valuable (and laudable), but it can potentially impose an extra burden on academics. For example, you mention that instead of employing novel feedback formats (e.g. video, audio, images, interactive maps), your team used only text. But do you think working with public officials benefitted the project as a whole, and how?

Simo: The extra burden is a necessity if one wants to really claim authentic success in civic engagement. In our opinion, that success only happens between citizens and the city, not between citizens and researchers. We do not wish to build these deployments for the sake of an academic article or two: the display infrastructure is there for citizens and the city, and if we don’t educate the authorities on how to use it then nobody will. Advertisers would be glad to take over the entire real estate there, so in a way this project is just part of the bigger picture: making the display infrastructure “useful” instead of just a gimmick to kill time with (games) or for advertising.

And yes, the burden is real, but also because of this we could document what we have learned about dealing with authorities: how it is first easy to sell these prototypes to them, but sometimes hard to get commitment, etc. And it is not just this prototype — we’ve done a number of other civic engagement projects where we have noticed the same issues mentioned in the paper as well.

Ed: You also mention that as academics and policymakers you had different notions of success: for example in terms of levels of citizen engagement and feedback. What should academics aspiring to have a concrete impact on society keep in mind when working with policymakers?

Simo: It takes a lot of time to assess impact. Policymakers will not be able to say after only a few weeks (which is the typical length of studies in our field) whether the prototype has actual value to it, or if it’s just a “nice experiment”. So, deploy your strategy / tech / anything you’re doing, write about it, and let it sit. Move on with life, and then revisit it after months to see if anything has come out of it! Patience is key here.

Ed: Did the citizen feedback result in any changes to the urban renovation project they were being consulted on?

Simo: Not the project directly: the project was naturally planned years ahead, and the blueprints were final at that point. The most remarkable finding for us (and the authorities) was that after moderating the noise out of the feedback, the remaining insight was pretty much the only feedback that they ever directly got from citizens. Finns tend to be a bit on the shy side, so people won’t just pick up the phone, call the local engineering department, and speak out. Not sure if anyone does, really. Instead they complain and chat on forums and around coffee tables, so it would require active work for the authorities to find and reach out to these people.

With the display infrastructure, which was already there, we were able to gauge public opinion that did not affect the construction directly, but indirectly affected how the department could manage its press releases, which things to stress in public communications, what parts of PR to handle differently in the next stage of the renovation project, etc.

Ed: Are you planning any more experiments?

Simo: We are constantly running quite a few experiments. On the civic engagement side, for example, we are investigating how to gamify environmental awareness (recycling, waste management, keeping the environment clean) for children, as well as running longer longitudinal studies to assess the engagement of specific groups of people (e.g., children and the elderly).

Read the full article: Hosio, S., Goncalves, J., Kostakos, V. and Riekki, J. (2015) Crowdsourcing Public Opinion Using Urban Pervasive Technologies: Lessons From Real-Life Experiments in Oulu. Policy & Internet 7 (2) 203–222.


Simo Hosio is a research scientist (Dr. Tech.) at the University of Oulu, in Finland. Core topics of his research are smart city tech, crowdsourcing, wisdom of the crowd, civic engagement, and all types of “mobile stuff” in general.

Simo Hosio was talking to blog editor Pamina Smith.

How do the mass media affect levels of trust in government? https://ensr.oii.ox.ac.uk/how-do-the-mass-media-affect-levels-of-trust-in-government/ Wed, 04 Mar 2015 16:33:45 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3157
The South Korean Government and the Seoul Metropolitan Government have gone to great lengths to enhance their openness, using many different ICTs. Seoul at night by jonasginter.
Ed: You examine the influence of citizens’ use of online mass media on levels of trust in government. In brief, what did you find?

Greg: As I explain in the article, there is a common belief that mass media outlets, and especially online mass media outlets, often portray government in a negative light in an effort to pique the interest of readers. This tendency of media outlets to engage in ‘bureaucracy bashing’ is thought, in turn, to detract from the public’s support for their government. The basic assumption underpinning this relationship is that the more negative information on government there is, the more negative public opinion will be. However, in my analyses, I found evidence of a positive indirect relationship between citizens’ use of online mass media outlets and their levels of trust in government. Interestingly, however, the more frequently citizens used online mass media outlets for information about their government, the weaker this association became. These findings challenge conventional wisdom that suggests greater exposure to mass media outlets will result in more negative perceptions of the public sector.

Ed: So you find that the particular positive or negative spin of the actual message may not be as important as the individuals’ sense that they are aware of the activities of the public sector. That’s presumably good news — both for government, and for efforts to ‘open it up’?

Greg: Yes, I think it can be. However, a few important caveats apply. First, the positive relationship between online mass media use and perceptions of government tapers off as respondents make more frequent use of online mass media outlets. In the study, I interpreted this to mean that exposure to mass media had less of an influence upon those who were more aware of public affairs, and more of an influence upon those who were less aware of public affairs. Therefore, there is something of a diminishing returns aspect to this relationship. Second, this study was not able to account for the valence (ie how positive or negative the information is) of information respondents were exposed to when using online mass media. While some attempts were made to control for valence by adding different control variables, further research drawing upon experimental research designs would be useful in substantiating the relationship between the valence of information disseminated by mass media outlets and citizens’ perceptions of their government.
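To illustrate the “diminishing returns” pattern Greg describes, here is a minimal sketch of how such a tapering association might be specified in a regression: a quadratic term in media use. The data, variable names, scales, and coefficients below are simulated and purely hypothetical; this is not the article’s model or data.

```python
# Hedged illustration: a quadratic regression reproducing a
# "positive but tapering" association on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500

# Hypothetical frequency of online mass media use (0-10 scale).
media_use = rng.uniform(0, 10, n)

# Simulated trust scores: a positive effect that flattens at high use.
trust = 2.0 + 0.6 * media_use - 0.04 * media_use**2 + rng.normal(0, 1, n)

# Including media_use squared lets the slope decline with use:
# a positive linear and a negative quadratic coefficient together
# produce the diminishing-returns pattern.
X = sm.add_constant(np.column_stack([media_use, media_use**2]))
results = sm.OLS(trust, X).fit()
print(results.params)
```

On such simulated data the fitted linear coefficient is positive and the quadratic coefficient negative, so each additional unit of media use adds less to predicted trust, which is the tapering described above.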

Ed: Do you think governments are aware of this relationship — ie that an indirect effect of being more open and present in the media might be increased citizen trust — and that they are responding accordingly?

Greg: I think that there is a general idea that more communication is better than less communication. However, at the same time there is a lot of evidence to suggest that some of the more complex aspects of the relationship between openness and trust in government go unaccounted for in current attempts by public sector organizations to become more open and transparent. As a result, this tool that public organizations have at their disposal is not being used as effectively as it could be, and in some instances is being used in ways that are counterproductive – that is, actually decreasing citizen trust in government. Therefore, in order for governments to translate greater openness into greater trust in government, more refined applications are necessary.

Ed: I know there are various initiatives in the UK — open government data / FoIs / departmental social media channels etc. — aimed at a general opening up of government processes. How open is the Korean government? Is a greater openness something they might adopt (or are adopting?) as part of a general aim to have a more informed and involved — and therefore hopefully more trusting — citizenry?

Greg: The South Korean Government, as well as the Seoul Metropolitan Government, have gone to great lengths to enhance their openness. Their strategy has made use of different ICTs, such as e-government websites, social media accounts, non-emergency call centers, and smart phone apps. As a result, many now say that attempts by the Korean Government to become more open are more advanced than those in many other parts of the developed world. However, the persistent issue in South Korea, as elsewhere, is whether these attempts are having the intended impact. A lot of empirical research has found, for example, that various attempts at becoming more open by many governments around the world have fallen short of creating a more informed and involved citizenry.

Ed: Finally — is there much empirical work or data in this area?

Greg: While there is a lot of excellent empirical research from the field of political science that has examined how mass media use relates to citizens’ perceptions of politicians, political preferences, or their levels of political knowledge, this topic has received almost no attention at all in public management/administration. This lack of discussion is surprising, given mass media has long served as a key means of enhancing the transparency and accountability of public organizations.

Read the full article: Porumbescu, G. (2013) Assessing the Link Between Online Mass Media and Trust in Government: Evidence From Seoul, South Korea. Policy & Internet 5 (4) 418-443.


Greg Porumbescu was talking to blog editor David Sutcliffe.

Gregory Porumbescu is an Assistant Professor at the Northern Illinois University Department of Public Administration. His research interests primarily relate to public sector applications of information and communications technology, transparency and accountability, and citizens’ perceptions of public service provision.

Past and Emerging Themes in Policy and Internet Studies https://ensr.oii.ox.ac.uk/past-and-emerging-themes-in-policy-and-internet-studies/ Mon, 12 May 2014 09:24:59 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2673
We can’t understand, analyze or make public policy without understanding the technological, social and economic shifts associated with the Internet. Image from the (post-PRISM) “Stop Watching Us” Berlin Demonstration (2013) by mw238.

In the journal’s inaugural issue, founding Editor-in-Chief Helen Margetts outlined what are essentially two central premises behind Policy & Internet’s launch. The first is that “we cannot understand, analyze or make public policy without understanding the technological, social and economic shifts associated with the Internet” (Margetts 2009, 1). It is simply not possible to consider public policy today without some regard for the intertwining of information technologies with everyday life and society. The second premise is that the rise of the Internet is associated with shifts in how policy itself is made. In particular, she proposed that impacts of Internet adoption would be felt in the tools through which policies are effected, and the values that policy processes embody.

The purpose of the Policy and Internet journal was to take up these two challenges: the public policy implications of Internet-related social change, and Internet-related changes in policy processes themselves. In recognition of the inherently multi-disciplinary nature of policy research, the journal is designed to act as a meeting place for all kinds of disciplinary and methodological approaches. Helen predicted that methodological approaches based on large-scale transactional data, network analysis, and experimentation would turn out to be particularly important for policy and Internet studies. Driving the advancement of these methods was therefore the journal’s third purpose. Today, the journal has reached a significant milestone: over one hundred high-quality peer-reviewed articles published. This seems an opportune moment to take stock of what kind of research we have published in practice, and see how it stacks up against the original vision.

At the most general level, the journal’s articles fall into three broad categories: the Internet and public policy (48 articles), the Internet and policy processes (51 articles), and discussion of novel methodologies (10 articles). The first of these categories, “the Internet and public policy,” can be further broken down into a number of subcategories. One of the most prominent of these streams is fundamental rights in a mediated society (11 articles), which focuses particularly on privacy and freedom of expression. Related streams are children and child protection (six articles), copyright and piracy (five articles), and general e-commerce regulation (six articles), including taxation. A recently emerged stream in the journal is hate speech and cybersecurity (four articles). Of course, an enduring research stream is Internet governance, or the regulation of technical infrastructures and economic institutions that constitute the material basis of the Internet (seven articles). In recent years, the research agenda in this stream has been influenced by national policy debates around broadband market competition and network neutrality (Hahn and Singer 2013). Another enduring stream deals with the Internet and public health (eight articles).

Looking specifically at “the Internet and policy processes” category, the largest stream is e-participation, or the role of the Internet in engaging citizens in national and local government policy processes, through methods such as online deliberation, petition platforms, and voting advice applications (18 articles). Two other streams are e-government, or the use of Internet technologies for government service provision (seven articles), and e-politics, or the use of the Internet in mainstream politics, such as election campaigning and communications of the political elite (nine articles). Another stream that has gained pace during recent years is online collective action, or the role of the Internet in activism, ‘clicktivism,’ and protest campaigns (16 articles). Last year the journal published a special issue on online collective action (Calderaro and Kavada 2013), and the forthcoming issue includes an invited article on digital civics by Ethan Zuckerman, director of MIT’s Center for Civic Media, with commentary from prominent scholars of Internet activism. A trajectory discernible in this stream over the years is a movement from discussing mere potentials towards analyzing real impacts—including critical analyses of the sometimes inflated expectations and “democracy bubbles” created by digital media (Shulman 2009; Karpf 2010; Bryer 2011).

The final category, discussion of novel methodologies, consists of articles that develop, analyze, and reflect critically on methodological innovations in policy and Internet studies. Empirical articles published in the journal have made use of a wide range of conventional and novel research methods, from interviews and surveys to automated content analysis and advanced network analysis methods. But of those articles where methodology is the topic rather than merely the tool, the majority deal with so-called “big data,” or the use of large-scale transactional data sources in research, commerce, and evidence-based public policy (nine articles). The journal recently devoted a special issue to the potentials and pitfalls of big data for public policy (Margetts and Sutcliffe 2013), based on selected contributions to the journal’s 2012 big data conference: Big Data, Big Challenges? In general, the notion of data science and public policy is a growing research theme.

This brief analysis suggests that research published in the journal over the last five years has indeed followed the broad contours of the original vision. The two challenges, namely policy implications of Internet-related social change and Internet-related changes in policy processes, have both been addressed. In particular, research has addressed the implications of the Internet’s increasing role in social and political life. The journal has also furthered the development of new methodologies, especially the use of online network analysis techniques and large-scale transactional data sources (aka ‘big data’).

As expected, authors from a wide range of disciplines have contributed their perspectives to the journal, and engaged with other disciplines, while retaining the rigor of their own specialisms. The geographic scope of the contributions has been truly global, with authors and research contexts from six continents. I am also pleased to note that a characteristic common to all the published articles is polish; this is no doubt in part due to the high level of editorial support that the journal is able to afford to authors, including copyediting. The justifications for the journal’s establishment five years ago have clearly been borne out, so that the journal now performs an important function in fostering and bringing together research on the public policy implications of an increasingly Internet-mediated society.

And what of my own research interests as an editor? In the inaugural editorial, Helen Margetts highlighted work, finance, exchange, and economic themes in general as being among the prominent areas of Internet-related social change that are likely to have significant future policy implications. I think for the most part, these implications remain to be addressed, and this is an area that the journal can encourage authors to tackle better. As an editor, I will work to direct attention to this opportunity, and welcome manuscript submissions on all aspects of Internet-enabled economic change and its policy implications. This work will be kickstarted by the journal’s 2014 conference (26-27 September), which this year focuses on crowdsourcing and online labor.

Our published articles will continue to be highlighted here in the journal’s blog. Launched last year, we believe this blog will help to expand the reach and impact of research published in Policy and Internet to the wider academic and practitioner communities, promote discussion, and increase authors’ citations. After all, publication is only the start of an article’s public life: we want people reading, debating, citing, and offering responses to the research that we, and our excellent reviewers, feel is important, and worth publishing.

Read the full editorial: Lehdonvirta, V. (2014) Past and Emerging Themes in Policy and Internet Studies. Policy & Internet 6 (2) 109-114.

References

Bryer, T.A. (2011) Online Public Engagement in the Obama Administration: Building a Democracy Bubble? Policy & Internet 3 (4).

Calderaro, A. and Kavada, A. (2013) Challenges and Opportunities of Online Collective Action for Policy Change. Policy & Internet 5 (1).

Hahn, R. and Singer, H. (2013) Is the U.S. Government’s Internet Policy Broken? Policy & Internet 5 (3) 340-363.

Karpf, D. (2010) Online Political Mobilization from the Advocacy Group’s Perspective: Looking Beyond Clicktivism. Policy & Internet 2 (4) 7-41.

Margetts, H. (2009) The Internet and Public Policy. Policy & Internet 1 (1).

Margetts, H. and Sutcliffe, D. (2013) Addressing the Policy Challenges and Opportunities of ‘Big Data.’ Policy & Internet 5 (2) 139-146.

Shulman, S.W. (2009) The Case Against Mass E-mails: Perverse Incentives and Low Quality Public Participation in U.S. Federal Rulemaking. Policy & Internet 1 (1) 23-53.

The challenges of government use of cloud services for public service delivery https://ensr.oii.ox.ac.uk/challenges-government-use-cloud-services-public-service-delivery/ Mon, 24 Feb 2014 08:50:15 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2584
Cloud services are not meant to recognize national frontiers, but to thrive on economies of scale and scope globally — presenting particular challenges to government. Image by NASA Goddard Photo and Video

Ed: You open your recent Policy and Internet article by noting that “the modern treasury of public institutions is where the wealth of public information is stored and processed” … what are the challenges of government use of cloud services?

Kristina: The public sector is a very large user of information technology but data handling policies, vendor accreditation and procurement often predate the era of cloud computing. Governments first have to put in place new internal policies to ensure the security and integrity of their information assets residing in the cloud. Through this process governments are discovering that their traditional notions of control are challenged because cloud services are virtual, dynamic, and operate across borders.

One central concern of those governments that are leading in the public sector’s migration to cloud computing is how to retain unconditional sovereignty over their data — after all, public sector information embodies the past, the present, and the future of a country. The ability to govern presupposes command and control over government information to the extent necessary to deliver public services, protect citizens’ personal data and to ensure the integrity of the state, among other considerations. One could even assert that in today’s interconnected world national sovereignty is conditional upon adequate data sovereignty.

Ed: A basic question: if a country’s health records (in the cloud) temporarily reside on / are processed on commercial servers in a different country: who is liable for the integrity and protection of that data, and under whose legal scheme? ie can a country actually technically lose sovereignty over its data?

Kristina: There is always one line of responsibility flowing from the contract with the cloud service provider. However, when these health records cross borders they are effectively governed under a third country’s jurisdiction, where disclosure authorities vis-à-vis the cloud service provider can likely be invoked. In some situations the geographical whereabouts of the public health records are not even that important, because certain countries’ legislation has extra-territorial reach and it suffices that the cloud service provider is under an obligation to turn over data in its custody. In both situations countries’ exclusive sovereignty over public sector information would be contested. And service providers may find themselves in a Catch-22 when they have to decide their legitimate course of action.

Ed: Is there a sense of how many government services are currently hosted “in the cloud”; and have there been any known problems so far about access and jurisdiction?

Kristina: The US has published some targets but otherwise we have no sense of the magnitude of government cloud computing. It is certainly an ever-growing phenomenon in leading countries: for example, both the US Federal Cloud Computing Strategy and the United Kingdom’s G-Cloud Framework leverage public sector cloud migration with a cloud-first strategy, and they operate government application stores where public authorities can self-provision themselves with cloud-based IT services. Until now, the issues of access and jurisdiction have primarily been discussed in terms of risk (as I showed in my article), with governments adopting strategies to keep their public records within national territory, even if they are residing on a cloud service.

Ed: Is there anything about the cloud that is actually functionally novel; ie that calls for new regulation at national or international level, beyond existing data legislation?

Kristina: Cloud services are not meant to recognize national frontiers, but to thrive on economies of scale and scope globally. The legal risks arising from its transnationality won’t be solved by more legislation at the national level; even if this is a pragmatic solution, the resurrection of territoriality in cloud service contracts with the government conflicts with scalability. My article explores various avenues at the international level, for example extending diplomatic immunity, international agreements for cross-border data transfers, and reliance on mutual legal assistance treaties but in my opinion they do not satisfyingly restore a country’s quest for data sovereignty in the cloud context. In the EU a regional approach could be feasible and I am very much drawn by the idea of a European cloud environment where common information assurance principles prevail — also curtailing individual member states’ disclosure authorities.

Ed: As the economies of scale of cloud services kick in, do you think we will see increasing commercialisation of public record storing and processing (with a possible further erosion of national sovereignty)?

Kristina: Where governments have the capability they adopt a differentiated, risk-based approach corresponding to the information’s security classification: data in the public domain or that have low security markings are suitable for cloud services without further restrictions. Data that has medium security markings may still be processed on cloud services but are often confined to the national territory. Beyond this threshold, i.e. for sensitive and classified information, cloud services are not an option, judging from analysis of the emerging practice in the U.S., the UK, Canada and Australia. What we will increasingly see is IT-outsourcing that is labelled “cloud” despite not meeting the specifications of a true cloud service. Some governments are more inclined to introduce dedicated private “clouds” that are not fully scalable, in other words central data centres. For a vast number of countries, including developing ones, the options are further limited because there is no local cloud infrastructure and/or the public sector cannot afford to contract a dedicated government cloud. In this situation I could imagine an increasing reliance on transnational cloud services, with all the attendant pros and cons.
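The tiered, risk-based approach Kristina describes can be summarized as a simple decision rule. The sketch below is a toy encoding of that logic; the tier names and rules are illustrative assumptions drawn from the interview, not any government’s actual accreditation policy.

```python
# Hedged sketch: mapping a record's security classification to
# permissible hosting, per the tiered approach described above.
# Tier names and rules are illustrative, not an official policy.
def permissible_hosting(classification: str) -> str:
    if classification in ("public", "low"):
        return "any cloud service (no territorial restriction)"
    if classification == "medium":
        return "cloud service confined to national territory"
    # sensitive or classified information
    return "no public cloud; dedicated government data centre only"

for level in ("public", "low", "medium", "sensitive"):
    print(f"{level}: {permissible_hosting(level)}")
```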

Ed: How do these sovereignty / jurisdiction / data protection questions relate to the revelations around the NSA’s PRISM surveillance programme?

Kristina: It only confirms that disclosure authorities are extensively used for intelligence gathering and that legal risks have to be taken as seriously as technical vulnerabilities. As a consequence of the Snowden revelations it is quite likely that the sensitivity of governments (as well as private sector organizations) to the impact of foreign jurisdictions will become even more pronounced. For example, there are reports estimating that the lack of trust in US-based cloud services is bound to affect the industry’s growth.

Ed: Could this usher in a whole new industry of ‘guaranteed’ national clouds…? ie how is the industry responding to these worries?

Kristina: This is already happening; in particular, European and Asian players are being very vocal in terms of marketing their regional or national cloud offerings as compatible with specific jurisdiction or national data protection frameworks.

Ed: And finally, who do you think is driving the debate about sovereignty and cloud services: government or industry?

Kristina: In the Western world it is government, with its special security needs and buying power, to which industry is responsive. As a nascent technology, cloud services nonetheless thrive on business with governments, because this opens new markets where in-house IT services previously dominated the public sector.


Read the full paper: Kristina Irion (2012) Government Cloud Computing and National Data Sovereignty. Policy & Internet 4 (3/4) 40–71.

Kristina Irion was talking to blog editor David Sutcliffe.

Technological innovation and disruption was a big theme of the WEF 2014 in Davos: but where was government? https://ensr.oii.ox.ac.uk/technological-innovation-disruption-was-big-theme-wef-2014-davos-but-where-was-government/ Thu, 30 Jan 2014 11:23:09 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2536
The World Economic Forum engages business, political, academic and other leaders of society to shape global, regional and industry agendas. Image by World Economic Forum.

Last week, I was at the World Economic Forum in Davos, the first time that the Oxford Internet Institute has been represented there. Being closeted in a Swiss ski resort with 2,500 of the great, the good and the super-rich provided me with a good chance to see what the global elite are thinking about technological change and its role in ‘The Reshaping of the World: Consequences for Society, Politics and Business’, the stated focus of the WEF Annual Meeting in 2014.

What follows are those impressions that relate to public policy and the internet, and reflect only my own experience there. Outside the official programme there are whole hierarchies of breakfasts, lunches, dinners and other events, most of which a newcomer to Davos finds difficult to discover, and some of which require one to be at least a president of a small to medium-sized state — or Matt Damon.

There was much talk of hyperconnectivity, spirals of innovation, S-curves and exponential growth of technological diffusion, digitalization and disruption. As you might expect, the pace of these was emphasized most by those participants from the technology industry. The future of work in the face of leaps forward in robotics was a key theme, drawing on the new book by Erik Brynjolfsson and Andrew McAfee, The Second Machine Age: Work, Progress and Prosperity in a Time of Brilliant Technologies, which is just out in the US. There were several sessions on digital health and the eventual fruition of decades of pilots in telehealth (a banned term now, apparently), as applications based on mobile technologies start to be used more widely. Indeed, all delegates were presented with a ‘Jawbone’ bracelet which tracks the wearer’s exercise and sleep patterns (7,801 steps so far today). And of course there was much talk about the possibilities afforded by big data, if not quite as much as I expected.

The University of Oxford was represented in an ‘Ideas Lab’, convened by the Oxford Martin School on Data, Machines and the Human Factor. This format involves each presenter talking for five minutes in front of their 15 selected images rolling at 20 seconds each, with no control over the timing (described by the designer of the format before the session as ‘waterboarding for academics’, due to the conciseness and brevity required — and I can vouch for that). It was striking how much synergy there was in the presentations by the health engineer Lionel Tarassenko (talking about developments in digital healthcare in the home), the astrophysicist Chris Lintott (on crowdsourcing of science) and myself talking about collective action and mobilization in the social media age. We were all talking about the new possibilities that the internet and social media afford for citizens to contribute to healthcare, scientific knowledge and political change. Indeed, I was surprised that the topics of collective action and civic engagement, probably not traditional concerns of Davos, attracted widespread interest, including a session on ‘The New Citizen’ with the founders of Avaaz.

Of course there was some discussion of the Snowden revelations of the data crawling activities of the US NSA and UK GCHQ, and the privacy implications. A dinner on ‘the Digital Me’ generated an interesting discussion on privacy in the age of social media, reflecting a growing and welcome (to me anyway) pragmatism with respect to issues often hotly contested. As one participant put it, in an age of imperfect, partial information, we become used to the idea that what we read on Facebook is often, through its relation to the past, irrelevant to the present time and not to be taken into consideration when (for example) considering whether to offer someone a job. The wonderful danah boyd gave some insight from her new book It’s Complicated: the social lives of networked teens, from which emerged a discussion of a ‘taxonomy of privacy’ and the importance of considering the use to which data is put, as opposed to just the possession or collection of the data – although this could be dangerous ground, in the light of the Snowden revelations.

There was more talk of the future than the past. I participated in one dinner discussion of the topic of ‘Rethinking Living’ in 50 years’ time, a timespan challenged by Google Chairman Eric Schmidt’s argument earlier in the day that five years was an ‘infinite’ amount of time at the current speed of technological innovation. The after-dinner discussion was surprisingly fun, and at my table at least we found ourselves drawn back to the past, wondering if the rise of preventative health care and the new localism that connectivity affords might look like a return to the pre-industrial age. When it came to the summing up and drawing out the implications for government, I was struck by how most elements of any trajectory of change exposed a growing disconnect between citizens, or business, on the one hand – and government on the other.

This was the one topic that for me was notably absent from WEF 2014: the nature of government in this rapidly changing world, in spite of the three pillars — politics, society, and business — of the theme of the conference noted above. At one lunch convened by McKinsey that was particularly ebullient regarding the ceaseless pace of technological change, I pointed out that government was only at the beginning of the S-curve, or perhaps that such a curve had no relevance for government. Another delegate asked how the assembled audience might help government to manage better here, and another pointed out that globally, we were investing less and less in government at a time when it needed more resources, including far higher remuneration for top officials. But the panellists were less enthusiastic about picking up on these points.

As I have discussed previously on this blog and elsewhere, we are in an era where governments struggle to innovate technologically or to incorporate social media into organizational processes, where digital services lag far behind those of business, where the big data revolution is passing government by (apart from open data, which is not the same thing as big data, see my Guardian blog post on this issue). Pockets of innovation like the UK Government Digital Service push for governmentwide change, but we are still seeing major policy initiatives such as Obama’s healthcare plans in the US or Universal Credit in the UK flounder on technological grounds. Yet there were remarkably few delegates at the WEF representing the executive arm of government, particularly for the UK. So on the relationship between government and citizens in an age of rapid technological change, it was citizens – rather than governments – and, of course, business (given the predominance of CEOs) that received the attention of this high-powered crowd.

At the end of the ‘Rethinking Living’ dinner, a participant from another table said to me that in contrast to the participants from the technology industry, he thought 50 years was a rather short time horizon. As a landscape architect, designing with trees that take 30 years to grow, he had no problem imagining how things would look on this timescale. It occurred to me that there could be an analogy here with government, which likewise could take this kind of timescale to catch up with the technological revolution. But by that time, technology will have moved on, and it may be that governments cannot afford such a relaxed pace of catching up with their citizens and the business world. Perhaps this should be a key theme for future forums.


Helen Margetts is the Director of the OII, and Professor of Society and the Internet. She is a political scientist specialising in digital era governance and politics.

Five recommendations for maximising the relevance of social science research for public policy-making in the big data era https://ensr.oii.ox.ac.uk/five-recommendations-for-maximising-the-relevance-of-social-science-research-for-public-policy-making-in-the-big-data-era/ Mon, 04 Nov 2013 10:30:30 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2196 As I discussed in a previous post on the promises and threats of big data for public policy-making, public policy making has entered a period of dramatic change. Widespread use of digital technologies, the Internet and social media means citizens and governments leave digital traces that can be harvested to generate big data. This increasingly rich data environment poses both promises and threats to policy-makers.

So how can social scientists help policy-makers in this changed environment, ensuring that social science research remains relevant? Social scientists have a good record of policy influence; indeed, in the UK it is a better one than that of other academic fields, including medicine, as recent research from the LSE Public Policy Group has shown. Big data holds major promise for social science, which should enable us to further extend our record in policy research. We have access to a cornucopia of data of a kind more like that traditionally associated with so-called ‘hard’ science. Rather than being dependent on surveys, the traditional data staple of empirical social science, social media such as Wikipedia, Twitter, Facebook, and Google Search present us with the opportunity to scrape, generate, analyse and archive comparative data of unprecedented quantity. For example, at the OII over the last four years we have been generating a dataset of all petition signing in the UK and US, which contains the joining rate (updated every hour) for the 30,000 petitions created in the last three years. As a political scientist, I am very excited by this kind of data (up to now, we have had big data like this only for voting, and that only at election time), which will allow us to create a complete ecology of petition signing, one of the more popular acts of political participation in the UK. Likewise, we can look at the entire transaction history of online organizations like Wikipedia, or map the link structure of government’s online presence.
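As a concrete illustration of this kind of data collection, the sketch below polls a petitions feed hourly and stores signature counts, which is enough to reconstruct joining rates over time. The endpoint URL and JSON field names are hypothetical placeholders; the actual OII collection pipeline is not documented here.

```python
# Hedged sketch of hourly petition-signing data collection.
# FEED_URL and the JSON fields ("id", "signature_count") are
# hypothetical assumptions, not a real government API.
import json
import sqlite3
import time
from urllib.request import urlopen

FEED_URL = "https://example.gov/petitions.json"

db = sqlite3.connect("petitions.db")
db.execute("""CREATE TABLE IF NOT EXISTS signatures
              (petition_id TEXT, observed_at INTEGER, count INTEGER)""")

def poll_once():
    """Record one snapshot of every petition's signature count."""
    with urlopen(FEED_URL) as resp:
        petitions = json.load(resp)  # assumed: a list of petition objects
    now = int(time.time())
    for p in petitions:
        db.execute("INSERT INTO signatures VALUES (?, ?, ?)",
                   (p["id"], now, p["signature_count"]))
    db.commit()

while True:  # one snapshot per hour yields the hourly joining rate
    poll_once()
    time.sleep(3600)
```

Differencing successive counts for a given petition then gives its hourly joining rate, the quantity described above.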

But big data holds threats for social scientists too. The technological challenge is ever present. To generate their own big data, researchers and students must learn to code, and for some that is an alien skill. At the OII we run a course on Digital Social Research that all our postgraduate students can take; but not all social science departments could either provide such a course, or persuade their postgraduate students that they needed it. Ours, who study the social science of the Internet, are obviously predisposed to do so. And big data analysis requires multi-disciplinary expertise. Our research team working on petitions data includes a computer scientist (Scott Hale), a physicist (Taha Yasseri) and a political scientist (myself). I can’t imagine doing this sort of research without such technical expertise, and as a multi-disciplinary department we are (reasonably) free to recruit this type of research faculty. But not all social science departments can promise a research career for computer scientists, or physicists, or any of the other disciplinary specialists that might be needed to tackle big data problems.

Five Recommendations for Social Scientists

So, how can social scientists overcome these challenges, and thereby be in a good position to help policy-makers tackle their own barriers to making the most of the possibilities afforded by big data? Here are five recommendations:

1. Accept that multi-disciplinary research teams are going to become the norm for social science research, extending beyond social science disciplines into the life sciences, mathematics, physics, and engineering. At Policy and Internet’s 2012 Big Data conference, the keynote speaker Duncan Watts (physicist turned sociologist) called for a ‘dating agency’ for engineers and social scientists – with the former providing the technological expertise, and the latter identifying the important research questions. We need to make sure that forums exist where social scientists and technologists meet and discuss big data research at the earliest stages, so that research projects and programmes incorporate the core competencies of both.

2. We need to provide the normative and ethical basis for policy decisions in the big data era. That means bringing normative political theorists and philosophers of information into our research teams. The government has committed £65 million to big data research funding, but it seems likely that any successful research proposals will have a strong ethics component embedded in the research programme, rather than an ethics add-on or afterthought.

3. Training in data science. Many leading US universities are now admitting undergraduates to data science courses, but these courses lack social science input. Of the 20 US master’s courses in big data analytics compiled by Information Week, nearly all came from computer science or informatics departments. Social science research training needs to incorporate coding and analysis skills of the kind these courses provide, but with a social science focus. If we as social scientists leave the training to computer scientists, we will find that the new cadre of data scientists tends to leave out social science concerns or questions.

4. Bringing policy makers and academic researchers together to tackle the challenges that big data present. Last month the OII and Policy and Internet convened a workshop in Harvard on Responsible Research Agendas for Public Policy in the Big Data Era, which included various leading academic researchers in the government and big data field, and government officials from the Census Bureau, the Federal Reserve Board, the Bureau of Labor Statistics, and the Office of Management and Budget (OMB). The discussions revealed that there is a continual procession of major events on big data in Washington DC (usually with a corporate or scientific research focus) to which US federal officials are invited, but also how few were really dedicated to tackling the distinctive issues that face government agencies such as those represented around the table.

5. Taking forward theoretical development in social science, incorporating big data insights. I recently spoke at the Oxford Analytica Global Horizons conference, at a session on Big Data. One of the few policy-makers (in proportion to corporate representatives) in the audience asked the panel: “Where is the theory?” As social scientists, we need to respond to that question, and fast.


This post is based on discussions at the Responsible Research Agendas for Public Policy in the Era of Big Data workshop, and at the Political Studies Association event Why Universities Matter: How Academic Social Science Contributes to Public Policy Impact, held at the LSE on 26 September 2013.

Helen Margetts is the Director of the OII, and Professor of Society and the Internet. She is a political scientist specialising in e-government and digital era governance and politics, investigating the nature and implications of relationships between governments, citizens and the Internet and related digital technologies in the UK and internationally.

The promises and threats of big data for public policy-making https://ensr.oii.ox.ac.uk/promises-threats-big-data-for-public-policy-making/ Mon, 28 Oct 2013 15:07:29 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2299 The environment in which public policy is made has entered a period of dramatic change. Widespread use of digital technologies, the Internet and social media means both citizens and governments leave digital traces that can be harvested to generate big data. Policy-making takes place in an increasingly rich data environment, which poses both promises and threats to policy-makers.

On the promise side, such data offers a chance for policy-making and implementation to be more citizen-focused, taking account of citizens’ needs, preferences and actual experience of public services, as recorded on social media platforms. As citizens express policy opinions on social networking sites such as Twitter and Facebook; rate or rank services or agencies on government applications such as NHS Choices; or enter discussions on the burgeoning range of social enterprise and NGO sites, such as Mumsnet, 38 degrees and patientopinion.org, they generate a whole range of data that government agencies might harvest to good use. Policy-makers also have access to a huge range of data on citizens’ actual behaviour, as recorded digitally whenever citizens interact with government administration or undertake some act of civic engagement, such as signing a petition.

Data mined from social media or administrative operations in this way can also enable government agencies to monitor – and improve – their own performance, for example through log usage data of their own electronic presence or transactions recorded on internal information systems, which are increasingly interlinked. And they can use data from social media for self-improvement, by understanding what people are saying about government, and which policies, services or providers are attracting negative opinions and complaints, enabling identification of a failing school, hospital or contractor, for example. They can solicit such data via their own sites, or those of social enterprises. And they can find out what people are concerned about or looking for, from the Google Search API or Google Trends, which record the search patterns of a huge proportion of internet users.

As for threats, big data is technologically challenging for governments, particularly those that have always struggled with large-scale information systems and technology projects. The UK government has long been a world leader in this regard and recent events have only consolidated its reputation. Governments have long suffered from information technology skill shortages, and the complex skill sets required for big data analytics pose a particularly acute challenge. Even in the corporate sector, over a third of respondents to a recent survey of business technology professionals cited ‘Big data expertise is scarce and expensive’ as their primary concern about using big data software.

And there are particular cultural barriers to government in using social media, with the informal style and blurring of organizational and public-private boundaries which they engender. And gathering data from social media presents legal challenges, as companies like Facebook place barriers to the crawling and scraping of their sites.

More importantly, big data presents new moral and ethical dilemmas to policy makers. For example, it is possible to carry out probabilistic policy-making, where policy is made on the basis of what a small segment of individuals will probably do, rather than what they have done. Predictive policing has had some success, particularly in California, where robberies declined by a quarter after use of the ‘PredPol’ policing software, but it can lead to a “feedback loop of injustice” as one privacy advocacy group put it, as policing resources are targeted at increasingly small socio-economic groups. What responsibility does the state have to devote disproportionately more – or less – resources to the education of those school pupils who are, probabilistically, almost certain to drop out of secondary education? Such challenges are greater for governments than corporations. We (reasonably) happily trade privacy to allow Tesco and Facebook to use our data on the basis it will improve their products, but if government tries to use social media to understand citizens and improve its own performance, will it be accused of spying on its citizenry in order to quash potential resistance?

And of course there is an image problem for government in this field – discussion of big data and government puts the word ‘big’ dangerously close to the word ‘government’ and that is an unpopular combination. Policy-makers’ responses to Snowden’s revelations of the US Prism and UK Tempora programmes have done nothing to improve this image, with their focus on the use of big data to track down individuals and groups involved in acts of terrorism and criminality – rather than on anything to make policy-making better, or to use the wealth of information that these programmes collect for the public good.

However, policy-makers have no choice but to tackle some of these challenges. Big data has been the hottest trend in the corporate world for some years now, and commentators from IBM to the New Yorker are starting to talk about the big data ‘backlash’. Government has been far slower to recognize the advantages for policy-making and services. But in some policy sectors, big data poses very fundamental questions which call for an answer: how should governments conduct a census, or produce labour statistics, for example, in the age of big data? Policy-makers will need to move fast to beat the backlash.


This post is based on discussions at the Responsible Research Agendas for Public Policy in the Era of Big Data workshop.

Helen Margetts is the Director of the OII, and Professor of Society and the Internet. She is a political scientist specialising in digital era governance and politics.

Can text mining help handle the data deluge in public policy analysis? https://ensr.oii.ox.ac.uk/can-text-mining-help-handle-data-deluge-public-policy-analysis/ Sun, 27 Oct 2013 12:29:01 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2273 Policy makers today must contend with two inescapable phenomena. On the one hand, there has been a major shift in the policies of governments concerning participatory governance – that is, engaged, collaborative, and community-focused public policy. On the other, a significant proportion of government activities have now moved online, bringing about “a change to the whole information environment within which government operates” (Margetts 2009, 6).

Indeed, the Internet has become the main medium of interaction between government and citizens, and numerous websites offer opportunities for online democratic participation. The Hansard Society, for instance, regularly runs e-consultations on behalf of UK parliamentary select committees. For example, e-consultations have been run on the Climate Change Bill (2007), the Human Tissue and Embryo Bill (2007), and on domestic violence and forced marriage (2008). Councils and boroughs also regularly invite citizens to take part in online consultations on issues affecting their area. The London Borough of Hammersmith and Fulham, for example, recently asked its residents for their views on Sex Entertainment Venues and Sex Establishment Licensing policy.

However, citizen participation poses certain challenges for the design and analysis of public policy. In particular, governments and organizations must demonstrate that all opinions expressed through participatory exercises have been duly considered and carefully weighted before decisions are reached. One method for partly automating the interpretation of large quantities of online content typically produced by public consultations is text mining. Software products currently available range from those primarily used in qualitative research (integrating functions like tagging, indexing, and classification), to those integrating more quantitative and statistical tools, such as word frequency and cluster analysis (more information on text mining tools can be found at the National Centre for Text Mining).

While these methods have certainly attracted criticism and skepticism in terms of the interpretability of the output, they offer four important advantages for the analyst: namely categorization, data reduction, visualization, and speed.

1. Categorization. When analyzing the results of consultation exercises, analysts and policymakers must make sense of the high volume of disparate responses they receive; text mining supports the structuring of large amounts of this qualitative, discursive data into predefined or naturally occurring categories by storage and retrieval of sentence segments, indexing, and cross-referencing. Analysis of sentence segments from respondents with similar demographics (eg age) or opinions can itself be valuable, for example in the construction of descriptive typologies of respondents.

2. Data Reduction. Data reduction techniques include stemming (reduction of a word to its root form), combining of synonyms, and removal of non-informative “tool” or stop words. Hierarchical classifications, cluster analysis, and correspondence analysis methods allow the further reduction of texts to their structural components, highlighting the distinctive points of view associated with particular groups of respondents (a minimal code sketch of these reduction and visualization steps follows this list).

3. Visualization. Important points and interrelationships are easy to miss when read by eye, and rapid generation of visual overviews of responses (eg dendrograms, 3D scatter plots, heat maps, etc.) make large and complex datasets easier to comprehend in terms of identifying the main points of view and dimensions of a public debate.

4. Speed. Speed depends on whether a special dictionary or vocabulary needs to be compiled for the analysis, and on the amount of coding required. Coding is usually relatively fast and straightforward, and the succinct overview of responses provided by these methods can reduce the time needed to analyze consultation responses.
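To make these steps concrete, below is a minimal sketch of a data reduction and visualization pipeline of the kind described in points 2 and 3, assuming consultation responses are available as plain strings. The library choices (NLTK, scikit-learn, SciPy) and the toy responses are illustrative assumptions, not the tools used by the authors.

```python
# Hedged sketch: stop-word removal, stemming, TF-IDF weighting,
# hierarchical clustering, and a dendrogram overview.
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.cluster.hierarchy import linkage, dendrogram
import matplotlib.pyplot as plt

# Toy consultation responses, invented for illustration.
responses = [
    "Licensing hours should be restricted near schools.",
    "Restricting licensing hours will hurt local businesses.",
    "Venues near schools need much stricter licensing conditions.",
    "Local businesses depend on flexible licensing conditions.",
]

stemmer = PorterStemmer()

def preprocess(text):
    # Lowercase, strip punctuation, and stem each token so that, e.g.,
    # "restricted" and "restricting" both reduce to "restrict".
    tokens = (w.strip(".,;:!?") for w in text.lower().split())
    return " ".join(stemmer.stem(w) for w in tokens if w)

# Stop-word removal and TF-IDF weighting reduce each response
# to its informative terms (data reduction).
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(preprocess(r) for r in responses)

# Hierarchical clustering groups responses expressing similar views;
# the dendrogram provides the visual overview described in point 3.
Z = linkage(X.toarray(), method="ward")
dendrogram(Z, labels=[f"response {i+1}" for i in range(len(responses))])
plt.tight_layout()
plt.show()
```

The resulting dendrogram gives a quick visual indication of which responses express similar points of view, the kind of structural overview an analyst would inspect before any human-verified coding.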

Despite the above advantages of automated approaches to consultation analysis, text mining methods present several limitations. Automatic classification of responses runs the risk of missing or miscategorising distinctive or marginal points of view if sentence segments are too short, or if they rely on a rare vocabulary. Stemming can also generate problems if important semantic variations are overlooked (eg lumping together ‘ill+ness’, ‘ill+defined’, and ‘ill+ustration’). Other issues applicable to public e-consultation analysis include the danger that analysts distance themselves from the data, especially when converting words to numbers. This is quite apart from the issues of inter-coder reliability and data preparation, missing data, and insensitivity to figurative language, meaning and context, which can also result in misclassification when not human-verified.

However, when responding to criticisms of specific tools, we need to remember that different text mining methods are complementary, not mutually exclusive. A single solution to the analysis of qualitative or quantitative data would be very unlikely; and at the very least, exploratory techniques provide a useful first step that could be followed by a theory-testing model, or by triangulation exercises to confirm results obtained by other methods.

Apart from these technical issues, policy makers and analysts employing text mining methods for e-consultation analysis must also consider certain ethical issues in addition to those of informed consent, privacy, and confidentiality. First (of relevance to academics), respondents may not expect to end up as research subjects. They may simply be expecting to participate in a general consultation exercise, interacting exclusively with public officials and not indirectly with an analyst post hoc; much less to end up as a specific, traceable data point.

This has been a particularly delicate issue for healthcare professionals. Sharf (1999, 247) describes various negative experiences of following up online postings: one woman, on being contacted by a researcher seeking consent to gain insights from breast cancer patients about their personal experiences, accused the researcher of behaving voyeuristically and “taking advantage of people in distress.” Statistical interpretation of responses also presents its own issues, particularly if analyses are to be returned or made accessible to respondents.

Respondents might also be confused about, or disagree with, text mining as a method applied to their answers; indeed, it could be perceived as dehumanizing, reducing personal opinions and arguments to statistical data points. In a public consultation, respondents might feel somewhat betrayed that their views and opinions eventually result in just a dot on a correspondence analysis plot with no immediate, apparent meaning or import, at least in lay terms. The consultation organizer therefore needs to outline clearly and precisely how qualitative responses will be collated into a quantifiable account of a sample population's views.

This is an important point; in order to reduce both technical and ethical risks, researchers should ensure that their methodology combines qualitative and quantitative analyses. While many text mining techniques provide useful statistical output, the UK Government's Code of Practice on public consultation is quite explicit on the topic: "The focus should be on the evidence given by consultees to back up their arguments. Analyzing consultation responses is primarily a qualitative rather than a quantitative exercise" (2008, 12). This suggests that the perennial debate between quantitative and qualitative methodologists needs to be revisited, and better resolved, in the light of these new tools.

References

Margetts, H. 2009. “The Internet and Public Policy.” Policy & Internet 1 (1).

Sharf, B. 1999. "Beyond Netiquette: The Ethics of Doing Naturalistic Discourse Research on the Internet." In Doing Internet Research, ed. S. Jones. London: Sage.


Read the full paper: Bicquelet, A., and Weale, A. (2011) Coping with the Cornucopia: Can Text Mining Help Handle the Data Deluge in Public Policy Analysis? Policy & Internet 3 (4).

Dr Aude Bicquelet is a Fellow in LSE's Department of Methodology. Her main research interests include computer-assisted analysis, text mining methods, comparative politics, and public policy. She has published a number of journal articles in these areas and is the author of a forthcoming book, "Textual Analysis" (Sage Benchmarks in Social Research Methods, in press).

Responsible research agendas for public policy in the era of big data https://ensr.oii.ox.ac.uk/responsible-research-agendas-for-public-policy-in-the-era-of-big-data/ Thu, 19 Sep 2013 15:17:01 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2164 Last week the OII went to Harvard. Against the backdrop of a gathering storm of interest around the potential of computational social science to contribute to the public good, we sought to bring together leading social science academics with senior government agency staff to discuss its public policy potential. Supported by the OII-edited journal Policy and Internet and its owners, the Washington-based Policy Studies Organization (PSO), this one-day workshop facilitated a thought-provoking conversation between leading big data researchers such as David Lazer, Brooke Foucault-Welles and Sandra Gonzalez-Bailon, e-government experts such as Cary Coglianese, Helen Margetts and Jane Fountain, and senior agency staff from US federal bureaus including the Bureau of Labor Statistics, the Census Bureau, and the Office of Management and Budget.

It’s often difficult to appreciate the impact of research beyond the ivory tower, but what this productive workshop demonstrated is that policy-makers and academics share many similar hopes and challenges in relation to the exploitation of ‘big data’. Our motivations and approaches may differ, but insofar as the youth of the ‘big data’ concept explains the lack of common language and understanding, there is value in mutual exploration of the issues. Although it’s impossible to do justice to the richness of the day’s interactions, some of the most pertinent and interesting conversations arose around the following four issues.

Managing a diversity of data sources. In a world where our capacity to ask important questions often exceeds the availability of data to answer them, many participants spoke of the difficulties of managing a diversity of data sources. For agency staff this issue comes into sharp focus when the administrative data that is supposed to inform policy formulation is either incomplete or inadequate. Consider, for example, the challenge of regulating an economy in a situation of fundamental data asymmetry, where private sector institutions track, record and analyse every transaction, whilst the state only has access to far more basic performance metrics and accounts. Such asymmetric data practices also affect academic research, where once again private sector tech companies such as Google, Facebook and Twitter often offer access only to portions of their data. In both cases participants gave examples of creative solutions using merged or blended data sources, which raise significant methodological and ethical difficulties that merit further attention. The Berkman Center's Rob Faris also noted the challenges of combining 'intentional' and 'found' data, where the former allow far greater certainty about the circumstances of their collection.

Data dictating the questions. If participants expressed the need to expend more effort on getting the most out of available but diverse data sources, several also cautioned against the danger of letting data availability dictate the questions that can be asked. As we've experienced at the OII, for example, the availability of Wikipedia or Twitter data means that questions of unequal digital access (to political resources, knowledge production, etc.) can often be addressed through the lens of these applications or platforms. But these data can provide only a snapshot, and large questions of great social or political importance may not easily be answered through such proxy measurements. Similarly, big data may be very helpful in providing insights into policy-relevant patterns or correlations, such as identifying early indicators of seasonal diseases or neighbourhood decline, but seem ill-suited to answering difficult questions regarding, say, the efficacy of small-scale family interventions. Just because the latter are harder to answer using currently vogue-ish tools doesn't mean we should cease to ask them.

Ethics. Concerns about privacy are frequently raised as a significant limitation of the usefulness of big data. Given that, with two or more data sets, even supposedly anonymous data subjects may be identified, the general consensus seems to be that 'privacy is dead'. Whilst all participants recognised the importance of public debate around this issue, several academics and policy-makers expressed a desire to move beyond this discussion to a more nuanced consideration of appropriate ethical standards. Accountability and transparency are often held up as more realistic means of protecting citizens' interests, but one workshop participant also suggested it would be helpful to encourage more public debate about acceptable and unacceptable uses of our data, to determine whether some uses might simply be deemed 'off-limits', whilst others could be accepted as offering few risks.

Accountability. Following on from this debate about the ethical limits of our uses of big data, discussion exposed the starkly differing standards to which government and academics (to say nothing of industry) are held accountable. As agency officials noted on several occasions, it matters less what they actually do with citizens' data than what they are perceived to do with it, or even what it's feared they might do. One of the greatest hurdles to be overcome here concerns the fundamental complexity of big data research, and the sheer difficulty of communicating to the public how it informs policy decisions. Quite apart from the opacity of the algorithms underlying big data analysis, the explicit focus on correlation rather than causation or explanation presents a new challenge for the justification of policy decisions, and consequently, for public acceptance of their legitimacy. As Greg Elin of Gitmachines emphasised, policy decisions are still the result of explicitly normative political discussion, but the justifiability of such decisions may be rendered more difficult given the nature of the evidence employed.

We could not resolve all these issues over the course of the day, but they served as pivot points for honest and productive discussion amongst the group. If nothing else, they demonstrate the value of interaction between academics and policy-makers in a research field where the stakes are set very high. We plan to reconvene in Washington in the spring.

*We are very grateful to the Policy Studies Organization (PSO) and the American Public University for their generous support of this workshop. The workshop “Responsible Research Agendas for Public Policy in the Era of Big Data” was held at the Harvard Faculty Club on 13 September 2013.

Also read: Big Data and Public Policy Workshop by Eric Meyer, workshop attendee and PI of the OII project Accessing and Using Big Data to Advance Social Science Knowledge.


Victoria Nash received her M.Phil in Politics from Magdalen College in 1996, after completing a First Class BA (Hons) Degree in Politics, Philosophy and Economics, before going on to complete a D.Phil in Politics at Nuffield College, Oxford University in 1999. She was a Research Fellow at the Institute for Public Policy Research prior to joining the OII in 2002. As Research and Policy Fellow at the OII, her work seeks to connect OII research with policy and practice, identifying and communicating the broader implications of OII's research into Internet and technology use.

How are internal monitoring systems being used to tackle corruption in the Chinese public administration? https://ensr.oii.ox.ac.uk/how-are-internal-monitoring-systems-being-used-to-tackle-corruption-in-the-chinese-public-administration/ Fri, 26 Jul 2013 07:30:25 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1774
China has made concerted efforts to reduce corruption at the lowest levels of government. Image of the 18th National Congress of the CPC in the Great Hall of the People, Beijing, by Bert van Dijk.

Ed: Investment by the Chinese government in internal monitoring systems has been substantial: what components make it up?

Jesper: Two different information systems are currently in use. Within the government there is one system directed towards administrative case-processing. In addition, the Communist Party has its own monitoring system, which is less sophisticated in terms of real-time surveillance but has a deeper structure, as it collects and cross-references personal information about party members working in the administration. These two systems parallel the existing institutional arrangements found in the dual structure of the Discipline Inspection Commissions and the Bureaus of Supervision at different levels of government. As such, the e-monitoring system has particular 'Chinese characteristics', reflecting the bureaucracy's Leninist heritage in which Party affairs and government affairs are handled separately, applying different sets of rules.

On the government’s e-monitoring platform the Bureau of Supervision (the closest we get to an Ombudsman function in the Chinese public administration) can collect data from several other data systems, such as the e-government systems of the individual bureaus involved in case processing; feeds from surveillance cameras in different government organisations; and even geographical data from satellites. The e-monitoring platform does not, however, afford scanning of information outside the government systems. For instance, social media are not part of the administration surveillance infrastructure.

Ed: How centralised is it as a system? Is local or province-level monitoring of public officials linked up to the central government?

Jesper: The architecture of the e-monitoring systems integrates the information flows up to the provincial level, but not to the central level. One reason for this may be found by following the money. Funding for these systems mainly comes from local sources, and construction was initially based on municipal-level systems supported by the provincial level. Hence, at the early stages the path towards individual local-level systems was the natural choice. One reason the build-up was not envisioned to include the central level could be that the Chinese central government is comparatively small and may be worried about information overload. It could, however, also be an expression of provinces wanting to handle 'internal affairs' themselves rather than having central actors involved; possibly a case of provincial resistance to central monitoring.

Ed: Digital systems allow for the efficient control and recording of vast numbers of transactions (e.g. by timestamping, alerting, etc.). But all systems are subvertible: is there any evidence that this is happening?

Jesper: There are certainly attempts to shirk work or continue corrupt activities despite the monitoring system. For instance, some urban managers who work in the streets (which are hard to monitor by video surveillance) have used fake photos to 'prove' that a particular maintenance task had been completed, thereby saving themselves the time and effort of verifying that the problem had in fact been solved. They could do this because the system did not stamp photos with geolocation data, and hence they could claim that a photo was taken at any location.
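The missing check is simple to describe in code. Below is a minimal sketch (using Pillow's EXIF reader; the file name and acceptance logic are hypothetical, invented for illustration) of how a submission system could reject photos that carry no embedded GPS coordinates:

    from PIL import Image

    GPS_IFD_POINTER = 0x8825  # standard EXIF tag pointing to the GPS block

    def has_gps_stamp(path):
        # Read the photo's EXIF data and check whether a GPS block exists.
        exif = Image.open(path).getexif()
        return bool(exif.get_ifd(GPS_IFD_POINTER))

    if not has_gps_stamp("maintenance_report.jpg"):
        print("Reject: photo carries no geolocation stamp.")

Of course, metadata can itself be forged, so a check like this raises the cost of cheating rather than eliminating it.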

However, administrative processes that take place in an office rather than 'in the wild' are easier to monitor. Administrative approval processes relating to, e.g., tax and business licensing, which the government handles in one-stop service centres, tend to be less corrupt after the introduction of the e-monitoring system. To be sure, this does not mean that the administration is clean now; instead the corruption moves elsewhere, for example to applications for business licenses for larger companies, an area only partly covered by e-monitoring.

Ed: We are used to a degree of audit and oversight of our working behaviour and performance in the West; does this personal monitoring go beyond what might be considered normal (or just) to us?

Jesper: The notion of being video-surveilled during office work would probably be met with resistance by employees in Western government agencies. This is, however, a widespread practice in call centres in the West, so in this sense it is not entirely unknown in work settings. Additionally, government one-stop shops in the West are often equipped with closed-circuit television, but this is mostly (as I understand it) used to document violations by clients against public employees rather than the other way round. Another aspect that sets the Chinese administration apart is that the options for recourse (e.g. for a wrongfully accused public employee) only include the authorities already dealing with the case.

Ed: Could these systems also be used to monitor the behaviour of citizens?

Jesper: Indeed, the monitoring system enables access to information from a number of different sources, such as registers of tax payment, social welfare benefits and real-estate holdings, and to some extent it is already used in relation to citizens. For instance the tax register and the real-estate register are cross-referenced. If a real-estate owner has a tax debt then documentation for the real estate cannot be printed until the debt is paid. We must expect further development of these kinds of functions. This e-monitoring ‘architecture of control’ can thus be activated both towards the administration itself as well as outward towards citizens.
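The tax/real-estate cross-reference can be pictured as a simple join between registers. Here is a toy sketch of the blocking rule just described; all identifiers, field names, and amounts are invented for illustration:

    # Toy registers; in reality these would be separate government databases.
    tax_register = {"owner-417": {"outstanding_debt": 1200.0}}
    real_estate_register = {"parcel-88": {"owner_id": "owner-417"}}

    def may_print_documents(parcel_id):
        # Cross-reference: look up the parcel's owner, then the owner's debt.
        owner = real_estate_register[parcel_id]["owner_id"]
        debt = tax_register.get(owner, {}).get("outstanding_debt", 0.0)
        return debt <= 0.0

    print(may_print_documents("parcel-88"))  # False until the debt is cleared

The same join logic, pointed at different registers, is what makes this 'architecture of control' equally applicable to officials and to citizens.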

Ed: There is oversight of the actions of government officials by the Bureau of Supervision; but is there any public oversight of, e.g., the government’s decision-making process, particularly of potentially embarrassing decisions? Who watches the watchers?

Jesper: Currently in China there are two digitally mediated mechanisms working simultaneously to reduce corruption. The first is the e-monitoring system described here, which mainly addresses administrative corruption. The second is what we might call a 'fire alarm' mechanism, whereby citizens draw public attention to corruption scandals or local government failures, often through the use of microblogs. E-monitoring addresses corruption in the work process but does not cover government decision-making. The 'fire alarm' in part addresses the latter concern, as citizens can vent their frustrations online. However, even though microblogging has empowered citizens to speak out against corruption and counter-productive policies, this does not reflect institutionalised control but happens on an ad hoc basis. If the Bureau of Supervision and the Disciplinary Inspection Commission do not wish to act, there is no further backstop. The Internet-based e-monitoring systems hence do not alter the institutional setup, and there is no one to 'watch the watchers' except in the occasional cases where the fire alarm mechanism works.

Ed: Is there a danger that public disclosure of power abuses might generate dissatisfaction and mistrust in government, without necessarily solving the issue of corruption itself?

Jesper: Over the last few years a number of corruption scandals have been brought to public attention through microblogs. Civil servants have been punished, and obviously these incidents have not improved public satisfaction with the particular local governments involved. Apart from the negative consequences of public mistrust, one could speculate that the microblogging ‘fire alarm’ only works when it is allowed to do so by the government. Technically speaking it is relatively simple for the sophisticated Chinese censoring apparatus to stop debates that touch upon issues that are too sensitive for the Party. So, it would be naive to believe that this mechanism is revealing more than the tip of the iceberg in terms of corruption.

Ed: Both Russia and India have big problems with corruption: do you know if there are similar electronic oversight systems embedded in their public administrations? What makes China different if not?

Jesper: China has made concerted efforts to reduce corruption at the lowest levels of government, as a result of dissatisfaction from both the business community and the general public. Similarly, in Russia and India (and a number of Asian states) many functions such as taxation and business licensing have been incorporated into e-government systems and, through this process, made more transparent and easier to track than before. However, to my knowledge, the Chinese system is at the forefront when it comes to integrating these different platforms into a larger monitoring system ecology.


Jesper Schlæger is an Associate Professor at Sichuan University, School of Public Administration. His current research topics include comparative public administration, e-government, electronic monitoring, public values, and urban management in a comparative perspective. His latest book is E-Government in China: Technology, Power and Local Government Reform (Routledge, 2013).

Jesper Schlæger was talking to blog editor David Sutcliffe.
