Governance & Security – The Policy and Internet Blog
https://ensr.oii.ox.ac.uk
Understanding public policy online

Mapping Fentanyl Trades on the Darknet
https://ensr.oii.ox.ac.uk/mapping-fentanyl-trades-on-the-darknet/
Mon, 16 Oct 2017

My colleagues Joss Wright, Martin Dittus and I have been scraping the world's largest darknet marketplaces over the last few months, as part of our darknet mapping project. The data we collected allow us to explore a wide range of trading activities, including the trade in the synthetic opioid Fentanyl, one of the drugs blamed for the rapid rise in overdose deaths and widespread opioid addiction in the US.

The above map shows the global distribution of the Fentanyl trade on the darknet. The US accounts for almost 40% of global darknet trade, with Canada and Australia at 15% and 12%, respectively. The UK and Germany are the largest sellers in Europe with 9% and 5% of sales. While China is often mentioned as an important source of the drug, it accounts for only 4% of darknet sales. However, this does not necessarily mean that China is not the ultimate site of production. Many of the sellers in places like the US, Canada, and Western Europe are likely intermediaries rather than producers themselves.
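For readers curious how such country shares can be computed, the minimal sketch below aggregates scraped listings into percentages of observed trade. It is illustrative only, not our actual pipeline: the file name and columns (fentanyl_listings.csv, ship_from, revenue_usd) are hypothetical stand-ins for the kind of fields a marketplace scrape yields.

```python
# Illustrative sketch only: aggregate scraped listings into country-level
# shares of observed trade. File and column names are hypothetical.
import csv
from collections import defaultdict

revenue_by_country = defaultdict(float)

with open("fentanyl_listings.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        revenue_by_country[row["ship_from"]] += float(row["revenue_usd"])

total = sum(revenue_by_country.values())
for country, revenue in sorted(revenue_by_country.items(),
                               key=lambda kv: kv[1], reverse=True):
    print(f"{country}: {100 * revenue / total:.1f}% of observed trade")
```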

In the next few months, we’ll be sharing more visualisations of the economic geographies of products on the darknet. In the meantime you can find out more about our work by Exploring the Darknet in Five Easy Questions.

Follow the project here: https://www.oii.ox.ac.uk/research/projects/economic-geog-darknet/

Twitter: @OiiDarknet

Could data pay for global development? Introducing data financing for global good
https://ensr.oii.ox.ac.uk/could-data-pay-for-global-development-introducing-data-financing-for-global-good/
Tue, 03 Jan 2017

"If data is the new oil, then why aren't we taxing it like we tax oil?" That was the essence of the provocative brief that set in motion our recent 6-month research project funded by the Rockefeller Foundation. The results are detailed in the new report: Data Financing for Global Good: A Feasibility Study.

The parallels between data and oil break down quickly once you start considering practicalities such as measuring and valuing data. Data is, after all, a highly heterogeneous good whose value is context-specific — very different from a commodity such as oil that can be measured and valued by the barrel. But even if the value of data can’t simply be metered and taxed, are there other ways in which the data economy could be more directly aligned with social good?

Data-intensive industries already contribute to social good by producing useful services and paying taxes on their profits (though some pay regrettably little). But are there ways in which the data economy could directly finance global causes such as climate change prevention, poverty alleviation and infrastructure? Such mechanisms should not just arbitrarily siphon off money from industry, but also contribute value back to the data economy by correcting market failures and investment gaps. The potential impacts are significant: estimates value the data economy at around seven percent of GDP in rich industrialised countries, or around ten times the value of the United Nations development aid spending goal.
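(Presumably the United Nations development aid spending goal referred to here is the long-standing target of 0.7% of donor gross national income; a data economy worth roughly seven percent of GDP would then indeed be on the order of ten times that figure, since 7 ÷ 0.7 = 10. The comparison is indicative rather than exact, as the two quantities are measured against somewhat different baselines.)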

Here’s where “data financing” comes in. It’s a term we coined that’s based on innovative financing, a concept increasingly used in the philanthropical world. Innovative financing refers to initiatives that seek to unlock private capital for the sake of global development and socially beneficial projects, which face substantial funding gaps globally. Since government funding towards addressing global challenges is not growing, the proponents of innovative financing are asking how else these critical causes could be funded. An existing example of innovative financing is the UNITAID air ticket levy used to advance global health.

Data financing, then, is a subset of innovative financing that refers to mechanisms that attempt to redirect a slice of the value created in the global data economy towards broader social objectives. For instance, a Global Internet Subsidy funded by large Internet companies could help to educate and build infrastructure in the world's marginalized regions, in the long run also growing the market for Internet companies' services. But such a model would need well-designed governance mechanisms to avoid the pitfalls of current Internet subsidization initiatives, which risk failing because of well-founded concerns that they further entrench Internet giants' dominance over emerging digital markets.

Besides the Global Internet Subsidy, other data financing models examined in the report are a Privacy Insurance for personal data processing, a Shared Knowledge Duty payable by businesses profiting from open and public data, and an Attention Levy to disincentivise intrusive marketing. Many of these have been considered before, and they come with significant economic, legal, political, and technical challenges. Our report considers these challenges in turn, assesses the feasibility of potential solutions, and presents rough estimates of potential financial impacts.

Some of the prevailing business models of the data economy — provoking users’ attention, extracting their personal information, and monetizing it through advertising — are more or less taken for granted today. But they are something of a historical accident, an unanticipated corollary to some of the technical and political decisions made early in the Internet’s design. Certainly they are not any inherent feature of data as such. Although our report focuses on the technical, legal, and political practicalities of the idea of data financing, it also invites a careful reader to question some of the accepted truths on how a data-intensive economy could be organized, and what business models might be possible.

Read the report: Lehdonvirta, V., Mittelstadt, B. D., Taylor, G., Lu, Y. Y., Kadikov, A., and Margetts, H. (2016) Data Financing for Global Good: A Feasibility Study. University of Oxford: Oxford Internet Institute.

Should there be a better accounting of the algorithms that choose our news for us?
https://ensr.oii.ox.ac.uk/should-there-be-a-better-accounting-of-the-algorithms-that-choose-our-news-for-us/
Wed, 07 Dec 2016

A central ideal of democracy is that political discourse should allow a fair and critical exchange of ideas and values. But political discourse is unavoidably mediated by the mechanisms and technologies we use to communicate and receive information — and content personalization systems (think search engines, social media feeds and targeted advertising), and the algorithms they rely upon, create a new type of curated media that can undermine the fairness and quality of political discourse.

A new article by Brent Mittelstadt explores the challenges of enforcing a political right to transparency in content personalization systems. First, he explains the value of transparency to political discourse and suggests how content personalization systems undermine open exchange of ideas and evidence among participants: at a minimum, personalization systems can undermine political discourse by curbing the diversity of ideas that participants encounter. Second, he explores work on the detection of discrimination in algorithmic decision making, including techniques of algorithmic auditing that service providers can employ to detect political bias. Third, he identifies several factors that inhibit auditing and thus indicate reasonable limitations on the ethical duties incurred by service providers — content personalization systems can function opaquely and be resistant to auditing because of poor accessibility and interpretability of decision-making frameworks. Finally, Brent concludes with reflections on the need for regulation of content personalization systems.

He notes that no matter how auditing is pursued, standards to detect evidence of political bias in personalized content are urgently required. Methods are needed to routinely and consistently assign political value labels to content delivered by personalization systems. This is perhaps the most pressing area for future work—to develop practical methods for algorithmic auditing.
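As a purely hypothetical sketch of what such routine measurement could look like once labels exist, the snippet below summarises the distribution of political value labels in each user's personalised feed; the users, labels and counts are invented for illustration and do not come from the article.

```python
# Hypothetical illustration: given items already assigned political value
# labels, summarise how skewed each user's personalised feed is.
from collections import Counter

# Invented example data: user -> labels of items shown to them.
feeds = {
    "user_a": ["left", "left", "centre", "left", "left"],
    "user_b": ["left", "right", "centre", "right", "left", "centre"],
}

for user, labels in feeds.items():
    counts = Counter(labels)
    n = len(labels)
    shares = {label: round(c / n, 2) for label, c in counts.items()}
    print(user, shares)  # e.g. user_a {'left': 0.8, 'centre': 0.2}
```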

The right to transparency in political discourse may seem unusual and farfetched. However, standards already set by the U.S. Federal Communications Commission's fairness doctrine — no longer in force — and the British Broadcasting Corporation's fairness principle both demonstrate the importance of the idealized version of political discourse described here. Both precedents promote balance in public political discourse by setting standards for delivery of politically relevant content. Whether it is appropriate to hold service providers that use content personalization systems to a similar standard remains a crucial question.

Read the full article: Mittelstadt, B. (2016) Auditing for Transparency in Content Personalization Systems. International Journal of Communication 10(2016), 4991–5002.

We caught up with Brent to explore the broader implications of the study:

Ed: We basically accept that the tabloids will be filled with gross bias, populism and lies (in order to sell copy) — and editorial decisions are not generally transparent to us. In terms of their impact on the democratic process, what is the difference between the editorial boardroom and a personalising social media algorithm?

Brent: There are a number of differences. First, although not necessarily transparent to the public, one hopes that editorial boardrooms are at least transparent to those within the news organisations. Editors can discuss and debate the tone and factual accuracy of their stories, explain their reasoning to one another, reflect upon the impact of their decisions on their readers, and generally have a fair debate about the merits and weaknesses of particular content.

This is not the case for a personalising social media algorithm; those working with the algorithm inside a social media company are often unable to explain why the algorithm is functioning in a particular way, or determined a particular story or topic to be ‘trending’ or displayed to particular users, while others are not. It is also far more difficult to ‘fact check’ algorithmically curated news; a news item can be widely disseminated merely by many users posting or interacting with it, without any purposeful dissemination or fact checking by the platform provider.

Another big difference is the degree to which users can be aware of the bias of the stories they are reading. Whereas a reader of The Daily Mail or The Guardian will have some idea of the values of the paper, the same cannot be said of platforms offering algorithmically curated news and information. The platform can be neutral insofar as it disseminates news items and information reflecting a range of values and political viewpoints. A user will encounter items reflecting her particular values (or, more accurately, her history of interactions with the platform and the values inferred from them), but these values, and their impact on her exposure to alternative viewpoints, may not be apparent to the user.

Ed: And how is content “personalisation” different to content filtering (e.g. as we see with the Great Firewall of China) that people get very worked up about? Should we be more worried about personalisation?

Brent: Personalisation and filtering are essentially the same mechanism; information is tailored to a user or users according to some prevailing criteria. One difference is whether content is merely infeasible to access, or technically inaccessible. Content of all types will typically still be accessible in principle when personalisation is used, but the user will have to make an effort to access content that is not recommended or otherwise given special attention. Filtering systems, in contrast, will impose technical measures to make particular content inaccessible from a particular device or geographical area.

Another difference is the source of the criteria used to set the visibility of different types of content. In the case of personalisation, these criteria are typically based on the user's (inferred) interests, values, past behaviours and explicit requests. Critically, these values are not necessarily apparent to the user. For filtering, criteria are typically externally determined by a third party, often a government. Some types of information are set off limits, according to the prevailing values of the third party. It is the imposition of external values, which limit the capacity of users to access content of their choosing, which often causes an outcry against filtering and censorship.

Importantly, the two mechanisms do not necessarily differ in terms of the transparency of the limiting factors or rules to users. In some cases, such as the recently proposed ban in the UK of adult websites that do not provide meaningful age verification mechanisms, the criteria that determine whether sites are off limits will be publicly known at a general level. In other cases, and especially with personalisation, the user inside the ‘filter bubble’ will be unaware of the rules that determine whether content is (in)accessible. And it is not always the case that the platform provider intentionally keeps these rules secret. Rather, the personalisation algorithms and background analytics that determine the rules can be too complex, inaccessible or poorly understood even by the provider to give the user any meaningful insight.

Ed: Where are these algorithms developed: are they basically all proprietary? i.e. how would you gain oversight of massively valuable and commercially sensitive intellectual property?

Brent: Personalisation algorithms tend to be proprietary, and thus are not normally open to public scrutiny in any meaningful sense. In one sense this is understandable; personalisation algorithms are valuable intellectual property. At the same time the lack of transparency is a problem, as personalisation fundamentally affects how users encounter and digest information on any number of topics. As recently argued, it may be the case that personalisation of news impacts on political and democratic processes. Existing regulatory mechanisms have not been successful in opening up the ‘black box’ so to speak.

It can be argued, however, that legal requirements should be adopted to require these algorithms to be open to public scrutiny due to the fundamental way they shape our consumption of news and information. Oversight can take a number of forms. As I argue in the article, algorithmic auditing is one promising route, performed both internally by the companies themselves, and externally by a government agency or researchers. A good starting point would be for the companies developing and deploying these algorithms to extend their cooperation with researchers, thereby allowing a third party to examine the effects these systems are having on political discourse, and society more broadly.

Ed: By “algorithm audit” — do you mean examining the code and inferring what the outcome might be in terms of bias, or checking the outcome (presumably statistically) and inferring that the algorithm must be introducing bias somewhere? And is it even possible to meaningfully audit personalisation algorithms, when they might rely on vast amounts of unpredictable user feedback to train the system?

Brent: Algorithm auditing can mean both of these things, and more. Audit studies are a tool already in use, whereby human participants introduce different inputs into a system, and examine the effect on the system’s outputs. Similar methods have long been used to detect discriminatory hiring practices, for instance. Code audits are another possibility, but are generally prohibitive due to problems of access and complexity. Also, even if you can access and understand the code of an algorithm, that tells you little about how the algorithm performs in practice when given certain input data. Both the algorithm and input data would need to be audited.

Alternatively, auditing can assess just the outputs of the algorithm; recent work to design mechanisms to detect disparate impact and discrimination, particularly in the Fairness, Accountability and Transparency in Machine Learning (FAT-ML) community, is a great example of this type of auditing. Algorithms can also be designed to attempt to prevent or detect discrimination and other harms as they occur. These methods are as much about the operation of the algorithm, as they are about the nature of the training and input data, which may itself be biased. In short, auditing is very difficult, but there are promising avenues of research and development. Once we have reliable auditing methods, the next major challenge will be to tailor them to specific sectors; a one-size-fits-all approach to auditing is not on the cards.
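One concrete flavour of this output-focused auditing is the "four-fifths" (80%) rule used in disparate impact analysis. The sketch below is a minimal illustration with invented data: it compares the rate of a favourable outcome across two groups and flags a ratio below 0.8.

```python
# Minimal illustration of an output-only audit: compare the rate of a
# favourable outcome (e.g. content being promoted) across two groups and
# report the disparate impact ratio. Data are invented for the example.
def favourable_rate(outcomes):
    """Share of cases that received the favourable outcome (True)."""
    return sum(outcomes) / len(outcomes)

group_a = [True, True, False, True, True, False, True, True]     # 6/8 = 0.75
group_b = [True, False, False, True, False, False, True, False]  # 3/8 = 0.375

ratio = favourable_rate(group_b) / favourable_rate(group_a)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 < 0.8 flags concern
```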

Ed: Do you think this is a real problem for our democracy? And what is the solution if so?

Brent: It’s difficult to say, in part because access and data to study the effects of personalisation systems are hard to come by. It is one thing to prove that personalisation is occurring on a particular platform, or to show that users are systematically displayed content reflecting a narrow range of values or interests. It is quite another to prove that these effects are having an overall harmful effect on democracy. Digesting information is one of the most basic elements of social and political life, so any mechanism that fundamentally changes how information is encountered should be subject to serious and sustained scrutiny.

Assuming personalisation actually harms democracy or political discourse, mitigating its effects is quite a different issue. Transparency is often treated as the solution, but merely opening up algorithms to public and individual scrutiny will not in itself solve the problem. Information about the functionality and effects of personalisation must be meaningful to users if anything is going to be accomplished.

At a minimum, users of personalisation systems should be given more information about their blind spots, about the types of information they are not seeing, or where they lie on the map of values or criteria used by the system to tailor content to users. A promising step would be proactively giving the user some idea of what the system thinks it knows about them, or how they are being classified or profiled, without the user first needing to ask.


Brent Mittelstadt was talking to blog editor David Sutcliffe.

The blockchain paradox: Why distributed ledger technologies may do little to transform the economy
https://ensr.oii.ox.ac.uk/the-blockchain-paradox-why-distributed-ledger-technologies-may-do-little-to-transform-the-economy/
Mon, 21 Nov 2016

Bitcoin's underlying technology, the blockchain, is widely expected to find applications far beyond digital payments. It is celebrated as a "paradigm shift in the very idea of economic organization". But the OII's Professor Vili Lehdonvirta contends that such revolutionary potentials may be undermined by a fundamental paradox that has to do with the governance of the technology.


 

I recently gave a talk at the Alan Turing Institute (ATI) under the title The Problem of Governance in Distributed Ledger Technologies. The starting point of my talk was that it is frequently posited that blockchain technologies will “revolutionize industries that rely on digital record keeping”, such as financial services and government. In the talk I applied elementary institutional economics to examine what blockchain technologies really do in terms of economic organization, and what problems this gives rise to. In this essay I present an abbreviated version of the argument. Alternatively you can watch a video of the talk below.

 

Video of the talk: https://www.youtube.com/watch?v=eNrzE_UfkTw

 

First, it is necessary to note that there is quite a bit of confusion as to what exactly is meant by a blockchain. When people talk about “the” blockchain, they often refer to the Bitcoin blockchain, an ongoing ledger of transactions started in 2009 and maintained by the approximately 5,000 computers that form the Bitcoin peer-to-peer network. The term blockchain can also be used to refer to other instances or forks of the same technology (“a” blockchain). The term “distributed ledger technology” (DLT) has also gained currency recently as a more general label for related technologies.

In each case, I think it is fair to say that the reason that so many people are so excited about blockchain today is not the technical features as such. In terms of performance metrics like transactions per second, existing blockchain technologies are in many ways inferior to more conventional technologies. This is frequently illustrated with the point that the Bitcoin network is limited by design to process at most approximately seven transactions per second, whereas the Visa payment network has a peak capacity of 56,000 transactions per second. Other implementations may have better performance, and on some other metrics blockchain technologies can perhaps beat more conventional technologies. But technical performance is not why so many people think blockchain is revolutionary and paradigm-shifting.

The reason that blockchain is making waves is that it promises to change the very way economies are organized: to eliminate centralized third parties. Let me explain what this means in theoretical terms. Many economic transactions, such as long-distance trade, can be modeled as a game of Prisoners’ Dilemma. The buyer and the seller can either cooperate (send the shipment/payment as promised) or defect (not send the shipment/payment). If the buyer and the seller don’t trust each other, then the equilibrium solution is that neither player cooperates and no trade takes place. This is known as the fundamental problem of cooperation.
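To make the equilibrium logic concrete, here is a small illustrative sketch of a one-shot Prisoners' Dilemma with conventional textbook payoffs (the exact numbers are arbitrary); checking mutual best responses confirms that mutual defection is the only equilibrium.

```python
# One-shot Prisoners' Dilemma with conventional illustrative payoffs:
# (row player's payoff, column player's payoff) for each pair of actions.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
actions = ["cooperate", "defect"]

def is_equilibrium(a, b):
    """True if neither player can gain by unilaterally switching action."""
    best_a = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0] for alt in actions)
    best_b = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1] for alt in actions)
    return best_a and best_b

for a in actions:
    for b in actions:
        if is_equilibrium(a, b):
            print(a, b)  # prints: defect defect
```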

There are several classic solutions to the problem of cooperation. One is reputation. In a community of traders where members repeatedly engage in exchange, any trader who defects (fails to deliver on a promise) will gain a negative reputation, and other traders will refuse to trade with them out of self-interest. This threat of exclusion from the community acts as a deterrent against defection, and the equilibrium under certain conditions becomes that everyone will cooperate.

Reputation is only a limited solution, however. It only works within communities where reputational information spreads effectively, and traders may still defect if the payoff from doing so is greater than the loss of future trade. Modern large-scale market economies where people trade with strangers on a daily basis are only possible because of another solution: third-party enforcement. In particular, this means state-enforced contracts and bills of exchange enforced by banks. These third parties in essence force parties to cooperate and to follow through with their promises.

Besides trade, another example of the problem of cooperation is currency. Currency can be modeled as a multiplayer game of Prisoners’ Dilemma. Traders collectively have an interest in maintaining a stable currency, because it acts as a lubricant to trade. But each trader individually has an interest in debasing the currency, in the sense of paying with fake money (what in blockchain-speak is referred to as double spending). Again the classic solution to this dilemma is third-party enforcement: the state polices metal currencies and punishes counterfeiters, and banks control ledgers and prevent people from spending money they don’t have.
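The ledger-keeping role described here can be illustrated with a toy sketch of a trusted third party that simply refuses any transfer the payer cannot cover; the account names and amounts are invented for the example.

```python
# Toy sketch of third-party ledger enforcement: the ledger keeper rejects
# any transfer that would spend money the payer does not have, which is
# what prevents "double spending" in a conventional banking system.
class Ledger:
    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, payer, payee, amount):
        if self.balances.get(payer, 0) < amount:
            print(f"Rejected: {payer} lacks funds for {amount}")
            return False
        self.balances[payer] -= amount
        self.balances[payee] = self.balances.get(payee, 0) + amount
        return True

ledger = Ledger({"alice": 10, "bob": 0})
ledger.transfer("alice", "bob", 10)    # accepted
ledger.transfer("alice", "carol", 10)  # rejected: the same money again
```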

So third-party enforcement is the dominant model of economic organization in today’s market economies. But it’s not without its problems. The enforcer is in a powerful position in relation to the enforced: banks could extract exorbitant fees, and states could abuse their power by debasing the currency, illegitimately freezing assets, or enforcing contracts in unfair ways. One classic solution to the problems of third-party enforcement is competition. Bank fees are kept in check by competition: the enforced can switch to another enforcer if the fees get excessive.

But competition is not always a viable solution: there is a very high cost to switching to another state (i.e. becoming a refugee) if your state starts to abuse its power. Another classic solution is accountability: democratic institutions that try to ensure the enforcer acts in the interest of the enforced. For instance, the interbank payment messaging network SWIFT is a cooperative society owned by its member banks. The members elect a Board of Directors that is the highest decision making body in the organization. This way, they attempt to ensure that SWIFT does not try to extract excessive fees from the member banks or abuse its power against them. Still, even accountability is not without its problems, since it comes with the politics of trying to reconcile different members’ diverging interests as best as possible.

Into this picture enters blockchain: a technology where third-party enforcers are replaced with a distributed network that enforces the rules. It can enforce contracts, prevent double spending, and cap the size of the money pool, all without participants having to cede power to any particular third party who might abuse the power. No rent-seeking, no abuses of power, no politics — blockchain technologies can be used to create "math-based money" and "unstoppable" contracts that are enforced with the impartiality of a machine instead of the imperfect and capricious human bureaucracy of a state or a bank. This is why so many people are so excited about blockchain: its supposed ability to change economic organization in a way that transforms dominant relationships of power.

Unfortunately this turns out to be a naive understanding of blockchain, and the reality is inevitably less exciting. Let me explain why. In economic organization, we must distinguish between enforcing rules and making rules. Laws are rules enforced by state bureaucracy and made by a legislature. The SWIFT Protocol is a set of rules enforced by SWIFTNet (a centralized computational system) and made, ultimately, by SWIFT’s Board of Directors. The Bitcoin Protocol is a set of rules enforced by the Bitcoin Network (a distributed network of computers) made by — whom exactly? Who makes the rules matters at least as much as who enforces them. Blockchain technology may provide for completely impartial rule-enforcement, but that is of little comfort if the rules themselves are changed. This rule-making is what we refer to as governance.

Using Bitcoin as an example, the initial versions of the protocol (i.e. the rules) were written by the pseudonymous Satoshi Nakamoto, and later versions are released by a core development team. The development team is not autocratic: a complex set of social and technical entanglements means that other people are also influential in how Bitcoin's rules are set; in particular, so-called mining pools, headed by a handful of individuals, are very influential. The point here is not to attempt to pick apart Bitcoin's political order; the point is that Bitcoin has not in any sense eliminated human politics; humans are still very much in charge of setting the rules that the network enforces.

There is, however, no formal process for how governance works in Bitcoin, because for a very long time these politics were not explicitly recognized, and many people don’t recognize them, preferring instead the idea that Bitcoin is purely “math-based money” and that all the developers are doing is purely apolitical plumbing work. But what has started to make this position untenable and Bitcoin’s politics visible is the so-called “block size debate” — a big disagreement between factions of the Bitcoin community over the future direction of the rules. Different stakeholders have different interests in the matter, and in the absence of a robust governance mechanism that could reconcile between the interests, this has resulted in open “warfare” between the camps over social media and discussion forums.

Will competition solve the issue? Multiple “forks” of the Bitcoin protocol have emerged, each with slightly different rules. But network economics teaches us that competition does not work well at all in the presence of strong network effects: everyone prefers to be in the network where other people are, even if its rules are not exactly what they would prefer. Network markets tend to tip in favour of the largest network. Every fork/split diminishes the total value of the system, and those on the losing side of a fork may eventually find their assets worthless.

If competition doesn't work, this leaves us with accountability. There is no obvious path by which Bitcoin could develop accountable governance institutions. But other blockchain projects, especially those that are gaining some kind of commercial or public sector legitimacy, are designed from the ground up with some level of accountable governance. For instance, R3 is a firm that develops blockchain technology for use in the financial services industry. It has enrolled a consortium of banks to guide the effort, and its documents talk about the "mandate" it has from its "member banks". Its governance model thus sounds a lot like the beginnings of something like SWIFT. Another example is RSCoin, designed by my ATI colleagues George Danezis and Sarah Meiklejohn, which is intended to be governed by a central bank.

Regardless of the model, my point is that blockchain technologies cannot escape the problem of governance. Whether they recognize it or not, they face the same governance issues as conventional third-party enforcers. You can use technologies to potentially enhance the processes of governance (e.g. transparency, online deliberation, e-voting), but you can't engineer away governance as such. All this leads me to wonder how revolutionary blockchain technologies really are. If you still rely on a Board of Directors or similar body to make it work, how much has economic organization really changed?

And this leads me to my final point, a provocation: once you address the problem of governance, you no longer need blockchain; you can just as well use conventional technology that assumes a trusted central party to enforce the rules, because you’re already trusting somebody (or some organization/process) to make the rules. I call this blockchain’s ‘governance paradox’: once you master it, you no longer need it. Indeed, R3’s design seems to have something called “uniqueness services”, which look a lot like trusted third-party enforcers (though this isn’t clear from the white paper). RSCoin likewise relies entirely on trusted third parties. The differences to conventional technology are no longer that apparent.

Perhaps blockchain technologies can still deliver better technical performance, like better availability and data integrity. But it’s not clear to me what real changes to economic organization and power relations they could bring about. I’m very happy to be challenged on this, if you can point out a place in my reasoning where I’ve made an error. Understanding grows via debate. But for the time being, I can’t help but be very skeptical of the claims that blockchain will fundamentally transform the economy or government.

The governance of DLTs is also examined in this report chapter that I coauthored earlier this year:

Lehdonvirta, V. & Robleh, A. (2016) Governance and Regulation. In: M. Walport (ed.), Distributed Ledger Technology: Beyond Blockchain. London: UK Government Office for Science, pp. 40-45.

Exploring the Ethics of Monitoring Online Extremism
https://ensr.oii.ox.ac.uk/exploring-the-ethics-of-monitoring-online-extremism/
Wed, 23 Mar 2016

(Part 2 of 2) The Internet serves not only as a breeding ground for extremism, but also offers myriad data streams which potentially hold great value to law enforcement. The report by the OII's Ian Brown and Josh Cowls for the VOX-Pol project: Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material explores the complexities of policing the web for extremist material, and its implications for security, privacy and human rights. In the second of a two-part post, Josh Cowls and Ian Brown discuss the report with blog editor Bertie Vidgen. Read the first post.

Surveillance in NYC’s financial district. Photo by Jonathan McIntosh (flickr).

Ed: Josh, political science has long posed a distinction between public spaces and private ones. Yet it seems like many platforms on the Internet, such as Facebook, cannot really be categorized in such terms. If this correct, what does it mean for how we should police and govern the Internet?

Josh: I think that is right – many online spaces are neither public nor private. This is also an issue for some privacy legal frameworks (especially in the US). A lot of the covenants and agreements were written forty or fifty years ago, long before anyone had really thought about the Internet. That has now forced governments, societies and parliaments to adapt these existing rights and protocols for the online sphere. I think that we have some fairly clear laws about the use of human intelligence sources, and police law in the offline sphere. The interesting question is how we can take that online. How can the pre-existing standards, like the requirement that procedures are necessary and proportionate, or the 'right to appeal', be incorporated into online spaces? In some cases there are direct analogies. In other cases there needs to be some re-writing of the rule book to try to figure out what we mean. And, of course, it is difficult because the internet itself is always changing!

Ed: So do you think that concepts like proportionality and justification need to be updated for online spaces?

Josh: I think that at a very basic level they are still useful. People know what we mean when we talk about something being necessary and proportionate, and about the importance of having oversight. I think we also have a good idea about what it means to be non-discriminatory when applying the law, though this is one of those areas that can quickly get quite tricky. Consider the use of online data sources to identify people. On the one hand, the Internet is ‘blind’ in that it does not automatically codify social demographics. In this sense it is not possible to profile people in the same way that we can offline. On the other hand, it is in some ways the complete opposite. It is very easy to directly, and often invisibly, create really firm systems of discrimination – and, most problematically, to do so opaquely.

This is particularly challenging when we are dealing with extremism because, as we pointed out in the report, extremists are generally pretty unremarkable in terms of demographics. It perhaps used to be true that extremists were more likely to be poor or to have had challenging upbringings, but many of the people going to fight for the Islamic State are middle class. So we have fewer demographic pointers to latch onto when trying to find these people. Of course, insofar as there are identifiers they won’t be released by the government. The real problem for society is that there isn’t very much openness and transparency about these processes.

Ed: Governments are increasingly working with the private sector to gain access to different types of information about the public. For example, in Australia a Telecommunications bill was recently passed which requires all telecommunication companies to keep the metadata – though not the content data – of communications for two years. A lot of people opposed the Bill because metadata is still very informative, and as such there are some clear concerns about privacy. Similar concerns have been expressed in the UK about an Investigatory Powers Bill that would require new Internet Connection Records about customers' online activities. How much do you think private corporations should protect people's data? And how much should concepts like proportionality apply to them?

Ian: To me the distinction between metadata and content data is fairly meaningless. For example, often just knowing when and who someone called and for how long can tell you everything you need to know! You don’t have to see the content of the call. There are a lot of examples like this which highlight the slightly ludicrous nature of distinguishing between metadata and content data. It is all data. As has been said by former US CIA and NSA Director Gen. Michael Hayden, “we kill people based on metadata.”

One issue that we identified in the report is the increased onus on companies to monitor online spaces, and all of the legal entanglements that come from this given that companies might not be based in the same country as the users. One of our interviewees called this new international situation a ‘very different ballgame’. Working out how to deal with problematic online content is incredibly difficult, and some huge issues of freedom of speech are bound up in this. On the one hand, there is a government-led approach where we use the law to take down content. On the other hand is a broader approach, whereby social networks voluntarily take down objectionable content even if it is permissible under the law. This causes much more serious problems for human rights and the rule of law.

Read the full report: Brown, I., and Cowls, J., (2015) Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material. VOX-Pol Publications.


Ian Brown is Professor of Information Security and Privacy at the OII. His research is focused on surveillance, privacy-enhancing technologies, and Internet regulation.

Josh Cowls is a student and researcher based at MIT, working to understand the impact of technology on politics, communication and the media.

Josh and Ian were talking to Blog Editor Bertie Vidgen.

Assessing the Ethics and Politics of Policing the Internet for Extremist Material
https://ensr.oii.ox.ac.uk/assessing-the-ethics-and-politics-of-policing-the-internet-for-extremist-material/
Thu, 18 Feb 2016

The Internet serves not only as a breeding ground for extremism, but also offers myriad data streams which potentially hold great value to law enforcement. The report by the OII's Ian Brown and Josh Cowls for the VOX-Pol project: Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material explores the complexities of policing the web for extremist material, and its implications for security, privacy and human rights. Josh Cowls discusses the report with blog editor Bertie Vidgen.*

*please note that the views given here do not necessarily reflect the content of the report, or those of the lead author, Ian Brown.

In terms of counter-speech there are different roles for government, civil society, and industry. Image by Miguel Discart (Flickr).

 

Ed: Josh, could you let us know the purpose of the report, outline some of the key findings, and tell us how you went about researching the topic?

Josh: Sure. In the report we take a step back from the ground-level question of ‘what are the police doing?’ and instead ask, ‘what are the ethical and political boundaries, rationale and justifications for policing the web for these kinds of activity?’ We used an international human rights framework as an ethical and legal basis to understand what is being done. We also tried to further the debate by clarifying a few things: what has already been done by law enforcement, and, really crucially, what the perspectives are of all those involved, including lawmakers, law enforcers, technology companies, academia and many others.

We derived the insights in the report from a series of workshops, one of which was held as part of the EU-funded VOX-Pol network. The workshops involved participants who were quite high up in law enforcement, the intelligence agencies, the tech industry, civil society, and academia. We followed these up with interviews with other individuals in similar positions and conducted background policy research.

Ed: You highlight that many extremist groups (such as Isis) are making really significant use of online platforms to organize, radicalize people, and communicate their messages.

Josh: Absolutely. A large part of our initial interest when writing the report lay in finding out more about the role of the Internet in facilitating the organization, coordination, recruitment and inspiration of violent extremism. The impact of this has been felt very recently in Paris and Beirut, and many other places worldwide. This report pre-dates these most recent developments, but was written in the context of these sorts of events.

Given the Internet is so embedded in our social lives, I think it would have been surprising if political extremist activity hadn’t gone online as well. Of course, the Internet is a very powerful tool and in the wrong hands it can be a very destructive force. But other research, separate from this report, has found that the Internet is not usually people’s first point of contact with extremism: more often than not that actually happens offline through people you know in the wider world. Nonetheless it can definitely serve as an incubator of extremism and can serve to inspire further attacks.

Ed: In the report you identify different groups in society that are affected by, and affecting, issues of extremism, privacy, and governance – including civil society, academics, large corporations and governments.

Josh: Yes, in the later stages of the report we do divide society into these groups, and offer some perspectives on what they do, and what they think about counter-extremism. For example, in terms of counter-speech there are different roles for government, civil society, and industry. There is this idea that ISIS are really good at social media, and that that is how they are powering a lot of their support; but one of the people that we spoke to said that it is not the case that ISIS are really good, it is just that governments are really bad!

We shouldn’t ask government to participate in the social network: bureaucracies often struggle to be really flexible and nimble players on social media. In contrast, civil society groups tend to be more engaged with communities and know how to “speak the language” of those who might be vulnerable to radicalization. As such they can enter that dialogue in a much more informed and effective way.

The other tension, or paradigm, that we offer in this report is the distinction between whether people are ‘at risk’ or ‘a risk’. What we try to point to is that people can go from one to the other. They start by being ‘at risk’ of radicalization, but if they do get radicalized and become a violent threat to society, which only happens in the minority of cases, then they become ‘a risk’. Engaging with people who are ‘at risk’ highlights the importance of having respect and dialogue with communities that are often the first to be lambasted when things go wrong, but which seldom get all the help they need, or the credit when they get it right. We argue that civil society is particularly suited for being part of this process.

Ed: It seems like the things that people do or say online can only really be understood in terms of the context. But often we don’t have enough information, and it can be very hard to just look at something and say ‘This is definitely extremist material that is going to incite someone to commit terrorist or violent acts’.

Josh: Yes, I think you’re right. In the report we try to take what is a very complicated concept – extremist material – and divide it into more manageable chunks of meaning. We talk about three hierarchical levels. The degree of legal consensus over whether content should be banned decreases as it gets less extreme. The first level we identified was straight up provocation and hate speech. Hate speech legislation has been part of the law for a long time. You can’t incite racial hatred, you can’t incite people to crimes, and you can’t promote terrorism. Most countries in Europe have laws against these things.

The second level is the glorification and justification of terrorism. This is usually more post-hoc as by definition if you are glorifying something it has already happened. You may well be inspiring future actions, but that relationship between the act of violence and the speech act is different than with provocation. Nevertheless, some countries, such as Spain and France, have pushed hard on criminalising this. The third level is non-violent extremist material. This is the most contentious level, as there is very little consensus about what types of material should be called ‘extremist’ even though they are non-violent. One of the interviewees that we spoke to said that often it is hard to distinguish between someone who is just being friendly and someone who is really trying to persuade or groom someone to go to Syria. It is really hard to put this into a legal framework with the level of clarity that the law demands.

There is a proportionality question here. When should something be considered specifically illegal? And, then, if an illegal act has been committed what should the appropriate response be? This is bound to be very different in different situations.

Ed: Do you think that there are any immediate or practical steps that governments can take to improve the current situation? And do you think that there any ethical concerns which are not being paid sufficient attention?

Josh: In the report we raised a few concerns about existing government responses. There are lots of things beside privacy that could be seen as fundamental human rights and that are being encroached upon. Freedom of association and assembly is a really interesting one. We might not have the same reverence for a Facebook event plan or discussion group as we would a protest in a town hall, but of course they are fundamentally pretty similar.

The wider danger here is the issue of mission creep. Once you have systems in place that can do potentially very powerful analytical investigatory things then there is a risk that we could just keep extending them. If something can help us fight terrorism then should we use it to fight drug trafficking and violent crime more generally? It feels to me like there is a technical-military-industrial complex mentality in government where if you build the systems then you just want to use them. In the same way that CCTV cameras record you irrespective of whether or not you commit a violent crime or shoplift, we need to ask whether the same panoptical systems of surveillance should be extended to the Internet. Now, to a large extent they are already there. But what should we train the torchlight on next?

This takes us back to the importance of having necessary, proportionate, and independently authorized processes. When you drill down into how rights like privacy should be balanced with security then it gets really complicated. But the basic process-driven things that we identified in the report are far simpler: if we accept that governments have the right to take certain actions in the name of security, then, no matter how important or life-saving those actions are, there are still protocols that governments must follow. We really wanted to infuse these issues into the debate through the report.

Read the full report: Brown, I., and Cowls, J., (2015) Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material. VOX-Pol Publications.


Josh Cowls is a student and researcher based at MIT, working to understand the impact of technology on politics, communication and the media.

Josh Cowls was talking to Blog Editor Bertie Vidgen.

New Voluntary Code: Guidance for Sharing Data Between Organisations
https://ensr.oii.ox.ac.uk/new-voluntary-code-guidance-for-sharing-data-between-organisations/
Fri, 08 Jan 2016

Many organisations are coming up with their own internal policy and guidelines for data sharing. However, for data sharing between organisations to be straightforward, there needs to be a common understanding of basic policy and practice. During her time as an OII Visiting Associate, Alison Holt developed a pragmatic solution in the form of a Voluntary Code, anchored in the developing ISO standards for the Governance of Data. She discusses the voluntary code, and the need to provide urgent advice to organisations struggling with policy for sharing data.

Collecting, storing and distributing digital data is significantly easier and cheaper now than ever before, in line with predictions from Moore, Kryder and Gilder. Organisations are incentivised to collect large volumes of data with the hope of unleashing new business opportunities or maybe even new businesses. Consider the likes of Uber, Netflix, and Airbnb, and the other data mongers who have built services based solely on digital assets.

The use of this new abundant data will continue to disrupt traditional business models for years to come, and there is no doubt that these large data volumes can provide value. However, they also bring associated risks (such as unplanned disclosure and hacks) and they come with constraints (for example in the form of privacy or data protection legislation). Hardly a week goes by without a data breach hitting the headlines. Even if your telecommunications provider didn’t inadvertently share your bank account and sort code with hackers, and your child wasn’t one of the hundreds of thousands of children whose birthdays, names, and photos were exposed by a smart toy company, you might still be wondering exactly how your data is being looked after by the banks, schools, clinics, utility companies, local authorities and government departments that are so quick to collect your digital details.

Then there are the companies who have invited you to sign away the rights to your data and possibly your privacy too – the ones that ask you to sign the Terms and Conditions for access to a particular service (such as a music or online shopping service) or have asked you for access to your photos. And possibly you are one of the “worried well” who wear or carry a device that collects your health data and sends it back to storage in a faraway country, for analysis.

So unless you live in a lead-lined concrete bunker without any access to internet-connected devices, and you don't have the need to pass by webcams or sensors, or use public transport or public services, then your data is being collected and shared. And for the majority of the time, you benefit from this enormously. The bus stop tells you exactly when the next bus is coming, you have easy access to services and entertainment fitted very well to your needs, and you can do most of your bank and utility transactions online in the peace and quiet of your own home. Beyond you as an individual, there are organisations "out there" sharing your data to provide you better healthcare, education, smarter city services and secure and efficient financial services, and generally matching the demand for services with the people needing them.

So we most likely all have data that is being shared and it is generally in our interest to share it, but how can we trust the organisations responsible for sharing our data? As an organisation, how can I know that my partner and supplier organisations are taking care of my client and product information?

Organisations taking these issues seriously are coming up with their own internal policy and guidelines. However, for data sharing between organisations to be straightforward, there needs to be a common understanding of basic policy and practice. During my time as a visiting associate at the Oxford Internet Institute, University of Oxford, I have developed a pragmatic solution in the form of a Voluntary Code. The Code has been produced using the guidelines for voluntary code development produced by the Office of Community Affairs, Industry Canada. More importantly, the Code is anchored in the developing ISO standards for the Governance of Data (the 38505 series). These standards apply the governance principles and model from the 38500 standard and introduce the concept of a data accountability map, highlighting six focus areas for a governing body to apply governance. The early-stage standard suggests considering the aspects of Value, Risk and Constraint for each area, to determine what practice and policy should be applied to maximise the value from organisational data, whilst applying constraints as set by legislation and local policy, and minimising risk.

I am Head of the New Zealand delegation to the ISO group developing IT Service Management and IT Governance standards, SC40, and am leading the development of the 38505 series of Governance of Data standards, working with a talented editorial team of industry and standards experts from Australia, China and the Netherlands. I am confident that the robust ISO consensus-led process involving subject matter experts from around the world, will result in the publication of best practice guidance for the governance of data, presented in a format that will have relevance and acceptance internationally.

In the meantime, however, I see a need to provide urgent advice to organisations struggling with policy for sharing data. I have used my time at Oxford to interview policy, ethics, smart city, open data, health informatics, education, cyber security and social science experts and users, owners and curators of large data sets, and have come up with a “Voluntary Code for Data Sharing”. The Code takes three areas from the data accountability map in the developing ISO standard 38505-1; namely Collect, Store, Distribute, and applies the aspects of Value, Risk and Constraint to provide seven maxims for sharing data. To assist with adoption and compliance, the Code provides references to best practice and examples. As the ISO standards for the Governance of Data develop, the Code will be updated. New examples of good practice will be added as they come to light.

[A permanent home for the voluntary code is currently being organised; please email me in the meantime if you are interested in it: Alison.holt@longitude174.com]

The Code is deliberately short and succinct, but it does provide links for those who need to read more to understand the underpinning practices and standards, and those tasked with implementing organisational data policy and practice. It cannot guarantee good outcomes. With new security threats arising daily, nobody can fully guarantee the safety of your information. However, if you deal with an organisation that is compliant with the Voluntary Code, then at least you can have assurance that the organisation has considered how it is using your data now and how it might want to reuse your data in the future, how and where your data will be stored, and then finally how your data will be distributed or discarded. And that's a good start!


Alison Holt was an OII Academic Visitor in late 2015. She is an internationally acclaimed expert in the Governance of Information Technology and Data, heading up the New Zealand delegations to the international standards committees for IT Governance and Service Management (SC40) and Software and Systems Engineering (SC7). The British Computer Society published Alison's first book on the Governance of IT in 2013.

Controlling the crowd? Government and citizen interaction on emergency-response platforms
https://ensr.oii.ox.ac.uk/controlling-the-crowd-government-and-citizen-interaction-on-emergency-response-platforms/
Mon, 07 Dec 2015

There is a great deal of interest in the use of crowdsourcing tools and practices in emergency situations. Gregory Asmolov's article Vertical Crowdsourcing in Russia: Balancing Governance of Crowds and State–Citizen Partnership in Emergency Situations (Policy and Internet 7,3) examines crowdsourcing of emergency response in Russia in the wake of the devastating forest fires of 2010. Interestingly, he argues that government involvement in these crowdsourcing efforts can actually be used to control and regulate volunteers from the top down — not just to "mobilize them".

RUSSIA, NEAR RYAZAN – 8 MAY 2011: Piled-up wood in the forest one winter after the devastating forest fires in Russia in 2010. Image: Max Mayorov (Flickr).
My interest in the role of crowdsourcing tools and practices in emergency situations was triggered by my personal experience. In 2010 I was one of the co-founders of the Russian “Help Map” project, which facilitated volunteer-based response to wildfires in central Russia. When I was working on this project, I realized that a crowdsourcing platform can bring citizen participation to a new level and transform sporadic initiatives by single citizens and groups into large-scale, relatively well coordinated operations. What was also important was that both the needs, and the forms of participation required to address those needs, were defined by the users themselves.

To some extent the citizen-based response filled the gap left by the lack of a sufficient response from the traditional institutions.[1] This suggests that the role of ICTs in disaster response should be examined within the political context of the power relationship between members of the public who use digital tools and the traditional institutions. My experience in 2010 was the first time I was able to see that, while we would expect that in a case of natural disaster both the authorities and the citizens would be mostly concerned about the emergency, the actual situation might be different.

Apparently the emergence of independent, citizen-based collective action in response to a disaster was considered a threat by the institutional actors. First, it was a threat to the image of these institutions, which didn’t want citizens to be portrayed as the leading responding actors. Second, any type of citizen-based collective action, even if not purely political, may be an issue of concern, in authoritarian countries in particular. Accordingly, one can argue that, while citizens are struggling against a disaster, in some cases the traditional institutions may make substantial efforts to restrain and contain the action of citizens. In this light, the role of information technologies can include not only enhancing citizen engagement and increasing the efficiency of the response, but also controlling the digital crowd of potential volunteers.

The purpose of this paper was to conceptualize the tension between the role of ICTs in the engagement of the crowd and its resources, and the role of ICTs in controlling the resources of the crowd. The research suggests a theoretical and methodological framework that allows us to explore this tension. The paper focuses on an analysis of specific platforms, presenting empirical data about their structure alongside interviews with their developers and administrators. This data is used to identify how tools of engagement are transformed into tools of control, and what major differences there are between platforms that seek to achieve these two goals. That said, obviously any platform can have properties of control and properties of engagement at the same time; however, the proportion of these two types of elements can differ significantly.

One of the core issues for my research is how traditional actors respond to fast, bottom-up innovation by citizens.[2] On the one hand, the authorities try to restrict the empowerment of citizens by the new tools. On the other hand, the institutional actors also seek to innovate and develop new tools that can restore the balance of power that has been challenged by citizen-based innovation. The tension between using digital tools for the engagement of the crowd and for control of the crowd can be considered as one of the aspects of this dynamic.

That doesn’t mean that all state-backed platforms are created solely for the purpose of control. One can argue, however, that the development of digital tools that offer a mechanism of command and control over the resources of the crowd is prevalent among the projects that are supported by the authorities. This can also be approached as a means of using information technologies in order to include the digital crowd within the “vertical of power”, which is a top-down strategy of governance. That is why this paper seeks to conceptualize this phenomenon as “vertical crowdsourcing”.

The question of whether using a digital tool as a mechanism of control is intentional is to some extent secondary. What is important is that the analysis of platform structures relying on activity theory identifies a number of properties that allow us to argue that these tools are primarily tools of control. The conceptual framework introduced in the paper is used in order to follow the transformation of tools for the engagement of the crowd into tools of control over the crowd. That said, some of the interviews with the developers and administrators of the platforms may suggest the intentional nature of the development of tools of control, while crowd engagement is secondary.

[1] Asmolov G. “Natural Disasters and Alternative Modes of Governance: The Role of Social Networks and Crowdsourcing Platforms in Russia”, in Bits and Atoms Information and Communication Technology in Areas of Limited Statehood, edited by Steven Livingston and Gregor Walter-Drop, Oxford University Press, 2013.

[2] Asmolov G., “Dynamics of innovation and the balance of power in Russia”, in State Power 2.0 Authoritarian Entrenchment and Political Engagement Worldwide, edited by Muzammil M. Hussain and Philip N. Howard, Ashgate, 2013.

Read the full article: Asmolov, G. (2015) Vertical Crowdsourcing in Russia: Balancing Governance of Crowds and State–Citizen Partnership in Emergency Situations. Policy and Internet 7,3: 292–318.


Gregory Asmolov is a PhD student at the LSE, where he is studying crowdsourcing and the emergence of spontaneous order in situations of limited statehood. He is examining the emerging collaborative power of ICT-enabled crowds in crisis situations, and aiming to investigate the topic drawing on evolutionary theories concerned with spontaneous action and the sustainability of voluntary networked organizations. He analyzes whether crowdsourcing practices can lead to the development of bottom-up online networked institutions and “peer-to-peer” governance.

]]>
Government “only” retaining online metadata still presents a privacy risk https://ensr.oii.ox.ac.uk/government-only-retaining-online-metadata-still-presents-a-privacy-risk/ Mon, 30 Nov 2015 08:14:56 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3514 Issues around data capture, retention and control are gaining significant attention in many Western countries — including in the UK. In this piece originally posted on the Ethics Centre Blog, the OII’s Brent Mittelstadt considers the implications of metadata retention for privacy. He argues that when considered in relation to individuals’ privacy, metadata should not be viewed as fundamentally different to data about the content of a communication.

Since 13 October 2015 telecommunications providers in Australia have been required to retain metadata on communications for two years. Image by r2hox (Flickr).

Australia’s new data retention law for telecommunications providers, comparable to extant UK and US legislation, came into effect on 13 October 2015. Telecoms and ISPs are now required to retain metadata about communications for two years to assist law enforcement agencies in crime and terrorism investigation. Despite now being in effect, the extent and types of data to be collected remain unclear. The law has been widely criticised for violating Australians’ right to privacy by introducing overly broad surveillance of civilians. The Government has argued against this portrayal, saying that the content of communications will not be retained but rather the “data about the data”: the location, time, date and duration of a call.

Metadata retention raises complex ethical issues often framed in terms of privacy which are relevant globally. A popular argument is that metadata offers a lower risk of violating privacy compared to primary data – the content of communication. The distinction between the “content” and “nature” of a communication implies that if the content of a message is protected, so is the privacy of the sender and receiver.

The assumption that metadata retention is more acceptable because of its lower privacy risks is unfortunately misguided. Sufficient volumes of metadata offer comparable opportunities to generate invasive information about civilians. Consider a hypothetical. I am given access to a mobile carrier’s dataset that specifies time, date, caller and receiver identity, in addition to a continuous record of location constructed from telecommunication tower triangulation records. I see from this that when John’s wife Jane leaves the house, John often calls Jill and visits her for a short period afterwards. From this I conclude that John may be having an affair with Jill. Now consider the alternative. Instead of metadata I have access to recordings of the calls between John and Jill, with which I reach the same conclusion.
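To make the hypothetical concrete, here is a minimal sketch in Python of how such an inference could be drawn mechanically from metadata alone. The records, field layout, names and one-hour window are entirely invented for illustration; this is not a description of any actual carrier system or analysis tool.

```python
# Illustrative only: hypothetical metadata records, not a real carrier schema.
from datetime import datetime, timedelta

# Call metadata: (timestamp, caller, receiver, duration_minutes)
calls = [
    (datetime(2015, 3, 2, 14, 5), "John", "Jill", 3),
    (datetime(2015, 3, 9, 14, 12), "John", "Jill", 2),
]

# Location metadata from tower triangulation: (timestamp, person, place)
locations = [
    (datetime(2015, 3, 2, 13, 55), "Jane", "away_from_home"),
    (datetime(2015, 3, 2, 14, 40), "John", "Jill_home_area"),
    (datetime(2015, 3, 9, 14, 0), "Jane", "away_from_home"),
    (datetime(2015, 3, 9, 14, 45), "John", "Jill_home_area"),
]

WINDOW = timedelta(hours=1)  # arbitrary illustrative time window

def suspicious_episodes(calls, locations):
    """Count episodes where Jane is away, John calls Jill, and John then
    appears near Jill's home shortly afterwards -- all inferred from
    metadata (who, when, where), never from call content."""
    episodes = 0
    for call_time, caller, receiver, _ in calls:
        if caller != "John" or receiver != "Jill":
            continue
        jane_away = any(p == "Jane" and place == "away_from_home"
                        and abs(t - call_time) <= WINDOW
                        for t, p, place in locations)
        john_visits = any(p == "John" and place == "Jill_home_area"
                          and call_time <= t <= call_time + WINDOW
                          for t, p, place in locations)
        if jane_away and john_visits:
            episodes += 1
    return episodes

print(suspicious_episodes(calls, locations))  # -> 2 repeated episodes
```

Even this crude matching over who-called-whom-and-where is enough to surface a pattern that feels intimately private, which is the point of the argument that follows.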

From a privacy perspective the method I used to infer something about John’s marriage is trivial. In both cases I am making an intrusive inference about John based on data that describes his behaviours. I cannot be certain but in both cases I am sufficiently confident that my inference is correct based on the data available. My inferences are actionable – I treat them as if they are reliable, accurate knowledge when interacting with John. It is this willingness to act on uncertainty (which is central to ‘Big Data’) that makes metadata ethically similar to primary data. While it is comparatively difficult to learn something from metadata, the potential is undeniable. Both types allow for invasive inferences to be made about the lives and behaviours of people.

Going further, some would argue that metadata can actually be more invasive than primary data. Variables such as location, time and duration are easier to assemble into a historical record of behaviour than content. These concerns are deepened by the difficulty of “opting out” of metadata surveillance. While a person can hypothetically forego all modern communication technologies, privacy suddenly has a much higher cost in terms of quality of life.

Technologies such as encrypted communication platforms, virtual private networks (VPNs) and anonymity networks have all been advocated as ways to subvert metadata collection by hiding aspects of your communications. It is worth remembering that these techniques remain feasible only so long as they remain legal, and only for those with the technical knowledge and (in some cases) the ability to pay. These technologies raise the question of whether a right to anonymity exists. Perhaps privacy enhancing technologies are immoral? Headlines about digital piracy and the “dark web” show how quickly technologically hiding one’s identity and behaviours can take on a criminal and immoral tone. The status quo of privacy subtly shifts when techniques to hide aspects of one’s personal life are portrayed as necessarily subversive. The technologies to combat metadata retention are not criminal or immoral – they are privacy enhancing technologies.

Privacy is historically a fundamental human value. Individuals have a right to privacy. Violations must be justified by a competing interest. In discussing the ethics of metadata retention and anonymity technologies it is easy to forget this status quo. Privacy is not something that individuals have to justify or argue for – it should be assumed.


Brent Mittelstadt is a Postdoctoral Research Fellow at the Oxford Internet Institute working on the ‘Ethics of Biomedical Big Data’ project with Prof. Luciano Floridi. His research interests include the ethics of information handled by medical ICT, theoretical developments in discourse and virtue ethics, and epistemology of information.

]]>
Crowdsourcing ideas as an emerging form of multistakeholder participation in Internet governance https://ensr.oii.ox.ac.uk/crowdsourcing-ideas-as-an-emerging-form-of-multistakeholder-participation-in-internet-governance/ Wed, 21 Oct 2015 11:59:56 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3445 What are the linkages between multistakeholder governance and crowdsourcing? Both are new — trendy, if you will — approaches to governance premised on the potential of collective wisdom, bringing together diverse groups in policy-shaping processes. Their interlinkage has remained underexplored so far. Our article recently published in Policy and Internet sought to investigate this in the context of Internet governance, in order to assess the extent to which crowdsourcing represents an emerging opportunity of participation in global public policymaking.

We examined two recent Internet governance initiatives which incorporated crowdsourcing with mixed results: the first one, the ICANN Strategy Panel on Multistakeholder Innovation, received only limited support from the online community; the second, NETmundial, had a significant number of online inputs from global stakeholders who had the opportunity to engage using a platform for political participation specifically set up for the drafting of the outcome document. The study builds on these two cases to evaluate how crowdsourcing was used as a form of public consultation aimed at bringing the online voice of the “undefined many” (as opposed to the “elected few”) into Internet governance processes.

From the two cases, it emerged that the design of the consultation processes conducted via crowdsourcing platforms is key in overcoming barriers to participation. For instance, in the NETmundial process, the ability to submit comments and participate remotely via www.netmundial.br attracted inputs from all over the world very early on, starting from the preparatory phase of the meeting. In addition, substantial public engagement was obtained from the local community in the drafting of the outcome document, through a platform for political participation — www.participa.br — that gathered comments in Portuguese. In contrast, the outreach efforts of the ICANN Strategy Panel on Multistakeholder Innovation remained limited; the crowdsourcing platform it used gathered input (exclusively in English) from only a small group of people, insufficient to attribute to online public input a significant role in the reform of ICANN’s multistakeholder processes.

Second, questions around how crowdsourcing should and could be used effectively to enhance the legitimacy of decision-making processes in Internet governance remain unanswered. A proper institutional setting that recognizes a role for online multistakeholder participation is yet to be defined; in its absence, the initiatives we examined present a set of procedural limitations. For instance, in the NETmundial case, the Executive Multistakeholder Committee, in charge of drafting an outcome document to be discussed during the meeting based on the analysis of online contributions, favoured more “mainstream” and “uncontroversial” contributions. Additionally, online deliberation mechanisms for different propositions put forward by a High-Level Multistakeholder Committee, which commented on the initial draft, were not in place.

With regard to ICANN, online consultations have been used on a regular basis since its creation in 1998. Its target audience is the “ICANN community,” a group of stakeholders who volunteer their time and expertise to improve policy processes within the organization. Despite the effort, initiatives such as the 2000 global election for the new At-Large Directors have revealed difficulties in reaching as broad an audience as intended. Our study discusses some of the obstacles to the implementation of this ambitious initiative, including limited information and awareness about the At-Large elections, and low Internet access and use in most developing countries, particularly in Africa and Latin America.

Third, there is a need for clear rules regarding the way in which contributions are evaluated in crowdsourcing efforts. When the deliberating body (or committee) is free to disregard inputs without providing any motivation, it triggers concerns about the broader transnational governance framework in which we operate, as there is no election of those few who end up determining which parts of the contributions should be reflected in the outcome document. To avoid the agency problem arising from the lack of accountability over the incorporation of inputs, it is important that crowdsourcing attempts pay particular attention to designing a clear and comprehensive assessment process.

The “wisdom of the crowd” has traditionally been explored in developing the Internet, yet it remains a contested ground when it comes to its governance. In multistakeholder set-ups, the diversity of voices and the collection of ideas and input from as many actors as possible — via online means — represent a desideratum, rather than a reality. In our exploration of empowerment through online crowdsourcing for institutional reform, we identify three fundamental preconditions: first, the existence of sufficient community interest, able to leverage wide expertise beyond a purely technical discussion; second, the existence of procedures for the collection and screening of inputs, streamlining certain ideas considered for implementation; and third, commitment to institutionalizing the procedures, especially by clearly defining the rules according to which feedback is incorporated and circumvention is avoided.

Read the full paper: Radu, R., Zingales, N. and Calandro, E. (2015), Crowdsourcing Ideas as an Emerging Form of Multistakeholder Participation in Internet Governance. Policy & Internet, 7: 362–382. doi: 10.1002/poi3.99


Roxana Radu is a PhD candidate in International Relations at the Graduate Institute of International and Development Studies in Geneva and a fellow at the Center for Media, Data and Society, Central European University (Budapest). Her current research explores the negotiation of internet policy-making in global and regional frameworks.

Nicolo Zingales is an assistant professor at Tilburg law school, a senior member of the Tilburg Law and Economics Center (TILEC), and a research associate of the Tilburg Institute for Law, Technology and Society (TILT). He researches on various aspects of Internet governance and regulation, including multistakeholder processes, data-driven innovation and the role of online intermediaries.

Enrico Calandro (PhD) is a senior research fellow at Research ICT Africa, an ICT policy think-tank based in Cape Town. His academic research focuses on accessibility and affordability of ICT, broadband policy, and internet governance issues from an African perspective.

]]>
Uber and Airbnb make the rules now — but to whose benefit? https://ensr.oii.ox.ac.uk/uber-and-airbnb-make-the-rules-now-but-to-whose-benefit/ Mon, 27 Jul 2015 07:12:20 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3319
The “Airbnb Law” was signed by Mayor Ed Lee in October 2014 at San Francisco City Hall, legalizing short-term rentals in SF with many conditions. Image of protesters by Kevin Krejci (Flickr).

Ride-hailing app Uber is close to replacing government-licensed taxis in some cities, while Airbnb’s accommodation rental platform has become a serious competitor to government-regulated hotel markets. Many other apps and platforms are trying to do the same in other sectors of the economy. In my previous post, I argued that platforms can be viewed in social science terms as economic institutions that provide infrastructures necessary for markets to thrive. I explained how the natural selection theory of institutional change suggests that people are migrating from state institutions to these new code-based institutions because they provide a more efficient environment for doing business. In this article, I will discuss some of the problems with this theory, and outline a more nuanced theory of institutional change that suggests that platforms’ effects on society will be complex and influence different people in different ways.

Economic sociologists like Neil Fligstein have pointed out that not everyone is as free to choose the means through which they conduct their trade. For example, if buyers in a market switch to new institutions, sellers may have little choice but to follow, even if the new institutions leave them worse off than the old ones did. Even if taxi drivers don’t like Uber’s rules, they may find that there is little business to be had outside the platform, and switch anyway. In the end, the choice of institutions can boil down to power. Economists have shown that even a small group of participants with enough market power — like corporate buyers — may be able to force a whole market to tip in favour of particular institutions. Uber offers a special solution for corporate clients, though I don’t know if this has played any part in the platform’s success.

Even when everyone participates in an institutional arrangement willingly, we still can’t assume that it will contribute to the social good. Cambridge economic historian Sheilagh Ogilvie has pointed out that an institution that is efficient for everyone who participates in it can still be inefficient for society as a whole if it affects third parties. For example, when Airbnb is used to turn an ordinary flat into a hotel room, it can cause nuisance to neighbours in the form of noise, traffic, and guests unfamiliar with the local rules. The convenience and low cost of doing business through the platform is achieved in part at others’ expense. In the worst case, a platform can make society not more but less efficient — by creating a ‘free rider economy’.

In general, social scientists recognize that different people and groups in society often have conflicting interests in how economic institutions are shaped. These interests are reconciled — if they are reconciled — through political institutions. Many social scientists thus look not so much at efficiencies but at political institutions to understand why economic institutions are shaped the way they are. For example, a democratic local government in principle represents the interests of its citizens, through political institutions such as council elections and public consultations. Local governments consequently try to strike a balance between the conflicting interests of hoteliers and their neighbours, by limiting hotel business to certain zones. In contrast, Airbnb as a for-profit business must cater to the interests of its customers, the would-be hoteliers and their guests. It has no mechanism, and more importantly, no mandate, to address on an equal footing the interests of third parties like customers’ neighbours. Perhaps because of this, 74% of Airbnb’s properties are not in the main hotel districts, but in ordinary residential blocks.

That said, governments have their own challenges in producing fair and efficient economic institutions. Not least among these is the fact that government regulators are at risk of capture by incumbent market participants, or at the very least they face the innovator’s dilemma: it is easier to craft rules that benefit the incumbents than rules that provide great but uncertain benefits to future market participants. For example, cities around the world operate taxi licensing systems, where only strictly limited numbers of license owners are allowed to operate taxicabs. Whatever benefits this system offers to customers in terms of quality assurance, among its biggest beneficiaries are the license owners, and among its losers the would-be drivers who are excluded from the market. Institutional insiders and outsiders have conflicting interests, and government political institutions are often such that it is easier for governments to side with the insiders.

Against this background, platforms appear almost as radical reformers that provide market access to those whom the establishment has denied it. For example, Uber recently announced that it aims to create one million jobs for women by 2020, a bold pledge in the male-dominated transport industry, and one that would likely not be possible if it adhered to government licensing requirements, as most licenses are owned by men. Having said that, Uber’s definition of a ‘job’ is something much more precarious and entrepreneurial than the conventional definition. My point here is not to side with either Uber or the licensing system, but to show that their social implications are very different. Both possess at least some flaws as well as redeeming qualities, many of which can be traced back to their political institutions and whom they represent.

What kind of new economic institutions are platform developers creating? How efficient are they? What other consequences, including unintended ones, do they have and to whom? Whose interests are they geared to represent — capital vs. labour, consumer vs. producer, Silicon Valley vs. local business, incumbent vs. marginalized? These are the questions that policy makers, journalists, and social scientists ought to be asking at this moment of transformation in our economic institutions. Instead of being forced to choose one or the other between established institutions and platforms as they currently are, I hope that we will be able to discover ways to take what is good in both, and create infrastructure for an economy that is as fair and inclusive as it is efficient and innovative.


Vili Lehdonvirta is a Research Fellow and DPhil Programme Director at the Oxford Internet Institute, and an editor of the Policy & Internet journal. He is an economic sociologist who studies the social and economic dimensions of new information technologies around the world, with particular expertise in digital markets and crowdsourcing.

]]>
Why are citizens migrating to Uber and Airbnb, and what should governments do about it? https://ensr.oii.ox.ac.uk/why-are-citizens-migrating-to-uber-and-airbnb-and-what-should-governments-do-about-it/ Mon, 27 Jul 2015 06:48:57 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3307
Protest for fair taxi laws in Portland; organizers want city leaders to make ride-sharing companies play by the same rules as cabs and Town cars. Image: Aaron Parecki (Flickr).

Cars were smashed and tires burned in France last month in protests against the ride hailing app Uber. Less violent protests have also been staged against Airbnb, a platform for renting short-term accommodation. Despite the protests, neither platform shows any signs of faltering. Uber says it has a million users in France, and is available in 57 countries. Airbnb is available in over 190 countries, and boasts over a million rooms, more than hotel giants like Hilton and Marriott. Policy makers at the highest levels are starting to notice the rise of these and similar platforms. An EU Commission flagship strategy paper notes that “online platforms are playing an ever more central role in social and economic life,” while the Federal Trade Commission recently held a workshop on the topic in Washington.

Journalists and entrepreneurs have been quick to coin terms that try to capture the essence of the social and economic changes associated with online platforms: the sharing economy; the on-demand economy; the peer-to-peer economy; and so on. Each perhaps captures one aspect of the phenomenon, but doesn’t go very far in helping us make sense of all its potentials and contradictions, including why some people love it and some would like to smash it into pieces. Instead of starting from the assumption that everything we see today is new and unprecedented, what if we dug into existing social science theory to see what it has to say about economic transformation and the emergence of markets?

Economic sociologists are adamant that markets don’t just emerge by themselves: they are always based on some kind of an underlying infrastructure that allows people to find out what goods and services are on offer, agree on prices and terms, pay, and have a reasonable expectation that the other party will honour the agreement. The oldest market infrastructure is the personal social network: traders hear what’s on offer through word of mouth and trade only with those whom they personally know and trust. But personal networks alone couldn’t sustain the immense scale of trading in today’s society. Every day we do business with strangers and trust them to provide for our most basic needs. This is possible because modern society has developed institutions — things like private property, enforceable contracts, standardized weights and measures, consumer protection, and many other general and sector specific norms and facilities. By enabling and constraining everyone’s behaviours in predictable ways, institutions constitute a robust and more inclusive infrastructure for markets than personal social networks.

Modern institutions didn’t of course appear out of nowhere. Between prehistoric social networks and the contemporary institutions of the modern state, there is a long historical continuum of economic institutions, from ancient trade routes with their customs to medieval fairs with their codes of conduct to state-enforced trade laws of the early industrial era. Institutional economists led by Oliver Williamson and economic historians led by Douglass North theorized in the 1980s that economic institutions evolve towards more efficient forms through a process of natural selection. As new institutional forms become possible thanks to technological and organizational innovation, people switch to cheaper, easier, more secure, and overall more efficient institutions out of self-interest. Old and cumbersome institutions fall into disuse, and society becomes more efficient and economically prosperous as a result. Williamson and North both later received the Nobel Memorial Prize in Economic Sciences.

It is easy to frame platforms as the next step in such an evolutionary process. Even if platforms don’t replace state institutions, they can plug gaps that remain in the state-provided infrastructure. For example, enforcing a contract in court is often too expensive and unwieldy to be used to secure transactions between individual consumers. Platforms provide cheaper and easier alternatives to formal contract enforcement, in the form of reputation systems that allow participants to rate each others’ conduct and view past ratings. Thanks to this, small transactions like sharing a commute that previously only happened in personal networks can now potentially take place on a wider scale, resulting in greater resource efficiency and prosperity (the ‘sharing economy’). Platforms are not the first companies to plug holes in state-provided market infrastructure, though. Private arbitrators, recruitment agencies, and credit rating firms have been doing similar things for a long time.
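As a toy illustration of the kind of mechanism described above (and emphatically not any real platform’s algorithm), the sketch below shows a reputation score aggregated from past ratings; the names, scores and recency weighting are invented assumptions.

```python
# Toy sketch of a platform reputation system; the weighting scheme and
# names are hypothetical, not any real platform's actual algorithm.
from collections import defaultdict

ratings = defaultdict(list)  # seller -> list of (score 1-5, weight)

def rate(seller, score, recent=True):
    """Record a 1-5 rating; more recent transactions count for more."""
    ratings[seller].append((score, 1.0 if recent else 0.5))

def reputation(seller):
    """Weighted average of past ratings, shown to prospective buyers."""
    scored = ratings[seller]
    if not scored:
        return None  # no history yet -- the cold-start problem
    total_weight = sum(w for _, w in scored)
    return sum(s * w for s, w in scored) / total_weight

rate("driver_42", 5)
rate("driver_42", 4, recent=False)
print(reputation("driver_42"))  # -> roughly 4.67
```

The design point is that trust here is produced by the platform’s own ledger of ratings rather than by courts or licensing bodies, which is exactly the gap-filling role the paragraph above describes.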

What’s arguably new about platforms, though, is that some of the most popular ones are not mere complements, but almost complete substitutes to state-provided market infrastructures. Uber provides a complete substitute to government-licensed taxi infrastructures, addressing everything from quality and discovery to trust and payment. Airbnb provides a similarly sweeping solution to short-term accommodation rental. Both platforms have been hugely successful; in San Francisco, Uber has far surpassed the city’s official taxi market in size. The sellers on these platforms are not just consumers wanting to make better use of their resources, but also firms and professionals switching over from the state infrastructure. It is as if people and companies were abandoning their national institutions and emigrating en masse to Platform Nation.

From the natural selection perspective, this move from state institutions to platforms seems easy to understand. State institutions are designed by committee and carry all kinds of historical baggage, while platforms are designed from the ground up to address their users’ needs. Government institutions are geographically fragmented, while platforms offer a seamless experience from one city, country, and language area to the other. Government offices have opening hours and queues, while platforms make use of latest technologies to provide services around the clock (the ‘on-demand economy’). Given the choice, people switch to the most efficient institutions, and society becomes more efficient as a result. The policy implications of the theory are that government shouldn’t try to stop people from using Uber and Airbnb, and that it shouldn’t try to impose its evidently less efficient norms on the platforms. Let competing platforms innovate new regulatory regimes, and let people vote with their feet; let there be a market for markets.

The natural selection theory of institutional change provides a compellingly simple way to explain the rise of platforms. However, it has difficulty in explaining some important facts, like why economic institutions have historically developed differently in different places around the world, and why some people now protest vehemently against supposedly better institutions. Indeed, over the years since the theory was first introduced, social scientists have discovered significant problems in it. Economic sociologists like Neil Fligstein have noted that not everyone is as free to choose the institutions that they use. Economic historian Sheilagh Ogilvie has pointed out that even institutions that are efficient for those who participate in them can still sometimes be inefficient for society as a whole. These points suggest a different theory of institutional change, which I will apply to online platforms in my next post.


Vili Lehdonvirta is a Research Fellow and DPhil Programme Director at the Oxford Internet Institute, and an editor of the Policy & Internet journal. He is an economic sociologist who studies the social and economic dimensions of new information technologies around the world, with particular expertise in digital markets and crowdsourcing.

]]>
Iris scanners can now identify us from 40 feet away https://ensr.oii.ox.ac.uk/iris-scanners-can-now-identify-us-from-40-feet-away/ Thu, 21 May 2015 10:23:02 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3369 Public anxiety and legal protections currently pose a major challenge to anyone wanting to introduce eye-scanning security technologies. Reposted from The Conversation.

 

Biometric technologies are on the rise. By electronically recording data about individuals’ physical attributes such as fingerprints or iris patterns, security and law enforcement services can quickly identify people with a high degree of accuracy.

The latest development in this field is the scanning of irises from a distance of up to 40 feet (12 metres) away. Researchers from Carnegie Mellon University in the US demonstrated they were able to use their iris recognition technology to identify drivers from an image of their eye captured from their vehicle’s side mirror.

The developers of this technology envisage that, as well as improving security, it will be more convenient for the individuals being identified. By using measurements of physiological characteristics, people no longer need security tokens or cumbersome passwords to identify themselves.

However, introducing such technology will come with serious challenges. There are both legal issues and public anxiety around having such sensitive data captured, stored, and accessed.

Social resistance

We have researched this area by presenting people with potential future scenarios that involved biometrics. We found that, despite the convenience of long-range identification (no queuing in front of scanners), there is a considerable reluctance to accept this technology.

On a basic level, people prefer a physical interaction when their biometrics are being read. “I feel negatively about a remote iris scan because I want there to be some kind of interaction between me and this system that’s going to be monitoring me,” said one participant in our research.

But another serious concern was that of “function creep”, whereby people slowly become accustomed to security and surveillance technologies because they are introduced gradually. This means the public may eventually be faced with much greater use of these systems than they would initially agree to.

Crowd control. Image: Shutterstock.

For example, implementing biometric identification in smart phones and other everyday objects such as computers or cars could make people see the technology as useful and easy to operate. This may increase their willingness to adopt such systems. “I could imagine this becoming normalised to a point where you don’t really worry about it,” said one research participant.

Such familiarity could lead to the introduction of more invasive long-distance recognition systems. This could ultimately produce far more widespread commercial and governmental usage of biometric identification than the average citizen might be comfortable with. As one participant put it: “[A remote scan] could be done every time we walk into a big shopping centre, they could just identify people all over the place and you’re not aware of it.”

Legal barriers

The implementation of biometric systems is not just dependent on user acceptance or resistance. Before iris-scanning technology could be introduced in the EU, major data protection and privacy considerations would have to be made.

The EU has a robust legal framework on privacy and data protection. These are recognised as fundamental rights and so related laws are among the highest ranking. Biometric data, such as iris scans, are often treated as special due to the sensitivity of the information they can contain. Our respondents also acknowledged this: “I think it’s a little too invasive and to me it sounds a bit creepy. Who knows what they can find out by scanning my irises?”

Before iris technology could be deployed, certain legal steps would need to be taken. Under EU law and the European Convention on Human Rights, authorities would need to demonstrate it was a necessary and proportionate solution to a legitimate, specific problem. They would also need to prove iris recognition was the least intrusive way to achieve that goal. And a proportionality test would have to take into account the risks the technology brings along with the benefits.

The very fact that long-range iris scanners can capture data without the collaboration of their subject also creates legal issues. EU law requires individuals to be informed when such information is being collected, by whom, for what purposes, and of their rights regarding the data.

Another issue is how the data is kept secure, particularly in the case of iris-scanning by objects such as smart phones. Scans stored on the device and/or on the cloud for purposes of future authentication would legally require robust security protection. Data stored on the cloud tends to move around between different servers and countries, which makes preventing unauthorised access more difficult.

The other issue with iris scanning is that, while the technology can be precise, it is not infallible. At its current level, the technology can still be fooled. And processing data accurately is another principle of EU data protection law.

Even if we do find ourselves subject to unwanted iris-scanning from 40 feet, safeguards for individuals should always be in place to ensure that they do not bear the burden of technological imperfections.

]]>
Should we use old or new rules to regulate warfare in the information age? https://ensr.oii.ox.ac.uk/should-we-use-old-or-new-rules-to-regulate-warfare-in-the-information-age/ https://ensr.oii.ox.ac.uk/should-we-use-old-or-new-rules-to-regulate-warfare-in-the-information-age/#comments Mon, 09 Mar 2015 12:43:21 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3171
Critical infrastructures such as electric power grids are susceptible to cyberwarfare, leading to economic disruption in the event of massive power outages. Image courtesy of Pacific Northwest National Laboratory.

Before the pervasive dissemination of Information and Communication Technologies (ICTs), the use of information in waging war referred to intelligence gathering and propaganda. In the age of the information revolution things have radically changed. Information has now acquired a pivotal role in contemporary warfare, for it has become both an effective target and a viable means. These days, we use ‘cyber warfare’ to refer to the use of ICTs by state actors for disruptive (or even destructive) ends.

As contemporary societies grow increasingly dependent on ICTs, any form of attack that involves their informational infrastructures poses serious risks and raises the need for adequate defence and regulatory measures. However, such a need contrasts with the novelty of this phenomenon, with cyber warfare posing a radical shift in the paradigm within which warfare has been conceived so far. In the new paradigm, impairment of functionality, disruption, and reversible damage substitute for bloodshed, destruction, and casualties. At the same time, an intangible environment (the cyber sphere), intangible targets, and intangible agents substitute for flesh-and-blood beings, firearms, and physical targets (at least in the non-kinetic instances of cyber warfare).

The paradigm shift raises questions about the adequacy and efficacy of existing laws and ethical theories for the regulation of cyber warfare. Military experts, strategy planners, law- and policy-makers, philosophers, and ethicists all participate in discussions around this problem. The debate is polarised around two main approaches: (1) the analogy approach, and (2) the discontinuous approach. The former stresses that the regulatory gap concerning cyber warfare is only apparent, insofar as cyber conflicts are not radically different from other forms of conflicts. As Schmitt put it, “a thick web of international law norms suffuses cyber-space. These norms both outlaw many malevolent cyber-operations and allow states to mount robust responses.” On this view, the UN Charter, the NATO Treaty, the Geneva Conventions, the first two Additional Protocols, and the Convention restricting or prohibiting the use of certain conventional weapons are more than sufficient to regulate cyber warfare; all that is needed is an in-depth analysis of such laws and an adequate interpretation. This is the approach underpinning, for example, the so-called Tallinn Manual.

The opposite position, the discontinuous approach, stresses the novelty of cyber conflicts and maintains that existing ethical principles and laws are not adequate to regulate this phenomenon. Just War Theory is the main object of contention in this case. Those defending this approach argue that Just War Theory is not the right conceptual tool to address non-kinetic forms of warfare, for it assumes bloody and violent warfare occurring in the physical domain. This view sees cyber warfare as one of the most compelling signs of the information revolution — as Luciano Floridi has put it “those who live by the digit, die by the digit”. As such, it claims that any successful attempt to regulate cyber warfare cannot ignore the conceptual and ethical changes that such a revolution has brought about.

These two approaches have proceeded in parallel over the last decade, stalling rather than fostering a fruitful debate. There is therefore a clear need to establish a coordinated interdisciplinary approach that allows experts with different backgrounds to collaborate and find a common ground to overcome the polarisation of the discussion. This is precisely the goal of a project financed by the NATO Cooperative Cyber Defence Centre of Excellence (NATO CCD COE), which I co-led with Lt Glorioso, a representative of the Centre. The project has convened a series of workshops gathering international experts in the fields of law, military strategy, philosophy, and ethics to discuss the ethical and regulatory problems posed by cyber warfare.

The first workshop was held in 2013 at the Centro Alti Studi Difesa in Rome and had the goal of launching an interdisciplinary and coordinated approach to the problems posed by cyber warfare. The second event was hosted last November at Magdalen College, Oxford. It relied on the approach established in 2013 to foster an interdisciplinary discussion on issues concerning attribution, the principle of proportionality, the distinction between combatant and non-combatant, and that between pre-emption and prevention. A report on the workshop has now been published, surveying the main positions and the key discussion points that emerged during the meeting.

One of the most relevant points concerned the risks that cyber warfare poses to the established political equilibrium and the maintenance of peace. The risk of escalation, both in the nature and in the number of conflicts, was perceived as realistic by both the speakers and the audience attending the workshop. Deterrence therefore emerged as one of the most pressing challenges posed by cyber warfare – and one that experts need to take into account in their efforts to develop new forms of regulation in support of peace and stability in the information age.

Read the full report: Corinne J.N. Cath, Ludovica Glorioso, Maria Rosaria Taddeo (2015) Ethics and Policies for Cyber Warfare [PDF, 400kb]. Report on the NATO CCD COE Workshop on ‘Ethics and Policies for Cyber Warfare’, Magdalen College, Oxford, 11-12 November 2014.


Dr Mariarosaria Taddeo is a researcher at the Oxford Internet Institute, University of Oxford. Her main research areas are information and computer ethics, philosophy of information, philosophy of technology, ethics of cyber-conflict and cyber-security, and applied ethics. She also serves as president of the International Association for Computing and Philosophy.

]]>
https://ensr.oii.ox.ac.uk/should-we-use-old-or-new-rules-to-regulate-warfare-in-the-information-age/feed/ 1
Does a market-approach to online privacy protection result in better protection for users? https://ensr.oii.ox.ac.uk/does-a-market-approach-to-online-privacy-protection-result-in-better-protection-for-users/ https://ensr.oii.ox.ac.uk/does-a-market-approach-to-online-privacy-protection-result-in-better-protection-for-users/#comments Wed, 25 Feb 2015 11:21:42 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3123 Ed: You examined the voluntary provision by commercial sites of information privacy protection and control under the self-regulatory policy of the U.S. Federal Trade Commission (FTC). In brief, what did you find?

Yong Jin: First, because we rely on the Internet to perform almost all types of transactions, how personal privacy is protected is perhaps one of the most important issues we face in this digital age. There are many important findings: the most significant one is that the more popular sites did not necessarily provide better privacy control features for users than sites that were randomly selected. This is surprising because one might expect “the more popular, the better privacy protection” — a sort of marketplace magic that automatically solves the issue of personal privacy online. This was not the case at all, because the popular sites with more resources did not provide better privacy protection. Of course, the Internet in general is a malleable medium. This means that commercial sites can design, modify, or easily manipulate user interfaces to maximize the ease with which users can protect their personal privacy. The fact that this is not really happening for commercial websites in the U.S. is not only alarming, but also suggests that commercial forces may not have a strong incentive to provide privacy protection.

Ed: Your sample included websites oriented toward young users and sensitive data relating to health and finance: what did you find for them?

Yong Jin: Because the sample size for these websites was limited, caution is needed in interpreting the results. But what is clear is that websites dealing with health or financial data did not seem to be any better at providing privacy protection. To me, this should raise enormous concerns for those who use the Internet to seek health information or manage financial data. The finding should also inform and urge policymakers to ask whether the current non-intervention policy (regarding commercial websites in the U.S.) is effective, when no consideration is given to the different privacy needs of different commercial sectors.

Ed: How do your findings compare with the first investigation into these matters by the FTC in 1998?

Yong Jin: This is a very interesting question. In fact, at least as far as the findings from this study are concerned, it seems that no clear improvement has been made in almost two decades. Of course, the picture is somewhat complicated. On the one hand, we see (on the surface) that websites have a lot more interactive features. But this does not necessarily mean improvement, because when it comes to actually informing users of what features are available for their privacy control and protection, they still tend to perform poorly. Note that today’s privacy policies are longer and are likely to carry more pages and information, which makes it even more difficult for users to understand what options they do have. I think informing people about what they can actually do is harder, but is getting more important in today’s online environment.

Ed: Is this just another example of a US market-led vs European regulation-led approach to a particular problem? Or is the situation more complicated?

Yong Jin: The answer is yes and no. Yes, it is, because a US market-led approach clearly presents no strong statutory ground to mandate privacy protection in commercial websites. However, the answer is also no: even in the EU there is no regulatory mandate for websites to have certain interface protections concerning how users should get informed about their personal data, and interact with websites to control its use. The difference lies more in the fundamental principle of the “opt-in” EU approach. Although the “opt-in” is stronger than the “opt-out” approach in the U.S., it does not require websites to have certain interface-design aspects that are optimized for users’ data control. In other words, to me, the reality of the EU regulation (despite its robust policy approach) will not necessarily be rosier than the U.S., because commercial websites in the EU context also operate under the same incentive of personal data collection and use. Ultimately, this is an empirical question that will require further studies. Interestingly, the next frontier of this debate will be on privacy in mobile platforms – and useful information concerning this can be found at the OII’s project to develop ethical privacy guidelines for mobile connectivity measurements.

Ed: Awareness of issues around personal data protection is pretty prominent in Europe — witness the recent European Court of Justice ruling about the ‘Right to Forget’ — how prominent is this awareness in the States? Who’s interested in / pushing / discussing these issues?

Yong Jin: The general public in the U.S. has an enormous concern for personal data privacy, since the Edward Snowden disclosures in 2013 revealed extensive government surveillance activities. Yet my sense is that public awareness concerning data collection and surveillance by commercial companies has not yet reached the same level. Certainly, an issue such as the “Right to Forget” is being discussed among only a small circle of scholars, website operators, journalists, and policymakers, and I see the general public mostly remains left out of this discussion. In fact, a number of U.S. scholars have recently begun to weigh the pros and cons of a “Right to Forget” in terms of the public’s right to know vs the individual’s right to privacy. Given the strong tradition of freedom of speech, however, I highly doubt that U.S. policymakers will have a serious interest in pushing a similar type of approach in the foreseeable future.

My own work on privacy awareness, digital literacy, and behavior online suggests that public interest and demand for strong legislation such as a “Right to Forget” is a long shot, especially in the context of commercial websites.

Ed: Given privacy policies are notoriously awful to deal with (and are therefore generally unread) — what is the solution? You say the situation doesn’t seem to have improved in ten years, and that some aspects — such as readability of policies — might actually have become worse: is this just ‘the way things are always going to be’, or are privacy policies something that realistically can and should be addressed across the board, not just for a few sites?

Yong Jin: A great question, and I see no easy answer! I actually pondered a similar question when I conducted this study. I wonder: “Are there any viable solutions for online privacy protection when commercial websites are so desperate to use personal data?” My short answer is No. And I do think the problem will persist if the current regulatory contours in the U.S. continue. This means that there is a need for appropriate policy intervention that is not entirely dependent on market-based solutions.

My longer answer would be that realistically, to solve the notoriously difficult privacy problems on the Internet, we will need multiple approaches — which means a combination of appropriate regulatory forces by all the entities involved: regulatory mandates (government), user awareness and literacy (public), commercial firms and websites (market), and interface design (technology). For instance, it is plausible that a certain level of readability could be required of the policy statements of all websites targeting children or teenagers. Of course, this will only function alongside appropriate organizational behaviors, users’ awareness of and interest in privacy, etc. In my article I put a particular emphasis on the role of the government (particularly in the U.S.), where the industry often ‘captures’ the regulatory agencies. The issue is quite complicated because, for privacy protection, it is not just the FTC but also Congress that should act to empower the FTC in its jurisdiction. The apparent lack of improvement over the years since the FTC took over online privacy regulation in the mid 1990s reflects this gridlock in legislative dynamics — as much as it reflects the commercial imperative for personal data collection and use.

I made a similar argument for multiple approaches to solving privacy problems in my article Offline Status, Online Status: Reproduction of Social Categories in Personal Information Skill and Knowledge; related, excellent discussions can be found in Information Privacy in Cyberspace Transactions (by Jerry Kang), and Exploring Identity and Identification in Cyberspace, by Oscar Gandy.

Read the full article: Park, Y.J. (2014) A Broken System of Self-Regulation of Privacy Online? Surveillance, Control, and Limits of User Features in U.S. Websites. Policy & Internet 6 (4) 360-376.


Yong Jin Park was talking to blog editor David Sutcliffe.

Yong Jin Park is an Associate Professor at the School of Communications, Howard University. His research interests center on social and policy implications of new technologies; current projects examine various dimensions of digital privacy.

]]>
https://ensr.oii.ox.ac.uk/does-a-market-approach-to-online-privacy-protection-result-in-better-protection-for-users/feed/ 1
Will digital innovation disintermediate banking — and can regulatory frameworks keep up? https://ensr.oii.ox.ac.uk/will-digital-innovation-disintermediate-banking-and-can-regulatory-frameworks-keep-up/ Thu, 19 Feb 2015 12:11:45 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3114
Many of Europe’s economies are hampered by a waning number of innovations, partially attributable to the European financial system’s aversion to funding innovative enterprises and initiatives. Image by MPD01605.
Innovation doesn’t just fall from the sky. It’s not distributed proportionately or randomly around the world or within countries, or found disproportionately where there is the least regulation, or in exact linear correlation with the percentage of GDP spent on R&D. Innovation arises in cities and countries, and perhaps most importantly of all, in the greatest proportion in ecosystems or clusters. Many of Europe’s economies are hampered by a waning number of innovations, partially attributable to the European financial system’s aversion to funding innovative enterprises and initiatives. Specifically, Europe’s innovation finance ecosystem lacks the necessary scale, plurality, and appetite for risk to drive investments in long-term initiatives aiming to produce a disruptive new technology. Such long-term investments are taking place more in the rising economies of Asia than in Europe.

While these problems could be addressed by new approaches and technologies for financing dynamism in Europe’s economies, financing of (potentially risky) innovation could also be held back by financial regulation that focuses on stability, avoiding forum shopping (i.e., looking for the most permissive regulatory environment), and preventing fraud, to the exclusion of other interests, particularly innovation and renewal. But the role of finance in enabling the development and implementation of new ideas is vital — an economy’s dynamism depends on innovative competitors challenging, and if successful, replacing complacent players in the markets.

However, newcomers obviously need capital to grow. As a reaction to the markets having priced risk too low before the financial crisis, risk is now being priced too high in Europe, starving innovation efforts of private financing at a time when much public funding has suffered from austerity measures. Of course, complementary (non-bank) sources of finance can also help fund entrepreneurship; without this fuel, the engine of the new technology economy will likely stall.

The Internet has made it possible to fund innovation in new ways such as crowdfunding — an innovation in finance itself — and there is no reason to think that financial institutions should be immune to disruptive innovation produced by new entrants that offer completely novel ways of saving, insuring, loaning, transferring and investing money. New approaches such as crowdfunding and other financial technology (aka “FinTech”) initiatives could provide depth and a plurality of perspectives, in order to foster innovation in financial services and in the European economy as a whole.

The time has come to integrate these financial technologies into the overall financial frameworks in a manner that does not neuter their creativity, or lower their potential to revitalize the economy. There are potential synergies with macro-prudential policies focused on mitigating systemic risk and ensuring the stability of financial systems. These platforms have great potential for cross-border lending and investment and could help to remedy the retreat of bank capital behind national borders since the financial crisis. It is time for a new perspective grounded in an “innovation-friendly” philosophy and regulatory approach to emerge.

Crowdfunding is a newcomer to the financial industry, and as such, actions (such as complex and burdensome regulatory frameworks or high levels of guaranteed compensation for losses) that could close it down or raise high barriers of entry should be avoided. Competition in the interests of the consumer and of entrepreneurs looking for funding should be encouraged. Regulators should be ready to step in if abuses do, or threaten to, arise while leaving space for new ideas around crowdfunding to gain traction rapidly, without being overburdened by regulatory requirements at an early stage.

The interests of both “financing innovation” and “innovation in the financial sector” also coincide in the FinTech entrepreneurial community. Schumpeter wrote in 1942: “[the] process of Creative Destruction is the essential fact about capitalism. It is what capitalism consists in and what every capitalist concern has got to live in.” An economy’s dynamism depends on innovative competitors challenging, and if successful, taking the place of complacent players in the markets. Keeping with the theme of Schumpeterian creative destruction, the financial sector is one seen by banking sector analysts and commentators as being particularly ripe for disruptive innovation, given its current profits and lax competition. Technology-driven disintermediation of many financial services is on the cards, for example, in financial advice, lending, investing, trading, virtual currencies and risk management.

The regulatory dialogues that the UK’s Financial Conduct Authority (FCA) holds with FinTech developers to provide legal clarity on the status of their new initiatives are an example of good practice, as regulation in this highly monitored sector is potentially a serious barrier to entry and new innovation. The FCA also proactively addresses enabling innovation with Project Innovate, an initiative to assist both start-ups and established businesses in implementing innovative ideas in the financial services markets through an Incubator and Innovation Hub.

By its nature, FinTech is a sector that can both benefit from and contribute to the EU’s Digital Single Market, and make Europe a sectoral global leader in this field. In evaluating possible future FinTech regulation, we need to ensure an optimal regulatory framework and specific rules. The innovation principle I discuss in my article should be part of an approach ensuring not only that regulation is clear and proportional — so that innovators can easily comply — but also ensuring that we are ready, when justified, to adapt regulation to enable innovations. Furthermore, any regulatory approaches should be “future-proofed” and should not lock in today’s existing technologies, business models or processes.

Read the full article: Zilgalvis, P. (2014) The Need for an Innovation Principle in Regulatory Impact Assessment: The Case of Finance and Innovation in Europe. Policy and Internet 6 (4) 377–392.


Pēteris Zilgalvis, J.D. is a Senior Member of St Antony’s College, University of Oxford, and an Associate of its Political Economy of Financial Markets Programme. In 2013-14 he was a Senior EU Fellow at St Antony’s. He is also currently Head of Unit for eHealth and Well Being, DG CONNECT, European Commission.

]]>
Designing Internet technologies for the public good https://ensr.oii.ox.ac.uk/designing-internet-technologies-for-the-public-good/ https://ensr.oii.ox.ac.uk/designing-internet-technologies-for-the-public-good/#comments Wed, 08 Oct 2014 11:48:59 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2887
MEPs failed to support a Green call to protect Edward Snowden as a whistleblower, in order to allow him to give his testimony to the European Parliament in March. Image by greensefa.
Computers have developed enormously since the Second World War: alongside a rough doubling of computer power every two years, communications bandwidth and storage capacity have grown just as quickly. Computers can now store much more personal data, process it much faster, and rapidly share it across networks.

Data is collected about us as we interact with digital technology, directly and via organisations. Many people volunteer data to social networking sites, and sensors – in smartphones, CCTV cameras, and “Internet of Things” objects – are making the physical world as trackable as the virtual. People are very often unaware of how much data is gathered about them – let alone the purposes for which it can be used. Also, most privacy risks are highly probabilistic, cumulative, and difficult to calculate. A student sharing a photo today might not be thinking about a future interview panel; or that the heart rate data shared from a fitness gadget might affect future decisions by insurance and financial services (Brown 2014).

Rather than organisations waiting for something to go wrong, then spending large amounts of time and money trying (and often failing) to fix privacy problems, computer scientists have been developing methods for designing privacy directly into new technologies and systems (Spiekermann and Cranor 2009). One of the most important principles is data minimization; that is, limiting the collection of personal data to that needed to provide a service – rather than storing everything that can be conveniently retrieved. This limits the impact of data losses and breaches, for example by corrupt staff with authorised access to data – a practice that the UK Information Commissioner’s Office (2006) has shown to be widespread.
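
A minimal sketch of the data minimisation principle, assuming an invented sign-up service and field names (this is illustrative, not any system described in the article): the service whitelists the fields it actually needs rather than storing whatever a client submits.

```python
# Illustrative data minimisation: keep only the fields the service actually
# needs, instead of storing the full submitted record "just in case".
REQUIRED_FIELDS = {"email", "delivery_postcode"}   # hypothetical service needs

def minimise(record: dict) -> dict:
    """Drop everything not on the whitelist before the record is stored."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

submitted = {
    "email": "alice@example.org",
    "delivery_postcode": "OX1 1AA",
    "date_of_birth": "1990-01-01",   # not needed to provide the service
    "browsing_history": ["..."],     # conveniently available, but not needed
}

stored = minimise(submitted)
print(stored)   # {'email': 'alice@example.org', 'delivery_postcode': 'OX1 1AA'}
```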

Privacy by design also protects against function creep (Gürses et al. 2011). When an organisation invests significant resources to collect personal data for one reason, it can be very tempting to use it for other purposes. While this is limited in the EU by data protection law, government agencies are in a good position to push for changes to national laws if they wish, bypassing such “purpose limitations”. Nor do these rules tend to apply to intelligence agencies.

Another key aspect of putting users in control of their personal data is making sure they know what data is being collected, how it is being used – and ideally being asked for their consent. There have been some interesting experiments with privacy interfaces, for example helping smartphone users understand who is asking for their location data, and what data has been recently shared with whom.

Smartphones have enough storage and computing capacity to do some tasks, such as showing users adverts relevant to their known interests, without sharing any personal data with third parties such as advertisers. This kind of user-controlled data storage and processing has all kinds of applications – for example, with smart electricity meters (Danezis et al. 2013), and congestion charging for roads (Balasch et al. 2010).
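
A rough sketch of this kind of on-device processing, with an invented interest profile and advert catalogue (not the design of any particular system): the profile stays on the handset, the catalogue is fetched generically, and only the locally chosen advert is displayed, so no personal data reaches the advertiser.

```python
# Sketch of on-device ad selection: the interest profile never leaves the phone.
user_interests = {"cycling": 0.9, "cooking": 0.4, "finance": 0.1}  # stored locally

# Generic advert catalogue fetched from the network, identical for every user.
ad_catalogue = [
    {"id": "ad-001", "topics": {"cycling"}},
    {"id": "ad-002", "topics": {"finance", "investing"}},
    {"id": "ad-003", "topics": {"cooking"}},
]

def choose_ad_locally(interests, catalogue):
    """Score each advert against the local profile and pick the best match."""
    def score(ad):
        return sum(interests.get(topic, 0.0) for topic in ad["topics"])
    return max(catalogue, key=score)

print(choose_ad_locally(user_interests, ad_catalogue)["id"])  # ad-001
```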

What broader lessons can be drawn about shaping technologies for the public good? What is the public good, and who gets to define it? One option is to look at opinion polling about public concerns and values over long periods of time. The European Commission’s Eurobarometer polls reveal that in most European countries (including the UK), people have had significant concerns about data privacy for decades.

A more fundamental view of core social values can be found at the national level in constitutions, and between nations in human rights treaties. As well as the protection of private life and correspondence in the European Convention on Human Rights’ Article 8, the freedom of thought, expression, association and assembly rights in Articles 9-11 (and their equivalents in the US Bill of Rights, and the International Covenant on Civil and Political Rights) are also relevant.

This national and international law restricts how states use technology to infringe human rights – even for national security purposes. There are several US legal challenges to the constitutionality of NSA communications surveillance, with a federal court in Washington DC finding that bulk access to phone records is against the Fourth Amendment [1] (but another court in New York finding the opposite [2]). The UK campaign groups Big Brother Watch, Open Rights Group, and English PEN have taken a case to the European Court of Human Rights, arguing that UK law in this regard is incompatible with the Human Rights Convention.

Can technology development be shaped more broadly to reflect such constitutional values? One of the best-known attempts is the European Union’s data protection framework. Privacy is a core European political value, not least because of the horrors of the Nazi and Communist regimes of the 20th century. Germany, France and Sweden all developed data protection laws in the 1970s in response to the development of automated systems for processing personal data, followed by most other European countries. The EU’s Data Protection Directive (95/46/EC) harmonises these laws, and has provisions that encourage organisations to use technical measures to protect personal data.

An update of this Directive, which the European Parliament has been debating over the last year, more explicitly includes this type of regulation by technology. Under this General Data Protection Regulation, organisations that process personal data will have to implement appropriate technical measures to protect the rights it sets out. By default, organisations should only collect the minimum personal data they need, and allow individuals to control the distribution of their personal data. The Regulation would also require companies to make it easier for users to download all of their data, so that it could be uploaded to a competitor service (for example, one with better data protection) – bringing market pressure to bear (Brown and Marsden 2013).

This type of technology regulation is not uncontroversial. The European Commissioner responsible until July for the Data Protection Regulation, Viviane Reding, said that she had seen unprecedented and “absolutely fierce” lobbying against some of its provisions. Legislators would clearly be foolish to try and micro-manage the development of new technology. But the EU’s principles-based approach to privacy has been internationally influential, with over 100 countries now having adopted the Data Protection Directive or similar laws (Greenleaf 2014).

If the EU can find the right balance in its Regulation, it has the opportunity to set the new global standard for privacy-protective technologies – a very significant opportunity indeed in the global marketplace.

[1] Klayman v. Obama, 2013 WL 6571596 (D.D.C. 2013)

[2] ACLU v. Clapper, No. 13-3994 (S.D. New York December 28, 2013)

References

Balasch, J., Rial, A., Troncoso, C., Preneel, B., Verbauwhede, I. and Geuens, C. (2010) PrETP: Privacy-preserving electronic toll pricing. 19th USENIX Security Symposium, pp. 63–78.

Brown, I. (2014) The economics of privacy, data protection and surveillance. In J.M. Bauer and M. Latzer (eds.) Research Handbook on the Economics of the Internet. Cheltenham: Edward Elgar.

Brown, I. and Marsden, C. (2013) Regulating Code: Good Governance and Better Regulation in the Information Age. Cambridge, MA: MIT Press.

Danezis, G., Fournet, C., Kohlweiss, M. and Zanella-Beguelin, S. (2013) Smart Meter Aggregation via Secret-Sharing. ACM Smart Energy Grid Security Workshop.

Greenleaf, G. (2014) Sheherezade and the 101 data privacy laws: Origins, significance and global trajectories. Journal of Law, Information & Science.

Gürses, S., Troncoso, C. and Diaz, C. (2011) Engineering Privacy by Design. Computers, Privacy & Data Protection.

Haddadi, H, Hui, P., Henderson, T. and Brown, I. (2011) Targeted Advertising on the Handset: Privacy and Security Challenges. In Müller, J., Alt, F., Michelis, D. (eds) Pervasive Advertising. Heidelberg: Springer, pp. 119-137.

Information Commissioner’s Office (2006) What price privacy? HC 1056.

Spiekermann, S. and Cranor, L.F. (2009) Engineering Privacy. IEEE Transactions on Software Engineering 35 (1).


Read the full article: Keeping our secrets? Designing Internet technologies for the public good, European Human Rights Law Review 4: 369-377. This article is adapted from Ian Brown’s 2014 Oxford London Lecture, given at Church House, Westminster, on 18 March 2014, supported by Oxford University’s Romanes fund.

Professor Ian Brown is Associate Director of Oxford University’s Cyber Security Centre and Senior Research Fellow at the Oxford Internet Institute. His research is focused on information security, privacy-enhancing technologies, and Internet regulation.

]]>
Monitoring Internet openness and rights: report from the Citizen Lab Summer Institute 2014 https://ensr.oii.ox.ac.uk/monitoring-internet-openness-and-rights-report-from-citizen-lab-summer-institute/ Tue, 12 Aug 2014 11:44:58 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2916
Jon Penney presenting on the US experience of Internet-related corporate transparency reporting.

根据相关法律法规和政策,部分搜索结果未予显示 (or, more likely, translations thereof) could become a warning message we see displayed more often on the Internet. In Chinese, it means “according to the relevant laws, regulations, and policies, a portion of search results have not been displayed.” The control of information flows on the Internet is becoming more commonplace, in authoritarian regimes as well as in liberal democracies, via either technical or regulatory means. Such information controls can be defined as “[…] actions conducted in or through information and communications technologies (ICTs), which seek to deny (such as web filtering), disrupt (such as denial-of-service attacks), shape (such as throttling), secure (such as through encryption or circumvention) or monitor (such as passive or targeted surveillance) information for political ends. Information controls can also be non-technical and can be implemented through legal and regulatory frameworks, including informal pressures placed on private companies. […]” Information controls are not intrinsically good or bad, but much remains to be explored and analysed about their use for political or commercial purposes.

The University of Toronto’s Citizen Lab organised a one-week summer institute titled “Monitoring Internet Openness and Rights” to inform the global discussions on information control research and practice in the fields of censorship, circumvention, surveillance and adherence to human rights. A week full of presentations and workshops on the intersection of technical tools, social science research, ethical and legal reflections and policy implications was attended by a distinguished group of about 60 community members, amongst whom were two OII DPhil students: Jon Penney and Ben Zevenbergen. Conducting Internet measurements may be considered terra incognita in terms of methodology and data collection, but the relevance and impacts for Internet policy-making, geopolitics or network management are obvious and undisputed.

The Citizen Lab prides itself on being a “hacker hothouse”, or an “intelligence agency for civil society”, where security expertise, politics, and ethics intersect. Their research adds the much-needed geopolitical angle to the deeply technical and quantitative Internet measurements they conduct on information networks worldwide. While the Internet is fast becoming the backbone of our modern societies in many positive and welcome ways, abundant (intentional) security vulnerabilities, the ease with which human rights such as privacy and freedom of speech can be violated, threats to the neutrality of the network, and the extent of mass surveillance all threaten to compromise the potential of our global information sphere. Threats to a free and open Internet need to be uncovered and explained to policymakers in order to encourage informed, evidence-based policy decisions, especially at a time when the underlying technology is not well understood by decision makers.

Participants at the summer institute came with the intent to make sense of Internet measurements and information controls, as well as their social, political and ethical impacts. Through discussions in larger and smaller groups throughout the Munk School of Global Affairs – as well as restaurants and bars around Toronto – the current state of the information controls, their regulation and deployment became clear, and multi-disciplinary projects to measure breaches of human rights on the Internet or its fundamental principles were devised and coordinated.

The outcomes of the week in Toronto are impressive. The OII DPhil students presented their recent work on transparency reporting and ethical data collection in Internet measurement.

Jon Penney gave a talk on “the United States experience” with Internet-related corporate transparency reporting, that is, the evolution of existing American corporate practices in publishing “transparency reports” about the nature and quantity of government and law enforcement requests for Internet user data or content removal. Jon first began working on transparency issues as a Google Policy Fellow with the Citizen Lab in 2011, and his work has continued during his time at Harvard’s Berkman Center for Internet and Society. In this talk, Jon argued that in the U.S., corporate transparency reporting largely began with the leadership of Google and a few other Silicon Valley tech companies like Twitter, but in the post-Snowden era has been adopted by a wider cross-section of not only technology companies, but also established telecommunications companies like Verizon and AT&T that were previously resistant to greater transparency in this space (perhaps due to closer, longer-term relationships with federal agencies than Silicon Valley companies). Jon also canvassed evolving legal and regulatory challenges facing U.S. transparency reporting, and the means by which companies may provide some measure of transparency — via tools like warrant canaries — in the face of increasingly complex national security laws.

Ben Zevenbergen has recently launched ethical guidelines for the protection of privacy with regard to Internet measurements conducted via mobile phones. The first panel of the week, on “Network Measurement and Information Controls”, called explicitly for more concrete ethical and legal guidelines for Internet measurement projects, because the extent of data collection necessarily entails that much personal data is collected and analyzed. In the second panel, on “Mobile Security and Privacy”, Ben explained how his guidelines form a privacy impact assessment for a privacy-by-design approach to mobile network measurements. The iterative process of designing a research project in close cooperation with colleagues, possibly from different disciplines, ensures that privacy is taken into account at all stages of the project development. His talk led to two connected and well-attended sessions during the week to discuss the ethics of information controls research and Internet measurements. A mailing list has been set up for engineers, programmers, activists, lawyers and ethicists to discuss the ethical and legal aspects of Internet measurements. Data collection has begun to create a taxonomy of ethical issues in the discipline to inform forthcoming peer-reviewed papers.

The Citizen Lab will host its final summer institute of the series in 2015.

Ben Zevenbergen discusses ethical guidelines for Internet measurements conducted via mobile phones.

Photo credits: Ben Zevenbergen, Jon Penney. Writing credits: Ben Zevenbergen, with a small contribution from Jon Penney.

Ben Zevenbergen is an OII DPhil student and Research Assistant working on the EU Internet Science project. He has worked on legal, political and policy aspects of the information society for several years. Most recently he was a policy advisor to an MEP in the European Parliament, working on Europe’s Digital Agenda.

Jon Penney is a legal academic, doctoral student at the Oxford Internet Institute, and a Research Fellow / Affiliate of both the Citizen Lab (an interdisciplinary research lab specializing in digital media, cyber-security, and human rights at the University of Toronto’s Munk School of Global Affairs) and the Berkman Center for Internet & Society at Harvard University.

]]>
Evidence on the extent of harms experienced by children as a result of online risks: implications for policy and research https://ensr.oii.ox.ac.uk/evidence-on-the-extent-of-harms-experienced-by-children-as-a-result-of-online-risks-implications-for-policy-and-research/ https://ensr.oii.ox.ac.uk/evidence-on-the-extent-of-harms-experienced-by-children-as-a-result-of-online-risks-implications-for-policy-and-research/#comments Tue, 29 Jul 2014 10:47:28 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2847
The range of academic literature analysing the risks and opportunities of Internet use for children has grown substantially in the past decade, but there’s still surprisingly little empirical evidence on how perceived risks translate into actual harms. Image by Brad Flickinger
Child Internet safety is a topic that continues to gain a great deal of media coverage and policy attention. Recent UK policy initiatives such as Active Choice Plus, in which major UK broadband providers agreed to provide household-level filtering options, or the industry-led Internet Matters portal, reflect a public concern with the potential risks and harms of children’s Internet use. At the same time, the range of academic literature analysing the risks and opportunities of Internet use for children has grown substantially in the past decade, in large part due to the extensive international studies funded by the European Commission as part of the excellent EU Kids Online network. Whilst this has greatly helped us understand how children behave online, there’s still surprisingly little empirical evidence on how perceived risks translate into actual harms. This is problematic, first, because risks can only be identified if we understand what types of harms we wish to avoid, and second, because if we only undertake research on the nature or extent of risk, then it’s difficult to learn anything useful about who is harmed, and what this means for their lives.

Of course, the focus on risk rather than harm is understandable from an ethical and methodological perspective. It wouldn’t be ethical, for example, to conduct a trial in which one group of children was deliberately exposed to very violent or sexual content to observe whether any harms resulted. Similarly, surveys can ask respondents to self-report harms experienced online, perhaps through the lens of upsetting images or experiences. But again, there are ethical concerns about adding to children’s distress by questioning them extensively on difficult experiences, and in a survey context it’s also difficult to avoid imposing adult conceptions of ‘harm’ through the wording of the questions.

Despite these difficulties, there are many research projects that aim to measure and understand the relationship between various types of physical, emotional or psychological harm and activities online, albeit often outside the social sciences. With support from the OUP Fell Fund, I worked with colleagues Vera Slavtcheva-Petkova and Monica Bulger to review the extent of evidence available across these other disciplines. Looking at journal articles published between 1997 and 2012, we aimed to identify any empirical evidence detailing Internet-related harms experienced by children and adolescents and to gain a sense of the types of harm recorded, their severity and frequency.

Our findings demonstrate that there are many good studies out there which do address questions of harm, rather than just risk. The narrowly drawn search found 148 empirical studies which either clearly delineated evidence of very specific harms, or offered some evidence of less well-defined harms. Further, these studies offer rich insights into three broad types of harm: health-related (including harms relating to the exacerbation of eating disorders, self-harming behaviour and suicide attempts); sex-related (largely focused on studies of online solicitation and child abuse); and bullying-related (including the effects on mental health and behaviour). Such a range of coverage would come as no surprise to most researchers focusing on children’s Internet use – these are generally well-documented areas, albeit with the focus more normally on risk rather than harm. Perhaps more surprising was the absence in our search of evidence of harm in relation to privacy violations or economic well-being, both of which are increasingly discussed as significant concerns or risks for minors using the Internet. This gap might have been a factor of our search terms, of course, but given the policy relevance of both issues, more empirical study of not just risk but actual harm would seem to be merited in these areas.

Another important gap concerned the absence of evidence that severe harms often befall those without prior vulnerability or risky behaviour. For example, in relation to websites promoting self-harm or eating disorders, there is little evidence that young people previously unaffected by self-harm or eating disorders are influenced by these websites. This isn’t unexpected – other researchers have shown that harm more often befalls those who display riskier behaviour – but it is important to bear in mind when devising treatment or policy strategies for reducing such harms.

It’s also worth noting how difficult it is to determine the prevalence of harms. The best-documented cases are often those where medical, police or court records provide great depth of qualitative detail about individual suffering in cases of online grooming and abuse, eating disorders or self-harm. Yet these cases provide little insight into prevalence. And whilst survey research offers more sense of scale, we found substantial disparities in the levels of harm reported on some issues, with the prevalence of cyber-bullying, for example, varying from 9% to 72% across studies with similar age groups of children. It’s also clear that we quite simply need much more research and policy attention on certain issues. The studies relating to the online grooming of children and production of abuse images are an excellent example of how a broad research base can make an important contribution to our understanding of online risks and harms. Here, journal articles offered a remarkably rich understanding, drawing on data from police reports, court records or clinical files as well as surveys and interviews with victims, perpetrators and carers. There would be real benefits to taking a similarly thorough approach to the study of users of pro-eating disorder, self-harm and pro-suicide websites.

Our review flagged up some important lessons for policy-makers. First, whilst we (justifiably) devote a wealth of resources to the small proportion of children experiencing severe harms as a result of online experiences, the number of those experiencing more minor harms, such as those caused by online bullying, is likely much higher and may thus deserve more attention than it currently receives. Second, the diversity of topics discussed and types of harm identified seems to suggest that a one-size-fits-all solution will not work when it comes to the online protection of minors. Simply banning or filtering all potentially harmful websites, pages or groups might be more damaging than useful if it drives users to less public means of communicating. Further, whilst some content, such as child sexual abuse images, is clearly illegal and generates great harm, other content and sites are less easy to condemn where the balance between perpetuating harmful behaviour and providing valued peer support is hard to call. It should also be remembered that the need to protect young people from online harms must always be balanced against the need to protect their rights (and opportunities) to freely express themselves and seek information online.

Finally, this study makes an important contribution to public debates about child online safety by reminding us that risk and harm are not equivalent and should not be conflated. More children and young people are exposed to online risks than are actually harmed as a result, and our policy responses should reflect this.

A more detailed account of our findings can be found in this Information, Communication and Society journal article: Evidence on the extent of harms experienced by children as a result of online risks: implications for policy and research. If you can’t access this, please e-mail me for a copy.


Victoria Nash is a Policy and Research Fellow at the Oxford Internet Institute (OII), responsible for connecting OII research with policy and practice. Her own particular research interests draw on her background as a political theorist, and concern the theoretical and practical application of fundamental liberal values in the Internet era. Recent projects have included efforts to map the legal and regulatory trends shaping freedom of expression online for UNESCO, analysis of age verification as a tool to protect and empower children online, and the role of information and Internet access in the development of moral autonomy.

]]>
The challenges of government use of cloud services for public service delivery https://ensr.oii.ox.ac.uk/challenges-government-use-cloud-services-public-service-delivery/ Mon, 24 Feb 2014 08:50:15 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2584
Cloud services are not meant to recognize national frontiers, but to thrive on economies of scale and scope globally — presenting particular challenges to government. Image by NASA Goddard Photo and Video

Ed: You open your recent Policy and Internet article by noting that “the modern treasury of public institutions is where the wealth of public information is stored and processed” … what are the challenges of government use of cloud services?

Kristina: The public sector is a very large user of information technology but data handling policies, vendor accreditation and procurement often predate the era of cloud computing. Governments first have to put in place new internal policies to ensure the security and integrity of their information assets residing in the cloud. Through this process governments are discovering that their traditional notions of control are challenged because cloud services are virtual, dynamic, and operate across borders.

One central concern of those governments that are leading in the public sector’s migration to cloud computing is how to retain unconditional sovereignty over their data — after all, public sector information embodies the past, the present, and the future of a country. The ability to govern presupposes command and control over government information to the extent necessary to deliver public services, protect citizens’ personal data and to ensure the integrity of the state, among other considerations. One could even assert that in today’s interconnected world national sovereignty is conditional upon adequate data sovereignty.

Ed: A basic question: if a country’s health records (in the cloud) temporarily reside on / are processed on commercial servers in a different country: who is liable for the integrity and protection of that data, and under whose legal scheme? I.e. can a country actually, technically, lose sovereignty over its data?

Kristina: There is always one line of responsibility flowing from the contract with the cloud service provider. However, when these health records cross borders they are effectively governed under a third country’s jurisdiction, where disclosure authorities vis-à-vis the cloud service provider can likely be invoked. In some situations the geographical whereabouts of the public health records are not even that important, because certain countries’ legislation has extra-territorial reach and it suffices that the cloud service provider is under an obligation to turn over data in its custody. In both situations countries’ exclusive sovereignty over public sector information would be contested. And service providers may find themselves in a Catch-22 when they have to decide their legitimate course of action.

Ed: Is there a sense of how many government services are currently hosted “in the cloud”; and have there been any known problems so far about access and jurisdiction?

Kristina: The US has published some targets, but otherwise we have no sense of the magnitude of government cloud computing. It is certainly an ever-growing phenomenon in leading countries; for example, both the US Federal Cloud Computing Strategy and the United Kingdom’s G-Cloud Framework leverage public sector cloud migration with a cloud-first strategy, and they operate government application stores where public authorities can self-provision themselves with cloud-based IT services. Until now, the issues of access and jurisdiction have primarily been discussed in terms of risk (as I showed in my article), with governments adopting strategies to keep their public records within national territory, even if they are residing on a cloud service.

Ed: Is there anything about the cloud that is actually functionally novel; ie that calls for new regulation at national or international level, beyond existing data legislation?

Kristina: Cloud services are not meant to recognize national frontiers, but to thrive on economies of scale and scope globally. The legal risks arising from this transnationality won’t be solved by more legislation at the national level; even if it is a pragmatic solution, the resurrection of territoriality in cloud service contracts with the government conflicts with scalability. My article explores various avenues at the international level, for example extending diplomatic immunity, international agreements for cross-border data transfers, and reliance on mutual legal assistance treaties, but in my opinion they do not satisfactorily resolve a country’s quest for data sovereignty in the cloud context. In the EU a regional approach could be feasible, and I am very much drawn to the idea of a European cloud environment where common information assurance principles prevail — also curtailing individual member states’ disclosure authorities.

Ed: As the economies of scale of cloud services kick in, do you think we will see increasing commercialisation of public record storing and processing (with a possible further erosion of national sovereignty)?

Kristina: Where governments have the capability, they adopt a differentiated, risk-based approach corresponding to the information’s security classification: data in the public domain, or that have low security markings, are suitable for cloud services without further restrictions. Data that have medium security markings may still be processed on cloud services, but are often confined to the national territory. Beyond this threshold, i.e. for sensitive and classified information, cloud services are not an option, judging from analysis of the emerging practice in the U.S., the UK, Canada and Australia. What we will increasingly see is IT outsourcing that is labelled “cloud” despite not meeting the specifications of a true cloud service. Some governments are more inclined to introduce dedicated private “clouds” that are not fully scalable, in other words central data centres. For a vast number of countries, including developing ones, the options are further limited because there is no local cloud infrastructure and/or the public sector cannot afford to contract a dedicated government cloud. In this situation I could imagine an increasing reliance on transnational cloud services, with all the attendant pros and cons.
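
A caricature of this differentiated, risk-based approach in a few lines of code; the classification labels and hosting rules below are illustrative assumptions, not any government's actual policy.

```python
# Illustrative mapping from security classification to permissible hosting,
# following the differentiated approach described above (not an actual policy).
def permitted_hosting(classification: str) -> str:
    rules = {
        "public":     "any cloud service, no territorial restriction",
        "low":        "any cloud service, no territorial restriction",
        "medium":     "cloud service confined to national territory",
        "sensitive":  "no commercial cloud; dedicated government data centre",
        "classified": "no commercial cloud; dedicated government data centre",
    }
    # Unknown labels default to the most restrictive treatment.
    return rules.get(classification, "no commercial cloud; dedicated government data centre")

for level in ("public", "medium", "classified"):
    print(level, "->", permitted_hosting(level))
```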

Ed: How do these sovereignty / jurisdiction / data protection questions relate to the revelations around the NSA’s PRISM surveillance programme?

Kristina: It only confirms that disclosure authorities are extensively used for intelligence gathering and that legal risks have to be taken as seriously as technical vulnerabilities. As a consequence of the Snowden revelations it is quite likely that the sensitivity of governments (as well as private sector organizations) to the impact of foreign jurisdictions will become even more pronounced. For example, there are reports estimating that the lack of trust in US-based cloud services is bound to affect the industry’s growth.

Ed: Could this usher in a whole new industry of ‘guaranteed’ national clouds..? ie how is the industry responding to these worries?

Kristina: This is already happening; in particular, European and Asian players are being very vocal in marketing their regional or national cloud offerings as compatible with specific jurisdictions or national data protection frameworks.

Ed: And finally, who do you think is driving the debate about sovereignty and cloud services: government or industry?

Kristina: In the Western world it is government, with its special security needs and buying power, to which industry is responsive. As a nascent technology, cloud services nonetheless thrive on business with governments, because this opens new markets in the public sector where in-house IT services previously dominated.


Read the full paper: Kristina Irion (2013) Government Cloud Computing and National Data Sovereignty. Policy and Internet 4 (3/4) 40–71.

Kristina Irion was talking to blog editor David Sutcliffe.

]]>
Exploring variation in parental concerns about online safety issues https://ensr.oii.ox.ac.uk/exploring-variation-parental-concerns-about-online-safety-issues/ Thu, 14 Nov 2013 08:29:42 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1208 Ed: You’ve spent a great deal of time studying the way that children and young people use the Internet, much of which focuses on the positive experiences that result. Why do you think this is so under-represented in public debate?

boyd / Hargittai: The public has many myths about young people’s use of technology. This is often perpetuated by media coverage that focuses on the extremes. Salacious negative headlines often capture people’s attention, even if the practices or incidents described are outliers and do not represent the majority’s experiences. While focusing on extremely negative and horrific incidents is a great way to attract attention and get readers, it does a disservice to young people, their parents, and ultimately society as a whole.

As researchers, we believe that it’s important to understand the nuances of what people experience when they engage with technology. Thus, we are interested in gaining a better understanding of their everyday practices — both the good and the bad. Our goal is to introduce research that can help contextualize socio-technical practices and provide insight into the diversity of viewpoints and perspectives that shape young people’s use of technology.

Ed: Your paper suggests we need a more granular understanding of how parental concerns relating to the Internet can vary across different groups. Why is this important? What are the main policy implications of this research?

boyd / Hargittai: Parents are often seen as the target of policy interventions. Many lawmakers imagine that they’re designing laws to help empower parents, but when you ask them to explain which parents they are empowering, it becomes clear that there’s an imagined parent that is not always representative of the diverse views and perspectives of all parents. We’re not opposed to laws that enable parents to protect their children, but we’re concerned whenever a class of people, especially a class as large as “parents,” is viewed as homogenous.

Parents have different and often conflicting views about what’s best for their children or for children writ large. This creates a significant challenge for designing interventions that are meant to be beneficial and applicable to a diverse group of people. What’s beneficial or desirable to one may not be positively received by another. More importantly, what’s helpful to one group of parents may not actually benefit parents or youth as a whole. As a result, we think it’s important to start interrogating assumptions that underpin technology policy interventions so that policymakers have a better understanding of how their decisions affect whom they’re hoping to reach.

Ed: What did your study reveal, and in particular, where do you see the greatest differences in attitudes arising? Did it reveal anything unexpected?

boyd / Hargittai: The most significant take-away from our research is that there are significant demographic differences in concerns about young people. Some of the differences are not particularly surprising. For example, parents of children who have been exposed to pornography or violent content, or who have bullied or been bullied, have greater concern that this will happen to their child. Yet, other factors may be more surprising. For example, we found significant racial and ethnic differences in how parents approach these topics. Black, Hispanic, and Asian parents are much more concerned about at least some of the online safety measures than Whites, even when controlling for socioeconomic factors and previous experiences.

While differences in cultural experiences may help explain some of these findings, our results raise serious questions as to the underlying processes and reasons for these discrepancies. Are these parents more concerned because they have a higher level of distrust for technology? Because they feel as though there are fewer societal protections for their children? Because they feel less empowered as parents? We don’t know. Still, our findings challenge policy-makers to think about the diversity of perspectives their law-making should address. And when they enact laws, they should be attentive to how those interventions are received. Just because parents of colour are more concerned does not mean that an intervention intended to empower them will do so. Like many other research projects, this study results in as many — if not more — questions than it answers.

Ed: Are parents worrying about the right things? For example, you point out that ‘stranger danger’ registers the highest level of concern from most parents, yet this is a relatively rare occurrence. Bullying is much more common, yet not such a source of concern. Do we need to do more to educate parents about risks, opportunities and coping?

boyd / Hargittai: Parental fear is a contested issue among scholars and for good reason. In many ways, it’s a philosophical issue. Should parents worry more about frequent but low-consequence issues? Or should they concern themselves more with the possibility of rare but devastating incidents? How much fear is too much fear? Fear is an understandable response to danger, but left unchecked, it can become an irrational response to perceived but unlikely risks. Fear can prevent injury, but too much fear can result in a form of protectionism that itself can be harmful. Most parents want to protect their children from harm but few think about the consequences of smothering their children in their efforts to keep them safe. All too often, in erring on the side of caution, we escalate a societal tendency to become overprotective, limiting our children’s opportunities to explore, learn, be creative and mature. Finding the right balance is very tricky.

People tend to fear things that they don’t understand. New technologies are often terrifying because they are foreign. And so parents are reasonably concerned when they see their children using tools that confound them. One of the best antidotes to fear is knowledge. Although this is outside of the scope of this paper, we strongly recommend that parents take the time to learn about the tools that their children are using, ideally by discussing them with their children. The more that parents can understand the technological choices and decisions made by their children, the more that parents can help them navigate the risks and challenges that they do face, online and off.

Ed: On the whole, it seems that parents whose children have had negative experiences online are more likely to say they are concerned, which seems highly appropriate. But we also have evidence from other studies that many parents are unaware of such experiences, and also that children who are more vulnerable offline, may be more vulnerable online too. Is there anything in your research to suggest that certain groups of parents aren’t worrying enough?

boyd / Hargittai: As researchers, we regularly use different methodologies and different analytical angles to get at various research questions. Each approach has its strengths and weaknesses, insights and blind spots. In this project, we surveyed parents, which allows us to get at their perspective, but it limits our ability to understand what they do not know or will not admit. Over the course of our careers, we’ve also surveyed and interviewed numerous youth and young adults, parents and other adults who’ve worked with youth. In particular, danah has spent a lot of time working with at-risk youth who are especially vulnerable. Unfortunately, what she’s learned in the process — and what numerous survey studies have shown — is that those who are facing some of the most negative experiences do not necessarily have positive home life experiences. Many youth face parents who are absent, addicts, or abusive; these are the youth who are most likely to be physically, psychologically, or socially harmed, online and offline.

In this study, we took parents at face value, assuming that parents are good actors with positive intentions. It is important to recognise, however, that this cannot be taken for granted. As with all studies, our findings are limited because of the methodological approach we took. We have no way of knowing whether or not these parents are paying attention, let alone whether or not their relationship to their children is unhealthy.

Although the issues of abuse and neglect are outside of the scope of this particular paper, these have significant policy implications. Empowering well-intended parents is generally a good thing, but empowering abusive parents can create unintended consequences for youth. This is an area where much more research is needed because it’s important to understand when and how empowering parents can actually put youth at risk in different ways.

Ed: What gaps remain in our understanding of parental attitudes towards online risks?

boyd / Hargittai: As noted above, our paper assumes well-intentioned parenting on behalf of caretakers. A study could explore online attitudes in the context of more information about people’s general parenting practices. Regarding our findings about attitudinal differences by race and ethnicity, much remains to be done. While existing literature alludes to some reasons as to why we might observe these variations, it would be helpful to see additional research aiming to uncover the sources of these discrepancies. It would be fruitful to gain a better understanding of what influences parental attitudes about children’s use of technology in the first place. What role do mainstream media, parents’ own experiences with technology, their personal networks, and other factors play in this process?

Another line of inquiry could explore how parental concerns influence rules aimed at children about technology uses and how such rules affect youth adoption and use of digital media. The latter is a question that Eszter is addressing in a forthcoming paper with Sabrina Connell, although that study does not include data on parental attitudes, only rules. Including details about parental concerns in future studies would allow more nuanced investigation of the above questions. Finally, much is needed to understand the impact that policy interventions in this space have on parents, youth, and communities. Even the most well-intentioned policy may inadvertently cause harm. It is important that all policy interventions are monitored and assessed as to both their efficacy and secondary effects.


Read the full paper: boyd, d., and Hargittai, E. (2013) Connected and Concerned: Exploring Variation in Parental Concerns About Online Safety Issues. Policy and Internet 5 (3).

danah boyd and Eszter Hargittai were talking to blog editor David Sutcliffe.

]]>
Ethical privacy guidelines for mobile connectivity measurements https://ensr.oii.ox.ac.uk/ethical-privacy-guidelines-for-mobile-connectivity-measurements/ Thu, 07 Nov 2013 16:01:33 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2386
Four of the 6.8 billion mobile phones worldwide. Measuring the mobile Internet can expose information about an individual’s location, contact details, and communications metadata. Image by Cocoarmani.

Ed: GCHQ / the NSA aside … Who collects mobile data and for what purpose? How can you tell if your data are being collected and passed on?

Ben: Data collected from mobile phones is used for a wide range of (divergent) purposes. First and foremost, mobile operators need information about mobile phones in real time to be able to communicate with individual mobile handsets. Apps can also collect all sorts of information, which may be necessary to provide entertainment or location-specific services, to conduct network research, or for many other reasons.

Mobile phone users usually consent to the collection of their data by clicking “I agree” or other legally relevant buttons, but this is not always the case. Sometimes data is collected lawfully without consent, for example for the provision of a mobile connectivity service. Other times it is harder to substantiate a relevant legal basis. Many applications keep track of the information that is generated by a mobile phone and it is often not possible to find out how the receiver processes this data.

Ed: How are data subjects typically recruited for a mobile research project? And how many subjects might a typical research data set contain?

Ben: This depends on the research design; some research projects provide data subjects with a specific app, which they can use to conduct measurements (so-called ‘active measurements’). Other apps collect data in the background and, in effect, conduct local surveillance of mobile phone use (so-called ‘passive measurements’). Other research uses existing datasets, for example provided by telecom operators, which will generally be de-identified in some way. We purposely do not use the term anonymisation in the report, because much research and several case studies have shown that real anonymisation is very difficult to achieve if the original raw data is collected about individuals. Datasets can be re-identified by techniques such as fingerprinting or by linking them with existing, auxiliary datasets.
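
A toy sketch of that linkage risk, using invented data rather than any real dataset: even with names removed, records can often be re-identified by joining on quasi-identifiers shared with an auxiliary, identified dataset.

```python
# Toy illustration of re-identification by linking on quasi-identifiers.
# "Released" research data has no names, but keeps coarse attributes.
released = [
    {"home_cell": "cell_417", "handset": "ModelX", "commute_km": 12.3},
    {"home_cell": "cell_902", "handset": "ModelY", "commute_km": 3.1},
]

# Auxiliary dataset an attacker already holds (e.g. scraped or purchased).
auxiliary = [
    {"name": "A. Jones", "home_cell": "cell_417", "handset": "ModelX"},
    {"name": "B. Smith", "home_cell": "cell_731", "handset": "ModelY"},
]

QUASI_IDENTIFIERS = ("home_cell", "handset")

def link(released_rows, aux_rows):
    """Match released records to named records when quasi-identifiers coincide."""
    matches = []
    for r in released_rows:
        key = tuple(r[q] for q in QUASI_IDENTIFIERS)
        for a in aux_rows:
            if tuple(a[q] for q in QUASI_IDENTIFIERS) == key:
                matches.append((a["name"], r))
    return matches

print(link(released, auxiliary))   # one record re-identified as 'A. Jones'
```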

The size of datasets differs per release. Telecom operators can provide data about millions of users, while it will be more challenging to reach such a number with a research-specific app. However, depending on the information collected and provided, a specific app may provide richer information about a user’s behaviour.

Ed: What sort of research can be done with this sort of data?

Ben: Data collected from mobile phones can reveal much interesting and useful information. For example, such data can show exact geographic locations and thus the movements of the owner, which can be relevant for the social sciences. On a larger scale, mass movements of persons can be monitored via mobile phones. This information is useful for public policy objectives such as crowd control, traffic management, identifying migration patterns, emergency aid, etc. Such data can also be very useful for commercial purposes, such as location specific advertising, studying the movement of consumers, or generally studying the use of mobile phones.

Mobile phone data is also necessary to understand the complex dynamics of the underlying Internet architecture. The mobile Internet has different requirements from the fixed-line Internet, so targeted investments in future Internet architecture will need to be assessed through detailed network research. Also, network research can study issues such as censorship or other forms of blocking information and transactions, which are increasingly carried out through mobile phones. This can serve as an early warning system for policy makers, activists and humanitarian aid workers, to name only a few stakeholders.

Ed: Some of these research datasets are later published as ‘open data’. What sorts of uses might researchers (or companies) put these data to? Does it tend to be mostly technical research, or there also social science applications?

Ben: The intriguing characteristic of the open data concept is that secondary uses can be unpredictable. A re-use is not necessarily technical, even if the raw data was collected for purely technical network research. New social science research could be based on existing technical data, or existing research analyses may be falsified or validated by other researchers. Artists, developers, entrepreneurs or public authorities can also use existing data to create new applications or to enrich existing information systems. There have been many instances when open data has been re-used to beneficial or profitable ends.

However, there is also a flipside to open data, especially when the dataset contains personal information, or information that can be linked to individuals. A working definition of open data is that one makes entire databases available, in standardized, machine readable and electronic format, to any secondary user, free of charge and free of restrictions or obligations, for any purpose. If a dataset contains information about your Internet browsing habits, your movements throughout the day or the phone numbers you have called over a specific period of time, it could be quite troubling if you have no control over who re-uses this information.

The risks and harms of such re-use are very context dependent, of course. In the Western world, such data could be used as a means for blackmail, stalking, identity theft, unsolicited commercial communications, etc. Further, if there is a chance that our telecom operators simply share data on how we use our mobile phones, we may refrain from activities such as taking part in demonstrations, attending political gatherings, or accessing certain socially unacceptable information. Such self-censorship will damage the free society we expect. In the developing world, or in authoritarian regimes, risks and harms can be a matter of life and death for data subjects, or at least involve the risk of physical harm. This is true for all citizens, but especially for diplomats, aid workers, journalists and social media users.

Finally, we cannot envisage how political contexts will change in the future. Future malevolent governments, even in Europe or the US, could easily use datasets containing sensitive information to harm or control specific groups in society. One need only look at the changing political landscape in Hungary to see how specific groups are suddenly targeted in what we thought was becoming a country that adheres to Western values.

Ed: The ethical privacy guidelines note the basic relation between the level of detail in information collected and the resulting usefulness of the dataset (datasets becoming less powerful as subjects are increasingly de-identified). This seems a fairly intuitive and fundamentally unavoidable problem; is there anything in particular to say about it?

Ben: Research often requires rich datasets for worthwhile analyses to be conducted. These will inevitably sometimes contain personal information, as it can be important to relate specific data to data subjects, whether anonymised, pseudonymised or otherwise. Far reaching deletion, aggregation or randomisation of data can make the dataset useless for the research purposes.
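One common way to make this trade-off tangible (not something the guidelines themselves prescribe, just a standard illustration) is the k-anonymity idea: after generalising the quasi-identifiers, how many records share each combination? The toy Python sketch below, with invented data and column names, shows how coarsening raises k at the cost of analytic detail.

```python
import pandas as pd

df = pd.DataFrame({
    "age":      [23, 24, 23, 56, 57],
    "postcode": ["OX1 3JS", "OX1 3JS", "OX1 2HR", "OX4 1AA", "OX4 1AB"],
})

def min_group_size(frame, quasi_identifiers):
    """Smallest equivalence class, i.e. the k in k-anonymity for these columns."""
    return frame.groupby(quasi_identifiers).size().min()

print(min_group_size(df, ["age", "postcode"]))  # raw data: k = 1, individuals can be singled out

# Coarsen into age bands and postcode districts: k rises, but so does the loss of detail.
coarse = pd.DataFrame({
    "age_band": (df["age"] // 10) * 10,
    "district": df["postcode"].str.split().str[0],
})
print(min_group_size(coarse, ["age_band", "district"]))  # k = 2 on this toy data
```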

Sophisticated methods of re-identifying datasets, and unforeseen methods which will be developed in future, mean that much information must be deleted or aggregated in order for a dataset containing personal information to be truly anonymous. It has become very difficult to determine when a dataset is sufficiently anonymised to the extent that it can enjoy the legal exception offered by data protection laws around the world and therefore be distributed as open data, without legal restrictions.

As a result, many research datasets cannot simply be released. The guidelines do not force the researcher into a zero-risk situation, where only useless or meaningless datasets can be released. Rather, they force the researcher to think very carefully about the type of data that will be collected, about data processing techniques, and about different disclosure methods. Although open data is an attractive method of disseminating research data, sometimes managed access systems may be more appropriate. The guidelines repeatedly prompt the researcher to consider the risks to data subjects in their specific context during each stage of the research design. They serve as a guide, but also as a normative framework for research that is potentially privacy invasive.

Ed: Presumably mobile companies have a duty to delete their data after a certain period; does this conflict with open datasets, whose aim is to be available indefinitely?

Ben: It is not a requirement for open data to be available indefinitely. However, once information is published freely on the Internet, it is very hard – if not impossible – to delete it. The researcher loses all control over a dataset once it is published online. So, even if a dataset is sufficiently de-identified against the re-identification techniques that are known today, this does not mean that future techniques cannot re-identify the dataset. We can’t expect researchers to take into account all science-fiction type future developments, but the guidelines do force the researcher to consider what successful re-identification would reveal about data subjects.

European mobile phone companies do have a duty to keep logs of communications for 6 months to 2 years, depending on the implementation of the misguided Data Retention Directive. We have recently learned that intelligence services worldwide have more or less unrestricted access to such information. We have no idea how long this information is stored in practice. It has recently and frequently been stated that deleting data has become more expensive than simply keeping it. This means that mobile phone operators and intelligence agencies may keep data on our mobile phone use forever. This must be taken into account when assessing which auxiliary datasets could be used to re-identify a research dataset. An IP address could be sufficient to link much information to an individual.

Ed: Presumably it’s impossible for a subject to later decide they want to be taken out of an open dataset; firstly due to cost, but also because (by definition) it ought to be impossible to find them in an anonymised dataset. Does this present any practical or legal problems?

Ben: In some countries, especially in Europe, data subjects have a legal right to object to their data being processed, by withdrawing consent or engaging in a legal procedure with the data processor. Although this is an important right, exercising it may lead to undesirable consequences for research. For example, the underlying dataset will be incomplete for secondary researchers who want to validate findings.

Our guidelines encourage researchers to be transparent about their research design, data processing and foreseeable secondary uses of the data. On the one hand, this builds trust in the network research discipline. On the other, it gives data subjects the necessary information to feel confident to share their data. Still, data subjects should be able to retract their consent via electronic means, instead of sending letters, if they can substantiate an appreciable harm to them.

Ed: How aware are funding bodies and ethics boards of the particular problems presented by mobile research; and are they categorically different from other human-subject research data? (eg interviews / social network data / genetic studies etc.)

Ben: University ethical boards or funding bodies are staffed by experts in a wide range of disciplines. However, this does not mean they understand the intricate details of complex Internet measurements, de-identification techniques or the state of affairs with regard to re-identification techniques, nor the harms a research programme can inflict in a specific context. For example, not everyone’s intuitive moral privacy compass will be activated when they read in a research proposal that the research systems will “monitor routing dynamics, by analysing packet traces collected from cell towers and internet exchanges”, or similar sentences.

Our guidelines encourage the researcher to write up the choices made with regards to personal information in a manner that is clear and understandable for the layperson. Such a level of transparency is useful for data subjects —  as well as ethical boards and funding bodies — to understand exactly what the research entails and how risks have been accommodated.

Ed: Linnet Taylor has already discussed mobile data mining from regions of the world with weak privacy laws: what is the general status of mobile privacy legislation worldwide?

Ben: Privacy legislation itself is about as fragmented and disputed as it gets. The US generally treats personal information as a commodity that can be traded, which enables Internet companies in Silicon Valley to use data as the new raw material in the information age. Europe considers privacy and data protection as a fundamental right, which is currently regulated in detail, albeit based on a law from 1995. The review of European data protection regulation has been postponed to 2015, possibly as a result of the intense lobbying effort in Brussels to either weaken or strengthen the proposed law. Some countries have not regulated privacy or data protection at all. Other countries have a fundamental right to privacy, which is not further developed in a specific data protection law and thus hardly enforced. Another group of countries have transplanted the European approach, but do not have the legal expertise to apply the 1995 law to the digital environment. The future of data protection is very much up in the air and requires much careful study.

The guidelines we have published take the international human rights framework as a base, while drawing inspiration from several existing legal concepts such as data minimisation, purpose limitation, privacy by design and informed consent. The guidelines give a solid base for privacy-aware research design. We do encourage researchers to discuss their projects with colleagues and legal experts as much as possible, though, because best practices and legal subtleties can vary per country, state or region.

Read the guidelines: Zevenbergen, B., Brown, I., Wright, J., and Erdos, D. (2013) Ethical Privacy Guidelines for Mobile Connectivity Measurements. Oxford Internet Institute, University of Oxford.


Ben Zevenbergen was talking to blog editor David Sutcliffe.

]]>
Staying free in a world of persuasive technologies https://ensr.oii.ox.ac.uk/staying-free-in-a-world-of-persuasive-technologies/ Mon, 29 Jul 2013 10:11:17 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1541
We’re living through a crisis of distraction. Image: “What’s on my iPhone” by Erik Mallinson

Ed: What persuasive technologies might we routinely meet online? And how are they designed to guide us towards certain decisions?

There’s a broad spectrum, from the very simple to the very complex. A simple example would be something like Amazon’s “one-click” purchase feature, which compresses the entire checkout process down to a split-second decision. This uses a persuasive technique known as “reduction” to minimise the perceived cost to a user of going through with a purchase, making it more likely that they’ll transact. At the more complex end of the spectrum, you have the whole set of systems and subsystems that is online advertising. As it becomes easier to measure people’s behaviour over time and across media, advertisers are increasingly able to customise messages to potential customers and guide them down the path toward a purchase.

It isn’t just commerce, though: mobile behavior-change apps have seen really vibrant growth in the past couple years. In particular, health and fitness: products like Nike+, Map My Run, and Fitbit let you monitor your exercise, share your performance with friends, use social motivation to help you define and reach your fitness goals, and so on. One interesting example I came across recently is called “Zombies, Run!” which motivates by fright, spawning virtual zombies to chase you down the street while you’re on your run.

As one final example, if you’ve ever tried to deactivate your Facebook account, you’ve probably seen a good example of social persuasive technology: the screen that comes up saying, “If you leave Facebook, these people will miss you” and then shows you pictures of your friends. Broadly speaking, most of the online services we think we’re using for “free” — that is, the ones we’re paying for with the currency of our attention — have some sort of persuasive design goal. And this can be particularly apparent when people are entering or exiting the system.

Ed: Advertising has been around for centuries, so we might assume that we have become clever about recognizing and negotiating it — what is it about these online persuasive technologies that poses new ethical questions or concerns?

The ethical questions themselves aren’t new, but the environment in which we’re asking them makes them much more urgent. There are several important trends here. For one, the Internet is becoming part of the background of human experience: devices are shrinking, proliferating, and becoming more persistent companions through life. In tandem with this, rapid advances in measurement and analytics are enabling us to more quickly optimise technologies to reach greater levels of persuasiveness. That persuasiveness is further augmented by applying knowledge of our non-rational psychological biases to technology design, which we are doing much more quickly than in the design of slower-moving systems such as law or ethics. Finally, the explosion of media and information has made it harder for people to be intentional or reflective about their goals and priorities in life. We’re living through a crisis of distraction. The convergence of all these trends suggests that we could increasingly live our lives in environments of high persuasive power.

To me, the biggest ethical questions are those that concern individual freedom and autonomy. When, exactly, does a “nudge” become a “push”? When we call these types of technology “persuasive,” we’re implying that they shouldn’t cross the line into being coercive or manipulative. But it’s hard to say where that line is, especially when it comes to persuasion that plays on our non-rational biases and impulses. How persuasive is too persuasive? Again, this isn’t a new ethical question by any means, but it is more urgent than ever.

These technologies also remind us that the ethics of attention is just as important as the ethics of information. Many important conversations are taking place across society that deal with the tracking and measurement of user behaviour. But that information is valuable largely because it can be used to inform some sort of action, which is often persuasive in nature. But we don’t talk nearly as much about the ethics of the persuasive act as we do about the ethics of the data. If we did, we might decide, for instance, that some companies have a moral obligation to collect more of a certain type of user data because it’s the only way they could know if they were persuading a person to do something that was contrary to their well-being, values, or goals. Knowing a person better can be the basis not only for acting more wrongly toward them, but also more rightly.

As users, then, persuasive technologies require us to be more intentional about how we define and express our own goals. The more persuasion we encounter, the clearer we need to be about what it is we actually want. If you ask most people what their goals are, they’ll say things like “spending more time with family,” “being healthier,” “learning piano,” etc. But we don’t all accomplish the goals we have — we get distracted. The risk of persuasive technology is that we’ll have more temptations, more distractions. But its promise is that we can use it to motivate ourselves toward the things we find fulfilling. So I think what’s needed is more intentional and habitual reflection about what our own goals actually are. To me, the ultimate question in all this is how we can shape technology to support human goals, and not the other way around.

Ed: What if a persuasive design or technology is simply making it easier to do something we already want to do: isn’t this just ‘user centered design’? (ie a good thing?)

Yes, persuasive design can certainly help motivate a user toward their own goals. In these cases it generally resonates well with user-centered design. The tension really arises when the design leads users toward goals they don’t already have. User-centered design doesn’t really have a good way to address persuasive situations, where the goals of the user and the designer diverge.

To reconcile this tension, I think we’ll probably need to get much better at measuring people’s intentions and goals than we are now. Longer-term, we’ll probably need to rethink notions like “design” altogether. When it comes to online services, it’s already hard to talk about “products” and “users” as though they were distinct entities, and I think this will only get harder as we become increasingly enmeshed in an ongoing co-evolution.

Ed: Governments and corporations are increasingly interested in “data-driven” decision-making: isn’t that a good thing? Particularly if the technologies now exist to collect ‘big’ data about our online actions (if not intentions)?

I don’t think data ever really drives decisions. It can definitely provide an evidentiary basis, but any data is ultimately still defined and shaped by human goals and priorities. We too often forget that there’s no such thing as “pure” or “raw” data — that any measurement reflects, before anything else, evidence of attention.

That being said, data-based decisions are certainly preferable to arbitrary ones, provided that you’re giving attention to the right things. But data can’t tell you what those right things are. It can’t tell you what to care about. This point seems to be getting lost in a lot of the fervour about “big data,” which as far as I can tell is a way of marketing analytics and relational databases to people who are not familiar with them.

The psychology of that term, “big data,” is actually really interesting. On one hand, there’s a playful simplicity to the word “big” that suggests a kind of childlike awe where words fail. “How big is the universe? It’s really, really big.” It’s the unknown unknowns at scale, the sublime. On the other hand, there’s a physicality to the phrase that suggests an impulse to corral all our data into one place: to contain it, mould it, master it. Really, the term isn’t about data abundance at all – it reflects our grappling with a scarcity of attention.

The philosopher Luciano Floridi likens the “big data” question to being at a buffet where you can eat anything, but not everything. The challenge comes in the choosing. So how do you choose? Whether you’re a government, a corporation, or an individual, it’s your ultimate aims and values — your ethical priorities — that should ultimately guide your choosiness. In other words, the trick is to make sure you’re measuring what you value, rather than just valuing what you already measure.


James Williams is a doctoral student at the Oxford Internet Institute. He studies the ethical design of persuasive technology. His research explores the complex boundary between persuasive power and human freedom in environments of high technological persuasion.

James Williams was talking to blog editor Thain Simon.

]]>
How effective is online blocking of illegal child sexual content? https://ensr.oii.ox.ac.uk/how-effective-is-online-blocking-of-illegal-child-sexual-content/ Fri, 28 Jun 2013 09:30:18 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1576
The recent announcement by ‘Anonymous Belgium’ (above) that they would ‘liberate the Belgian Web’ on 15 July 2013 in response to blocking of websites by the Belgian government was revealed to be a promotional stunt by a commercial law firm wanting to protest non-transparent blocking of online content.

Ed: European legislation introduced in 2011 requires Member States to ensure the prompt removal of child pornography websites hosted in their territory and to endeavour to obtain the removal of such websites hosted outside; leaving open the option to block access by users within their own territory. What is problematic about this blocking?

Authors: From a technical point of view, all possible blocking methods that could be used by Member States are ineffective, as they can all be circumvented very easily. The use of widely available technologies (such as encryption or proxy servers), or tiny changes in computer configuration (for instance the choice of DNS server) that may also be made for better performance or to enhance security or privacy, enables circumvention of blocking methods. Another problem arises from the fact that this legislation only targets website content, while offenders often use other technologies such as peer-to-peer systems, newsgroups or email.

Ed: Many of these blocking activities stem from European efforts to combat child pornography, but you suggest that child protection may be used as a way to add other types of content to lists of blocked sites – notably those that purportedly violate copyright. Can you explain how this “mission creep” is occurring, and what the risks are?

Authors: Combating child pornography and child abuse is a universal and legitimate concern. With regard to this subject there is a worldwide consensus that action must be undertaken in order to punish abusers and protect children. Blocking measures are usually advocated on the basis of the argument that access to these images must be prevented, thereby avoiding users stumbling upon child pornography inadvertently. While this seems reasonable for this particular type of content, in some countries governments increasingly use blocking mechanisms for other ‘illegal’ content, such as gambling or copyright-infringing content, often in a very non-transparent way, without clear or established procedures.

It is, in our view, especially important at a time when governments do not hesitate to carry out secret online surveillance of citizens without any transparency or accountability, that any interference with online content must be clearly prescribed by law, have a legitimate aim and, most importantly, be proportional and not go beyond what is necessary to achieve that aim. In addition, the role of private actors, such as ISPs, search engine companies or social networks, must be very carefully considered. It must be clear that decisions about which content or behaviours are illegal and/or harmful must be taken or at least be surveyed by the judicial power in a democratic society.

Ed: You suggest that removal of websites at their source (mostly in the US and Canada) is a more effective means of stopping the distribution of child pornography — but that European law enforcement has often been insufficiently committed to such action. Why is this? And how easy are cross-jurisdictional efforts to tackle this sort of content?

Authors: The blocking of websites, although arguably ineffective as a method of making the content inaccessible, is a quick way to be seen to take action against the appearance of unwanted material on the Internet. The removal of content, on the other hand, requires identifying not only those responsible for hosting the content but, more importantly, the actual perpetrators. This is of course a more intrusive and lengthy process, for which law enforcement agencies currently lack resources.

Moreover, these agencies may indeed run into obstacles related to territorial jurisdiction and difficult international cooperation. However, prioritising and investing in actual removal of content, even though not feasible in certain circumstances, will ensure that child sexual abuse images do not further circulate, and, hence, that the risk of repetitive re-victimization of abused children is reduced.


Read the full paper: Karel Demeyer, Eva Lievens and Jos Dumortier (2012) Blocking and Removing Illegal Child Sexual Content: Analysis from a Technical and Legal Perspective. Policy and Internet 4 (3-4).

Karel Demeyer, Eva Lievens and Jos Dumortier were talking to blog editor Heather Ford.

]]>
The global fight over copyright control: Is David beating Goliath at his own game? https://ensr.oii.ox.ac.uk/the-global-fight-over-copyright-control-is-david-beating-goliath-at-his-own-game/ Mon, 10 Jun 2013 11:27:11 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1262
Anti-HADOPI march in Paris, 2009. Image by kurto.

In the past few years, many governments have attempted to curb online “piracy” by enforcing harsher copyright control upon Internet users. This trend is now well documented in the academic literature, as with Jon Bright and José Agustina‘s or Sebastian Haunss‘ recent reviews of such developments.

However, as the digital copyright control bills of the 21st century reached parliamentary floors, several of them failed to pass. Many of these legislative failures, such as the postponement of the SOPA and PIPA bills in the United States, succeeded in mobilizing large audiences and received widespread media coverage.

Writing about these bills and the related events that led to the demise of the similarly-intentioned Anti-Counterfeiting Treaty Agreement (ACTA), Susan Sell, a seasoned analyst of intellectual property enforcement, points to the transnational coalition of Internet users at the heart of these outcomes. As she puts it:

In key respects, this is a David and Goliath story in which relatively weak activists were able to achieve surprising success against the strong.

That analogy also appears in our recently published article in Policy & Internet, which focuses on the groups that fought several digital copyright control bills as they went through the European and French parliaments in 2007-2009 — most notably the EU “Telecoms Package” and the French “HADOPI” laws.

Like Susan Sell, our analysis shows “David” civil society groups formed by socially and technically skilled activists disrupting the work of “Goliath” coalitions of powerful actors that had previously been successful at converting the interests of the so-called “creative industries” into copyright law.

To explain this process, we stress the importance of digital environments for providing contenders of copyright reform with a robust discursive opportunity structure — a space in which activist groups could defend and diffuse alternative understandings and practices of copyright control and telecommunication reform.

These counter-frames and practices refer to the Internet as a public good, and make openness, sharing and creativity central features of the new digital economy. They also require that copyright control and telecom regulation respect basic principles of democratic life, such as the right to access information.

Once put into the public space by skilled activists from the free software community and beyond, this discourse chimed with a larger audience, which eventually led many European and French parliamentarians to oppose “graduated response” and “three-strikes” initiatives that threatened Internet users with Internet access termination for successive copyright infringement. The reforms that we studied had different legal outcomes, thereby reflecting the current state of copyright regulation.

In our analysis, we say a lot more about the kind of skills that we briefly allude to here, such as political coding abilities to support political and legal analysis. We also draw on previous work by Andrew Chadwick to forge the concept of digital network repertoires of contention, by which we mean the tactical use of digital communication to mobilize individuals into loose protest groups.

This part of our research sheds light on how “David” ended up beating “Goliath”, with activists relying on their technical skills and high levels of digital literacy to overcome the logic of collective action and to counterbalance their comparatively weak economic resources.

However, as we write in our paper, David does not systematically beat Goliath over copyright control and telecom regulation. The “three-strikes” or “graduated response” approach to unauthorized file-sharing, where Internet users are monitored and sanctioned if suspected of digital “piracy”, is still very much alive.

France is an interesting case study to date, as it pioneered this scheme under Nicolas Sarkozy’s presidency. Although the current left-wing government seems determined to dismantle the “HADOPI” body set up by its predecessor, which has proven largely ineffective in curbing online copyright infringement, it has not renounced the monitoring and sanctioning of illegal file-sharing.

Furthermore, as both our case studies illustrate, online collective action had to be complemented by offline lobbying and alliances with like-minded parliamentary actors, consumer groups and businesses to work effectively. The extent to which activism has actually gone ‘digital’ therefore requires some nuance.

Finally, as we stress in our article and as Yana observes in her literature review on Internet content regulation in liberal democracies, further comparative work is needed to assess whether the “Davids” of Internet activism are beating the “Goliaths” in the global fight over online file-sharing and copyright control.

We therefore hope that our article will incite other researchers to study the social groups that compete over intellectual property lawmaking. The legislative landscape is rife with reforms of copyright law and telecom regulation, and the conflicts that they generate carry important lessons for Internet politics scholars.

References

Breindl, Y. and Briatte, F. (2013) Digital Protest Skills and Online Activism Against Copyright Reform in France and the European Union. Policy and Internet 5 (1) 27-55.

Breindl, Y. (2013) Internet content regulation in liberal democracies. A literature review. Working Papers on Digital Humanities, Institut für Politikwissenschaft der Georg-August-Universität Göttingen.

Bright, J. and Agustina, J.R. (2013) Mediating Surveillance: The Developing Landscape of European Online Copyright Enforcement. Journal of Contemporary European Research 9 (1).

Chadwick, A. (2007) Digital Network Repertoires and Organizational Hybridity. Political Communication 24 (3).

Haunss, S. (2013) Conflicts in the Knowledge Society: The Contentious Politics of Intellectual Property. Cambridge Intellectual Property and Information Law (No. 20), Cambridge University Press.

Sell, S.K. (2013) Revenge of the “Nerds”: Collective Action against Intellectual Property Maximalism in the Global Information Age. International Studies Review 15 (1) 67-85.


Read the full paper: Yana Breindl and François Briatte (2013) Digital Protest Skills and Online Activism Against Copyright Reform in France and the European Union. Policy and Internet 5 (1) 27-55.

]]>
Time for debate about the societal impact of the Internet of Things https://ensr.oii.ox.ac.uk/time-for-debate-about-the-societal-impact-of-the-internet-of-things/ Mon, 22 Apr 2013 14:32:22 +0000 http://blogs.oii.ox.ac.uk/policy/?p=931
The 2nd Annual Internet of Things Europe 2010: A Roadmap for Europe, 2010. Image by Pierre Metivier.
On 17 April 2013, the US Federal Trade Commission published a call for inputs on the ‘consumer privacy and security issues posed by the growing connectivity of consumer devices, such as cars, appliances, and medical devices’, in other words, about the impact of the Internet of Things (IoT) on the everyday lives of citizens. The call is in large part one for information to establish what the current state of technology development is and how it will develop, but it also looks for views on how privacy risks should be weighed against potential societal benefits.

There’s a lot that’s not very new about the IoT. Embedded computing, sensor networks and machine to machine communications have been around a long time. Mark Weiser was developing the concept of ubiquitous computing (and prototyping it) at Xerox PARC in 1990.  Many of the big ideas in the IoT — smart cars, smart homes, wearable computing — are already envisaged in works such as Nicholas Negroponte’s Being Digital, which was published in 1995 before the mass popularisation of the internet itself. The term ‘Internet of Things’ has been around since at least 1999. What is new is the speed with which technological change has made these ideas implementable on a societal scale. The FTC’s interest reflects a growing awareness of the potential significance of the IoT, and the need for public debate about its adoption.

As the cost and size of devices falls and network access becomes ubiquitous, it is evident that not only major industries but whole areas of consumption, public service and domestic life will be capable of being transformed. The number of connected devices is likely to grow fast in the next few years. The Organisation for Economic Co-operation and Development (OECD) estimates that while a family with two teenagers may have 10 devices connected to the internet, in 2022 this may well grow to 50 or more. Across the OECD area the number of connected devices in households may rise from an estimated 1.7 billion today to 14 billion by 2022. Programmes such as smart cities, smart transport and smart metering will begin to have their effect soon. In other countries, notably in China and Korea, whole new cities are being built around smart infrastructure, giving technology companies the opportunity to develop models that could be implemented subsequently in Western economies.

Businesses and governments alike see this as an opportunity for new investment both as a basis for new employment and growth and for the more efficient use of existing resources. The UK Government is funding a strand of work under the auspices of the Technology Strategy Board on the IoT, and the IoT is one of five themes that are the subject of the Department for Business, Innovation & Skills (BIS)’s consultation on the UK’s Digital Economy Strategy (alongside big data, cloud computing, smart cities, and eCommerce).

The enormous quantity of information that will be produced will provide further opportunities for collecting and analysing big data. There is consequently an emerging agenda about privacy, transparency and accountability. There are challenges too to the way we understand and can manage the complexity of interacting systems that will underpin critical social infrastructure.

The FTC is not alone in looking to open public debate about these issues. In February, the OII and BCS (the Chartered Institute for IT) ran a joint seminar to help the BCS’s consideration about how it should fulfil its public education and lobbying role in this area. A summary of the contributions is published on the BCS website.

The debate at the seminar was wide-ranging. There was no doubt that the train has left the station as far as this next phase of the Internet is concerned. The scale of major corporate investment, government encouragement and entrepreneurial enthusiasm is not to be deflected. In many sectors of the economy there are already changes being felt by consumers, or there will be soon enough. Smart metering, smart grid, and transport automation (including cars) are all examples. A lot of the discussion focused on risk. In a society which places a high value on audit and accountability, it is perhaps unsurprising that early implementations have often used sensors and tags to track processes and monitor activity. This is especially attractive in industrial structures that have high degrees of subcontracting.

Wider societal risks were also discussed. As for the FTC, the privacy agenda is salient. There is real concern that the assumptions which underlie the data protection regime (especially its reliance on data minimisation) will not be adequate to protect individuals in an era of ubiquitous data. Nor is it clear that the UK’s regulator, the Information Commissioner, will be equipped to deal with the volume of potential business. Alongside privacy, there is also concern for security and the protection of critical infrastructure. The growth of reliance on the IoT will make cybersecurity significant in many new ways. There are issues too about complexity and the unforeseen (and arguably unforeseeable) consequences of the interactions between complex, large, distributed systems acting in real time, and with consequences that go very directly to the wellbeing of individuals and communities.

There are great opportunities and a pressing need for social research into the IoT. The data about social impacts has been limited hitherto given the relatively few systems deployed. This will change rapidly. As Governments consult and bodies like the BCS seek to advise, it’s very desirable that public debate about privacy and security, access and governance, take place on the basis of real evidence and sound analysis.

]]>
Uncovering the structure of online child exploitation networks https://ensr.oii.ox.ac.uk/uncovering-the-structure-of-online-child-exploitation-networks/ https://ensr.oii.ox.ac.uk/uncovering-the-structure-of-online-child-exploitation-networks/#comments Thu, 07 Feb 2013 10:11:17 +0000 http://blogs.oii.ox.ac.uk/policy/?p=661 The Internet has provided the social, individual, and technological circumstances needed for child pornography to flourish. Sex offenders have been able to utilize the Internet for dissemination of child pornographic content, for social networking with other pedophiles through chatrooms and newsgroups, and for sexual communication with children. A 2009 United Nations report estimates that there are more than four million websites containing child pornography, with 35 percent of them depicting serious sexual assault [1]. Even if this report or others exaggerate the true prevalence of those websites by a wide margin, the fact of the matter is that those websites are pervasive on the world wide web.

Despite large investments of law enforcement resources, online child exploitation is nowhere near under control, and while there are numerous technological products to aid in finding child pornography online, they still require substantial human intervention. Even so, steps can be taken to further automate these searches, to reduce the amount of content police officers have to examine, and to increase the time they can spend investigating individuals.

While law enforcement agencies will aim for maximum disruption of online child exploitation networks by targeting the most connected players, there is a general lack of research on the structural nature of these networks; something we aimed to address in our study, by developing a method to extract child exploitation networks, map their structure, and analyze their content. Our custom-written Child Exploitation Network Extractor (CENE) automatically crawls the Web from a user-specified seed page, collecting information about the pages it visits by recursively following the links out of the page; the result of the crawl is a network structure containing information about the content of the websites, and the linkages between them [2].

We chose ten websites as starting points for the crawls; four were selected from a list of known child pornography websites while the other six were selected and verified through Google searches using child pornography search terms. To guide the network extraction process we defined a set of 63 keywords, which included words commonly used by the Royal Canadian Mounted Police to find illegal content; most of them code words used by pedophiles. Websites included in the analysis had to contain at least seven of the 63 unique keywords, on a given web page; manual verification showed us that seven keywords distinguished well between child exploitation web pages and regular web pages. Ten sports networks were analyzed as a control.
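By way of illustration only, the general pattern behind this kind of network extraction is a breadth-first crawl with a keyword-threshold filter. The Python sketch below is not the authors’ CENE tool: the keyword list is a neutral placeholder, the threshold is arbitrary, and it assumes the third-party requests library. It simply shows how per-page keyword counts and a link network might be assembled from a seed URL.

```python
import re
from collections import deque
from urllib.parse import urljoin
import requests  # assumed third-party dependency

KEYWORDS = ["placeholder_term_1", "placeholder_term_2", "placeholder_term_3"]  # stand-ins; the study used 63 domain-specific terms
THRESHOLD = 2    # the study required at least seven distinct keywords per page
MAX_PAGES = 100  # keep the sketch bounded

def crawl(seed_url):
    """Breadth-first crawl: returns (nodes, edges), where nodes maps URL -> keyword hit count."""
    nodes, edges = {}, []
    queue, seen = deque([seed_url]), {seed_url}
    while queue and len(nodes) < MAX_PAGES:
        url = queue.popleft()
        try:
            html = requests.get(url, timeout=10).text.lower()
        except requests.RequestException:
            continue
        hits = sum(1 for kw in KEYWORDS if kw in html)
        if hits < THRESHOLD:
            continue  # page does not qualify; record nothing and do not expand its links
        nodes[url] = hits
        for href in re.findall(r'href=["\'](.*?)["\']', html):  # crude link extraction, adequate for a sketch
            link = urljoin(url, href)
            if link.startswith("http"):
                edges.append((url, link))
                if link not in seen:
                    seen.add(link)
                    queue.append(link)
    return nodes, edges
```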

The web crawler was found to be able to properly identify child exploitation websites, with a clear difference found in the hardcore content hosted by child exploitation and non-child exploitation websites. Our results further suggest that a ‘network capital’ measure — which takes into account network connectivity, as well as severity of content — could aid in identifying the key players within online child exploitation networks. These websites are the main concern of law enforcement agencies, making the web crawler a time saving tool in target prioritization exercises. Interestingly, while one might assume that website owners would find ways to avoid detection by a web crawler of the type we have used, these websites — despite the fact that much of the content is illegal — turned out to be easy to find. This fits with previous research that has found that only 20-25 percent of online child pornography arrestees used sophisticated tools for hiding illegal content [3,4].

As mentioned earlier, the huge amount of content found on the Internet means that the likelihood of eradicating the problem of online child exploitation is nil. As the decentralized nature of the Internet makes combating child exploitation difficult, it becomes more important to introduce new methods to address this. Social network analysis measurements, in general, can be of great assistance to law enforcement investigating all forms of online crime—including online child exploitation. By creating a web crawler that reduces the number of hours officers need to spend examining possible child pornography websites and determining whom to target, we believe that we have touched on a method to maximize the current efforts of law enforcement. An automated process has the added benefit of helping to keep officers in the department longer, as they would not be subjected to as much traumatic content.

There are still areas for further research; the first step being to further refine the web crawler. Despite being a considerable improvement over a manual analysis of 300,000 web pages, it could be improved to allow for efficient analysis of larger networks, bringing us closer to the true size of the full online child exploitation network, but also, we expect, to some of the more hidden (e.g., password/membership protected) websites. This does not negate the value of researching publicly accessible websites, given that they may be used as starting locations for most individuals.

Much of the law enforcement effort to date has focused on investigating images, the primary reason being that databases of hash values (used to authenticate the content) exist for images, but not for videos. Our web crawler did not distinguish between images on the basis of their content, but utilizing known hash values would help improve the validity of our severity measurement. Although it would be naïve to suggest that online child exploitation can be completely eradicated, the sorts of social network analysis methods described in our study provide a means of understanding the structure (and therefore the key vulnerabilities) of online networks, in turn greatly improving the effectiveness of law enforcement.
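The hash-matching idea itself is straightforward; the sketch below is a generic illustration (the single “known” digest is just a placeholder value, and real systems may rely on different hash functions, curated hash databases, or perceptual hashing rather than plain SHA-256):

```python
import hashlib
from pathlib import Path

# Placeholder set of hex digests standing in for a curated database of known, verified content.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def file_sha256(path):
    """Hash a file in chunks so large images or videos need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known(path):
    return file_sha256(path) in KNOWN_HASHES

# Example: flag every file under a directory that matches the known-hash set.
# flagged = [p for p in Path("collected_pages/").rglob("*") if p.is_file() and is_known(p)]
```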

[1] Engeler, E. 2009. September 16. UN Expert: Child Porn on Internet Increases. The Associated Press.

[2] Westlake, B.G., Bouchard, M., and Frank, R. 2012. Finding the Key Players in Online Child Exploitation Networks. Policy and Internet 3 (2).

[3] Carr, J. 2004. Child Abuse, Child Pornography and the Internet. London: NCH.

[4] Wolak, J., D. Finkelhor, and K.J. Mitchell. 2005. “Child Pornography Possessors Arrested in Internet-Related Crimes: Findings from the National Juvenile Online Victimization Study (NCMEC 06–05–023).” Alexandria, VA: National Center for Missing and Exploited Children.


Read the full paper: Westlake, B.G., Bouchard, M., and Frank, R. 2012. Finding the Key Players in Online Child Exploitation Networks. Policy and Internet 3 (2).

]]>
Public policy responses to cybercrime: a new special issue from Policy and Internet https://ensr.oii.ox.ac.uk/public-policy-responses-to-cybercrime-a-new-special-issue-from-policy-and-internet/ Wed, 01 Jun 2011 11:38:35 +0000 http://blogs.oii.ox.ac.uk/policy/?p=645 Cybercrime is just one of many significant and challenging issues of ethics and public policy raised by the Internet. It has policy implications for both national and supra-national legislation, involving, as it may, attacks against the integrity, authenticity, and confidentiality of information systems; content-related crimes; and “traditional” crimes committed using networked technologies.

While ‘cybercrime’ can be used to describe a wide range of undesirable conduct facilitated by networked technologies, it is not a legal term of art, and many so-called cybercrimes (such as cyber-rape and the virtual vandalism of virtual worlds) are not necessarily crimes as far as the criminal law is concerned. This can give rise to novel situations in which outcomes that feel instinctively wrong do not give rise to criminal liability. Emily Finch discusses the tragic case of Bernard Gilbert: a man whose argument over a disputed parking space led to his death, a police officer having disclosed Gilbert’s home address to his assailant. The officer was charged simply with the offence of disclosing personal data; the particular consequences of such disclosure being immaterial under English criminal law. Finch argues that this is unsatisfactory: as more personal information is gathered and available online, the greater the potential risk to the individual from its unauthorized disclosure. She advocates a two-tier structure for liability in the event that disclosure results in harm.

Clearly, current criminal law often struggles to deal with behaviours that were either not technologically possible at the time that the law was made, or were not within the contemplation of the legislature. Conversely, criminal liability might arise unexpectedly. Sandra Schmitz and Lawrence Siry explore the curious overlap between the nature and motivation of sexting and the possibility that it might fall foul of child pornography laws. They argue that the law as it stands is questionable, as the nature and content of ‘sext’ messages is generally at odds with conceptualizations of child pornography. Given that laws designed to protect children may also be used to criminalize unremarkable adolescent behaviour, this is an example of where the law should clearly be changed.

Simone van der Hof and Bert-Jaap Koops also address the mobile Internet usage of young people, considering the boundaries between freedom and autonomy on the one hand and control and repression on the other, arguing that the criminal law is yet again somewhat inadequate, as its overuse can lead to adolescents relying on legal protection rather than taking more proactive responsibility for their own online safety. The role of policy here should be to stimulate digital literacy in the first instance and, for the graver risks, to foster a co-regulatory regime by the Internet industry; criminalization should only be used as a last resort. However, the technology itself can also be used to assist law enforcement agencies in policing the Internet, for example in identifying and targeting online child exploitation networks. Bryce G. Westlake, Martin Bouchard, and Richard Frank demonstrate the use of an automated tool to provide greater efficiency in target prioritization for policing authorities; also opening the policy discussion surrounding the desirability of augmentation of traditional police procedures by automated software agents.

As well as harm to individuals, misuse of the Internet can also cause immense commercial damage to the creative industries by unlawful copyright infringement. The empirical study by Jonathan Basamanowicz and Martin Bouchard on the policing of copyright piracy rings suggests that increased regulation will only encourage offenders to adapt their behaviour into something less amenable to control. They argue that an effective response must consider the motivations and modus operandi of the offenders, and propose a situational crime prevention framework which may, at the very least, curtail their activities. Finally, again from an economic perspective, Michael R. Hammock considers the economics of website security seals and analyses whether market forces are controlling privacy and security adequately, concluding that unilateral regulation may actually harm those consumers who might not care whether or not they are protected, but who will have to bear the indirect cost in the form of a premium for increased protection.

The spectrum of policy responses can therefore be seen as existing along a continuum, with “top-down” responses originating from state agencies at one extreme, to bottom-up responses originating from private institutions at the other. Malcolm Shore, Yi Du, and Sherali Zeadally apply this public–private model to the regulation of cyber-attacks on critical national infrastructures. They consider the benefits and limitations of the various public and private initiatives designed to implement a national cybersecurity strategy for New Zealand and propose a model of assured public–private partnership based on incentivized adoption as the most effective way forward.

The articles in this special issue show that there can be no single simple policy-based solution to cybercrime, but suggest instead that there is a role for policymakers to introduce multi-tiered responses, involving a mix of the law, education, industry responsibility, and technology. Responses will also often require cooperation between law enforcement organizations, international coordination, and cooperation between the public and private spheres. It is clear that an effective response to cybercrime must therefore be greater than the sum of its parts, and should evolve as part of the diffuse governance network which results from the complex, yet natural, tensions between law, society, and the Internet.

]]>
Personal data protection vs the digital economy? OII policy forum considers our digital footprints https://ensr.oii.ox.ac.uk/personal-data-protection-vs-the-digital-economy-forthcoming-oii-policy-forum/ https://ensr.oii.ox.ac.uk/personal-data-protection-vs-the-digital-economy-forthcoming-oii-policy-forum/#comments Thu, 03 Feb 2011 11:12:13 +0000 http://blogs.oii.ox.ac.uk/policy/?p=177 Catching a bus, picking up some groceries, calling home to check on the children – all simple, seemingly private activities that characterise many people’s end to the working day. Yet each of these activities leaves a data trail that enables companies, even the state, to track the most mundane aspects of our lives. Add to this the range and quantity of personal data that many of us willingly post online on our blogs, Facebook walls or Google docs, and it is clear that the trail of digital footprints we leave is long and hard to erase.

Even if in most cases, this data is only likely to be used in an anonymised and aggregated form to identify trends in transport or shopping patterns, or to personalise the Internet services available to us, the fact that its collection is now so routine and so extensive should make us question whether the regulatory system governing data collection, storage and use is fit for purpose. A forthcoming OII policy forum on Tracing the Policy Implications of the Future Digital Economy (16 Feb) will consider this question, bringing together leading academics from across several disciplines with policy-makers and industry experts.

This is a topic which the OII is well-placed to address. Ian Brown’s Privacy Values Network project addresses a major knowledge gap, measuring the various costs and benefits to individuals of handing over data in different contexts, as without this we simply don’t know how much people value their privacy (or indeed understand its limits). The last Oxford Internet Survey (OxIS) rather surprisingly showed that in 2009 people were significantly less concerned about privacy online in the UK than in previous years (45% of all those surveyed in 2009 against 66% in 2007); we wait to see whether this finding is repeated when OxIS 2011 goes into the field next month.

Our faculty also have much to say about the adequacy (or otherwise) of the regulatory framework: a recent report by Ian Brown and Douwe Korff on New Challenges to Data Protection identified for the European Commission the scale of challenges presented to the current data protection regime, whilst Viktor Mayer-Schönberger’s book Delete: The Virtue of Forgetting in the Digital Age has rightly raised the suggestion that personal information online should have an expiration date, to ensure it doesn’t hang around for years to embarrass us at a later date.

The forum will consider the way in which the market for information storage and collection is rapidly changing with the advent of new technologies, and on this point, one conclusion is clear: if we accept Helen Nissenbaum’s contention that personal information and data should be collected and protected according to the social norms governing different social contexts, then we need to get to grips pretty fast with the way in which these technologies are playing out in the way we work, play, learn and consume.

]]>
Internet, Politics, Policy 2010: Closing keynote by Viktor Mayer-Schönberger https://ensr.oii.ox.ac.uk/internet-politics-policy-2010-closing-keynote-by-viktor-mayer-schonberger/ Fri, 17 Sep 2010 15:48:04 +0000 http://blogs.oii.ox.ac.uk/policy/?p=94 Our two-day conference is coming to a close with a keynote by Viktor Mayer-Schönberger who is soon to be joining the faculty of the Oxford Internet Institute as Professor of Internet Governance and Regulation.

Viktor talked about the theme of his recent book “Delete: The Virtue of Forgetting in the Digital Age” (a webcast of this keynote will be available soon on the OII website but you can also listen to a previous talk here). It touches on many of the recent debates about information that has been published on the web in some context and which might suddenly come back to us in a completely different context, e.g. when applying for a job and being confronted with some drunken picture of us obtained from Facebook.

Viktor puts that into a broad perspective, contrasting the two themes of “forgetting” and “remembering”. He convincingly argues that for most of human history, forgetting has been the default. This state of affairs has changed dramatically with the advances in computer technology, data storage and information retrieval available on a global information infrastructure. Now remembering is the default, as most of the information stored digitally is available forever and in multiple places.

What he sees at stake is power: the permanent threat that our activities are being watched by others – not necessarily now, but possibly in the future – can alter our behaviour today. What is more, he says that without forgetting it is hard for us to forgive, as we deny ourselves and others the possibility of change.

No matter to what degree you are prepared to follow the argument, the most intriguing question is how the current state of remembering could be changed to forgetting. Viktor discusses a number of ideas that pose no real solution:

  1. privacy rights – don’t go very far in changing actual behaviour
  2. information ecology – the idea to store only as much as necessary
  3. digital abstinence – just not using these digital tools but this is not very practical
  4. full contextualization – store as much information as possible in order to provide the necessary context for evaluating information from the past
  5. cognitive adjustments – humans have to change in order to learn how to discard the information but this is very difficult
  6. privacy digital rights management – would require a global infrastructure that would create more threats than solutions

Instead Viktor wants to establish mechanisms that ease forgetting, primarily by making it a little bit more difficult to remember. Ideas include

  • an expiration date for information, less to technically force deletion than to socially force thinking about forgetting (see the sketch below)
  • making older information a bit more difficult to retrieve

Whatever the actual tool, the default should be forgetting, with tools prompting their users to reflect on and choose just how long a certain piece of information should remain valid.
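As a purely illustrative sketch of the expiration-date idea mentioned above (nothing here comes from Mayer-Schönberger’s own proposals beyond the general principle; the data model and ranking rule are invented), information could be stored with a user-chosen expiry date, with retrieval dropping expired items and making older ones slightly harder to reach:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Item:
    text: str
    stored_on: date
    expires_on: date  # chosen by the user at storage time, per the expiration-date idea

def retrieve(items, today=None):
    """Drop expired items and rank the rest newest-first, so older information is a bit harder to reach."""
    today = today or date.today()
    live = [item for item in items if item.expires_on >= today]
    return sorted(live, key=lambda item: today - item.stored_on)

store = [
    Item("holiday photo", stored_on=date(2009, 7, 1), expires_on=date(2010, 7, 1)),
    Item("tax record",    stored_on=date(2005, 1, 1), expires_on=date(2015, 1, 1)),
]
print([item.text for item in retrieve(store, today=date(2010, 9, 17))])  # the photo has expired; only the tax record remains
```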

Nice closing statement: “Let us remember to forget!”

]]>