governance – The Policy and Internet Blog
https://ensr.oii.ox.ac.uk
Understanding public policy online

Could Counterfactuals Explain Algorithmic Decisions Without Opening the Black Box?
https://ensr.oii.ox.ac.uk/could-counterfactuals-explain-algorithmic-decisions-without-opening-the-black-box/
Mon, 15 Jan 2018 10:37:21 +0000

The EU General Data Protection Regulation (GDPR) has sparked much discussion about the “right to explanation” for the algorithm-supported decisions made about us in our everyday lives. While there’s an obvious need for transparency in the automated decisions that are increasingly being made in areas like policing, education, healthcare and recruitment, explaining how these complex algorithmic decision-making systems arrive at any particular decision is a technically challenging problem—to put it mildly.

In their article “Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR”, which is forthcoming in the Harvard Journal of Law & Technology, Sandra Wachter, Brent Mittelstadt, and Chris Russell present the concept of “unconditional counterfactual explanations” as a novel type of explanation of automated decisions that could address many of these challenges. Counterfactual explanations describe the minimum conditions that would have led to an alternative decision (e.g. a bank loan being approved), without the need to describe the full logic of the algorithm.

Relying on counterfactual explanations as a means to help us act rather than merely to understand could help us gauge the scope and impact of automated decisions in our lives. They might also help bridge the gap between the interests of data subjects and data controllers, which might otherwise be a barrier to a legally binding right to explanation.

We caught up with the authors to explore the role of algorithms in our everyday lives, and how a “right to explanation” for decisions might be achievable in practice:

Ed: There’s a lot of discussion about algorithmic “black boxes” — where decisions are made about us, using data and algorithms about which we (and perhaps the operator) have no direct understanding. How prevalent are these systems?

Sandra: Basically, every decision that can be made by a human can now be made by an algorithm. Which can be a good thing. Algorithms (when we talk about artificial intelligence) are very good at spotting patterns and correlations that even experienced humans might miss, for example in predicting disease. They are also very cost efficient—they don’t get tired, and they don’t need holidays. This could help to cut costs, for example in healthcare.

Algorithms are also certainly more consistent than humans in making decisions. We have the famous example of judges varying the severity of their judgements depending on whether or not they’ve had lunch. That wouldn’t happen with an algorithm. That’s not to say algorithms are always going to make better decisions: but they do make more consistent ones. If the decision is bad, it’ll be distributed equally, but still be bad. Of course, in a certain way humans are also black boxes—we don’t understand what humans do either. But you can at least try to understand an algorithm: it can’t lie, for example.

Brent: In principle, any sector involving human decision-making could be prone to decision-making by algorithms. In practice, we already see algorithmic systems either making automated decisions or producing recommendations for human decision-makers in online search, advertising, shopping, medicine, criminal justice, etc. The information you consume online, the products you are recommended when shopping, the friends and contacts you are encouraged to engage with, even assessments of your likelihood to commit a crime in the immediate and long-term future—all of these tasks can currently be affected by algorithmic decision-making.

Ed: I can see that algorithmic decision-making could be faster and better than human decisions in many situations. Are there downsides?

Sandra: Simple algorithms that follow a basic decision tree (with parameters decided by people) can be easily understood. But we’re now also using much more complex systems like neural nets that act in a very unpredictable way, and that’s the problem. The system is also starting to become autonomous, rather than being under the full control of the operator. You will see the output, but not necessarily why it got there. This also happens with humans, of course: I could be told by a recruiter that my failure to land a job had nothing to do with my gender (even if it did); an algorithm, however, would not intentionally lie. But of course the algorithm might be biased against me if it’s trained on biased data—thereby reproducing the biases of our world.

We have seen that the COMPAS algorithm used by US judges to calculate the probability of re-offending when making sentencing and parole decisions is a major source of discrimination. Data provenance is massively important, and probably one of the reasons why we have biased decisions. We don’t necessarily know where the data comes from, and whether it’s accurate, complete, biased, etc. We need to have lots of standards in place to ensure that the data set is unbiased. Only then can the algorithm produce nondiscriminatory results.

A more fundamental problem with predictions is that you might never know what would have happened—as you’re just dealing with probabilities; with correlations in a population, rather than with causalities. Another problem is that algorithms might produce correct decisions, but not necessarily fair ones. We’ve been wrestling with the concept of fairness for centuries, without consensus. But lack of fairness is certainly something the system won’t correct by itself—that’s something that society must correct.

Brent: The biases and inequalities that exist in the real world and in real people can easily be transferred to algorithmic systems. Humans training learning systems can inadvertently or purposefully embed biases into the model, for example through labelling content as ‘offensive’ or ‘inoffensive’ based on personal taste. Once learned, these biases can spread at scale, exacerbating existing inequalities. Eliminating these biases can be very difficult, hence we currently see much research done on the measurement of fairness or detection of discrimination in algorithmic systems.

These systems can also be very difficult—if not impossible—to understand, for experts as well as the general public. We might traditionally expect to be able to question the reasoning of a human decision-maker, even if imperfectly, but the rationale of many complex algorithmic systems can be highly inaccessible to people affected by their decisions. These potential risks aren’t necessarily reasons to forego algorithmic decision-making altogether; rather, they can be seen as potential effects to be mitigated through other means (e.g. a loan programme weighted towards historically disadvantaged communities), or at least to be weighed against the potential benefits when choosing whether or not to adopt a system.

Ed: So it sounds like many algorithmic decisions could be too complex to “explain” to someone, even if a right to explanation became law. But you propose “counterfactual explanations” as an alternative— i.e. explaining to the subject what would have to change (e.g. about a job application) for a different decision to be arrived at. How does this simplify things?

Brent: So rather than trying to explain the entire rationale of a highly complex decision-making process, counterfactuals allow us to provide simple statements about what would have needed to be different about an individual’s situation to get a different, preferred outcome. You basically work from the outcome: you say “I am here; what is the minimum I need to do to get there?” By providing simple statements that are generally meaningful, and that reveal a small bit of the rationale of a decision, the individual has grounds to change their situation or contest the decision, regardless of their technical expertise. Understanding even a bit of how a decision is made is better than being told “sorry, you wouldn’t understand”—at least in terms of fostering trust in the system.

Sandra: And the nice thing about counterfactuals is that they work with highly complex systems, like neural nets. They don’t explain why something happened, but they explain what happened. And three things people might want to know are:

(1) What happened: why did I not get the loan (or get refused parole, etc.)?

(2) Information so I can contest the decision if I think it’s inaccurate or unfair.

(3) Even if the decision was accurate and fair, tell me what I can do to improve my chances in the future.

Machine learning and neural nets make use of so much information that individuals really have no oversight of what’s being processed, so it’s much easier to give someone an explanation of the key variables that affected the decision. With the counterfactual idea of a “close possible world” you give an indication of the minimal changes required to get what you actually want.
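The “close possible world” idea can be sketched in a few lines of code. Everything below is invented for illustration: a hand-written linear scoring rule stands in for the black-box model, and a simple one-dimensional search stands in for the distance-minimizing optimization the paper describes. A real system would involve many more variables and a proper optimizer.

```python
# Hypothetical linear loan-scoring rule. The weights are invented for
# illustration and are not taken from any real lender's model.
def loan_score(income_k, debt_k):
    return 0.5 * income_k - 1.2 * debt_k - 10.0  # approved iff score >= 0

def counterfactual_income(income_k, debt_k, step=0.1, max_steps=10_000):
    """Find the smallest income increase (in thousands) that flips a refusal."""
    for k in range(max_steps + 1):
        extra = k * step
        if loan_score(income_k + extra, debt_k) >= 0:
            return extra
    return None  # no counterfactual found within the search range

# "You were refused; with GBP 2,000 more income you would have been approved."
print(counterfactual_income(30, 5))  # → 2.0
```

The returned value is the counterfactual statement itself: it tells the applicant what minimal change would have produced the preferred outcome, without exposing the scoring rule.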

Ed: So would a series of counterfactuals (e.g. “over 18” “no prior convictions” “no debt”) essentially define a space within which a certain decision is likely to be reached? This decision space could presumably be graphed quite easily, to help people understand what factors will likely be important in reaching a decision?

Brent: This would only work for highly simplistic, linear models, which are not normally the type that confound human capacities for understanding. The complex systems that we refer to as ‘black boxes’ are highly dimensional and involve a multitude of (probabilistic) dependencies between variables that can’t be graphed simply. It may be the case that if I were aged between 35 and 40 with an income of £30,000, I would not get a loan. But I could be told that if I had an income of £35,000, I would have gotten the loan. I may then assume that an income over £35,000 guarantees me a loan in the future. But it may turn out that I would be refused a loan with an income above £40,000 because of a change in tax bracket. Non-linear relationships of this type can make it misleading to graph decision spaces. For simple linear models, such a graph may be a very good idea, but for black box systems it could, in fact, be highly misleading.
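Brent’s non-monotonicity point can be made concrete with a deliberately artificial rule. The thresholds below are invented; the point is only that a true counterfactual (“£35,000 would have been approved”) does not license the inference that more income always helps.

```python
# Invented non-monotonic decision rule standing in for a black-box model:
# approval depends on income in a way that is not a simple threshold.
def approved(income):
    return 32_000 <= income <= 38_000

assert not approved(30_000)  # refused at GBP 30,000
assert approved(35_000)      # the counterfactual: GBP 35,000 would be approved
assert not approved(41_000)  # ...yet an even higher income is refused again
```

A graph drawn from the single counterfactual at £35,000 would suggest a simple cut-off and would mislead the applicant about what happens above £38,000.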

Chris: As Brent says, we’re concerned with understanding complicated algorithms that don’t just use hard cut-offs based on binary features. To use your example, maybe a little bit of debt is acceptable, but it would increase your risk of default slightly, so the amount of money you need to earn would go up. Or maybe certain past convictions also only increase your risk of defaulting slightly, and can be compensated for with a higher salary. It’s not at all obvious how you could graph these complicated interdependencies over many variables together. This is why we settled on counterfactuals as a way to give people a direct and easy-to-understand path to move from the decision they got now, to a more favourable one at a later date.

Ed: But could a counterfactual approach just end up kicking the can down the road, if we know “how” a particular decision was reached, but not “why” the algorithm was weighted in such a way to produce that decision?

Brent: It depends what we mean by “why”. If this is “why” in the sense of, why was the system designed this way, to consider this type of data for this task, then we should be asking these questions while these systems are designed and deployed. Counterfactuals address decisions that have already been made, but can still reveal uncomfortable knowledge about a system’s design and functionality. So they can certainly inform “why” questions.

Sandra: Just to echo Brent, we don’t want to imply that asking the “why” is unimportant—I think it’s very important, and interpretability as a field has to be pursued, particularly if we’re using algorithms in highly sensitive areas. Even if we have the “what”, the “why” question is still necessary to ensure the safety of those systems.

Chris: And anyone who’s talked to a three-year-old knows there is an endless stream of “Why” questions that can be asked. But already, counterfactuals provide a major step forward in answering why, compared to previous approaches that were concerned with providing approximate descriptions of how algorithms make decisions—but not the “why” or the external facts leading to that decision. I think when judging the strength of an explanation, you also have to look at questions like “How easy is this to understand?” and “How does this help the person I’m explaining things to?” For me, counterfactuals are a more immediately useful explanation than something which explains where the weights came from. Even if you did know, what could you do with that information?

Ed: I guess the question of algorithmic decision making in society involves a hugely complex intersection of industry, research, and policy making? Are we in control of things?

Sandra: Artificial intelligence (and the technology supporting it) is an area where many sectors are now trying to work together, including in the crucial areas of fairness, transparency and accountability of algorithmic decision-making. I feel at the moment we see a very multi-stakeholder approach, and I hope that continues in the future. We can see for example that industry is very concerned with it—the Partnership on AI is addressing these topics and trying to come up with a set of industry guidelines, recognising the responsibilities inherent in producing these systems. There are also lots of data scientists (e.g. at the OII and the Turing Institute) working on these questions. Policy-makers around the world (e.g. UK, EU, US, China) are preparing their countries for the AI future, so it’s on everybody’s mind at the moment. It’s an extremely important topic.

Law and ethics obviously have an important role to play. The opacity and unpredictability of AI, and its potentially discriminatory nature, require that we think about the legal and ethical implications very early on. That starts with educating the coding community, and ensuring diversity. At the same time, it’s important to have an interdisciplinary approach. At the moment we’re focusing a bit too much on the STEM subjects; there’s a lot of funding going to those areas (which makes sense, obviously), but the social sciences are currently a bit neglected despite the major role they play in recognising things like discrimination and bias, which you might not recognise from just looking at code.

Brent: Yes—and we’ll need much greater interaction and collaboration between these sectors to stay ‘in control’ of things, so to speak. Policy always has a tendency to lag behind technological developments; the challenge here is to stay close enough to the curve to prevent major issues from arising. The potential for algorithms to transform society is massive, so ensuring a quicker and more reflexive relationship between these sectors than normal is absolutely critical.

Read the full article: Sandra Wachter, Brent Mittelstadt, Chris Russell (2018) Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR. Harvard Journal of Law & Technology (Forthcoming).

This work was supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1.


Sandra Wachter, Brent Mittelstadt and Chris Russell were talking to blog editor David Sutcliffe.

Digital platforms are governing systems — so it’s time we examined them in more detail
https://ensr.oii.ox.ac.uk/digital-platforms-are-governing-systems-so-its-time-we-examined-them-in-more-detail/
Tue, 29 Aug 2017 09:49:29 +0000

Digital platforms are not just software-based media, they are governing systems that control, interact, and accumulate. As surfaces on which social action takes place, digital platforms mediate — and to a considerable extent, dictate — economic relationships and social action. By automating market exchanges they solidify relationships into material infrastructure, lend a degree of immutability and traceability to engagements, and turn what previously would have been informal exchanges into much more formalized rules.

In his Policy & Internet article “Platform Logic: An Interdisciplinary Approach to the Platform-based Economy“, Jonas Andersson Schwarz argues that digital platforms enact a twofold logic of micro-level technocentric control and macro-level geopolitical domination, while supporting a range of generative outcomes between the two levels. Technology isn’t ‘neutral’, and what designers want may clash with what users want: so it’s important that we take a multi-perspective view of the role of digital platforms in contemporary society. For example, if we only consider the technical, we’ll notice modularity, compatibility, compliance, flexibility, mutual subsistence, and cross-subsidization. By contrast, if we consider ownership and organizational control, we’ll observe issues of consolidation, privatization, enclosure, financialization and protectionism.

When focusing on local interactions (e.g. with users), the digital nature of platforms is seen to strongly determine structure; essentially representing an absolute or totalitarian form of control. When we focus on geopolitical power arrangements in the “platform society”, patterns can be observed that are worryingly suggestive of market dominance, colonization, and consolidation. Concerns have been expressed that these (overwhelmingly US-biased) platform giants are not only enacting hegemony, but are on a road to “usurpation through tech — a worry that these companies could grow so large and become so deeply entrenched in world economies that they could effectively make their own laws”.

We caught up with Jonas to discuss his findings:

Ed.: You say that there are lots of different ways of considering “platforms”: what (briefly) are some of these different approaches, and why should they be linked up a bit? Certainly the conference your paper was presented at (“IPP2016: The Platform Society”) seemed to have struck an incredibly rich seam in this topic, and I think showed the value of approaching an issue like digital platforms from multiple disciplinary angles.

Jonas: In my article I’ve chosen to exclusively theorize *digital* platforms, which of course narrows down the meaning of the concept to begin with. There are different interpretations as to what actually constitutes a digital platform. There has to be an element of proprietary control over the surface on which interaction takes place, for example. Free software and open protocols, while ubiquitous digital tools, need not necessarily be considered platforms, whereas proprietary operating systems should.

Within contemporary media studies there is considerable divergence as to whether one should define so-called over-the-top streaming services as platforms or not. Netflix, for example: in a strict technical sense, it’s not a platform for self-publishing and sharing in the way that YouTube is—but, in an economic sense, Netflix definitely enacts a multi-sided market, which is one of the key components of what a platform does, economically speaking. Since platforms crystallize economic relationships into material infrastructure, conceptual conflation of this kind is unavoidable—different scholars tend to put different emphasis on different things.

Hence, when it comes to normative concerns, there are numerous approaches, ranging from largely apolitical computer science and design management studies, brandishing a largely optimistic view where blithe conceptions of innovation and generativity are emphasized, to critical approaches in political economy, where things like market dominance and consolidation are emphasized.

In my article, I try to relate to both of these schools of thought, by noting that they each are normative — albeit in vastly different ways — and by noting that not only do they each have a somewhat different focus, they actually bring different research objects to the table: usually, “efficacy” in purely technical interaction design is something altogether different from “efficacy” in matters of societal power relations, for example. While both notions can be said to be true, their respective validity might differ, depending on which matter of concern we are dealing with in each respective inquiry.

Ed.: You note in your article that platforms have a “twofold logic of micro-level technocentric control and macro-level geopolitical domination” .. which sounds quite a lot like what government does. Do you think “platform as government” is a useful way to think about this, i.e. are there any analogies?

Jonas: Sure, especially if we understand how platforms enact governance in really quite rigid forms. Platforms literally transform market relations into infrastructure. Compared to informal or spontaneous social structures, where there’s a lot of elasticity and ambiguity — put simply, giving-and-taking — automated digital infrastructure operates by unambiguous implementations of computer code. As Lawrence Lessig and others have argued, perhaps the most dangerous aspect of this is when digital infrastructures implement highly centralized modes of governance, often literally only having one point of command-and-control. The platform owner flicks a switch, and then certain listings and settings are allowed or disallowed, and so on…

This should worry any liberal, since it is a mode of governance that is totalitarian by nature; it runs counter to any democratic, liberal notion of spontaneous, emergent civic action. Funnily, a lot of Silicon Valley ideology appears to be indebted to theorists like Friedrich von Hayek, who observed a calculative rationality emerging out of heterogeneous, spontaneous market activity — but at the same time, Hayek’s call to arms was in itself a reaction to central planning of the very kind that I think digital platforms, when designed in too rigid a way, risk erecting.

Ed.: Is there a sense (in hindsight) that these platforms are basically the logical outcome of the ruthless pursuit of market efficiency, i.e. enabled by digital technologies? But is there also a danger that they could lock out equitable development and innovation if they become too powerful (e.g. leading to worries about market concentration and anti-trust issues)? At one point you ask: “Why is society collectively acquiescing to this development?” .. why do you think that is?

Jonas: The governance aspect above rests on a kind of managerialist fantasy of perfect calculative rationality that is conferred upon the platform as an allegedly neutral agent or intermediary; scholars like Frank Pasquale have begun to unravel some of the rather dodgy ideology underpinning this informational idealism, or “dataism,” as José van Dijck calls it. However, it’s important to note how much of this risk for overly rigid structures comes down to sheer design implementation; I truly believe there is scope for more democratically adaptive, benign platforms, but that can only be achieved either through real incentives at the design stage (e.g. Wikipedia, and the ways in which its core business idea involves quality control by design), or through ex-post regulation, forcing platform owners to consider certain societally desirable consequences.

Ed.: A lot of this discussion seems to be based on control. Is there a general theory of “control” — i.e. are these companies creating systems of user management and control that follow similar conceptual / theoretical lines, or just doing “what seems right” to them in their own particular contexts?

Jonas: Down the stack, there is always a binary logic of control at play in any digital infrastructure. Still, on a higher level in the stack, as more complexity is added, we should expect to see more non-linear, adaptive functionality that can handle complexity and context. And where computational logic falls short, we should demand tolerable degrees of human moderation, more than there is now, to be sure. Regulators are going this way when it comes to things like Facebook and hate speech, and I think there is considerable consumer demand for it, as when disputes arise on Airbnb and similar markets.

Ed.: What do you think are the main worries with the way things are going with these mega-platforms, i.e. the things that policy-makers should hopefully be concentrating on, and looking out for?

Jonas: Policymakers are beginning to realize the unexpected synergies that big data gives rise to. As The Economist recently pointed out, once you control portable smartphones, you’ll have instant geopositioning data on a massive scale — you’ll want to own and control map services because you’ll then also have data on car traffic in real time, which means you’d be likely to have the transportation market cornered, self-driving cars especially… If one takes an agnostic, heterodox view on companies like Alphabet, some of their far-flung projects actually begin to make sense, if synergy is taken into consideration. For automated systems, the more detailed the data becomes, the better the system will perform; vast pools of data get to act as protective moats.

One solution that The Economist suggests, and that has been championed for years by internet veteran Doc Searls, is to press for vastly increased transparency in terms of user data, so that individuals can improve their own sovereignty, control their relationships with platform companies, and thereby collectively demand that the companies in question disclose the value of this data — which would, by extension, improve signalling of the actual value of the company itself. If today’s platform companies are reluctant to do this, is that because it would perhaps reveal some of them to be less valuable than what they are held out to be?

Another potentially useful, proactive measure that I describe in my article is the establishment of vital competitors or supplements to the services that so many of us have gotten used to being provided by platform giants. Instead of Facebook monopolizing identity management online, which sadly seems to have become the norm in some countries, look to the Scandinavian example of BankID, which is a platform service run by a regional bank consortium, offering a much safer and more nationally controllable identity management solution.

Alternative platform services like these could be built by private companies as well as state-funded ones; alongside privately owned consortia of this kind, it would be interesting to see innovation within the public service remit, exploring how that concept could be re-thought in an era of platform capitalism.


Read the full article: Jonas Andersson Schwarz (2017) Platform Logic: An Interdisciplinary Approach to the Platform-based Economy. Policy & Internet DOI: 10.1002/poi3.159.

Jonas Andersson Schwarz was talking to blog editor David Sutcliffe.

Could data pay for global development? Introducing data financing for global good
https://ensr.oii.ox.ac.uk/could-data-pay-for-global-development-introducing-data-financing-for-global-good/
Tue, 03 Jan 2017 15:12:28 +0000

“If data is the new oil, then why aren’t we taxing it like we tax oil?” That was the essence of the provocative brief that set in motion our recent 6-month research project funded by the Rockefeller Foundation. The results are detailed in the new report: Data Financing for Global Good: A Feasibility Study.

The parallels between data and oil break down quickly once you start considering practicalities such as measuring and valuing data. Data is, after all, a highly heterogeneous good whose value is context-specific — very different from a commodity such as oil that can be measured and valued by the barrel. But even if the value of data can’t simply be metered and taxed, are there other ways in which the data economy could be more directly aligned with social good?

Data-intensive industries already contribute to social good by producing useful services and paying taxes on their profits (though some pay regrettably little). But are there ways in which the data economy could directly finance global causes such as climate change prevention, poverty alleviation and infrastructure? Such mechanisms should not just arbitrarily siphon off money from industry, but also contribute value back to the data economy by correcting market failures and investment gaps. The potential impacts are significant: estimates value the data economy at around seven percent of GDP in rich industrialised countries, or around ten times the value of the United Nations development aid spending goal of 0.7 percent of national income.

Here’s where “data financing” comes in. It’s a term we coined that’s based on innovative financing, a concept increasingly used in the philanthropic world. Innovative financing refers to initiatives that seek to unlock private capital for the sake of global development and socially beneficial projects, which face substantial funding gaps globally. Since government funding towards addressing global challenges is not growing, the proponents of innovative financing are asking how else these critical causes could be funded. An existing example of innovative financing is the UNITAID air ticket levy used to advance global health.

Data financing, then, is a subset of innovative financing that refers to mechanisms that attempt to redirect a slice of the value created in the global data economy towards broader social objectives. For instance, a Global Internet Subsidy funded by large Internet companies could help to educate and build infrastructure in the world’s marginalized regions, in the long run also growing the market for Internet companies’ services. But such a model would need well-designed governance mechanisms to avoid the pitfalls of current Internet subsidization initiatives, which risk failing because of well-founded concerns that they further entrench Internet giants’ dominance over emerging digital markets.

Besides the Global Internet Subsidy, other data financing models examined in the report are a Privacy Insurance for personal data processing, a Shared Knowledge Duty payable by businesses profiting from open and public data, and an Attention Levy to disincentivise intrusive marketing. Many of these have been considered before, and they come with significant economic, legal, political, and technical challenges. Our report considers these challenges in turn, assesses the feasibility of potential solutions, and presents rough estimates of potential financial impacts.

Some of the prevailing business models of the data economy — provoking users’ attention, extracting their personal information, and monetizing it through advertising — are more or less taken for granted today. But they are something of a historical accident, an unanticipated corollary to some of the technical and political decisions made early in the Internet’s design. Certainly they are not any inherent feature of data as such. Although our report focuses on the technical, legal, and political practicalities of the idea of data financing, it also invites a careful reader to question some of the accepted truths on how a data-intensive economy could be organized, and what business models might be possible.

Read the report: Lehdonvirta, V., Mittelstadt, B. D., Taylor, G., Lu, Y. Y., Kadikov, A., and Margetts, H. (2016) Data Financing for Global Good: A Feasibility Study. University of Oxford: Oxford Internet Institute.

The blockchain paradox: Why distributed ledger technologies may do little to transform the economy
https://ensr.oii.ox.ac.uk/the-blockchain-paradox-why-distributed-ledger-technologies-may-do-little-to-transform-the-economy/
Mon, 21 Nov 2016 17:08:34 +0000

Bitcoin’s underlying technology, the blockchain, is widely expected to find applications far beyond digital payments. It is celebrated as a “paradigm shift in the very idea of economic organization”. But the OII’s Professor Vili Lehdonvirta contends that such revolutionary potentials may be undermined by a fundamental paradox that has to do with the governance of the technology.



I recently gave a talk at the Alan Turing Institute (ATI) under the title The Problem of Governance in Distributed Ledger Technologies. The starting point of my talk was that it is frequently posited that blockchain technologies will “revolutionize industries that rely on digital record keeping”, such as financial services and government. In the talk I applied elementary institutional economics to examine what blockchain technologies really do in terms of economic organization, and what problems this gives rise to. In this essay I present an abbreviated version of the argument. Alternatively you can watch a video of the talk below.


[youtube https://www.youtube.com/watch?v=eNrzE_UfkTw&w=640&h=360]


First, it is necessary to note that there is quite a bit of confusion as to what exactly is meant by a blockchain. When people talk about “the” blockchain, they often refer to the Bitcoin blockchain, an ongoing ledger of transactions started in 2009 and maintained by the approximately 5,000 computers that form the Bitcoin peer-to-peer network. The term blockchain can also be used to refer to other instances or forks of the same technology (“a” blockchain). The term “distributed ledger technology” (DLT) has also gained currency recently as a more general label for related technologies.

In each case, I think it is fair to say that the reason that so many people are so excited about blockchain today is not the technical features as such. In terms of performance metrics like transactions per second, existing blockchain technologies are in many ways inferior to more conventional technologies. This is frequently illustrated with the point that the Bitcoin network is limited by design to process at most approximately seven transactions per second, whereas the Visa payment network has a peak capacity of 56,000 transactions per second. Other implementations may have better performance, and on some other metrics blockchain technologies can perhaps beat more conventional technologies. But technical performance is not why so many people think blockchain is revolutionary and paradigm-shifting.

The reason that blockchain is making waves is that it promises to change the very way economies are organized: to eliminate centralized third parties. Let me explain what this means in theoretical terms. Many economic transactions, such as long-distance trade, can be modeled as a game of Prisoners’ Dilemma. The buyer and the seller can either cooperate (send the shipment/payment as promised) or defect (not send the shipment/payment). If the buyer and the seller don’t trust each other, then the equilibrium solution is that neither player cooperates and no trade takes place. This is known as the fundamental problem of cooperation.
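The no-trade equilibrium described above can be checked mechanically with a small payoff matrix. The numbers below are illustrative assumptions on my part (they only need to satisfy the usual Prisoners' Dilemma ordering: cheating a cooperator pays best, mutual trade second, mutual distrust third, being cheated worst); they are not figures from the talk:

```python
# Sketch of the one-shot trade game between a buyer and a seller.
# Illustrative payoffs (assumed): 5 = cheat a cooperator, 3 = successful
# trade, 1 = no trade, 0 = being cheated.
from itertools import product

# payoffs[(row_move, col_move)] = (row player's payoff, column player's payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def is_nash(row, col):
    """True if neither player can gain by unilaterally changing their move."""
    r_pay, c_pay = payoffs[(row, col)]
    row_ok = all(payoffs[(alt, col)][0] <= r_pay for alt in ("cooperate", "defect"))
    col_ok = all(payoffs[(row, alt)][1] <= c_pay for alt in ("cooperate", "defect"))
    return row_ok and col_ok

equilibria = [m for m in product(("cooperate", "defect"), repeat=2) if is_nash(*m)]
print(equilibria)  # [('defect', 'defect')]
```

Because defecting pays more than cooperating whatever the other side does, mutual defection is the only equilibrium of the one-shot game, and no trade takes place.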

There are several classic solutions to the problem of cooperation. One is reputation. In a community of traders where members repeatedly engage in exchange, any trader who defects (fails to deliver on a promise) will gain a negative reputation, and other traders will refuse to trade with them out of self-interest. This threat of exclusion from the community acts as a deterrent against defection, and the equilibrium under certain conditions becomes that everyone will cooperate.

Reputation is only a limited solution, however. It only works within communities where reputational information spreads effectively, and traders may still defect if the payoff from doing so is greater than the loss of future trade. Modern large-scale market economies where people trade with strangers on a daily basis are only possible because of another solution: third-party enforcement. In particular, this means state-enforced contracts and bills of exchange enforced by banks. These third parties in essence force parties to cooperate and to follow through with their promises.

Besides trade, another example of the problem of cooperation is currency. Currency can be modeled as a multiplayer game of Prisoners’ Dilemma. Traders collectively have an interest in maintaining a stable currency, because it acts as a lubricant to trade. But each trader individually has an interest in debasing the currency, in the sense of paying with fake money (what in blockchain-speak is referred to as double spending). Again the classic solution to this dilemma is third-party enforcement: the state polices metal currencies and punishes counterfeiters, and banks control ledgers and prevent people from spending money they don’t have.

So third-party enforcement is the dominant model of economic organization in today’s market economies. But it’s not without its problems. The enforcer is in a powerful position in relation to the enforced: banks could extract exorbitant fees, and states could abuse their power by debasing the currency, illegitimately freezing assets, or enforcing contracts in unfair ways. One classic solution to the problems of third-party enforcement is competition. Bank fees are kept in check by competition: the enforced can switch to another enforcer if the fees get excessive.

But competition is not always a viable solution: there is a very high cost to switching to another state (i.e. becoming a refugee) if your state starts to abuse its power. Another classic solution is accountability: democratic institutions that try to ensure the enforcer acts in the interest of the enforced. For instance, the interbank payment messaging network SWIFT is a cooperative society owned by its member banks. The members elect a Board of Directors that is the highest decision making body in the organization. This way, they attempt to ensure that SWIFT does not try to extract excessive fees from the member banks or abuse its power against them. Still, even accountability is not without its problems, since it comes with the politics of trying to reconcile different members’ diverging interests as best as possible.

Into this picture enters blockchain: a technology where third-party enforcers are replaced with a distributed network that enforces the rules. It can enforce contracts, prevent double spending, and cap the size of the money pool, all without participants having to cede power to any particular third party who might abuse the power. No rent-seeking, no abuses of power, no politics — blockchain technologies can be used to create “math-based money” and “unstoppable” contracts that are enforced with the impartiality of a machine instead of the imperfect and capricious human bureaucracy of a state or a bank. This is why so many people are so excited about blockchain: its supposed ability to change economic organization in a way that transforms dominant relationships of power.

Unfortunately this turns out to be a naive understanding of blockchain, and the reality is inevitably less exciting. Let me explain why. In economic organization, we must distinguish between enforcing rules and making rules. Laws are rules enforced by state bureaucracy and made by a legislature. The SWIFT Protocol is a set of rules enforced by SWIFTNet (a centralized computational system) and made, ultimately, by SWIFT’s Board of Directors. The Bitcoin Protocol is a set of rules enforced by the Bitcoin Network (a distributed network of computers) made by — whom exactly? Who makes the rules matters at least as much as who enforces them. Blockchain technology may provide for completely impartial rule-enforcement, but that is of little comfort if the rules themselves are changed. This rule-making is what we refer to as governance.

Using Bitcoin as an example, the initial versions of the protocol (i.e. the rules) were written by the pseudonymous Satoshi Nakamoto, and later versions are released by a core development team. The development team is not autocratic: a complex set of social and technical entanglements means that other people are also influential in how Bitcoin’s rules are set; in particular, so-called mining pools, headed by a handful of individuals, are very influential. The point here is not to attempt to pick apart Bitcoin’s political order; the point is that Bitcoin has not in any sense eliminated human politics; humans are still very much in charge of setting the rules that the network enforces.

There is, however, no formal process for how governance works in Bitcoin, because for a very long time these politics were not explicitly recognized, and many people don’t recognize them, preferring instead the idea that Bitcoin is purely “math-based money” and that all the developers are doing is purely apolitical plumbing work. But what has started to make this position untenable and Bitcoin’s politics visible is the so-called “block size debate” — a big disagreement between factions of the Bitcoin community over the future direction of the rules. Different stakeholders have different interests in the matter, and in the absence of a robust governance mechanism that could reconcile between the interests, this has resulted in open “warfare” between the camps over social media and discussion forums.

Will competition solve the issue? Multiple “forks” of the Bitcoin protocol have emerged, each with slightly different rules. But network economics teaches us that competition does not work well at all in the presence of strong network effects: everyone prefers to be in the network where other people are, even if its rules are not exactly what they would prefer. Network markets tend to tip in favour of the largest network. Every fork/split diminishes the total value of the system, and those on the losing side of a fork may eventually find their assets worthless.

If competition doesn’t work, this leaves us with accountability. There is no obvious path by which Bitcoin could develop accountable governance institutions. But other blockchain projects, especially those that are gaining some kind of commercial or public sector legitimacy, are designed from the ground up with some level of accountable governance. For instance, R3 is a firm that develops blockchain technology for use in the financial services industry. It has enrolled a consortium of banks to guide the effort, and its documents talk about the “mandate” it has from its “member banks”. Its governance model thus sounds a lot like the beginnings of something like SWIFT. Another example is RSCoin, designed by my ATI colleagues George Danezis and Sarah Meiklejohn, which is intended to be governed by a central bank.

Regardless of the model, my point is that blockchain technologies cannot escape the problem of governance. Whether they recognize it or not, they face the same governance issues as conventional third-party enforcers. You can use technologies to potentially enhance the processes of governance (e.g. transparency, online deliberation, e-voting), but you can’t engineer away governance as such. All this leads me to wonder how revolutionary blockchain technologies really are. If you still rely on a Board of Directors or similar body to make it work, how much has economic organization really changed?

And this leads me to my final point, a provocation: once you address the problem of governance, you no longer need blockchain; you can just as well use conventional technology that assumes a trusted central party to enforce the rules, because you’re already trusting somebody (or some organization/process) to make the rules. I call this blockchain’s ‘governance paradox’: once you master it, you no longer need it. Indeed, R3’s design seems to have something called “uniqueness services”, which look a lot like trusted third-party enforcers (though this isn’t clear from the white paper). RSCoin likewise relies entirely on trusted third parties. The differences from conventional technology are no longer that apparent.

Perhaps blockchain technologies can still deliver better technical performance, like better availability and data integrity. But it’s not clear to me what real changes to economic organization and power relations they could bring about. I’m very happy to be challenged on this, if you can point out a place in my reasoning where I’ve made an error. Understanding grows via debate. But for the time being, I can’t help but be very skeptical of the claims that blockchain will fundamentally transform the economy or government.

The governance of DLTs is also examined in this report chapter that I coauthored earlier this year:

Lehdonvirta, V. & Robleh, A. (2016) Governance and Regulation. In: M. Walport (ed.), Distributed Ledger Technology: Beyond Blockchain. London: UK Government Office for Science, pp. 40-45.

]]>
New Voluntary Code: Guidance for Sharing Data Between Organisations https://ensr.oii.ox.ac.uk/new-voluntary-code-guidance-for-sharing-data-between-organisations/ Fri, 08 Jan 2016 10:40:37 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3540 Many organisations are coming up with their own internal policy and guidelines for data sharing. However, for data sharing between organisations to be straightforward, there needs to be a common understanding of basic policy and practice. During her time as an OII Visiting Associate, Alison Holt developed a pragmatic solution in the form of a Voluntary Code, anchored in the developing ISO standards for the Governance of Data. She discusses the voluntary code, and the need to provide urgent advice to organisations struggling with policy for sharing data.

Collecting, storing and distributing digital data is significantly easier and cheaper now than ever before, in line with predictions from Moore, Kryder and Gilder. Organisations are incentivised to collect large volumes of data with the hope of unleashing new business opportunities or maybe even new businesses. Consider the likes of Uber, Netflix, and Airbnb, and the other data mongers who have built services based solely on digital assets.

The use of this new abundant data will continue to disrupt traditional business models for years to come, and there is no doubt that these large data volumes can provide value. However, they also bring associated risks (such as unplanned disclosure and hacks) and they come with constraints (for example in the form of privacy or data protection legislation). Hardly a week goes by without a data breach hitting the headlines. Even if your telecommunications provider didn’t inadvertently share your bank account and sort code with hackers, and your child wasn’t one of the hundreds of thousands of children whose birthdays, names, and photos were exposed by a smart toy company, you might still be wondering exactly how your data is being looked after by the banks, schools, clinics, utility companies, local authorities and government departments that are so quick to collect your digital details.

Then there are the companies who have invited you to sign away the rights to your data and possibly your privacy too – the ones that ask you to sign the Terms and Conditions for access to a particular service (such as a music or online shopping service) or have asked you for access to your photos. And possibly you are one of the “worried well” who wear or carry a device that collects your health data and sends it back to storage in a faraway country, for analysis.

So unless you live in a lead-lined concrete bunker without any access to internet-connected devices, and never pass by webcams or sensors or use public transport or public services, your data is being collected and shared. And for the majority of the time, you benefit from this enormously. The bus stop tells you exactly when the next bus is coming, you have easy access to services and entertainment fitted very well to your needs, and you can do most of your banking and utility transactions online in the peace and quiet of your own home. Beyond you as an individual, there are organisations “out there” sharing your data to provide you better healthcare, education, smarter city services and secure and efficient financial services, and generally matching the demand for services with the people needing them.

So we most likely all have data that is being shared and it is generally in our interest to share it, but how can we trust the organisations responsible for sharing our data? As an organisation, how can I know that my partner and supplier organisations are taking care of my client and product information?

Organisations taking these issues seriously are coming up with their own internal policy and guidelines. However, for data sharing between organisations to be straightforward, there needs to be a common understanding of basic policy and practice. During my time as a visiting associate at the Oxford Internet Institute, University of Oxford, I have developed a pragmatic solution in the form of a Voluntary Code. The Code has been produced using the guidelines for voluntary code development produced by the Office of Community Affairs, Industry Canada. More importantly, the Code is anchored in the developing ISO standards for the Governance of Data (the 38505 series). These standards apply the governance principles and model from the 38500 standard and introduce the concept of a data accountability map, highlighting six focus areas for a governing body to apply governance. The early-stage standard suggests considering the aspects of Value, Risk and Constraint for each area, to determine what practice and policy should be applied to maximise the value from organisational data, whilst applying constraints as set by legislation and local policy, and minimising risk.

I am Head of the New Zealand delegation to the ISO group developing IT Service Management and IT Governance standards, SC40, and am leading the development of the 38505 series of Governance of Data standards, working with a talented editorial team of industry and standards experts from Australia, China and the Netherlands. I am confident that the robust ISO consensus-led process involving subject matter experts from around the world will result in the publication of best practice guidance for the governance of data, presented in a format that will have relevance and acceptance internationally.

In the meantime, however, I see a need to provide urgent advice to organisations struggling with policy for sharing data. I have used my time at Oxford to interview policy, ethics, smart city, open data, health informatics, education, cyber security and social science experts and users, owners and curators of large data sets, and have come up with a “Voluntary Code for Data Sharing”. The Code takes three areas from the data accountability map in the developing ISO standard 38505-1; namely Collect, Store, Distribute, and applies the aspects of Value, Risk and Constraint to provide seven maxims for sharing data. To assist with adoption and compliance, the Code provides references to best practice and examples. As the ISO standards for the Governance of Data develop, the Code will be updated. New examples of good practice will be added as they come to light.

[A permanent home for the voluntary code is currently being organised; please email me in the meantime if you are interested in it: Alison.holt@longitude174.com]

The Code is deliberately short and succinct, but it does provide links for those who need to read more to understand the underpinning practices and standards, and those tasked with implementing organisational data policy and practice. It cannot guarantee good outcomes. With new security threats arising daily, nobody can fully guarantee the safety of your information. However, if you deal with an organisation that is compliant with the Voluntary Code, then you can at least have assurance that the organisation has considered how it is using your data now and how it might want to reuse your data in the future, how and where your data will be stored, and then finally how your data will be distributed or discarded. And that’s a good start!


Alison Holt was an OII Academic Visitor in late 2015. She is an internationally acclaimed expert in the Governance of Information Technology and Data, heading up the New Zealand delegations to the international standards committees for IT Governance and Service Management (SC40) and Software and Systems Engineering (SC7). The British Computer Society published Alison’s first book on the Governance of IT in 2013.

]]>
Crowdsourcing ideas as an emerging form of multistakeholder participation in Internet governance https://ensr.oii.ox.ac.uk/crowdsourcing-ideas-as-an-emerging-form-of-multistakeholder-participation-in-internet-governance/ Wed, 21 Oct 2015 11:59:56 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3445 What are the linkages between multistakeholder governance and crowdsourcing? Both are new — trendy, if you will — approaches to governance premised on the potential of collective wisdom, bringing together diverse groups in policy-shaping processes. Their interlinkage has remained underexplored so far. Our article recently published in Policy and Internet sought to investigate this in the context of Internet governance, in order to assess the extent to which crowdsourcing represents an emerging opportunity of participation in global public policymaking.

We examined two recent Internet governance initiatives which incorporated crowdsourcing with mixed results: the first one, the ICANN Strategy Panel on Multistakeholder Innovation, received only limited support from the online community; the second, NETmundial, had a significant number of online inputs from global stakeholders who had the opportunity to engage using a platform for political participation specifically set up for the drafting of the outcome document. The study builds on these two cases to evaluate how crowdsourcing was used as a form of public consultation aimed at bringing the online voice of the “undefined many” (as opposed to the “elected few”) into Internet governance processes.

From the two cases, it emerged that the design of the consultation processes conducted via crowdsourcing platforms is key in overcoming barriers of participation. For instance, in the NETmundial process, the ability to submit comments and participate remotely via www.netmundial.br attracted inputs from all over the world very early on, starting in the preparatory phase of the meeting. In addition, substantial public engagement was obtained from the local community in the drafting of the outcome document, through a platform for political participation — www.participa.br — that gathered comments in Portuguese. In contrast, the outreach efforts of the ICANN Strategy Panel on Multistakeholder Innovation remained limited; the crowdsourcing platform they used only gathered input (exclusively in English) from a small group of people, insufficient to attribute to online public input a significant role in the reform of ICANN’s multistakeholder processes.

Second, questions around how crowdsourcing should and could be used effectively to enhance the legitimacy of decision-making processes in Internet governance remain unanswered. A proper institutional setting that recognizes a role for online multistakeholder participation is yet to be defined; in its absence, the initiatives we examined present a set of procedural limitations. For instance, in the NETmundial case, the Executive Multistakeholder Committee, in charge of drafting an outcome document to be discussed during the meeting based on the analysis of online contributions, favoured more “mainstream” and “uncontroversial” contributions. Additionally, there were no online deliberation mechanisms for the different propositions put forward by a High-Level Multistakeholder Committee, which commented on the initial draft.

With regard to ICANN, online consultations have been used on a regular basis since its creation in 1998. Its target audience is the “ICANN community,” a group of stakeholders that volunteer their time and expertise to improve policy processes within the organization. Despite the effort, initiatives such as the 2000 global election for the new At-Large Directors have revealed difficulties in reaching as broad an audience as intended. Our study discusses some of the obstacles to the implementation of this ambitious initiative, including limited information and awareness about the At-Large elections, and low Internet access and use in most developing countries, particularly in Africa and Latin America.

Third, there is a need for clear rules regarding the way in which contributions are evaluated in crowdsourcing efforts. When the deliberating body (or committee) is free to disregard inputs without providing any motivation, it triggers concerns about the broader transnational governance framework in which we operate, as there is no election of those few who end up determining which parts of the contributions should be reflected in the outcome document. To avoid the agency problem arising from the lack of accountability over the incorporation of inputs, it is important that crowdsourcing attempts pay particular attention to designing a clear and comprehensive assessment process.

The “wisdom of the crowd” has traditionally been explored in developing the Internet, yet it remains a contested ground when it comes to its governance. In multistakeholder set-ups, the diversity of voices and the collection of ideas and input from as many actors as possible — via online means — represent a desideratum, rather than a reality. In our exploration of empowerment through online crowdsourcing for institutional reform, we identify three fundamental preconditions: first, the existence of sufficient community interest, able to leverage wide expertise beyond a purely technical discussion; second, the existence of procedures for the collection and screening of inputs, streamlining certain ideas considered for implementation; and third, commitment to institutionalizing the procedures, especially by clearly defining the rules according to which feedback is incorporated and circumvention is avoided.

Read the full paper: Radu, R., Zingales, N. and Calandro, E. (2015), Crowdsourcing Ideas as an Emerging Form of Multistakeholder Participation in Internet Governance. Policy & Internet, 7: 362–382. doi: 10.1002/poi3.99


Roxana Radu is a PhD candidate in International Relations at the Graduate Institute of International and Development Studies in Geneva and a fellow at the Center for Media, Data and Society, Central European University (Budapest). Her current research explores the negotiation of internet policy-making in global and regional frameworks.

Nicolo Zingales is an assistant professor at Tilburg law school, a senior member of the Tilburg Law and Economics Center (TILEC), and a research associate of the Tilburg Institute for Law, Technology and Society (TILT). He researches on various aspects of Internet governance and regulation, including multistakeholder processes, data-driven innovation and the role of online intermediaries.

Enrico Calandro (PhD) is a senior research fellow at Research ICT Africa, an ICT policy think-tank based in Cape Town. His academic research focuses on accessibility and affordability of ICT, broadband policy, and internet governance issues from an African perspective.

]]>
Uber and Airbnb make the rules now — but to whose benefit? https://ensr.oii.ox.ac.uk/uber-and-airbnb-make-the-rules-now-but-to-whose-benefit/ Mon, 27 Jul 2015 07:12:20 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3319
The “Airbnb Law” was signed by Mayor Ed Lee in October 2014 at San Francisco City Hall, legalizing short-term rentals in SF with many conditions. Image of protesters by Kevin Krejci (Flickr).

Ride-hailing app Uber is close to replacing government-licensed taxis in some cities, while Airbnb’s accommodation rental platform has become a serious competitor to government-regulated hotel markets. Many other apps and platforms are trying to do the same in other sectors of the economy. In my previous post, I argued that platforms can be viewed in social science terms as economic institutions that provide infrastructures necessary for markets to thrive. I explained how the natural selection theory of institutional change suggests that people are migrating from state institutions to these new code-based institutions because they provide a more efficient environment for doing business. In this article, I will discuss some of the problems with this theory, and outline a more nuanced theory of institutional change that suggests that platforms’ effects on society will be complex and influence different people in different ways.

Economic sociologists like Neil Fligstein have pointed out that not everyone is as free to choose the means through which they conduct their trade. For example, if buyers in a market switch to new institutions, sellers may have little choice but to follow, even if the new institutions leave them worse off than the old ones did. Even if taxi drivers don’t like Uber’s rules, they may find that there is little business to be had outside the platform, and switch anyway. In the end, the choice of institutions can boil down to power. Economists have shown that even a small group of participants with enough market power — like corporate buyers — may be able to force a whole market to tip in favour of particular institutions. Uber offers a special solution for corporate clients, though I don’t know if this has played any part in the platform’s success.

Even when everyone participates in an institutional arrangement willingly, we still can’t assume that it will contribute to the social good. Cambridge economic historian Sheilagh Ogilvie has pointed out that an institution that is efficient for everyone who participates in it can still be inefficient for society as a whole if it affects third parties. For example, when Airbnb is used to turn an ordinary flat into a hotel room, it can cause nuisance to neighbours in the form of noise, traffic, and guests unfamiliar with the local rules. The convenience and low cost of doing business through the platform is achieved in part at others’ expense. In the worst case, a platform can make society not more but less efficient — by creating a ‘free rider economy’.

In general, social scientists recognize that different people and groups in society often have conflicting interests in how economic institutions are shaped. These interests are reconciled — if they are reconciled — through political institutions. Many social scientists thus look not so much at efficiencies but at political institutions to understand why economic institutions are shaped the way they are. For example, a democratic local government in principle represents the interests of its citizens, through political institutions such as council elections and public consultations. Local governments consequently try to strike a balance between the conflicting interests of hoteliers and their neighbours, by limiting hotel business to certain zones. In contrast, Airbnb as a for-profit business must cater to the interests of its customers, the would-be hoteliers and their guests. It has no mechanism, and more importantly, no mandate, to address on an equal footing the interests of third parties like customers’ neighbours. Perhaps because of this, 74% of Airbnb’s properties are not in the main hotel districts, but in ordinary residential blocks.

That said, governments have their own challenges in producing fair and efficient economic institutions. Not least among these is the fact that government regulators are at a risk of capture by incumbent market participants, or at the very least they face the innovator’s dilemma: it is easier to craft rules that benefit the incumbents than rules that provide great but uncertain benefits to future market participants. For example, cities around the world operate taxi licensing systems, where only strictly limited numbers of license owners are allowed to operate taxicabs. Whatever benefits this system offers to customers in terms of quality assurance, among its biggest beneficiaries are the license owners, and among its losers the would-be drivers who are excluded from the market. Institutional insiders and outsiders have conflicting interests, and government political institutions are often such that it is easier for governments to side with the insiders.

Against this background, platforms appear almost as radical reformers that provide market access to those whom the establishment has denied it. For example, Uber recently announced that it aims to create one million jobs for women by 2020, a bold pledge in the male-dominated transport industry, and one that would likely not be possible if it adhered to government licensing requirements, as most licenses are owned by men. Having said that, Uber’s definition of a ‘job’ is something much more precarious and entrepreneurial than the conventional definition. My point here is not to side with either Uber or the licensing system, but to show that their social implications are very different. Both possess at least some flaws as well as redeeming qualities, many of which can be traced back to their political institutions and whom they represent.

What kind of new economic institutions are platform developers creating? How efficient are they? What other consequences, including unintended ones, do they have, and for whom? Whose interests are they geared to represent — capital vs. labour, consumer vs. producer, Silicon Valley vs. local business, incumbent vs. marginalized? These are the questions that policy makers, journalists, and social scientists ought to be asking at this moment of transformation in our economic institutions. Instead of being forced to choose between established institutions and platforms as they currently are, I hope that we will be able to discover ways to take what is good in both, and create infrastructure for an economy that is as fair and inclusive as it is efficient and innovative.


Vili Lehdonvirta is a Research Fellow and DPhil Programme Director at the Oxford Internet Institute, and an editor of the Policy & Internet journal. He is an economic sociologist who studies the social and economic dimensions of new information technologies around the world, with particular expertise in digital markets and crowdsourcing.

]]>
Why are citizens migrating to Uber and Airbnb, and what should governments do about it? https://ensr.oii.ox.ac.uk/why-are-citizens-migrating-to-uber-and-airbnb-and-what-should-governments-do-about-it/ Mon, 27 Jul 2015 06:48:57 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3307
Protest for fair taxi laws in Portland; organizers want city leaders to make ride-sharing companies play by the same rules as cabs and Town cars. Image: Aaron Parecki (Flickr).

Cars were smashed and tires burned in France last month in protests against the ride-hailing app Uber. Less violent protests have also been staged against Airbnb, a platform for renting short-term accommodation. Despite the protests, neither platform shows any signs of faltering. Uber says it has a million users in France, and is available in 57 countries. Airbnb is available in over 190 countries, and boasts over a million rooms, more than hotel giants like Hilton and Marriott. Policy makers at the highest levels are starting to notice the rise of these and similar platforms. An EU Commission flagship strategy paper notes that “online platforms are playing an ever more central role in social and economic life,” while the Federal Trade Commission recently held a workshop on the topic in Washington.

Journalists and entrepreneurs have been quick to coin terms that try to capture the essence of the social and economic changes associated with online platforms: the sharing economy; the on-demand economy; the peer-to-peer economy; and so on. Each perhaps captures one aspect of the phenomenon, but doesn’t go very far in helping us make sense of all its potentials and contradictions, including why some people love it and some would like to smash it into pieces. Instead of starting from the assumption that everything we see today is new and unprecedented, what if we dug into existing social science theory to see what it has to say about economic transformation and the emergence of markets?

Economic sociologists are adamant that markets don’t just emerge by themselves: they are always based on some kind of an underlying infrastructure that allows people to find out what goods and services are on offer, agree on prices and terms, pay, and have a reasonable expectation that the other party will honour the agreement. The oldest market infrastructure is the personal social network: traders hear what’s on offer through word of mouth and trade only with those whom they personally know and trust. But personal networks alone couldn’t sustain the immense scale of trading in today’s society. Every day we do business with strangers and trust them to provide for our most basic needs. This is possible because modern society has developed institutions — things like private property, enforceable contracts, standardized weights and measures, consumer protection, and many other general and sector specific norms and facilities. By enabling and constraining everyone’s behaviours in predictable ways, institutions constitute a robust and more inclusive infrastructure for markets than personal social networks.

Modern institutions didn’t of course appear out of nowhere. Between prehistoric social networks and the contemporary institutions of the modern state, there is a long historical continuum of economic institutions, from ancient trade routes with their customs to medieval fairs with their codes of conduct to state-enforced trade laws of the early industrial era. Institutional economists led by Oliver Williamson and economic historians led by Douglass North theorized in the 1980s that economic institutions evolve towards more efficient forms through a process of natural selection. As new institutional forms become possible thanks to technological and organizational innovation, people switch to cheaper, easier, more secure, and overall more efficient institutions out of self-interest. Old and cumbersome institutions fall into disuse, and society becomes more efficient and economically prosperous as a result. Williamson and North both later received the Nobel Memorial Prize in Economic Sciences.

It is easy to frame platforms as the next step in such an evolutionary process. Even if platforms don’t replace state institutions, they can plug gaps that remain in the state-provided infrastructure. For example, enforcing a contract in court is often too expensive and unwieldy to be used to secure transactions between individual consumers. Platforms provide cheaper and easier alternatives to formal contract enforcement, in the form of reputation systems that allow participants to rate each other’s conduct and view past ratings. Thanks to this, small transactions like sharing a commute that previously only happened in personal networks can now potentially take place on a wider scale, resulting in greater resource efficiency and prosperity (the ‘sharing economy’). Platforms are not the first companies to plug holes in state-provided market infrastructure, though. Private arbitrators, recruitment agencies, and credit rating firms have been doing similar things for a long time.

What’s arguably new about platforms, though, is that some of the most popular ones are not mere complements, but almost complete substitutes for state-provided market infrastructures. Uber provides a complete substitute for government-licensed taxi infrastructures, addressing everything from quality and discovery to trust and payment. Airbnb provides a similarly sweeping solution to short-term accommodation rental. Both platforms have been hugely successful; in San Francisco, Uber has far surpassed the city’s official taxi market in size. The sellers on these platforms are not just consumers wanting to make better use of their resources, but also firms and professionals switching over from the state infrastructure. It is as if people and companies were abandoning their national institutions and emigrating en masse to Platform Nation.

From the natural selection perspective, this move from state institutions to platforms seems easy to understand. State institutions are designed by committee and carry all kinds of historical baggage, while platforms are designed from the ground up to address their users’ needs. Government institutions are geographically fragmented, while platforms offer a seamless experience from one city, country, and language area to another. Government offices have opening hours and queues, while platforms make use of the latest technologies to provide services around the clock (the ‘on-demand economy’). Given the choice, people switch to the most efficient institutions, and society becomes more efficient as a result. The policy implications of the theory are that government shouldn’t try to stop people from using Uber and Airbnb, and that it shouldn’t try to impose its evidently less efficient norms on the platforms. Let competing platforms innovate new regulatory regimes, and let people vote with their feet; let there be a market for markets.

The natural selection theory of institutional change provides a compellingly simple way to explain the rise of platforms. However, it has difficulty in explaining some important facts, like why economic institutions have historically developed differently in different places around the world, and why some people now protest vehemently against supposedly better institutions. Indeed, over the years since the theory was first introduced, social scientists have discovered significant problems in it. Economic sociologists like Neil Fligstein have noted that not everyone is as free to choose the institutions that they use. Economic historian Sheilagh Ogilvie has pointed out that even institutions that are efficient for those who participate in them can still sometimes be inefficient for society as a whole. These points suggest a different theory of institutional change, which I will apply to online platforms in my next post.


Vili Lehdonvirta is a Research Fellow and DPhil Programme Director at the Oxford Internet Institute, and an editor of the Policy & Internet journal. He is an economic sociologist who studies the social and economic dimensions of new information technologies around the world, with particular expertise in digital markets and crowdsourcing.

]]>
How big data is breathing new life into the smart cities concept https://ensr.oii.ox.ac.uk/how-big-data-is-breathing-new-life-into-the-smart-cities-concept/ Thu, 23 Jul 2015 09:57:10 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3297 “Big data” is a growing area of interest for public policy makers: for example, it was highlighted in UK Chancellor George Osborne’s recent budget speech as a major means of improving efficiency in public service delivery. While big data can apply to government at every level, the majority of innovation is currently being driven by local government, especially cities, who perhaps have greater flexibility and room to experiment and who are constantly on a drive to improve service delivery without increasing budgets.

Work on big data for cities is increasingly incorporated under the rubric of “smart cities”. The smart city is an old(ish) idea: give urban policymakers real time information on a whole variety of indicators about their city (from traffic and pollution to park usage and waste bin collection) and they will be able to improve decision making and optimise service delivery. But the initial vision, which mostly centred around adding sensors and RFID tags to objects around the city so that they would be able to communicate, has thus far remained unrealised (big up front investment needs and the requirements of IPv6 are perhaps the most obvious reasons for this).

The rise of big data – large, heterogeneous datasets generated by the increasing digitisation of social life – has however breathed new life into the smart cities concept. If all the cars have GPS devices, all the people have mobile phones, and all opinions are expressed on social media, then do we really need the city to be smart at all? Instead, policymakers can simply extract what they need from a sea of data which is already around them. And indeed, data from mobile phone operators has already been used for traffic optimisation, Oyster card data has been used to plan London Underground service interruptions, sewage data has been used to estimate population levels … the examples go on.

However, at the moment these examples remain largely anecdotal, driven forward by a few cities rather than adopted worldwide. The big data driven smart city faces considerable challenges if it is to become a default means of policymaking rather than a conversation piece. Getting access to the right data; correcting for biases and inaccuracies (not everyone has a GPS, phone, or expresses themselves on social media); and communicating it all to executives remain key concerns. Furthermore, especially in a context of tight budgets, most local governments cannot afford to experiment with new techniques which may not pay off instantly.

This is the context of two current OII projects in the smart cities field: UrbanData2Decide (2014-2016) and NEXUS (2015-2017). UrbanData2Decide joins together a consortium of European universities, each working with a local city partner, to explore how local government problems can be resolved with urban generated data. In Oxford, we are looking at how open mapping data can be used to estimate alcohol availability; how website analytics can be used to estimate service disruption; and how internal administrative data and social media data can be used to estimate population levels. The best concepts will be built into an application which allows decision makers to access them in real time.

NEXUS builds on this work. A collaborative partnership with BT, it will look at how social media data and some internal BT data can be used to estimate people movement and traffic patterns around the city, joining these data into network visualisations which are then displayed to policymakers in a data visualisation application. Both projects fill an important gap by allowing city officials to experiment with data driven solutions, providing proofs of concept and showing what works and what doesn’t. Increasing academic-government partnerships in this way has real potential to drive forward the field and turn the smart city vision into a reality.


OII Research Fellow Jonathan Bright is a political scientist specialising in computational and ‘big data’ approaches to the social sciences. His major interest concerns studying how people get information about the political process, and how this is changing in the internet era.

]]>
How can big data be used to advance dementia research? https://ensr.oii.ox.ac.uk/how-can-big-data-be-used-to-advance-dementia-research/ Mon, 16 Mar 2015 08:00:11 +0000 http://blogs.oii.ox.ac.uk/policy/?p=3186
Image by K. Kendall of “Sights and Scents at the Cloisters: for people with dementia and their care partners”; a program developed in consultation with the Taub Institute for Research on Alzheimer’s Disease and the Aging Brain, Alzheimer’s Disease Research Center at Columbia University, and the Alzheimer’s Association.

Dementia affects about 44 million individuals, a number that is expected to nearly double by 2030 and triple by 2050. With an estimated annual cost of USD 604 billion, dementia represents a major economic burden for both industrial and developing countries, as well as a significant physical and emotional burden on individuals, family members and caregivers. There is currently no cure for dementia or a reliable way to slow its progress, and the G8 health ministers have set the goal of finding a cure or disease-modifying therapy by 2025. However, the underlying mechanisms are complex, and influenced by a range of genetic and environmental factors that may have no immediately apparent connection to brain health.

Of course, medical research relies on access to large amounts of data, including clinical, genetic and imaging datasets. Making these widely available across research groups helps reduce data collection efforts, increases the statistical power of studies and makes data accessible to more researchers. This is particularly important from a global perspective: Swedish researchers say, for example, that they are sitting on a goldmine of excellent longitudinal and linked data on a variety of medical conditions including dementia, but that they have too few researchers to exploit its potential. Other countries will have many researchers, and less data.

‘Big data’ adds new sources of data and ways of analysing them to the repertoire of traditional medical research data. This can include (non-medical) data from online patient platforms, shop loyalty cards, and mobile phones — made available, for example, through Apple’s ResearchKit, just announced last week. Dementia is believed to be influenced by a wide range of social, environmental and lifestyle-related factors (such as diet, smoking, fitness training, and people’s social networks), and this behavioural data has the potential to improve early diagnosis, as well as allow retrospective insights into events in the years leading up to a diagnosis. For example, data on changes in shopping habits (accessible through loyalty cards) may provide an early indication of dementia.

However, there are many challenges to using and sharing big data for dementia research. The technology hurdles can largely be overcome, but there are also deep-seated issues around the management of data collection, analysis and sharing, as well as underlying people-related challenges in relation to skills, incentives, and mindsets. Change will only happen if we tackle these challenges at all levels jointly.

As data are combined from different research teams, institutions and nations — or even from non-medical sources — new access models will need to be developed that make data widely available to researchers while protecting the privacy and other interests of the data originator. Establishing robust and flexible core data standards that make data more sharable by design can lower barriers for data sharing, and help avoid researchers expending time and effort trying to establish the conditions of their use.

At the same time, we need policies that protect citizens against undue exploitation of their data. Consent needs to be understood by individuals — including the complex and far-reaching implications of providing genetic information — and should provide effective enforcement mechanisms to protect them against data misuse. Privacy concerns about digital, highly sensitive data are important and should not be de-emphasised as a subordinate goal to advancing dementia research. Beyond releasing data in protected environments, allowing people to voluntarily “donate data”, and making consent understandable and enforceable, we also need governance mechanisms that safeguard appropriate data use for a wide range of purposes. This is particularly important as the significance of data changes with its context of use, and data will never be fully anonymisable.

We also need a favourable ecosystem with stable and beneficial legal frameworks, and links between academic researchers and private organisations for exchange of data and expertise. Legislation needs to take account of the growing importance of global research communities in terms of funding and making best use of human and data resources. Also important is sustainable funding for data infrastructures, as well as an understanding that funders can have considerable influence on how research data, in particular, are made available. One of the most fundamental challenges in terms of data sharing is that there are relatively few incentives or career rewards that accrue to data creators and curators, so ways to recognise the value of shared data must be built into the research system.

In terms of skills, we need more health-/bioinformatics talent, as well as collaboration with those disciplines researching factors “below the neck”, such as cardiovascular or metabolic diseases, as scientists increasingly find that these may be associated with dementia to a larger extent than previously thought. Linking in engineers, physicists or innovative private sector organisations may prove fruitful for tapping into new skill sets to separate the signal from the noise in big data approaches.

In summary, everyone involved needs to adopt a mindset of responsible data sharing, collaborative effort, and a long-term commitment to building two-way connections between basic science, clinical care, and healthcare in everyday life. Fully capturing the health-related potential of big data requires “out of the box” thinking in terms of how to profit from the huge amounts of data being generated routinely across all facets of our everyday lives. This sort of data offers ways for individuals to become involved, by actively donating their data to research efforts, participating in consumer-led research, or engaging as citizen scientists. Empowering people to be active contributors to science may help alleviate the common feeling of helplessness faced by those whose lives are affected by dementia.

Of course, to do this we need to develop a culture that promotes trust between the people providing the data and those capturing and using it, as well as an ongoing dialogue about new ethical questions raised by collection and use of big data. Technical, legal and consent-related mechanisms to protect individuals’ sensitive biomedical and lifestyle-related data against misuse may not always be sufficient, as the recent Nuffield Council on Bioethics report has argued. For example, we need a discussion around the direct and indirect benefits to participants of engaging in research, around when it is appropriate for data collected for one purpose to be put to other uses, and around the extent to which individuals can make decisions, particularly on genetic data, which may have more far-reaching consequences for their own and their family members’ professional and personal lives if health conditions, for example, can be predicted by others (such as employers and insurance companies).

Policymakers and the international community have an integral leadership role to play in informing and driving the public debate on responsible use and sharing of medical data, as well as in supporting the process through funding, incentivising collaboration between public and private stakeholders, creating data sharing incentives (for example, via taxation), and ensuring stability of research and legal frameworks.

Dementia is a disease that concerns all nations in the developed and developing world, and just as diseases have no respect for national boundaries, neither should research into dementia (and the data infrastructures that support it) be seen as a purely national or regional priority. The high personal, societal and economic importance of improving the prevention, diagnosis, treatment and cure of dementia worldwide should provide a strong incentive for establishing robust and safe mechanisms for data sharing.


Read the full report: Deetjen, U., E. T. Meyer and R. Schroeder (2015) Big Data for Advancing Dementia Research. Paris, France: OECD Publishing.

]]>
Monitoring Internet openness and rights: report from the Citizen Lab Summer Institute 2014 https://ensr.oii.ox.ac.uk/monitoring-internet-openness-and-rights-report-from-citizen-lab-summer-institute/ Tue, 12 Aug 2014 11:44:58 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2916
Jon Penny presenting on the US experience of Internet-related corporate transparency reporting.

“根据相关法律法规和政策，部分搜索结果未予显示” is a warning message we may see displayed more often on the Internet, or at least translations of it. In Chinese, it means “according to the relevant laws, regulations, and policies, a portion of search results have not been displayed.” The control of information flows on the Internet is becoming more commonplace, in authoritarian regimes as well as in liberal democracies, either via technical or regulatory means. Such information controls can be defined as “[…] actions conducted in or through information and communications technologies (ICTs), which seek to deny (such as web filtering), disrupt (such as denial-of-service attacks), shape (such as throttling), secure (such as through encryption or circumvention) or monitor (such as passive or targeted surveillance) information for political ends. Information controls can also be non-technical and can be implemented through legal and regulatory frameworks, including informal pressures placed on private companies. […]” Information controls are not intrinsically good or bad, but much is to be explored and analysed about their use, for political or commercial purposes.

The University of Toronto’s Citizen Lab organised a one-week summer institute titled “Monitoring Internet Openness and Rights” to inform the global discussions on information control research and practice in the fields of censorship, circumvention, surveillance and adherence to human rights. A week full of presentations and workshops on the intersection of technical tools, social science research, ethical and legal reflections and policy implications was attended by a distinguished group of about 60 community members, amongst whom were two OII DPhil students: Jon Penney and Ben Zevenbergen. Conducting Internet measurements may still be terra incognita in terms of methodology and data collection, but the relevance and impact for Internet policy-making, geopolitics and network management are obvious and undisputed.

The Citizen Lab prides itself in being a “hacker hothouse”, or an “intelligence agency for civil society” where security expertise, politics, and ethics intersect. Their research adds the much-needed geopolitical angle to the deeply technical and quantitative Internet measurements they conduct on information networks worldwide. While the Internet is fast becoming the backbone of our modern societies in many positive and welcome ways, abundant (intentional) security vulnerabilities, the ease with which human rights such as privacy and freedom of speech can be violated, threats to the neutrality of the network and the extent of mass surveillance threaten to compromise the potential of our global information sphere. Threats to a free and open internet need to be uncovered and explained to policymakers, in order to encourage informed, evidence-based policy decisions, especially in a time when the underlying technology is not well understood by decision makers.

Participants at the summer institute came with the intent to make sense of Internet measurements and information controls, as well as their social, political and ethical impacts. Through discussions in larger and smaller groups throughout the Munk School of Global Affairs – as well as restaurants and bars around Toronto – the current state of the information controls, their regulation and deployment became clear, and multi-disciplinary projects to measure breaches of human rights on the Internet or its fundamental principles were devised and coordinated.

The outcomes of the week in Toronto are impressive. The OII DPhil students presented their recent work on transparency reporting and ethical data collection in Internet measurement.

Jon Penney gave a talk on “the United States experience” with Internet-related corporate transparency reporting, that is, the evolution of existing American corporate practices in publishing “transparency reports” about the nature and quantity of government and law enforcement requests for Internet user data or content removal. Jon first began working on transparency issues as a Google Policy Fellow with the Citizen Lab in 2011, and his work has continued during his time at Harvard’s Berkman Center for Internet and Society. In this talk, Jon argued that in the U.S., corporate transparency reporting largely began with the leadership of Google and a few other Silicon Valley tech companies like Twitter, but in the Post-Snowden era, has been adopted by a wider cross section of not only technology companies, but also established telecommunications companies like Verizon and AT&T previously resistant to greater transparency in this space (perhaps due to closer, longer term relationships with federal agencies than Silicon Valley companies). Jon also canvassed evolving legal and regulatory challenges facing U.S. transparency reporting and means by which companies may provide some measure of transparency — via tools like warrant canaries — in the face of increasingly complex national security laws.

Ben Zevenbergen has recently launched ethical guidelines for the protection of privacy with regards to Internet measurements conducted via mobile phones. The first panel of the week, on “Network Measurement and Information Controls”, called explicitly for more concrete ethical and legal guidelines for Internet measurement projects, because the extent of data collection necessarily entails that much personal data is collected and analyzed. In the second panel, on “Mobile Security and Privacy”, Ben explained how his guidelines form a privacy impact assessment for a privacy-by-design approach to mobile network measurements. The iterative process of designing research in close cooperation with colleagues, possibly from different disciplines, ensures that privacy is taken into account at all stages of the project development. His talk led to two connected and well-attended sessions during the week to discuss the ethics of information controls research and Internet measurements. A mailing list has been set up for engineers, programmers, activists, lawyers and ethicists to discuss the ethical and legal aspects of Internet measurements. Data collection has also begun on a taxonomy of ethical issues in the discipline, to inform forthcoming peer-reviewed papers.

The Citizen Lab will host its final summer institute of the series in 2015.

Ben Zevenbergen discusses ethical guidelines for Internet measurements conducted via mobile phones.

Photo credits: Ben Zevenbergen, Jon Penney. Writing Credits: Ben Zevenbergen, with small contribution from Jon Penney.

Ben Zevenbergen is an OII DPhil student and Research Assistant working on the EU Internet Science project. He has worked on legal, political and policy aspects of the information society for several years. Most recently he was a policy advisor to an MEP in the European Parliament, working on Europe’s Digital Agenda.

Jon Penney is a legal academic, doctoral student at the Oxford Internet Institute, and a Research Fellow / Affiliate of both The Citizen Lab, an interdisciplinary research lab specializing in digital media, cyber-security, and human rights at the University of Toronto’s Munk School for Global Affairs, and the Berkman Center for Internet & Society, Harvard University.

]]>
Past and Emerging Themes in Policy and Internet Studies https://ensr.oii.ox.ac.uk/past-and-emerging-themes-in-policy-and-internet-studies/ Mon, 12 May 2014 09:24:59 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2673
We can’t understand, analyze or make public policy without understanding the technological, social and economic shifts associated with the Internet. Image from the (post-PRISM) “Stop Watching Us” Berlin Demonstration (2013) by mw238.

In the journal’s inaugural issue, founding Editor-in-Chief Helen Margetts outlined what are essentially two central premises behind Policy & Internet’s launch. The first is that “we cannot understand, analyze or make public policy without understanding the technological, social and economic shifts associated with the Internet” (Margetts 2009, 1). It is simply not possible to consider public policy today without some regard for the intertwining of information technologies with everyday life and society. The second premise is that the rise of the Internet is associated with shifts in how policy itself is made. In particular, she proposed that impacts of Internet adoption would be felt in the tools through which policies are effected, and the values that policy processes embody.

The purpose of the Policy and Internet journal was to take up these two challenges: the public policy implications of Internet-related social change, and Internet-related changes in policy processes themselves. In recognition of the inherently multi-disciplinary nature of policy research, the journal is designed to act as a meeting place for all kinds of disciplinary and methodological approaches. Helen predicted that methodological approaches based on large-scale transactional data, network analysis, and experimentation would turn out to be particularly important for policy and Internet studies. Driving the advancement of these methods was therefore the journal’s third purpose. Today, the journal has reached a significant milestone: over one hundred high-quality peer-reviewed articles published. This seems an opportune moment to take stock of what kind of research we have published in practice, and see how it stacks up against the original vision.

At the most general level, the journal’s articles fall into three broad categories: the Internet and public policy (48 articles), the Internet and policy processes (51 articles), and discussion of novel methodologies (10 articles). The first of these categories, “the Internet and public policy,” can be further broken down into a number of subcategories. One of the most prominent of these streams is fundamental rights in a mediated society (11 articles), which focuses particularly on privacy and freedom of expression. Related streams are children and child protection (six articles), copyright and piracy (five articles), and general e-commerce regulation (six articles), including taxation. A recently emerged stream in the journal is hate speech and cybersecurity (four articles). Of course, an enduring research stream is Internet governance, or the regulation of technical infrastructures and economic institutions that constitute the material basis of the Internet (seven articles). In recent years, the research agenda in this stream has been influenced by national policy debates around broadband market competition and network neutrality (Hahn and Singer 2013). Another enduring stream deals with the Internet and public health (eight articles).

Looking specifically at the “Internet and policy processes” category, the largest stream is e-participation, or the role of the Internet in engaging citizens in national and local government policy processes, through methods such as online deliberation, petition platforms, and voting advice applications (18 articles). Two other streams are e-government, or the use of Internet technologies for government service provision (seven articles), and e-politics, or the use of the Internet in mainstream politics, such as election campaigning and communications of the political elite (nine articles). Another stream that has gained pace in recent years is online collective action, or the role of the Internet in activism, ‘clicktivism,’ and protest campaigns (16 articles). Last year the journal published a special issue on online collective action (Calderaro and Kavada 2013), and the forthcoming issue includes an invited article on digital civics by Ethan Zuckerman, director of MIT’s Center for Civic Media, with commentary from prominent scholars of Internet activism. A trajectory discernible in this stream over the years is a movement from discussing mere potentials towards analyzing real impacts—including critical analyses of the sometimes inflated expectations and “democracy bubbles” created by digital media (Shulman 2009; Karpf 2012; Bryer 2012).

The final category, discussion of novel methodologies, consists of articles that develop, analyze, and reflect critically on methodological innovations in policy and Internet studies. Empirical articles published in the journal have made use of a wide range of conventional and novel research methods, from interviews and surveys to automated content analysis and advanced network analysis methods. But of those articles where methodology is the topic rather than merely the tool, the majority deal with so-called “big data,” or the use of large-scale transactional data sources in research, commerce, and evidence-based public policy (nine articles). The journal recently devoted a special issue to the potentials and pitfalls of big data for public policy (Margetts and Sutcliffe 2013), based on selected contributions to the journal’s 2012 big data conference: Big Data, Big Challenges? In general, the notion of data science and public policy is a growing research theme.

This brief analysis suggests that research published in the journal over the last five years has indeed followed the broad contours of the original vision. The two challenges, namely policy implications of Internet-related social change and Internet-related changes in policy processes, have both been addressed. In particular, research has addressed the implications of the Internet’s increasing role in social and political life. The journal has also furthered the development of new methodologies, especially the use of online network analysis techniques and large-scale transactional data sources (aka ‘big data’).

As expected, authors from a wide range of disciplines have contributed their perspectives to the journal, and engaged with other disciplines, while retaining the rigor of their own specialisms. The geographic scope of the contributions has been truly global, with authors and research contexts from six continents. I am also pleased to note that a characteristic common to all the published articles is polish; this is no doubt in part due to the high level of editorial support that the journal is able to afford to authors, including copyediting. The justifications for the journal’s establishment five years ago have clearly been borne out, so that the journal now performs an important function in fostering and bringing together research on the public policy implications of an increasingly Internet-mediated society.

And what of my own research interests as an editor? In the inaugural editorial, Helen Margetts highlighted work, finance, exchange, and economic themes in general as being among the prominent areas of Internet-related social change likely to have significant future policy implications. For the most part, these implications remain to be addressed, and this is an area in which the journal can encourage authors to do more. As an editor, I will work to direct attention to this opportunity, and welcome manuscript submissions on all aspects of Internet-enabled economic change and its policy implications. This work will be kickstarted by the journal’s 2014 conference (26-27 September), which this year focuses on crowdsourcing and online labor.

Our published articles will continue to be highlighted here in the journal’s blog. Launched last year, the blog helps to expand the reach and impact of research published in Policy and Internet to the wider academic and practitioner communities, promote discussion, and increase authors’ citations. After all, publication is only the start of an article’s public life: we want people reading, debating, citing, and offering responses to the research that we, and our excellent reviewers, feel is important and worth publishing.

Read the full editorial: Lehdonvirta, V. (2014) Past and Emerging Themes in Policy and Internet Studies. Policy & Internet 6 (2) 109-114.

References

Bryer, T.A. (2011) Online Public Engagement in the Obama Administration: Building a Democracy Bubble? Policy & Internet 3 (4).

Calderaro, A. and Kavada, A. (2013) Challenges and Opportunities of Online Collective Action for Policy Change. Policy & Internet 5 (1).

Hahn, R. and Singer, H. (2013) Is the U.S. Government’s Internet Policy Broken? Policy & Internet 5 (3) 340-363.

Karpf, D. (2012) Online Political Mobilization from the Advocacy Group’s Perspective: Looking Beyond Clicktivism. Policy & Internet 2 (4) 7-41.

Margetts, H. (2009) The Internet and Public Policy. Policy & Internet 1 (1).

Margetts, H. and Sutcliffe, D. (2013) Addressing the Policy Challenges and Opportunities of ‘Big Data.’ Policy & Internet 5 (2) 139-146.

Shulman, S.W. (2009) The Case Against Mass E-mails: Perverse Incentives and Low Quality Public Participation in U.S. Federal Rulemaking. Policy & Internet 1 (1) 23-53.

The complicated relationship between Chinese Internet users and their government https://ensr.oii.ox.ac.uk/the-complicated-relationship-between-chinese-internet-users-and-their-government/ Thu, 01 Aug 2013 06:28:24 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1827 David: For our research, we surveyed postgraduate students from all over China who had come to Shanghai to study. We asked them five questions, to which they provided mostly rather lengthy answers. Even though they were young university students and very active online, their answers managed to surprise us. Notably, the young Chinese who took part in our research felt very ambivalent about the Internet and its supposed benefits for individual people in China. They appreciated the greater freedom the Internet offered when compared to offline China, but were very wary of others abusing this freedom to their detriment.

Ed: In your paper you note that the opinions of many young people closely mirrored those of the government’s statements about the Internet — in what way?

David: In 2010 the government published a White Paper on the Internet in China, in which it argued that the main uses of the Internet were for obtaining information and for communicating with others. In contrast to Euro-American discourses around the Internet as a ‘force for democracy’, the students agreed with the government’s evaluation, and did not see the Internet as a place to begin organising politically. The main reason for this — in my opinion — is that young Chinese are not used to discussing ‘politics’, and are mostly focused on pursuing the ‘Chinese dream’: a good job, a large flat or house, a nice car, a suitable spouse; usually in that order.

Ed: The Chinese Internet has usually been discussed in the West as a ‘force for democracy’ — leading to the inevitable relinquishing of control by the Chinese Communist Party. Is this viewpoint hopelessly naive?

David: Not naive as such, but both deterministic and limited: it assumes that the introduction of technology can only have one ‘built-in’ outcome, thus ignoring human agency, and it pretends that the Chinese Communist Party does not use technology at all. Given the intense involvement with the Internet of Party and government offices, as well as of individual Party members and government officials, it makes little sense to talk about ‘the Party’ and ‘the Internet’ as unconnected entities. Compared to governments in Europe or America, the Chinese Communist Party and the Chinese government have embraced the Internet and treated it as a real and valid communication channel between citizens and government/Party at all levels.

Ed: Chinese citizens are being encouraged by the government to engage and complain online, eg to expose inefficiency and corruption. Is the Internet just a space to blow off steam, or is it really capable of ‘changing’ Chinese society, as many have assumed?

David: This is mostly a matter of perspective and expectations. The Internet has NOT changed the system in China, nor is it likely to. In all likelihood, the Internet is bolstering the legitimacy and the control of the Chinese Communist Party over China. However, in many specific instances of citizen unhappiness and unrest, the Internet has proved a powerful channel of communication for the people to achieve their goals, as the authorities have reacted to online protests and supported the demands of citizens. This is a genuine change and empowerment of the people, though episodic and local, not global.

Ed: Why do you think your respondents were so accepting (and welcoming) of government control of the Internet in China: is this mainly due to government efforts to manage online opinion, or something else?

David: I think this is a reflex response, fairly similar to what has happened elsewhere. If, for example, children manage to access porn sites, or an adult manages to groom several children over the Internet, the mass media and the parents of the children call for ‘government’ to protect the children. This abrogation of power and shifting of responsibility to ‘the government’ by individuals — in the example by parents, in our study by young Chinese — is fairly widespread, if deplorable. Ultimately, this demand for government ‘protection’ leads to what I would consider excessive government surveillance, control, and regulation of online spaces in the name of ‘protection’, and to the public’s acquiescence in the policing of cyberspace. In China, this takes the form of a widespread (resigned) acceptance of government censorship; in the UK, it led to the acceptance of GCHQ’s involvement in Prism, and to the sentencing of Deyka Ayan Hassan and of Liam Stacey, which have turned the UK into the only country in the world in which people have been arrested for posting single, offensive posts on microblogs.

Ed: How does the central Government manage and control opinion online?

David: There is no unified system of government control over the Internet in China. Instead, there are many groups and institutions, at all levels from central to local, with overlapping areas of responsibility, all exerting an influence on Chinese cyberspaces. There are direct posts by government or Party officials, posts by ‘famous’ people in support of government decisions or policies, and posts by paid ‘hidden’ posters or by people simply sympathetic to the government. China’s notorious online celebrity Han Han once pointed out that the term ‘the Communist Party’ really means a population group of over 300 million people connected to someone who is an actual Party member.

In addition to pro-government postings, there are many different forms of censorship that try to prevent unacceptable posts. The exact definition of ‘unacceptable’ changes from time to time, and even from location to location. In Beijing, around October 1, the Chinese National Day, many more websites are inaccessible than, for example, in Shenzhen during April. Different government or Party groups also add different terms to the list of ‘unacceptable’ topics (or remove them), which contributes to the flexibility of the censorship system.

As a result of the often unpredictable ‘current’ limits of censorship, many Internet companies, forum and site managers, as well as individual Internet users add their own ‘self-censorship’ to the mix to ensure their own uninterrupted presence online. This ‘self-censorship’ is often stricter than existing government or Party regulations, so as not to even test the limits of the possible.

Ed: Despite the constant encouragement / admonishment of the government that citizens should report and discuss their problems online; do you think this is a clever (ie safe) thing for citizens to do? Are people pretty clever about negotiating their way online?

David: If it looks like a duck, moves like a duck, talks like a duck … is it a duck? There has been a lot of evidence over the years (and many academic articles) that demonstrate the government’s willingness to listen to criticism online without punishing the posters. People do get punished if they stray into ‘definitely illegal’ territory, e.g. promoting independence for parts of China, or questioning the right of the Communist Party to govern China, but so far people have been free to express their criticism of specific government actions online, and have received support from the authorities for their complaints.

Just to note briefly; one underlying issue here is the definition of ‘politics’ and ‘power’. Following Foucault, in Europe and America ‘everything’ is political, and ‘everything’ is a question of power. In China, there is a difference between ‘political’ issues, which are the responsibility of the Communist Party, and ‘social’ issues, which can be discussed (and complained about) by anybody. It might be worth exploring this difference of definitions without a priori acceptance of the Foucauldian position as ‘correct’.

Ed: There’s a lot of emphasis on using eg social media to expose corrupt officials and hold them to account; is there a similar emphasis on finding and rewarding ‘good’ officials? Or of officials using online public opinion to further their own reputations and careers? How cynical is the online public?

David: The online public is very cynical, and getting ever more so (which is seen as a problem by the government as well). The emphasis on ‘bad’ officials is fairly ‘normal’, though, as ‘good’ officials are not ‘newsworthy’. In the Chinese context there is the additional problem that socialist governments like to promote ‘model workers’, ‘model units’, etc. which would make the praising of individual ‘good’ officials by Internet users highly suspect. Other Internet users would simply assume the posters to be paid ‘hidden’ posters for the government or the Party.

Ed: Do you think (on balance) that the Internet has brought more benefits (and power) to the Chinese Government or new problems and worries?

David: I think the Internet has changed many things for many people worldwide. Limiting the debate on the Internet to the dichotomies of government vs Internet, empowered netizens vs disenfranchised Luddites, online power vs wasting time online, etc. is highly problematic. The open engagement with the Internet by government (and Party) authorities has been greater in China than elsewhere; in my view, the Chinese authorities have reacted much faster, and ‘better’ to the Internet than authorities elsewhere. As the so-called ‘revelations’ of the past few months have shown, governments everywhere have tried and are trying to control and use Internet technologies in pursuit of power.

Although I personally would prefer the Internet to be a ‘free’ and ‘independent’ place, I realise that this is a utopian dream given the political and economic benefits and possibilities of the Internet. Given the inevitability of government controls, though, I prefer the open control exercised by Chinese authorities to the hypocrisy of European and American governments, even if the Chinese controls (apparently) exceed those of other governments.


Dr David Herold is an Assistant Professor of Sociology at Hong Kong Polytechnic University, where he researches Chinese culture and contemporary PRC society, China’s relationship with other countries, and Chinese cyberspace and online society. His paper Captive Artists: Chinese University Students Talk about the Internet was presented at “China and the New Internet World”, International Communication Association (ICA) Preconference, Oxford Internet Institute, University of Oxford, June 2013.

David Herold was talking to blog editor David Sutcliffe.

Chinese Internet users share the same values concerning free speech, privacy, and control as their Western counterparts https://ensr.oii.ox.ac.uk/chinese-internet-users-share-the-same-values-concerning-free-speech-privacy-and-control-as-their-western-counterparts/ Wed, 17 Jul 2013 13:34:35 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1709
There are now over half a billion Internet users in China, part of a shift in the centre of gravity of Internet use away from the US and Europe. Image of Pudong International Airport, Shanghai, by ToGa Wanderings.

Ed: You recently presented your results at the OII’s China and the New Internet World ICA preconference. What were people most interested in?

Gillian: A lot of people were interested in our finding that China was such a big online shopping market compared to other countries, with 60% of our survey respondents reporting that they make an online purchase at least weekly. That’s twice the world’s average. A lot of people who study the Chinese Internet talk about governance issues rather than commerce, but the fact that there is this massive investment in ecommerce in China and a rapid transition to a middle class lifestyle for such a large number of Chinese means that Chinese consumer behaviours will have a significant impact on global issues such as resource scarcity, global warming, and the global economy.

Others were interested in our findings concerning Internet use in ’emerging’ Internet countries like China. The Internet’s development in Western Europe and the US was driven by people who saw the technology as a platform for freedom of expression and peer-to-peer applications. In China, you see this optimism but you also see that a lot of people coming online move straight to smart phones and other locked-down technologies like the iPad, which you can only interact with in a certain way. Eighty-six percent of our Chinese respondents reported that they owned a smart phone, which was the highest percentage of all of the 24 countries we examined individually. A lot of these people are using those devices to play games and watch movies, which is a very different initial exposure to the Internet than we saw in early adopting Western countries.

Ed: So, a lot of significant differences between usages in emerging versus established Internet nations. Any similarities?

Gillian: In general, we find that uses are different but values are similar. People in emerging nations share the same values concerning free speech, privacy, and control as their Western counterparts. These are values that were embedded in the Internet’s creation and that have spread with it to other countries, regardless of national policy rhetorics. Many people – even in China – see the Internet as a tool for free speech and as a place where you can expect a certain degree of privacy and anonymity.

Ed: But isn’t there a disconnect between the fact that people are using more closed technologies as they are coming online and yet are sharing the same values of freedom associated with the Internet?

Gillian: There’s a difference between uses and values. People in emerging countries produce more content, they’re more sociable online, and they listen to more music. But the way people express their values doesn’t always match what they actually do. There is no correlation between whether someone approves of government censorship and their concern about being personally censored. There is also no correlation in China between the frequency with which people post political opinions online and worry that their online comments will be censored.

Ed: It seems that there are a few really interesting results in your study that run counter to accepted wisdom about the Internet. Were you surprised by any of the results?

Gillian: I was, particularly, surprised by the high levels of political commentary in emerging nations. We know that levels of online political expression in the West are very low (around 15%). But 40% of respondents in the emerging nations we surveyed reported posting a political opinion online at least weekly. That’s a huge difference. Even China, which we expected to have lower levels of political expression than the general average, followed a similar pattern. We didn’t see any chilling effect – i.e. any reduction of the frequency of posting of political opinions among Chinese users.

This matches other studies of the Chinese Internet that have concluded that there is very little censorship of people expressing themselves online – that censorship only really happens when people start to organise others. However, I was surprised by the extent of the difference: 18% of users in the US and UK reported posting a political opinion online at least weekly, as did 13% in France and 3% in Japan; but 32% of Chinese, 51% of Brazilians, 50% of Indians, and 64% of Egyptians reported doing so. This shows that the conclusions we have drawn about low levels of online political participation, based on studies of Western Internet users, are likely not applicable to users in other countries.

Of course, we have to remember that this is an online survey and so our results only reflect what Internet users report their activities and attitudes to be. However, the incentive to over-report activities is probably about the same for the US and for China. The thing that may be different in different countries is what people interpret as a political comment. Many more types of comments in China might be seen as political since the government controls so much more. A comment about the price of food might be seen as political speech in China, for example, since the government controls food prices, whereas a similar comment may not be seen as political by US respondents.

Ed: This research is interesting because it calls into question some fundamental assumptions about the Internet. What did you take away from the project?

Gillian: A lot of scholarship on the Internet is presented as applicable to the whole world, but isn’t actually applicable everywhere. The best example here is the very low percentage of people participating in the political process in the West, a finding that needs to be re-evaluated in light of these results. It shows that we need to be much more specific in Internet research about the unit of analysis, and what it applies to. However, we also found that Internet values are similar across the world. I think this shows that discourses about the Internet as a place for free expression and privacy are distributed hand-in-hand with the technology. Although Western users are declining as an overall percentage of the world’s Internet population, these founding rhetorics remain powerfully associated with the technology.


Read the full paper: Bolsover, G., Dutton, W.H., Law, G. and Dutta, S. (2013) Social Foundations of the Internet in China and the New Internet World: A Cross-National Comparative Perspective. Presented at “China and the New Internet World”, International Communication Association (ICA) Preconference, Oxford Internet Institute, University of Oxford, June 2013.

Gillian was talking to blog editor Heather Ford.

Is China changing the Internet, or is the Internet changing China? https://ensr.oii.ox.ac.uk/is-china-changing-the-internet-or-is-the-internet-changing-china/ Fri, 12 Jul 2013 08:13:52 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1741 The rising prominence of China is one of the most important developments shaping the Internet. Once typified primarily by Internet users in the US, there are now more Internet users in China than there are Americans on the planet. By 2015, the proportion of Chinese language Internet users is expected to exceed the proportion of English language users. These are just two aspects of a larger shift in the centre of gravity of Internet use, in which the major growth is increasingly taking place in Asia and the rapidly developing economies of the Global South, and the BRIC nations of Brazil, Russia, India — and China.

The 2013 ICA Preconference “China and the New Internet World” (14 July 2013), organised by the OII in collaboration with many partners at collaborating universities, explored the issues raised by these developments, focusing on two main interrelated questions: how is the rise of China reshaping the global use and societal implications of the Internet? And in turn, how is China itself being reshaped by these regional and global developments?

As China has become more powerful, much attention has been focused on the number of Internet users: China now represents the largest group of Internet users in the world, with over half a billion people online. But how the Internet is used is also important; this group doesn’t just include passive ‘users’, it also includes authors, bloggers, designers and architects — that is, people who shape and design values into the Internet. This input will undoubtedly affect the Internet going forward, as Chinese institutions take on a greater role in shaping the Internet, in terms of policy, such as around freedom of expression and privacy, and practice, such as social and commercial uses, like shopping online.

Most discussion of the Internet tends to emphasise technological change and ignore many aspects of the social changes that accompany the Internet’s evolution, such as this dramatic global shift in the concentration of Internet users. The Internet is not just a technological artefact. In 1988, Deng Xiaoping declared that “science and technology are primary productive forces” that would be active and decisive factors in the new Chinese society. At the time China naturally paid a great deal of attention to technology as a means to lift its people out of poverty, but it may not have occurred to Deng that the Internet would not just affect the national economy, but would come to affect a person’s entire life — and society more generally — as well. In China today, users are more likely both to shop online and to discuss political issues online than users in most of the other 65 nations surveyed in a recent report [1].

The transformative potential of the Internet has challenged top-down communication patterns in China, by supporting multi-level and multi-directional flows of communication. Of course, communications systems reflect economic and political power to a large extent: the Internet is not a new or separate world, and its rules reflect offline rules and structures. In terms of the large ‘digital divide’ that exists in China (whose Internet penetration currently stands at a bit over 40%, meaning that 700 million people are still not online), we have to remember that this digital divide is likely to reflect other real economic and political divides, such as lack of access to other basic resources.

While there is much discussion about how the Internet is affecting China’s domestic policy (in terms of public administration, ensuring reliable systems of supply and control, the urban-rural divide and migration, and policy on things like anonymity and free speech), less time is spent discussing the geopolitics of the Internet. China certainly has the potential for great influence beyond its own borders, for example affecting communication flows worldwide and the global division of power. For such reasons, it is valuable to move beyond ‘single country studies’ to consider global shifts in attitudes and values shaping the Internet across the world. A contested and contestable space, the Internet is likely to be a focal point for traditional discussions of key values, such as freedom of expression and assembly; remember Hillary Clinton’s 2010 ‘Internet freedom’ speech, delivered at Washington’s Newseum Institute. Contemporary debates over privacy and freedom of expression are indeed increasingly focused on Internet policy and practice.

This is not the first time in the histories of the US and China that their respective foreign policies have been of great interest and importance to each other. However, this might also be a period of anxiety-driven (rather than rational) policy making, particularly if increased exposure and access to information around the world leads to efforts to create Berlin Walls of the digital age. In this period of national anxieties on the part of governments and citizens — who may feel that “something must be done” — there will inevitably be competition between the US, China, and the EU to drive national Internet policies that assert local control and jurisdiction. Ownership and control of the Internet by countries and companies is certainly becoming an increasingly politicized issue. Instead of supporting technical innovation and the diffusion of the Internet, nations are increasingly focused on controlling the flow of online content and exploiting the Internet as a means for gauging public sentiment and opinion, rather than as a channel to help shape public policy and social accountability.

For researchers, it is time to question a myopic focus on national units of analysis when studying the Internet, since many activities of critical importance take place in smaller regions, such as Silicon Valley, larger regions, such as the global South, and in virtual spaces that are truly global. We tend to think in terms of single places (“the Internet”, “the world”, “China”), but as a number of conference speakers emphasized, there is more than one China, if we consider for example Taiwan, Hong Kong, rural China, and the factory zones — each with their different cultural, legal and economic dynamics. Similarly, there are a multitude of actors, for example corporations, which are shaping the Chinese Internet as surely as Beijing is. As Jack Qiu, one of the opening panelists, observed: “There are many Internets, and many worlds.” There are also multiple histories of the Internet in China, and as yet no standard narrative.

The conference certainly made clear that we are learning a lot about China, as a rapidly growing number of Chinese scholars increasingly research and publish on the subject. The vitality of the Chinese Journal of Communication is one sign of this energy, but Internet research is expanding globally as well. Some of the panel topics will be familiar to anyone following the news, even if there is still not much published in the academic literature: smart censorship, trust in online information, human flesh search, political scandal, democratisation. But there were also interesting discussions from new perspectives, or perspectives that are already very familiar in a Western context: social networking, job markets, public administration, and e-commerce.

However, while international conferences and dedicated panels are making these cross-cultural (and cross-topic) discussions and conversations easier, we still lack enough published content about China and the Internet, and it can be difficult to find material, due to its recent diffusion, and major barriers such as language. This is an important point, given how easy it is to oversimplify another culture. A proper comparative analysis is hard and often frustrating to carry out, but important, if we are to see our own frameworks and settings in a different way.

One of the opening panelists remarked that two great transformations had occurred during his academic life: the emergence of the Internet, and the rise of China. The intersection of the two is providing fertile ground for research, and the potential for a whole new, rich research agenda. Of course the challenge for academics is not simply to find new, interesting and important things to say about a subject, but to draw enduring theoretical perspectives that can be applied to other nations and over time.

In returning to the framing question: “is China changing the Internet, or is the Internet changing China?” obviously the answer to both is “yes”, but as Ernest Wilson, Dean of the USC Annenberg School, put it, we need to be asking “how?” and “to what degree?” I hope this preconference encouraged more scholars to pursue these questions.

Reference

[1] Bolsover, G., Dutton, W.H., Law, G. and Dutta, S. (2013) Social Foundations of the Internet in China and the New Internet World: A Cross-National Comparative Perspective. Presented at “China and the New Internet World”, International Communication Association (ICA) Preconference, Oxford Internet Institute, University of Oxford, June 2013.


The OII’s Founding Director (2002-2011), Professor William H. Dutton is Professor of Internet Studies, University of Oxford, and Fellow of Balliol College. Before coming to Oxford in 2002, he was a Professor in the Annenberg School for Communication at the University of Southern California, where he is now an Emeritus Professor. His most recent books include World Wide Research: Reshaping the Sciences and Humanities, co-edited with P. Jeffreys (MIT Press, 2011) and the Oxford Handbook of Internet Studies (Oxford University Press, 2013). Read Bill’s blog.

]]>
The scramble for Africa’s data https://ensr.oii.ox.ac.uk/the-scramble-for-africas-data/ https://ensr.oii.ox.ac.uk/the-scramble-for-africas-data/#comments Mon, 08 Jul 2013 09:21:02 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1230 Mobile phone advert in Zomba, Malawi
Africa is in the midst of a technological revolution, and the current wave of digitisation has the potential to make the continent’s citizens a rich mine of data. Intersection in Zomba, Malawi. Image by john.duffell.


After the last decade’s exponential rise in ICT use, Africa is fast becoming a source of big data. Africans are increasingly emitting digital information with their mobile phone calls, internet use and various forms of digitised transactions, while on a state level e-government starts to become a reality. As Africa goes digital, the challenge for policymakers becomes what the WRR, a Dutch policy organisation, has identified as ‘i-government’: moving from digitisation to managing and curating digital data in ways that keep people’s identities and activities secure.

On one level, this is an important development for African policymakers, given that accurate information on their populations has been notoriously hard to come by and, where it exists, has not been shared. On another, however, it represents a tremendous challenge. The WRR has pointed out the unpreparedness of European governments, who have been digitising for decades, for the age of i-government. How are African policymakers, as relative newcomers to digital data, supposed to respond?

There are two possible scenarios. One is that systems will develop for the release and curation of Africans’ data by corporations and governments, and that it will become possible, in the words of the UN’s Global Pulse initiative, to use it as a ‘public good’ – an invaluable tool for development policies and crisis response. The other is that there will be a new scramble for Africa: a digital resource grab that may have implications as great as the original scramble amongst the colonial powers in the late 19th century.

We know that African data is not only valuable to Africans. The current wave of digitisation has the potential to make the continent’s citizens a rich mine of data about health interventions, human mobility, conflict and violence, technology adoption, communication dynamics and financial behaviour, with the default mode being for this to happen without their consent or involvement, and without ethical and normative frameworks to ensure data protection or to weigh the risks against the benefits. Orange’s recent release of call data from Côte d’Ivoire represents both an example of the emerging potential of African digital data and an illustration of the anonymisation and ethical challenges that such releases present.

I have heard various arguments as to why data protection is not a problem for Africans. One is that people in African countries don’t care about their privacy because they live in a ‘collective society’. (Whatever that means.) Another is that they don’t yet have any privacy to protect because they are still disconnected from the kinds of system that make data privacy important. Another more convincing and evidence-based argument is that the ends may justify the means (as made here by the ICRC in a thoughtful post by Patrick Meier about data privacy in crisis situations), and that if significant benefits can be delivered using African big data these outweigh potential or future threats to privacy. The same argument is being made by Global Pulse, a UN initiative which aims to convince corporations to release data on developing countries as a public good for use in devising development interventions.

There are three main questions: what can incentivise African countries’ citizens and policymakers to address privacy in parallel with the collection of massive amounts of personal data, rather than after abuses occur? What are the models that might be useful in devising privacy frameworks for groups with restricted technological access and sophistication? And finally, how can such a system be participatory enough to be relevant to the needs of particular countries or populations?

Regarding the first question, this may be a lost cause. The WRR’s i-government work suggests that only public pressure due to highly publicised breaches of data security may spur policymakers to act. The answer to the second question is being pursued, among others, by John Clippinger and Alex Pentland at MIT (with their work on the social stack); by the World Economic Forum, which is thinking about the kinds of rules that should govern personal data worldwide; by the aforementioned Global Pulse, which has a strong interest in building frameworks which make it safe for corporations to share people’s data; by Microsoft, which is doing some serious thinking about differential privacy for large datasets; by independent researchers such as Patrick Meier, who is looking at how crowdsourced data about crises and human rights abuses should be handled; and by the Oxford Internet Institute’s new M-Data project which is devising privacy guidelines for collecting and using mobile connectivity data.
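Of the technical approaches mentioned above, differential privacy is perhaps the most formally developed, and its core idea can be sketched in a few lines: a query result is perturbed with random noise calibrated to how much any single person could change it, so that releasing the result reveals almost nothing about any individual. The sketch below is purely illustrative (the function names `laplace_noise` and `private_count` are invented for this example, not drawn from any particular library); it shows the standard Laplace mechanism for a counting query, not a production implementation.

```python
import random

def laplace_noise(scale):
    # A Laplace(0, scale) sample is the difference of two
    # independent exponential samples with mean `scale`.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon):
    """Release a count under epsilon-differential privacy.

    A counting query has sensitivity 1 (one person joining or
    leaving the dataset changes the count by at most 1), so
    Laplace noise with scale 1/epsilon is sufficient.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Illustration: how many of 1,000 simulated subscribers made a call today?
calls = [random.random() < 0.3 for _ in range(1000)]
noisy_total = private_count(calls, lambda made_call: made_call, epsilon=0.5)
```

The privacy budget epsilon controls the trade-off: smaller epsilon means stronger privacy guarantees but noisier answers, which is exactly the kind of risk-versus-benefit calibration the frameworks discussed here would need to make explicit.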

Regarding the last question, participatory systems will require African country activists, scientists and policymakers to build them. To be relevant, they will also need to be made enforceable, which may be an even greater challenge. Privacy frameworks are only useful if they are made a living part of both governance and citizenship: there must be the institutional power to hold offenders accountable (in this case extremely large and powerful corporations, governments and international institutions), and awareness amongst ordinary people about the existence and use of their data. This, of course, has not really been achieved in developed countries, so doing it in Africa may not exactly be a piece of cake.

Notwithstanding these challenges, the region offers an opportunity to push researchers and policymakers – local and worldwide – to think clearly about the risks and benefits of big data, and to make solutions workable, enforceable and accessible. In terms of data privacy, if it works in Burkina Faso, it will probably work in New York, but the reverse is unlikely to be true. This makes a strong argument for figuring it out in Burkina Faso.

Some may contend that this discussion only points out the massive holes in the governance of technology that prevail in Africa – and in fact a whole other level of problems regarding accountability and power asymmetries. My response: Yes. Absolutely.


Linnet Taylor’s research focuses on social and economic aspects of the diffusion of the internet in Africa, and human mobility as a factor in technology adoption (read her blog). Her doctoral research was on Ghana, where she looked at mobility’s influence on the formation and viability of internet cafes in poor and remote areas, networking amongst Ghanaian technology professionals and ICT4D policy. At the OII she works on a Sloan Foundation-funded project on Accessing and Using Big Data to Advance Social Science Knowledge.

]]>
Time for debate about the societal impact of the Internet of Things https://ensr.oii.ox.ac.uk/time-for-debate-about-the-societal-impact-of-the-internet-of-things/ Mon, 22 Apr 2013 14:32:22 +0000 http://blogs.oii.ox.ac.uk/policy/?p=931
European conference on the Internet of Things
The 2nd Annual Internet of Things Europe 2010: A Roadmap for Europe, 2010. Image by Pierre Metivier.
On 17 April 2013, the US Federal Trade Commission published a call for inputs on the ‘consumer privacy and security issues posed by the growing connectivity of consumer devices, such as cars, appliances, and medical devices’, in other words, about the impact of the Internet of Things (IoT) on the everyday lives of citizens. The call is in large part one for information to establish what the current state of technology development is and how it will develop, but it also looks for views on how privacy risks should be weighed against potential societal benefits.

There’s a lot that’s not very new about the IoT. Embedded computing, sensor networks and machine-to-machine communications have been around a long time. Mark Weiser was developing the concept of ubiquitous computing (and prototyping it) at Xerox PARC in 1990. Many of the big ideas in the IoT — smart cars, smart homes, wearable computing — are already envisaged in works such as Nicholas Negroponte’s Being Digital, which was published in 1995, before the mass popularisation of the internet itself. The term ‘Internet of Things’ has been around since at least 1999. What is new is the speed with which technological change has made these ideas implementable on a societal scale. The FTC’s interest reflects a growing awareness of the potential significance of the IoT, and the need for public debate about its adoption.

As the cost and size of devices falls and network access becomes ubiquitous, it is evident that not only major industries but whole areas of consumption, public service and domestic life will be capable of being transformed. The number of connected devices is likely to grow fast in the next few years. The Organisation for Economic Co-operation and Development (OECD) estimates that while a family with two teenagers may have 10 devices connected to the internet today, by 2022 this may well grow to 50 or more. Across the OECD area the number of connected devices in households may rise from an estimated 1.7 billion today to 14 billion by 2022. Programmes such as smart cities, smart transport and smart metering will begin to have their effect soon. In other countries, notably in China and Korea, whole new cities are being built around smart infrastructure, giving technology companies the opportunity to develop models that could be implemented subsequently in Western economies.

Businesses and governments alike see this as an opportunity for new investment both as a basis for new employment and growth and for the more efficient use of existing resources. The UK Government is funding a strand of work under the auspices of the Technology Strategy Board on the IoT, and the IoT is one of five themes that are the subject of the Department for Business, Innovation & Skills (BIS)’s consultation on the UK’s Digital Economy Strategy (alongside big data, cloud computing, smart cities, and eCommerce).

The enormous quantity of information that will be produced will provide further opportunities for collecting and analysing big data. There is consequently an emerging agenda about privacy, transparency and accountability. There are challenges too to the way we understand and can manage the complexity of interacting systems that will underpin critical social infrastructure.

The FTC is not alone in looking to open public debate about these issues. In February, the OII and BCS (the Chartered Institute for IT) ran a joint seminar to help the BCS’s consideration about how it should fulfil its public education and lobbying role in this area. A summary of the contributions is published on the BCS website.

The debate at the seminar was wide-ranging. There was no doubt that the train has left the station as far as this next phase of the Internet is concerned. The scale of major corporate investment, government encouragement and entrepreneurial enthusiasm is not to be deflected. In many sectors of the economy there are changes already being felt by consumers, or which will be soon enough. Smart metering, smart grid, and transport automation (including cars) are all examples. A lot of the discussion focused on risk. In a society which places high value on audit and accountability, it is perhaps unsurprising that early implementations have often used sensors and tags to track processes and monitor activity. This is especially attractive in industrial structures that have high degrees of subcontracting.

Wider societal risks were also discussed. As for the FTC, the privacy agenda is salient. There is real concern that the assumptions which underlie the data protection regime, especially its reliance on data minimisation, will not be adequate to protect individuals in an era of ubiquitous data. Nor is it clear that the UK’s regulator, the Information Commissioner, will be equipped to deal with the volume of potential business. Alongside privacy, there is also concern for security and the protection of critical infrastructure. The growth of reliance on the IoT will make cybersecurity significant in many new ways. There are issues too about complexity and the unforeseen, and arguably unforeseeable, consequences of the interactions between complex, large, distributed systems acting in real time, with consequences that go very directly to the wellbeing of individuals and communities.

There are great opportunities and a pressing need for social research into the IoT. Data about social impacts has hitherto been limited, given the relatively few systems deployed. This will change rapidly. As governments consult and bodies like the BCS seek to advise, it is very desirable that public debate about privacy and security, access and governance, takes place on the basis of real evidence and sound analysis.

]]>
Papers on Policy, Activism, Government and Representation: New Issue of Policy and Internet https://ensr.oii.ox.ac.uk/issue-34/ Wed, 16 Jan 2013 21:40:43 +0000 http://blogs.oii.ox.ac.uk/policy/?p=667 We are pleased to present the combined third and fourth issue of Volume 4 of Policy and Internet. It contains eleven articles, each of which investigates the relationship between Internet-based applications and data and the policy process. The papers have been grouped into the broad themes of policy, government, representation, and activism.

POLICY: In December 2011, the European Parliament Directive on Combating the Sexual Abuse, Sexual Exploitation of Children and Child Pornography was adopted. The directive’s much-debated Article 25 requires Member States to ensure the prompt removal of child pornography websites hosted in their territory and to endeavor to obtain the removal of such websites hosted outside their territory. Member States are also given the option to block access to such websites to users within their territory. Both these policy choices have been highly controversial and much debated; Karel Demeyer, Eva Lievens, and Jos Dumortier analyse the technical and legal means of blocking and removing illegal child sexual content from the Internet, clarifying the advantages and drawbacks of the various policy options.

Another issue of jurisdiction surrounds government use of cloud services. While cloud services promise to render government service delivery more effective and efficient, they are also potentially stateless, triggering government concern over data sovereignty. Kristina Irion explores these issues, tracing the evolution of individual national strategies and international policy on data sovereignty. She concludes that data sovereignty presents national governments with a legal risk that can’t be addressed through technology or contractual arrangements alone, and recommends that governments retain sovereignty over their information.

While the Internet allows unprecedented freedom of expression, it also facilitates anonymity and facelessness, increasing the possibility of damage caused by harmful online behavior, including online bullying. Myoung-Jin Lee, Yu Jung Choi, and Setbyol Choi investigate the discourse surrounding the introduction of the Korean Government’s “Verification of Identity” policy, which aimed to foster a more responsible Internet culture by mandating registration of a user’s real identity before allowing them to post to online message boards. The authors find that although arguments about restrictions on freedom of expression continue, the policy has maintained public support in Korea.

A different theoretical approach to another controversial topic is offered by Sameer Hinduja, who applies Actor-Network Theory (ANT) to the phenomenon of music piracy, arguing that we should pay attention not only to the social aspects, but also to the technical, economic, political, organizational, and contextual aspects of piracy. He argues that each of these components merits attention and response by law enforcers if progress is to be made in understanding and responding to digital piracy.

GOVERNMENT: While many governments have been lauded for their success in the online delivery of services, fewer have been successful in employing the Internet for more democratic purposes. Tamara A. Small asks whether the Canadian government — with its well-established e-government strategy — fits the pattern of service delivery oriented (rather than democracy oriented) e-government. Based on a content analysis of Government of Canada tweets, she finds that they do indeed tend to focus on service delivery, and shows how nominal a commitment the Canadian government has made to the more interactive and conversational qualities of Twitter.

While political scientists have greatly benefitted from the increasing availability of online legislative data, data collections and search capabilities are not comprehensive, nor are they comparable across the different U.S. states. David L. Leal, Taofang Huang, Byung-Jae Lee, and Jill Strube review the availability and limitations of state online legislative resources in facilitating political research. They discuss levels of capacity and access, note changes over time, and note that their usability index could potentially be used as an independent variable for researchers seeking to measure the transparency of state legislatures.

REPRESENTATION: An ongoing theme in the study of elected representatives is how they present themselves to their constituents in order to enhance their re-election prospects. Royce Koop and Alex Marland compare presentation of self by Canadian Members of Parliament on parliamentary websites and in the older medium of parliamentary newsletters. They find that MPs are likely to present themselves as outsiders on their websites, that this differs from patterns observed in newsletters, and that party affiliation plays an important role in shaping self-presentation online.

Many strategic, structural and individual factors can explain the use of online campaigning in elections; based on candidate surveys, Julia Metag and Frank Marcinkowski show that strategic and structural variables, such as party membership or the perceived share of indecisive voters, do most to explain online campaigning. Internet-related perceptions are explanatory in a few cases; if candidates think that other candidates campaign online they feel obliged to use online media during the election campaign.

ACTIVISM: Mainstream opinion at the time of the protests of the “Arab Spring” – and the earlier Iranian “Twitter Revolution” – was that use of social media would significantly affect the outcome of revolutionary collective action. Throughout the Libyan Civil War, Twitter users took the initiative to collect and process data for use in the rebellion against the Qadhafi regime, including map overlays depicting the situation on the ground. In an exploratory case study on crisis mapping of intelligence information, Steve Stottlemyre and Sonia Stottlemyre investigate whether the information collected and disseminated by Twitter users during the Libyan civil war met the minimum requirements to be considered tactical military intelligence.

Philipp S. Mueller and Sophie van Huellen focus on the 2009 post-election protests in Teheran in their analysis of the effect of many-to-many media on power structures in society. They offer two analytical approaches as possible ways to frame the complex interplay of media and revolutionary politics. While social media raised international awareness by transforming the agenda-setting process of the Western mass media, the authors conclude that, given the inability of protesters to overthrow the regime, a change in the “media-scape” does not automatically imply a changed “power-scape.”

A different theoretical approach is offered by Mark K. McBeth, Elizabeth A. Shanahan, Molly C. Arrandale Anderson, and Barbara Rose, who look at how interest groups increasingly turn to new media such as YouTube as tools for indirect lobbying, allowing them to enter into and have influence on public policy debates through wide dissemination of their policy preferences. They explore the use of policy narratives in new media, using a Narrative Policy Framework to analyze YouTube videos posted by the Buffalo Field Campaign, an environmental activist group.

]]>
New issue of Policy and Internet (2,3) https://ensr.oii.ox.ac.uk/new-issue-of-policy-and-internet-23/ Thu, 04 Nov 2010 12:08:24 +0000 http://blogs.oii.ox.ac.uk/policy/?p=121 Welcome to the third issue of Policy & Internet for 2010. We are pleased to present five articles focusing on substantive public policy issues arising from widespread use of the Internet: regulation of trade in virtual goods; development of electronic government in Korea; online policy discourse in UK elections; regulatory models for broadband technologies in the US; and alternative governance frameworks for open ICT standards.

Three of the articles are the first to be published from the highly successful conference ‘Internet, Politics and Policy‘ held by the journal in Oxford, 16th-17th September 2010. You may access any of the articles below at no charge.

Helen Margetts: Editorial

Vili Lehdonvirta and Perttu Virtanen: A New Frontier in Digital Content Policy: Case Studies in the Regulation of Virtual Goods and Artificial Scarcity

Joon Hyoung Lim: Digital Divides in Urban E-Government in South Korea: Exploring Differences in Municipalities’ Use of the Internet for Environmental Governance

Darren G. Lilleker and Nigel A. Jackson: Towards a More Participatory Style of Election Campaigning: The Impact of Web 2.0 on the UK 2010 General Election

Michael J. Santorelli: Regulatory Federalism in the Age of Broadband: A U.S. Perspective

Laura DeNardis: E-Governance Policies for Interoperability and Open Standards

]]>
Internet, Politics, Policy 2010: Closing keynote by Viktor Mayer-Schönberger https://ensr.oii.ox.ac.uk/internet-politics-policy-2010-closing-keynote-by-viktor-mayer-schonberger/ Fri, 17 Sep 2010 15:48:04 +0000 http://blogs.oii.ox.ac.uk/policy/?p=94 Our two-day conference is coming to a close with a keynote by Viktor Mayer-Schönberger who is soon to be joining the faculty of the Oxford Internet Institute as Professor of Internet Governance and Regulation.

Viktor talked about the theme of his recent book “Delete: The Virtue of Forgetting in the Digital Age” (a webcast of this keynote will be available soon on the OII website, but you can also listen to a previous talk here). It touches on many of the recent debates about information that has been published on the web in some context and which might suddenly come back to us in a completely different context, e.g. when applying for a job and being confronted with some drunken picture of us obtained from Facebook.

Viktor puts this into a broad perspective, contrasting the two themes of “forgetting” and “remembering”. He convincingly argues that for most of human history, forgetting has been the default. This state of affairs has changed quite dramatically with advances in computer technology, data storage and information retrieval on a global information infrastructure. Now remembering is the default, as most of the information stored digitally is available forever and in multiple places.

What he sees at stake is power: the permanent threat that our activities are being watched by others – not necessarily now, but possibly in the future – can alter our behaviour today. What is more, he says that without forgetting it is hard for us to forgive, as we deny ourselves and others the possibility to change.

No matter to what degree you are prepared to follow the argument, the most intriguing question is how the current state of remembering could be changed back to forgetting. Viktor discusses a number of ideas that, in his view, offer no real solution:

  1. privacy rights – these don’t go very far in changing actual behaviour
  2. information ecology – the idea of storing only as much as necessary
  3. digital abstinence – simply not using digital tools, which is not very practical
  4. full contextualization – storing as much information as possible, in order to provide the context needed to evaluate information from the past
  5. cognitive adjustments – humans would have to learn how to discard information, which is very difficult
  6. privacy digital rights management – would require a global infrastructure that could create more threats than it solves

Instead, Viktor wants to establish mechanisms that ease forgetting, primarily by making it a little more difficult to remember. Ideas include:

  • an expiration date for information, less to technically force deletion than to socially prompt thinking about forgetting
  • making older information a bit more difficult to retrieve

Whatever the actual tool, the default should be forgetting, with systems prompting their users to reflect on and choose just how long a certain piece of information should remain valid.

Nice closing statement: “Let us remember to forget!”

]]>
New issue of Policy and Internet (2,2) https://ensr.oii.ox.ac.uk/new-issue-of-policy-and-internet-22/ Thu, 19 Aug 2010 12:17:12 +0000 http://blogs.oii.ox.ac.uk/policy/?p=128 Welcome to the second issue of Policy & Internet for 2010! We are pleased to present six articles which investigate the role of the Internet in a wide range of policy processes and sectors: agenda setting in online and traditional media; environmental policy networks; online deliberation on climate change; data protection and privacy; net neutrality; and digital inclusion/exclusion. You may access any of the articles below at no charge.

Helen Margetts: Editorial

Ben Sayre, Leticia Bode, Dhavan Shah, Dave Wilcox, and Chirag Shah: Agenda Setting in a Digital Age: Tracking Attention to California Proposition 8 in Social Media, Online News and Conventional News

Kathleen McNutt and Adam Wellstead: Virtual Policy Networks in Forestry and Climate Change in the U.S. and Canada: Government Nodality, Internationalization and Actor Complexity

Julien Talpin and Stéphanie Wojcik: Deliberating Environmental Policy Issues: Comparing the Learning Potential of Online and Face-To-Face Discussions on Climate Change

Andrew A. Adams, Kiyoshi Murata, and Yohko Orito: The Development of Japanese Data Protection

Scott Jordan: The Application of Net Neutrality to Wireless Networks Based on Network Architecture

Alison Powell, Amelia Bryne, and Dharma Dailey: The Essential Internet: Digital Exclusion in Low-Income American Communities

]]>