New Voluntary Code: Guidance for Sharing Data Between Organisations https://ensr.oii.ox.ac.uk/new-voluntary-code-guidance-for-sharing-data-between-organisations/ Fri, 08 Jan 2016 Many organisations are coming up with their own internal policy and guidelines for data sharing. However, for data sharing between organisations to be straightforward, there needs to be a common understanding of basic policy and practice. During her time as an OII Visiting Associate, Alison Holt developed a pragmatic solution in the form of a Voluntary Code, anchored in the developing ISO standards for the Governance of Data. She discusses the voluntary code, and the need to provide urgent advice to organisations struggling with policy for sharing data.

Collecting, storing and distributing digital data is significantly easier and cheaper now than ever before, in line with predictions from Moore, Kryder and Gilder. Organisations are incentivised to collect large volumes of data with the hope of unleashing new business opportunities or maybe even new businesses. Consider the likes of Uber, Netflix, and Airbnb, and the other data mongers who have built services based solely on digital assets.

The use of this new abundant data will continue to disrupt traditional business models for years to come, and there is no doubt that these large data volumes can provide value. However, they also bring associated risks (such as unplanned disclosure and hacks) and they come with constraints (for example in the form of privacy or data protection legislation). Hardly a week goes by without a data breach hitting the headlines. Even if your telecommunications provider didn’t inadvertently share your bank account number and sort code with hackers, and your child wasn’t one of the hundreds of thousands of children whose birthdays, names, and photos were exposed by a smart toy company, you might still be wondering exactly how your data is being looked after by the banks, schools, clinics, utility companies, local authorities and government departments that are so quick to collect your digital details.

Then there are the companies who have invited you to sign away the rights to your data and possibly your privacy too – the ones that ask you to sign the Terms and Conditions for access to a particular service (such as a music or online shopping service) or have asked you for access to your photos. And possibly you are one of the “worried well” who wear or carry a device that collects your health data and sends it back to storage in a faraway country, for analysis.

So unless you live in a lead-lined concrete bunker without any access to internet-connected devices, and you never pass by webcams or sensors, or use public transport or public services, your data is being collected and shared. And for the majority of the time, you benefit from this enormously. The bus stop tells you exactly when the next bus is coming, you have easy access to services and entertainment fitted very well to your needs, and you can do most of your banking and utility transactions online in the peace and quiet of your own home. Beyond you as an individual, there are organisations “out there” sharing your data to provide you with better healthcare, education, smarter city services and secure and efficient financial services, and generally matching the demand for services with the people needing them.

So we most likely all have data that is being shared and it is generally in our interest to share it, but how can we trust the organisations responsible for sharing our data? As an organisation, how can I know that my partner and supplier organisations are taking care of my client and product information?

Organisations taking these issues seriously are coming up with their own internal policy and guidelines. However, for data sharing between organisations to be straightforward, there needs to be a common understanding of basic policy and practice. During my time as a visiting associate at the Oxford Internet Institute, University of Oxford, I have developed a pragmatic solution in the form of a Voluntary Code. The Code has been produced using the guidelines for voluntary code development produced by the Office of Community Affairs, Industry Canada. More importantly, the Code is anchored in the developing ISO standards for the Governance of Data (the 38505 series). These standards apply the governance principles and model from the 38500 standard and introduce the concept of a data accountability map, highlighting six focus areas for a governing body to apply governance. The early-stage standard suggests considering the aspects of Value, Risk and Constraint for each area, to determine what practice and policy should be applied to maximise the value from organisational data, whilst applying constraints as set by legislation and local policy, and minimising risk.

I am Head of the New Zealand delegation to the ISO group developing IT Service Management and IT Governance standards, SC40, and am leading the development of the 38505 series of Governance of Data standards, working with a talented editorial team of industry and standards experts from Australia, China and the Netherlands. I am confident that the robust ISO consensus-led process, involving subject matter experts from around the world, will result in the publication of best practice guidance for the governance of data, presented in a format that will have relevance and acceptance internationally.

In the meantime, however, I see a need to provide urgent advice to organisations struggling with policy for sharing data. I have used my time at Oxford to interview policy, ethics, smart city, open data, health informatics, education, cyber security and social science experts and users, owners and curators of large data sets, and have come up with a “Voluntary Code for Data Sharing”. The Code takes three areas from the data accountability map in the developing ISO standard 38505-1 — namely Collect, Store and Distribute — and applies the aspects of Value, Risk and Constraint to provide seven maxims for sharing data. To assist with adoption and compliance, the Code provides references to best practice and examples. As the ISO standards for the Governance of Data develop, the Code will be updated. New examples of good practice will be added as they come to light.
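To make this structure concrete, the sketch below lays out the accountability map as a simple lookup table in Python. Only the area and aspect names come from the draft standard as described above; every entry is a hypothetical placeholder written for illustration — these are not the actual seven maxims of the Code.

```python
# Illustrative sketch of the data accountability map: three lifecycle
# areas (from the draft ISO/IEC 38505-1) crossed with the aspects of
# Value, Risk and Constraint. Entries are invented placeholders,
# NOT the actual maxims of the Voluntary Code.

AREAS = ["Collect", "Store", "Distribute"]
ASPECTS = ["Value", "Risk", "Constraint"]

accountability_map = {
    ("Collect", "Value"): "Collect only data with a clear business purpose",
    ("Collect", "Risk"): "Assess re-identification risk at the point of capture",
    ("Collect", "Constraint"): "Record the legal basis for collection",
    ("Store", "Value"): "Keep data in formats that preserve future utility",
    ("Store", "Risk"): "Encrypt at rest; log and review access",
    ("Store", "Constraint"): "Apply retention limits set by legislation",
    ("Distribute", "Value"): "Share under agreements stating permitted uses",
    ("Distribute", "Risk"): "De-identify before release where feasible",
    ("Distribute", "Constraint"): "Honour jurisdictional transfer restrictions",
}

def review(area: str) -> None:
    """Print the prompts a governing body might review for one area."""
    for aspect in ASPECTS:
        print(f"{area} / {aspect}: {accountability_map[(area, aspect)]}")

review("Store")
```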

[A permanent home for the voluntary code is currently being organised; please email me in the meantime if you are interested in it: Alison.holt@longitude174.com]

The Code is deliberately short and succinct, but it does provide links for those who need to read more to understand the underpinning practices and standards, and those tasked with implementing organisational data policy and practice. It cannot guarantee good outcomes. With new security threats arising daily, nobody can fully guarantee the safety of your information. However, if you deal with an organisation that is compliant with the Voluntary Code, then you can at least have assurance that the organisation has considered how it is using your data now and how it might want to reuse your data in the future, how and where your data will be stored, and then finally how your data will be distributed or discarded. And that’s a good start!


Alison Holt was an OII Academic Visitor in late 2015. She is an internationally acclaimed expert in the Governance of Information Technology and Data, heading up the New Zealand delegations to the international standards committees for IT Governance and Service Management (SC40) and Software and Systems Engineering (SC7). The British Computer Society published Alison’s first book on the Governance of IT in 2013.

How can big data be used to advance dementia research? https://ensr.oii.ox.ac.uk/how-can-big-data-be-used-to-advance-dementia-research/ Mon, 16 Mar 2015
Image by K. Kendall of “Sights and Scents at the Cloisters: for people with dementia and their care partners”; a program developed in consultation with the Taub Institute for Research on Alzheimer’s Disease and the Aging Brain, Alzheimer’s Disease Research Center at Columbia University, and the Alzheimer’s Association.

Dementia affects about 44 million individuals, a number that is expected to nearly double by 2030 and triple by 2050. With an estimated annual cost of USD 604 billion, dementia represents a major economic burden for both industrial and developing countries, as well as a significant physical and emotional burden on individuals, family members and caregivers. There is currently no cure for dementia or a reliable way to slow its progress, and the G8 health ministers have set the goal of finding a cure or disease-modifying therapy by 2025. However, the underlying mechanisms are complex, shaped by a range of genetic and environmental factors that may have no immediately apparent connection to brain health.

Of course, medical research relies on access to large amounts of data, including clinical, genetic and imaging datasets. Making these widely available across research groups helps reduce data collection efforts, increases the statistical power of studies and makes data accessible to more researchers. This is particularly important from a global perspective: Swedish researchers say, for example, that they are sitting on a goldmine of excellent longitudinal and linked data on a variety of medical conditions including dementia, but that they have too few researchers to exploit its potential. Other countries have many researchers, but less data.

‘Big data’ adds new sources of data and ways of analysing them to the repertoire of traditional medical research data. This can include (non-medical) data from online patient platforms, shop loyalty cards, and mobile phones — made available, for example, through Apple’s ResearchKit, just announced last week. As dementia is believed to be influenced by a wide range of social, environmental and lifestyle-related factors (such as diet, smoking, fitness training, and people’s social networks), this behavioural data has the potential to improve early diagnosis, as well as allow retrospective insights into events in the years leading up to a diagnosis. For example, data on changes in shopping habits (accessible through loyalty cards) may provide an early indication of dementia.
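As a purely hypothetical illustration of how such a signal might be derived, the sketch below compares a shopper’s recent mix of purchase categories against their long-term baseline and flags sustained drift. Everything in it — the categories, the similarity threshold, and the premise that such drift is clinically meaningful — is an assumption made for illustration, not a validated screening method.

```python
from collections import Counter
from math import sqrt

def category_distribution(baskets):
    """Normalise a list of shopping baskets into category frequencies."""
    counts = Counter(item for basket in baskets for item in basket)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

def cosine_similarity(p, q):
    """Cosine similarity between two category-frequency dictionaries."""
    dot = sum(p.get(c, 0.0) * q.get(c, 0.0) for c in set(p) | set(q))
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def habits_have_drifted(baseline_baskets, recent_baskets, threshold=0.7):
    """Flag a sustained shift away from long-term shopping habits.
    The 0.7 threshold is an arbitrary placeholder, not a clinical cut-off."""
    return cosine_similarity(category_distribution(baseline_baskets),
                             category_distribution(recent_baskets)) < threshold

baseline = [["fruit", "bread", "fish"], ["fruit", "vegetables", "bread"]]
recent = [["sweets", "sweets", "ready-meals"], ["sweets", "ready-meals"]]
print(habits_have_drifted(baseline, recent))  # True: the category mix has shifted
```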

However, there are many challenges to using and sharing big data for dementia research. The technology hurdles can largely be overcome, but there are also deep-seated issues around the management of data collection, analysis and sharing, as well as underlying people-related challenges in relation to skills, incentives, and mindsets. Change will only happen if we tackle these challenges at all levels jointly.

As data are combined from different research teams, institutions and nations — or even from non-medical sources — new access models will need to be developed that make data widely available to researchers while protecting the privacy and other interests of the data originator. Establishing robust and flexible core data standards that make data more sharable by design can lower barriers for data sharing, and help avoid researchers expending time and effort trying to establish the conditions of their use.

At the same time, we need policies that protect citizens against undue exploitation of their data. Consent needs to be understood by individuals — including the complex and far-reaching implications of providing genetic information — and should provide effective enforcement mechanisms to protect them against data misuse. Privacy concerns about digital, highly sensitive data are important and should not be de-emphasised as a subordinate goal to advancing dementia research. Beyond releasing data in protected environments, allowing people to voluntarily “donate data”, and making consent understandable and enforceable, we also need governance mechanisms that safeguard appropriate data use for a wide range of purposes. This is particularly important as the significance of data changes with its context of use, and data will never be fully anonymisable.

We also need a favourable ecosystem with stable and beneficial legal frameworks, and links between academic researchers and private organisations for exchange of data and expertise. Legislation needs to take account of the growing importance of global research communities in terms of funding and making best use of human and data resources. Also important is sustainable funding for data infrastructures, as well as an understanding that funders can have considerable influence on how research data, in particular, are made available. One of the most fundamental challenges in terms of data sharing is that there are relatively few incentives or career rewards that accrue to data creators and curators, so ways to recognise the value of shared data must be built into the research system.

In terms of skills, we need more health-/bioinformatics talent, as well as collaboration with those disciplines researching factors “below the neck”, such as cardiovascular or metabolic diseases, as scientists increasingly find that these may be associated with dementia to a larger extent than previously thought. Linking in engineers, physicists or innovative private sector organisations may prove fruitful for tapping into new skill sets to separate the signal from the noise in big data approaches.

In summary, everyone involved needs to adopt a mindset of responsible data sharing, collaborative effort, and a long-term commitment to building two-way connections between basic science, clinical care and healthcare in everyday life. Fully capturing the health-related potential of big data requires “out of the box” thinking in terms of how to profit from the huge amounts of data being generated routinely across all facets of our everyday lives. This sort of data offers ways for individuals to become involved, by actively donating their data to research efforts, participating in consumer-led research, or engaging as citizen scientists. Empowering people to be active contributors to science may help alleviate the common feeling of helplessness faced by those whose lives are affected by dementia.

Of course, to do this we need to develop a culture that promotes trust between the people providing the data and those capturing and using it, as well as an ongoing dialogue about new ethical questions raised by collection and use of big data. Technical, legal and consent-related mechanisms to protect individuals’ sensitive biomedical and lifestyle-related data against misuse may not always be sufficient, as the recent Nuffield Council on Bioethics report has argued. For example, we need a discussion around the direct and indirect benefits to participants of engaging in research, when it is appropriate for data collected for one purpose to be put to other uses, and to what extent individuals can make decisions — particularly on genetic data — which may have far-reaching consequences for their own and their family members’ professional and personal lives if health conditions, for example, can be predicted by others (such as employers and insurance companies).

Policymakers and the international community have an integral leadership role to play in informing and driving the public debate on responsible use and sharing of medical data, as well as in supporting the process through funding, incentivising collaboration between public and private stakeholders, creating data sharing incentives (for example, via taxation), and ensuring stability of research and legal frameworks.

Dementia is a disease that concerns all nations in the developed and developing world, and just as diseases have no respect for national boundaries, neither should research into dementia (and the data infrastructures that support it) be seen as a purely national or regional priority. The high personal, societal and economic importance of improving the prevention, diagnosis, treatment and cure of dementia worldwide should provide a strong incentive for establishing robust and safe mechanisms for data sharing.


Read the full report: Deetjen, U., E. T. Meyer and R. Schroeder (2015) Big Data for Advancing Dementia Research. Paris, France: OECD Publishing.

Does a market-approach to online privacy protection result in better protection for users? https://ensr.oii.ox.ac.uk/does-a-market-approach-to-online-privacy-protection-result-in-better-protection-for-users/ Wed, 25 Feb 2015 Ed: You examined the voluntary provision by commercial sites of information privacy protection and control under the self-regulatory policy of the U.S. Federal Trade Commission (FTC). In brief, what did you find?

Yong Jin: First, because we rely on the Internet to perform almost all types of transactions, how personal privacy is protected is perhaps one of the most important issues we face in this digital age. There are many important findings: the most significant one is that the more popular sites did not necessarily provide better privacy control features for users than sites that were randomly selected. This is surprising because one might expect “the more popular, the better privacy protection” — a sort of marketplace magic that automatically solves the issue of personal privacy online. This was not the case at all, because the popular sites with more resources did not provide better privacy protection. Of course, the Internet in general is a malleable medium. This means that commercial sites can design, modify, or easily manipulate user interfaces to maximize the ease with which users can protect their personal privacy. The fact that this is not really happening for commercial websites in the U.S. is not only alarming, but also suggests that commercial forces may not have a strong incentive to provide privacy protection.

Ed: Your sample included websites oriented toward young users and sensitive data relating to health and finance: what did you find for them?

Yong Jin: Because the sample size for these websites was limited, caution is needed in interpreting the results. But what is clear is that just because the websites deal with health or financial data, they did not seem to be better at providing privacy protection. To me, this should raise enormous concerns among those who use the Internet to seek health information or to manage financial data. The finding should also inform and urge policymakers to ask whether the current non-intervention policy (regarding commercial websites in the U.S.) is effective, when no consideration is given to the different privacy needs of different commercial sectors.

Ed: How do your findings compare with the first investigation into these matters by the FTC in 1998?

Yong Jin: This is a very interesting question. In fact, at least as far as the findings from this study are concerned, it seems that no clear improvement has been made in almost two decades. Of course, the picture is somewhat complicated. On the one hand, we see (on the surface) that websites have a lot more interactive features. But this does not necessarily mean improvement, because when it comes to actually informing users of what features are available for their privacy control and protection, they still tend to perform poorly. Note also that today’s privacy policies are longer and denser than ever, which makes it even more difficult for users to understand what options they actually have. I think informing people about what they can actually do is harder, but is becoming more important in today’s online environment.

Ed: Is this just another example of a US market-led vs European regulation-led approach to a particular problem? Or is the situation more complicated?

Yong Jin: The answer is yes and no. Yes, because a US market-led approach clearly presents no strong statutory ground to mandate privacy protection on commercial websites. However, the answer is also no: even in the EU there is no regulatory mandate for websites to have certain interface protections concerning how users should be informed about their personal data, and how they interact with websites to control its use. The difference lies more in the fundamental principle of the “opt-in” EU approach. Although the “opt-in” approach is stronger than the “opt-out” approach in the U.S., it does not require websites to have certain interface-design aspects that are optimized for users’ data control. In other words, to me, the reality of EU regulation (despite its robust policy approach) will not necessarily be rosier than the U.S., because commercial websites in the EU context also operate under the same incentive of personal data collection and use. Ultimately, this is an empirical question that will require further studies. Interestingly, the next frontier of this debate will be privacy on mobile platforms – and useful information concerning this can be found at the OII’s project to develop ethical privacy guidelines for mobile connectivity measurements.

Ed: Awareness of issues around personal data protection is pretty prominent in Europe — witness the recent European Court of Justice ruling about the ‘Right to Forget’ — how prominent is this awareness in the States? Who’s interested in / pushing / discussing these issues?

Yong Jin: The general public in the U.S. has had enormous concern for personal data privacy since Edward Snowden’s 2013 revelations of extensive government surveillance activities. Yet my sense is that public awareness concerning data collection and surveillance by commercial companies has not yet reached the same level. Certainly, issues such as the “Right to Forget” are being discussed among only a small circle of scholars, website operators, journalists, and policymakers, and I see the general public remains largely left out of this discussion. In fact, a number of U.S. scholars have recently begun to weigh the pros and cons of a “Right to Forget” in terms of the public’s right to know vs the individual’s right to privacy. Given the strong tradition of freedom of speech, however, I highly doubt that U.S. policymakers will have a serious interest in pushing a similar type of approach in the foreseeable future.

My own work on privacy awareness, digital literacy, and behavior online suggests that public interest and demand for strong legislation such as a “Right to Forget” is a long shot, especially in the context of commercial websites.

Ed: Given privacy policies are notoriously awful to deal with (and are therefore generally unread) — what is the solution? You say the situation doesn’t seem to have improved in nearly two decades, and that some aspects — such as readability of policies — might actually have become worse: is this just ‘the way things are always going to be’, or are privacy policies something that realistically can and should be addressed across the board, not just for a few sites?

Yong Jin: A great question, and I see no easy answer! I actually pondered a similar question when I conducted this study. I wonder: “Are there any viable solutions for online privacy protection when commercial websites are so desperate to use personal data?” My short answer is No. And I do think the problem will persist if the current regulatory contours in the U.S. continue. This means that there is a need for appropriate policy intervention that is not entirely dependent on market-based solutions.

My longer answer would be that realistically, to solve the notoriously difficult privacy problems on the Internet, we will need multiple approaches — which means a combination of appropriate regulatory forces from all the entities involved: regulatory mandates (government), user awareness and literacy (public), commercial firms and websites (market), and interface design (technology). For instance, it is plausible that a certain level of readability could be required of the policy statements of all websites targeting children or teenagers. Of course, this will function alongside appropriate organizational behaviors, users’ awareness of and interest in privacy, etc. In my article I put a particular emphasis on the role of the government (particularly in the U.S.) where the industry often ‘captures’ the regulatory agencies. The issue is quite complicated because for privacy protection, it is not just the FTC but also Congress who should act to empower the FTC in its jurisdiction. The apparent lack of improvement over the years since the FTC took over online privacy regulation in the mid-1990s reflects this gridlock in legislative dynamics — as much as it reflects the commercial imperative for personal data collection and use.
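By way of illustration, a readability requirement of this kind could be screened automatically with a standard formula such as Flesch Reading Ease. The sketch below implements that formula with a crude syllable-counting heuristic; the sample policy sentence is invented, and this is an illustration only, not a tool used in the study.

```python
import re

def count_syllables(word):
    """Crude heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease, where higher scores (towards 100) mean easier text:
    206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

policy = ("We may share your personal information with affiliated entities "
          "to the extent permitted by applicable law.")
print(round(flesch_reading_ease(policy), 1))  # a low score flags hard-to-read text
```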

I made a similar argument for multiple approaches to solve privacy problems in my article Offline Status, Online Status: Reproduction of Social Categories in Personal Information Skill and Knowledge; related, excellent discussions can be found in Information Privacy in Cyberspace Transactions (by Jerry Kang), and Exploring Identity and Identification in Cyberspace, by Oscar Gandy.

Read the full article: Park, Y.J. (2014) A Broken System of Self-Regulation of Privacy Online? Surveillance, Control, and Limits of User Features in U.S. Websites. Policy & Internet 6 (4) 360-376.


Yong Jin Park was talking to blog editor David Sutcliffe.

Yong Jin Park is an Associate Professor at the School of Communications, Howard University. His research interests center on social and policy implications of new technologies; current projects examine various dimensions of digital privacy.

The Future of Europe is Science — and ethical foresight should be a priority https://ensr.oii.ox.ac.uk/the-future-of-europe-is-science/ Thu, 20 Nov 2014 On October 6 and 7, the European Commission, with the participation of Portuguese authorities and the support of the Champalimaud Foundation, organised in Lisbon a high-level conference on “The Future of Europe is Science”. Mr. Barroso, President of the European Commission, opened the meeting. I had the honour of giving one of the keynote addresses.

The explicit goal of the conference was twofold. On the one hand, we tried to take stock of European achievements in science, engineering, technology and innovation (SETI) during the last 10 years. On the other hand, we looked into potential future opportunities that SETI may bring to Europe, both in economic terms (growth, jobs, new business opportunities) and in terms of wellbeing (individual welfare and higher social standards).

One of the most interesting aspects of the meeting was the presentation of the latest report on “The Future of Europe is Science” by the President’s Science and Technology Advisory Council (STAC). The report addresses some very big questions: How will we keep healthy? How will we live, learn, work and interact in the future? How will we produce and consume and how will we manage resources? It also seeks to outline some key challenges that will be faced by Europe over the next 15 years. It is well written, clear, evidence-based and convincing. I recommend reading it. In what follows, I wish to highlight three of its features that I find particularly significant.

First, it is enormously refreshing and reassuring to see that the report treats science and technology as equally important and intertwined. The report takes this for granted, but anyone stuck in some Greek dichotomy between knowledge (episteme, science) and mere technique (techne, technology) will be astonished. While this divorcing of the two has always been a bad idea, it is still popular in contexts where applied science, e.g. applied physics or engineering, is considered a Cinderella. During my talk, I referred to Galileo as a paradigmatic scientist who had to be innovative in terms of both theories and instruments.

Today, technology is the outcome of innovative science and there is almost no science that is independent of technology, in terms of reliance on digital data and processing or (and this is often an inclusive or) in terms of investigations devoted to digital phenomena, e.g. in the social sciences. Of course, some Fields Medallists may not need computers to work, and may not work on computational issues, but they represent an exception. This year, Hiroshi Amano, Shuji Nakamura and Isamu Akasaki won the Nobel in physics “for the invention of efficient blue light-emitting diodes which has enabled bright and energy-saving white light sources”. Last year, François Englert and Peter Higgs were awarded the Nobel in physics “for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles, and which recently was confirmed through the discovery of the predicted fundamental particle, by the ATLAS and CMS experiments at CERN’s Large Hadron Collider”. Without the technologically sophisticated work done at CERN, their theoretical discovery would have remained unsupported. The hope is that universities, research institutions, R&D centres as well as national research agencies will follow the approach espoused by STAC and think strategically in terms of technoscience.

The second point concerns some interesting statistics. The report uses several sources—especially the 2014 Eurobarometer survey of “Public perception of science, research and innovation”—to analyse and advise about the top priorities for SETI over the next 15 years, as identified by EU respondents. The picture that emerges is an ageing population worried, first of all, about its health, then about its children’s jobs, and only after that about the environment: 55 % of respondents identified “health and medical care” as among what they thought should be the main priorities for science and technological development over the next 15 years; 49 % opted for “job creation”; 33 % privileged “education and skills”. So we spent most of the meeting in Lisbon discussing these three areas. Other top priorities include “protection of the environment” (30 %), “energy supply” (25 %) and the “fight against climate change” (22 %).

So far so predictable, although it is disappointing to see such a low concern about the environment, a clear sign that even educated Europeans (with the exception of Danish and Swedish respondents) may not be getting the picture: there is no point in being healthy and employed in a desert. Yet this is not what I wish to highlight. Rather, on p. 14 of the report, the authors themselves admit that: “Contrary to our expectations, citizens do not consider the protection of personal data to be a high priority for SET in the next 15 years (11 %)”. This is very interesting. As a priority, data protection ranks as low as quality of housing: nice, but very far from essential. The authors quickly add that “but this might change in the future if citizens are confronted with serious security problems”.

They are right, but the point remains that, at the moment, all the fuss about privacy in the EU is a political rather than a social priority. Recall that this is an ageing population of grown-ups, not a bunch of teenagers in love with pictures of cats and friends online, allegedly unable to appreciate what privacy means (a caricature increasingly unbelievable anyway). Perhaps we “do not get it” when we should (a bit like the environmental issues) and need to be better informed. Or perhaps we are informed and still think that other issues are much more pressing. Either way, our political representatives should take notice.

Finally, and most importantly, the report contains a recommendation that I find extremely wise and justified. On p. 19, the Advisory Council acknowledges that, among the many foresight activities to be developed by the Commission, one in particular “should also be a priority”: ethical foresight. This must be one of the first times that ethical foresight is theorised as a top priority in the development of science and technology. The recommendation is based on the crucial and correct realisation that ethical choices, values, options and constraints influence the world of SETI much more than any other force. The evaluation of what is morally good, right or necessary shapes public opinion, hence the socially acceptable and the politically feasible and so, ultimately, the legally enforceable.

In the long run, business is constrained by law, which is constrained by ethics. This essential triangle means that—in the context of technoscientific research, development and innovation—ethics cannot be a mere add-on, an afterthought, a latecomer or an owl of Minerva that takes its flight only when the shades of night are gathering, once bad solutions have been implemented and mistakes have been made. Ethics must sit at the table of policy-making and decision-taking procedures from day one. It must inform our strategies about SETI especially at the beginning, when changing the course of action is easier and less costly, in terms of resources and impact. We must think twice but above all we must think before taking important steps, in order to avoid wandering into what Galileo defined as the dark labyrinth of ignorance.

As I stressed at the end of my keynote, the future of Europe is science, and this is why our priority must be ethics now.

Read the editorial: Floridi, L. (2014) Technoscience and Ethics Foresight. Editorial, Philosophy & Technology 27 (4) 499-501.


Luciano Floridi is the OII’s Professor of Philosophy and Ethics of Information. His research areas are the philosophy of Information, information and computer ethics, and the philosophy of technology. His most recent book is The Fourth Revolution – How the infosphere is reshaping human reality (2014, Oxford University Press).

Designing Internet technologies for the public good https://ensr.oii.ox.ac.uk/designing-internet-technologies-for-the-public-good/ Wed, 08 Oct 2014
MEPs failed to support a Green call to protect Edward Snowden as a whistleblower, in order to allow him to give his testimony to the European Parliament in March. Image by greensefa.
Computers have developed enormously since the Second World War: alongside a rough doubling of computer power every two years, communications bandwidth and storage capacity have grown just as quickly. Computers can now store much more personal data, process it much faster, and rapidly share it across networks.

Data is collected about us as we interact with digital technology, directly and via organisations. Many people volunteer data to social networking sites, and sensors – in smartphones, CCTV cameras, and “Internet of Things” objects – are making the physical world as trackable as the virtual. People are very often unaware of how much data is gathered about them – let alone the purposes for which it can be used. Also, most privacy risks are highly probabilistic, cumulative, and difficult to calculate. A student sharing a photo today might not be thinking about a future interview panel, or about how the heart rate data shared from a fitness gadget might affect future decisions by insurance and financial services (Brown 2014).

Rather than organisations waiting for something to go wrong, then spending large amounts of time and money trying (and often failing) to fix privacy problems, computer scientists have been developing methods for designing privacy directly into new technologies and systems (Spiekermann and Cranor 2009). One of the most important principles is data minimization; that is, limiting the collection of personal data to that needed to provide a service – rather than storing everything that can be conveniently retrieved. This limits the impact of data losses and breaches, for example by corrupt staff with authorised access to data – a practice that the UK Information Commissioner’s Office (2006) has shown to be widespread.
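A minimal sketch of what data minimization can look like in practice, assuming a hypothetical sign-up service: the handler below keeps an explicit whitelist of the fields the service actually needs, and drops everything else before storage. The field names and function are invented for illustration.

```python
# Data minimization sketch: store only the fields the service needs.
# The field names and the whitelist here are hypothetical.

REQUIRED_FIELDS = {"email", "display_name"}  # all the service needs to operate

def minimised(submitted: dict) -> dict:
    """Drop everything not on the whitelist before it reaches storage."""
    return {k: v for k, v in submitted.items() if k in REQUIRED_FIELDS}

form_data = {
    "email": "user@example.org",
    "display_name": "Sam",
    "date_of_birth": "1990-04-01",   # conveniently available, but not needed,
    "phone": "+44 7700 900000",      # so it is never stored
}
record = minimised(form_data)
print(record)  # {'email': 'user@example.org', 'display_name': 'Sam'}
```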

Privacy by design also protects against function creep (Gürses et al. 2011). When an organisation invests significant resources to collect personal data for one reason, it can be very tempting to use it for other purposes. While this is limited in the EU by data protection law, government agencies are in a good position to push for changes to national laws if they wish, bypassing such “purpose limitations”. Nor do these rules tend to apply to intelligence agencies.

Another key aspect of putting users in control of their personal data is making sure they know what data is being collected, how it is being used – and ideally being asked for their consent. There have been some interesting experiments with privacy interfaces, for example helping smartphone users understand who is asking for their location data, and what data has been recently shared with whom.

Smartphones have enough storage and computing capacity to do some tasks, such as showing users adverts relevant to their known interests, without sharing any personal data with third parties such as advertisers. This kind of user-controlled data storage and processing has all kinds of applications – for example, with smart electricity meters (Danezis et al. 2013), and congestion charging for roads (Balasch et al. 2010).
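A sketch of the on-device approach described above, under stated assumptions: the handset downloads the same generic ad catalogue as every other client, and matching against the user’s interest profile happens locally, so the profile never leaves the device. The catalogue format and the scoring rule are invented for illustration.

```python
# Client-side ad selection sketch: the interest profile stays on the device.
# The catalogue stands in for a generic, non-personalised download that
# every client receives; only the ranking happens locally.

ad_catalogue = [
    {"id": "ad1", "tags": {"cycling", "outdoors"}},
    {"id": "ad2", "tags": {"cooking"}},
    {"id": "ad3", "tags": {"outdoors", "travel"}},
]

local_interest_profile = {"outdoors", "travel", "photography"}  # never transmitted

def pick_ad(catalogue, profile):
    """Rank ads by tag overlap with the local profile; no data leaves the device."""
    return max(catalogue, key=lambda ad: len(ad["tags"] & profile))

print(pick_ad(ad_catalogue, local_interest_profile)["id"])  # ad3
```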

What broader lessons can be drawn about shaping technologies for the public good? What is the public good, and who gets to define it? One option is to look at opinion polling about public concerns and values over long periods of time. The European Commission’s Eurobarometer polls reveal that in most European countries (including the UK), people have had significant concerns about data privacy for decades.

A more fundamental view of core social values can be found at the national level in constitutions, and between nations in human rights treaties. As well as the protection of private life and correspondence in the European Convention on Human Rights’ Article 8, the freedom of thought, expression, association and assembly rights in Articles 9-11 (and their equivalents in the US Bill of Rights, and the International Covenant on Civil and Political Rights) are also relevant.

This national and international law restricts how states use technology to infringe human rights – even for national security purposes. There are several US legal challenges to the constitutionality of NSA communications surveillance, with a federal court in Washington DC finding that bulk access to phone records is against the Fourth Amendment [1] (but another court in New York finding the opposite [2]). The UK campaign groups Big Brother Watch, Open Rights Group, and English PEN have taken a case to the European Court of Human Rights, arguing that UK law in this regard is incompatible with the Human Rights Convention.

Can technology development be shaped more broadly to reflect such constitutional values? One of the best-known attempts is the European Union’s data protection framework. Privacy is a core European political value, not least because of the horrors of the Nazi and Communist regimes of the 20th century. Germany, France and Sweden all developed data protection laws in the 1970s in response to the development of automated systems for processing personal data, followed by most other European countries. The EU’s Data Protection Directive (95/46/EC) harmonises these laws, and has provisions that encourage organisations to use technical measures to protect personal data.

An update of this Directive, which the European parliament has been debating over the last year, more explicitly includes this type of regulation by technology. Under this General Data Protection Regulation, organisations that are processing personal data will have to implement appropriate technical measures to protect the rights it sets out. By default, organisations should only collect the minimum personal data they need, and allow individuals to control the distribution of their personal data. The Regulation would also require companies to make it easier for users to download all of their data, so that it could be uploaded to a competitor service (for example, one with better data protection) – bringing market pressure to bear (Brown and Marsden 2013).

This type of technology regulation is not uncontroversial. The European Commissioner responsible until July for the Data Protection Regulation, Viviane Reding, said that she had seen unprecedented and “absolutely fierce” lobbying against some of its provisions. Legislators would clearly be foolish to try and micro-manage the development of new technology. But the EU’s principles-based approach to privacy has been internationally influential, with over 100 countries now having adopted the Data Protection Directive or similar laws (Greenleaf 2014).

If the EU can find the right balance in its Regulation, it has the opportunity to set the new global standard for privacy-protective technologies – a very significant opportunity indeed in the global marketplace.

[1] Klayman v. Obama, 2013 WL 6571596 (D.D.C. 2013)

[2] ACLU v. Clapper, No. 13-3994 (S.D. New York December 28, 2013)

References

Balasch, J., Rial, A., Troncoso, C., Preneel, B., Verbauwhede, I. and Geuens, C. (2010) PrETP: Privacy-preserving electronic toll pricing. 19th USENIX Security Symposium, pp. 63–78.

Brown, I. (2014) The economics of privacy, data protection and surveillance. In J.M. Bauer and M. Latzer (eds.) Research Handbook on the Economics of the Internet. Cheltenham: Edward Elgar.

Brown, I. and Marsden, C. (2013) Regulating Code: Good Governance and Better Regulation in the Information Age. Cambridge, MA: MIT Press.

Danezis, G., Fournet, C., Kohlweiss, M. and Zanella-Beguelin, S. (2013) Smart Meter Aggregation via Secret-Sharing. ACM Smart Energy Grid Security Workshop.

Greenleaf, G. (2014) Sheherezade and the 101 data privacy laws: Origins, significance and global trajectories. Journal of Law, Information & Science.

Gürses, S., Troncoso, C. and Diaz, C. (2011) Engineering Privacy by Design. Computers, Privacy & Data Protection.

Haddadi, H, Hui, P., Henderson, T. and Brown, I. (2011) Targeted Advertising on the Handset: Privacy and Security Challenges. In Müller, J., Alt, F., Michelis, D. (eds) Pervasive Advertising. Heidelberg: Springer, pp. 119-137.

Information Commissioner’s Office (2006) What price privacy? HC 1056.

Spiekermann, S. and Cranor, L.F. (2009) Engineering Privacy. IEEE Transactions on Software Engineering 35 (1).


Read the full article: Keeping our secrets? Designing Internet technologies for the public good, European Human Rights Law Review 4: 369-377. This article is adapted from Ian Brown’s 2014 Oxford London Lecture, given at Church House, Westminster, on 18 March 2014, supported by Oxford University’s Romanes fund.

Professor Ian Brown is Associate Director of Oxford University’s Cyber Security Centre and Senior Research Fellow at the Oxford Internet Institute. His research is focused on information security, privacy-enhancing technologies, and Internet regulation.

Past and Emerging Themes in Policy and Internet Studies https://ensr.oii.ox.ac.uk/past-and-emerging-themes-in-policy-and-internet-studies/ Mon, 12 May 2014
We can’t understand, analyze or make public policy without understanding the technological, social and economic shifts associated with the Internet. Image from the (post-PRISM) “Stop Watching Us” Berlin Demonstration (2013) by mw238.

In the journal’s inaugural issue, founding Editor-in-Chief Helen Margetts outlined what are essentially two central premises behind Policy & Internet’s launch. The first is that “we cannot understand, analyze or make public policy without understanding the technological, social and economic shifts associated with the Internet” (Margetts 2009, 1). It is simply not possible to consider public policy today without some regard for the intertwining of information technologies with everyday life and society. The second premise is that the rise of the Internet is associated with shifts in how policy itself is made. In particular, she proposed that impacts of Internet adoption would be felt in the tools through which policies are effected, and the values that policy processes embody.

The purpose of the Policy and Internet journal was to take up these two challenges: the public policy implications of Internet-related social change, and Internet-related changes in policy processes themselves. In recognition of the inherently multi-disciplinary nature of policy research, the journal is designed to act as a meeting place for all kinds of disciplinary and methodological approaches. Helen predicted that methodological approaches based on large-scale transactional data, network analysis, and experimentation would turn out to be particularly important for policy and Internet studies. Driving the advancement of these methods was therefore the journal’s third purpose. Today, the journal has reached a significant milestone: over one hundred high-quality peer-reviewed articles published. This seems an opportune moment to take stock of what kind of research we have published in practice, and see how it stacks up against the original vision.

At the most general level, the journal’s articles fall into three broad categories: the Internet and public policy (48 articles), the Internet and policy processes (51 articles), and discussion of novel methodologies (10 articles). The first of these categories, “the Internet and public policy,” can be further broken down into a number of subcategories. One of the most prominent of these streams is fundamental rights in a mediated society (11 articles), which focuses particularly on privacy and freedom of expression. Related streams are children and child protection (six articles), copyright and piracy (five articles), and general e-commerce regulation (six articles), including taxation. A recently emerged stream in the journal is hate speech and cybersecurity (four articles). Of course, an enduring research stream is Internet governance, or the regulation of technical infrastructures and economic institutions that constitute the material basis of the Internet (seven articles). In recent years, the research agenda in this stream has been influenced by national policy debates around broadband market competition and network neutrality (Hahn and Singer 2013). Another enduring stream deals with the Internet and public health (eight articles).

Looking specifically at “the Internet and policy processes” category, the largest stream is e-participation, or the role of the Internet in engaging citizens in national and local government policy processes, through methods such as online deliberation, petition platforms, and voting advice applications (18 articles). Two other streams are e-government, or the use of Internet technologies for government service provision (seven articles), and e-politics, or the use of the Internet in mainstream politics, such as election campaigning and communications of the political elite (nine articles). Another stream that has gained pace during recent years is online collective action, or the role of the Internet in activism, ‘clicktivism,’ and protest campaigns (16 articles). Last year the journal published a special issue on online collective action (Calderaro and Kavada 2013), and the next forthcoming issue includes an invited article on digital civics by Ethan Zuckerman, director of MIT’s Center for Civic Media, with commentary from prominent scholars of Internet activism. A trajectory discernible in this stream over the years is a movement from discussing mere potentials towards analyzing real impacts—including critical analyses of the sometimes inflated expectations and “democracy bubbles” created by digital media (Shulman 2009; Karpf 2010; Bryer 2011).

The final category, discussion of novel methodologies, consists of articles that develop, analyze, and reflect critically on methodological innovations in policy and Internet studies. Empirical articles published in the journal have made use of a wide range of conventional and novel research methods, from interviews and surveys to automated content analysis and advanced network analysis methods. But of those articles where methodology is the topic rather than merely the tool, the majority deal with so-called “big data,” or the use of large-scale transactional data sources in research, commerce, and evidence-based public policy (nine articles). The journal recently devoted a special issue to the potentials and pitfalls of big data for public policy (Margetts and Sutcliffe 2013), based on selected contributions to the journal’s 2012 big data conference: Big Data, Big Challenges? In general, the notion of data science and public policy is a growing research theme.

This brief analysis suggests that research published in the journal over the last five years has indeed followed the broad contours of the original vision. The two challenges, namely policy implications of Internet-related social change and Internet-related changes in policy processes, have both been addressed. In particular, research has addressed the implications of the Internet’s increasing role in social and political life. The journal has also furthered the development of new methodologies, especially the use of online network analysis techniques and large-scale transactional data sources (aka ‘big data’).

As expected, authors from a wide range of disciplines have contributed their perspectives to the journal, and engaged with other disciplines, while retaining the rigor of their own specialisms. The geographic scope of the contributions has been truly global, with authors and research contexts from six continents. I am also pleased to note that a characteristic common to all the published articles is polish; this is no doubt in part due to the high level of editorial support that the journal is able to afford to authors, including copyediting. The justifications for the journal’s establishment five years ago have clearly been borne out, so that the journal now performs an important function in fostering and bringing together research on the public policy implications of an increasingly Internet-mediated society.

And what of my own research interests as an editor? In the inaugural editorial, Helen Margetts highlighted work, finance, exchange, and economic themes in general as being among the prominent areas of Internet-related social change that are likely to have significant future policy implications. I think for the most part, these implications remain to be addressed, and this is an area that the journal can encourage authors to tackle better. As an editor, I will work to direct attention to this opportunity, and welcome manuscript submissions on all aspects of Internet-enabled economic change and its policy implications. This work will be kickstarted by the journal’s 2014 conference (26-27 September), which this year focuses on crowdsourcing and online labor.

Our published articles will continue to be highlighted here in the journal’s blog. Launched last year, we believe this blog will help to expand the reach and impact of research published in Policy and Internet to the wider academic and practitioner communities, promote discussion, and increase authors’ citations. After all, publication is only the start of an article’s public life: we want people reading, debating, citing, and offering responses to the research that we, and our excellent reviewers, feel is important, and worth publishing.

Read the full editorial: Lehdonvirta, V. (2014) Past and Emerging Themes in Policy and Internet Studies. Policy & Internet 6 (2) 109-114.

References

Bryer, T.A. (2011) Online Public Engagement in the Obama Administration: Building a Democracy Bubble? Policy & Internet 3 (4).

Calderaro, A. and Kavada, A. (2013) Challenges and Opportunities of Online Collective Action for Policy Change. Policy & Internet 5 (1).

Hahn, R. and Singer, H. (2013) Is the U.S. Government’s Internet Policy Broken? Policy & Internet 5 (3) 340-363.

Karpf, D. (2010) Online Political Mobilization from the Advocacy Group’s Perspective: Looking Beyond Clicktivism. Policy & Internet 2 (4) 7-41.

Margetts, H. (2009) The Internet and Public Policy. Policy & Internet 1 (1).

Margetts, H. and Sutcliffe, D. (2013) Addressing the Policy Challenges and Opportunities of ‘Big Data.’ Policy & Internet 5 (2) 139-146.

Shulman, S.W. (2009) The Case Against Mass E-mails: Perverse Incentives and Low Quality Public Participation in U.S. Federal Rulemaking. Policy & Internet 1 (1) 23-53.

Unpacking patient trust in the “who” and the “how” of Internet-based health records https://ensr.oii.ox.ac.uk/unpacking-patient-trust-in-the-who-and-the-how-of-internet-based-health-records/ Mon, 03 Mar 2014 In an attempt to reduce costs and improve quality, digital health records are permeating health systems all over the world. Internet-based access to them creates new opportunities for access and sharing – while at the same time causing nightmares to many patients: medical data floating around freely within the clouds, unprotected from strangers, liable to be abused to target and discriminate against people without their knowledge?

Individuals often have little knowledge about the actual risks, and single instances of breaches are exaggerated in the media. Key to successful adoption of Internet-based health records is, however, how much trust a patient places in the technology: trust that data will be properly secured from inadvertent leakage, and trust that it will not be accessed by unauthorised strangers.

Situated in this context, my own research has taken a closer look at the structural and institutional factors influencing patient trust in Internet-based health records. Utilising a survey and interviews, the research has looked specifically at Germany – a very suitable environment for this question, given its wide range of actors in the health system and its reputation as a “hard-line privacy country”. Germany has struggled for years with the introduction of smart cards linked to centralised Electronic Health Records, not only changing its design features over several iterations, but also battling negative press coverage about data security.

The first element to this question of patient trust is the “who”: that is, does it make a difference whether the health record is maintained by either a medical or a non-medical entity, and whether the entity is public or private? I found that patients clearly expressed a higher trust in medical operators, evidence of a certain “halo effect” surrounding medical professionals and organisations driven by patient faith in their good intentions. This overrode the concern that medical operators might be less adept at securing the data than (for example) most non-medical IT firms. The distinction between public and private operators is much more blurry in patients’ perception. However, there was a sense among the interviewees that a stronger concern about misuse was related to a preference for public entities who would “not intentionally give data to others”, while data theft concerns resulted in a preference for private operators – as opposed to public institutions who might just “shrug their shoulders and finger-point at subordinate levels”.

Equally important to the question of “who” manages the data may be the “how”: that is, is the patient’s ability to access and control their health-record content perceived as trust enhancing? While the general finding of this research is that having the opportunity to both access and control their records helps to build patient trust, an often overlooked (and discomforting) factor is that easy access for the patient may also mean easy access for the rest of the family. In the words of one interviewee: “For example, you have Alzheimer’s disease or dementia. You don’t want everyone around you to know. They will say ‘show us your health record online’, and then talk to doctors about you – just going over your head.” Nevertheless, for most people I surveyed, having access and control of records was perceived as trust enhancing.

At the same time, a striking survey finding is how greater access and control of records can be less trust-enhancing for those with lower Internet experience, confidence, and breadth of use: as one older interviewee put it – “I am sceptical because I am not good at these Internet things. My husband can help me, but somehow it is not really worth this effort.” The quote reveals one of the facets of digital divides, and additionally highlights the relevance of life-stage in the discussion. Older participants see the benefits of sharing data (if it means avoiding unnecessary repetition of routine examinations) and are less concerned about outsider access, while younger people are more apprehensive of the risk of medical data falling into the wrong hands. An older participant summarised this very effectively: “If I was 30 years younger and at the beginning of my professional career or my family life, it would be causing more concern for me than now”. Finally, this reinforces the importance of legal regulations and security audits ensuring a general level of protection – even if the patient chooses not to be (or cannot be) directly involved in the management of their data.

Interestingly, the research also uncovered what is known as the certainty trough: not only are those with low online affinity highly suspicious of Internet-based health records – the experts are as well! The broader the range of activities a user engaged in online, the greater their suspicion of Internet-based health records. This confirms the notion that with more knowledge and more intense engagement with the Internet, we tend to become more aware of the risks – and to lose trust in the technology and in what the protections might actually be worth.

Finally, it is clear that the “who” and the “how” are interrelated, as a low degree of trust goes hand in hand with a desire for control. For an operator perceived as less trustworthy, access to records is not sufficient to inspire patient trust. While access improves knowledge and may allow for legal steps to change what is stored online, few people make use of this possibility; only direct control of what is stored online helps to compensate for a general suspicion about the operator. It is noteworthy here that there is a discrepancy between how much importance people place on having control and how much they actually use it; but in the end, trust is a subjective concept that doesn’t necessarily reflect actual privacy and security.

The results of this research provide valuable insights for the further development of Internet-based health records. In short: to gain patient trust, the operator should ideally be of a medical nature and should allow the patients to get involved in how their health records are maintained. Moreover, policy initiatives designed to increase the Internet and health literacy of the public are crucial in reaching all parts of the population, as is an underlying legal and regulatory framework within which any Internet-based health record should be embedded.


Read the full paper: Rauer, Ulrike (2012) Patient Trust in Internet-based Health Records: An Analysis Across Operator Types and Levels of Patient Involvement in Germany. Policy & Internet 4 (2).

Ethical privacy guidelines for mobile connectivity measurements https://ensr.oii.ox.ac.uk/ethical-privacy-guidelines-for-mobile-connectivity-measurements/ Thu, 07 Nov 2013 16:01:33 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2386
Four of the 6.8 billion mobile phones worldwide. Measuring the mobile Internet can expose information about an individual’s location, contact details, and communications metadata. Image by Cocoarmani.

Ed: GCHQ / the NSA aside … Who collects mobile data and for what purpose? How can you tell if your data are being collected and passed on?

Ben: Data collected from mobile phones is used for a wide range of (divergent) purposes. First and foremost, mobile operators need information about mobile phones in real time to be able to communicate with individual handsets. Apps can also collect all sorts of information, which may be necessary to provide entertainment or location-specific services, to conduct network research, or for many other reasons.

Mobile phone users usually consent to the collection of their data by clicking “I agree” or other legally relevant buttons, but this is not always the case. Sometimes data is collected lawfully without consent, for example for the provision of a mobile connectivity service. Other times it is harder to substantiate a relevant legal basis. Many applications keep track of the information that is generated by a mobile phone and it is often not possible to find out how the receiver processes this data.

Ed: How are data subjects typically recruited for a mobile research project? And how many subjects might a typical research data set contain?

Ben: This depends on the research design; some research projects provide data subjects with a specific app, which they can use to conduct measurements (so-called ‘active measurements’). Other apps collect data in the background and, in effect, conduct local surveillance of mobile phone use (so-called ‘passive measurements’). Other research uses existing datasets, for example provided by telecom operators, which will generally be de-identified in some way. We purposely do not use the term anonymisation in the report, because much research and several case studies have shown that real anonymisation is very difficult to achieve if the original raw data is collected about individuals. Datasets can be re-identified by techniques such as fingerprinting or by linking them with existing, auxiliary datasets.

The size of datasets differs per release. Telecom operators can provide data about millions of users, while it will be more challenging to reach such numbers with a research-specific app. However, depending on the information collected and provided, a specific app may provide richer information about a user’s behaviour.
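To make the linkage risk Ben describes concrete, here is a minimal sketch of a re-identification attack. It is purely illustrative – the field names, records and datasets are all hypothetical, not code or data from the report – but it shows how a “de-identified” release can be joined to a public auxiliary dataset on shared quasi-identifiers, restoring names to supposedly anonymous records.

```python
# Illustrative linkage attack: all field names and records are hypothetical.

deidentified = [  # "anonymised" research release: direct identifiers removed
    {"zip": "1012", "birth": "1983-05-14", "sex": "F", "sites_visited": 412},
    {"zip": "9714", "birth": "1990-11-02", "sex": "M", "sites_visited": 87},
]

auxiliary = [  # public auxiliary dataset, e.g. a voter or subscriber list
    {"name": "A. Jansen", "zip": "1012", "birth": "1983-05-14", "sex": "F"},
    {"name": "B. de Vries", "zip": "9714", "birth": "1990-11-02", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth", "sex")

def key(record):
    """Project a record onto its quasi-identifiers."""
    return tuple(record[q] for q in QUASI_IDENTIFIERS)

# Index the auxiliary data by quasi-identifier combination, then join.
aux_index = {key(r): r["name"] for r in auxiliary}

for row in deidentified:
    name = aux_index.get(key(row))
    if name is not None:
        print(f"Re-identified {name}: {row['sites_visited']} sites visited")
```

Coarsening or suppressing the quasi-identifiers weakens this join, which is exactly the utility trade-off discussed later in the interview.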

Ed: What sort of research can be done with this sort of data?

Ben: Data collected from mobile phones can reveal much interesting and useful information. For example, such data can show exact geographic locations and thus the movements of the owner, which can be relevant for the social sciences. On a larger scale, mass movements of persons can be monitored via mobile phones. This information is useful for public policy objectives such as crowd control, traffic management, identifying migration patterns, emergency aid, etc. Such data can also be very useful for commercial purposes, such as location specific advertising, studying the movement of consumers, or generally studying the use of mobile phones.

Mobile phone data is also necessary to understand the complex dynamics of the underlying Internet architecture. The mobile Internet has different requirements than the fixed-line Internet, so targeted investments in future Internet architecture will need to be assessed by detailed network research. Also, network research can study issues such as censorship or other forms of blocking of information and transactions, which are increasingly carried out through mobile phones. Such research can serve as an early warning system for policy makers, activists and humanitarian aid workers, to name only a few stakeholders.

Ed: Some of these research datasets are later published as ‘open data’. What sorts of uses might researchers (or companies) put these data to? Does it tend to be mostly technical research, or are there also social science applications?

Ben: The intriguing characteristic of the open data concept is that secondary uses can be unpredictable. A re-use is not necessarily technical, even if the raw data has been collected for purely technical network research. New social science research could be based on existing technical data, or existing research analyses may be falsified or validated by other researchers. Artists, developers, entrepreneurs or public authorities can also use existing data to create new applications or to enrich existing information systems. There have been many instances of open data being re-used for beneficial or profitable ends.

However, there is also a flipside to open data, especially when the dataset contains personal information, or information that can be linked to individuals. A working definition of open data is that one makes entire databases available, in a standardized, machine-readable, electronic format, to any secondary user, free of charge and free of restrictions or obligations, for any purpose. If a dataset contains information about your Internet browsing habits, your movements throughout the day or the phone numbers you have called over a specific period of time, it could be quite troubling if you have no control over who re-uses this information.

The risks and harms of such re-use are very context dependent, of course. In the Western world, such data could be used for blackmail, stalking, identity theft, unsolicited commercial communications, etc. Further, if there is a chance that our telecom operators simply share data on how we use our mobile phones, we may refrain from activities such as taking part in demonstrations, attending political gatherings, or accessing certain socially unacceptable information. Such self-censorship would damage the free society we expect. In the developing world, or in authoritarian regimes, risks and harms can be a matter of life and death for data subjects, or at least involve the risk of physical harm. This is true not only for ordinary citizens, but also for diplomats, aid workers, journalists and social media users.

Finally, we cannot envisage how political contexts will change in the future. Future malevolent governments, even in Europe or the US, could easily use datasets containing sensitive information to harm or control specific groups of society. One need only look at the changing political landscape in Hungary to see how specific groups are suddenly targeted in what we thought was becoming a country that adheres to Western values.

Ed: The ethical privacy guidelines note the basic relation between the level of detail in information collected and the resulting usefulness of the dataset (datasets becoming less powerful as subjects are increasingly de-identified). This seems a fairly intuitive and fundamentally unavoidable problem; is there anything in particular to say about it?

Ben: Research often requires rich datasets for worthwhile analyses to be conducted. These will inevitably sometimes contain personal information, as it can be important to relate specific data to data subjects, whether anonymised, pseudonymised or otherwise. Far-reaching deletion, aggregation or randomisation of data can make the dataset useless for the research purposes.

Sophisticated methods of re-identifying datasets, and unforeseen methods which will be developed in future, mean that much information must be deleted or aggregated in order for a dataset containing personal information to be truly anonymous. It has become very difficult to determine when a dataset is sufficiently anonymised to the extent that it can enjoy the legal exception offered by data protection laws around the world and therefore be distributed as open data, without legal restrictions.

As a result, many research datasets cannot simply be released. The guidelines do not force the researcher into a zero-risk situation, where only useless or meaningless datasets can be released. Rather, they force the researcher to think very carefully about the type of data that will be collected, about data processing techniques, and about different disclosure methods. Although open data is an attractive method of disseminating research data, sometimes managed access systems may be more appropriate. The guidelines constantly prompt the researcher to consider the risks to data subjects in their specific context during each stage of the research design. They serve as a guide, but also as a normative framework for research that is potentially privacy invasive.
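One formal yardstick sometimes used when asking whether a dataset is “sufficiently anonymised” is k-anonymity: every combination of quasi-identifier values must be shared by at least k records. The guidelines themselves do not prescribe this measure; the sketch below, with hypothetical field names and records, is only meant to illustrate how such a check can be computed.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest number of records sharing any one combination of
    quasi-identifier values; a higher k makes linkage attacks harder."""
    groups = Counter(
        tuple(record[q] for q in quasi_identifiers) for record in records
    )
    return min(groups.values())

# Hypothetical release, already coarsened from full birth dates to years.
records = [
    {"zip": "1012", "birth_year": 1983, "call_minutes": 4.2},
    {"zip": "1012", "birth_year": 1983, "call_minutes": 11.9},
    {"zip": "9714", "birth_year": 1990, "call_minutes": 0.7},
]

print(k_anonymity(records, ("zip", "birth_year")))  # -> 1: one record is unique
```

A k of 1 means at least one record is unique on its quasi-identifiers and therefore linkable; but even a high k does not rule out other attacks, which is part of why the guidelines favour contextual risk assessment over any single metric.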

Ed: Presumably mobile companies have a duty to delete their data after a certain period; does this conflict with open datasets, whose aim is to be available indefinitely?

Ben: It is not a requirement for open data to be available indefinitely. However, once information is published freely on the Internet, it is very hard – if not impossible – to delete it. The researcher loses all control over a dataset once it is published online. So, even if a dataset is sufficiently de-identified against the re-identification techniques that are known today, this does not mean that future techniques cannot re-identify it. We can’t expect researchers to take into account all science-fiction type future developments, but the guidelines do force the researcher to consider what successful re-identification would reveal about data subjects.

European mobile phone companies do have a duty to keep logs of communications for six months to two years, depending on the implementation of the misguided Data Retention Directive. We have recently learned that intelligence services worldwide have more or less unrestricted access to such information. We have no idea how long this information is stored in practice. Recently it has frequently been stated that deleting data has become more expensive than simply keeping it. This means that mobile phone operators and intelligence agencies may keep data on our mobile phone use forever. This must be taken into account when assessing which auxiliary datasets could be used to re-identify a research dataset. An IP address could be sufficient to link much information to an individual.

Ed: Presumably it’s impossible for a subject to later decide they want to be taken out of an open dataset; firstly due to cost, but also because (by definition) it ought to be impossible to find them in an anonymised dataset. Does this present any practical or legal problems?

Ben: In some countries, especially in Europe, data subjects have a legal right to object to their data being processed, by withdrawing consent or engaging in a legal procedure with the data processor. Although this is an important right, exercising it may lead to undesirable consequences for research. For example, the underlying dataset will be incomplete for secondary researchers who want to validate findings.

Our guidelines encourage researchers to be transparent about their research design, data processing and foreseeable secondary uses of the data. On the one hand, this builds trust in the network research discipline. On the other, it gives data subjects the necessary information to feel confident about sharing their data. Still, data subjects should be able to retract their consent via electronic means, instead of sending letters, if they can substantiate an appreciable harm to themselves.

Ed: How aware are funding bodies and ethics boards of the particular problems presented by mobile research; and are they categorically different from other human-subject research data? (eg interviews / social network data / genetic studies etc.)

Ben: University ethics boards and funding bodies are staffed by experts from a wide range of disciplines. However, this does not mean they understand the intricate details of complex Internet measurements, de-identification techniques, or the state of the art in re-identification, nor the harms a research programme can inflict in a specific context. For example, not everyone’s intuitive moral privacy compass will be activated when they read in a research proposal that the research systems will “monitor routing dynamics, by analysing packet traces collected from cell towers and internet exchanges”, or similar sentences.

Our guidelines encourage the researcher to write up the choices made with regards to personal information in a manner that is clear and understandable for the layperson. Such a level of transparency is useful for data subjects – as well as for ethics boards and funding bodies – to understand exactly what the research entails and how risks have been accommodated.

Ed: Linnet Taylor has already discussed mobile data mining from regions of the world with weak privacy laws: what is the general status of mobile privacy legislation worldwide?

Ben: Privacy legislation itself is about as fragmented and disputed as it gets. The US generally treats personal information as a commodity that can be traded, which enables Internet companies in Silicon Valley to use data as the new raw material in the information age. Europe considers privacy and data protection as a fundamental right, which is currently regulated in detail, albeit based on a law from 1995. The review of European data protection regulation has been postponed to 2015, possibly as a result of the intense lobbying effort in Brussels to either weaken or strengthen the proposed law. Some countries have not regulated privacy or data protection at all. Other countries have a fundamental right to privacy, which is not further developed in a specific data protection law and thus hardly enforced. Another group of countries have transplanted the European approach, but do not have the legal expertise to apply the 1995 law to the digital environment. The future of data protection is very much up in the air and requires much careful study.

The guidelines we have published take the international human rights framework as a base, while drawing inspiration from several existing legal concepts such as data minimisation, purpose limitation, privacy by design and informed consent. The guidelines provide a solid basis for privacy-aware research design. We do encourage researchers to discuss their projects with colleagues and legal experts as much as possible, though, because best practices and legal subtleties can vary per country, state or region.

Read the guidelines: Zevenbergen, B., Brown, I., Wright, J., and Erdos, D. (2013) Ethical Privacy Guidelines for Mobile Connectivity Measurements. Oxford Internet Institute, University of Oxford.


Ben Zevenbergen was talking to blog editor David Sutcliffe.

The scramble for Africa’s data https://ensr.oii.ox.ac.uk/the-scramble-for-africas-data/ Mon, 08 Jul 2013 09:21:02 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1230
Africa is in the midst of a technological revolution, and the current wave of digitisation has the potential to make the continent’s citizens a rich mine of data. Intersection in Zomba, Malawi. Image by john.duffell.

After the last decade’s exponential rise in ICT use, Africa is fast becoming a source of big data. Africans are increasingly emitting digital information with their mobile phone calls, internet use and various forms of digitised transactions, while at the state level e-government is starting to become a reality. As Africa goes digital, the challenge for policymakers becomes what the WRR, a Dutch policy organisation, has identified as ‘i-government’: moving from digitisation to managing and curating digital data in ways that keep people’s identities and activities secure.

On one level, this is an important development for African policymakers, given that accurate information on their populations has been notoriously hard to come by and, where it exists, has not been shared. On another, however, it represents a tremendous challenge. The WRR has pointed out the unpreparedness of European governments, who have been digitising for decades, for the age of i-government. How are African policymakers, as relative newcomers to digital data, supposed to respond?

There are two possible scenarios. One is that systems will develop for the release and curation of Africans’ data by corporations and governments, and that it will become possible, in the words of the UN’s Global Pulse initiative, to use it as a ‘public good’ – an invaluable tool for development policies and crisis response. The other is that there will be a new scramble for Africa: a digital resource grab that may have implications as great as the original scramble amongst the colonial powers in the late 19th century.

We know that African data is not only valuable to Africans. The current wave of digitisation has the potential to make the continent’s citizens a rich mine of data about health interventions, human mobility, conflict and violence, technology adoption, communication dynamics and financial behaviour, with the default mode being for this to happen without their consent or involvement, and without ethical and normative frameworks to ensure data protection or to weigh the risks against the benefits. Orange’s recent release of call data from Cote d’Ivoire represents both an example of the emerging potential of African digital data and of the anonymisation and ethical challenges such releases present.

I have heard various arguments as to why data protection is not a problem for Africans. One is that people in African countries don’t care about their privacy because they live in a ‘collective society’. (Whatever that means.) Another is that they don’t yet have any privacy to protect because they are still disconnected from the kinds of system that make data privacy important. Another more convincing and evidence-based argument is that the ends may justify the means (as made here by the ICRC in a thoughtful post by Patrick Meier about data privacy in crisis situations), and that if significant benefits can be delivered using African big data these outweigh potential or future threats to privacy. The same argument is being made by Global Pulse, a UN initiative which aims to convince corporations to release data on developing countries as a public good for use in devising development interventions.

There are three main questions: what can incentivise African countries’ citizens and policymakers to address privacy in parallel with the collection of massive amounts of personal data, rather than after abuses occur? What are the models that might be useful in devising privacy frameworks for groups with restricted technological access and sophistication? And finally, how can such a system be participatory enough to be relevant to the needs of particular countries or populations?

Regarding the first question, this may be a lost cause. The WRR’s i-government work suggests that only public pressure due to highly publicised breaches of data security may spur policymakers to act. The answer to the second question is being pursued, among others, by John Clippinger and Alex Pentland at MIT (with their work on the social stack); by the World Economic Forum, which is thinking about the kinds of rules that should govern personal data worldwide; by the aforementioned Global Pulse, which has a strong interest in building frameworks which make it safe for corporations to share people’s data; by Microsoft, which is doing some serious thinking about differential privacy for large datasets; by independent researchers such as Patrick Meier, who is looking at how crowdsourced data about crises and human rights abuses should be handled; and by the Oxford Internet Institute’s new M-Data project which is devising privacy guidelines for collecting and using mobile connectivity data.
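Of the approaches listed above, differential privacy is perhaps the easiest to illustrate compactly. The sketch below is a generic textbook example rather than anything drawn from Microsoft’s work or the other initiatives named: a counting query is released with Laplace noise scaled to the query’s sensitivity and a privacy parameter epsilon, so that no single individual’s presence or absence changes the released answer’s distribution very much.

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value under epsilon-differential privacy by adding
    Laplace noise with scale sensitivity/epsilon. A Laplace variate is
    generated as the difference of two independent exponential variates."""
    scale = sensitivity / epsilon
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_value + noise

# Hypothetical counting query over call records: how many users placed a
# call from a given area. Sensitivity is 1, since adding or removing one
# person changes the count by at most 1.
print(laplace_mechanism(true_value=1309, sensitivity=1, epsilon=0.1))
```

A smaller epsilon buys stronger privacy at the cost of noisier answers – the same weighing of risks against benefits that runs through this whole debate.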

Regarding the last question, participatory systems will require African country activists, scientists and policymakers to build them. To be relevant, they will also need to be made enforceable, which may be an even greater challenge. Privacy frameworks are only useful if they are made a living part of both governance and citizenship: there must be the institutional power to hold offenders accountable (in this case extremely large and powerful corporations, governments and international institutions), and awareness amongst ordinary people about the existence and use of their data. This, of course, has not really been achieved in developed countries, so doing it in Africa may not exactly be a piece of cake.

Notwithstanding these challenges, the region offers an opportunity to push researchers and policymakers – local and worldwide – to think clearly about the risks and benefits of big data, and to make solutions workable, enforceable and accessible. In terms of data privacy, if it works in Burkina Faso, it will probably work in New York, but the reverse is unlikely to be true. This makes a strong argument for figuring it out in Burkina Faso.

Some may contend that this discussion only points out the massive holes in the governance of technology that prevail in Africa – and in fact a whole other level of problems regarding accountability and power asymmetries. My response: Yes. Absolutely.


Linnet Taylor’s research focuses on social and economic aspects of the diffusion of the internet in Africa, and human mobility as a factor in technology adoption (read her blog). Her doctoral research was on Ghana, where she looked at mobility’s influence on the formation and viability of internet cafes in poor and remote areas, networking amongst Ghanaian technology professionals, and ICT4D policy. At the OII she works on a Sloan Foundation-funded project on Accessing and Using Big Data to Advance Social Science Knowledge.

Time for debate about the societal impact of the Internet of Things https://ensr.oii.ox.ac.uk/time-for-debate-about-the-societal-impact-of-the-internet-of-things/ Mon, 22 Apr 2013 14:32:22 +0000 http://blogs.oii.ox.ac.uk/policy/?p=931
The 2nd Annual Internet of Things Europe 2010: A Roadmap for Europe, 2010. Image by Pierre Metivier.
On 17 April 2013, the US Federal Trade Commission published a call for inputs on the ‘consumer privacy and security issues posed by the growing connectivity of consumer devices, such as cars, appliances, and medical devices’, in other words, about the impact of the Internet of Things (IoT) on the everyday lives of citizens. The call is in large part one for information to establish what the current state of technology development is and how it will develop, but it also looks for views on how privacy risks should be weighed against potential societal benefits.

There’s a lot that’s not very new about the IoT. Embedded computing, sensor networks and machine-to-machine communications have been around a long time. Mark Weiser was developing the concept of ubiquitous computing (and prototyping it) at Xerox PARC in 1990. Many of the big ideas in the IoT – smart cars, smart homes, wearable computing – were already envisaged in works such as Nicholas Negroponte’s Being Digital, which was published in 1995, before the mass popularisation of the internet itself. The term ‘Internet of Things’ has been around since at least 1999. What is new is the speed with which technological change has made these ideas implementable on a societal scale. The FTC’s interest reflects a growing awareness of the potential significance of the IoT, and the need for public debate about its adoption.

As the cost and size of devices falls and network access becomes ubiquitous, it is evident that not only major industries but whole areas of consumption, public service and domestic life will be capable of being transformed. The number of connected devices is likely to grow fast in the next few years. The Organisation for Economic Co-operation and Development (OECD) estimates that while a family with two teenagers may have 10 devices connected to the internet, in 2022 this may well grow to 50 or more. Across the OECD area the number of connected devices in households may rise from an estimated 1.7 billion today to 14 billion by 2022. Programmes such as smart cities, smart transport and smart metering will begin to have their effect soon. In other countries, notably in China and Korea, whole new cities are being built around smart infrastructure, giving technology companies the opportunity to develop models that could be implemented subsequently in Western economies.

Businesses and governments alike see this as an opportunity for new investment both as a basis for new employment and growth and for the more efficient use of existing resources. The UK Government is funding a strand of work under the auspices of the Technology Strategy Board on the IoT, and the IoT is one of five themes that are the subject of the Department for Business, Innovation & Skills (BIS)’s consultation on the UK’s Digital Economy Strategy (alongside big data, cloud computing, smart cities, and eCommerce).

The enormous quantity of information that will be produced will provide further opportunities for collecting and analysing big data. There is consequently an emerging agenda about privacy, transparency and accountability. There are challenges too to the way we understand and can manage the complexity of interacting systems that will underpin critical social infrastructure.

The FTC is not alone in looking to open public debate about these issues. In February, the OII and BCS (the Chartered Institute for IT) ran a joint seminar to help the BCS consider how it should fulfil its public education and lobbying role in this area. A summary of the contributions is published on the BCS website.

The debate at the seminar was wide ranging. There was no doubt that the train has left the station as far as this next phase of the Internet is concerned. The scale of major corporate investment, government encouragement and entrepreneurial enthusiasm is not to be deflected. In many sectors of the economy there are changes that are already being felt by consumers, or will be soon enough. Smart metering, smart grid, and transport automation (including cars) are all examples. A lot of the discussion focused on risk. In a society which places high value on audit and accountability, it is perhaps unsurprising that early implementations have often used sensors and tags to track processes and monitor activity. This is especially attractive in industrial structures that have high degrees of subcontracting.

Wider societal risks were also discussed. As for the FTC, the privacy agenda is salient. There is real concern that the assumptions which underlie the data protection regime – especially its reliance on data minimisation – will not be adequate to protect individuals in an era of ubiquitous data. Nor is it clear that the UK’s regulator – the Information Commissioner – will be equipped to deal with the volume of potential business. Alongside privacy, there is also concern for security and the protection of critical infrastructure. The growth of reliance on the IoT will make cybersecurity significant in many new ways. There are issues too about complexity and the unforeseen – and arguably unforeseeable – consequences of the interactions between complex, large, distributed systems acting in real time, and with consequences that go very directly to the wellbeing of individuals and communities.

There are great opportunities and a pressing need for social research into the IoT. Data about social impacts has hitherto been limited, given the relatively few systems deployed. This will change rapidly. As Governments consult and bodies like the BCS seek to advise, it’s very desirable that public debate about privacy and security, access and governance, takes place on the basis of real evidence and sound analysis.

eHealth: what is needed at the policy level? New special issue from Policy and Internet https://ensr.oii.ox.ac.uk/ehealth-what-is-needed-at-the-policy-level/ Thu, 24 May 2012 16:36:23 +0000 http://blogs.oii.ox.ac.uk/policy/?p=399 The explosive growth of the Internet and its omnipresence in people’s daily lives has facilitated a shift in information seeking on health, with the Internet now a key information source for the general public, patients, and health professionals. The Internet also has obvious potential to drive major changes in the organization and delivery of health services, and many initiatives are harnessing technology to support user empowerment. For example, current health reforms in England are leading to a fragmented, marketized National Health Service (NHS), where competitive choice designed to drive quality improvement and efficiency savings is informed by transparency and patient experiences, with the notion of an empowered health consumer at its centre.

Is this aim of achieving user empowerment realistic? In their examination of health queries submitted to the NHS Direct online enquiry service, John Powell and Sharon Boden find that while patient empowerment does occur in the use of online health services, it is constrained and context dependent. Policymakers wishing to promote greater choice and control among health system users should therefore take account of the limits to empowerment as well as the barriers to participation. The Dutch government’s online public national health and care portal similarly aims to facilitate consumer decision-making and to increase transparency and accountability in order to improve the quality of care and the functioning of health markets. Interestingly, Hans Ossebaard, Lisette van Gemert-Pijnen and Erwin Seydel find the influence of the Dutch portal on the choice behavior, awareness, and empowerment of users to actually be small.

The Internet is often discussed in terms of empowering (or even endangering) patients through broadening of access to medical and health-related information, but there is evidence that concerns about serious negative effects of using the Internet for health information may be ill-founded. The cancer patients in the study by Alison Chapple, Julie Evans and Sue Ziebland gave few examples of harm from using the Internet or of damage caused to their relationships with health professionals. While policy makers have tended to focus on regulating the factual content of online information, in this study it was actually the consequences of stumbling on factually correct (but unwelcome) information that most concerned the patients and families; good practice guidelines for health information may therefore need to pay more attention to website design and user routing, as well as to the accuracy of content.

Policy makers and health professionals should also acknowledge the often highly individual strategies people use to access health information online, and understand how these practices are shaped by technology — the study by Astrid Mager found that the way people collected and evaluated online information about chronic diseases was shaped by search engines as much as by their individual medical preferences.

Many people still lack the necessary skills to navigate online content effectively. Eszter Hargittai and Heather Young examined the experiences of a diverse group of young adults looking for information about emergency contraception online, finding that the majority of the study group could not identify the most efficient way of acquiring emergency contraception in a time of need. Given the increasing trend for people to turn to the Internet for health information, users must possess the necessary skills to make effective and efficient use of it; an important component of this may concern educational efforts to help people better navigate the Web. Improving general e-Health literacy is one of several recommendations by Maria De Jesus and Chenyang Xiao, who examined how Hispanic adults in the United States search for health information online. They report a striking language divide, with the English proficiency of the user largely predicting online health information-seeking behavior.

Lastly, but no less importantly, is the policy challenge of addressing the issue of patient trust. The study by Ulrike Rauer on the structural and institutional factors that influence patient trust in Internet-based health records found that while patients typically considered medical operators to be more trustworthy than non-medical ones, there was no evidence of a “public–private” divide; patients perceived physicians and private health insurance providers to be more trustworthy than the government and corporations. Patient involvement in terms of access and control over their records was also found to be trust enhancing.

A lack of policy measures is a common barrier to the success of eHealth initiatives; it is therefore essential that we develop measures that facilitate the adoption of initiatives and demonstrate their success through improvements in services and in the health status of the population. The articles presented in this special issue of Policy & Internet provide the sort of evidence-based insight that is urgently needed to help shape these policy measures. The empirical research and perspectives gathered here will make a valuable contribution to future efforts in this area.

Personal data protection vs the digital economy? OII policy forum considers our digital footprints https://ensr.oii.ox.ac.uk/personal-data-protection-vs-the-digital-economy-forthcoming-oii-policy-forum/ Thu, 03 Feb 2011 11:12:13 +0000 http://blogs.oii.ox.ac.uk/policy/?p=177 Catching a bus, picking up some groceries, calling home to check on the children – all simple, seemingly private activities that characterise the end of many people’s working day. Yet each of these activities leaves a data trail that enables companies, even the state, to track the most mundane aspects of our lives. Add to this the range and quantity of personal data that many of us willingly post online on our blogs, Facebook walls or Google docs, and it is clear that the trail of digital footprints we leave is long and hard to erase.

Even if, in most cases, this data is only likely to be used in an anonymised and aggregated form to identify trends in transport or shopping patterns, or to personalise the Internet services available to us, the fact that its collection is now so routine and so extensive should make us question whether the regulatory system governing data collection, storage and use is fit for purpose. A forthcoming OII policy forum on Tracing the Policy Implications of the Future Digital Economy (16 Feb) will consider this question, bringing together leading academics from across several disciplines with policy-makers and industry experts.

This is a topic which the OII is well-placed to address. Ian Brown’s Privacy Values Network project addresses a major knowledge gap, measuring the various costs and benefits to individuals of handing over data in different contexts, as without this we simply don’t know how much people value their privacy (or indeed understand its limits). The last Oxford Internet Survey (OxIS) rather surprisingly showed that in 2009 people were significantly less concerned about privacy online in the UK than in previous years (45% of all those surveyed in 2009 against 66% in 2007); we wait to see whether this finding is repeated when OxIS 2011 goes into the field next month.

Our faculty also have much to say about the adequacy (or otherwise) of the regulatory framework: a recent report by Ian Brown and Douwe Korff on New Challenges to Data Protection identified for the European Commission the scale of the challenges presented to the current data protection regime, whilst Viktor Mayer-Schönberger’s book Delete: The Virtue of Forgetting in the Digital Age has rightly suggested that personal information online should have an expiration date, to ensure it doesn’t hang around for years to embarrass us at a later date.

The forum will consider the way in which the market for information storage and collection is rapidly changing with the advent of new technologies, and on this point, one conclusion is clear: if we accept Helen Nissenbaum’s contention that personal information and data should be collected and protected according to the social norms governing different social contexts, then we need to get to grips pretty fast with the way in which these technologies are playing out in the way we work, play, learn and consume.

Internet, Politics, Policy 2010: Closing keynote by Viktor Mayer-Schönberger https://ensr.oii.ox.ac.uk/internet-politics-policy-2010-closing-keynote-by-viktor-mayer-schonberger/ Fri, 17 Sep 2010 15:48:04 +0000 http://blogs.oii.ox.ac.uk/policy/?p=94 Our two-day conference is coming to a close with a keynote by Viktor Mayer-Schönberger, who is soon to join the faculty of the Oxford Internet Institute as Professor of Internet Governance and Regulation.

Viktor talked about the theme of his recent book “Delete: The Virtue of Forgetting in the Digital Age” (a webcast of this keynote will be available soon on the OII website, but you can also listen to a previous talk here). It touches on many of the recent debates about information that has been published on the web in some context and which might suddenly come back to us in a completely different context, e.g. when applying for a job and being confronted with some drunken picture of us obtained from Facebook.

Viktor puts this into a broad perspective, contrasting the two themes of “forgetting” and “remembering”. He convincingly argues that for most of human history, forgetting has been the default. This state of affairs has changed quite dramatically with advances in computer technology, data storage and information retrieval, available on a global information infrastructure. Now remembering is the default, as most of the information stored digitally is available forever and in multiple places.

What he sees at stake is power: the permanent threat that our activities are being watched by others – not necessarily now, but possibly in the future – can alter our behaviour today. What is more, he says that without forgetting it is hard for us to forgive, as we deny ourselves and others the possibility to change.

No matter to what degree you are prepared to follow the argument, the most intriguing question is how the current state of remembering could be changed back to forgetting. Viktor discusses a number of ideas that offer no real solution:

  1. privacy rights – these don’t go very far in changing actual behaviour
  2. information ecology – the idea of storing only as much as necessary
  3. digital abstinence – simply not using these digital tools, which is not very practical
  4. full contextualization – storing as much information as possible in order to provide the necessary context for evaluating information from the past
  5. cognitive adjustments – humans would have to learn how to discard information, which is very difficult
  6. privacy digital rights management – requires a global infrastructure that would create more threats than solutions

Instead, Viktor wants to establish mechanisms that ease forgetting, primarily by making it a little more difficult to remember. Ideas include:

  • an expiration date for information, less to technically force deletion than to socially force thinking about forgetting
  • making older information a bit more difficult to retrieve

Whatever the actual tool, the default should be forgetting, with users prompted to reflect on and choose just how long a certain piece of information should remain valid.
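As a purely hypothetical sketch of what that default might look like in practice (the function names and design below are illustrative, not from the talk or the book), an information store could demand an explicit validity period at write time and silently decline to return anything past its date:

```python
from datetime import datetime, timedelta

def store(data, days_valid):
    """Saving requires choosing a lifespan, making 'how long should this
    be remembered?' an explicit decision rather than an implicit default."""
    return {"data": data, "expires": datetime.now() + timedelta(days=days_valid)}

def recall(record):
    """Return the stored data, or None once it has expired."""
    if datetime.now() > record["expires"]:
        return None  # forgotten by default
    return record["data"]

# Hypothetical usage: the photo is retrievable now, but not in 30 days.
photo = store("drunken_party_photo.jpg", days_valid=30)
print(recall(photo))
```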

Nice closing statement: “Let us remember to forget!”

New issue of Policy and Internet (2,2) https://ensr.oii.ox.ac.uk/new-issue-of-policy-and-internet-22/ Thu, 19 Aug 2010 12:17:12 +0000 http://blogs.oii.ox.ac.uk/policy/?p=128 Welcome to the second issue of Policy & Internet for 2010! We are pleased to present six articles which investigate the role of the Internet in a wide range of policy processes and sectors: agenda setting in online and traditional media; environmental policy networks; online deliberation on climate change; data protection and privacy; net neutrality; and digital inclusion/exclusion. You may access any of the articles below at no charge.

Helen Margetts: Editorial

Ben Sayre, Leticia Bode, Dhavan Shah, Dave Wilcox, and Chirag Shah: Agenda Setting in a Digital Age: Tracking Attention to California Proposition 8 in Social Media, Online News and Conventional News

Kathleen McNutt and Adam Wellstead: Virtual Policy Networks in Forestry and Climate Change in the U.S. and Canada: Government Nodality, Internationalization and Actor Complexity

Julien Talpin and Stéphanie Wojcik: Deliberating Environmental Policy Issues: Comparing the Learning Potential of Online and Face-To-Face Discussions on Climate Change

Andrew A. Adams, Kiyoshi Murata, and Yohko Orito: The Development of Japanese Data Protection

Scott Jordan: The Application of Net Neutrality to Wireless Networks Based on Network Architecture

Alison Powell, Amelia Bryne, and Dharma Dailey: The Essential Internet: Digital Exclusion in Low-Income American Communities
