From private profit to public liabilities: how platform capitalism’s business model works for children
https://ensr.oii.ox.ac.uk/from-private-profit-to-public-liabilities-how-platform-capitalisms-business-model-works-for-children/ (14 September 2017)

Two concepts have recently emerged that invite us to rethink the relationship between children and digital technology: the “datafied child” (Lupton & Williamson, 2017) and children’s digital rights (Livingstone & Third, 2017). The concept of the datafied child highlights the sheer amount of data being harvested about children during their daily lives, while the children’s rights agenda responds to the ethical and legal challenges the datafied child presents.

Children have never been afforded the full sovereignty of adulthood (Cunningham, 2009) but both these concepts suggest children have become the points of application for new forms of power that have emerged from the digitisation of society. The most dominant form of this power is called “platform capitalism” (Srnicek, 2016). As a result of platform capitalism’s success, there has never been a stronger association between data, young people’s private lives, their relationships with friends and family, their life at school, and the broader political economy. In this post I will define platform capitalism, outline why it has come to dominate children’s relationship to the internet and suggest two reasons in particular why this is problematic.

Children predominantly experience the Internet through platforms

‘At the most general level, platforms are digital infrastructures that enable two or more groups to interact. They therefore position themselves as intermediaries that bring together different users: customers, advertisers, service providers, producers, suppliers, and even physical objects’ (Srnicek 2016, p43). Examples of platform capitalism include the technology superpowers – Google, Apple, Facebook, and Amazon. There are, however, many relevant instances of platforms that children and young people use. These include platforms for socialising, platforms for audio-visual content, platforms that communicate with smart devices and toys, platforms for games and sports franchises, and platforms that provide services (including within the public sector) that children or their parents use.

Young people choose to use platforms for play, socialising and expressing their identity. Adults have also introduced platforms into children’s lives: for example Capita SIMS is a platform used by over 80% of schools in the UK for assessment and monitoring (over the coming months at the Oxford Internet Institute we will be studying such platforms, including SIMS, for The Oak Foundation). Platforms for personal use have been facilitated by the popularity of tablets and smartphones.

Amongst the young, there has been a sharp uptake in tablet and smartphone usage at the expense of PC or laptop use. Sixteen per cent of 3-4 year olds have their own tablet, with this incidence doubling for 5-7 year olds. By the age of 12, smartphone ownership begins to outstrip tablet ownership (Ofcom, 2016). In our research at the OII, even when we included low-income families in our sample, 93% of teenagers owned a smartphone. This has brought forth the ‘appification’ of the web that Zittrain predicted in 2008. It means that children and young people predominantly experience the internet via platforms that we can think of as controlled gateways to the open web.

Platforms exist to make money for investors

In public discourse some of these platforms are called social media. This term distracts us from the reason many of these publicly floated companies exist: to make money for their investors. It is only logical for all these companies to pursue the WeChat model that is becoming so popular in China. WeChat is a closed-circuit platform, in that it keeps all engagements with the internet, including shopping, betting, and video calls, within its corporate compound. This brings WeChat closer to a monopoly on data extraction.

Platforms have consolidated their success by buying out their competitors. Alphabet, Amazon, Apple, Facebook and Microsoft have made 436 acquisitions worth $131 billion over the last decade (Bloomberg, 2017). Alternatively, they simply mimic the features of their competitors. For example, after acquiring Instagram, Facebook introduced Stories, a feature popularised by Snapchat, which lets users upload photos and videos as a ‘story’ that automatically expires after 24 hours.

The more data these companies capture that their competitors cannot, the more value they can extract from it and the better their business model works. It is unsurprising, therefore, that when we asked groups of teenagers during our research to draw a visual representation of what they thought the world wide web and internet looked like, almost all of them just drew corporate logos (they also told us they had no idea that Facebook owns WhatsApp and Instagram, or that Google owns YouTube). Platform capitalism dominates and controls their digital experiences — but what provisions do these platforms make for children?

The General Data Protection Regulation (GDPR), set to be implemented in all EU states, including the UK, in 2018, says that the processing of data about children below the age of 13 shall only be lawful if and to the extent that consent is given or authorised by the child’s parent or custodian. Because most platforms are American-owned, they tend to apply a piece of US federal legislation known as COPPA; the age of consent for using Snapchat, WhatsApp, Facebook, and Twitter, for example, is therefore set at 13. Yet the BBC found last year that 78% of children aged 10 to 12 had signed up to a platform such as Facebook, Instagram, Snapchat or WhatsApp.

Platform capitalism offloads its responsibilities onto the user

Why is this a problem? Firstly, because platform capitalism offloads its responsibilities onto problematically normative constructs of childhood, parenting, and parental relations. The owners of platforms assume children will always consult their parents before using their services, and that parents will read and understand their terms and conditions, which, research confirms, few users, whether children or adults, ever actually look at.

Moreover, we found in our research that many parents don’t have the knowledge, expertise, or time to monitor what their children are doing online. Some parents, for instance, worked night shifts or had more than one job. We talked to children who regularly moved between homes and whose estranged parents didn’t communicate with each other to supervise their children online. We found that parents who are in financial difficulty, or affected by mental or physical illness, are often unable to keep on top of their children’s digital lives.

We also interviewed children who use strategies to manage their parents’ anxieties so that they would be left alone. They would, for example, allow their parents to be their friends on Facebook, but do all their personal communication on other platforms that their parents knew nothing about. Often, then, the most vulnerable children offline (children in care, for example) are the most vulnerable children online. My colleagues at the OII found that 9 out of 10 teenagers who are bullied online also face regular ‘traditional’ bullying. Helping these children requires extra investment from their families, as well as from teachers, charities and social services. The burden also falls on schools to address problems such as fake news and extremism, including Holocaust denial, that children encounter on platforms.

This is typical of platform capitalism. It monetises what are called social graphs: the networks of users on its platforms, which it then makes available to advertisers. Social graphs are more than just nodes and edges representing our social lives: they are embodiments of often intimate or very sensitive data (which can often be de-anonymised by linking, matching and combining digital profiles). When graphs become dysfunctional and manifest social problems (such as abuse, doxxing, stalking, and grooming), local social systems and institutions — which are usually publicly funded — have to deal with the fall-out. These institutions are often either under-resourced and ill-equipped to solve such problems, or already overburdened.

Are platforms too powerful?

The second problem is the ecosystems of dependency that emerge, within which smaller companies or other corporations try to monetise their associations with successful platforms: they seek to get in on the monopolies of data extraction that the big platforms are creating. Many of these companies are not wealthy corporations and therefore don’t have the infrastructure or expertise to develop their own robust security measures. They may cut costs by neglecting security, or subcontract services out to yet more companies, which are then added to the network of data sharers.

Again, the platforms offload any responsibility onto the user. For example, WhatsApp tells its users: “Please note that when you use third-party services, their own terms and privacy policies will govern your use of those services”. These ecosystems are networks that are only as strong as their weakest link. There are many infamous examples that illustrate this, including the so-called ‘Snappening’, in which sexually explicit pictures harvested from Snapchat — a platform that is popular with teenagers — were released onto the open web. There is also a growing industry in fake apps that enable illegal data capture and fraud by leveraging the implicit trust users have in corporate walled gardens.

What can we do about these problems? Platform capitalism is restructuring labour markets and social relations in such a way that opting out is becoming an option available only to a privileged few. Moreover, we found that teenagers whose parents prohibited them from using social platforms often felt socially isolated and stigmatised. In the messy reality of social life, platforms can’t continue to offload their responsibilities onto parents and schools.

We need solutions fast because, by tacitly accepting the terms and conditions of platform capitalism – particularly when it tells us it is not responsible for the harms its business model can facilitate – we may now be passing an event horizon beyond which these companies become too powerful, unaccountable, and distant from our local reality.

References

Hugh Cunningham (2009) Children and Childhood in Western Society Since 1500. Routledge.

Sonia Livingstone, Amanda Third (2017) Children and young people’s rights in the digital age: An emerging agenda. New Media and Society 19 (5).

Deborah Lupton, Ben Williamson (2017) The datafied child: The dataveillance of children and implications for their rights. New Media and Society 19 (5).

Nick Srnicek (2016) Platform Capitalism. Wiley.

How and why is children’s digital data being harvested?
https://ensr.oii.ox.ac.uk/how-and-why-is-childrens-digital-data-being-harvested/ (10 May 2017)

Everyone of a certain age remembers logging on to a noisy dial-up modem and surfing the Web via AOL or AltaVista. Back then, the distinction between offline and online made much more sense. Today, three trends are conspiring to firmly confine this distinction to history: the mass proliferation of Wi-Fi, the appification of the Web, and the rapid expansion of the Internet of (smart) Things. Combined, they are engineering multi-layered information ecosystems that enmesh around children going about their everyday lives. But it’s time to refocus on our responsibilities to children before they are eclipsed by the commercial incentives that are driving these developments.

Three Trends

1. The proliferation of Wi-Fi means children can use smartphones or tablets in a variety of new contexts, including on buses and trains, in hotels and restaurants, and in schools, libraries and health centre waiting rooms.

2. Research confirms that apps on smartphones and tablets are now children’s primary gateway to the Web. This is the appification of the Web that Jonathan Zittrain predicted: the WeChat app, popular in China, is becoming its full realisation.

3. Simultaneously, the rapid expansion of the Internet of Things means everything is becoming ‘smart’ – phones, cars, toys, baby monitors, watches, toasters: we are even promised smart cities. Essentially, this means these devices have an IP address that allows them to receive, process, and transmit data on the Internet. Often these devices (including personal assistants like Alexa, game consoles and smart TVs) are picking up data produced by children. Marketing about smart toys tells us they enhance children’s play, augment children’s learning, incentivise children’s healthy habits and can even reclaim family time. Salient examples include Hello Barbie and Smart Toy Bear, which use voice and/or image recognition and connect to the cloud to analyse, process, and respond to children’s conversations and images. This sector is expanding to include app-enabled toys such as toy drones, cars, and droids (e.g. Star Wars BB-8); toys-to-life, which connect action figures to video games (e.g. Skylanders, Amiibo); puzzle and building games (e.g. Osmo, Lego Fusion); and children’s GPS-enabled wearables such as smart watches and fitness trackers. We need to look beyond the marketing to see what is making this technology ubiquitous.

The commercial incentives to collect children’s data

Service providers now use free Wi-Fi as an additional enticement for their customers, including families. Apps offer companies opportunities to contain children’s usage within a walled garden so that they can capture valuable marketing data, or to offer children and parents opportunities to make in-app purchases. Therefore, more and more companies, especially companies with no background in technology such as bus operators and cereal manufacturers, use Wi-Fi and apps to engage with children.

The smart label is also a new way for companies to differentiate their products from others in saturated markets that overwhelm consumers with choice. However, security is an additional cost that manufacturers of smart technologies are often unwilling to pay. The microprocessors in smart toys often don’t have the processing power required for strong security measures and secure communication, such as encryption (an 8-bit microcontroller, for example, cannot support the industry standard SSL to encrypt communications). These devices are also frequently designed without the ability to accommodate software or firmware updates. Some smart toys transmit data in clear text (parents, of course, are unaware of such details when purchasing these toys).

While children are using their devices they are constantly emitting data. Because this data is so valuable to businesses, it has become a cliché to frame it as an exploitable ‘natural’ resource like oil. This means every digitisable movement, transaction and interaction we make is potentially commodifiable. Moreover, the networks of specialist companies, partners and affiliates that capture, store, process, broker and resell the new oil are becoming so complex as to be impenetrable. This includes the involvement of commercial actors in public institutions such as schools.

Lupton & Williamson (2017) use the term ‘datafied child’ to draw attention to this creeping normalisation of harvesting data about children. As its provenance becomes more opaque the data is orphaned and vulnerable to further commodification. And when it is shared across unencrypted channels or stored using weak security (as high profile cases show) it is easily hacked. The implications of this are only beginning to emerge. In response, children’s rights, privacy and protection; the particular ethics of the capture and management of children’s data; and its potential for commercial exploitation are all beginning to receive more attention.

Refocusing on children

Apart from a ticked box, companies have no way of knowing if a parent or child has given their consent. Children, or their parents, will often sign away their data to quickly dispatch any impediment to accessing the Wi-Fi. When children use public Wi-Fi they are opening, often unencrypted, channels to their devices. We need to start mapping the range of actors who are collecting data in this way and find out if they have any provisions for protecting children’s data.

Similarly, when children use their apps, companies assume that a responsible adult has agreed to the terms and conditions. Parents are expected to be gatekeepers, boundary setters, and supervisors. However, for various reasons, there may not be an informed, (digitally) literate adult on hand. For example, parents may be too busy with work or too ill to stay on top of their children’s complex digital lives. Children are educated in year groups but they share digital networks and practices with older children and teenagers, including siblings, extended family members, and friends who may enable risky practices.

We may need to start looking at additional ways of protecting children that transfer the burden away from the family and towards the companies that are capturing and monetising the data. This includes being realistic about the efficacy of current legislation. Because children can simply enter a fake birthdate, application of the US Children’s Online Privacy Protection Act to restrict the collection of children’s personal data online has been fairly ineffectual (boyd et al., 2011). In Europe, the incoming General Data Protection Regulation allows EU states to set a minimum age of 16 under which children cannot consent to having their data processed, potentially encouraging an even larger population of minors to lie about their age online.

We need to ask what data capture and management would look like if guided by a children’s rights framework such as the one developed by Sonia Livingstone and endorsed by the Children’s Commissioner. Perhaps only companies that complied with strong security and anonymisation procedures would be licensed to trade in the UK? Given the financial drivers at work, an ideal solution would make better regulation a commercial incentive. We will be exploring these and other similar questions as they emerge over the coming months.


This work is part of the OII project “Child safety on the Internet: looking beyond ICT actors”, which maps the range of non-ICT companies engaging digitally with children and identifies areas where their actions might affect a child’s exposure to online risks such as data theft, adverse online experiences or sexual exploitation. It is funded by the Oak Foundation.

Exploring the world of self-tracking: who wants our data and why?
https://ensr.oii.ox.ac.uk/exploring-the-world-of-self-tracking-who-wants-our-data-and-why/ (7 April 2017)

Benjamin Franklin used to keep charts of his time spent and virtues lived up to. Today, we use technology to self-track: our hours slept, steps taken, calories consumed, medications administered. But what happens when we turn our everyday experience — in particular, health and wellness-related experience — into data?

“Self-Tracking” (MIT Press) by Gina Neff and Dawn Nafus examines how people record, analyze, and reflect on this data — looking at the tools they use and the communities they become part of, and offering an introduction to the essential ideas and key challenges of using these technologies. In considering self-tracking as a social and cultural phenomenon, they describe not only the use of data as a kind of mirror of the self but also how this enables people to connect to, and learn from, others.

They also consider what’s at stake: who wants our data and why, the practices of serious self-tracking enthusiasts, the design of commercial self-tracking technology, and how people are turning to self-tracking to fill gaps in the healthcare system. None of us can lead an entirely untracked life today, but in their book, Gina and Dawn show us how to use our data in a way that empowers and educates us.

We caught up with Gina to explore the self-tracking movement:

Ed.: Over one hundred million wearable sensors were shipped last year to help us gather data about our lives. Is the trend and market for personal health-monitoring devices ever-increasing, or are we seeing saturation of the device market and the things people might conceivably want to (pay to) monitor about themselves?

Gina: By focusing on direct-to-consumer wearables and mobile apps for health and wellness in the US, we see a lot of tech developed with very little focus on impact or efficacy. I think to some extent we’ve hit the trough in the ‘hype’ cycle, where the initial excitement over digital self-tracking is giving way to the hard and serious work of figuring out how to make things that improve people’s lives. Recent clinical trial data show that activity trackers, for example, don’t help people to lose weight. What we try to do in the book is to help people figure out what self-tracking can do for them, and to advocate for people being able to access and control their own data to help them ask — and answer — the questions that they have.

Ed.: A question I was too shy to ask the first time I saw you speak at the OII — how do you put the narrative back into the data? That is, how do you make stories that might mean something to a person, out of the vast piles of strangely meaningful-meaningless numbers that their devices accumulate about them?

Gina: We really emphasise community. It might sound clichéd but it truly helps. When I read some scholars’ critiques of the Quantified Self meetups that happen around the world I wonder if we have actually been to the same meetings. Instead of some kind of technophilia there are people really working to make sense of information about their lives. There’s a lot of love for tech, but there are also people trying to figure out what their numbers mean, are they normal, and how to design their own ‘n of 1’ trials to figure out how to make themselves better, healthier, and happier. Putting narrative back into data really involves sharing results with others and making sense together.

Ed.: There’s already been a lot of fuss about monetisation of NHS health records: I imagine the world of personal health / wellness data is a vast Wild West of opportunity for some (i.e. companies) and potential exploitation of others (i.e. the monitored), with little law or enforcement? For a start: is this health data or social data? And are these equivalent forms of data, or are they afforded different protections?

Gina: In an opinion piece in Wired UK last summer I asked what happens to data ownership when your smartphone is your doctor. Right now we afford different privacy protection to health-related data than other forms of personal data. But very soon trace data may be useful for clinical diagnoses. There are already in place programmes for using trace data for early detection of mood disorders, and research is underway on using mobile data for the diagnosis of movement disorders. Who will have control and access to these potential early alert systems for our health information? Will it be legally protected to the same extent as the information in our medical records? These are questions that society needs to settle.

Ed.: I like the central irony of “mindfulness” (a meditation technique involving a deep awareness of your own body), i.e. that these devices reveal more about certain aspects of the state of your body than you would know yourself: but you have to focus on something outside of yourself (i.e. a device) to gain that knowledge. Do these monitoring devices support or defeat “mindfulness”?

Gina: I’m of two minds, no pun intended. Many of the Quantified Self experiments we discuss in the book involved people playing with their data in intentional ways and that level of reflection in turn influences how people connect the data about themselves to the changes they want to make in their behaviour. In other words, the act of self-tracking itself may help people to make changes. Some scholars have written about the ‘outsourcing’ of the self, while others have argued that we can develop ‘exosenses’ outside our bodies to extend our experience of the world, bringing us more haptic awareness. Personally, I do see the irony in smartphone apps intended to help us reconnect with ourselves.

Ed.: We are apparently willing to give up a huge amount of privacy (and monetizable data) for convenience, novelty, and to interact with seductive technologies. Is the main driving force of the wearable health-tech industry the actual devices themselves, or the data they collect? i.e. are these self-tracking companies primarily device/hardware companies or software/data companies?

Gina: Sadly, I think it is neither. The drop off in engagement with wearables and apps is steep with the majority falling into disuse after six months. Right now one of the primary concerns I have as an Internet scholar is the apparent lack of empathy companies seem to have for their customers in this space. People operate under the assumption that the data generated by the devices they purchase is ‘theirs’, yet companies too often operate as if they are the sole owners of that data.

Anthropologist Bill Maurer has proposed replacing data ownership with a notion of data ‘kinship’ – that both technology companies and their customers have rights and responsibilities to the data that they produce together. Until we have better social contracts and legal frameworks for people to have control and access to their own data in ways that allow them to extract it, query it, and combine it with other kinds of data, then that problem of engagement will continue and activity trackers will sit unused on bedside tables or uncharged in the back of drawers. The ability to help people ask the next question or design the next self-tracking experiment is where most wearables fail today.

Ed.: And is this data at all clinically useful / interoperable with healthcare and insurance systems? i.e. do the companies producing self-monitoring devices work to particular data and medical standards? And is there any auditing and certification of these devices, and the data they collect?

Gina: This idea that the data is just one interoperable system away from usefulness is seductive but so, so wrong. I was recently at a panel of health innovators, the title of which was ‘No more Apps’. The argument was that we’re not going to get to meaningful change in healthcare simply by adding a new data stream. Doctors in our study said things like ‘I don’t need more data; I need more resources.’ Right now we have few protections ensuring that this data won’t harm individuals’ rights to insurance, or won’t be used to discriminate against them, and yet there are few results showing that commercially available wearable devices deliver clinical value. There’s still a lot of work needed before this can happen.

Ed.: Lastly — just as we share our music on iTunes, could you see a scenario where we start to share our self-status with other device wearers? Maybe to increase our sociability and empathy by being able to send auto-congratulations to people who’ve walked a lot that day, or to show concern to people with elevated heart rates / skin conductivity (etc.)? Given that the logical next step to accumulating things is to share them…

Gina: We can see that future scenario now in groups like Patients Like Me, Cure Together, and Quantified Self meetups. What these ‘edge’ use cases teach us for more everyday self-tracking uses is that real support and community can form around people sharing their data with others. These are projects that start from individuals with information about themselves and work to build toward collective, social knowledge. Other types of ‘citizen science’ projects are underway, such as the Personal Genome Project, where people can donate their health data for science. The Stanford-led MyHeart Counts study on iPhone and Apple Watch recruited 6,000 people in its first two weeks and now has over 40,000 US participants. Those are numbers for clinical studies that we’ve just never seen before.

My co-author led the development of an interesting tool, Data Sense, that lets people without stats training visualize the relationships among variables in their own data or easily combine their data with data from other people. When people can do that they can begin asking the questions that matter for them and for their communities. What we know won’t work in the future of self-tracking data, though, are the lightweight online communities that technology brands just throw together. I’m just not going to be motivated by a random message from LovesToWalk1949, but under the right conditions I might be motivated by my mom, my best friend or my social network. There is still a lot of hard work that has to be done to get the design of self-tracking tools, practices, and communities for social support right.


Gina Neff was talking to blog editor David Sutcliffe about her book (with Dawn Nafus) “Self-Tracking” (MIT Press).

Could data pay for global development? Introducing data financing for global good
https://ensr.oii.ox.ac.uk/could-data-pay-for-global-development-introducing-data-financing-for-global-good/ (3 January 2017)

“If data is the new oil, then why aren’t we taxing it like we tax oil?” That was the essence of the provocative brief that set in motion our recent 6-month research project funded by the Rockefeller Foundation. The results are detailed in the new report: Data Financing for Global Good: A Feasibility Study.

The parallels between data and oil break down quickly once you start considering practicalities such as measuring and valuing data. Data is, after all, a highly heterogeneous good whose value is context-specific — very different from a commodity such as oil that can be measured and valued by the barrel. But even if the value of data can’t simply be metered and taxed, are there other ways in which the data economy could be more directly aligned with social good?

Data-intensive industries already contribute to social good by producing useful services and paying taxes on their profits (though some pay regrettably little). But are there ways in which the data economy could directly finance global causes such as climate change prevention, poverty alleviation and infrastructure? Such mechanisms should not just arbitrarily siphon off money from industry, but also contribute value back to the data economy by correcting market failures and investment gaps. The potential impacts are significant: estimates value the data economy at around seven percent of GDP in rich industrialised countries, or around ten times the value of the United Nations development aid spending goal.

Here’s where “data financing” comes in. It’s a term we coined, based on innovative financing, a concept increasingly used in the philanthropic world. Innovative financing refers to initiatives that seek to unlock private capital for the sake of global development and socially beneficial projects, which face substantial funding gaps globally. Since government funding towards addressing global challenges is not growing, the proponents of innovative financing are asking how else these critical causes could be funded. An existing example of innovative financing is the UNITAID air ticket levy used to advance global health.

Data financing, then, is a subset of innovative financing that refers to mechanisms that attempt to redirect a slice of the value created in the global data economy towards broader social objectives. For instance, a Global Internet Subsidy funded by large Internet companies could help to educate and build infrastructure in the world’s marginalized regions, in the long run also growing the market for Internet companies’ services. But such a model would need well-designed governance mechanisms to avoid the pitfalls of current Internet subsidization initiatives, which risk failing because of well-founded concerns that they further entrench Internet giants’ dominance over emerging digital markets.

Besides the Global Internet Subsidy, other data financing models examined in the report are a Privacy Insurance for personal data processing, a Shared Knowledge Duty payable by businesses profiting from open and public data, and an Attention Levy to disincentivise intrusive marketing. Many of these have been considered before, and they come with significant economic, legal, political, and technical challenges. Our report considers these challenges in turn, assesses the feasibility of potential solutions, and presents rough estimates of potential financial impacts.

Some of the prevailing business models of the data economy — provoking users’ attention, extracting their personal information, and monetizing it through advertising — are more or less taken for granted today. But they are something of a historical accident, an unanticipated corollary to some of the technical and political decisions made early in the Internet’s design. Certainly they are not any inherent feature of data as such. Although our report focuses on the technical, legal, and political practicalities of the idea of data financing, it also invites a careful reader to question some of the accepted truths on how a data-intensive economy could be organized, and what business models might be possible.

Read the report: Lehdonvirta, V., Mittelstadt, B. D., Taylor, G., Lu, Y. Y., Kadikov, A., and Margetts, H. (2016) Data Financing for Global Good: A Feasibility Study. University of Oxford: Oxford Internet Institute.

Exploring the Ethics of Monitoring Online Extremism
https://ensr.oii.ox.ac.uk/exploring-the-ethics-of-monitoring-online-extremism/ (23 March 2016)

(Part 2 of 2) The Internet serves not only as a breeding ground for extremism, but also offers myriad data streams which potentially hold great value to law enforcement. The report by the OII’s Ian Brown and Josh Cowls for the VOX-Pol project, Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material, explores the complexities of policing the web for extremist material, and its implications for security, privacy and human rights. In the second of a two-part post, Josh Cowls and Ian Brown discuss the report with blog editor Bertie Vidgen. Read the first post.

Surveillance in NYC’s financial district. Photo by Jonathan McIntosh (flickr).

Ed: Josh, political science has long posed a distinction between public spaces and private ones. Yet it seems like many platforms on the Internet, such as Facebook, cannot really be categorized in such terms. If this is correct, what does it mean for how we should police and govern the Internet?

Josh: I think that is right – many online spaces are neither public nor private. This is also an issue for some privacy legal frameworks (especially in the US). A lot of the covenants and agreements were written forty or fifty years ago, long before anyone had really thought about the Internet. That has now forced governments, societies and parliaments to adapt these existing rights and protocols for the online sphere. I think that we have some fairly clear laws about the use of human intelligence sources, and police law in the offline sphere. The interesting question is how we can take that online. How can the pre-existing standards, like the requirement that procedures are necessary and proportionate, or the ‘right to appeal’, be incorporated into online spaces? In some cases there are direct analogies. In other cases there needs to be some re-writing of the rule book to try to figure out what we mean. And, of course, it is difficult because the internet itself is always changing!

Ed: So do you think that concepts like proportionality and justification need to be updated for online spaces?

Josh: I think that at a very basic level they are still useful. People know what we mean when we talk about something being necessary and proportionate, and about the importance of having oversight. I think we also have a good idea about what it means to be non-discriminatory when applying the law, though this is one of those areas that can quickly get quite tricky. Consider the use of online data sources to identify people. On the one hand, the Internet is ‘blind’ in that it does not automatically codify social demographics. In this sense it is not possible to profile people in the same way that we can offline. On the other hand, it is in some ways the complete opposite. It is very easy to directly, and often invisibly, create really firm systems of discrimination – and, most problematically, to do so opaquely.

This is particularly challenging when we are dealing with extremism because, as we pointed out in the report, extremists are generally pretty unremarkable in terms of demographics. It perhaps used to be true that extremists were more likely to be poor or to have had challenging upbringings, but many of the people going to fight for the Islamic State are middle class. So we have fewer demographic pointers to latch onto when trying to find these people. Of course, insofar as there are identifiers they won’t be released by the government. The real problem for society is that there isn’t very much openness and transparency about these processes.

Ed: Governments are increasingly working with the private sector to gain access to different types of information about the public. For example, in Australia a Telecommunications bill was recently passed which requires all telecommunication companies to keep the metadata – though not the content data – of communications for two years. A lot of people opposed the Bill because metadata is still very informative, and as such there are some clear concerns about privacy. Similar concerns have been expressed in the UK about an Investigatory Powers Bill that would require new Internet Connection Records about customers’ online activities. How much do you think private corporations should protect people’s data? And how much should concepts like proportionality apply to them?

Ian: To me the distinction between metadata and content data is fairly meaningless. For example, often just knowing when and who someone called and for how long can tell you everything you need to know! You don’t have to see the content of the call. There are a lot of examples like this which highlight the slightly ludicrous nature of distinguishing between metadata and content data. It is all data. As has been said by former US CIA and NSA Director Gen. Michael Hayden, “we kill people based on metadata.”

One issue that we identified in the report is the increased onus on companies to monitor online spaces, and all of the legal entanglements that come from this given that companies might not be based in the same country as the users. One of our interviewees called this new international situation a ‘very different ballgame’. Working out how to deal with problematic online content is incredibly difficult, and some huge issues of freedom of speech are bound up in this. On the one hand, there is a government-led approach where we use the law to take down content. On the other hand is a broader approach, whereby social networks voluntarily take down objectionable content even if it is permissible under the law. This causes much more serious problems for human rights and the rule of law.

Read the full report: Brown, I., and Cowls, J., (2015) Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material. VOX-Pol Publications.


Ian Brown is Professor of Information Security and Privacy at the OII. His research is focused on surveillance, privacy-enhancing technologies, and Internet regulation.

Josh Cowls is a student and researcher based at MIT, working to understand the impact of technology on politics, communication and the media.

Josh and Ian were talking to Blog Editor Bertie Vidgen.

New Voluntary Code: Guidance for Sharing Data Between Organisations
https://ensr.oii.ox.ac.uk/new-voluntary-code-guidance-for-sharing-data-between-organisations/ (8 January 2016)

Many organisations are coming up with their own internal policies and guidelines for data sharing. However, for data sharing between organisations to be straightforward, there needs to be a common understanding of basic policy and practice. During her time as an OII Visiting Associate, Alison Holt developed a pragmatic solution in the form of a Voluntary Code, anchored in the developing ISO standards for the Governance of Data. She discusses the voluntary code, and the need to provide urgent advice to organisations struggling with policy for sharing data.

Collecting, storing and distributing digital data is significantly easier and cheaper now than ever before, in line with predictions from Moore, Kryder and Gilder. Organisations are incentivised to collect large volumes of data with the hope of unleashing new business opportunities or maybe even new businesses. Consider the likes of Uber, Netflix, and Airbnb, and the other data mongers who have built services based solely on digital assets.

The use of this new abundant data will continue to disrupt traditional business models for years to come, and there is no doubt that these large data volumes can provide value. However, they also bring associated risks (such as unplanned disclosure and hacks) and they come with constraints (for example in the form of privacy or data protection legislation). Hardly a week goes by without a data breach hitting the headlines. Even if your telecommunications provider didn’t inadvertently share your bank account and sort code with hackers, and your child wasn’t one of the hundreds of thousands of children whose birthdays, names, and photos were exposed by a smart toy company, you might still be wondering exactly how your data is being looked after by the banks, schools, clinics, utility companies, local authorities and government departments that are so quick to collect your digital details.

Then there are the companies who have invited you to sign away the rights to your data and possibly your privacy too – the ones that ask you to sign the Terms and Conditions for access to a particular service (such as a music or online shopping service) or have asked you for access to your photos. And possibly you are one of the “worried well” who wear or carry a device that collects your health data and sends it back to storage in a faraway country, for analysis.

So unless you live in a lead-lined concrete bunker without any access to internet-connected devices, and you never need to pass by webcams or sensors, or use public transport or public services, then your data is being collected and shared. And for the majority of the time, you benefit from this enormously. The bus stop tells you exactly when the next bus is coming, you have easy access to services and entertainment fitted very well to your needs, and you can do most of your bank and utility transactions online in the peace and quiet of your own home. Beyond you as an individual, there are organisations “out there” sharing your data to provide you with better healthcare, education, smarter city services and secure and efficient financial services, and generally matching the demand for services with the people needing them.

So we most likely all have data that is being shared and it is generally in our interest to share it, but how can we trust the organisations responsible for sharing our data? As an organisation, how can I know that my partner and supplier organisations are taking care of my client and product information?

Organisations taking these issues seriously are coming up with their own internal policies and guidelines. However, for data sharing between organisations to be straightforward, there needs to be a common understanding of basic policy and practice. During my time as a visiting associate at the Oxford Internet Institute, University of Oxford, I have developed a pragmatic solution in the form of a Voluntary Code. The Code has been produced using the guidelines for voluntary code development produced by the Office of Consumer Affairs, Industry Canada. More importantly, the Code is anchored in the developing ISO standards for the Governance of Data (the 38505 series). These standards apply the governance principles and model from the 38500 standard and introduce the concept of a data accountability map, highlighting six focus areas for a governing body to apply governance. The early-stage standard suggests considering the aspects of Value, Risk and Constraint for each area, to determine what practice and policy should be applied to maximise the value from organisational data, whilst applying constraints as set by legislation and local policy, and minimising risk.

I am Head of the New Zealand delegation to the ISO group developing IT Service Management and IT Governance standards, SC40, and am leading the development of the 38505 series of Governance of Data standards, working with a talented editorial team of industry and standards experts from Australia, China and the Netherlands. I am confident that the robust ISO consensus-led process involving subject matter experts from around the world, will result in the publication of best practice guidance for the governance of data, presented in a format that will have relevance and acceptance internationally.

In the meantime, however, I see a need to provide urgent advice to organisations struggling with policy for sharing data. I have used my time at Oxford to interview policy, ethics, smart city, open data, health informatics, education, cyber security and social science experts, as well as users, owners and curators of large data sets, and have come up with a “Voluntary Code for Data Sharing”. The Code takes three areas from the data accountability map in the developing ISO standard 38505-1 (namely Collect, Store, and Distribute) and applies the aspects of Value, Risk and Constraint to provide seven maxims for sharing data. To assist with adoption and compliance, the Code provides references to best practice and examples. As the ISO standards for the Governance of Data develop, the Code will be updated. New examples of good practice will be added as they come to light.

[A permanent home for the voluntary code is currently being organised; please email me in the meantime if you are interested in it: Alison.holt@longitude174.com]

The Code is deliberately short and succinct, but it does provide links for those who need to read more to understand the underpinning practices and standards, and for those tasked with implementing organisational data policy and practice. It cannot guarantee good outcomes. With new security threats arising daily, nobody can fully guarantee the safety of your information. However, if you deal with an organisation that is compliant with the Voluntary Code, then you can at least have assurance that the organisation has considered how it is using your data now and how it might want to reuse your data in the future, how and where your data will be stored, and finally how your data will be distributed or discarded. And that’s a good start!


Alison Holt was an OII Academic Visitor in late 2015. She is an internationally acclaimed expert in the Governance of Information Technology and Data, heading up the New Zealand delegations to the international standards committees for IT Governance and Service Management (SC40) and Software and Systems Engineering (SC7). The British Computer Society published Alison’s first book on the Governance of IT in 2013.

Government “only” retaining online metadata still presents a privacy risk
https://ensr.oii.ox.ac.uk/government-only-retaining-online-metadata-still-presents-a-privacy-risk/ (30 November 2015)

Issues around data capture, retention and control are gaining significant attention in many Western countries — including in the UK. In this piece originally posted on the Ethics Centre Blog, the OII’s Brent Mittelstadt considers the implications of metadata retention for privacy. He argues that when considered in relation to individuals’ privacy, metadata should not be viewed as fundamentally different to data about the content of a communication.

Since 13 October 2015 telecommunications providers in Australia have been required to retain metadata on communications for two years. Image by r2hox (Flickr).

Australia’s new data retention law for telecommunications providers, comparable to extant UK and US legislation, came into effect 13 October 2015. Telecoms and ISPs are now required to retain metadata about communications for two years to assist law enforcement agencies in crime and terrorism investigation. Despite now being in effect, the extent and types of data to be collected remain unclear. The law has been widely criticised for violating Australians’ right to privacy by introducing overly broad surveillance of civilians. The Government has argued against this portrayal. They argue the content of communications will not be retained but rather the “data about the data” – location, time, date and duration of a call.

Metadata retention raises complex ethical issues often framed in terms of privacy which are relevant globally. A popular argument is that metadata offers a lower risk of violating privacy compared to primary data – the content of communication. The distinction between the “content” and “nature” of a communication implies that if the content of a message is protected, so is the privacy of the sender and receiver.

The assumption that metadata retention is more acceptable because of its lower privacy risks is unfortunately misguided. Sufficient volumes of metadata offer comparable opportunities to generate invasive information about civilians. Consider a hypothetical. I am given access to a mobile carrier's dataset that specifies time, date, caller and receiver identity, in addition to a continuous record of location constructed from telecommunication tower triangulation records. I see from this that when John's wife Jane leaves the house, John often calls Jill and visits her for a short period afterwards. From this I conclude that John may be having an affair with Jill. Now consider the alternative. Instead of metadata I have access to recordings of the calls between John and Jill, with which I reach the same conclusion.
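To make the hypothetical concrete, here is a minimal sketch in Python (pandas) of the kind of inference that retained metadata alone can support. All names, fields and records are invented for illustration; no real carrier dataset or API is implied.

```python
# Hypothetical illustration only: invented metadata, no call content.
import pandas as pd

# Retained call records: who called whom, when, for how long.
calls = pd.DataFrame([
    {"caller": "John", "receiver": "Jill", "start": "2015-10-14 19:05", "duration_s": 40},
    {"caller": "John", "receiver": "Jill", "start": "2015-10-21 19:10", "duration_s": 35},
])

# Location fixes reconstructed from tower triangulation (also metadata).
locations = pd.DataFrame([
    {"person": "Jane", "place": "away_from_home", "time": "2015-10-14 18:50"},
    {"person": "John", "place": "jill_address",   "time": "2015-10-14 19:40"},
    {"person": "Jane", "place": "away_from_home", "time": "2015-10-21 18:55"},
    {"person": "John", "place": "jill_address",   "time": "2015-10-21 19:45"},
])

calls["start"] = pd.to_datetime(calls["start"])
locations["time"] = pd.to_datetime(locations["time"])

# For each call to Jill, check whether Jane had just left home and John
# then appeared at Jill's address within the following hour.
for _, call in calls.iterrows():
    jane_out = locations[(locations.person == "Jane") &
                         (locations.place == "away_from_home") &
                         (locations.time < call.start)]
    john_visit = locations[(locations.person == "John") &
                           (locations.place == "jill_address") &
                           (locations.time.between(call.start,
                                                   call.start + pd.Timedelta("1h")))]
    if not jane_out.empty and not john_visit.empty:
        print(f"Pattern on {call.start.date()}: short call to Jill, then a visit while Jane is out")
```

A few joins over timestamps and identifiers are enough to produce the intrusive inference described above; at no point is any recorded conversation needed.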

From a privacy perspective, which method I used to infer something about John's marriage makes little difference. In both cases I am making an intrusive inference about John based on data that describes his behaviours. I cannot be certain, but in both cases I am sufficiently confident that my inference is correct based on the data available. My inferences are actionable – I treat them as if they are reliable, accurate knowledge when interacting with John. It is this willingness to act on uncertainty (which is central to ‘Big Data’) that makes metadata ethically similar to primary data. While it is comparatively difficult to learn something from metadata, the potential is undeniable. Both types allow for invasive inferences to be made about the lives and behaviours of people.

Going further, some would argue that metadata can actually be more invasive than primary data. Variables such as location, time and duration are easier to assemble into a historical record of behaviour than content. These concerns are deepened by the difficulty of “opting out” of metadata surveillance. While a person can hypothetically forego all modern communication technologies, privacy suddenly has a much higher cost in terms of quality of life.

Technologies such as encrypted communication platforms, virtual private networks (VPNs) and anonymity networks have all been advocated as ways to subvert metadata collection by hiding aspects of your communications. It is worth remembering that these techniques remain feasible only so long as they remain legal, and only for those with the technical knowledge and (in some cases) the ability to pay. These technologies raise the question of whether a right to anonymity exists. Perhaps privacy-enhancing technologies are immoral? Headlines about digital piracy and the “dark web” show how quickly technologically hiding one’s identity and behaviours can take on a criminal and immoral tone. The status quo of privacy subtly shifts when techniques to hide aspects of one’s personal life are portrayed as necessarily subversive. The technologies used to combat metadata retention are not criminal or immoral – they are privacy-enhancing technologies.

Privacy is historically a fundamental human value. Individuals have a right to privacy. Violations must be justified by a competing interest. In discussing the ethics of metadata retention and anonymity technologies it is easy to forget this status quo. Privacy is not something that individuals have to justify or argue for – it should be assumed.


Brent Mittelstadt is a Postdoctoral Research Fellow at the Oxford Internet Institute working on the ‘Ethics of Biomedical Big Data‘ project with Prof. Luciano Floridi. His research interests include the ethics of information handled by medical ICT, theoretical developments in discourse and virtue ethics, and epistemology of information.

Can drones, data and digital technology provide answers to nature conservation challenges?
https://ensr.oii.ox.ac.uk/can-drones-data-and-digital-technology-provide-answers-to-nature-conservation-challenges/ (8 December 2014)
Drone technology for conservation purposes is new, and its cost effectiveness — when compared with other kinds of intervention, such as training field observers — not yet proven. Drone by ConservationDrones.org.

Drones create headlines. Like AI, their unfettered use plays on our most sci-fi induced fears. Military deployment both intrigues us and serves to increase our suspicions. Commerce enthusiastically explores their value to meet consumers ‘on-demand’ expectations. Natural history film-makers experiment with drone technology to give us rare access to the natural world. In June 2014, the US National Park Service banned them from 401 parks while it worked out a new management policy and this week a pilot reported sighting a UAV near the flight path of a passenger jet. It’s clear that the increasing use of drones presents complex challenges.

UAVs, or Unmanned Aerial Vehicles, offer intriguing possibilities for nature conservation. One dominant focus of experimentation is their use for data capture and monitoring alongside data analytics and other spatial and reporting tools. The potential of integrated digital capabilities such as GPS / satellite tagging and geo-mapping, cloud services, mobile devices, camera-traps, radio telemetry, LiDAR and data from field observation, to better understand and respond to nature conservation concerns, may be significant.

Suggestive of this promise is the facility to assess biodiversity in difficult or rapidly changing terrain, to track rehabilitated species and gather data to refine reintroduction programmes, or to pick up real-time changes in animal behaviour that may indicate an imminent threat of poaching. The need has never been greater. The findings of the WWF Living Planet Report 2014 are sobering: ‘Species populations worldwide have declined by 52% since 1970’. Many digitally enabled conservation projects are now underway, and hopes are high.

Professor Serge Wich (Professor in Primate Biology, Liverpool John Moores University) and Lian Pin Koh, founding directors of ConservationDrones.org, have developed drone technology to support the tracking of orangutans and to assess their habitat loss. This work produces geometrically accurate 3D computer representations of the forest in near real-time that can help, for example, to detect forest fires and illegal logging activity. In the course of this work they have continued to innovate and refine their model. The orangutan drone is more robust, it employs a Google Maps interface, and the photo data now uses automatic object recognition to detect orangutan nests. Integrating such technology with radio collars (and in some cases small sensors implanted into newly released animals), camera-traps, spy microphones and wifi data collection can build a far more detailed picture of the behaviour and range of endangered species. This, when overlaid with other data-sets, is likely to prove a valuable tool for conservation planners, if integration challenges can be resolved.

While the use of digital capabilities is broadly embraced by academics and field researchers, drones do, however, present some very particular issues. A number of academics working in drone research for nature conservation have, for example, stressed the need for caution in their rapid deployment to support anti-poaching initiatives: the concern being to avoid creating a perception of ‘fortress conservation’. Organisations are urged to consider how on-the-ground relationships may be affected by fear of ‘sinister technologies of surveillance’, thereby undoing the significant strides made through local engagement.

Academics and field researchers also point out that drone technology for conservation purposes is very new, and its cost effectiveness — when compared with other kinds of intervention, such as training field observers — is not yet proven. Because they are used in remote environments, drones (and similar tracking technologies) are also subject to issues such as battery life, range, GPS coverage, data consistency and the management of analytics systems. The validity of drone data in securing poaching convictions is another area of consideration and, as the US National Park Service decision illustrates, the broader legal and political impacts are still to be thought through.

Nature conservation initiatives are often costly and resource-intensive because of their hard-to-navigate, large-scale operating contexts. They have to overcome issues associated with prevailing political unrest, lack of economic support, and sensitivities around local customs and concerns. Innovation in such environments requires far more thought than the simple application of technologies demonstrated to work elsewhere. Every environment and each species presents particular challenges. Digital technologies, data interpretation and local participation require a precise set of conditions to work optimally.

There is also a danger of reinventing the wheel: solutions to many of the problems nature conservationists seek to solve might be found, for example, by harnessing lessons from the rapid iteration of digital health technologies (device innovation remains critical for many nature conservation projects) or by surveying solutions already in use by the military and extraction industries. Many NGOs have worked hard to establish collaborations that facilitate such valuable knowledge exchange.

In 2011 one consortium created SMART (the Spatial Monitoring And Reporting Tool). It helps local rangers collect data using GPS, which is analysed automatically to support the swift diversion of resources to areas of need. It’s simple, quick, and visual — using maps to present findings — and the integration of low-cost, robust mobile devices enables triangulation for better navigation and syncing with cameras. It also supports tracking even when the device is switched off to preserve battery life.
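SMART’s internal workings are not described in this post, but the kind of automatic analysis mentioned above can be illustrated with a small, hypothetical Python sketch: GPS-tagged ranger observations are binned into a coarse grid and cells are ranked by incident count, so that patrol resources can be directed to the worst-affected areas. The records and cell size below are invented for illustration.

# Hypothetical sketch of hotspot analysis on ranger-collected GPS data;
# not SMART's actual implementation.
from collections import Counter

# (latitude, longitude, incident_type) records as a ranger might log them.
observations = [
    (-2.154, 34.685, "snare"),
    (-2.151, 34.689, "snare"),
    (-2.152, 34.687, "carcass"),
    (-2.310, 34.512, "snare"),
]

CELL = 0.01  # grid cell size in degrees (roughly 1 km); an arbitrary choice

def cell_of(lat, lon):
    """Map a GPS fix to a coarse grid cell index."""
    return (int(lat // CELL), int(lon // CELL))

hotspots = Counter(cell_of(lat, lon) for lat, lon, _ in observations)

# Cells with the most incidents first: candidates for extra patrol effort.
for cell, count in hotspots.most_common(3):
    print(f"cell {cell}: {count} incidents")

A real deployment works with far larger datasets, proper map projections and the incident categories rangers actually record, but the aggregation step that turns field observations into a resourcing decision is essentially this simple.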

Another collective, Global Forest Watch, established in 2012 and led by Lilian Pintea (VP of Conservation Science), has developed a platform for different groups to record and share research such as deforestation alerts. Nature conservation groups can access a common cloud-based mapping system with shared variables that also supports integration with other data sets and the overlay of high-resolution images, such as those gathered by drones.

The Jane Goodall Institute is working with local partners to apply these advances in remote sensing to global land-cover maps for chimpanzee and gorilla surveying. Many other organisations and governments are experimenting with drones in efforts to protect other particularly fragile environments. In Belize, for example, conservation drones now monitor the health of marine ecosystems.

It’s a significant task, but recent projects of this kind are yielding valuable results. Other work that employs digital innovation and UAV capabilities to support nature conservation – beyond monitoring and data capture – includes the development of robot insects at the Harvard Robotics Lab (including robot bees, which could act as a ‘stop gap’ while populations recover) and ‘Robirds’ built at the University of Twente (which combine simple digital technologies with an understanding of predator-prey relationships to keep raptors away from areas of danger such as airfields).

Digital innovation for nature conservation will benefit from cross-disciplinary creativity. The marriage of digital media, technology, on-the-ground engagement and research expertise has great potential to address concerns and improve opportunities to collaborate. The questions that persist are: how best to share knowledge and harness new digital technologies and innovation? How to build on the hard work already done on the ground with policy-makers and across boundaries? How to ensure new technologies are used appropriately, are cost-effective, and are integrated with sensitivity to local needs and resource constraints? And, most importantly of all, how can we best employ such capabilities to protect and share the wonder of the natural world?


Lisa Sargood is director of digital strategy at Horton Wolfe and Academic Visitor at the OII. Prior to this she was Exec Producer and Commissioner for digital output across BBC Nature and BBC Science. Her projects include Spring/Autumn-watch, Blue Planet, Planet Earth, Stargazing Live, Horizon, LabUK (Brain Test Britain etc.), Big Cat Live and Virtual Revolution, which won a BAFTA and an International Emmy®. Her research focuses on the potential for digital media, technology and public engagement to support nature conservation and drive pro-environment behaviours.

Contact: lisa.sargood@oii.ox.ac.uk Twitter: @lisasargood

Unpacking patient trust in the “who” and the “how” of Internet-based health records https://ensr.oii.ox.ac.uk/unpacking-patient-trust-in-the-who-and-the-how-of-internet-based-health-records/ Mon, 03 Mar 2014 08:50:54 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2615 In an attempt to reduce costs and improve quality, digital health records are permeating health systems all over the world. Internet-based access to them creates new opportunities for access and sharing – while at the same time causing nightmares for many patients: medical data floating around freely in the cloud, unprotected from strangers, abused to target and discriminate against people without their knowledge.

Individuals often have little knowledge about the actual risks, and single instances of breaches are exaggerated in the media. Key to the successful adoption of Internet-based health records, however, is how much trust a patient places in the technology: trust that data will be properly secured from inadvertent leakage, and trust that it will not be accessed by unauthorised strangers.

Situated in this context, my own research has taken a closer look at the structural and institutional factors influencing patient trust in Internet-based health records. Utilising a survey and interviews, the research looked specifically at Germany – a very suitable environment for this question given the wide range of actors in its health system and the fact that it is often referred to as a “hard-line privacy country”. Germany has struggled for years with the introduction of smart cards linked to centralised Electronic Health Records, not only changing its design features over several iterations, but also battling negative press coverage about data security.

The first element to this question of patient trust is the “who”: that is, does it make a difference whether the health record is maintained by either a medical or a non-medical entity, and whether the entity is public or private? I found that patients clearly expressed a higher trust in medical operators, evidence of a certain “halo effect” surrounding medical professionals and organisations driven by patient faith in their good intentions. This overrode the concern that medical operators might be less adept at securing the data than (for example) most non-medical IT firms. The distinction between public and private operators is much more blurry in patients’ perception. However, there was a sense among the interviewees that a stronger concern about misuse was related to a preference for public entities who would “not intentionally give data to others”, while data theft concerns resulted in a preference for private operators – as opposed to public institutions who might just “shrug their shoulders and finger-point at subordinate levels”.

Equally important to the question of “who” is managing the data may be the “how”: that is, is the patient’s ability to access and control their health-record content perceived as trust-enhancing? While the general finding of this research is that having the opportunity both to access and to control their records helps to build patient trust, an often overlooked (and discomforting) factor is that easy access for the patient may also mean easy access for the rest of the family. In the words of one interviewee: “For example, you have Alzheimer’s disease or dementia. You don’t want everyone around you to know. They will say ‘show us your health record online’, and then talk to doctors about you – just going over your head.” Nevertheless, for most people I surveyed, having access to and control of records was perceived as trust-enhancing.

At the same time, a striking survey finding is how greater access and control of records can be less trust-enhancing for those with lower Internet experience, confidence, and breadth of use: as one older interviewee put it – “I am sceptical because I am not good at these Internet things. My husband can help me, but somehow it is not really worth this effort.” The quote reveals one of the facets of digital divides, and additionally highlights the relevance of life-stage in the discussion. Older participants see the benefits of sharing data (if it means avoiding unnecessary repetition of routine examinations) and are less concerned about outsider access, while younger people are more apprehensive of the risk of medical data falling into the wrong hands. An older participant summarised this very effectively: “If I was 30 years younger and at the beginning of my professional career or my family life, it would be causing more concern for me than now”. Finally, this reinforces the importance of legal regulations and security audits ensuring a general level of protection – even if the patient chooses not to be (or cannot be) directly involved in the management of their data.

Interestingly, the research also uncovered what is known as the certainty trough: not only are those with low online affinity highly suspicious of Internet-based health records – the experts are as well! The more different activities a user engaged in, the higher the suspicion of Internet-based health records. This confirms the notion that with more knowledge and more intense engagement with the Internet, we tend to become more aware of the risks – and lose trust in the technology and what the protections might actually be worth.

Finally, it is clear that the “who” and the “how” are interrelated, as a low degree of trust goes hand in hand with a desire for control. For a generally less trusted operator, access to records is not sufficient to inspire patient trust. While access improves knowledge and may allow for legal steps to change what is stored online, few people make use of this possibility; only direct control of what is stored online helps to compensate for a general suspicion about the operator. It is noteworthy here that there is a discrepancy between how much importance people place on having control and how much they actually use it, but in the end, trust is a subjective concept that doesn’t necessarily reflect actual privacy and security.

The results of this research provide valuable insights for the further development of Internet-based health records. In short: to gain patient trust, the operator should ideally be of a medical nature and should allow the patients to get involved in how their health records are maintained. Moreover, policy initiatives designed to increase the Internet and health literacy of the public are crucial in reaching all parts of the population, as is an underlying legal and regulatory framework within which any Internet-based health record should be embedded.


Read the full paper: Rauer, Ulrike (2012) Patient Trust in Internet-based Health Records: An Analysis Across Operator Types and Levels of Patient Involvement in Germany. Policy and Internet 4 (2).

The challenges of government use of cloud services for public service delivery https://ensr.oii.ox.ac.uk/challenges-government-use-cloud-services-public-service-delivery/ Mon, 24 Feb 2014 08:50:15 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2584
Cloud services are not meant to recognize national frontiers, but to thrive on economies of scale and scope globally — presenting particular challenges to government. Image by NASA Goddard Photo and Video

Ed: You open your recent Policy and Internet article by noting that “the modern treasury of public institutions is where the wealth of public information is stored and processed” … what are the challenges of government use of cloud services?

Kristina: The public sector is a very large user of information technology, but data-handling policies, vendor accreditation and procurement often predate the era of cloud computing. Governments first have to put in place new internal policies to ensure the security and integrity of their information assets residing in the cloud. Through this process governments are discovering that their traditional notions of control are challenged, because cloud services are virtual, dynamic, and operate across borders.

One central concern of those governments that are leading in the public sector’s migration to cloud computing is how to retain unconditional sovereignty over their data — after all, public sector information embodies the past, the present, and the future of a country. The ability to govern presupposes command and control over government information to the extent necessary to deliver public services, protect citizens’ personal data and to ensure the integrity of the state, among other considerations. One could even assert that in today’s interconnected world national sovereignty is conditional upon adequate data sovereignty.

Ed: A basic question: if a country’s health records (in the cloud) temporarily reside on, or are processed on, commercial servers in a different country: who is liable for the integrity and protection of that data, and under whose legal scheme? That is, can a country actually, technically, lose sovereignty over its data?

Kristina: There is always one line of responsibility flowing from the contract with the cloud service provider. However, when these health records cross borders they are effectively governed under a third country’s jurisdiction, where disclosure authorities vis-à-vis the cloud service provider can likely be invoked. In some situations the geographical whereabouts of the public health records are not even that important, because certain countries’ legislation has extra-territorial reach and it suffices that the cloud service provider is under an obligation to turn over data in its custody. In both situations countries’ exclusive sovereignty over public sector information would be contested, and service providers may find themselves in a Catch-22 when they have to decide their legitimate course of action.

Ed: Is there a sense of how many government services are currently hosted “in the cloud”; and have there been any known problems so far about access and jurisdiction?

Kristina: The US has published some targets, but otherwise we have no sense of the magnitude of government cloud computing. It is certainly an ever-growing phenomenon in leading countries: for example, both the US Federal Cloud Computing Strategy and the United Kingdom’s G-Cloud Framework leverage public sector cloud migration with a cloud-first strategy, and they operate government application stores where public authorities can self-provision themselves with cloud-based IT services. Until now, the issues of access and jurisdiction have primarily been discussed in terms of risk (as I showed in my article), with governments adopting strategies to keep their public records within national territory, even if they are residing on a cloud service.

Ed: Is there anything about the cloud that is actually functionally novel, i.e. that calls for new regulation at national or international level, beyond existing data legislation?

Kristina: Cloud services are not meant to recognize national frontiers, but to thrive on economies of scale and scope globally. The legal risks arising from its transnationality won’t be solved by more legislation at the national level; even if this is a pragmatic solution, the resurrection of territoriality in cloud service contracts with the government conflicts with scalability. My article explores various avenues at the international level, for example extending diplomatic immunity, international agreements for cross-border data transfers, and reliance on mutual legal assistance treaties but in my opinion they do not satisfyingly restore a country’s quest for data sovereignty in the cloud context. In the EU a regional approach could be feasible and I am very much drawn by the idea of a European cloud environment where common information assurance principles prevail — also curtailing individual member states’ disclosure authorities.

Ed: As the economies of scale of cloud services kick in, do you think we will see increasing commercialisation of public record storage and processing (with a possible further erosion of national sovereignty)?

Kristina: Where governments have the capability they adopt a differentiated, risk-based approach corresponding to the information’s security classification: data in the public domain or that have low security markings are suitable for cloud services without further restrictions. Data that has medium security markings may still be processed on cloud services but are often confined to the national territory. Beyond this threshold, i.e. for sensitive and classified information, cloud services are not an option, judging from analysis of the emerging practice in the U.S., the UK, Canada and Australia. What we will increasingly see is IT-outsourcing that is labelled “cloud” despite not meeting the specifications of a true cloud service. Some governments are more inclined to introduce dedicated private “clouds” that are not fully scalable, in other words central data centres. For a vast number of countries, including developing ones, the options are further limited because there is no local cloud infrastructure and/or the public sector cannot afford to contract a dedicated government cloud. In this situation I could imagine an increasing reliance on transnational cloud services, with all the attendant pros and cons.
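As a rough illustration of that differentiated, risk-based approach, the decision logic amounts to mapping a record’s security classification to the hosting options considered acceptable. The labels and rules in the short Python sketch below are invented for illustration, not taken from any country’s policy or from the paper.

# Hypothetical sketch of classification-based hosting rules; illustrative only.
from enum import Enum

class Classification(Enum):
    PUBLIC = 1      # public domain or low security markings
    MEDIUM = 2      # medium security markings
    SENSITIVE = 3   # sensitive or classified information

def permitted_hosting(level):
    """Return the hosting option this sketch allows for a classification level."""
    if level is Classification.PUBLIC:
        return "any accredited cloud service, no territorial restriction"
    if level is Classification.MEDIUM:
        return "cloud service confined to national territory"
    return "no cloud service; dedicated government infrastructure only"

print(permitted_hosting(Classification.MEDIUM))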

Ed: How do these sovereignty / jurisdiction / data protection questions relate to the revelations around the NSA’s PRISM surveillance programme?

Kristina: It only confirms that disclosure authorities are extensively used for intelligence gathering and that legal risks have to be taken as seriously as technical vulnerabilities. As a consequence of the Snowden revelations it is quite likely that the sensitivity of governments (as well as private sector organizations) to the impact of foreign jurisdictions will become even more pronounced. For example, there are reports estimating that the lack of trust in US-based cloud services is bound to affect the industry’s growth.

Ed: Could this usher in a whole new industry of ‘guaranteed’ national clouds? That is, how is the industry responding to these worries?

Kristina: This is already happening; in particular, European and Asian players are being very vocal in terms of marketing their regional or national cloud offerings as compatible with specific jurisdiction or national data protection frameworks.

Ed: And finally, who do you think is driving the debate about sovereignty and cloud services: government or industry?

Kristina: In the Western world it is government, with its special security needs and buying power, to which industry is responsive. As a nascent technology, cloud services nonetheless thrive on business with governments, because it opens new markets where in-house IT services previously dominated in the public sector.


Read the full paper: Kristina Irion (2013) Government Cloud Computing and National Data Sovereignty. Policy and Internet 4 (3/4) 40–71.

Kristina Irion was talking to blog editor David Sutcliffe.
