regulation – The Policy and Internet Blog (https://ensr.oii.ox.ac.uk) – Understanding public policy online

Should adverts for social casino games be covered by gambling regulations?
Wed, 24 May 2017
https://ensr.oii.ox.ac.uk/should-adverts-for-social-casino-games-be-covered-by-gambling-regulations/

Social casino gaming, which simulates gambling games on a social platform such as Facebook, is a nascent but rapidly growing industry — social casino game revenues grew 97 percent between 2012 and 2013, reaching a market size of US$3.5 billion by the end of 2015. Unlike gambling played for real money, social casino games generally have no monetary prizes and are free to play, although they may include some optional monetized features. The size of the market and users’ demonstrated interest in gambling-themed activities mean that social casino gamers are an attractive market for many gambling operators, and several large international gambling companies have merged with social casino game operators.

Some operators consider the games to be a source of additional revenue in jurisdictions where online gambling is largely illegal, or a way to attract new customers to a land-based gambling venue. Hybrid models are also emerging, with the potential for tangible rewards for playing social casino games. This merging of gaming and gambling means that many previously established boundaries are becoming blurred, and at many points, the two are indistinguishable.

However, content analysis of games and their advertising can help researchers, industry, and policymakers better understand how the two entertainment forms overlap. In their Policy & Internet article “Gambling Games on Social Platforms: How Do Advertisements for Social Casino Games Target Young Adults?”, Brett Abarbanel, Sally M. Gainsbury, Daniel King, Nerilee Hing, and Paul H. Delfabbro undertake a content analysis of 115 social casino gaming advertisements captured by young adults during their regular Internet use. They find that advertisement imagery typically features images likely to appeal to young adults, with message themes including the glamorization and normalization of gambling. Notably, nearly 90 percent of the advertisements contained no responsible or problem gambling language, despite the gambling-like content.

Gambling advertisements currently face much stricter restrictions on exposure and distribution than do social casino game advertisements, despite the latter containing much gambling-themed content designed to attract consumers. Given the receptivity of young people to messages that encourage gambling, the authors recommend that gaming companies embrace corporate social responsibility standards, including adding warning messages to advertisements for gambling-themed games. They hope that their qualitative research may complement existing quantitative findings, and facilitate discussions about appropriate policies for advertisements for social casino games and other gambling-themed games.

We caught up with Brett to discuss their findings:

Ed.: You say there are no policies related to the advertising of social casino games — why is this? And do you think this will change?

Brett: Social casino games are regulated under general consumer regulations, but there are no specific regulations for these types of games, and they do not fall under gambling regulation. Although several gambling regulatory bodies have considered these games, they are not considered gambling activities because they do not require payment to play and prizes have no monetary value. Where the games include branding for gambling companies or are considered advertising, they may fall under relevant legislation. Currently it is up to individual consumers to consider whether the games are appropriate for them, which includes parents considering their children’s use of the games.

Ed.: Is there work on whether these sorts of games actually encourage gambling behaviour? As opposed to gambling behaviour simply pre-existing — i.e. people are either gamblers or not, susceptible or not.

Brett: We have conducted previous research showing that almost one-fifth of adults who played social casino games had gambled for money as a direct result of these games. Research also found that two-thirds of adolescents who had paid money to play social casino games had gambled directly as a result of these games. This builds on other international research suggesting that there is a pathway between games and gambling. For some people, the games are perceived to be a way to ‘try out’ or practice gambling without money and most are motivated to gamble due to the possibility of winning real money. For some people with gambling problems, the games can trigger the urge to gamble, although for others, the games are used as a way to avoid gambling in an attempt to cut back. The pathway is complicated and needs further specific research, including longitudinal studies.

Ed.: Possibly a stupid question: you say social games are a huge and booming market, despite being basically free to play. Where does the revenue come from?

Brett: Not a stupid question at all! When something is free, of course it makes sense to question where the money comes from. The revenue in these business models comes from advertisements and players. The advertisement revenue model is similar to other revenue models, but the player revenue model, which is based largely on micropayments, is a major component of how these games make money. Players can typically play free, and micropayments are voluntary. However, when they run out of free chips, players have to wait to continue to play, or they can purchase additional chips.

The micropayments can also improve the game experience: for example, to obtain in-game items, as a temporary boost in the game, to add lives/strength/health to an avatar or game session, or to unlock the next stage in the game. In social casino games, micropayments can be made to acquire more virtual chips with which to play the slot game. Our research suggests that only a small fraction of the player base actually makes micropayments, and a smaller fraction of these pay very large amounts. Since many of these games are free to play, but one can pay to advance through the game in certain ways, they have colloquially been referred to as “freemium” games.

Ed.: I guess social media (like Facebook) are a gift to online gambling companies: i.e. being able to target (and A/B test) their adverts to particular population segments? Are there any studies on the intersection of social media, gambling and behavioural data / economics?

Brett: There is a reasonable cross-over between social casino game players and gamblers – our Australian research found 25% of Internet gamblers and 5% of land-based gamblers used social casino games, and US studies show around one-third of social casino gamers visit land-based casinos. Many of the most popular and successful social casino games are owned by companies that also operate gambling, in venues and online. Some casino companies offer social casino games to continue to engage with customers when they are not in the venue, and may offer prizes that can be redeemed in venues. Games may also allow gambling companies to test how popular games will be before they put them in venues. However, as most players do not pay to play social casino games, they may engage with them differently from gambling products.

Ed.: We’ve seen (with the “fake news” debate) social media companies claiming to simply be a conduit to others’ content, not content providers themselves. What do they say in terms of these social games: I’m assuming they would either claim that they aren’t gambling, or that they aren’t responsible for what people use social media for?

Brett: We don’t want to speak for the social media companies themselves, and they appear to leave quite a bit up to the game developers. Advertising standards have become more lax on gambling games – the example we give in our article is Google, who had a strict policy against advertisements for gambling-related content in the Google Play store but in February 2015 began beta testing advertisements for social casino games. In some markets where online gambling is restricted, online gambling sites offer ‘free’ social casino games that link to real money sites as a way to reach these markets.

Ed.: I guess this is just another example of the increasingly attention-demanding, seductive, sexualised, individually targeted, ubiquitous, behaviourally attuned, monetised environment we (and young children) find ourselves in. Do you think we should be paying attention to this trend (e.g. noticing the close link between social gaming and gambling) or do you think we’ll all just muddle along as we’ve always done? Is this disturbing, or simply people doing what they enjoy doing?

Brett: We should certainly be paying attention to this trend, but don’t think the activity of social casino games is disturbing. A big part of the goal here is awareness, followed by conscious action. We would encourage companies to take more care in controlling who accesses their games and to whom their advertisements are targeted. As you note, David, we are in such a highly-targeted, specified state of advertising. As a result, we should, theoretically, be able to avoid marketing games to young kids. Companies should also certainly be mindful of the potential effect of cartoon games. We don’t automatically assign a sneaky, underhanded motive to the industry, but at the same time there is a percentage of the population that is at risk for gambling problems and we don’t want to exacerbate the situation by inadvertently advertising to young people, who are more susceptible to this type of messaging.

Read the full article: Abarbanel, B., Gainsbury, S.M., King, D., Hing, N., and Delfabbro, P.H. (2017) Gambling Games on Social Platforms: How Do Advertisements for Social Casino Games Target Young Adults? Policy & Internet 9 (2). DOI: 10.1002/poi3.135.


Brett Abarbanel was talking to blog editor David Sutcliffe.

New Voluntary Code: Guidance for Sharing Data Between Organisations
Fri, 08 Jan 2016
https://ensr.oii.ox.ac.uk/new-voluntary-code-guidance-for-sharing-data-between-organisations/

Many organisations are coming up with their own internal policies and guidelines for data sharing. However, for data sharing between organisations to be straightforward, there needs to be a common understanding of basic policy and practice. During her time as an OII Visiting Associate, Alison Holt developed a pragmatic solution in the form of a Voluntary Code, anchored in the developing ISO standards for the Governance of Data. She discusses the voluntary code, and the need to provide urgent advice to organisations struggling with policy for sharing data.

Collecting, storing and distributing digital data is significantly easier and cheaper now than ever before, in line with predictions from Moore, Kryder and Gilder. Organisations are incentivised to collect large volumes of data in the hope of unleashing new business opportunities, or maybe even new businesses. Consider the likes of Uber, Netflix, and Airbnb, and the other data mongers who have built services based solely on digital assets.

The use of this new abundant data will continue to disrupt traditional business models for years to come, and there is no doubt that these large data volumes can provide value. However, they also bring associated risks (such as unplanned disclosure and hacks) and they come with constraints (for example in the form of privacy or data protection legislation). Hardly a week goes by without a data breach hitting the headlines. Even if your telecommunications provider didn’t inadvertently share your bank account and sort code with hackers, and your child wasn’t one of the hundreds of thousands of children whose birthdays, names, and photos were exposed by a smart toy company, you might still be wondering exactly how your data is being looked after by the banks, schools, clinics, utility companies, local authorities and government departments that are so quick to collect your digital details.

Then there are the companies who have invited you to sign away the rights to your data and possibly your privacy too – the ones that ask you to sign the Terms and Conditions for access to a particular service (such as a music or online shopping service) or have asked you for access to your photos. And possibly you are one of the “worried well” who wear or carry a device that collects your health data and sends it back to storage in a faraway country, for analysis.

So unless you live in a lead-lined concrete bunker without any access to internet connected devices, and you don’t have the need to pass by webcams or sensors, or use public transport or public services; then your data is being collected and shared. And for the majority of the time, you benefit from this enormously. The bus stop tells you exactly when the next bus is coming, you have easy access to services and entertainment fitted very well to your needs, and you can do most of your bank and utility transactions online in the peace and quiet of your own home. Beyond you as an individual, there are organisations “out there” sharing your data to provide you better healthcare, education, smarter city services and secure and efficient financial services, and generally matching the demand for services with the people needing them.

So we most likely all have data that is being shared and it is generally in our interest to share it, but how can we trust the organisations responsible for sharing our data? As an organisation, how can I know that my partner and supplier organisations are taking care of my client and product information?

Organisations taking these issues seriously are coming up with their own internal policies and guidelines. However, for data sharing between organisations to be straightforward, there needs to be a common understanding of basic policy and practice. During my time as a visiting associate at the Oxford Internet Institute, University of Oxford, I have developed a pragmatic solution in the form of a Voluntary Code. The Code has been produced using the guidelines for voluntary code development produced by the Office of Consumer Affairs, Industry Canada. More importantly, the Code is anchored in the developing ISO standards for the Governance of Data (the 38505 series). These standards apply the governance principles and model from the 38500 standard and introduce the concept of a data accountability map, highlighting six focus areas for a governing body to apply governance. The early stage standard suggests considering the aspects of Value, Risk and Constraint for each area, to determine what practice and policy should be applied to maximise the value from organisational data, whilst applying constraints as set by legislation and local policy, and minimising risk.

I am Head of the New Zealand delegation to the ISO group developing IT Service Management and IT Governance standards, SC40, and am leading the development of the 38505 series of Governance of Data standards, working with a talented editorial team of industry and standards experts from Australia, China and the Netherlands. I am confident that the robust ISO consensus-led process involving subject matter experts from around the world, will result in the publication of best practice guidance for the governance of data, presented in a format that will have relevance and acceptance internationally.

In the meantime, however, I see a need to provide urgent advice to organisations struggling with policy for sharing data. I have used my time at Oxford to interview policy, ethics, smart city, open data, health informatics, education, cyber security and social science experts and users, owners and curators of large data sets, and have come up with a “Voluntary Code for Data Sharing”. The Code takes three areas from the data accountability map in the developing ISO standard 38505-1; namely Collect, Store, Distribute, and applies the aspects of Value, Risk and Constraint to provide seven maxims for sharing data. To assist with adoption and compliance, the Code provides references to best practice and examples. As the ISO standards for the Governance of Data develop, the Code will be updated. New examples of good practice will be added as they come to light.
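The structure described above — three areas from the ISO 38505-1 data accountability map, each examined against the aspects of Value, Risk and Constraint — can be sketched as a simple matrix. This is purely an illustrative sketch: the area and aspect names come from the post itself, but the prompt questions below are hypothetical placeholders, not the Code’s actual seven maxims.

```python
# Illustrative sketch of the Voluntary Code's underlying structure:
# three data life-cycle areas crossed with three governance aspects.
AREAS = ("Collect", "Store", "Distribute")
ASPECTS = ("Value", "Risk", "Constraint")

# Each (area, aspect) pair prompts a governance question; the Code distils
# the answers into seven maxims. These questions are placeholder examples.
accountability_map = {
    (area, aspect): f"What {aspect.lower()} arises when we {area.lower()} shared data?"
    for area in AREAS
    for aspect in ASPECTS
}

print(len(accountability_map))  # 9 area/aspect pairs to consider
```

Working through each of the nine cells in turn is one systematic way for a governing body to check that no combination of life-cycle stage and governance concern has been overlooked.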

[A permanent home for the voluntary code is currently being organised; please email me in the meantime if you are interested in it: Alison.holt@longitude174.com]

The Code is deliberately short and succinct, but it does provide links for those who need to read more to understand the underpinning practices and standards, and for those tasked with implementing organisational data policy and practice. It cannot guarantee good outcomes: with new security threats arising daily, nobody can fully guarantee the safety of your information. However, if you deal with an organisation that is compliant with the Voluntary Code, then you can at least have assurance that the organisation has considered how it is using your data now and how it might want to reuse it in the future, how and where your data will be stored, and finally how your data will be distributed or discarded. And that’s a good start!


Alison Holt was an OII Academic Visitor in late 2015. She is an internationally acclaimed expert in the Governance of Information Technology and Data, heading up the New Zealand delegations to the international standards committees for IT Governance and Service Management (SC40) and Software and Systems Engineering (SC7). The British Computer Society published Alison’s first book on the Governance of IT in 2013.

Does a market-approach to online privacy protection result in better protection for users?
Wed, 25 Feb 2015
https://ensr.oii.ox.ac.uk/does-a-market-approach-to-online-privacy-protection-result-in-better-protection-for-users/

Ed: You examined the voluntary provision by commercial sites of information privacy protection and control under the self-regulatory policy of the U.S. Federal Trade Commission (FTC). In brief, what did you find?

Yong Jin: First, because we rely on the Internet to perform almost all types of transactions, how personal privacy is protected is perhaps one of the most important issues we face in this digital age. There are many important findings: the most significant is that the more popular sites did not necessarily provide better privacy control features for users than sites that were randomly selected. This is surprising, because one might expect “the more popular, the better privacy protection” — a sort of marketplace magic that automatically solves the issue of personal privacy online. This was not the case at all: the popular sites, despite having more resources, did not provide better privacy protection. Of course, the Internet in general is a malleable medium. This means that commercial sites can design, modify, or easily manipulate user interfaces to maximize the ease with which users can protect their personal privacy. The fact that this is not really happening on commercial websites in the U.S. is not only alarming, but also suggests that commercial forces may not have a strong incentive to provide privacy protection.

Ed: Your sample included websites oriented toward young users and sensitive data relating to health and finance: what did you find for them?

Yong Jin: Because the sample size for these websites was limited, caution is needed in interpreting the results. But what is clear is that websites dealing with health or financial data did not seem to be any better at providing privacy protection. To me, this should raise enormous concerns among those who use the Internet to seek health information or manage financial data. The finding should also inform and urge policymakers to ask whether the current non-intervention policy (regarding commercial websites in the U.S.) is effective, when no consideration is given to the different privacy needs of different commercial sectors.

Ed: How do your findings compare with the first investigation into these matters by the FTC in 1998?

Yong Jin: This is a very interesting question. In fact, at least as far as the findings from this study are concerned, it seems that no clear improvement has been made in almost two decades. Of course, the picture is somewhat complicated. On the one hand, we see (on the surface) that websites have a lot more interactive features. But this does not necessarily mean improvement, because when it comes to actually informing users of what features are available for their privacy control and protection, they still tend to perform poorly. Note that today’s privacy policies are longer and are likely to carry more pages and information, which makes it even more difficult for users to understand what options they do have. I think informing people about what they can actually do is harder, but is getting more important in today’s online environment.

Ed: Is this just another example of a US market-led vs European regulation-led approach to a particular problem? Or is the situation more complicated?

Yong Jin: The answer is yes and no. Yes, because a US market-led approach clearly presents no strong statutory ground to mandate privacy protection on commercial websites. However, the answer is also no: even in the EU there is no regulatory mandate for websites to have certain interface protections concerning how users should be informed about their personal data, and interact with websites to control its use. The difference lies more in the fundamental principle of the “opt-in” EU approach. Although “opt-in” is stronger than the “opt-out” approach in the U.S., it does not require websites to have certain interface-design aspects that are optimized for users’ data control. In other words, to me, the reality of EU regulation (despite its robust policy approach) will not necessarily be rosier than the U.S., because commercial websites in the EU context also operate under the same incentive of personal data collection and use. Ultimately, this is an empirical question that will require further studies. Interestingly, the next frontier of this debate will be privacy on mobile platforms – and useful information concerning this can be found at the OII’s project to develop ethical privacy guidelines for mobile connectivity measurements.

Ed: Awareness of issues around personal data protection is pretty prominent in Europe — witness the recent European Court of Justice ruling about the ‘Right to Forget’ — how prominent is this awareness in the States? Who’s interested in / pushing / discussing these issues?

Yong Jin: The general public in the U.S. has an enormous concern for personal data privacy, particularly since the 2013 Edward Snowden revelations of extensive government surveillance activities. Yet my sense is that public awareness concerning data collection and surveillance by commercial companies has not yet reached the same level. Certainly, issues such as the “Right to Forget” are being discussed among only a small circle of scholars, website operators, journalists, and policymakers, and the general public mostly remains left out of this discussion. In fact, a number of U.S. scholars have recently begun to weigh the pros and cons of a “Right to Forget” in terms of the public’s right to know vs the individual’s right to privacy. Given the strong tradition of freedom of speech, however, I highly doubt that U.S. policymakers will have a serious interest in pushing a similar type of approach in the foreseeable future.

My own work on privacy awareness, digital literacy, and behavior online suggests that public interest and demand for strong legislation such as a “Right to Forget” is a long shot, especially in the context of commercial websites.

Ed: Given privacy policies are notoriously awful to deal with (and are therefore generally unread) — what is the solution? You say the situation doesn’t seem to have improved in ten years, and that some aspects — such as readability of policies — might actually have become worse: is this just ‘the way things are always going to be’, or are privacy policies something that realistically can and should be addressed across the board, not just for a few sites?

Yong Jin: A great question, and I see no easy answer! I actually pondered a similar question when I conducted this study. I wonder: “Are there any viable solutions for online privacy protection when commercial websites are so desperate to use personal data?” My short answer is No. And I do think the problem will persist if the current regulatory contours in the U.S. continue. This means that there is a need for appropriate policy intervention that is not entirely dependent on market-based solutions.

My longer answer would be that realistically, to solve the notoriously difficult privacy problems on the Internet, we will need multiple approaches — a combination of appropriate regulatory forces from all the entities involved: regulatory mandates (government), user awareness and literacy (public), commercial firms and websites (market), and interface design (technology). For instance, it is plausible that a certain level of readability could be required of the policy statements of all websites targeting children or teenagers. Of course, this will only function alongside appropriate organizational behaviour, users’ awareness of and interest in privacy, etc. In my article I put particular emphasis on the role of the government (particularly in the U.S.), where the industry often ‘captures’ the regulatory agencies. The issue is quite complicated because, for privacy protection, it is not just the FTC but also Congress that should enact legislation to empower the FTC in its jurisdiction. The apparent lack of improvement in the years since the FTC took over online privacy regulation in the mid 1990s reflects this gridlock in legislative dynamics — as much as it reflects the commercial imperative for personal data collection and use.
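As one concrete illustration of how a readability requirement for policy statements could be operationalised (my sketch only — neither the article nor the FTC specifies any such mechanism), a regulator or auditor could score privacy policies with a standard readability formula such as Flesch Reading Ease and flag those falling below a threshold:

```python
import re

def syllables(word: str) -> int:
    """Very rough syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Standard Flesch Reading Ease formula: higher scores mean easier text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syll = sum(syllables(w) for w in words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syll / n)

# A plain-language statement versus dense legalese (invented examples):
simple = "We keep your data safe. We never sell it."
legalese = ("Notwithstanding the aforementioned provisions, personally "
            "identifiable information may be disseminated to affiliated "
            "third-party organizations.")

assert flesch_reading_ease(simple) > flesch_reading_ease(legalese)
```

A real audit would use a validated syllable counter and several formulas, but even this crude check cleanly separates plain-language statements from the kind of dense legalese that Park’s interview suggests users cannot realistically parse.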

I made a similar argument for multiple approaches to solving privacy problems in my article Offline Status, Online Status: Reproduction of Social Categories in Personal Information Skill and Knowledge; related, excellent discussions can be found in Information Privacy in Cyberspace Transactions (by Jerry Kang) and Exploring Identity and Identification in Cyberspace (by Oscar Gandy).

Read the full article: Park, Y.J. (2014) A Broken System of Self-Regulation of Privacy Online? Surveillance, Control, and Limits of User Features in U.S. Websites. Policy & Internet 6 (4) 360-376.


Yong Jin Park was talking to blog editor David Sutcliffe.

Yong Jin Park is an Associate Professor at the School of Communications, Howard University. His research interests center on social and policy implications of new technologies; current projects examine various dimensions of digital privacy.

Will digital innovation disintermediate banking — and can regulatory frameworks keep up?
Thu, 19 Feb 2015
https://ensr.oii.ox.ac.uk/will-digital-innovation-disintermediate-banking-and-can-regulatory-frameworks-keep-up/
Many of Europe’s economies are hampered by a waning number of innovations, partially attributable to the European financial system’s aversion to funding innovative enterprises and initiatives. Image by MPD01605.
Innovation doesn’t just fall from the sky. It’s not distributed proportionately or randomly around the world or within countries, or found disproportionately where there is the least regulation, or in exact linear correlation with the percentage of GDP spent on R&D. Innovation arises in cities and countries, and perhaps most importantly of all, in the greatest proportion in ecosystems or clusters. Many of Europe’s economies are hampered by a waning number of innovations, partially attributable to the European financial system’s aversion to funding innovative enterprises and initiatives. Specifically, Europe’s innovation finance ecosystem lacks the necessary scale, plurality, and appetite for risk to drive investments in long-term initiatives aiming to produce a disruptive new technology. Such long-term investments are taking place more in the rising economies of Asia than in Europe.

While these problems could be addressed by new approaches and technologies for financing dynamism in Europe’s economies, financing of (potentially risky) innovation could also be held back by financial regulation that focuses on stability, avoiding forum shopping (i.e., looking for the most permissive regulatory environment), and preventing fraud, to the exclusion of other interests, particularly innovation and renewal. But the role of finance in enabling the development and implementation of new ideas is vital — an economy’s dynamism depends on innovative competitors challenging, and if successful, replacing complacent players in the markets.

However, newcomers obviously need capital to grow. As a reaction to the markets having priced risk too low before the financial crisis, risk is now being priced too high in Europe, starving the innovation efforts of private financing at a time when much public funding has suffered from austerity measures. Of course, complementary (non-bank) sources of finance can also help fund entrepreneurship, and without that petrol of money, the engine of the new technology economy will likely stall.

The Internet has made it possible to fund innovation in new ways like crowdfunding — an innovation in finance itself — and there is no reason to think that financial institutions should be immune to disruptive innovation produced by new entrants that offer completely novel ways of saving, insuring, loaning, transferring and investing money. New approaches such as crowdfunding and other financial technology (aka “FinTech”) initiatives could provide depth and a plurality of perspectives, in order to foster innovation in financial services and in the European economy as a whole.

The time has come to integrate these financial technologies into the overall financial frameworks in a manner that does not neuter their creativity, or lower their potential to revitalize the economy. There are potential synergies with macro-prudential policies focused on mitigating systemic risk and ensuring the stability of financial systems. These platforms have great potential for cross-border lending and investment and could help to remedy the retreat of bank capital behind national borders since the financial crisis. It is time for a new perspective grounded in an “innovation-friendly” philosophy and regulatory approach to emerge.

Crowdfunding is a newcomer to the financial industry, and as such, actions (such as complex and burdensome regulatory frameworks or high levels of guaranteed compensation for losses) that could close it down or raise high barriers of entry should be avoided. Competition in the interests of the consumer and of entrepreneurs looking for funding should be encouraged. Regulators should be ready to step in if abuses do, or threaten to, arise while leaving space for new ideas around crowdfunding to gain traction rapidly, without being overburdened by regulatory requirements at an early stage.

The interests of both “financing innovation” and “innovation in the financial sector” also coincide in the FinTech entrepreneurial community. Schumpeter wrote in 1942: “[the] process of Creative Destruction is the essential fact about capitalism. It is what capitalism consists in and what every capitalist concern has got to live in.” An economy’s dynamism depends on innovative competitors challenging, and if successful, taking the place of complacent players in the markets. Keeping with the theme of Schumpeterian creative destruction, the financial sector is one seen by banking sector analysts and commentators as being particularly ripe for disruptive innovation, given its current profits and lax competition. Technology-driven disintermediation of many financial services is on the cards, for example, in financial advice, lending, investing, trading, virtual currencies and risk management.

The UK’s Financial Conduct Authority’s regulatory dialogues with FinTech developers to provide legal clarity on the status of their new initiatives are an example of good practice, as regulation in this highly monitored sector is potentially a serious barrier to entry and new innovation. The FCA also proactively supports innovation through Project Innovate, an initiative to assist both start-ups and established businesses in implementing innovative ideas in the financial services markets through an Incubator and Innovation Hub.

By its nature, FinTech is a sector that can both benefit from and contribute to the EU’s Digital Single Market, and make Europe a global leader in this field. In evaluating possible future FinTech regulation, we need to ensure an optimal regulatory framework and specific rules. The innovation principle I discuss in my article should be part of an approach ensuring not only that regulation is clear and proportional — so that innovators can easily comply — but also ensuring that we are ready, when justified, to adapt regulation to enable innovations. Furthermore, any regulatory approaches should be “future proofed” and should not lock in today’s existing technologies, business models or processes.

Read the full article: Zilgalvis, P. (2014) The Need for an Innovation Principle in Regulatory Impact Assessment: The Case of Finance and Innovation in Europe. Policy & Internet 6 (4): 377–392.


Pēteris Zilgalvis, J.D. is a Senior Member of St Antony’s College, University of Oxford, and an Associate of its Political Economy of Financial Markets Programme. In 2013-14 he was a Senior EU Fellow at St Antony’s. He is also currently Head of Unit for eHealth and Well Being, DG CONNECT, European Commission.

Designing Internet technologies for the public good https://ensr.oii.ox.ac.uk/designing-internet-technologies-for-the-public-good/ https://ensr.oii.ox.ac.uk/designing-internet-technologies-for-the-public-good/#comments Wed, 08 Oct 2014 11:48:59 +0000 http://blogs.oii.ox.ac.uk/policy/?p=2887
MEPs failed to support a Green call to protect Edward Snowden as a whistleblower, in order to allow him to give his testimony to the European Parliament in March. Image by greensefa.
Computers have developed enormously since the Second World War: alongside a rough doubling of computer power every two years, communications bandwidth and storage capacity have grown just as quickly. Computers can now store much more personal data, process it much faster, and rapidly share it across networks.

Data is collected about us as we interact with digital technology, directly and via organisations. Many people volunteer data to social networking sites, and sensors – in smartphones, CCTV cameras, and “Internet of Things” objects – are making the physical world as trackable as the virtual. People are very often unaware of how much data is gathered about them – let alone the purposes for which it can be used. Also, most privacy risks are highly probabilistic, cumulative, and difficult to calculate. A student sharing a photo today might not be thinking about a future interview panel; or that the heart rate data shared from a fitness gadget might affect future decisions by insurance and financial services (Brown 2014).

Rather than organisations waiting for something to go wrong, then spending large amounts of time and money trying (and often failing) to fix privacy problems, computer scientists have been developing methods for designing privacy directly into new technologies and systems (Spiekermann and Cranor 2009). One of the most important principles is data minimization; that is, limiting the collection of personal data to that needed to provide a service – rather than storing everything that can be conveniently retrieved. This limits the impact of data losses and breaches, for example by corrupt staff with authorised access to data – a practice that the UK Information Commissioner’s Office (2006) has shown to be widespread.
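The data minimization principle described above can be made concrete with a small sketch. This is a minimal illustration only, not any organisation's actual implementation, and the field names (`email`, `delivery_postcode`, and so on) are hypothetical: the service keeps a whitelist of the fields it genuinely needs, and discards everything else before storage.

```python
# Minimal illustration of data minimization: store only the fields a
# service actually needs to operate, rather than everything submitted.
# All field names here are hypothetical.

REQUIRED_FIELDS = {"email", "delivery_postcode"}  # needed to provide the service

def minimize(submitted: dict) -> dict:
    """Return only whitelisted fields; drop everything else before storing."""
    return {k: v for k, v in submitted.items() if k in REQUIRED_FIELDS}

record = minimize({
    "email": "user@example.org",
    "delivery_postcode": "OX1 3JS",
    "date_of_birth": "1990-01-01",   # convenient to retain, but not needed -> dropped
    "browsing_history": ["..."],     # likewise dropped
})
print(sorted(record))  # only the required fields survive
```

Because the extra fields are never stored, a later breach or a corrupt insider can only ever expose the minimal record.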

Privacy by design also protects against function creep (Gürses et al. 2011). When an organisation invests significant resources to collect personal data for one reason, it can be very tempting to use it for other purposes. While this is limited in the EU by data protection law, government agencies are in a good position to push for changes to national laws if they wish, bypassing such “purpose limitations”. Nor do these rules tend to apply to intelligence agencies.

Another key aspect of putting users in control of their personal data is making sure they know what data is being collected, how it is being used – and ideally being asked for their consent. There have been some interesting experiments with privacy interfaces, for example helping smartphone users understand who is asking for their location data, and what data has been recently shared with whom.

Smartphones have enough storage and computing capacity to do some tasks locally, such as showing users adverts relevant to their known interests, without sharing any personal data with third parties such as advertisers (Haddadi et al. 2011). This kind of user-controlled data storage and processing has all kinds of applications – for example, with smart electricity meters (Danezis et al. 2013), and congestion charging for roads (Balasch et al. 2010).
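The on-device advertising idea can be sketched in a few lines. This is a simplified illustration in the spirit of the approach described above, not the design of any real system: every handset downloads the same generic ad catalogue, matches it against interests stored only on the device, and the interests are never uploaded. The ad identifiers and topic sets are invented for the example.

```python
# Sketch of on-device ad selection: the handset fetches a generic ad
# catalogue (identical for all users), ranks it against locally stored
# interests, and never sends those interests anywhere. Illustrative data only.

ad_catalogue = [  # downloaded identically by every device
    {"id": "a1", "topics": {"running", "fitness"}},
    {"id": "a2", "topics": {"cooking"}},
    {"id": "a3", "topics": {"travel", "fitness"}},
]

local_interests = {"fitness", "travel"}  # stays on the device

def pick_ad(catalogue, interests):
    """Rank ads by topic overlap, computed entirely on the handset."""
    return max(catalogue, key=lambda ad: len(ad["topics"] & interests))

chosen = pick_ad(ad_catalogue, local_interests)
print(chosen["id"])  # the selection happens locally; interests are never transmitted
```

The advertiser learns at most which ad was shown, not why — the profiling data never leaves the user's control.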

What broader lessons can be drawn about shaping technologies for the public good? What is the public good, and who gets to define it? One option is to look at opinion polling about public concerns and values over long periods of time. The European Commission’s Eurobarometer polls reveal that in most European countries (including the UK), people have had significant concerns about data privacy for decades.

A more fundamental view of core social values can be found at the national level in constitutions, and between nations in human rights treaties. As well as the protection of private life and correspondence in the European Convention on Human Rights’ Article 8, the freedom of thought, expression, association and assembly rights in Articles 9-11 (and their equivalents in the US Bill of Rights, and the International Covenant on Civil and Political Rights) are also relevant.

This national and international law restricts how states use technology to infringe human rights – even for national security purposes. There are several US legal challenges to the constitutionality of NSA communications surveillance, with a federal court in Washington DC finding that bulk access to phone records is against the Fourth Amendment [1] (but another court in New York finding the opposite [2]). The UK campaign groups Big Brother Watch, Open Rights Group, and English PEN have taken a case to the European Court of Human Rights, arguing that UK law in this regard is incompatible with the Human Rights Convention.

Can technology development be shaped more broadly to reflect such constitutional values? One of the best-known attempts is the European Union’s data protection framework. Privacy is a core European political value, not least because of the horrors of the Nazi and Communist regimes of the 20th century. Germany, France and Sweden all developed data protection laws in the 1970s in response to the development of automated systems for processing personal data, followed by most other European countries. The EU’s Data Protection Directive (95/46/EC) harmonises these laws, and has provisions that encourage organisations to use technical measures to protect personal data.

An update of this Directive, which the European Parliament has been debating over the last year, more explicitly includes this type of regulation by technology. Under this General Data Protection Regulation, organisations processing personal data will have to implement appropriate technical measures to protect the rights it establishes. By default, organisations should only collect the minimum personal data they need, and allow individuals to control the distribution of their personal data. The Regulation would also require companies to make it easier for users to download all of their data, so that it could be uploaded to a competitor service (for example, one with better data protection) – bringing market pressure to bear (Brown and Marsden 2013).

This type of technology regulation is not uncontroversial. The European Commissioner responsible until July for the Data Protection Regulation, Viviane Reding, said that she had seen unprecedented and “absolutely fierce” lobbying against some of its provisions. Legislators would clearly be foolish to try and micro-manage the development of new technology. But the EU’s principles-based approach to privacy has been internationally influential, with over 100 countries now having adopted the Data Protection Directive or similar laws (Greenleaf 2014).

If the EU can find the right balance in its Regulation, it has the opportunity to set the new global standard for privacy-protective technologies – a very significant opportunity indeed in the global marketplace.

[1] Klayman v. Obama, 2013 WL 6571596 (D.D.C. 2013)

[2] ACLU v. Clapper, No. 13-3994 (S.D. New York December 28, 2013)

References

Balasch, J., Rial, A., Troncoso, C., Preneel, B., Verbauwhede, I. and Geuens, C. (2010) PrETP: Privacy-preserving electronic toll pricing. 19th USENIX Security Symposium, pp. 63–78.

Brown, I. (2014) The economics of privacy, data protection and surveillance. In J.M. Bauer and M. Latzer (eds.) Research Handbook on the Economics of the Internet. Cheltenham: Edward Elgar.

Brown, I. and Marsden, C. (2013) Regulating Code: Good Governance and Better Regulation in the Information Age. Cambridge, MA: MIT Press.

Danezis, G., Fournet, C., Kohlweiss, M. and Zanella-Beguelin, S. (2013) Smart Meter Aggregation via Secret-Sharing. ACM Smart Energy Grid Security Workshop.

Greenleaf, G. (2014) Sheherezade and the 101 data privacy laws: Origins, significance and global trajectories. Journal of Law, Information & Science.

Gürses, S., Troncoso, C. and Diaz, C. (2011) Engineering Privacy by Design. Computers, Privacy & Data Protection.

Haddadi, H, Hui, P., Henderson, T. and Brown, I. (2011) Targeted Advertising on the Handset: Privacy and Security Challenges. In Müller, J., Alt, F., Michelis, D. (eds) Pervasive Advertising. Heidelberg: Springer, pp. 119-137.

Information Commissioner’s Office (2006) What price privacy? HC 1056.

Spiekermann, S. and Cranor, L.F. (2009) Engineering Privacy. IEEE Transactions on Software Engineering 35 (1).


Read the full article: Keeping our secrets? Designing Internet technologies for the public good, European Human Rights Law Review 4: 369-377. This article is adapted from Ian Brown’s 2014 Oxford London Lecture, given at Church House, Westminster, on 18 March 2014, supported by Oxford University’s Romanes fund.

Professor Ian Brown is Associate Director of Oxford University’s Cyber Security Centre and Senior Research Fellow at the Oxford Internet Institute. His research is focused on information security, privacy-enhancing technologies, and Internet regulation.

The complicated relationship between Chinese Internet users and their government https://ensr.oii.ox.ac.uk/the-complicated-relationship-between-chinese-internet-users-and-their-government/ Thu, 01 Aug 2013 06:28:24 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1827 David: For our research, we surveyed postgraduate students from all over China who had come to Shanghai to study. We asked them five questions to which they provided mostly rather lengthy answers. Despite them being young university students and being very active online, their answers managed to surprise us. Notably, the young Chinese who took part in our research felt very ambivalent about the Internet and its supposed benefits for individual people in China. They appreciated the greater freedom the Internet offered when compared to offline China, but were very wary of others abusing this freedom to their detriment.

Ed: In your paper you note that the opinions of many young people closely mirrored those of the government’s statements about the Internet — in what way?

David: In 2010 the government published a White Paper on the Internet in China in which they argued that the main uses of the Internet were for obtaining information, and for communicating with others. In contrast to Euro-American discourses around the Internet as a ‘force for democracy’, the students’ answers to our questions agreed with the evaluation of the government and did not see the Internet as a place to begin organising politically. The main reason for this — in my opinion — is that young Chinese are not used to discussing ‘politics’, and are mostly focused on pursuing the ‘Chinese dream’: good job, large flat or house, nice car, suitable spouse; usually in that order.

Ed: The Chinese Internet has usually been discussed in the West as a ‘force for democracy’ — leading to the inevitable relinquishing of control by the Chinese Communist Party. Is this viewpoint hopelessly naive?

David: Not naive as such, but both deterministic and limited, as it assumes that the introduction of technology can only have one ‘built-in’ outcome, thus ignoring human agency, and as it pretends that the Chinese Communist Party does not use technology at all. Given the intense involvement with the Internet of Party and government offices, as well as of individual Party members and government officials, it makes little sense to talk about ‘the Party’ and ‘the Internet’ as unconnected entities. Compared to governments in Europe or America, the Chinese Communist Party and the Chinese government have embraced the Internet and treated it as a real and valid communication channel between citizens and government/Party at all levels.

Ed: Chinese citizens are being encouraged by the government to engage and complain online, eg to expose inefficiency and corruption. Is the Internet just a space to blow off steam, or is it really capable of ‘changing’ Chinese society, as many have assumed?

David: This is mostly a matter of perspective and expectations. The Internet has NOT changed the system in China, nor is it likely to. In all likelihood, the Internet is bolstering the legitimacy and the control of the Chinese Communist Party over China. However, in many specific instances of citizen unhappiness and unrest, the Internet has proved a powerful channel of communication for the people to achieve their goals, as the authorities have reacted to online protests and supported the demands of citizens. This is a genuine change and empowerment of the people, though episodic and local, not global.

Ed: Why do you think your respondents were so accepting (and welcoming) of government control of the Internet in China: is this mainly due to government efforts to manage online opinion, or something else?

David: I think this is a reflex response fairly similar to what has happened elsewhere as well. If, for example, children manage to access porn sites, or an adult manages to groom several children over the Internet, the mass media and the parents of the children call for ‘government’ to protect the children. This abrogation of power and shifting of responsibility to ‘the government’ by individuals — in the example by parents, in our study by young Chinese — is fairly widespread, if deplorable. Ultimately this demand for government ‘protection’ leads to what I would consider excessive government surveillance and control (and regulation) of online spaces in the name of ‘protection’, and to the public’s acquiescence in the policing of cyberspace. In China, this takes the form of a widespread (resigned) acceptance of government censorship; in the UK it led to the acceptance of GCHQ’s involvement in Prism, or the sentencing of Deyka Ayan Hassan or Liam Stacey, which have turned the UK into the only country in the world in which people have been arrested for posting single, offensive posts on microblogs.

Ed: How does the central Government manage and control opinion online?

David: There is no unified system of government control over the Internet in China. Instead, there are many groups and institutions at all levels from central to local with overlapping areas of responsibility in China who are all exerting an influence on Chinese cyberspaces. There are direct posts by government or Party officials, posts by ‘famous’ people in support of government decisions or policies, paid, ‘hidden’ posters or even people sympathetic to the government. China’s notorious online celebrity Han Han once pointed out that the term ‘the Communist Party’ really means a population group of over 300 million people connected to someone who is an actual Party member.

In addition to pro-government postings, there are many different forms of censorship that try to prevent unacceptable posts. The exact definition of ‘unacceptable’ changes from time to time and even from location to location, though. In Beijing, around October 1, the Chinese National Day, many more websites are inaccessible than, for example, in Shenzhen during April. Different government or Party groups also add different terms to the list of ‘unacceptable’ topics (or remove them), which contributes to the flexibility of the censorship system.

As a result of the often unpredictable ‘current’ limits of censorship, many Internet companies, forum and site managers, as well as individual Internet users add their own ‘self-censorship’ to the mix to ensure their own uninterrupted presence online. This ‘self-censorship’ is often stricter than existing government or Party regulations, so as not to even test the limits of the possible.

Ed: Despite the constant encouragement / admonishment of the government that citizens should report and discuss their problems online; do you think this is a clever (ie safe) thing for citizens to do? Are people pretty clever about negotiating their way online?

David: If it looks like a duck, moves like a duck, talks like a duck … is it a duck? There has been a lot of evidence over the years (and many academic articles) that demonstrate the government’s willingness to listen to criticism online without punishing the posters. People do get punished if they stray into ‘definitely illegal’ territory, e.g. promoting independence for parts of China, or questioning the right of the Communist Party to govern China, but so far people have been free to express their criticism of specific government actions online, and have received support from the authorities for their complaints.

Just to note briefly; one underlying issue here is the definition of ‘politics’ and ‘power’. Following Foucault, in Europe and America ‘everything’ is political, and ‘everything’ is a question of power. In China, there is a difference between ‘political’ issues, which are the responsibility of the Communist Party, and ‘social’ issues, which can be discussed (and complained about) by anybody. It might be worth exploring this difference of definitions without a priori acceptance of the Foucauldian position as ‘correct’.

Ed: There’s a lot of emphasis on using eg social media to expose corrupt officials and hold them to account; is there a similar emphasis on finding and rewarding ‘good’ officials? Or of officials using online public opinion to further their own reputations and careers? How cynical is the online public?

David: The online public is very cynical, and getting ever more so (which is seen as a problem by the government as well). The emphasis on ‘bad’ officials is fairly ‘normal’, though, as ‘good’ officials are not ‘newsworthy’. In the Chinese context there is the additional problem that socialist governments like to promote ‘model workers’, ‘model units’, etc. which would make the praising of individual ‘good’ officials by Internet users highly suspect. Other Internet users would simply assume the posters to be paid ‘hidden’ posters for the government or the Party.

Ed: Do you think (on balance) that the Internet has brought more benefits (and power) to the Chinese Government or new problems and worries?

David: I think the Internet has changed many things for many people worldwide. Limiting the debate on the Internet to the dichotomies of government vs Internet, empowered netizens vs disenfranchised Luddites, online power vs wasting time online, etc. is highly problematic. The open engagement with the Internet by government (and Party) authorities has been greater in China than elsewhere; in my view, the Chinese authorities have reacted much faster, and ‘better’ to the Internet than authorities elsewhere. As the so-called ‘revelations’ of the past few months have shown, governments everywhere have tried and are trying to control and use Internet technologies in pursuit of power.

Although I personally would prefer the Internet to be a ‘free’ and ‘independent’ place, I realise that this is a utopian dream given the political and economic benefits and possibilities of the Internet. Given the inevitability of government controls, though, I prefer the open control exercised by Chinese authorities to the hypocrisy of European and American governments, even if the Chinese controls (apparently) exceed those of other governments.


Dr David Herold is an Assistant Professor of Sociology at Hong Kong Polytechnic University, where he researches Chinese culture and contemporary PRC society, China’s relationship with other countries, and Chinese cyberspace and online society. His paper Captive Artists: Chinese University Students Talk about the Internet was presented at “China and the New Internet World”, International Communication Association (ICA) Preconference, Oxford Internet Institute, University of Oxford, June 2013.

David Herold was talking to blog editor David Sutcliffe.

Staying free in a world of persuasive technologies https://ensr.oii.ox.ac.uk/staying-free-in-a-world-of-persuasive-technologies/ Mon, 29 Jul 2013 10:11:17 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1541
We’re living through a crisis of distraction. Image: “What’s on my iPhone” by Erik Mallinson

Ed: What persuasive technologies might we routinely meet online? And how are they designed to guide us towards certain decisions?

There’s a broad spectrum, from the very simple to the very complex. A simple example would be something like Amazon’s “one-click” purchase feature, which compresses the entire checkout process down to a split-second decision. This uses a persuasive technique known as “reduction” to minimise the perceived cost to a user of going through with a purchase, making it more likely that they’ll transact. At the more complex end of the spectrum, you have the whole set of systems and subsystems that is online advertising. As it becomes easier to measure people’s behaviour over time and across media, advertisers are increasingly able to customise messages to potential customers and guide them down the path toward a purchase.

It isn’t just commerce, though: mobile behaviour-change apps have seen really vibrant growth in the past couple of years. In particular, health and fitness: products like Nike+, Map My Run, and Fitbit let you monitor your exercise, share your performance with friends, use social motivation to help you define and reach your fitness goals, and so on. One interesting example I came across recently is called “Zombies, Run!” which motivates by fright, spawning virtual zombies to chase you down the street while you’re on your run.

As one final example, if you’ve ever tried to deactivate your Facebook account, you’ve probably seen a good example of social persuasive technology: the screen that comes up saying, “If you leave Facebook, these people will miss you” and then shows you pictures of your friends. Broadly speaking, most of the online services we think we’re using for “free” — that is, the ones we’re paying for with the currency of our attention — have some sort of persuasive design goal. And this can be particularly apparent when people are entering or exiting the system.

Ed: Advertising has been around for centuries, so we might assume that we have become clever about recognizing and negotiating it — what is it about these online persuasive technologies that poses new ethical questions or concerns?

The ethical questions themselves aren’t new, but the environment in which we’re asking them makes them much more urgent. There are several important trends here. For one, the Internet is becoming part of the background of human experience: devices are shrinking, proliferating, and becoming more persistent companions through life. In tandem with this, rapid advances in measurement and analytics are enabling us to more quickly optimise technologies to reach greater levels of persuasiveness. That persuasiveness is further augmented by applying knowledge of our non-rational psychological biases to technology design, which we are doing much more quickly than in the design of slower-moving systems such as law or ethics. Finally, the explosion of media and information has made it harder for people to be intentional or reflective about their goals and priorities in life. We’re living through a crisis of distraction. The convergence of all these trends suggests that we could increasingly live our lives in environments of high persuasive power.

To me, the biggest ethical questions are those that concern individual freedom and autonomy. When, exactly, does a “nudge” become a “push”? When we call these types of technology “persuasive,” we’re implying that they shouldn’t cross the line into being coercive or manipulative. But it’s hard to say where that line is, especially when it comes to persuasion that plays on our non-rational biases and impulses. How persuasive is too persuasive? Again, this isn’t a new ethical question by any means, but it is more urgent than ever.

These technologies also remind us that the ethics of attention is just as important as the ethics of information. Many important conversations are taking place across society that deal with the tracking and measurement of user behaviour. But that information is valuable largely because it can be used to inform some sort of action, which is often persuasive in nature. But we don’t talk nearly as much about the ethics of the persuasive act as we do about the ethics of the data. If we did, we might decide, for instance, that some companies have a moral obligation to collect more of a certain type of user data because it’s the only way they could know if they were persuading a person to do something that was contrary to their well-being, values, or goals. Knowing a person better can be the basis not only for acting more wrongly toward them, but also more rightly.

As users, then, persuasive technologies require us to be more intentional about how we define and express our own goals. The more persuasion we encounter, the clearer we need to be about what it is we actually want. If you ask most people what their goals are, they’ll say things like “spending more time with family,” “being healthier,” “learning piano,” etc. But we don’t all accomplish the goals we have — we get distracted. The risk of persuasive technology is that we’ll have more temptations, more distractions. But its promise is that we can use it to motivate ourselves toward the things we find fulfilling. So I think what’s needed is more intentional and habitual reflection about what our own goals actually are. To me, the ultimate question in all this is how we can shape technology to support human goals, and not the other way around.

Ed: What if a persuasive design or technology is simply making it easier to do something we already want to do: isn’t this just ‘user centered design’? (ie a good thing?)

Yes, persuasive design can certainly help motivate a user toward their own goals. In these cases it generally resonates well with user-centered design. The tension really arises when the design leads users toward goals they don’t already have. User-centered design doesn’t really have a good way to address persuasive situations, where the goals of the user and the designer diverge.

To reconcile this tension, I think we’ll probably need to get much better at measuring people’s intentions and goals than we are now. Longer-term, we’ll probably need to rethink notions like “design” altogether. When it comes to online services, it’s already hard to talk about “products” and “users” as though they were distinct entities, and I think this will only get harder as we become increasingly enmeshed in an ongoing co-evolution.

Ed: Governments and corporations are increasingly interested in “data-driven” decision-making: isn’t that a good thing? Particularly if the technologies now exist to collect ‘big’ data about our online actions (if not intentions)?

I don’t think data ever really drives decisions. It can definitely provide an evidentiary basis, but any data is ultimately still defined and shaped by human goals and priorities. We too often forget that there’s no such thing as “pure” or “raw” data — that any measurement reflects, before anything else, evidence of attention.

That being said, data-based decisions are certainly preferable to arbitrary ones, provided that you’re giving attention to the right things. But data can’t tell you what those right things are. It can’t tell you what to care about. This point seems to be getting lost in a lot of the fervour about “big data,” which as far as I can tell is a way of marketing analytics and relational databases to people who are not familiar with them.

The psychology of that term, “big data,” is actually really interesting. On one hand, there’s a playful simplicity to the word “big” that suggests a kind of childlike awe where words fail. “How big is the universe? It’s really, really big.” It’s the unknown unknowns at scale, the sublime. On the other hand, there’s a physicality to the phrase that suggests an impulse to corral all our data into one place: to contain it, mould it, master it. Really, the term isn’t about data abundance at all – it reflects our grappling with a scarcity of attention.

The philosopher Luciano Floridi likens the “big data” question to being at a buffet where you can eat anything, but not everything. The challenge comes in the choosing. So how do you choose? Whether you’re a government, a corporation, or an individual, it’s your ultimate aims and values — your ethical priorities — that should ultimately guide your choosiness. In other words, the trick is to make sure you’re measuring what you value, rather than just valuing what you already measure.


James Williams is a doctoral student at the Oxford Internet Institute. He studies the ethical design of persuasive technology. His research explores the complex boundary between persuasive power and human freedom in environments of high technological persuasion.

James Williams was talking to blog editor Thain Simon.

Is China changing the Internet, or is the Internet changing China? https://ensr.oii.ox.ac.uk/is-china-changing-the-internet-or-is-the-internet-changing-china/ Fri, 12 Jul 2013 08:13:52 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1741 The rising prominence of China is one of the most important developments shaping the Internet. Once typified primarily by users in the US, the Internet now has more users in China than there are Americans on the planet. By 2015, the proportion of Chinese language Internet users is expected to exceed the proportion of English language users. These are just two aspects of a larger shift in the centre of gravity of Internet use, in which the major growth is increasingly taking place in Asia and the rapidly developing economies of the Global South, and the BRIC nations of Brazil, Russia, India — and China.

The 2013 ICA Preconference “China and the New Internet World” (14 July 2013), organised by the OII in collaboration with many partners at collaborating universities, explored the issues raised by these developments, focusing on two main interrelated questions: how is the rise of China reshaping the global use and societal implications of the Internet? And in turn, how is China itself being reshaped by these regional and global developments?

As China has become more powerful, much attention has been focused on the number of Internet users: China now represents the largest group of Internet users in the world, with over half a billion people online. But how the Internet is used is also important: this group doesn’t just include passive ‘users’; it also includes authors, bloggers, designers and architects — that is, people who shape and design values into the Internet. This input will undoubtedly affect the Internet going forward, as Chinese institutions take on a greater role in shaping it, both in terms of policy, such as around freedom of expression and privacy, and of practice, such as social and commercial uses like shopping online.

Most discussion of the Internet tends to emphasise technological change and ignore many aspects of the social changes that accompany the Internet’s evolution, such as this dramatic global shift in the concentration of Internet users. The Internet is not just a technological artefact. In 1988, Deng Xiaoping declared that “science and technology are primary productive forces” that would be active and decisive factors in the new Chinese society. At the time China naturally paid a great deal of attention to technology as a means to lift its people out of poverty, but it may not have occurred to Deng that the Internet would not just impact the national economy, but that it would come to affect a person’s entire life — and society more generally — as well. In China today, users are more likely both to shop online and to discuss political issues online than users in most of the other 65 nations surveyed in a recent report [1].

The transformative potential of the Internet has challenged top-down communication patterns in China, by supporting multi-level and multi-directional flows of communication. Of course, communications systems reflect economic and political power to a large extent: the Internet is not a new or separate world, and its rules reflect offline rules and structures. In terms of the large ‘digital divide’ that exists in China (whose Internet penetration currently stands at a bit over 40%, meaning that 700 million people are still not online), we have to remember that this divide is likely to reflect other real economic and political divides, such as lack of access to other basic resources.

While there is much discussion about how the Internet is affecting China’s domestic policy (in terms of public administration, ensuring reliable systems of supply and control, the urban-rural divide and migration, and policy on things like anonymity and free speech), less time is spent discussing the geopolitics of the Internet. China certainly has the potential for great influence beyond its own borders, for example affecting communication flows worldwide and the global division of power. For such reasons, it is valuable to move beyond ‘single country studies’ to consider global shifts in attitudes and values shaping the Internet across the world. As a contested and contestable space, the political role of the Internet is likely to be a focal point for traditional discussions of key values, such as freedom of expression and assembly; remember Hillary Clinton’s 2010 ‘Internet freedom’ speech, delivered at Washington’s Newseum Institute. Contemporary debates over privacy and freedom of expression are indeed increasingly focused on Internet policy and practice.

Now is not the first time in the histories of the US and China that their respective foreign policies have been of great interest and importance to the other. However, this might also be a period of anxiety-driven (rather than rational) policy making, particularly if increased exposure and access to information around the world leads to efforts to create Berlin walls of the digital age. In this period of national anxieties on the part of governments and citizens — who may feel that “something must be done” — there will inevitably be competition between the US, China, and the EU to drive national Internet policies that assert local control and jurisdiction. Ownership and control of the Internet by countries and companies is certainly becoming an increasingly politicized issue. Instead of supporting technical innovation and the diffusion of the Internet, nations are increasingly focused on controlling the flow of online content and exploiting the Internet as a means for gauging public sentiment and opinion, rather than as a channel to help shape public policy and social accountability.

For researchers, it is time to question a myopic focus on national units of analysis when studying the Internet, since many activities of critical importance take place in smaller regions, such as Silicon Valley, larger regions, such as the Global South, and in virtual spaces that are truly global. We tend to think of single places: “the Internet” / “the world” / “China”: but as a number of conference speakers emphasized, there is more than one China, if we consider for example Taiwan, Hong Kong, rural China, and the factory zones — each with their different cultural, legal and economic dynamics. Similarly, there are a multitude of actors, for example corporations, which are shaping the Chinese Internet as surely as Beijing is. As Jack Qiu, one of the opening panelists, observed: “There are many Internets, and many worlds.” There are also multiple histories of the Internet in China, and as yet no standard narrative.

The conference certainly made clear that we are learning a lot about China, as a rapidly growing number of Chinese scholars increasingly research and publish on the subject. The vitality of the Chinese Journal of Communication is one sign of this energy, but Internet research is expanding globally as well. Some of the panel topics will be familiar to anyone following the news, even if there is still not much published in the academic literature: smart censorship, trust in online information, human flesh search, political scandal, democratisation. But there were also interesting discussions from new perspectives, or perspectives that are already very familiar in a Western context: social networking, job markets, public administration, and e-commerce.

However, while international conferences and dedicated panels are making these cross-cultural (and cross-topic) discussions and conversations easier, we still lack enough published content about China and the Internet, and it can be difficult to find material, due to its recent diffusion, and major barriers such as language. This is an important point, given how easy it is to oversimplify another culture. A proper comparative analysis is hard and often frustrating to carry out, but important, if we are to see our own frameworks and settings in a different way.

One of the opening panelists remarked that two great transformations had occurred during his academic life: the emergence of the Internet, and the rise of China. The intersection of the two is providing fertile ground for research, and the potential for a whole new, rich research agenda. Of course the challenge for academics is not simply to find new, interesting and important things to say about a subject, but to draw enduring theoretical perspectives that can be applied to other nations and over time.

In returning to the framing question, “is China changing the Internet, or is the Internet changing China?”, the answer to both is obviously “yes”. But as the Dean of the USC Annenberg School, Ernest Wilson, put it, we need to be asking “how?” and “to what degree?” I hope this preconference encouraged more scholars to pursue these questions.

Reference

[1] Bolsover, G., Dutton, W.H., Law, G. and Dutta, S. (2013) Social Foundations of the Internet in China and the New Internet World: A Cross-National Comparative Perspective. Presented at “China and the New Internet World”, International Communication Association (ICA) Preconference, Oxford Internet Institute, University of Oxford, June 2013.


The OII’s Founding Director (2002-2011), Professor William H. Dutton is Professor of Internet Studies, University of Oxford, and Fellow of Balliol College. Before coming to Oxford in 2002, he was a Professor in the Annenberg School for Communication at the University of Southern California, where he is now an Emeritus Professor. His most recent books include World Wide Research: Reshaping the Sciences and Humanities, co-edited with P. Jeffreys (MIT Press, 2011) and the Oxford Handbook of Internet Studies (Oxford University Press, 2013). Read Bill’s blog.

]]>
How effective is online blocking of illegal child sexual content? https://ensr.oii.ox.ac.uk/how-effective-is-online-blocking-of-illegal-child-sexual-content/ Fri, 28 Jun 2013 09:30:18 +0000 http://blogs.oii.ox.ac.uk/policy/?p=1576 Anonymous Belgium
The recent announcement by ‘Anonymous Belgium’ (above) that they would ‘liberate the Belgian Web’ on 15 July 2013 in response to blocking of websites by the Belgian government was revealed to be a promotional stunt by a commercial law firm wanting to protest non-transparent blocking of online content.

Ed: European legislation introduced in 2011 requires Member States to ensure the prompt removal of child pornography websites hosted in their territory, and to endeavour to obtain the removal of such websites hosted outside it, leaving open the option to block access by users within their own territory. What is problematic about this blocking?

Authors: From a technical point of view, all possible blocking methods that could be used by Member States are ineffective, as they can all be circumvented very easily. Widely available technologies (like encryption or proxy servers) and tiny changes in computer configuration (for instance, the choice of DNS server), which may also be used to improve performance, security, or privacy, enable circumvention of blocking methods. Another problem arises from the fact that this legislation only targets website content, while offenders often use other technologies such as peer-to-peer systems, newsgroups or email.

Ed: Many of these blocking activities stem from European efforts to combat child pornography, but you suggest that child protection may be used as a way to add other types of content to lists of blocked sites – notably those that purportedly violate copyright. Can you explain how this “mission creep” is occurring, and what the risks are?

Authors: Combating child pornography and child abuse is a universal and legitimate concern. On this subject there is a worldwide consensus that action must be taken to punish abusers and protect children. Blocking measures are usually advocated on the basis that access to these images must be prevented, so that users do not inadvertently stumble upon child pornography. Whereas this seems reasonable with regard to this particular type of content, in some countries governments increasingly use blocking mechanisms for other ‘illegal’ content, such as gambling or copyright-infringing content, often in a very non-transparent way, without clear or established procedures.

It is, in our view, especially important at a time when governments do not hesitate to carry out secret online surveillance of citizens without any transparency or accountability, that any interference with online content must be clearly prescribed by law, have a legitimate aim and, most importantly, be proportionate and not go beyond what is necessary to achieve that aim. In addition, the role of private actors, such as ISPs, search engine companies or social networks, must be very carefully considered. It must be clear that decisions about which content or behaviours are illegal and/or harmful must be taken, or at least overseen, by the judiciary in a democratic society.

Ed: You suggest that removal of websites at their source (mostly in the US and Canada) is a more effective means of stopping the distribution of child pornography — but that European law enforcement has often been insufficiently committed to such action. Why is this? And how easy are cross-jurisdictional efforts to tackle this sort of content?

Authors: The blocking of websites, although arguably ineffective as a method of making content inaccessible, is a quick way to be seen to take action against the appearance of unwanted material on the Internet. The removal of content, on the other hand, requires not only the identification of those responsible for hosting the content but, more importantly, of the actual perpetrators. This is of course a more intrusive and lengthy process, for which law enforcement agencies currently lack resources.

Moreover, these agencies may indeed run into obstacles related to territorial jurisdiction and difficult international cooperation. However, prioritising and investing in the actual removal of content, even though not feasible in all circumstances, will ensure that child sexual abuse images do not circulate further and, hence, that the risk of repeated re-victimization of abused children is reduced.


Read the full paper: Karel Demeyer, Eva Lievens and Jos Dumortier (2012) Blocking and Removing Illegal Child Sexual Content: Analysis from a Technical and Legal Perspective. Policy and Internet 4 (3-4).

Karel Demeyer, Eva Lievens and Jos Dumortier were talking to blog editor Heather Ford.

]]>
Time for debate about the societal impact of the Internet of Things https://ensr.oii.ox.ac.uk/time-for-debate-about-the-societal-impact-of-the-internet-of-things/ Mon, 22 Apr 2013 14:32:22 +0000 http://blogs.oii.ox.ac.uk/policy/?p=931
European conference on the Internet of Things
The 2nd Annual Internet of Things Europe 2010: A Roadmap for Europe, 2010. Image by Pierre Metivier.
On 17 April 2013, the US Federal Trade Commission published a call for inputs on the ‘consumer privacy and security issues posed by the growing connectivity of consumer devices, such as cars, appliances, and medical devices’, in other words, about the impact of the Internet of Things (IoT) on the everyday lives of citizens. The call is in large part one for information to establish what the current state of technology development is and how it will develop, but it also looks for views on how privacy risks should be weighed against potential societal benefits.

There’s a lot that’s not very new about the IoT. Embedded computing, sensor networks and machine-to-machine communications have been around a long time. Mark Weiser was developing the concept of ubiquitous computing (and prototyping it) at Xerox PARC in 1990. Many of the big ideas in the IoT — smart cars, smart homes, wearable computing — are already envisaged in works such as Nicholas Negroponte’s Being Digital, which was published in 1995, before the mass popularisation of the Internet itself. The term ‘Internet of Things’ has been around since at least 1999. What is new is the speed with which technological change has made these ideas implementable on a societal scale. The FTC’s interest reflects a growing awareness of the potential significance of the IoT, and the need for public debate about its adoption.

As the cost and size of devices falls and network access becomes ubiquitous, it is evident that not only major industries but whole areas of consumption, public service and domestic life will be capable of being transformed. The number of connected devices is likely to grow fast in the next few years. The Organisation for Economic Co-operation and Development (OECD) estimates that while a family with two teenagers may have 10 devices connected to the Internet today, by 2022 this may well grow to 50 or more. Across the OECD area the number of connected devices in households may rise from an estimated 1.7 billion today to 14 billion by 2022. Programmes such as smart cities, smart transport and smart metering will begin to have their effect soon. In other countries, notably in China and Korea, whole new cities are being built around smart infrastructure, giving technology companies the opportunity to develop models that could be implemented subsequently in Western economies.

Businesses and governments alike see this as an opportunity for new investment both as a basis for new employment and growth and for the more efficient use of existing resources. The UK Government is funding a strand of work under the auspices of the Technology Strategy Board on the IoT, and the IoT is one of five themes that are the subject of the Department for Business, Innovation & Skills (BIS)’s consultation on the UK’s Digital Economy Strategy (alongside big data, cloud computing, smart cities, and eCommerce).

The enormous quantity of information that will be produced will provide further opportunities for collecting and analysing big data. There is consequently an emerging agenda about privacy, transparency and accountability. There are challenges too to the way we understand and can manage the complexity of interacting systems that will underpin critical social infrastructure.

The FTC is not alone in looking to open public debate about these issues. In February, the OII and BCS (the Chartered Institute for IT) ran a joint seminar to help the BCS’s consideration about how it should fulfil its public education and lobbying role in this area. A summary of the contributions is published on the BCS website.

The debate at the seminar was wide ranging. There was no doubt that the train has left the station as far as this next phase of the Internet is concerned. The scale of major corporate investment, government encouragement and entrepreneurial enthusiasm are not to be deflected. In many sectors of the economy there are already changes being felt by consumers, or that will be soon enough. Smart metering, smart grid, and transport automation (including cars) are all examples. A lot of the discussion focused on risk. In a society which places high value on audit and accountability, it is perhaps unsurprising that early implementations have often used sensors and tags to track processes and monitor activity. This is especially attractive in industrial structures that have high degrees of subcontracting.

Wider societal risks were also discussed. As for the FTC, the privacy agenda is salient. There is real concern that the assumptions which underlie the data protection regime, especially its reliance on data minimisation, will not be adequate to protect individuals in an era of ubiquitous data. Nor is it clear that the UK’s regulator, the Information Commissioner, will be equipped to deal with the volume of potential business. Alongside privacy, there is also concern for security and the protection of critical infrastructure. The growth of reliance on the IoT will make cybersecurity significant in many new ways. There are issues too about complexity and the unforeseen (and arguably unforeseeable) consequences of the interactions between complex, large, distributed systems acting in real time, and with consequences that go very directly to the wellbeing of individuals and communities.

There are great opportunities and a pressing need for social research into the IoT. Data about its social impacts has hitherto been limited, given the relatively few systems deployed. This will change rapidly. As governments consult and bodies like the BCS seek to advise, it is very desirable that public debate about privacy and security, access and governance, takes place on the basis of real evidence and sound analysis.

]]>
Personal data protection vs the digital economy? OII policy forum considers our digital footprints https://ensr.oii.ox.ac.uk/personal-data-protection-vs-the-digital-economy-forthcoming-oii-policy-forum/ https://ensr.oii.ox.ac.uk/personal-data-protection-vs-the-digital-economy-forthcoming-oii-policy-forum/#comments Thu, 03 Feb 2011 11:12:13 +0000 http://blogs.oii.ox.ac.uk/policy/?p=177 Catching a bus, picking up some groceries, calling home to check on the children – all simple, seemingly private activities that characterise many people’s end to the working day. Yet each of these activities leaves a data trail that enables companies, even the state, to track the most mundane aspects of our lives. Add to this the range and quantity of personal data that many of us willingly post online on our blogs, Facebook walls or Google docs, and it is clear that the trail of digital footprints we leave is long and hard to erase.

Even if, in most cases, this data is only likely to be used in an anonymised and aggregated form to identify trends in transport or shopping patterns, or to personalise the Internet services available to us, the fact that its collection is now so routine and so extensive should make us question whether the regulatory system governing data collection, storage and use is fit for purpose. A forthcoming OII policy forum on Tracing the Policy Implications of the Future Digital Economy (16 Feb) will consider this question, bringing together leading academics from across several disciplines with policy-makers and industry experts.

This is a topic which the OII is well-placed to address. Ian Brown’s Privacy Values Network project addresses a major knowledge gap, measuring the various costs and benefits to individuals of handing over data in different contexts, as without this we simply don’t know how much people value their privacy (or indeed understand its limits). The last Oxford Internet Survey (OxIS) rather surprisingly showed that in 2009 people were significantly less concerned about privacy online in the UK than in previous years (45% of all those surveyed in 2009 against 66% in 2007); we wait to see whether this finding is repeated when OxIS 2011 goes into the field next month.

Our faculty also have much to say about the adequacy (or otherwise) of the regulatory framework: a recent report by Ian Brown and Douwe Korff on New Challenges to Data Protection identified for the European Commission the scale of the challenges presented to the current data protection regime, whilst Viktor Mayer-Schönberger’s book Delete: The Virtue of Forgetting in the Digital Age has rightly raised the suggestion that personal information online should have an expiration date, to ensure it doesn’t hang around for years to embarrass us at a later date.

The forum will consider how the market for information storage and collection is rapidly changing with the advent of new technologies, and on this point, one conclusion is clear: if we accept Helen Nissenbaum’s contention that personal information and data should be collected and protected according to the social norms governing different social contexts, then we need to get to grips pretty fast with how these technologies are playing out in the way we work, play, learn and consume.

]]>
https://ensr.oii.ox.ac.uk/personal-data-protection-vs-the-digital-economy-forthcoming-oii-policy-forum/feed/ 1
New issue of Policy and Internet (2,3) https://ensr.oii.ox.ac.uk/new-issue-of-policy-and-internet-23/ Thu, 04 Nov 2010 12:08:24 +0000 http://blogs.oii.ox.ac.uk/policy/?p=121 Welcome to the third issue of Policy & Internet for 2010. We are pleased to present five articles focusing on substantive public policy issues arising from widespread use of the Internet: regulation of trade in virtual goods; development of electronic government in Korea; online policy discourse in UK elections; regulatory models for broadband technologies in the US; and alternative governance frameworks for open ICT standards.

Three of the articles are the first to be published from the highly successful conference ‘Internet, Politics and Policy’, held by the journal in Oxford, 16th-17th September 2010. You may access any of the articles below at no charge.

Helen Margetts: Editorial

Vili Lehdonvirta and Perttu Virtanen: A New Frontier in Digital Content Policy: Case Studies in the Regulation of Virtual Goods and Artificial Scarcity

Joon Hyoung Lim: Digital Divides in Urban E-Government in South Korea: Exploring Differences in Municipalities’ Use of the Internet for Environmental Governance

Darren G. Lilleker and Nigel A. Jackson: Towards a More Participatory Style of Election Campaigning: The Impact of Web 2.0 on the UK 2010 General Election

Michael J. Santorelli: Regulatory Federalism in the Age of Broadband: A U.S. Perspective

Laura DeNardis: E-Governance Policies for Interoperability and Open Standards

]]>
Internet, Politics, Policy 2010: Closing keynote by Viktor Mayer-Schönberger https://ensr.oii.ox.ac.uk/internet-politics-policy-2010-closing-keynote-by-viktor-mayer-schonberger/ Fri, 17 Sep 2010 15:48:04 +0000 http://blogs.oii.ox.ac.uk/policy/?p=94 Our two-day conference is coming to a close with a keynote by Viktor Mayer-Schönberger who is soon to be joining the faculty of the Oxford Internet Institute as Professor of Internet Governance and Regulation.

Viktor talked about the theme of his recent book, “Delete: The Virtue of Forgetting in the Digital Age” (a webcast of this keynote will be available soon on the OII website, but you can also listen to a previous talk here). It touches on many of the recent debates about information that has been published on the web in some context and which might suddenly come back to us in a completely different context, e.g. when applying for a job and being confronted with some drunken picture of us obtained from Facebook.

Viktor puts this into a broad perspective, contrasting the two themes of “forgetting” and “remembering”. He convincingly argues that for most of human history, forgetting has been the default. This state of affairs has changed quite dramatically with advances in computing, data storage and information retrieval technologies available on a global information infrastructure. Now remembering is the default, as most of the information stored digitally is available forever and in multiple places.

What he sees at stake is power: the permanent threat that our activities are being watched by others – not necessarily now, but possibly in the future – can result in our altering our behaviour today. What is more, he says that without forgetting it is hard for us to forgive, as we deny ourselves and others the possibility to change.

No matter to what degree you are prepared to follow the argument, the most intriguing question is how the current state of remembering could be changed back to forgetting. Viktor discusses a number of ideas that offer no real solution:

  1. privacy rights – don’t go very far in changing actual behaviour
  2. information ecology – the idea of storing only as much as necessary
  3. digital abstinence – simply not using these digital tools, which is not very practical
  4. full contextualization – storing as much information as possible in order to provide the necessary context for evaluating information from the past
  5. cognitive adjustments – humans would have to change in order to learn how to discard information, which is very difficult
  6. privacy digital rights management – would require a global infrastructure that would create more threats than solutions

Instead, Viktor wants to establish mechanisms that ease forgetting, primarily by making it a little bit more difficult to remember. Ideas include:

  • an expiration date for information, less to technically force deletion than to socially force thinking about forgetting
  • making older information a bit more difficult to retrieve

Whatever the actual tool, the default should be forgetting, prompting users to reflect on and choose just how long a certain piece of information should remain valid.

Nice closing statement: “Let us remember to forget!”

]]>
New issue of Policy and Internet (2,2) https://ensr.oii.ox.ac.uk/new-issue-of-policy-and-internet-22/ Thu, 19 Aug 2010 12:17:12 +0000 http://blogs.oii.ox.ac.uk/policy/?p=128 Welcome to the second issue of Policy & Internet for 2010! We are pleased to present six articles which investigate the role of the Internet in a wide range of policy processes and sectors: agenda setting in online and traditional media; environmental policy networks; online deliberation on climate change; data protection and privacy; net neutrality; and digital inclusion/exclusion. You may access any of the articles below at no charge.

Helen Margetts: Editorial

Ben Sayre, Leticia Bode, Dhavan Shah, Dave Wilcox, and Chirag Shah: Agenda Setting in a Digital Age: Tracking Attention to California Proposition 8 in Social Media, Online News and Conventional News

Kathleen McNutt and Adam Wellstead: Virtual Policy Networks in Forestry and Climate Change in the U.S. and Canada: Government Nodality, Internationalization and Actor Complexity

Julien Talpin and Stéphanie Wojcik: Deliberating Environmental Policy Issues: Comparing the Learning Potential of Online and Face-To-Face Discussions on Climate Change

Andrew A. Adams, Kiyoshi Murata, and Yohko Orito: The Development of Japanese Data Protection

Scott Jordan: The Application of Net Neutrality to Wireless Networks Based on Network Architecture

Alison Powell, Amelia Bryne, and Dharma Dailey: The Essential Internet: Digital Exclusion in Low-Income American Communities

]]>