Vicki Nash – The Policy and Internet Blog (https://ensr.oii.ox.ac.uk)

Internet Filtering: And Why It Doesn’t Really Help Protect Teens
https://ensr.oii.ox.ac.uk/internet-filtering-and-why-it-doesnt-really-help-protect-teens/ (29 March 2017)

Young British teens (aged 12 to 15) spend nearly 19 hours a week online, raising concerns for parents, educators, and politicians about the possible negative experiences they may have online. Schools and libraries have long used Internet-filtering technologies as a means of managing adolescents’ experiences online, and major ISPs in Britain now filter new household connections by default.

However, a new article by Andrew Przybylski and Victoria Nash, “Internet Filtering Technology and Aversive Online Experiences in Adolescents”, published in the Journal of Pediatrics, finds evidence ranging from equivocal to strong that household-level Internet filtering does not reduce the chance of adolescents having recent aversive online experiences. The authors analysed data from 1,030 in-home interviews conducted with early adolescents as part of Ofcom’s Children and Parents Media Use and Attitudes Report.

The Internet is now a central fixture of modern life, and caregivers must weigh the positives of Internet use against its negatives. Internet filters have been adopted as a tool for limiting those negatives; however, evidence of their effectiveness is dubious. They are expensive to develop and maintain, and they also carry significant informational costs: even sophisticated filters over-block, which is onerous for those seeking information about sexual health, relationships, or identity, and might have a disproportionate effect on vulnerable groups. Striking the right balance between protecting adolescents and respecting their rights to freedom of expression and information presents a formidable challenge.

In conducting their study to address this uncertainty, the authors found convincing evidence that Internet filters were not effective at shielding early adolescents from aversive experiences online. Given this finding, they propose that evidence derived from randomised controlled trials and registered research designs is needed to determine how far Internet-filtering technology supports or thwarts young people online. Only then will parents and policymakers be able to make an informed decision as to whether its widespread use justifies its costs.

We caught up with Andy and Vicki to discuss the implications of their study:

Ed.: Just this morning when working from home I tried to look up an article’s author and was blocked, Virgin Media presumably having decided he might be harmful to me. Where does this recent enthusiasm for default-filtering come from? Is it just that it’s a quick, uncomplicated (technological) fix, which I guess is what politicians / policy-people like?

Vicki: In many ways this is just a typical response to the sorts of moral panic which have long arisen around the possible risks of new technologies. We saw the same concerns arise with television in the 1960s, for example, and in that case the UK’s policy response was to introduce a ‘watershed’, a daily time after which content aimed at adults could be shown. I suppose I see filtering as fulfilling the same sort of policy gap, namely recognising that certain types of content can be legally available but should not be served up ‘in front of the children’.

Andy: My reading of the psychological and developmental literature suggests that filters provide a way of creating a safe walled space in schools, libraries, and homes for young people to use the internet. This of course does not mean that reading our article will be harmful!

Ed.: I suppose that children desperate to explore won’t be stopped by a filter; those who aren’t curious probably wouldn’t encounter much anyway — what is the profile of the “child” and the “harm-filtering” scenario envisaged by policy-makers? And is Internet filtering basically just aiming at the (easy) middle of the bell-curve?

Vicki: This is a really important point. Sociologists recognised many years ago that the whole concept of childhood is socially constructed, but we often forget about this when it comes to making policy. There’s a tendency for politicians, for example, either to describe children as inherently innocent and vulnerable, or to frame them as expert ‘digital natives’, yet there’s plenty of academic research which demonstrates the extent to which children’s experiences of the Internet vary by age, education, income and skill level.

This matters because it suggests a ‘one-size-fits-all’ approach may fail. In the context of this paper, we specifically wanted to check whether children with the technical know-how to get around filters experienced more negative experiences online than those who were less tech-savvy. This is often assumed to be true, but interestingly, our analysis suggests this factor makes very little difference.

Ed.: In all these discussions and policy decisions: is there a tacit assumption that these children are all growing up in a healthy, supportive (“normal”) environment — or is there a recognition that many children will be growing up in attention-poor (perhaps abusive) environments and that maybe one blanket technical “solution” won’t fit everyone? Is there also an irony that the best protected children will already be protected, and the least protected, probably won’t be?

Andy: Yes, this is an ironic and somewhat tragic dynamic. Unfortunately, because research into the effectiveness of filtering is at such an early stage, it’s not possible to know which young people (if any) are more or less helped by filters. We need to know how effective filters are in general before moving on to identify the young people for whom they are more or less helpful. We would also need to be able to explicitly define what would constitute an ‘attention-poor’ environment.

Vicki: From my perspective, this does always serve as a useful reminder that there’s a good reason why policy-makers turn to universalistic interventions, namely that this is likely to be the only way of making a difference for the hardest-to-reach children whose carers might never act voluntarily. But admirable motives are no replacement for efficacy, so as Andy notes, it would make more sense to find evidence, first, that household Internet filtering is effective, and second, that it can be effective for this vulnerable group, before imposing default-on filters on all.

Ed.: With all this talk of potential “harm” to children posed by the Internet .. is there any sense of how much (specific) harm we’re talking about? And conversely .. any sense of the potential harms of over-blocking?

Vicki: No, you’re right that the harms of Internet use are quite hard to pin down. The harms reported typically take the form of bullying, or of self-harm horror stories related to Internet use. The problem is that it’s often easier to gauge how many children have been exposed to certain risky experiences (e.g. viewing pornography) than to ascertain whether or how they were harmed by this. Policy in this area often abides by what’s known as ‘the precautionary principle’.

This means that if you lack clear evidence of harm but have good reason to suspect public harm is likely, then the burden of proof is on those who would prove it is not likely. This means that policies aimed at protecting children in many contexts are often conservative, and rightly so. But it also means that it’s important to reconsider policies in the light of new evidence as it comes along. In this case we found that there is not as much evidence that Internet filters are effective at preventing exposure to negative experiences online as might be hoped.

Ed.: Stupid question: do these filters just filter “websites”, or do they filter social media posts as well? I would have thought young teens would be more likely to find or share stuff on social media (i.e. mobile) than “on a website”?

Andy: My understanding is that there are continually updated ‘lists’ of websites that contain certain kinds of content, such as pornography, piracy, gambling, or drug use (see this list on Wikipedia, for example), though the precise categories vary by UK ISP.

Vicki: But it’s not quite true to say that household filtering packages don’t block social media. Some of the filtering options offered by the UK’s ‘Big 4’ ISPs enable parents and carers to block social media sites during ‘homework time’, for example. A bigger issue, though, is that much of children’s Internet use now takes place outside the home. So, household-level filters can only go so far. And whilst schools and libraries usually filter content, public wifi or wifi in friends’ houses may not, and content can be easily exchanged directly between kids’ devices via Bluetooth or messaging apps.
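[Ed.: purely as an illustration of the list-based mechanism Andy describes, and of the category toggles Vicki mentions, here is a minimal sketch of how such filtering works in principle. The categories, domains, and household settings are all invented for this post and do not represent any real ISP’s lists or software.]

```python
# Illustrative sketch of ISP-style, category-based blocklist filtering.
# All categories, domains, and settings below are invented; no ISP
# publishes its actual lists or implementation in this form.

BLOCKLISTS = {
    "pornography": {"example-adult-site.com"},
    "gambling": {"example-casino.net"},
    "social-media": {"example-social.com"},
}

def is_blocked(domain, enabled_categories):
    """Return True if the domain appears on any blocklist the household has enabled."""
    return any(domain in BLOCKLISTS.get(cat, set()) for cat in enabled_categories)

# A household might keep some categories always on, and add others
# temporarily, e.g. blocking social media during 'homework time'.
always_on = {"pornography", "gambling"}
homework_time = always_on | {"social-media"}

print(is_blocked("example-social.com", always_on))      # False
print(is_blocked("example-social.com", homework_time))  # True
```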

Ed.: Do these blocked sites (like the webpage of that journal author I was trying to access) get notified that they have been blocked, with a chance to appeal? Would a simple solution to over-blocking be to allow (e.g. sexual health, gender-issue, minority, etc.) sites to request that they be whitelisted, or to apply for some ‘approved’ certification?

Vicki: I don’t believe so. There are whitelisted sites; indeed, that was a key outcome of an early inquiry into ‘over-blocking’ by the UK Council for Child Internet Safety. But for this to be a sufficient response, all sites and apps that are subject to filtering would need to be notified, to allow for possible appeal. The Open Rights Group provide a tool that allows site owners to check the availability of their sites, but there is no official process for seeking whitelisting or appeal.

Ed.: And what about age verification as an alternative? (however that is achieved / validated), i.e. restricting content before it is indexed, rather than after?

Andy: To evaluate this we would need to conduct a randomised controlled trial where we tested how the application of age verification for different households, selected at random, would relate (or not) to young people encountering potentially aversive content online.

Vicki: But even if such a study could prove that age verification tools were effective in restricting access to underage Internet users, it’s not clear this would be a desirable scenario. It makes most sense for content that is illegal to access below a certain age, such as online gambling or pornography. But if content is age-gated without legal requirement, then it could prove a very restrictive tool, removing the possibility of parental discretion and failing to make any allowances for the sorts of differences in ability or maturity between children that I pointed out at the beginning.

Ed.: Similarly to the arguments over Google making content-blocking decisions (e.g. over the ‘right to be forgotten’): are these filtering decisions left to the discretion of ISPs / the market / the software providers, or to some government department / NGO? Who’s ultimately in charge of who sees what?

Vicki: Obviously when it comes to content that is illegal for children or adults to access, broad decisions about the delineation of what is illegal fall to governments and are then interpreted and applied by private companies. But when it comes to material that is not illegal, but just deemed harmful or undesirable, then ISPs and social media platforms are left to decide for themselves how to draw the boundaries and how to apply their own policies. This increasing self-regulatory role for what Jonathan Zittrain has called ‘private sheriffs’ is often seen as a flexible and appropriate response, but it does bring reduced accountability and transparency.

Ed.: I guess it’s ironic, with all this attention paid to children, that we now find ourselves in an information environment where maybe we should be filtering out (fake) content for adults as well (joke..). But seriously: with all these issues around content, is your instinct that we should be using technical fixes (filtering, removing from indexes, etc.) or trying to build reflexivity, literacy, and resilience in users (i.e. coping strategies)? Or both? Both are difficult.

Andy: It is as ironic as it is tragic. When I talk to parents (both Vicki and I are parents) I hear that they have been let down by the existing advice, which often amounts to little more than ‘turn it off’. Their struggles have nuance (e.g. how do I know who is in my child’s WhatsApp groups? Is Snapchat OK if they’re just using it amongst best friends?) and whilst broad general advice is heard, this more detailed information and support is hard for parents to find.

Vicki: I agree. But I think it’s inevitable that we’ll always need a combination of tools to deal with the incredible array of content that develops online. No technical tool will ever be 100% reliable in blocking content we don’t want to see, and we need to know how to deal with whatever gets through. That certainly means having a greater social and political focus on education but also a willingness to consider that building resilience may mean exposure to risk, which is hard for some groups to accept.

Every element of our strategy should be underpinned by whatever evidence is available. Ultimately, we also need to stop thinking about these problems as technology problems: fake news is as much a feature of increasing political extremism and alienation as online pornography is a feature of a heavily sexualised mainstream culture. And we can be certain that neither of these broader social trends will be resolved by simple efforts to block out what we don’t wish to see.

Read the full article: Przybylski, A. and Nash, V. (2017) Internet Filtering Technology and Aversive Online Experiences in Adolescents. Journal of Pediatrics. DOI: http://dx.doi.org/10.1016/j.jpeds.2017.01.063


Andy Przybylski and Vicki Nash were talking to blog editor David Sutcliffe.

Evidence on the extent of harms experienced by children as a result of online risks: implications for policy and research
https://ensr.oii.ox.ac.uk/evidence-on-the-extent-of-harms-experienced-by-children-as-a-result-of-online-risks-implications-for-policy-and-research/ (29 July 2014)
Child Internet safety is a topic that continues to gain a great deal of media coverage and policy attention. Recent UK policy initiatives such as Active Choice Plus, in which major UK broadband providers agreed to provide household-level filtering options, or the industry-led Internet Matters portal, reflect a public concern with the potential risks and harms of children’s Internet use. At the same time, the range of academic literature analysing the risks and opportunities of Internet use for children has grown substantially in the past decade, in large part due to the extensive international studies funded by the European Commission as part of the excellent EU Kids Online network. Whilst this has greatly helped us understand how children behave online, there’s still surprisingly little empirical evidence on how perceived risks translate into actual harms. This is problematic, first, because risks can only be identified if we understand what types of harms we wish to avoid, and second, because if we only undertake research on the nature or extent of risk, then it’s difficult to learn anything useful about who is harmed, and what this means for their lives.

Of course, the focus on risk rather than harm is understandable from an ethical and methodological perspective. It wouldn’t be ethical, for example, to conduct a trial in which one group of children was deliberately exposed to very violent or sexual content to observe whether any harms resulted. Similarly, surveys can ask respondents to self-report harms experienced online, perhaps through the lens of upsetting images or experiences. But again, there are ethical concerns about adding to children’s distress by questioning them extensively on difficult experiences, and in a survey context it’s also difficult to avoid imposing adult conceptions of ‘harm’ through the wording of the questions.

Despite these difficulties, there are many research projects that aim to measure and understand the relationship between various types of physical, emotional or psychological harm and activities online, albeit often outside the social sciences. With support from the OUP Fell Fund, I worked with colleagues Vera Slavtcheva-Petkova and Monica Bulger to review the extent of evidence available across these other disciplines. Looking at journal articles published between 1997 and 2012, we aimed to identify any empirical evidence detailing Internet-related harms experienced by children and adolescents and to gain a sense of the types of harm recorded, their severity and frequency.

Our findings demonstrate that there are many good studies out there which do address questions of harm, rather than just risk. The narrowly drawn search found 148 empirical studies which either clearly delineated evidence of very specific harms, or offered some evidence of less well-defined harms. Further, these studies offer rich insights into three broad types of harm: health-related (including harms relating to the exacerbation of eating disorders, self-harming behaviour and suicide attempts); sex-related (largely focused on studies of online solicitation and child abuse); and bullying-related (including the effects on mental health and behaviour). Such a range of coverage would come as no surprise to most researchers focusing on children’s Internet use – these are generally well-documented areas, albeit with the focus more usually on risk rather than harm. Perhaps more surprising was the absence in our search of evidence of harm in relation to privacy violations or economic well-being, both of which are increasingly discussed as significant concerns or risks for minors using the Internet. This gap might have been an artefact of our search terms, of course, but given the policy relevance of both issues, more empirical study of not just risk but actual harm would seem to be merited in these areas.

Another important gap concerned the absence of studies demonstrating that severe harms befall those without prior evidence of vulnerability or risky behaviour. For example, in relation to websites promoting self-harm or eating disorders, there is little evidence that young people previously unaffected by self-harm or eating disorders are influenced by these websites. This isn’t unexpected, as other researchers have shown that harm more often befalls those who already display riskier behaviour, but it is important to bear in mind when devising treatment or policy strategies for reducing such harms.

It’s also worth noting how difficult it is to determine the prevalence of harms. The best-documented cases are often those where medical, police or court records provide great depth of qualitative detail about individual suffering in cases of online grooming and abuse, eating disorders or self-harm. Yet these cases provide little insight into prevalence. And whilst survey research offers more sense of scale, we found substantial disparities in the levels of harm reported on some issues, with the prevalence of cyber-bullying, for example, varying from 9% to 72% across studies with similar age groups of children. It’s also clear that we quite simply need much more research and policy attention on certain issues. The studies relating to the online grooming of children and production of abuse images are an excellent example of how a broad research base can make an important contribution to our understanding of online risks and harms. Here, journal articles offered a remarkably rich understanding, drawing on data from police reports, court records or clinical files as well as surveys and interviews with victims, perpetrators and carers. There would be real benefits to taking a similarly thorough approach to the study of users of pro-eating disorder, self-harm and pro-suicide websites.

Our review flagged up some important lessons for policy-makers. First, whilst we (justifiably) devote a wealth of resources to the small proportion of children experiencing severe harms as a result of online experiences, the number of those experiencing more minor harms, such as those caused by online bullying, is likely much higher and may thus deserve more attention than it currently receives. Second, the diversity of topics discussed and types of harm identified suggests that a one-size-fits-all solution will not work when it comes to the online protection of minors. Simply banning or filtering all potentially harmful websites, pages or groups might be more damaging than useful if it drives users to less public means of communicating. Further, whilst some content, such as child sexual abuse images, is clearly illegal and generates great harms, other content and sites are less easy to condemn when the balance between perpetuating harmful behaviour and providing valued peer support is hard to call. It should also be remembered that the need to protect young people from online harms must always be balanced against the need to protect their rights (and opportunities) to express themselves freely and seek information online.

Finally, this study makes an important contribution to public debates about child online safety by reminding us that risk and harm are not equivalent and should not be conflated. More children and young people are exposed to online risks than are actually harmed as a result, and our policy responses should reflect this.

A more detailed account of our findings can be found in this Information, Communication and Society journal article: Evidence on the extent of harms experienced by children as a result of online risks: implications for policy and research. If you can’t access this, please e-mail me for a copy.


Victoria Nash is a Policy and Research Fellow at the Oxford Internet Institute (OII), responsible for connecting OII research with policy and practice. Her own particular research interests draw on her background as a political theorist, and concern the theoretical and practical application of fundamental liberal values in the Internet era. Recent projects have included efforts to map the legal and regulatory trends shaping freedom of expression online for UNESCO, analysis of age verification as a tool to protect and empower children online, and the role of information and Internet access in the development of moral autonomy.

Responsible research agendas for public policy in the era of big data
https://ensr.oii.ox.ac.uk/responsible-research-agendas-for-public-policy-in-the-era-of-big-data/ (19 September 2013)

Last week the OII went to Harvard. Against the backdrop of a gathering storm of interest around the potential of computational social science to contribute to the public good, we sought to bring together leading social science academics with senior government agency staff to discuss its public policy potential. Supported by the OII-edited journal Policy and Internet and its owners, the Washington-based Policy Studies Organization (PSO), this one-day workshop facilitated a thought-provoking conversation between leading big data researchers such as David Lazer, Brooke Foucault-Welles and Sandra Gonzalez-Bailon, e-government experts such as Cary Coglianese, Helen Margetts and Jane Fountain, and senior agency staff from US federal bureaus including the Bureau of Labor Statistics, the Census Bureau, and the Office of Management and Budget.

It’s often difficult to appreciate the impact of research beyond the ivory tower, but what this productive workshop demonstrated is that policy-makers and academics share many similar hopes and challenges in relation to the exploitation of ‘big data’. Our motivations and approaches may differ, but insofar as the youth of the ‘big data’ concept explains the lack of common language and understanding, there is value in mutual exploration of the issues. Although it’s impossible to do justice to the richness of the day’s interactions, some of the most pertinent and interesting conversations arose around the following four issues.

Managing a diversity of data sources. In a world where our capacity to ask important questions often exceeds the availability of data to answer them, many participants spoke of the difficulties of managing a diversity of data sources. For agency staff this issue comes into sharp focus when the administrative data available to inform policy formulation is either incomplete or inadequate. Consider, for example, the challenge of regulating an economy in a situation of fundamental data asymmetry, where private sector institutions track, record and analyse every transaction, whilst the state only has access to far more basic performance metrics and accounts. Such asymmetric data practices also affect academic research, where once again private sector tech companies such as Google, Facebook and Twitter often offer access only to portions of their data. In both cases participants gave examples of creative solutions using merged or blended data sources, which raise significant methodological and ethical difficulties that merit further attention. The Berkman Center’s Rob Faris also noted the challenges of combining ‘intentional’ and ‘found’ data, where the former allow far greater certainty about the circumstances of their collection.

Data dictating the questions. If participants expressed the need to expend more effort on getting the most out of available but diverse data sources, several also cautioned against the danger of letting data availability dictate the questions that can be asked. As we’ve experienced at the OII, for example, the availability of Wikipedia or Twitter data means that questions of unequal digital access (to political resources, knowledge production etc.) can often be addressed through the lens of these applications or platforms. But these data can provide only a snapshot, and large questions of great social or political importance may not easily be answered through such proxy measurements. Similarly, big data may be very helpful in providing insights into policy-relevant patterns or correlations, such as identifying early indicators of seasonal diseases or neighbourhood decline, but seem ill-suited to answering difficult questions regarding, say, the efficacy of small-scale family interventions. Just because the latter are harder to answer using currently vogue-ish tools doesn’t mean we should cease to ask these questions.

Ethics. Concerns about privacy are frequently raised as a significant limitation of the usefulness of big data. Given that with two or more data sets even supposedly anonymous data subjects may be identified, the general consensus seems to be that ‘privacy is dead’. Whilst all participants recognised the importance of public debate around this issue, several academics and policy-makers expressed a desire to get beyond this discussion to a more nuanced consideration of appropriate ethical standards. Accountability and transparency are often held up as more realistic means of protecting citizens’ interests, but one workshop participant also suggested it would be helpful to encourage more public debate about acceptable and unacceptable uses of our data, to determine whether some uses might simply be deemed ‘off-limits’, whilst other uses could be accepted as offering few risks.
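The re-identification point is worth making concrete. The sketch below, with entirely invented records and field names, shows the kind of ‘linkage attack’ that defeats naive anonymisation: two datasets that are each anonymous on their own identify individuals once joined on shared quasi-identifiers.

```python
# Minimal illustration of a linkage attack: records released without names
# are matched to a public list on shared quasi-identifiers.
# Every record below is invented.

health_records = [  # released without names
    {"postcode": "OX1 1AA", "birth_year": 1978, "gender": "F", "diagnosis": "asthma"},
    {"postcode": "OX2 6NN", "birth_year": 1985, "gender": "M", "diagnosis": "diabetes"},
]

voter_roll = [  # publicly available, includes names
    {"name": "Jane Doe", "postcode": "OX1 1AA", "birth_year": 1978, "gender": "F"},
    {"name": "John Roe", "postcode": "OX2 6NN", "birth_year": 1985, "gender": "M"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "gender")

def link(released, public):
    """Yield (name, diagnosis) pairs for records agreeing on every quasi-identifier."""
    for r in released:
        for p in public:
            if all(r[k] == p[k] for k in QUASI_IDENTIFIERS):
                yield p["name"], r["diagnosis"]

for name, diagnosis in link(health_records, voter_roll):
    print(name, "->", diagnosis)  # the 'anonymous' health records are re-identified
```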

Accountability. Following on from this debate about the ethical limits of our uses of big data, discussion exposed the starkly differing standards to which government and academics (to say nothing of industry) are held accountable. As agency officials noted on several occasions, it matters less what they actually do with citizens’ data than what they are perceived to do with it, or even what it’s feared they might do. One of the greatest hurdles to be overcome here concerns the fundamental complexity of big data research, and the sheer difficulty of communicating to the public how it informs policy decisions. Quite apart from the opacity of the algorithms underlying big data analysis, the explicit focus on correlation rather than causation or explanation presents a new challenge for the justification of policy decisions, and consequently, for public acceptance of their legitimacy. As Greg Elin of Gitmachines emphasised, policy decisions are still the result of explicitly normative political discussion, but the justifiability of such decisions may be rendered more difficult given the nature of the evidence employed.

We could not resolve all these issues over the course of the day, but they served as pivot points for honest and productive discussion amongst the group. If nothing else, they demonstrate the value of interaction between academics and policy-makers in a research field where the stakes are set very high. We plan to reconvene in Washington in the spring.

*We are very grateful to the Policy Studies Organization (PSO) and the American Public University for their generous support of this workshop. The workshop “Responsible Research Agendas for Public Policy in the Era of Big Data” was held at the Harvard Faculty Club on 13 September 2013.

Also read: Big Data and Public Policy Workshop by Eric Meyer, workshop attendee and PI of the OII project Accessing and Using Big Data to Advance Social Science Knowledge.


Victoria Nash received her M.Phil in Politics from Magdalen College in 1996, after completing a First Class BA (Hons) Degree in Politics, Philosophy and Economics, before going on to complete a D.Phil in Politics at Nuffield College, Oxford University in 1999. She was a Research Fellow at the Institute for Public Policy Research prior to joining the OII in 2002. As Research and Policy Fellow at the OII, her work seeks to connect OII research with policy and practice, identifying and communicating the broader implications of OII’s research into Internet and technology use.

Personal data protection vs the digital economy? OII policy forum considers our digital footprints
https://ensr.oii.ox.ac.uk/personal-data-protection-vs-the-digital-economy-forthcoming-oii-policy-forum/ (3 February 2011)

Catching a bus, picking up some groceries, calling home to check on the children – all simple, seemingly private activities that characterise many people’s end to the working day. Yet each of these activities leaves a data trail that enables companies, even the state, to track the most mundane aspects of our lives. Add to this the range and quantity of personal data that many of us willingly post online on our blogs, Facebook walls or Google docs, and it is clear that the trail of digital footprints we leave is long and hard to erase.

Even if, in most cases, this data is only likely to be used in an anonymised and aggregated form to identify trends in transport or shopping patterns, or to personalise the Internet services available to us, the fact that its collection is now so routine and so extensive should make us question whether the regulatory system governing data collection, storage and use is fit for purpose. A forthcoming OII policy forum on Tracing the Policy Implications of the Future Digital Economy (16 Feb) will consider this question, bringing together leading academics from across several disciplines with policy-makers and industry experts.

This is a topic which the OII is well-placed to address. Ian Brown’s Privacy Values Network project addresses a major knowledge gap, measuring the various costs and benefits to individuals of handing over data in different contexts, as without this we simply don’t know how much people value their privacy (or indeed understand its limits). The last Oxford Internet Survey (OxIS) rather surprisingly showed that in 2009 people were significantly less concerned about privacy online in the UK than in previous years (45% of all those surveyed in 2009 against 66% in 2007); we wait to see whether this finding is repeated when OxIS 2011 goes into the field next month.

Our faculty also have much to say about the adequacy (or otherwise) of the regulatory framework: a recent report by Ian Brown and Douwe Korff on New Challenges to Data Protection identified for the European Commission the scale of challenges presented to the current data protection regime, whilst Viktor Mayer-Schönberger’s book Delete: The Virtue of Forgetting in the Digital Age has rightly raised the suggestion that personal information online should have an expiration date, to ensure it doesn’t hang around for years to embarrass us at a later date.

The forum will consider the way in which the market for information storage and collection is rapidly changing with the advent of new technologies, and on this point, one conclusion is clear: if we accept Helen Nissenbaum’s contention that personal information and data should be collected and protected according to the social norms governing different social contexts, then we need to get to grips pretty fast with the way in which these technologies are playing out in the way we work, play, learn and consume.
