
Exploring the Ethics of Monitoring Online Extremism

By Bertram Vidgen - 23 March 2016
    Surveillance in NYC’s financial district. Photo by Jonathan McIntosh (flickr).

    (Part 2 of 2) The Internet serves not only as a breeding ground for extremism, but also offers myriad data streams which potentially hold great value to law enforcement. The report by the OII’s Ian Brown and Josh Cowls for the VOX-Pol project, Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material, explores the complexities of policing the web for extremist material, and its implications for security, privacy and human rights. In the second of a two-part post, Josh Cowls and Ian Brown discuss the report with blog editor Bertie Vidgen. Read the first post.


    Ed: Josh, political science has long drawn a distinction between public spaces and private ones. Yet it seems like many platforms on the Internet, such as Facebook, cannot really be categorized in such terms. If this is correct, what does it mean for how we should police and govern the Internet?

    Josh: I think that is right – many online spaces are neither public nor private. This is also an issue for some privacy legal frameworks (especially in the US). A lot of the covenants and agreements were written forty or fifty years ago, long before anyone had really thought about the Internet. That has now forced governments, societies and parliaments to adapt these existing rights and protocols for the online sphere. I think that we have some fairly clear laws about the use of human intelligence sources, and police law in the offline sphere. The interesting question is how we can take that online. How can the pre-existing standards, like the requirement that procedures are necessary and proportionate, or the ‘right to appeal’, be incorporated into online spaces? In some cases there are direct analogies. In other cases there needs to be some re-writing of the rule book to try to figure out what we mean. And, of course, it is difficult because the Internet itself is always changing!

    Ed: So do you think that concepts like proportionality and justification need to be updated for online spaces?

    Josh: I think that at a very basic level they are still useful. People know what we mean when we talk about something being necessary and proportionate, and about the importance of having oversight. I think we also have a good idea about what it means to be non-discriminatory when applying the law, though this is one of those areas that can quickly get quite tricky. Consider the use of online data sources to identify people. On the one hand, the Internet is ‘blind’ in that it does not automatically codify social demographics. In this sense it is not possible to profile people in the same way that we can offline. On the other hand, it is in some ways the complete opposite. It is very easy to directly, and often invisibly, create really firm systems of discrimination – and, most problematically, to do so opaquely.

    This is particularly challenging when we are dealing with extremism because, as we pointed out in the report, extremists are generally pretty unremarkable in terms of demographics. It perhaps used to be true that extremists were more likely to be poor or to have had challenging upbringings, but many of the people going to fight for the Islamic State are middle class. So we have fewer demographic pointers to latch onto when trying to find these people. Of course, insofar as there are identifiers they won’t be released by the government. The real problem for society is that there isn’t very much openness and transparency about these processes.

    Ed: Governments are increasingly working with the private sector to gain access to different types of information about the public. For example, in Australia a Telecommunications bill was recently passed which requires all telecommunication companies to keep the metadata – though not the content data – of communications for two years. A lot of people opposed the Bill because metadata is still very informative, and as such there are some clear concerns about privacy. Similar concerns have been expressed in the UK about an Investigatory Powers Bill that would require new Internet Connection Records about customers’ online activities. How much do you think private corporations should protect people’s data? And how much should concepts like proportionality apply to them?

    Ian: To me the distinction between metadata and content data is fairly meaningless. For example, often just knowing who someone called, when, and for how long can tell you everything you need to know! You don’t have to see the content of the call. There are a lot of examples like this which highlight the slightly ludicrous nature of distinguishing between metadata and content data. It is all data. As former US CIA and NSA Director Gen. Michael Hayden has said, “we kill people based on metadata.”
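
    Ian’s point can be made concrete with a toy example. The sketch below (in Python) is purely illustrative: the subscriber, numbers and call records are all invented, and nothing in it is drawn from the report. It simply shows how the pattern of who is contacted, when, and for how long supports sensitive inferences without any access to call content.

```python
# A toy illustration of how revealing call metadata can be on its own.
# All names, numbers and records below are invented for this example.
from collections import Counter
from datetime import datetime

call_records = [
    # (caller, callee, call start, duration in seconds) -- no call content at all
    ("subscriber-A", "night-helpline",  datetime(2016, 3, 1, 2, 14), 1800),
    ("subscriber-A", "oncology-clinic", datetime(2016, 3, 2, 9, 5),   420),
    ("subscriber-A", "oncology-clinic", datetime(2016, 3, 9, 9, 2),   600),
    ("subscriber-A", "family-home",     datetime(2016, 3, 9, 9, 20),  300),
]

# Who was contacted, how often, and at what hours: enough to suggest a
# probable health concern and who was told about it, without reading a word.
contacts = Counter(callee for _, callee, _, _ in call_records)
late_night = [(callee, start) for _, callee, start, _ in call_records if start.hour < 6]

print("Most-contacted numbers:", contacts.most_common())
print("Late-night calls:", [(c, t.isoformat()) for c, t in late_night])
```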

    One issue that we identified in the report is the increased onus on companies to monitor online spaces, and all of the legal entanglements that come from this given that companies might not be based in the same country as the users. One of our interviewees called this new international situation a ‘very different ballgame’. Working out how to deal with problematic online content is incredibly difficult, and some huge issues of freedom of speech are bound up in this. On the one hand, there is a government-led approach where we use the law to take down content. On the other hand is a broader approach, whereby social networks voluntarily take down objectionable content even if it is permissible under the law. This causes much more serious problems for human rights and the rule of law.

    Read the full report: Brown, I., and Cowls, J., (2015) Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material. VOX-Pol Publications.


    Ian Brown is Professor of Information Security and Privacy at the OII. His research is focused on surveillance, privacy-enhancing technologies, and Internet regulation.

    Josh Cowls is a student and researcher based at MIT, working to understand the impact of technology on politics, communication and the media.

    Josh and Ian were talking to Blog Editor Bertie Vidgen.
