18 February 2016

Assessing the Ethics and Politics of Policing the Internet for Extremist Material

Filed under: Civic Engagement, Democracy, Ethics, Governance & Security, Politics & Government, Security

By Bertram Vidgen and Josh Cowls

The Internet serves not only as a breeding ground for extremism, but also offers myriad data streams which potentially hold great value to law enforcement. The report "Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material", written by the OII's Ian Brown and Josh Cowls for the VOX-Pol project, explores the complexities of policing the web for extremist material, and its implications for security, privacy, and human rights. Josh Cowls discusses the report with blog editor Bertie Vidgen.*

*Please note that the views given here do not necessarily reflect the content of the report, or the views of the lead author, Ian Brown.

In terms of counter-speech there are different roles for government, civil society, and industry. Image by Miguel Discart (Flickr).


Ed: Josh, could you let us know the purpose of the report, outline some of the key findings, and tell us how you went about researching the topic?

Josh: Sure. In the report we take a step back from the ground-level question of ‘what are the police doing?’ and instead ask, ‘what are the ethical and political boundaries, rationale and justifications for policing the web for these kinds of activity?’ We used an international human rights framework as an ethical and legal basis to understand what is being done. We also tried to further the debate by clarifying a few things: what has already been done by law enforcement, and, really crucially, what the perspectives are of all those involved, including lawmakers, law enforcers, technology companies, academia and many others.

We derived the insights in the report from a series of workshops, one of which was held as part of the EU-funded VOX-Pol network. The workshops involved participants who were quite high up in law enforcement, the intelligence agencies, the tech industry, civil society, and academia. We followed these up with interviews with other individuals in similar positions and conducted background policy research.

Ed: You highlight that many extremist groups (such as ISIS) are making really significant use of online platforms to organize, radicalize people, and communicate their messages.

Josh: Absolutely. A large part of our initial interest when writing the report lay in finding out more about the role of the Internet in facilitating the organization, coordination, recruitment and inspiration of violent extremism. The impact of this has been felt very recently in Paris and Beirut, and many other places worldwide. This report pre-dates these most recent developments, but was written in the context of these sorts of events.

Given that the Internet is so embedded in our social lives, I think it would have been surprising if political extremist activity hadn't gone online as well. Of course, the Internet is a very powerful tool and in the wrong hands it can be a very destructive force. But other research, separate from this report, has found that the Internet is not usually people's first point of contact with extremism: more often than not that actually happens offline through people you know in the wider world. Nonetheless it can definitely act as an incubator of extremism and serve to inspire further attacks.

Ed: In the report you identify different groups in society that are affected by, and affecting, issues of extremism, privacy, and governance – including civil society, academics, large corporations, and governments.

Josh: Yes, in the later stages of the report we do divide society into these groups, and offer some perspectives on what they do, and what they think about counter-extremism. For example, in terms of counter-speech there are different roles for government, civil society, and industry. There is this idea that ISIS are really good at social media, and that that is how they are powering a lot of their support; but one of the people that we spoke to said that it is not the case that ISIS are really good, it is just that governments are really bad!

We shouldn't ask government to participate directly in these social networks: bureaucracies often struggle to be flexible and nimble players on social media. In contrast, civil society groups tend to be more engaged with communities and know how to "speak the language" of those who might be vulnerable to radicalization. As such they can enter that dialogue in a much more informed and effective way.

The other tension, or paradigm, that we offer in this report is the distinction between whether people are 'at risk' or 'a risk'. What we try to point to is that people can go from one to the other. They start by being 'at risk' of radicalization, but if they do get radicalized and become a violent threat to society, which only happens in a minority of cases, then they become 'a risk'. Engaging with people who are 'at risk' highlights the importance of having respect and dialogue with communities that are often the first to be lambasted when things go wrong, but which seldom get all the help they need, or the credit when they get it right. We argue that civil society is particularly well suited to being part of this process.

Ed: It seems like the things that people do or say online can only really be understood in terms of the context. But often we don’t have enough information, and it can be very hard to just look at something and say ‘This is definitely extremist material that is going to incite someone to commit terrorist or violent acts’.

Josh: Yes, I think you're right. In the report we try to take what is a very complicated concept – extremist material – and divide it into more manageable chunks of meaning. We talk about three hierarchical levels; the degree of legal consensus over whether content should be banned decreases as the content gets less extreme. The first level we identified was straight-up provocation and hate speech. Hate speech legislation has been part of the law for a long time. You can't incite racial hatred, you can't incite people to crimes, and you can't promote terrorism. Most countries in Europe have laws against these things.

The second level is the glorification and justification of terrorism. This is usually more post-hoc, as by definition if you are glorifying something it has already happened. You may well be inspiring future actions, but that relationship between the act of violence and the speech act is different from the one in provocation. Nevertheless, some countries, such as Spain and France, have pushed hard on criminalising this. The third level is non-violent extremist material. This is the most contentious level, as there is very little consensus about what types of material should be called 'extremist' even though they are non-violent. One of the interviewees we spoke to said that often it is hard to distinguish between someone who is just being friendly and someone who is really trying to persuade or groom someone to go to Syria. It is really hard to put this into a legal framework with the level of clarity that the law demands.

There is a proportionality question here. When should something be considered specifically illegal? And, then, if an illegal act has been committed what should the appropriate response be? This is bound to be very different in different situations.

Ed: Do you think that there are any immediate or practical steps that governments can take to improve the current situation? And do you think that there are any ethical concerns which are not being paid sufficient attention?

Josh: In the report we raised a few concerns about existing government responses. There are lots of things besides privacy that could be seen as fundamental human rights and that are being encroached upon. Freedom of association and assembly is a really interesting one. We might not have the same reverence for a Facebook event plan or discussion group as we would for a protest in a town hall, but of course they are fundamentally pretty similar.

The wider danger here is the issue of mission creep. Once you have systems in place that can do potentially very powerful analytical investigatory things then there is a risk that we could just keep extending them. If something can help us fight terrorism then should we use it to fight drug trafficking and violent crime more generally? It feels to me like there is a technical-military-industrial complex mentality in government where if you build the systems then you just want to use them. In the same way that CCTV cameras record you irrespective of whether or not you commit a violent crime or shoplift, we need to ask whether the same panoptical systems of surveillance should be extended to the Internet. Now, to a large extent they are already there. But what should we train the torchlight on next?

This takes us back to the importance of having necessary, proportionate, and independently authorized processes. When you drill down into how rights like privacy should be balanced with security, it gets really complicated. But the basic process-driven things that we identified in the report are far simpler: if we accept that governments have the right to take certain actions in the name of security, then, no matter how important or life-saving those actions are, there are still protocols that governments must follow. We really wanted to infuse these issues into the debate through the report.

Read the full report: Brown, I. and Cowls, J. (2015) Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material. VOX-Pol Publications.


Josh Cowls is a student and researcher based at MIT, working to understand the impact of technology on politics, communication, and the media.

Josh Cowls was talking to Blog Editor Bertie Vidgen.

Note: This article gives the views of the authors, and not the position of the Policy and Internet Blog, nor of the Oxford Internet Institute.

