(Part 2 of 2) The Internet serves not only as a breeding ground for extremism, but also offers myriad data streams which potentially hold great value to law enforcement. The report Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material, written by the OII's Ian Brown and Josh Cowls for the VOX-Pol project, explores the complexities of policing the web for extremist material, and its implications for security, privacy and human rights. In the second of a two-part post, Josh Cowls and Ian Brown discuss the report with blog editor Bertie Vidgen. Read the first post.
Ed: Josh, political science has long posed a distinction between public spaces and private ones. Yet it seems like many platforms on the Internet, such as Facebook, cannot really be categorized in such terms. If this is correct, what does it mean for how we should police and govern the Internet?
Josh: I think that is right – many online spaces are neither public nor private. This is also an issue for some privacy legal frameworks (especially in the US). A lot of the covenants and agreements were written forty or fifty years ago, long before anyone had really thought about the Internet. That has now forced governments, societies and parliaments to adapt these existing rights and protocols for the online sphere. I think that we have some fairly clear laws about the use of human intelligence sources, and police law in the offline sphere. The interesting question is how we can take that online. How can the pre-existing standards, like the requirement that procedures are necessary and proportionate, or the 'right to appeal', be incorporated into online spaces? In some cases there are direct analogies. In other cases there needs to be some re-writing of the rule book to try to figure out what we mean. And, of course, it is difficult because the Internet itself is always changing!
Ed: So do you think that concepts like proportionality and justification need to be updated for online spaces?
Josh: I think that at a very basic level they are still useful. People know what we mean when we talk about something being necessary and proportionate, and about the importance of having oversight. I think we also have a good idea about what it means to be non-discriminatory when applying the law, though this is one of those areas that can quickly get quite tricky. Consider the use of online data sources to identify people. On the one hand, the Internet is ‘blind’ in that it does not automatically codify social demographics. In this sense it is not possible to profile people in the same way that we can offline. On the other hand, it is in some ways the complete opposite. It is very easy to directly, and often invisibly, create really firm systems of discrimination – and, most problematically, to do so opaquely.
This is particularly challenging when we are dealing with extremism because, as we pointed out in the report, extremists are generally pretty unremarkable in terms of demographics. It perhaps used to be true that extremists were more likely to be poor or to have had challenging upbringings, but many of the people going to fight for the Islamic State are middle class. So we have fewer demographic pointers to latch onto when trying to find these people. Of course, insofar as there are identifiers, they won't be released by the government. The real problem for society is that there isn't very much openness and transparency about these processes.
Ed: Governments are increasingly working with the private sector to gain access to different types of information about the public. For example, in Australia a Telecommunications bill was recently passed which requires all telecommunication companies to keep the metadata – though not the content data – of communications for two years. A lot of people opposed the Bill because metadata is still very informative, and as such there are some clear concerns about privacy. Similar concerns have been expressed in the UK about an Investigatory Powers Bill that would require new Internet Connection Records about customers' online activities. How much do you think private corporations should protect people's data? And how much should concepts like proportionality apply to them?
Ian: To me the distinction between metadata and content data is fairly meaningless. For example, often just knowing when and who someone called and for how long can tell you everything you need to know! You don’t have to see the content of the call. There are a lot of examples like this which highlight the slightly ludicrous nature of distinguishing between metadata and content data. It is all data. As has been said by former US CIA and NSA Director Gen. Michael Hayden, “we kill people based on metadata.”
One issue that we identified in the report is the increased onus on companies to monitor online spaces, and all of the legal entanglements that come from this given that companies might not be based in the same country as the users. One of our interviewees called this new international situation a ‘very different ballgame’. Working out how to deal with problematic online content is incredibly difficult, and some huge issues of freedom of speech are bound up in this. On the one hand, there is a government-led approach where we use the law to take down content. On the other hand is a broader approach, whereby social networks voluntarily take down objectionable content even if it is permissible under the law. This causes much more serious problems for human rights and the rule of law.
Read the full report: Brown, I., and Cowls, J., (2015) Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material. VOX-Pol Publications.
Ian Brown is Professor of Information Security and Privacy at the OII. His research is focused on surveillance, privacy-enhancing technologies, and Internet regulation.
Josh Cowls is a student and researcher based at MIT, working to understand the impact of technology on politics, communication and the media.
Josh and Ian were talking to Blog Editor Bertie Vidgen.