surveillance – The Policy and Internet Blog
https://ensr.oii.ox.ac.uk – Understanding public policy online

Iris scanners can now identify us from 40 feet away
https://ensr.oii.ox.ac.uk/iris-scanners-can-now-identify-us-from-40-feet-away/ (21 May 2015)

Public anxiety and legal protections currently pose a major challenge to anyone wanting to introduce eye-scanning security technologies. Reposted from The Conversation.

 

Biometric technologies are on the rise. By electronically recording data about individuals' physical attributes such as fingerprints or iris patterns, security and law enforcement services can quickly identify people with a high degree of accuracy.

The latest development in this field is the scanning of irises from a distance of up to 40 feet (12 metres) away. Researchers from Carnegie Mellon University in the US demonstrated they were able to use their iris recognition technology to identify drivers from an image of their eye captured from their vehicle’s side mirror.

The developers of this technology envisage that, as well as improving security, it will be more convenient for the individuals being identified. By using measurements of physiological characteristics, people no longer need security tokens or cumbersome passwords to identify themselves.
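To make the identification step concrete: most iris-recognition systems reduce an eye image to a binary "iris code" and compare codes by normalised Hamming distance against a decision threshold. The toy Python sketch below illustrates only that comparison step with randomly generated stand-in codes; it is not the Carnegie Mellon system, and the code length and threshold are typical textbook values rather than parameters from the research described here.

```python
# A toy illustration of the matching step most iris-recognition pipelines share:
# two iris images are reduced to binary "iris codes" and compared by normalised
# Hamming distance against a decision threshold. The codes here are random
# stand-ins; real codes come from segmenting the iris and quantising Gabor-filter
# phase, and the constants are typical textbook values, not CMU's parameters.
import random

CODE_LENGTH = 2048        # iris codes are typically a few thousand bits
MATCH_THRESHOLD = 0.32    # commonly cited threshold for declaring a match

def hamming_distance(code_a, code_b):
    """Fraction of bits that differ between two equal-length bit lists."""
    differing = sum(a != b for a, b in zip(code_a, code_b))
    return differing / len(code_a)

enrolled = [random.randint(0, 1) for _ in range(CODE_LENGTH)]
# Simulate a fresh capture of the same eye: mostly identical bits plus some noise.
probe = [bit if random.random() > 0.05 else 1 - bit for bit in enrolled]

distance = hamming_distance(enrolled, probe)
print(f"distance = {distance:.3f}:",
      "match" if distance < MATCH_THRESHOLD else "no match")
```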

However, introducing such technology will come with serious challenges. There are both legal issues and public anxiety around having such sensitive data captured, stored, and accessed.

Social resistance

We have researched this area by presenting people with potential future scenarios that involved biometrics. We found that, despite the convenience of long-range identification (no queuing in front of scanners), there is a considerable reluctance to accept this technology.

On a basic level, people prefer a physical interaction when their biometrics are being read. “I feel negatively about a remote iris scan because I want there to be some kind of interaction between me and this system that’s going to be monitoring me,” said one participant in our research.

But another serious concern was that of “function creep”, whereby people slowly become accustomed to security and surveillance technologies because they are introduced gradually. This means the public may eventually be faced with much greater use of these systems than they would initially agree to.

Crowd control. Image: Shutterstock.

For example, implementing biometric identification in smart phones and other everyday objects such as computers or cars could make people see the technology as useful and easy to operate, which may increase their willingness to adopt such systems. "I could imagine this becoming normalised to a point where you don't really worry about it," said one research participant.

Such familiarity could lead to the introduction of more invasive long-distance recognition systems. This could ultimately produce far more widespread commercial and governmental usage of biometric identification than the average citizen might be comfortable with. As one participant put it: “[A remote scan] could be done every time we walk into a big shopping centre, they could just identify people all over the place and you’re not aware of it.”

Legal barriers

The implementation of biometric systems is not just dependent on user acceptance or resistance. Before iris-scanning technology could be introduced in the EU, major data protection and privacy concerns would have to be addressed.

The EU has a robust legal framework on privacy and data protection. These are recognised as fundamental rights, so the laws that protect them rank among the highest. Biometric data, such as iris scans, are often treated as special due to the sensitivity of the information they can contain. Our respondents also acknowledged this: "I think it's a little too invasive and to me it sounds a bit creepy. Who knows what they can find out by scanning my irises?"

Before iris technology could be deployed, certain legal steps would need to be taken. Under EU law and the European Convention on Human Rights, authorities would need to demonstrate it was a necessary and proportionate solution to a legitimate, specific problem. They would also need to prove iris recognition was the least intrusive way to achieve that goal. And a proportionality test would have to take into account the risks the technology brings along with the benefits.

The very fact that long-range iris scanners can capture data without the cooperation of the subject also creates legal issues. EU law requires individuals to be informed when such information is being collected, by whom and for what purposes, and of their rights regarding the data.

Another issue is how the data is kept secure, particularly in the case of iris-scanning by objects such as smart phones. Scans stored on the device and/or on the cloud for purposes of future authentication would legally require robust security protection. Data stored on the cloud tends to move around between different servers and countries, which makes preventing unauthorised access more difficult.
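As a purely illustrative sketch of one baseline safeguard, and not a description of how any particular phone or vendor actually protects templates, the Python snippet below encrypts a stored template before it leaves the device. It assumes the third-party cryptography package; real deployments hinge on key management, which is only gestured at here.

```python
# Purely illustrative: encrypting a biometric template at rest before it is
# synced to a cloud store. Assumes the third-party "cryptography" package;
# key management (ideally hardware-backed, never stored alongside the data)
# is the hard part and is deliberately glossed over here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()              # in practice: held in a secure enclave/keystore
cipher = Fernet(key)

iris_template = b"placeholder bytes standing in for an enrolled template"
stored_blob = cipher.encrypt(iris_template)   # this is what would be uploaded

# Later, on a device that holds the key:
assert cipher.decrypt(stored_blob) == iris_template
```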

The other issue with iris scanning is that, while the technology can be highly accurate, it is not infallible. At its current level of development, it can still be fooled. And processing data accurately is another principle of EU data protection law.

Even if we do find ourselves subject to unwanted iris-scanning from 40 feet, safeguards for individuals should always be in place to ensure that they do not bear the burden of technological imperfections.

Monitoring Internet openness and rights: report from the Citizen Lab Summer Institute 2014
https://ensr.oii.ox.ac.uk/monitoring-internet-openness-and-rights-report-from-citizen-lab-summer-institute/ (12 August 2014)

Jon Penney presenting on the US experience of Internet-related corporate transparency reporting.

根据相关法律法规和政策,部分搜索结果未予显示 (or, more likely, translations of it) is a warning message we may see displayed more often on the Internet. In Chinese, it means "according to the relevant laws, regulations, and policies, a portion of search results have not been displayed." The control of information flows on the Internet is becoming more commonplace, in authoritarian regimes as well as in liberal democracies, whether by technical or regulatory means. Such information controls can be defined as "[…] actions conducted in or through information and communications technologies (ICTs), which seek to deny (such as web filtering), disrupt (such as denial-of-service attacks), shape (such as throttling), secure (such as through encryption or circumvention) or monitor (such as passive or targeted surveillance) information for political ends. Information controls can also be non-technical and can be implemented through legal and regulatory frameworks, including informal pressures placed on private companies. […]" Information controls are not intrinsically good or bad, but much remains to be explored and analysed about how they are used for political or commercial purposes.
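To give a flavour of what "denying" information looks like from a measurement perspective, the hedged Python sketch below compares the DNS answers returned by the local resolver with those from an independent resolver. It is not Citizen Lab tooling; the domain list and resolver address are placeholders, and it assumes the third-party dnspython package. Real studies (such as OONI or Citizen Lab tests) add control vantage points and block-page fingerprinting, since divergent answers can also come from ordinary CDN geo-routing.

```python
# A minimal sketch of one kind of measurement discussed at the institute:
# flagging possible DNS-based filtering by comparing the answers from the
# system's default resolver with those from an independent resolver.
# Divergence alone proves nothing (CDNs legitimately return different IPs);
# real studies add control vantage points and block-page fingerprinting.
import socket
import dns.resolver  # third-party: pip install dnspython

TEST_DOMAINS = ["example.org", "wikipedia.org"]   # hypothetical test list
INDEPENDENT_RESOLVER = "8.8.8.8"                  # assumed control resolver

def local_answers(domain):
    """IP addresses returned by the system's default resolver."""
    try:
        return {info[4][0] for info in socket.getaddrinfo(domain, 80)}
    except socket.gaierror:
        return set()

def independent_answers(domain):
    """IP addresses returned by an explicitly chosen resolver."""
    resolver = dns.resolver.Resolver()
    resolver.nameservers = [INDEPENDENT_RESOLVER]
    try:
        return {rr.address for rr in resolver.resolve(domain, "A")}
    except Exception:
        return set()

for domain in TEST_DOMAINS:
    local, independent = local_answers(domain), independent_answers(domain)
    if local and independent and not (local & independent):
        print(f"{domain}: answers diverge (possible DNS interference)")
    else:
        print(f"{domain}: no divergence detected")
```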

The University of Toronto's Citizen Lab organised a one-week summer institute titled "Monitoring Internet Openness and Rights" to inform the global discussions on information control research and practice in the fields of censorship, circumvention, surveillance and adherence to human rights. A week full of presentations and workshops on the intersection of technical tools, social science research, ethical and legal reflections, and policy implications was attended by a distinguished group of about 60 community members, amongst whom were two OII DPhil students: Jon Penney and Ben Zevenbergen. Conducting Internet measurements may still be terra incognita in terms of methodology and data collection, but their relevance and impact for Internet policy-making, geopolitics and network management are obvious and undisputed.

The Citizen Lab prides itself on being a "hacker hothouse", or an "intelligence agency for civil society", where security expertise, politics, and ethics intersect. Their research adds the much-needed geopolitical angle to the deeply technical and quantitative Internet measurements they conduct on information networks worldwide. While the Internet is fast becoming the backbone of our modern societies in many positive and welcome ways, abundant (intentional) security vulnerabilities, the ease with which human rights such as privacy and freedom of speech can be violated, threats to the neutrality of the network, and the extent of mass surveillance all threaten to compromise the potential of our global information sphere. Threats to a free and open Internet need to be uncovered and explained to policymakers in order to encourage informed, evidence-based policy decisions, especially at a time when the underlying technology is not well understood by decision makers.

Participants at the summer institute came with the intent to make sense of Internet measurements and information controls, as well as their social, political and ethical impacts. Through discussions in larger and smaller groups throughout the Munk School of Global Affairs – as well as in restaurants and bars around Toronto – the current state of information controls, their regulation and deployment became clear, and multi-disciplinary projects to measure breaches of human rights on the Internet or of its fundamental principles were devised and coordinated.

The outcomes of the week in Toronto are impressive. The OII DPhil students presented their recent work on transparency reporting and ethical data collection in Internet measurement.

Jon Penney gave a talk on "the United States experience" with Internet-related corporate transparency reporting, that is, the evolution of existing American corporate practices in publishing "transparency reports" about the nature and quantity of government and law enforcement requests for Internet user data or content removal. Jon first began working on transparency issues as a Google Policy Fellow with the Citizen Lab in 2011, and his work has continued during his time at Harvard's Berkman Center for Internet and Society. In this talk, Jon argued that corporate transparency reporting in the U.S. largely began with the leadership of Google and a few other Silicon Valley tech companies like Twitter, but that in the post-Snowden era it has been adopted by a wider cross-section of companies: not only technology firms, but also established telecommunications companies like Verizon and AT&T that were previously resistant to greater transparency in this space (perhaps due to closer, longer-term relationships with federal agencies than those of Silicon Valley companies). Jon also canvassed the evolving legal and regulatory challenges facing U.S. transparency reporting, and the means by which companies may provide some measure of transparency (via tools like warrant canaries) in the face of increasingly complex national security laws.

Ben Zevenbergen has recently launched ethical guidelines for the protection of privacy in Internet measurements conducted via mobile phones. The first panel of the week, on "Network Measurement and Information Controls", called explicitly for more concrete ethical and legal guidelines for Internet measurement projects, because the extent of data collection necessarily means that much personal data is collected and analysed. In the second panel, on "Mobile Security and Privacy", Ben explained how his guidelines form a privacy impact assessment for a privacy-by-design approach to mobile network measurements. The iterative process of designing a research project in close cooperation with colleagues, possibly from different disciplines, ensures that privacy is taken into account at all stages of the project's development. His talk led to two connected and well-attended sessions during the week to discuss the ethics of information controls research and Internet measurements. A mailing list has been set up for engineers, programmers, activists, lawyers and ethicists to discuss the ethical and legal aspects of Internet measurements, and data collection has begun on a taxonomy of ethical issues in the discipline to inform forthcoming peer-reviewed papers.
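As a small illustration of the privacy-by-design idea, and not an excerpt from the guidelines themselves, the Python sketch below shows one data-minimisation step a mobile measurement client could apply before any record leaves the phone: keyed hashing of the device identifier and coarsening of location. The field names, salt handling and precision choices are assumptions made purely for illustration.

```python
# An illustration of one data-minimisation step a privacy-by-design measurement
# client might apply before a record leaves the phone: replace the device
# identifier with a keyed hash and coarsen the location. Field names, the salt
# handling and the precision choices are assumptions made for illustration.
import hashlib
import hmac

STUDY_SALT = b"per-study secret held only by the research team"  # assumption

def minimise(record):
    """Pseudonymise the device ID and reduce location precision."""
    pseudonym = hmac.new(STUDY_SALT, record["device_id"].encode(), hashlib.sha256)
    return {
        "device": pseudonym.hexdigest()[:16],    # not linkable outside this study
        "lat": round(record["lat"], 2),          # roughly 1 km precision
        "lon": round(record["lon"], 2),
        "latency_ms": record["latency_ms"],      # the measurement itself
    }

print(minimise({"device_id": "imei-123456789012345",
                "lat": 43.6629, "lon": -79.3957, "latency_ms": 82}))
```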

The Citizen Lab will host its final summer institute of the series in 2015.

Ben Zevenbergen discusses ethical guidelines for Internet measurements conducted via mobile phones.

Photo credits: Ben Zevenbergen, Jon Penney. Writing credits: Ben Zevenbergen, with a small contribution from Jon Penney.

Ben Zevenbergen is an OII DPhil student and Research Assistant working on the EU Internet Science project. He has worked on legal, political and policy aspects of the information society for several years. Most recently he was a policy advisor to an MEP in the European Parliament, working on Europe’s Digital Agenda.

Jon Penney is a legal academic, doctoral student at the Oxford Internet Institute, and a Research Fellow / Affiliate of both the Citizen Lab, an interdisciplinary research lab specializing in digital media, cyber-security, and human rights at the University of Toronto's Munk School of Global Affairs, and the Berkman Center for Internet & Society at Harvard University.

How are internal monitoring systems being used to tackle corruption in the Chinese public administration?
https://ensr.oii.ox.ac.uk/how-are-internal-monitoring-systems-being-used-to-tackle-corruption-in-the-chinese-public-administration/ (26 July 2013)

China has made concerted efforts to reduce corruption at the lowest levels of government. Image of the 18th National Congress of the CPC in the Great Hall of the People, Beijing, by Bert van Dijk.

Ed: Investment by the Chinese government in internal monitoring systems has been substantial: what components make it up?

Jesper: Two different information systems are currently in use. Within the government there is one system directed towards administrative case-processing. In addition to this, the Communist Party has its own monitoring system, which is less sophisticated in terms of real-time surveillance, but which has a deeper structure, as it collects and cross-references personal information about party-members working in the administration. These two systems parallel the existing institutional arrangements found in the dual structure consisting of the Discipline Inspection Commissions and the Bureaus of Supervision on different levels of government. As such, the e-monitoring system has particular ‘Chinese characteristics’, reflecting the bureaucracy’s Leninist heritage where Party-affairs and government-affairs are handled separately, applying different sets of rules.

On the government’s e-monitoring platform the Bureau of Supervision (the closest we get to an Ombudsman function in the Chinese public administration) can collect data from several other data systems, such as the e-government systems of the individual bureaus involved in case processing; feeds from surveillance cameras in different government organisations; and even geographical data from satellites. The e-monitoring platform does not, however, afford scanning of information outside the government systems. For instance, social media are not part of the administration surveillance infrastructure.

Ed: How centralised is it as a system? Is local or province-level monitoring of public officials linked up to the central government?

Jesper: The architecture of the e-monitoring systems integrates information flows up to the provincial level, but not to the central level. One reason for this may be found by following the money. Funding for these systems mainly comes from local sources, and construction was initially based on municipal-level systems supported by the provincial level. Hence, at the early stages, individual local-level systems were the natural choice. One reason the build-up was not initially envisioned to include the central level could be that the Chinese central government is comparatively small and may be worried about information overload. It could, however, also be an expression of provinces wanting to handle 'internal affairs' themselves rather than having central actors involved; possibly a case of provincial resistance to central monitoring.

Ed: Digital systems allow for the efficient control and recording of vast numbers of transactions (e.g. by timestamping, alerting, etc.). But all systems are subvertible: is there any evidence that this is happening?

Jesper: There are certainly attempts to shirk work or continue corrupt activities despite the monitoring system. For instance, some urban managers who work in the streets (which are hard to cover with video surveillance) have used fake photos to 'prove' that a particular maintenance task had been completed, saving themselves the time and effort of verifying that the problem had in fact been solved. They could do this because the system did not stamp photos with geographical data, so a photo could be claimed to have been taken at any location.
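As a hypothetical illustration (not a description of the actual Chinese platform), the Python sketch below shows the kind of server-side check that would close this particular loophole: rejecting a 'task completed' photo whose metadata carries no GPS block or timestamp. It assumes the third-party Pillow library, and since metadata can itself be forged, such a check is a deterrent rather than proof.

```python
# A hypothetical server-side check that would close the loophole described
# above: reject a "task completed" photo whose EXIF metadata lacks a GPS block
# or a capture timestamp. Assumes the third-party Pillow library; the tag
# numbers are standard EXIF tags.
from PIL import Image

GPS_IFD_TAG = 34853      # standard EXIF tag for the GPS information block
DATETIME_TAG = 306       # standard EXIF tag for the capture timestamp

def photo_is_acceptable(path):
    """Accept the photo only if it carries GPS metadata and a timestamp."""
    exif = Image.open(path).getexif()
    has_gps = bool(exif.get_ifd(GPS_IFD_TAG))     # empty dict when no GPS block
    has_time = exif.get(DATETIME_TAG) is not None
    return has_gps and has_time

# In a real deployment the coordinates would also be matched against the
# assigned street segment before the task is signed off.
print(photo_is_acceptable("maintenance_report.jpg"))
```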

However, administrative processes that take place in an office rather than 'in the wild' are easier to monitor. Administrative approval processes relating to, for example, tax and business licensing, which the government handles in one-stop-shop service centres, tend to be less corrupt after the introduction of the e-monitoring system. To be sure, this does not mean that the administration is clean now; instead, the corruption moves to other places, such as applications for business licenses for larger companies, which are only partly covered by e-monitoring.

Ed: We are used to a degree of audit and oversight of our working behaviour and performance in the West; does this personal monitoring go beyond what might be considered normal (or just) to us?

Jesper: The notion of being video surveilled during office work would probably be met with resistance by employees in Western government agencies. This is, however, a widespread practice in call centres in the West, so in this sense it is not entirely unknown in work settings. Additionally, government one-stop shops in the West are often equipped with closed-circuit television, but this is mostly — as I understand it — used to document violations by clients against public employees rather than the other way round. Another aspect that sets the Chinese administration apart is that the options for recourse (e.g. for a wrongfully accused public employee) only include the authorities already dealing with the case.

Ed: Could these systems also be used to monitor the behaviour of citizens?

Jesper: Indeed, the monitoring system enables access to information from a number of different sources, such as registers of tax payment, social welfare benefits and real-estate holdings, and to some extent it is already used in relation to citizens. For instance the tax register and the real-estate register are cross-referenced. If a real-estate owner has a tax debt then documentation for the real estate cannot be printed until the debt is paid. We must expect further development of these kinds of functions. This e-monitoring ‘architecture of control’ can thus be activated both towards the administration itself as well as outward towards citizens.
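As a deliberately simplified sketch of the cross-referencing rule just described (the register structures and citizen IDs are invented, and this is not the actual system), the Python snippet below blocks the release of property documentation while the tax register shows an outstanding debt.

```python
# A deliberately simplified sketch of the cross-referencing rule described
# above: documentation for a property is released only when the cross-referenced
# tax register shows no outstanding debt. Register structures and citizen IDs
# are invented for illustration; this is not the actual system.
tax_register = {"citizen-001": 0.0, "citizen-002": 1250.0}            # outstanding debt
real_estate_register = {"citizen-001": ["flat 12B"], "citizen-002": ["house 7"]}

def can_print_property_documents(citizen_id):
    """Allow printing only if the owner's cross-referenced tax debt is zero."""
    owns_property = bool(real_estate_register.get(citizen_id))
    debt = tax_register.get(citizen_id, 0.0)
    return owns_property and debt == 0.0

for cid in real_estate_register:
    status = ("documents released" if can_print_property_documents(cid)
              else "blocked until tax debt is paid")
    print(cid, "->", status)
```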

Ed: There is oversight of the actions of government officials by the Bureau of Supervision; but is there any public oversight of, e.g., the government’s decision-making process, particularly of potentially embarrassing decisions? Who watches the watchers?

Jesper: Currently in China there are two digitally mediated mechanisms working simultaneously to reduce corruption. The first is the e-monitoring system described here, which mainly addresses administrative corruption. The second is what we might call a 'fire alarm' mechanism, whereby citizens draw public attention to corruption scandals or local government failures, often through the use of microblogs. E-monitoring addresses corruption in the work process but does not cover government decision-making. The 'fire alarm' in part addresses the latter concern, as citizens can vent their frustrations online. However, even though microblogging has empowered citizens to speak out against corruption and counter-productive policies, this does not reflect institutionalised control but happens on an ad hoc basis. If the Bureau of Supervision and the Discipline Inspection Commission do not wish to act, there is no further backstop. The Internet-based e-monitoring systems, hence, do not alter the institutional setup of the system, and there is no one to 'watch the watchers' except in the occasional cases where the fire alarm mechanism works.

Ed: Is there a danger that public disclosure of power abuses might generate dissatisfaction and mistrust in government, without necessarily solving the issue of corruption itself?

Jesper: Over the last few years a number of corruption scandals have been brought to public attention through microblogs. Civil servants have been punished, and obviously these incidents have not improved public satisfaction with the particular local governments involved. Apart from the negative consequences of public mistrust, one could speculate that the microblogging 'fire alarm' only works when the government allows it to. Technically speaking, it is relatively simple for the sophisticated Chinese censoring apparatus to stop debates that touch upon issues that are too sensitive for the Party. So it would be naive to believe that this mechanism reveals more than the tip of the iceberg in terms of corruption.

Ed: Both Russia and India have big problems with corruption: do you know whether there are similar electronic oversight systems embedded in their public administrations? If not, what makes China different?

Jesper: In this area, China has made concerted efforts to reduce corruption at the lowest levels of government, as a result of dissatisfaction from both the business communities and the general public. Similarly, in Russia and India (and a number of Asian states) many functions such as taxation and business licensing have been incorporated into e-government systems, and through this process have been made more transparent and easier to track than before. However, to my knowledge, the Chinese system is at the forefront when it comes to integrating these different platforms into a larger monitoring-system ecology.


Jesper Schlæger is an Associate Professor at Sichuan University, School of Public Administration. His current research topics include comparative public administration, e-government, electronic monitoring, public values, and urban management in a comparative perspective. His latest book is E-Government in China: Technology, Power and Local Government Reform (Routledge, 2013).

Jesper Schlæger was talking to blog editor David Sutcliffe.
