As digital technologies and platforms are increasingly incorporated into our lives, we are exposed to new practices of data creation and collection — and there is evidence that American citizens are deeply concerned about the consequences of these practices. But despite these concerns, the public has not abandoned technologies that produce data and collect personal information. In fact, the popularity of technologies and services that reveal insights about our health, fitness, medical conditions, and family histories in exchange for extensive monitoring and tracking paints a picture of a public that is voluntarily offering itself up to increasingly invasive forms of surveillance.
This seeming inconsistency between intent and behaviour is routinely explained with reference to the “privacy paradox”. Advertisers, retailers, and others with a vested interest in avoiding the regulation of digital data collection have pointed to this so-called paradox as an argument against government intervention. By presenting privacy as a choice between involvement in (or isolation from) various social and economic communities, they frame information disclosure as a strategic decision made by informed consumers. Indeed, discussions of digital privacy have been dominated by the idea of the “empowered consumer” or “privacy pragmatist” — an autonomous individual who makes informed decisions about the disclosure of their personal information.
But there is increasing evidence that “control” is a problematic framework through which to operationalize privacy. In her Policy & Internet article “From Privacy Pragmatist to Privacy Resigned: Challenging Narratives of Rational Choice in Digital Privacy Debates,” Nora A. Draper examines how the figure of the “privacy pragmatist” developed by the prominent privacy researcher Alan Westin has been used to frame privacy within a typology of personal preference — a framework that persists in academic, regulatory, and commercial discourses in the United States. Those in the pragmatist group are concerned about the safety and security of their personal information, but make supposedly rational decisions about the conditions under which they are comfortable with disclosure, logically calculating the costs and benefits associated with information exchange.
Academic critiques of this model have tended to focus on the methodological and theoretical validity of the pragmatist framework; however, in light of two recent studies that suggest individuals are resigned to the loss of privacy online, this article argues for the need to examine a possibility that has been overlooked as a consequence of this focus on Westin’s typology of privacy preferences: that people have opted out of the discussion altogether. Considering a theory of resignation alters how the problem of privacy is framed and opens the door to alternative discussions around policy solutions.
We caught up with Nora to discuss her findings:
Ed.: How easy is it even to discuss privacy (and people’s “rational choices”), when we know so little about what data is collected about us through a vast number of individually innocuous channels — or the uses to which it is put?
Nora: This is a fundamental challenge in current discussions around privacy. There are steps we can take as individuals to protect ourselves from particular types of intrusion, but in an environment where seemingly benign data flows are used to understand and predict our behaviours, personal privacy protection can easily feel like an uphill battle. In such an environment, it is increasingly important that we consider resigned inaction to be a rational choice.
Ed.: I’m not surprised that there will be people who basically give up in exhaustion when faced with the job of managing their privacy (I mean, who actually reads the Google terms that pop up every so often?). Is there a danger that this lack of engagement with privacy will be normalised at a time when we should actually be paying more, not less, attention to it?
Nora: This feeling of powerlessness around our ability to secure opportunities for privacy has the potential to discourage individual or collective action. Anthropologists Peter Benson and Stuart Kirsch have described the cultivation of resignation as a strategy to discourage collective action against undesirable corporate practices. Whether or not these are deliberate efforts, the consequence of creating a nearly unnavigable privacy landscape is that people may accept undesirable practices as inevitable.
Ed.: I suppose another irony is the difficulty of getting people to care about something that nevertheless relates so fundamentally and intimately to themselves. How do we get privacy to seem more interesting and important to the general public?
Nora: People experience the threats of unwanted visibility very differently. For those who are used to the comfortable feeling of public invisibility — the types of anonymity we feel even in public spaces — the likelihood of an unwanted privacy breach can feel remote. This is one of the problems of thinking about privacy purely as a personal issue. When people internalize the idea that if they have done nothing wrong, they have no reason to be concerned about their privacy, it can become easy to dismiss violations when they happen to others. We can become comfortable with a narrative that if a person’s privacy has been violated, it’s likely because they failed to use the appropriate safeguards to protect their information.
This cultivation of a set of personal responsibilities around privacy is problematic not least because it has the potential to blame victims rather than those parties responsible for the privacy incursions. I believe there is real value in building empathy around this issue. Efforts to treat privacy as a community practice and, perhaps, a social obligation may encourage us to think about privacy as a collective rather than individual value.
Ed.: We have a forthcoming article that explores the privacy views of Facebook / Google (companies and employees), essentially pointing out that while the public may regard privacy as pertaining to whether or not companies collect information in the first place, the companies frame it as an issue of “control” — they collect it, but let users subsequently “control” what others see. Is this fundamental discrepancy (data collection vs control) something you recognise in the discussion?
Nora: The discursive and practical framing of privacy as a question of control brings together issues addressed in your previous two questions. By providing individuals with tools to manage particular aspects of their information, companies are able to cultivate an illusion of control. For example, we may feel empowered to determine who in our digital network has access to a particular posted image, but have little ability to determine how information related to that image — for example, its associated metadata or details on who likes, comments on, or reposts it — is used.
The “control” framework further encourages us to think about privacy as an individual responsibility. For example, we may assume that unwanted visibility related to that image is the result of an individual’s failure to correctly manage their privacy settings. The reality is usually much more complicated than this assigning of individual blame allows for.
Ed.: How much of the privacy debate and policy making (in the States) is skewed by economic interests — i.e. holding that it’s necessary for the public to provide data in order to keep business competitive? And is the “Europe favours privacy, US favours industry” truism broadly true?
Nora: I don’t have a satisfactory answer to this question. There is evidence from past surveys I’ve done with colleagues that people in the United States are more alarmed by the collection and use of personal information by political parties than they are by similar corporate practices. Even that distinction, however, may be too simplistic. Political parties have an established history of using consumer information to segment and target particular audience groups for political purposes. We know that the U.S. government has required private companies to share information about consumers to assist in various surveillance efforts. Discussions about privacy in the U.S. are often framed in terms of tradeoffs with, for example, technological and economic innovation. This is, however, only one of the ways in which the value of privacy is undermined through the creation of false tradeoffs. Daniel Solove, for example, has written extensively on how efforts to frame privacy in opposition to safety encourage capitulation to transparency in the service of national security.
Ed.: There are some truly terrible US laws (e.g. the General Mining Act of 1872) that were developed for one purpose, but are now hugely exploitable. What is the situation for privacy? Is the law still largely fit for purpose, in a world of ubiquitous data collection? Or is reform necessary?
Nora: One example of such a law is the Electronic Communications Privacy Act (ECPA) of 1986. This law was written before many Americans had email accounts, but continues to influence the scope authorities have to access digital communications. One of the key issues with the ECPA is the differential protection it affords to messages depending on how long they have been stored. The ECPA, which was written when emails would have been downloaded from a server onto a personal computer, treats emails stored for more than 180 days as “abandoned.” While messages received in the past 180 days cannot be accessed without a warrant, so-called abandoned messages require only a subpoena. Although there is some debate about whether subpoenas offer adequate privacy protections for messages stored on remote servers, the issue is that the time-based distinction created by the “180-day rule” makes little sense when access to cloud storage allows people to save messages indefinitely. Bipartisan efforts to introduce the Email Privacy Act, which would extend warrant protections to digital communications that are over 180 days old, have received wide support from the tech industry as well as from privacy advocacy groups.
Another challenge, which you alluded to in your first question, pertains to the regulation of algorithms and algorithmic decision-making. These technologies are often described as “black boxes” to reflect the difficulties in assessing how they work. While the consequences of algorithmic decision-making can be profound, the processes that lead to those decisions are often opaque. The result has been increased scholarly and regulatory attention on strategies to understand, evaluate, and regulate the processes by which algorithms make decisions about individuals.
Read the full article: Draper, N.A. (2017) From Privacy Pragmatist to Privacy Resigned: Challenging Narratives of Rational Choice in Digital Privacy Debates. Policy & Internet 9 (2). doi:10.1002/poi3.142.
Nora A. Draper was talking to blog editor David Sutcliffe.