25 February 2015

Does a market-approach to online privacy protection result in better protection for users?

While prior studies have focused on what's written in privacy policy statements, systematic attention to the interactive aspects of the Web has been scant. Yong Jin Park (Howard University) discusses his article published in Policy & Internet: A Broken System of Self-Regulation of Privacy Online? Surveillance, Control, and Limits of User Features in U.S. Websites. His analysis, based on a sample of 398 commercial sites in the US, shows that more popular sites did not necessarily provide better privacy control features for users than sites that were randomly selected.

Even in the EU there is no particular interface-protection mandate for websites, for example when users want to interact with websites to control the use of their personal data. Image of the ECJ by katarina_dzurekova.

Ed: You examined the voluntary provision by commercial sites of information privacy protection and control under the self-regulatory policy of the U.S. Federal Trade Commission (FTC). In brief, what did you find?

Yong Jin: First, because we rely on the Internet to perform almost all types of transactions, how personal privacy is protected is perhaps one of the most important issues we face in this digital age. There are many important findings: the most significant one is that the more popular sites did not necessarily provide better privacy control features for users than sites that were randomly selected. This is surprising because one might expect “the more popular, the better privacy protection” — a sort of marketplace magic that automatically solves the issue of personal privacy online. This was not the case at all: the popular sites, despite having more resources, did not provide better privacy protection. Of course, the Internet in general is a malleable medium. This means that commercial sites can design, modify, or easily manipulate user interfaces to maximize the ease with which users can protect their personal privacy. The fact that this is not really happening on commercial websites in the U.S. is not only alarming, but also suggests that commercial forces may not have a strong incentive to provide privacy protection.

Ed: Your sample included websites oriented toward young users and sensitive data relating to health and finance: what did you find for them?

Yong Jin: Because the sample size for these websites was limited, caution is needed in interpreting the results. But what is clear is that websites dealing with health or financial data did not seem to be any better at providing privacy protection. To me, this should raise enormous concerns among those who use the Internet to seek health information or to manage financial data. The finding should also inform and urge policymakers to ask whether the current non-intervention policy (regarding commercial websites in the U.S.) is effective, when no consideration is given to the different privacy needs of different commercial sectors.

Ed: How do your findings compare with the first investigation into these matters by the FTC in 1998?

Yong Jin: This is a very interesting question. In fact, at least as far as the findings from this study are concerned, it seems that no clear improvement has been made in almost two decades. Of course, the picture is somewhat complicated. On the one hand, we see (on the surface) that websites have a lot more interactive features. But this does not necessarily mean improvement, because when it comes to actually informing users of what features are available for their privacy control and protection, they still tend to perform poorly. Note that today’s privacy policies are longer, running to more pages and carrying more information, which makes it even more difficult for users to understand what options they do have. I think informing people about what they can actually do is harder, but is becoming more important in today’s online environment.

Ed: Is this just another example of a US market-led vs European regulation-led approach to a particular problem? Or is the situation more complicated?

Yong Jin: The answer is yes and no. Yes, because the US market-led approach clearly provides no strong statutory ground to mandate privacy protection on commercial websites. However, the answer is also no: even in the EU there is no regulatory mandate for websites to have certain interface-protections concerning how users should be informed about their personal data, and interact with websites to control its use. The difference lies more in the fundamental principle of the “opt-in” EU approach. Although “opt-in” is stronger than the “opt-out” approach in the U.S., it does not require websites to have interface-design features that are optimized for users’ data control. In other words, to me, the reality of EU regulation (despite its robust policy approach) will not necessarily be rosier than in the U.S., because commercial websites in the EU context also operate under the same incentive of personal data collection and use. Ultimately, this is an empirical question that will require further studies. Interestingly, the next frontier of this debate will be privacy on mobile platforms – and useful information concerning this can be found at the OII’s project to develop ethical privacy guidelines for mobile connectivity measurements.

Ed: Awareness of issues around personal data protection is pretty prominent in Europe — witness the recent European Court of Justice ruling about the ‘Right to Forget’ — how prominent is this awareness in the States? Who’s interested in / pushing / discussing these issues?

Yong Jin: The general public in the U.S. has an enormous concern for personal data privacy, particularly since the Edward Snowden revelations in 2013 exposed extensive government surveillance activities. Yet my sense is that public awareness concerning data collection and surveillance by commercial companies has not yet reached the same level. Certainly, issues such as the “Right to Forget” are being discussed only among a small circle of scholars, website operators, journalists, and policymakers, and I see the general public mostly remaining left out of this discussion. In fact, a number of U.S. scholars have recently begun to weigh the pros and cons of a “Right to Forget” in terms of the public’s right to know vs the individual’s right to privacy. Given the strong tradition of freedom of speech, however, I highly doubt that U.S. policymakers will have a serious interest in pushing a similar type of approach in the foreseeable future.

My own work on privacy awareness, digital literacy, and behavior online suggests that public interest and demand for strong legislation such as a “Right to Forget” is a long shot, especially in the context of commercial websites.

Ed: Given privacy policies are notoriously awful to deal with (and are therefore generally unread) — what is the solution? You say the situation doesn’t seem to have improved in almost two decades, and that some aspects — such as readability of policies — might actually have become worse: is this just ‘the way things are always going to be’, or are privacy policies something that realistically can and should be addressed across the board, not just for a few sites?

Yong Jin: A great question, and I see no easy answer! I actually pondered a similar question when I conducted this study. I wondered: “Are there any viable solutions for online privacy protection when commercial websites are so desperate to use personal data?” My short answer is no. And I do think the problem will persist if the current regulatory contours in the U.S. continue. This means that there is a need for appropriate policy intervention that is not entirely dependent on market-based solutions.

My longer answer would be that realistically, to solve the notoriously difficult privacy problems on the Internet, we will need multiple approaches — which means a combination of appropriate regulatory forces by all the entities involved: regulatory mandates (government), user awareness and literacy (public), commercial firms and websites (market), and interface design (technology). For instance, it is plausible to require a certain level of readability in the policy statements of all websites targeting children or teenagers. Of course, this will only function alongside appropriate organizational behaviors, users’ awareness of and interest in privacy, etc. In my article I put a particular emphasis on the role of the government (particularly in the U.S.), where the industry often ‘captures’ the regulatory agencies. The issue is quite complicated because, for privacy protection, it is not just the FTC but also Congress that must act to empower the FTC within its jurisdiction. The apparent lack of improvement over the years since the FTC took over online privacy regulation in the mid 1990s reflects this gridlock in legislative dynamics — as much as it reflects the commercial imperative for personal data collection and use.

I made a similar argument for multiple approaches to solving privacy problems in my article Offline Status, Online Status: Reproduction of Social Categories in Personal Information Skill and Knowledge; related, excellent discussions can be found in Information Privacy in Cyberspace Transactions (by Jerry Kang) and Exploring Identity and Identification in Cyberspace (by Oscar Gandy).

Read the full article: Park, Y.J. (2014) A Broken System of Self-Regulation of Privacy Online? Surveillance, Control, and Limits of User Features in U.S. Websites. Policy & Internet 6 (4): 360–376.

Yong Jin Park was talking to blog editor David Sutcliffe.

Yong Jin Park is an Associate Professor at the School of Communications, Howard University. His research interests center on social and policy implications of new technologies; current projects examine various dimensions of digital privacy.

Note: This article gives the views of the authors, and not the position of the Policy and Internet Blog, nor of the Oxford Internet Institute.
