Jeff Kosseff, United States Naval Academy
For 20 years, the United States legal system has operated under a system of online ‘intermediary immunity,’ in which websites, Internet service providers, and other online service providers generally are not legally responsible for the actions of their users. Section 230 of the Communications Decency Act of 1996 states that, with a few narrow exceptions, online services do not face a legal requirement to delete or edit user-generated content, though they are free to do so at their discretion. For instance, if a user posts defamatory content on a website, the Internet service provider that connects viewers to the site generally is not liable, nor is the service provider that hosts the website. The liability typically rests with the individual who posted the content.
In recent years, as the magnitude and scope of cybercrime and harassment have increased significantly, some advocates have called for the United States to eliminate or scale back Section 230’s intermediary immunity. Online anonymity tools, they contend, often make it impossible to hold bad actors responsible for their activities in cyberspace. They argue that the most effective way to combat illicit online activity is to hold the service providers responsible for their users’ actions in criminal and civil court. In this paper, I argue that such proposals are short-sighted and unnecessary, as they would eliminate one of the most important drivers of innovation and growth on the Internet while providing no significant improvement in the fight against cybercrime.
This paper first examines the history of Section 230 and the broad interpretation that U.S. courts have given it. I analyze dozens of published U.S. court opinions interpreting Section 230 to examine the scope of the protections that the United States provides for online service providers. This longitudinal evaluation demonstrates the broad and nearly absolute protection that the United States has provided to online service providers regarding user-generated content over two decades.
The paper next assesses the benefits that Section 230 has created since 1996. In short, bulletin boards, social media, chat apps, and other services that have defined the Internet would not have been feasible in their current forms if service providers had been held legally responsible for the content provided by users. This section explains the legal, economic, and technical mechanics of how Section 230 has shaped our modern conception of the Internet.
The paper then addresses the significant concerns about illegal and objectionable user content, and argues that service providers have developed their own, highly effective mechanisms to police such content. The paper does so by presenting case studies that examine how online service providers have responded to illicit and malicious use of their services. Among the case studies are social media sites’ responses to the use of their services by ISIS and other terrorist groups, efforts to collaborate with the government on fighting botnets and other malicious actors, and cloud and email services’ proactive efforts to prevent the use of their services to distribute child pornography. I conclude that service providers voluntarily implement strong measures to block illegal and objectionable content and to help law enforcement. Indeed, online services find it to be in their commercial interests to keep illegal and objectionable content off of their services. Accordingly, liability for online service providers is unnecessary because these providers have an independent business reason to ensure that their services are safe for users and not associated with bad actors.
Moreover, I argue, intermediary liability ultimately would result in an overall reduction in legitimate free speech and association online. For example, if social media companies were civilly and criminally liable for the postings of their users, they likely would not allow users to fully control the content that they post on social media. Social media companies likely would prescreen content, or prohibit user-generated content altogether. Either change would result in an overall reduction in free and legitimate speech, and would disempower individuals by depriving them of their ability to express themselves freely. This reduction in free speech would be a victory for malicious actors, whose actions would have stifled the free speech on which democracies have depended for centuries.
In short, the U.S. experience with Section 230 over the past 20 years has demonstrated that intermediary immunity is a catalyst for online innovation and economic growth, and that despite this immunity, online service providers act responsibly to prevent illegal and objectionable content. The U.S. experience with broad intermediary immunity can help inform other countries as they determine liability frameworks for online actors.