
Concerns about misinformation could lead to limits on key media freedoms

Just about anyone can publish just about anything on the internet, without editing or censorship. That’s its glory. It’s also its biggest problem, and increasingly a point of scrutiny, as important voices suggest changes to internet freedom laws.

U.S. law grants internet publishers special freedoms. In the print world, a publisher is legally responsible for everything it publishes. The New York Times must employ editors to vet every piece of content, and it is liable even for every letter to the editor it publishes. But section 230 of the 1996 Communications Decency Act exempts internet publishers from liability for third-party content. So Facebook, under section 230, has practically no liability for its users’ content.

Essentially, section 230 exempts all internet intermediaries from liability for their users’ content, subject to only a few exceptions. Congress and the internet industry viewed this special internet freedom as necessary to allow the internet to grow and flourish. After all, if AOL, CompuServe, and other internet pioneers had been required to vet every single posting by all of their users, under fear of their own liability, far less content would have been posted in those early days.

For more than 20 years, section 230 has been a foundational law for the internet in the United States. It has protected message boards, social media companies, news publishers that allow users to comment on their articles, internet dating services, and thousands of other intermediaries. But there has always been a nagging concern about section 230. Does it give too much freedom to internet posters? Does it encourage and allow racist, hateful, harmful, and deceptive posts?

One answer to these concerns has been section 230 itself, because in addition to exempting intermediaries from liability for users’ posts, it also gives them great freedom to set content standards. Section 230’s so-called Good Samaritan provision assures intermediaries that they won’t be liable for their selection, editing, or deletion of user posts they judge obscene, filthy, excessively violent, harassing, or “otherwise objectionable.” This power, it was felt, would allow intermediaries to set appropriate standards and keep out bad and harmful content.

From the beginning, some analysts warned that unless publishers used their Good Samaritan rights to keep internet discourse from disintegrating, Congress might revoke the statute’s immunity provisions. An early court decision even noted that, because self-policing under the provision is voluntary, the law protects intermediaries “even where the self-policing is unsuccessful or not even attempted.”

Some authorities are now pondering the failure of self-policing and looking at other regulatory approaches. For example, the report of a UK parliamentary committee, “Disinformation and ‘fake news’,” explored many of the misinformation and disinformation campaigns of the last few years, how Facebook accommodated them, and how Facebook officials, in the committee’s view, failed to own up to the company’s responsibility for allowing them.

The committee’s final report recommends that companies like Facebook should no longer enjoy the protection of the UK law that shields intermediaries from liability for their customers’ posts. Specifically, the committee recommended recognizing a special category of internet company “which is not necessarily either a platform or a publisher.” Companies in this category (presumably social media companies like Facebook) would “assume legal liability for content identified as harmful after it has been posted by users.”

Such a standard, making intermediaries liable for content after they have notice of its harmfulness, is similar to the pre-section 230 legal standard set in the U.S. case of Cubby v. CompuServe. In that case, the court ruled that intermediaries like CompuServe, once put on notice of legally actionable content by their users, faced liability if they did not modify or take down that content. CompuServe and other internet companies told Congress in 1996 that such a standard would be unworkable and would likely lead to extensive self-censorship, as companies greatly limited any user posts that created potential liability.

A few U.S. policymakers are attacking the status quo by suggesting a fairness doctrine for social media, and a few litigants are bringing test cases seeking to hold social media companies accountable, as semi-public entities, for their republication procedures (including the automated algorithms that have permitted disinformation to be posted and multiply). Various state and territorial attorneys general are attempting to narrow section 230 by exempting state criminal claims from its coverage, thus allowing the states to criminally charge intermediaries based on their users’ content. But many of these initiatives carry unintended consequences of their own. A recent proposal by Senator Josh Hawley of Missouri, for example, which seeks to regulate internet content based on perceived political bias, has been criticized for potentially eviscerating the Good Samaritan provision.

As a result of the increasing attacks on section 230, its defenders have been forced to speak up for what they long considered a settled part of our legal landscape.

Better private policing under the Good Samaritan provision is probably the least disruptive solution. One policy analyst, Mark MacCarthy, recently discussed ways of holding social media companies more accountable for following their own stated practices. He concluded that “Platforms need to do more in controlling harmful conduct, and policymakers need to think clearly and creatively about how to encourage them to do this.”

Some platforms and scholars are working on better methods of content moderation, but the developments so far, like the Santa Clara Principles regarding transparency in content moderation, still leave unanswered the substantive question of how content moderation can effectively address the avalanche of misinformation in today’s social media.

The parliamentary committee’s recommendation sounds a warning for all internet intermediaries. As more and more internet content is found to be seriously deceptive and harmful, not just to individuals but also to commerce and electoral processes, section 230 is likely to come under increasing scrutiny. It will be up to internet intermediaries and publishers, including social media companies, to show that they are striking the right balance between the two freedoms granted by section 230: the freedom to publish, and the freedom to set responsible publication standards.

Mark Sableman is a partner in Thompson Coburn’s Intellectual Property group.

Tags

social media, 1996 telecommunications act, section 230, internet censorship, social media censorship, internet publishing law, media law, internet publishing, internet freedom, internet law twists & turns, blogs