
Muting Misinformation: What’s the role of social media companies?

Can we rely on social media companies to vet and clean up their content, so that misinformation doesn’t pollute the 2020 political campaign? Having concluded in a previous post that lawsuits can’t effectively stop Internet misinformation, we turn to social media companies as the next logical guard in the fight against misinformation.

A lot of people get misinformation from social media sites: it’s easy for anyone to join a social media site and post content there, and links to dubious content often appear benign when they reach social media users.

Additionally, social media companies thrive on activity, including user posts and reposts. Many reposts come from automated accounts (bots), which often amplify misinformation and allow it to drown out legitimate information. A recent study, for example, revealed that bots are responsible for large portions of tweets and retweets about the coronavirus and “reopening America.” Even bots’ amplification of merely marginal views can function as misinformation, because it distorts readers’ understanding of what others are thinking and saying.

Finally, one report notes, “the most important and dangerous feature” of misinformation isn’t its falsity, but its “spreadability” — and social media services spread misinformation rapidly and broadly.

Our main U.S. Internet law, Section 230 of the federal Communications Act, at first blush seems to contemplate that social media companies would indeed fight against objectionable content on their sites. Section 230, enacted in the early days of the commercial Internet, freed intermediaries from the task of pre-vetting all user content (that’s Section 230(c)(1)), but also gave them the freedom and ability to remove objectionable content with impunity (that’s Section 230(c)(2)).

The OK-to-remove-content portion of Section 230 is sometimes known as the “Good Samaritan” provision, because it encourages intermediaries (like social media companies) to voluntarily clean up their systems, and it promises them in return that they won’t be liable for such actions. It was drafted initially with vulgar and harassing content in mind, so it refers to material that the provider considers to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” The “otherwise objectionable” category can clearly cover political misinformation.

So can we confidently expect social media companies to embrace their Good Samaritan role and vigorously search for and remove false and misleading information on their systems? Unfortunately, no.

The Good Samaritan provision, though helpful, isn’t absolute. It covers only actions “voluntarily taken in good faith,” and social media companies see that “good faith” standard as a possible opening for claims challenging their takedown decisions.

So rather than rely on their Good Samaritan takedown right, they look instead to their near-absolute immunity under Subsection (c)(1) as their authority for the actions they take with respect to user content. Subsection (c)(1) provides that Internet intermediaries can’t be treated as publishers of their user content, so it provides nearly equivalent authority to (c)(2) for user content takedowns. But the threat of litigation over the good-faith issue probably nonetheless restrains social media companies in their Good Samaritan takedown activities.

Rather than take down objectionable content, intermediaries may find it less risky to edit or annotate their user content. Light editing is permitted under Section 230, as are links to other third-party content. That may be why Twitter, in response to tweets by President Trump about mail-in voting, added a link to third-party fact-checking resources on the subject.

Particularly as to elections, social media companies must walk a fine line between policing content and maintaining a forum in which candidates can speak. Twitter’s solution of providing links to fact-checking sources follows some earlier recommendations and suggests a third way beyond “taking things down versus leaving them up — the binary that dominates these discussions and suggests simplistic solutions to complex problems,” scholar Evelyn Douek has written. She characterized Twitter’s use of fact-checking links as a “watershed moment.”

Then there is the question of social media companies’ time, resources and priorities. Congress enacted Section 230 in part so that Internet intermediaries wouldn’t need to employ big staffs to review and vet every item of content. At the time, Congress had in mind thousands of Internet service providers, all more akin to telephone companies than publishers.

As things developed, however, social media giants came to publish the bulk of the most-read content on the Internet. That unexpected development means two things. First, the vetting task is even greater than originally foreseen, given the vast size of social media networks. And second, users see the networks as something more than a mere utility like a phone company, meaning that the companies have had to institute some content controls.

Facebook, for example, employs staffers and outsourcing providers to find and remove various kinds of highly offensive content from its network. It has even retained expert executives and consultants to determine what kinds of content it should address, and what rules it should apply to each. This is essentially a matter of customer satisfaction, not government edict; although Section 230 frees Facebook from almost all content liabilities, the company realizes that its users will not be comfortable unless certain content, such as sexually explicit and offensive content, is removed.

If social media giants like Facebook and Twitter do some vetting of sexually explicit and similarly offensive content, doesn’t this suggest social media companies will vet political content, too? Not necessarily — the considerations are different. While parents and other customers care deeply about a forum free of offensive sexual content, there’s no specific group that insists on a political-misinformation-free forum. And the costs of searching out, vetting and policing political misinformation are far higher than for the offensive-speech categories that Facebook and other social media giants regularly police, like sexually explicit content and exploitation of children and women. The Electronic Frontier Foundation has noted that moderating content at the scale of major social media companies is “a difficult and unsolved problem” involving “impossible decisions that will inevitably result in mistakes and inconsistencies.” A NATO study found that social media platforms leave 95% of reported fake accounts up on their services.

Facebook’s founder and majority owner, Mark Zuckerberg, told Congress in 2019 that Facebook would not police political ads in 2020, even though Facebook was identified as a primary forum for misleading and disruptive Russian-placed political posts in 2016. Laissez-faire is clearly the easiest (and most profitable) path on this issue for social media companies.

That doesn’t mean that no social media companies will take on the Good Samaritan task of keeping political misinformation off their services. Some might take steps in this regard, as with Twitter’s ban on political ads and its use of links to fact-checking sites. But for both legal and business reasons, the default response of most social media services is likely to be, “We’re just a portal; don’t count on us to vouch for the truthfulness of what you find here.”

In short, self-policing by social media companies is unlikely to keep political misinformation off their pages. That is one reason why various interest groups and Congress are looking at possible changes to existing Internet laws — the subject of our next post.

Mark Sableman is a member of Thompson Coburn's Media and Internet industry group, and a member of the Firm's Intellectual Property practice group.
