As we enter the 2020 election season, Americans are likely to be flooded with misinformation and disinformation, particularly on social media. How do we deal with this problem? In this and the next several posts, we will examine different approaches: existing legal remedies, vetting by intermediaries (such as social media companies), new laws, and voter self-help.
The obvious starting point is legal remedies. Defamation and false advertising laws are designed to discourage false claims — libel and slander covering false claims that damage an individual’s reputation, and false advertising covering false claims about products and services. Can these legal claims be used to address the spread of false election-related information through social media?
The first hurdle is that a lot of election misinformation is posted anonymously, or through pseudonyms. And both legal and practical hurdles make it difficult to track down the real authors. Legally, courts protect anonymous speech and require special showings before an anonymous or pseudonymous writer is unmasked. As a practical matter, it takes a series of steps to unmask even an unsophisticated poster who uses a false name. For the sophisticated Russian misinformation operation of the 2016 election campaign, it took the investigative powers of the Justice Department to identify the perpetrators. (To clarify, I’m using the term “misinformation” to cover both misinformation, the inadvertent spread of false or misleading information, and disinformation, the deliberate use of false information to deceive.)
Then there’s the question of what can be done even if you identify and catch an anonymous poster of false information. If they are outside the U.S., or have no significant assets, the threat of tort damages is ineffective. And while some scholars suggest that injunctions may be available to prevent further false posts, the general rule against prior restraints makes that path difficult.
Where the original poster is unmasked, or where their real name was used, the next step is picking a legal claim. If the post is a false advertisement, one might think first of a false advertising claim. We have strong laws, especially Section 43(a) of the federal Lanham Act, requiring commercial advertising to be truthful as to all factual claims (excluding only “puffing,” the vague boasts that consumers don’t take as factual).
But Section 43(a) and state laws on advertising generally cover only commercial advertising. And for First Amendment reasons, such strict-liability truthfulness requirements couldn’t apply to political advertising. A strict-truth requirement for political ads, allowing litigation over the literal accuracy of every word and sentence, and the overall message, would have a chilling effect, preventing many useful messages from reaching the electorate. And political issues often can’t be neatly judged true or false.
The next logical legal claim is defamation — libel or slander. Libel would apply to social media posts, which are written communications. But various libel privileges and other defenses, arising from common law and the Constitution, limit this tort’s power to address false political claims.
Libel claims address false and seriously disparaging statements injuring someone’s reputation. So initially, there must be a statement about someone, injuring his or her reputation. This initial “of and concerning” requirement can be a serious hurdle. For example, in a lawsuit by the Trump campaign complaining of a political ad about President Trump’s handling of the coronavirus crisis, the defendant raised the defense that the ad was about the President, not his campaign, so the campaign could not bring the case. (Had the President brought the case personally, it would have undercut his arguments in other cases that he doesn’t have time for litigation, and it would have opened him up to broad discovery about his past and his reputation.)
Moreover, misinformation campaigns often do not relate to specific persons; they assert false claims about social issues, or laws and political conduct in general, rather than specific statements about a specific individual. That kind of misinformation simply doesn’t come close to the orbit of libel law.
Then, even when statements do relate to a specific individual and are definitely false, they still may not support a libel claim. The statement must be the kind that seriously injures one’s reputation in the community — something akin to an accusation of serious criminal conduct. A false accusation that a candidate opposed a certain bill or took an unpopular position probably won’t qualify. And the false statement must be substantially false — not just false in a few particulars. You can mislead a lot with literally false statements that don’t rise to the level of substantial falsity for libel purposes.
Of course, the defendant can claim the constitutional defense of New York Times v. Sullivan, which allows libel suits by officials or candidates to succeed only on clear and convincing proof that the defendant subjectively knew the statements were substantially false (or acted with reckless disregard of their falsity) when they were published. This standard was meant as, and is, a high hurdle for libel claims involving candidates and public officials. And the subjective standard may be especially hard to surmount in today’s world, where different political factions live by different views of the world.
Finally, statements of opinion and “rhetorical hyperbole” fall outside libel law. Some of President Trump’s pre-White House lawsuits set precedents in this regard, and his unique manner of campaigning, featuring name-calling and invective, has already pushed courts to categorize extreme and inflated political campaign allegations as falling outside libel suits. Essentially, the more extreme and hyperbolic the claims, the less likely they are to make out a libel case.
Despite all these problems, some political players still try to use libel suits against advertisements or posts they claim are damaging. One popular strategy is to sue an intermediary, such as a TV station or social media company, rather than the creator of the post. But social media companies are immune from liability for their users’ posts, and broadcasters are immune from suits over candidate-paid political ads. Even as to the non-candidate ads for which broadcasters are potentially liable, most broadcasters realize that these suits are often intended solely to intimidate, and they can often rely on the advertiser to justify its ad and defend the suit.
Of course, as Professor Eugene Volokh has pointed out, there are some situations where lawsuits against misinformation can work, like cases of deception for financial gain, but these exceptions fall outside the mainstream of political misinformation.
Political misinformation is a serious problem, identified with abuse of power, disengagement and distrust, election interference, and lingering effects on beliefs and attitudes. But it won’t be remedied through lawsuits. In the next posts, we’ll discuss relying on vetting and screening by social media companies, the possibility of new legislation or enforcement efforts, or leaving it to readers and viewers to figure things out themselves.
Mark Sableman is a member of Thompson Coburn's Media and Internet industry group, and a member of the Firm's Intellectual Property practice group.