‘Very Hard Questions’

Facebook Supports Industry Setting Content Moderation Standards

Facebook supports industry setting content moderation standards, said Public Policy Manager Lori Moylan Friday. Speaking at a George Mason University Antonin Scalia Law School event, she said companies should collaboratively define terms like "manipulated media" and "deepfake."

Senate Judiciary Committee Chairman Lindsey Graham, R-S.C., recently suggested tech platforms earn content liability immunity under Section 230 of the Communications Decency Act through industry-written best business practices (see 1907090062). Sen. Josh Hawley, R-Mo., introduced a bill that would have platforms earn immunity through political neutrality certification by the FTC (see 1906190047).

Asked after the panel about earning liability protection, Moylan directed questions to the company. During her appearance, she said industry standards would let users know what content to expect on platforms: “They’re actually very hard questions to figure out in a way to create certainty for users.” Facebook wants content moderation to remain a point of competition among companies, she noted. For instance, Facebook likes to identify itself with “peachy-level,” or PG-rated, content.

Political neutrality is a very difficult thing to define, Moylan said. When the company banned right-wing conspiracy theorist Alex Jones, claims of censorship and bias came from the right, she said. When CEO Mark Zuckerberg testified that the platform doesn’t fact-check political ads, there was backlash from the left, she said, so there are very different answers about what’s acceptable.

Self-regulation allows a more durable approach to consumer protection, said Uber Senior Regulatory Counsel Kathryn Ciano Mauler. Government should respond to innovation, not try to anticipate products and services, she said. There will always be a lag in this regard, she said.

Platforms should earn Section 230 protection by following nondiscrimination obligations, argued Michigan State University law professor Adam Candeub. He told us later that the proposals from Graham and Hawley are “realistic.” It’s possible to define political neutrality, he said, citing years of First Amendment jurisprudence and noting that courts determine discrimination in employment disputes and other contexts constantly. Section 230 is a “tremendous gift” that Congress shouldn’t continue to give away for free, he said.

Two other academics disagreed. “Mild” anti-discrimination obligations aren’t so mild when weighed against claims of political bias and First Amendment concerns, said Adam Thierer of George Mason University's Mercatus Center. Section 230 is a “subsidy” in the public’s interest that allows open communication, he said. Policymakers should err on the side of more free speech, not less, Thierer said, and problems with platform quality are better solved by increasing competition.

Increased liability, which Candeub suggested, would only increase suppression of controversial speech, said Georgetown Law professor Anupam Chander. Or it could lead to a hands-off approach that would result in more harmful speech and drive mainstream users away, he said.

Government limitations on free speech are far more impactful than private control of speech, Moylan said, citing EU laws that bar certain types of speech. She also rejected the notion that Facebook doesn’t face competition. Its messaging services compete with offerings like Apple's iMessage and email. Facebook also competes with Twitter and “definitely” TikTok for social media market share, she said: “The threat of competition is incredibly real.”