Trade Law Daily is a Warren News publication.
‘Slow to Respond’

House Panel Asks How to Get Big Tech Buy-in on Disinformation

No matter how sophisticated technology for combating deepfakes and disinformation becomes, it’s useless without buy-in from large tech platforms, which profit from the rise of sensational content, the House Investigations and Oversight Subcommittee heard Thursday. The worry is companies like Facebook, YouTube and Twitter are more focused on growth than on oversight and user support functions, said Rep. Jennifer Wexton, D-Va. Platforms disclaim responsibility for user content and have a disincentive to purge fake and bot accounts, she said. Wexton cited a July 2018 report on how Twitter’s stock dropped 8.5 percent after it purged 70 million suspicious accounts over two months. Twitter shares nonetheless increased about 20 percent between January and December 2018.

The bad news is some platforms have been “slow to respond” for decades, on problems ranging from child sex abuse to terrorism to illegal item sales, testified University of California-Berkeley electrical engineering professor Hany Farid. Companies have fought change because their business models rely on data, which shrinks when content is removed, he said. House Science Committee ranking member Frank Lucas, R-Okla., asked how Congress can get companies “to the table.” Regulatory pressure is key because self-regulation doesn’t work, Farid said, suggesting Congress look to partners in the UK, EU, Canada, Australia and elsewhere.

Online hoaxes spread six times as fast as true stories, said subcommittee Chair Mikie Sherrill, D-N.J. Users are more likely to believe sensational stories if the content supports their political narrative, she added. She called the hearing a start to understanding how fraudsters are behaving and how to respond.

Deepfake technology is getting more sophisticated and easier to use, said Lucas. Soon, anyone with a decent computer and access to basic software will be able to spread false and misleading content, which is disruptive for society, he said. He noted the committee passed bipartisan legislation Wednesday to improve technology for detecting deepfakes. HR-4355, the Identifying Outputs of Generative Adversarial Networks Act from Rep. Anthony Gonzalez, R-Ohio, has the support of Reps. Jim Baird, R-Ind.; Haley Stevens, D-Mich.; and Katie Hill, D-Calif.

Panel members cited threats from Russia and China. Rep. Don Beyer, D-Va., said they can “wreak havoc” with deepfake technology. The U.S. risks falling behind on the detection front, said Rep. Michael Waltz, R-Fla. A potentially larger problem is people pointing to real content as fake, said Farid: once fake material is disseminated, it’s very hard to know what’s real.

Since October, Twitter has published information on 25,084 potentially fake accounts associated with information operations in 10 countries, testified Graphika Chief Innovation Officer Camille Francois. In the past two years, Facebook has taken down “12,085 accounts, pages, groups and Instagram accounts for engaging in what it calls ‘coordinated inauthentic behavior,’” she said. About 40 million accounts followed one or more of the accounts in question.

The U.S. should establish a program at the National Science Foundation dedicated to similar emerging technology, said University at Albany, State University of New York computer science professor Siwei Lyu. He said investigators should be trained and educated on deepfake technology. Farid urged education of students and the next generation of internet users.