'Fraught'

Simington Sees Little Need for FCC to Revisit Net Neutrality Rules

FCC Commissioner Nathan Simington questioned the need for the FCC to revisit net neutrality rules during a keynote interview at the State of the Net conference Monday. Simington asked whether the U.S. doesn't “have de facto net neutrality at this moment.” It’s unclear what the commission should do on net neutrality when it already exists, he said.


“When was the last time your ISP blocked a legal website, prevented you from connecting an application of your choice … a TV, phone, computer, tablet, baby monitor?” Simington asked: “Your ISP isn’t stopping you from connecting any of those things. Your ISP isn’t stopping you from running any legal application either on your computer or as a network application. Your ISP is probably not blocking or throttling either.”

Imposing Communications Act Title II regulation on broadband, as the commission did under former Chairman Tom Wheeler before the rules were reversed under Chairman Ajit Pai (see 1712140039), would be “fraught,” Simington said. “We’re in a very different legal environment now than we were in 2015,” he said. Others have questioned whether the Supreme Court would uphold a new order, given recent decisions and the evolving major questions doctrine (see 2206300066).

An approach "where you have to rewrite so much of the statute, via the forbearance power” as we saw in the 2015 order “might not be viable today” because the use of forbearance “might face greater scrutiny,” Simington said. “There are a lot of ways a Title II order would have to be very, very carefully thought through, and I would question whether Title II brings anything to the table in terms of expected consumer benefits,” he said.

Simington acknowledged his limited role as a minority commissioner in determining what’s next on net neutrality. “No one really cares what I think,” he said: “If it’s going to happen it’s going to happen regardless of what I have to say.”

The FCC has some role in cybersecurity but only in limited areas, Simington said. “It’s really important that we step up and act in the areas where we’re good at cybersecurity, and in the areas we’re not we should back off and leave that to the domain experts,” he said.

Another big topic at the conference was the future of AI regulation. The EU and the states -- Connecticut, California and New York in particular -- are leading the way on AI policy, said Bertram Lee, Future of Privacy Forum senior policy counsel-data decision-making and AI. Most of the focus is on asking companies and organizations that use AI to be more transparent, he said.

“Transparency is going to be the next big wave within AI regulation -- can you cite your work, can you show your work,” Lee said: Will companies be willing to show “this is how we did testing, this is how we did risk management, this is how we do bias testing? These are all the things that are so incredibly important.” The scale at which AI is being used is scary to regulators, policymakers and consumers, which is why there's a push to regulate, he said. AI has the ability to make “thousands and thousands, if not millions of decisions that are really consequential to the lives of people every single day,” he said.

The National Institute of Standards and Technology’s AI Risk Management Framework, released in January (see 2302230046), looks at “what’s an AI system, what are the risks of AI systems” and how those risks differ from the risks of other data systems, said Elham Tabassi, chief of staff at NIST’s Information Technology Laboratory. “It’s measurable, and that’s really near and dear to our hearts because if you cannot measure it you cannot improve it,” she said. As it developed the framework, NIST heard from more than 240 organizations and received more than 600 sets of comments, she said.

To improve AI, any approach must provide a methodology for measuring trustworthiness, Tabassi said. “A risk-based approach is a very powerful approach, and if it’s done correctly it’s also flexible,” she said.

The release of AI chatbot ChatGPT in November “captured a lot of attention, across the media and everywhere,” but a lot of researchers in the AI field were already aware of generative AI and were already thinking about the implications, Tabassi said. She noted developing a risk profile for generative AI will be “complicated and complex.”

“Right now there is a real heavy emphasis on bias, but there is not as much about testing and about data,” and those conversations “are just as important,” Lee said.

Businesses are going to use AI, said Michael Richards, director of policy at the U.S. Chamber of Commerce's Technology Engagement Center. “We need to make sure that the use of AI is going to be trustworthy and mitigates potential issues around it,” he said. The Chamber's Commission on AI will release recommendations Thursday, he said. “We saw this as so important that it was necessary to bring together everyone,” he said.

The Senate already has limited AI/IoT legislation teed up for markup in the Commerce Committee, said John Beezer, a Democratic aide to the committee. Beezer noted the bill, introduced in January, is sponsored by Sens. Maria Cantwell, D-Wash., and Ted Cruz, R-Texas, normally “polar opposites” politically. “It’s relatively simple -- it’s about if a device is capable of listening to you, it should let you know that,” he said. “We’re not exactly burning up the track and beating Europe to the punch, but we’re not doing nothing either.”