Trade Law Daily is a Warren News publication.
'Shortage of Expertise'

Government Could Do More to Develop AI Policy, Experts Agree

Federal agencies need better coordination on AI as the U.S. works toward a national AI policy, said Lynne Parker, former director of the White House National Artificial Intelligence Office, on a Center for Data Innovation webinar Thursday. Experts said the Biden administration should do more to follow up on initiatives started under President Donald Trump. Meanwhile, a new survey by NVIDIA found that 95% of industry respondents said they're looking at or using AI, though most use is at an early stage.

"There is a shortage of expertise [on AI] across our government," said Parker, now director of the AI Tennessee Initiative at the University of Tennessee. "We have a handful of people in a number of different places" in the government who "understand AI policy," she said. Coordination means the federal government can "look at what the holes are -- look at what we need to consider more deeply," she said. "There's certainly more that we can do," she said.

The White House released a blueprint for an AI bill of rights last year, which is "probably the defining document" from the Biden administration that "contextualizes the harms and risks of AI," said Alex Engler, fellow at the Brookings Institution. The blueprint focuses on the proliferation of AI in areas that are typically regulated, such as hiring, education, healthcare provisioning and financial services, Engler said. "There's a little less" on AI use in critical infrastructure or online information "ecosystems," he said.

The biggest "downside" is "the complete absence" of focus on law enforcement, "both from the scope of the AI bill of rights and from the associated agency actions," Engler said: "If you were hoping that law enforcement was going to implement some rules on itself and its own use of AI tools, for instance facial recognition … we haven't seen that yet." Engler said he would like the Biden administration to do more on two executive orders issued under Trump.

EO 13960 said federal agencies should classify their use of AI, Engler said. The response to the order "has been a little underwhelming" and "the resulting inventory seems suspiciously sparse and not super informative," he said, adding he wishes the Biden administration would push agencies to do more to comply with the order. The other order, EO 13859, asked agencies to create a plan for how their current rules affect AI systems, and "was a really good idea," he said. Only the Department of Health and Human Services complied, he said: "They did a really thorough job, talking about 12 or 13 different statutes and a dozen or so emerging use cases and some research that was going on in the field."

The government has released some policy documents on AI and principles for regulation, Parker said. “The implementation of what we have is what has been lagging,” she said. Parker said the change in administrations, with different priorities, probably slowed work on the executive orders. Implementation of past orders is “not the immediate priority” and “agencies pick up on that,” she said.

Paul Lekas, Software & Information Industry Association senior vice president-global public policy, was more optimistic that federal AI policy is moving forward. The National Institute of Standards and Technology issued the first version of its AI Risk Management Framework in January, which is a "foundational document" for the government to work with companies and other stakeholders "to try to guide the development and the implementation of AI" following "responsible principles," Lekas said.

The voluntary framework looks at key concerns about safety, security and rights and has broad industry support, Lekas said. It follows current thinking “on how do we get AI out there to maximize the benefits … while also minimizing the risks,” he said. It’s a “complement” to the AI bill of rights, he said.

"Telcos have announced they've tested AI-enabled solutions for network operations, base station site planning, truck-routing optimization, and machine learning data analytics," the NVIDIA report said: "To improve the customer experience, some are adopting recommendation engines, virtual assistants, and digital avatars." Only 34% of respondents reported using AI for more than six months, while 23% said they're still learning about different options and 18% reported being in a trial or pilot project.