Transforming Everything

AI, Digital Healthcare Key Tech Topics for 2023, CES Speakers Say

Digital healthcare offers promise for doctors and their patients, but doctors must play a role as the technology unfolds, physician Bobby Mukkamala, immediate past board chair of the American Medical Association, told CES Friday. Telehealth has been a recurring focus of the FCC under Chairwoman Jessica Rosenworcel, with a telehealth item teed up for a commissioner vote later this month (see 2301050048). Mukkamala and other speakers also noted challenges posed by AI.

“Digital technologies, from wearables to AI, have almost limitless potential to transform healthcare,” said Mukkamala. “The AMA believes, as do I, that without direct input from physicians at those early stages in the conception and the design, that far too many of these technologies will fail to deliver on that promise, or worse, they’ll complicate healthcare,” he said.

“Despite our investment in healthcare … there’s more people with pre-diabetes that don’t even know it, more people with hypertension that don’t even know it,” Mukkamala said. There are “more people with sleep apnea who don’t even realize it until they come to my office and I look and see that there’s barely an airway there,” he said. CES is full of examples of how remote healthcare could help address chronic conditions, he said.

AI has been a hot topic at CES this year, with numerous speakers addressing early government efforts to develop guidelines.

Physicians are focused on AI “and making sure we’re creating policies that will guide the creation of AI and its application in healthcare in a way that is useful to us and our patients,” Mukkamala said. There's “a lot of hope that AI, if done correctly, has enormous potential to improve health outcomes,” he said. Physicians are also concerned about transparency and being able to see “the science that says this tool is actually going to be helping us,” he said.

AI will “transform everything,” said Nasdaq CEO Adena Friedman. “The cloud has really unlocked the ability for data to be leveraged in new ways,” she said. “You’ve got AI coming in and being able to make sense ... of that vast amount of data,” she said. “Invest in cloud, invest in AI,” Friedman advised companies: “Anyone who is ignoring those massive megatrends are going to find themselves falling behind.”

In the U.S., work continues on a National Institute of Standards and Technology AI risk management framework, said Elham Tabassi, NIST senior research scientist. Congress gave NIST two years to develop the framework, a deadline that expires this month, and it will be released soon, she said.

“We all agree that AI has an enormous potential for improving our lives in ways that we don’t even know yet, at the same time it comes with its own risks and possible negative consequences,” Tabassi said. “At NIST, we launched an open, transparent collaborative process to develop a voluntary framework for AI risk management in a flexible, but structured and measured way.”

The framework has to be flexible “to allow for innovation to happen” but measured “because if you cannot measure it you cannot improve it,” Tabassi said. “It takes a rights-preserving, or rights-affirming approach, meaning that it puts the protections of individual rights at the forefront and in a way tries to kind of operationalize values” in high-level documents from various institutions across the world, she said.

View From Europe

The European Parliament is “really striving” to finalize a legal framework for AI by March “and hopefully have it voted” by next year, said Laura Caroli, a parliamentary assistant working on the framework. “It takes a long time, I’m aware, but it’s a complex piece of legislation and there are a lot of tensions politically,” she said. The rules will likely take effect in 2026, she said.

Because AI is “such an innovative technology, and constantly evolving, we don’t want to regulate everything, but just have a gradual approach to risk that allows some flexibility,” Caroli said. “We really target specific use cases” with four levels of risk, she said. Low-risk use cases aren’t regulated, “then we have some applications that will require some transparency, such as chatbots when an AI is interacting with a human the individual is supposed to know … and deep fakes because sometimes they can have a negative effect even on democracy,” she said: “We want some disclosure that you are watching a deep fake right now, it’s not a real person.”

High-risk areas are the main target of the legislation, Caroli said. “There is a list of use cases that are considered high risk, such as law enforcement, education, creditworthiness, use cases that have legal effects or real impacts on people’s lives and also critical infrastructure,” she said. For these cases, the law imposes a list of requirements including data governance, transparency, accuracy, cybersecurity and human oversight, she said.

“We’re able to use AI to help generate insights and help, maybe, with some predictive identification of those that may be at risk so we can plan proactive outreach,” said Stephanie Fiore, Elevance Health director-digital health policy. “We’re seeing things like using AI to help with retina scans or X-ray image scans to detect early disease states earlier than maybe the human eye could have before,” she said.