News You Need to Know Today


Tuesday, November 5, 2024



4 points crucial to the nimble regulation of GenAI

With generative AI coming into its own, AI regulators must avoid relying too heavily on principles of risk management and too little on those of uncertainty management.

A new report from the American Enterprise Institute fleshes out the hows and whys of this position.

“Regulation based on risk management cannot prevent harm arising from outcomes that cannot be known” based on forecasts, writes AEI senior fellow Bronwyn Howell, sole author of the report. “Some harm is inevitable as society learns about these new [GenAI] applications and use contexts. Rules that are use-case specific rather than generic … offer a principled way of enabling efficient development and deployment of AI applications.”

Embedded in the paper are four points relevant to AI regulation stakeholders across industries and sectors. 

1. Managing uncertainty is different from managing risk, so a different sort of regulatory framework is needed for the age of generative AI.  

“Whereas classical risk management requires the ability to define and quantify both the probability and occurrence of harm,” Howell writes, “in situations of uncertainty, neither of these can be adequately defined or quantified, particularly in the case of GenAI models.” More: 

‘Arguably, insurance arrangements for managing outcome uncertainties provide a more constructive way forward than do risk management regimes, which presume knowledge of outcomes that is just not available.’

2. Classic risk management systems have been largely adequate for the development of classic AI systems. GenAI is changing that paradigm.  

GenAI models, Howell points out, “are characterized by the intersection of complex AI systems—which have unknown and unpredictable outcomes—with complex human systems, which have unknowable and unpredictable outcomes.” More: 

‘Historic risk management systems are unlikely to safeguard end users and society from unexpected harms.’

3. We should expect unexpected harms, especially in the application of open-source models, which are exempt from most risk management obligations. 

While not mandatory, arrangements for risk management developed in the U.S. tend to follow standard risk management processes, Howell notes. “Firms following U.S. guidelines will provide greater assurances and harm reduction than those following the EU regulations.” More: 

‘However, the costs of compliance will be higher. Neither set of arrangements is well suited to managing the unexpected outcomes arising from GenAI deployment and use. Consequently, we should expect unexpected outcomes—and harms.’

4. Regulators need to be honest about the limits of their ability to regulate in ways that prevent harm and engender confidence in AI systems. 

“They should focus on educating end users and society about the AI environment and their role in managing personal exposure,” Howell writes. “However, there may also be some benefit in considering the extent to which GenAI developers make their models and training data available to independent third parties for evaluation.” More: 

‘Given that we can expect unexpected harms, regulators should consider establishing an insurance fund or funds and associated governance—potentially at an international level—to enable compensation when inevitable harms arise.’

Howell likens the present moment in the history of AI to the period in which motor vehicles were new. 

“We are on the cusp of a range of new technologies that will be equally or even more transformative,” she writes. “We must become more comfortable about knowing that human advancement comes from facing the unexpected when it occurs and learning from it.” More: 

‘Not taking a journey because we cannot be assured that no harm will occur is to guarantee no progress is made.’

Read the full report.

 


The Latest from our Partners

knownwell leverages Nabla's athenahealth integration to enhance patient care - The integration streamlines clinical documentation and enables more personalized patient interactions in weight management care. Read more in this blog post.


Industry Watcher’s Digest

Buzzworthy developments of the past few days. 

  • Michelle Tarver, MD, PhD, has her work cut out for her. As the incoming head of the FDA’s Center for Devices and Radiological Health, she’s taking charge of a division she’s worked in for 15 years. But familiarity doesn’t make inherited fires easy to extinguish. Among the most pressing to-do’s in her inbox: settling nerves over outgoing CDRH head Jeffrey Shuren’s tight ties with industry, dealing with high-profile criticisms from Robert F. Kennedy Jr. about such connections and figuring out what to do with brain-computer interfaces like the one designed by Elon Musk’s Neuralink. The New York Times breaks down Tarver’s challenges in an article posted Nov. 1. The article quotes FDA commissioner Robert Califf, MD. At last month’s HLTH conference, Califf commented on the agency’s lack of capacity to surveil AI-equipped devices once they’re in use. “It’s so bad,” he remarked. “If you said: ‘Well, the FDA has got to keep an eye on 100% of it,’ we would need an FDA two to three times bigger than it currently is.”
     
  • Dr. Califf has really been getting around lately. And fielding a lot of questions about healthcare AI. This week he did an interview with NPR. Asked by the host what he would tell folks troubled by the notion of “a computer” helping to make their diagnoses or direct their care, he didn’t hem or haw. “A computer doesn’t get tired, and it’s not distracted by other things in the environment,” Califf said. “In my 40 years of working on computer-human interfaces, the combination is always better as long as the rules of the road are right.” 
     
  • The FDA chief also recently said he doesn’t know of a single health system in the U.S. that’s even capable of doing proper AI validation. By this Califf surely means nobody has yet demonstrated the know-how—or maybe the will—to refine their algorithms with local demographic data and then test the updated AI’s performance over time. Reporting on what hospitals are up against in this arena, Axios healthcare editor Tina Reed recounts a quote given her by digital health executive David Newman, MD, of Sanford Health System, based in Sioux Falls, S.D. “I looked at my inbox yesterday and I had 22 emails from AI companies,” Newman says. “I don’t know if they’ve been validated or not. I don’t know if they’re solving a problem at all. It’s really hard to wade through that to see what actually is useful for our patients and providers.”
     
  • One hospital C-suiter would like to see hospital leaders harness vendor hype and ‘turn it into a commitment.’ He’s Jason Hill, MD, chief innovation officer at New Orleans-based Ochsner Health. If a clinical department purchases AI on the basis of a measurable claim made by a software supplier—and the product fails to hit the target—”we’re going to cancel the contract,” Hill tells MedCity News. Clinical departments, he adds, “need to have some skin in the game for if their thing works.” 
     
  • Some parents trust AI chatbots more than their kids’ doctors for pediatric medical advice. Researchers found this out when they asked mothers and fathers to rate text written by either a physician or ChatGPT. Reporting on the study, Fox News Digital sought perspective from a physician with expertise in AI. “AI can provide valuable preliminary information, but it cannot fully grasp a child’s unique medical history, subtle symptoms and nuances from years of specialized training,” says the physician, Harvey Castro, MD. It’s crucial, he adds, to keep not just a human but “the right human” in the loop. 
     
  • Oracle is downsizing hundreds of jobs from its cloud operation with an eye on shifting resources to AI and healthcare. From a news item posted by Techstory: “As the firm looks to maximize its staff and concentrate resources on high-growth sectors, reports suggest that Oracle is implementing these layoffs as part of its continuous strategy to adapt its operations in response to changing needs in the cloud computing sector.”
     
  • Meanwhile, Nvidia has plans to land robots in hospitals for taking X-rays, delivering linens and handling all sorts of other tasks. “This physical AI thing is coming where your whole hospital is going to turn into an AI,” Nvidia healthcare VP Kimberly Powell tells Business Insider. “You’re going to have eyes operating on your behalf, robots doing what is otherwise automatable work and smart digital devices. So we’re super excited about that, and we’re doing a lot of investments.”
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.



© Innovate Healthcare, a TriMed Media brand