News You Need to Know Today

The AI gold rush is upon us | Healthcare AI newsmakers | Partner news

Wednesday, March 20, 2024

In cooperation with Northwestern and Activeloop


Healthcare AI: Welcome to our generation’s ‘great Gold Rush’

It’s more critical for U.S. healthcare to get medical AI right than to get it adopted far, wide and ASAP.

That’s the view of two people who’ve thought both scenarios through—a U.S. Congressman who happens to be a practicing physician and a data scientist who teaches at a leading medical school.

Of utmost importance in deploying the technology, the duo emphasizes, is “applying the principles that guide clinical research, including the respect for the human person, maximization of benefits and avoidance of harms to patients, just distribution of benefits, meaningful informed consent and protection of patient confidential information.”

The writers are Representative (and urologist) Greg Murphy, MD (R-North Carolina), and Michael Pencina, PhD, professor of biostatistics and bioinformatics at Duke University School of Medicine. The politics and policy outlet The Hill published their commentary March 19.

“The emergence of artificial intelligence is reminiscent of the great Gold Rush, a frenzied time bursting with unlimited potential yet filled with uncertainty, speculation and unforeseen consequences,” Murphy and Pencina write. In fleshing out this perspective, the two make several strong points. Consider these five:

1. It remains to be seen how medical professionals and patients will interact with and utilize healthcare AI.

Noting two glaring AI weaknesses already out in the open—algorithmic bias and automated claims denials—Murphy and Pencina stress that token human-in-the-loop measures cannot counterbalance such failings:

We must not merely be one dimension of the progressive machine learning system; humans must remain atop the hierarchy. We need to control AI, not the other way around.

2. Facilitating broad innovation while guarding against unacceptable risk is a massive challenge—one that the federal government cannot handle unilaterally.

Other parts of the world may try “top-down” approaches all they like, but the quality of life we expect in the U.S. calls for public-private partnerships, Murphy and Pencina contend. To develop guidelines and guardrails—and to validate the value and trustworthiness of healthcare AI here—we need to create independent assurance laboratories, they add. These labs would be charged with evaluating AI models according to “commonly accepted principles.” In a nutshell:

We need more than one hen guarding the chicken house.

3. Avoiding missteps like those that hindered the integration of now-mature technologies—we’re looking at you, EMRs and EHRs—is paramount.

It’s right and good to expect federal offices to help anticipate such trip-ups and course-correct for them. In fact, denizens of D.C. have a key role to play in this endeavor—that of a convener and enabler for those creating national standards, Murphy and Pencina state. However, they add, the implementation of such standards “should be deferred as much as possible to the local governance at the health system level with federal authorities intervening only when necessary.” More:

Progress will not be free, but we must learn from past mistakes.

4. As we pursue the mainstreaming of AI across U.S. healthcare, we must make sure ethical considerations reign supreme.

Patients in rural or low-income communities must have access to the benefits of this technology, Murphy and Pencina underscore. “Further, it is imperative AI used on or by [underserved] communities is as trustworthy as AI used by premier health systems.” More:  

Just as access to healthcare is not a guarantee of quality, access to artificial intelligence systems will not certify the capacity or reliability of what is available.

5. The advancement of AI brings medicine to the precipice of truly transformational change.

The technology can “help reduce existing burdens and inefficiencies while at the same time improving patient care and experience,” Murphy and Pencina reiterate, citing examples “from ambient voice transcription tools to diagnostic devices—and the list is growing daily.” More:

[Healthcare AI’s] applications are nearly limitless; a new revolution has arrived.

Read the rest.

 


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • The U.S. Food & Drug Administration is spelling out how it plans to coordinate AI oversight across the four sub-agencies that deal with medical products. In a paper released March 15, the FDA outlines four priorities guiding this expansive (and surely daunting) effort: fostering collaboration, advancing regulation, promoting standards and supporting research. The sub-agencies are CBER, CDER, CDRH and OCP. Unscrambled, the alphabet soup stands for the Center for Biologics Evaluation and Research, the Center for Drug Evaluation and Research, the Center for Devices and Radiological Health and the Office of Combination Products. Read the paper here.
     
  • All well and good there, but one close FDA watcher believes the agency needs an entirely new approach for dealing with AI-enabled medical devices. The watcher is Benjamin Zegarelli, JD, a regulatory compliance specialist with the Mintz law firm. Zegarelli has every confidence FDA will adapt to the fast-changing field of AI regulation and carry out its duties with aplomb. However, he clarifies, the agency can’t be expected to go it alone. Zegarelli and colleagues “hope that Congress will act at some point to give FDA additional authority to classify, authorize and regulate AI/machine learning devices in a way that fits the technology, enables and incentivizes innovation, and enhances patient safety.” Full piece here.
     
  • Chatbots built on large language models for mental healthcare should be regarded with caution. Among those who should be extra wary are individuals who might be vulnerable to the charms of an amateurishly conceived bot and blind to its shortcomings. This is the warning of Thomas Heston, MD, a clinical instructor of family medicine at UW Medicine in Washington. Expounding on research he recently published, Heston tells UW’s newsroom: “Chatbot hobbyists creating these bots need to be aware that this isn’t a game. Their models are being used by people with real mental health problems, and they should begin the interaction by giving the caveat: ‘I’m just a robot. If you have real issues, talk to a human.’” News item with link to study here. (For a minimal sketch of such an opening caveat, see the code after this list.)
     
  • Generative AI isn’t always so great at matching job candidates with openings, either. So found one brave explorer who took up a recruiter’s offer of positions to consider. The prospect was assured by the recruiter that the openings had been “chosen by AI just for you.” Or something along those lines. Alas, the recommended jobs “were so off the mark, they were laughable,” writes the tester, who, incidentally, wasn’t even looking to make a career change when the unsolicited email arrived. “Almost all were outside of my industry, most were in a discipline where I have absolutely no experience or credibility, and many were for positions out of sync with my experience.” The writer, Bradley Lohmeyer of the advisory firm Ankura, draws out some observations on generative AI that extend beyond the anecdote. Read it all.
     
  • ‘Any doctor who can be replaced by a computer deserves to be replaced by a computer.’ Harvard physician and clinical informaticist Warner Slack, MD, who died in 2018, famously said so way back in the 1960s. This week the proverb gets an update. “Any healthcare task that can be made safer with AI,” writes Karandeep Singh, MD, chief health AI officer at UC San Diego Health, “deserves to be made safer with AI.”
     
  • Nvidia’s healthcare AI blitz continues:
     
  • AI investments of note:
     

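Heston’s advice is straightforward to put into practice. Below is a minimal sketch of the opening-caveat pattern he describes, in plain Python; CautiousChatbot and the generate_reply callable are illustrative names, not part of any real chatbot framework, and any LLM call could be dropped in.

from typing import Callable

# The caveat Heston recommends, shown once before any generated text.
CAVEAT = "I'm just a robot. If you have real issues, talk to a human."

class CautiousChatbot:
    """Wraps any text-generation callable so the first reply leads with a caveat."""

    def __init__(self, generate_reply: Callable[[str], str]):
        self.generate_reply = generate_reply  # e.g., a call into an LLM API
        self.caveat_shown = False

    def respond(self, user_message: str) -> str:
        reply = self.generate_reply(user_message)
        if not self.caveat_shown:
            self.caveat_shown = True
            return f"{CAVEAT}\n\n{reply}"
        return reply

# Usage with a stand-in model:
bot = CautiousChatbot(lambda msg: f"(model reply to: {msg})")
print(bot.respond("I've been feeling down lately."))  # caveat precedes first reply
print(bot.respond("Thanks."))                         # later replies are plain

The point of the wrapper is that the safeguard lives outside the model: no matter which LLM is plugged in, a vulnerable user sees the disclaimer before any generated advice.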
 


The Latest from our Partners

Exploring the Most Suitable ML Models in Healthcare: EfficientNet for Diabetic Retinopathy Detection - Early detection of diabetic retinopathy, a critical aspect of patient care, benefits greatly from ML models like EfficientNet, which lets practitioners build computationally efficient, highly accurate systems for analyzing medical imagery. The blog post delves into using Deep Lake, the database for AI, to train healthcare models, making optimal use of GPU resources and speeding up model training with the scalable, plug-and-play architecture used by leaders in healthcare. Explore EfficientNet and state-of-the-art ML training tools in this article. (A minimal sketch of this training workflow follows below.)
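For readers curious what that workflow looks like in code, here is a minimal sketch. It assumes Deep Lake’s v3 Python API (deeplake.load, ds.pytorch) plus torchvision’s EfficientNet-B0; the dataset path and the tensor names "images" and "labels" are hypothetical placeholders for your own stored retinal-image data, and the five-class head follows the common 0-4 diabetic retinopathy severity grading.

import deeplake
import torch
from torch import nn
from torchvision import models, transforms

# Stream a retinal-image dataset from Deep Lake storage (path is illustrative).
ds = deeplake.load("hub://your-org/diabetic-retinopathy-train")

# Standard ImageNet preprocessing so the pretrained backbone sees familiar inputs.
preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Deep Lake streams batches straight into a PyTorch dataloader,
# avoiding a full local copy and keeping the GPU fed.
loader = ds.pytorch(batch_size=32, num_workers=2,
                    transform={"images": preprocess, "labels": None})

# EfficientNet-B0 pretrained on ImageNet, re-headed for 5 severity grades.
model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 5)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for batch in loader:  # one pass over the data; wrap in an epoch loop as needed
    images = batch["images"].to(device)
    labels = batch["labels"].view(-1).long().to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()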
 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand
