Healthcare AI newswatch: Devices of unknown generalizability, promises of uncertain keepability, more

Buzzworthy developments of the past few days.

  • Makers of AI-equipped medical devices must give the FDA enough info to clear the gear for sale in the U.S. Of course they must. And do. But many aren’t sharing enough data for the agency to know whether the devices will prove generalizable in real-world settings—both now and later. This is no small problem. Why? Because, lacking rigorous evidence of generalizability, these devices may fall short on safety and/or effectiveness “when used outside of the controlled conditions in which they were initially validated.” The warning is sounded by researchers in a study published April 30 in JAMA Network Open. George Siontis, MD, PhD, and colleagues examined information on the 900-plus AI-enabled medical devices listed on the FDA website as of last August. They found that only a little more than half, 55%, included comprehensive performance metrics from clinical studies. “Performance data—both overall and within specific sex and age subgroups—were frequently lacking,” the authors stress. “Ongoing monitoring and regular re-evaluation,” they emphasize, “are essential to detect and address unexpected changes in performance during broader clinical use.” Read the whole thing.
     
  • AI for flagging patients with high cardiovascular risk is thrilling some users. It’s happening at the University of Texas Medical Branch, where every CT scan of the midsection gets run through an algorithm. The new standard operating procedure is applied even for—or maybe especially for—patients with no known heart problems. “What I love about this is that AI doesn’t have to do anything superhuman,” Peter McCaffrey, the institution’s chief AI officer, tells VentureBeat. “It’s performing a low intellect task but at very high volume. That provides a lot of value, because we’re constantly finding things that we miss.”
     
  • Healthcare AI can do a lot of things. One thing it can’t do is relate. The CEO of a 45-site organization in the post-acute-care space describes some of each in a piece published May 1 by Modern Healthcare. Kenny Rozenberg, MPH, of Centers Health Care, which operates in the Northeast, checks off a few AI use cases his outfit is enjoying. Wound care tracking, admissions data cleaning and length-of-stay shortening all make his short list of AI can-do’s. For the “can’t-do” side, he tells of an elderly woman who didn’t care about high-tech clinical capabilities. She was just tired of being home alone. “We helped place her in a facility with a vibrant recreation program and a lively social calendar,” Rozenberg shares. “Within weeks, she was active, engaged and genuinely happy. Her daughter was relieved. That outcome didn’t come from AI. It came from a conversation.”
     
  • Here’s an option for healthcare professionals wishing for college-level training in healthcare-specific AI. Purdue University has launched an all-online course. It’s aimed not only at patient-facing workers but also at administrators and other support staff. Those who complete the curriculum become certified in healthcare AI and, where applicable, receive continuing education credits. Details here.
     
  • Sometimes medical chatbots can be easier to talk to than medical professionals. One circumstance that springs to mind is when a talkative physician gives you too much to think about while you’re still woozy from a procedure. A chatbot might answer your question the next day. And do so succinctly and at a time of your choosing. Which is to say, as PYMNTS does, that medical chatbots “can provide essential support, offering assistance around the clock.” A subject matter expert qualifies the shoutout for the outlet. “It’s crucial to remember that, while medical chatbots can offer valuable assistance, they are not a replacement for professional medical advice,” he says. “The integration of AI in healthcare also raises important concerns about data privacy and security that need to be addressed when implementing these tools.”
     
  • And then there are the ethical questions swirling around the use of AI in healthcare. TechTarget fleshes out a bunch of those. Hot spots include creating and enforcing ethics policies, maintaining data security and patient privacy, applying human oversight to AI recommendations and—not least—ensuring a positive patient experience. “Part of patient involvement also includes clear and concise patient consent,” senior technology editor Stephen Bigelow writes. This “delineates the information collected, why it’s needed and how it’s used—including further AI training, if needed—and allowing patients to opt out of certain data uses.”
     
  • Overpromising AI capabilities to stakeholders—now there’s a mistake healthcare AI enthusiasts have been known to make. Other pitfalls to avoid: unresolved issues with data quality, poor integrations with existing workflows and one-size-fits-all model development. HackerNoon walks you through these four and four more here.
     
  • We already knew the new CMS administrator to be a fan of AI in healthcare. He expressed his enthusiasm for the technology before he’d even taken up his post. This week he made known his simple expectations for CMS personnel and, by extension, medical practitioners. “Ask real questions and be curious about the answers,” he said at a health innovation summit hosted by the U.S. Chamber of Commerce. “When you get them, have courage, be compassionate and look out for people.”
     

 

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.