Industry Watcher’s Digest
Buzzworthy developments of the past few days.
- Where do you live? How susceptible is your neighborhood to environmental toxins and street crime? And how easy or difficult is your access to primary care and cancer screenings? These are the kinds of real-world variables AI algorithms should be crunching when assessing an individual’s social determinants of health. The goal should be to look “beyond traditional socioeconomic parameters and create tailored healthcare plans to eliminate health disparities.” That’s the view of Mayo Clinic Platform President John Halamka, MD. He presented Mayo’s experiences with AI vis-à-vis “sustainable development impact” at a recent meeting of the World Economic Forum. More on his talk here.
- Healthcare payers favor software that helps their companies personalize the member experience and deepen engagement. The finding comes from a survey of 450 payer executives and managers conducted by the healthcare SaaS company HealthEdge. Respondents collectively identified AI solutions (16%), care management workflow solutions (13%), payment integrity solutions (13%) and member-facing mobile apps (12%) as key areas of investment. The survey report’s authors comment that such investments can improve consumer engagement while reducing “payment friction.”
- To date, all AI-enabled medical devices cleared by the FDA use locked algorithms. This means the models are not continuously learning or automatically evolving in the field. The strength of locked AI is its consistency and predictability. However, locked algorithms “can become less clinically valuable over time due to evolutions in clinical practice, changes in patient populations and other factors that contribute to ‘drift.’” So notes AdvaMed in a new overview of AI in healthcare. Human-in-the-loop AI “can help to ensure accuracy and ethical oversight,” the authors point out, “but can slow down decision-making processes or intervention.” Full paper available here.
- AI-powered virtual reality is so attractive to student nurses that it may need to be reined in. At least, that’s the case at the UNC Greensboro School of Nursing. There the applications for the technology include simulated bedside encounters with patients. “The use of AI was mind-blowing to me at first,” says nursing student Richelle Hensen. “AI thinks creatively and can illuminate new avenues in nursing studies.” Hensen is foresighted enough to recognize a potential downside in the technology’s power. “I do fear,” she says, “that it could be used too much.”
- One state’s legislators are putting healthcare AI under the microscope. The state is Georgia. The interrogators include members of both major political parties. “Are [patients] going to really, truly have the ability to say, ‘I don’t want that; I don’t want to be monitored [by AI]’?” asks a Republican. And what about cultural or ethnic bias arising from AI algorithms? adds a Democrat. The debate went down at a meeting of the Peach State’s House and Senate AI committees last week. Answering questions and concerns was Alistair Erskine, chief information and digital officer at Emory Healthcare. He explained that healthcare providers can analyze patient outcomes and look for opportunities to deracialize data, but he also acknowledged that AI is not perfect. Coverage by Rough Draft Atlanta.
- Around the world, policymakers should avoid reinventing the wheel just for AI. That’s the stance of Divya Srivastava, PhD, of City St George’s, University of London in the U.K. “International forums offer a space for sharing collective learning to identify policy responses, joint problem solving and coordination to mitigate barriers,” she writes in a paper published by LSE Public Policy Review. “AI has become the use case for ongoing collaboration and learning in global health. Indeed, this brings to the fore a notion articulated almost two decades ago around a model for continuous learning by the National Academy of Medicine—learning health systems—an approach that resonates when it comes to AI in health and is more pressing now than ever before.” Read the whole thing.
- ‘Is AI in healthcare a timely solution or a ticking time bomb?’ If you were to ask me that question, I’d want to ask back: Can the answer be a “both/and” rather than an “either/or”? Be that as it may, a young writer fleshes out the deciding factors in a piece posted at HackerNoon. There the writer uses a pen name, Juxtathinka, but elsewhere she’s not secretive about her real identity: Gimbiya Galadima, a medical student and creative writer from Nigeria.
- A startup has developed an AI-powered toilet camera that analyzes human waste to provide ‘valuable health insights.’ The Texas-based company is creatively named Throne. Covering the development for Daily Galaxy, journalist Samir Sebti points out: “It is crucial for potential users to weigh the benefits of health insights against their personal comfort levels with this type of technology.” Ya think?
- Recent research in the news:
- American Heart Association: AI-powered tool may offer contactless way to detect high blood pressure, diabetes
- University of Washington: Flagship AI-ready dataset released in type 2 diabetes study
- From AIin.Healthcare’s news partners:
- Health Imaging: New AI-based software uses ultrasound images to guide clinical decisions during childbirth
- Cardiovascular Business: Predicting sudden cardiac death after a heart attack may be impossible—for now