Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Keep AI regulation adaptable. That’s the gist of an exhortation the American Hospital Association offered to Congress May 6. The group fleshed out the advice after Democratic Rep. Ami Bera, MD, of California requested perspectives on the current state of AI in healthcare. “AI is not a monolithic technology, and thus a one-size-fits-all approach could stifle innovation in patient care and hospital operations,” AHA points out. “Such an approach may even prove inadequate at addressing the risks to safety and privacy that are unique to healthcare.” Read the whole thing.
     
  • Deep-pocketed investors are pouring big dollars into AI startups. And the frenzy has some stock market watchers worried. Could the climate be inflating a bubble as these small companies scramble to turn their hype into saleable products and real revenues? The Wall Street Journal asks the question. “Everyone believes AI is the future, so we are going to see an extraordinary amount of investment until proven otherwise,” answers Alex Clayton, a general partner at the venture firm Meritech. “The problem is that we don’t know what these business models are going to look like at scale. You can have theories about it, but you really don’t know.”
     
  • The human brain has evolved to fear uncertainty. This may help explain the mixed emotions many people have about AI. A psychologist and cognitive neuroscientist at the University of New South Wales in Australia thinks it does. In an interview with the Australian Broadcasting Corp., he calls for more research into the technology’s psychological effects on mental wellbeing. “You can’t compare [AI] to tools,” warns the scientist, Joel Pearson, PhD. “The industrial revolution, the printing press, TVs, computers—this is radically different in ways that we don’t fully understand.”
     
  • AI’s impact on mental wellness may be at its most direct in digital psychiatry. Generative AI in particular has the potential to go straight to the mind—and yet it’s “just a big, complicated hunk of math.” So notes Brent Nelson, MD, an adult interventional psychiatrist and CMIO with PrairieCare in Minnesota. “It’s going to give you outputs that are just based on how it was trained or what it found on the internet,” Nelson tells a local affiliate of CBS News. “You really have to be thoughtful about questioning information it’s giving you.”
     
  • In a similar vein, some nurses worry about wearable, AI-equipped monitoring devices. The concern is that these could expose patients to various unintended consequences. Fortune magazine’s online health and wellness outlet, Well, looks at the scenario by focusing on one technology, the BioButton from BioIntelliSense, in one hospital, Houston Methodist. Giving the nurses’ point of view is Michelle Mahon, RN, an assistant director of the union National Nurses United. “The hype around a lot of these devices is they provide care at scale for less labor costs,” Mahon says. “This is a trend that we find disturbing.” Countering that view is Sarah Pletcher, system vice president at Houston Methodist. “Because we catch things earlier [with the BioButton], patients are doing better,” Pletcher says. “We don’t have to wait for the bedside team to notice if something is going wrong.” Read the rest.
     
  • But wait. Here’s a physician who loves AI. Rosemary Lall, MD, worked for almost 30 years as a family doctor in Ontario, growing increasingly overburdened by administrative duties. EMR data entry was an especially demanding drag on her time. At times she thought of quitting. That all changed last year, when Christmas brought her first celebration in all those years at which she didn’t have to update medical records. GenAI notetaking software had her covered. “For me, this [technology] has changed things,” Lall tells Canada’s Global News. “It’s made me really happy. I will never go back.”
     
  • Here’s something to file in the ‘Things Elon Says That Make You Go Hmm’ department. “It’s very important to have a maximum truth-seeking AI … and a maximally curious AI. While biological intelligence can serve as a … backstop, as a … buffer of intelligence … almost all intelligence will be digital.” Really? Almost all intelligence? Anyway, Musk made the ponderous comment at this week’s Milken Institute Global Conference in Los Angeles. AIin.Healthcare couldn’t let the remark pass without comment because, well, you never know. He’s been oddly right before and just might be so again.
     

 

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.