Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Ambient documentation AI has won over clinicians en masse at Mass General Brigham. After drafting notes with close to 90% accuracy in a pilot of 500 patient-clinician interactions, the smartphone-based technology is headed for broad adoption across much of the Harvard-affiliated enterprise. Its next phase will put it in the hands of 800 healthcare workers, double the number planners initially intended. “Because there was such overwhelming interest in participating and such positive feedback early on, [our] senior leadership committed to making this available to more clinicians,” explains Amanda Centi, PhD, the institution’s innovation manager for emerging technologies and solutions. The secret of the toolkit’s success in making so many friends so fast, according to Rebecca Mishuris, MD, MPH, chief medical information officer and vice president: It keeps clinicians in the room with the patient rather than “putting up a barrier like a lot of technology does in our lives.” Read more from both here.
     
  • Utah has set up a new office of AI policy. Healthcare is its first order of business. Specifically, the office will concentrate out of the gate on using generative AI to improve mental healthcare. State official Margaret Busse says they’re starting with that use case—even prioritizing it over AI in K-12 education—for three reasons. One, mental health issues are widespread in Utah. Two, resources to deal with the problem at scale are short. And three, the application will yield learnings on multiple AI issues, including data privacy. Announcing the launch July 8, Gov. Spencer Cox said the work will encourage collaborations between government and industry so as to balance technological innovation with consumer protections. “I’m proud of the ‘Utah Way’ that encourages us to do this,” Cox said. “Business and government can work side by side in a way that helps everyone and elevates our state in a powerful way.”
     
  • Nurses have nothing to fear from AI. Not only are their jobs safe, but their input on AI is essential. A blog post at Nurse.com offers this reassurance while encouraging nurses to get involved. The integration of AI in nursing “must be approached thoughtfully, with a focus on augmenting rather than replacing the human elements of care,” the blogger writes. After reminding readers of some un-automatable care components—empathy, intuition, sensitive situational awareness—he issues something of a call to arms. “Ensuring that nurses are involved in the development and implementation of AI technologies,” the blogger writes, “is crucial for creating tools that truly support their work.”
     
  • Young people have high expectations for AI. High enough that they believe it should be used to modernize healthcare. The findings are from the U.K., but they may well reflect the disposition of the young toward AI wherever it’s up and coming. Researchers from University College London and Great Ormond Street Hospital asked U.K. residents ranging in age from 6 to 23 about their views on AI. When the questions turned to how they’d like AI to be used in healthcare, the respondents expressed openness. However, they wanted the tools to be supervised by healthcare professionals “as the young people feel there are elements of care—such as empathy and ethical decision-making—that AI cannot mimic,” according to a news item posted by UCL. “When faced [with a choice] between a human and computer, they would be more willing to trust the human.”
     
  • ‘In the infancy of the AI age, all physicians become kindergarten teachers, unwittingly molding AI models through our very interactions with it.’ And today’s kindergartner models are tomorrow’s trusted AI toolkits. Watch how you raise them up. The word picture is fleshed out with commendable thoughtfulness at HealthyDebate.ca by Angela (Hong Tian) Dong, MD, an internal medicine resident at the University of Toronto. “Physicians will need to understand the limitations of high-yield AI systems applied in a clinical setting,” she writes, “provide ongoing expert feedback to prevent post-market algorithmic drift, and recognize their role as canaries in the coal mine if healthcare AI systems drift away from patient-centered priorities and incentives.” Wait. Canaries or kindergarten teachers? No matter. The metaphors are mixed, but the point is well-made.
     
  • The Coalition for Health AI has lost two board members. Troy Tazbaz, director of the FDA’s Digital Health Center of Excellence, and Micky Tripathi, PhD, the national coordinator for health IT at HHS, have resigned from their CHAI roles. Tazbaz has not publicly stated his reasons. Tripathi tells Fierce Healthcare he made his decision after being appointed chief AI officer and co-chair of the Biden Administration’s AI task force. Tripathi says the latter positions have him formally working across numerous federal agencies, putting him into situations that could present conflicts. The resignation, he says, is “not a reflection at all on CHAI, their mission, the strength of the collaboration they’re building, and work that they’re doing to advance responsible and trustworthy AI.”
     
  • The quality of GenAI’s outputs reflects not only a model’s training data but also the end-user’s query. Crafting and testing the latter is the work of professionals called “prompt engineers.” Here prompt has nothing to do with being on time and everything to do with teeing up the queries. VentureBeat gives a nice primer, with examples, by Vidisha Vijay, a data scientist at CVS Health and an aficionado of prompt engineering. “Ethically designed prompts can reduce biases and promote fairness in LLMs,” Vijay writes. “It is also essential to continuously monitor AI outputs to identify and address new biases” that may emerge over time. Read the whole thing.
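     Vijay’s own examples aren’t reproduced here, but the core idea—that a structured query tends to outperform a bare one—can be sketched in a few lines of Python. The template fields and the clinical task below are hypothetical illustrations, not drawn from the VentureBeat piece, and no actual LLM is called:

     ```python
     def build_prompt(role: str, task: str, constraints: list[str], text: str) -> str:
         """Assemble a structured prompt: a role, an explicit task,
         enumerated constraints, and the input text to operate on."""
         constraint_lines = "\n".join(f"- {c}" for c in constraints)
         return (
             f"You are {role}.\n"
             f"Task: {task}\n"
             f"Constraints:\n{constraint_lines}\n"
             f"Input:\n{text}"
         )

     # Hypothetical usage: a summarization prompt with bias-limiting constraints.
     prompt = build_prompt(
         role="a clinical documentation assistant",
         task="Summarize the visit note in plain language.",
         constraints=["Avoid speculation.", "Flag any medication changes."],
         text="Patient reports improved sleep since starting melatonin.",
     )
     print(prompt)
     ```

     The point of the sketch is the separation of concerns: the role, task, and constraints are spelled out rather than left implicit, which is the kind of structure prompt engineers iterate on and test.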
     
  • From the AI hype vs. AI substance file: “AI—whether generative AI, machine learning, deep learning or you name it—was never going to be able to sustain the immense expectations we’ve foisted upon it,” writes Matt Asay, JD, at InfoWorld. “This doesn’t mean GenAI is useless for software development or other areas, but it does mean we need to reset our expectations and approach.” Hear him out.
     

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.