Industry Watcher’s Digest
Buzzworthy developments of the past few days.
- Americans trust healthcare AI less than they trust wearables and telehealth. Still, more of them trust healthcare AI than outright distrust it, and by a wide margin: 45% to 15%. These and other interesting findings emerged from a Healthcare.com survey of 1,039 adults. The project set out to gauge consumer trust in medical technology, and the health-insurance shopping company has posted some key findings.
- When existing patients talk, prospective patients listen. Healthcare providers challenged to keep up with online patient reviews have an AI ghostwriter awaiting assignments. The software, marketed by Weave of Lehi, Utah, lets provider staff bang out a first draft with a single click and edit it before posting. In announcing the product’s launch, Weave’s CTO says more than half of patients browse online physician reviews before scheduling appointments, yet fewer than half of providers ask patients to complete a review.
- Is ‘cybersickness’ a bona fide medical condition? Regardless, researchers are working to crack its causes and recommend remedies. At the University of Missouri, a team is using explainable AI to learn how people develop the motion sickness-like malady in augmented and virtual reality. And at the University of Waterloo, a group wants to understand why some people get nauseated playing VR games while others don’t.
- Teens conduct consequential AI-powered medical research. Three high-school students are co-authors of a sophisticated study on the use of AI to identify therapeutic targets for malignant brain tumors. Hailing from Norway, China and the U.S., the students interned with the AI drug-discovery company Insilico Medicine of the United Arab Emirates. It’s a cool story. Insilico publicizes it here, and the journal Aging has posted the study in full for free.
- Clinical–business partnership puts cancer in the crosshairs. The University of Texas MD Anderson Cancer Center is tapping Generate:Biomedicines of Somerville, Mass., for help developing therapeutics to fight advanced cancers in the lungs and elsewhere. Full announcement here.
- When assisted by image-interpretation AI, radiologists of all experience levels are susceptible to “automation bias.” Another term for the phenomenon is “mindless acceptance.” The proclivity is documented in a study of mammography readers published in Radiology and covered by Health Imaging.
- ChatGPT mastermind Sam Altman would like to see a global agency regulating AI. Topping his wish list is something along the lines of the International Atomic Energy Agency. “You know, something that has real international power by treaty and that gets to inspect the labs, set regulations and make sure we have a cohesive global strategy. That’d be a great start.” The quote is from a long interview conducted by Bari Weiss of the Free Press. Read the whole thing.