Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Some fear AI will hasten the end of humankind. Others believe such thinking is exactly upside-down. Take Thomas Fuchs, DSc, dean for AI and human health at the Icahn School of Medicine at Mount Sinai in New York City. “Today patients are dying not because of AI but because of the lack of it,” Fuchs told attendees at a recent symposium on the “new wave” of AI in healthcare. Fuchs was one of a handful of subject matter experts who addressed the gathering. The event was co-hosted by the New York Academy of Sciences, which was founded when James and Dolley Madison would have been living in the White House had the British not burned it down. (Sorry, love historical trivia.) The academy has posted a summary of the proceedings.
     
  • Microsoft and Epic are doing a mind meld over AI. Together they’ll work to outbrain all sorts of issues nagging U.S. healthcare. Epic will ask healthcare-specific questions. Microsoft will ramp up AI and cloud computing to come up with answers. The output will be “dozens of copilot solutions” to help provider orgs find qualified staffing, ease financial pain and expand patient access. Microsoft AI veep Eric Boyd gives some details.
     
  • During World War II, England’s Bletchley Park hosted the codebreaking machines that presaged modern computing and helped defeat the Nazis. (If you saw 2014’s The Imitation Game, you know the story.) In early November the site will welcome AI experts from around the world. This time the focus will be on marshaling the will of nations to prioritize safety in AI development and deployment. Details.
     
  • Half of digital healthcare marketers admit to using unethical means of boosting search engine optimization. Well, what goes around comes around: Some 65% of those who went “black hat”—paying for links, stuffing keywords, hiding text, et cetera—experienced negative repercussions. Fear of losing a job to AI may help explain the widespread willingness to play dirty: 1 in 5 of these digital marketers worries about just that. On the other hand, more than 60% believe large language models will make healthcare SEO a better gig. The findings are from a survey conducted by Tebra as reported in The Intake. See the rest of the findings.
     
  • In China, suppliers of healthcare chatbots and large language AI models must “adhere to core socialist values” when designing their systems. Such adherence is to be demonstrated by refusing to create content that “incites subversion of state power and the overthrow of the socialist system, endangers national security and interests, damages the image of the country, incites secession from the country, undermines national unity and social stability, promotes terrorism, extremism, national hatred and ethnic discrimination, violence, obscenity and pornography.” To be fair, the rules, 41 in number, are for now only proposals. Then again, they were drafted by the influential Beijing Municipal Health Commission. The more things change …
     
  • In Pennsylvania, control over AI isn’t so draconian. In fact, more than a few people want more of it. Among them are state representatives miffed by the idea of algorithms declining health insurance claims with no human oversight. Pittsburgh’s NPR affiliate has the story.
     
  • Select vendor news straight from the sources:
     
  • Research & education roundup:
     
Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
