Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • What is Microsoft looking to do—make itself the IT department of OpenAI? Some snarky observers have been suggesting as much ever since the Redmond, Wash., behemoth pumped $13 billion into the San Fran overnight sensation. And now the snickering may turn serious. That’s because someone leaked word the two are planning a mega-supercomputer costing $100 billion or more. The beast is to be called “Stargate.” The Information had the scoop last week behind a paywall, and now Business Insider is reporting a Microsoft spokesperson “declined to comment directly on the report but highlighted the company’s demonstrated ability to build pioneering AI infrastructure.” Microsoft is “always planning for the next generation of infrastructure innovations needed to continue pushing the frontier of AI capability,” someone with close ties to Redmond whispered to BI.
     
  • The same OpenAI is holding back on releasing its Voice Engine to the general public. This is the company’s iteration of a tool for impersonating a human’s voice and speaking style. In internal testing, OpenAI has said, Voice Engine proved it can convincingly ape a person’s oral emanations from sound clips of just 15 seconds. And the audio doesn’t need to be of especially high quality. Why not put the nifty tech out there? Because, OpenAI says in a March 29 blog post, “we are taking a cautious and informed approach to a broader release due to the potential for synthetic voice misuse.” That’s probably wise in the run-up to this year’s presidential election. Then again, these tools will only get better for the 2028 craziness. We can run, but we can’t hide. Read the post.
     
  • Federal spending on AI skyrocketed from 2022 to 2023, nearly reaching $700 million as of last summer. But healthcare AI watchers shouldn’t get too excited. The lion’s share of the spree went to the military. The Brookings Institution analyzes the relevant federal contracts in considerable detail in a report posted March 26. Noting a clear shift from experimentation funding to implementation contracts, the report’s authors suggest this development, considered alongside the heavy DoD allotment, “reflects a strategic response to global competition and security challenges.” Full report here.
     
  • Think twice before hiring a chatbot as your therapist. That’s the advice of a technology researcher keenly interested in the potential and pitfalls of artificial emotional intelligence. “When emotional AI is deployed for mental health care or companionship, it risks creating a superficial semblance of empathy that lacks the depth and authenticity of human connections,” warns A.T. Kingsmith, PhD, of Ontario College of Art & Design University. The technology’s shortcomings are “particularly concerning in therapeutic settings, where understanding the full spectrum of a person’s emotional experience is crucial for effective treatment.” Read the rest at The Conversation.
     
  • To continue optimizing care while minimizing risk, healthcare AI will increasingly need to incorporate ‘dynamic consent.’ This is what you call it when patients and research participants can give or revoke data permission at will, depending on how they feel about what the bytes are to be used for. Sounds complicated, but evidently in Australia they’re looking to cover all kinds of contingencies. Individual ownership of data, the authors of a newly updated report explain, “can be achieved through several approaches, such as distributed storage and homomorphic encryption of data, self-sovereign identity for management of credentials and tamper-proof decentralized dynamic consent objects.”
     
  • Hopes for the promise of healthcare AI are running high in Rwanda. Local coverage of a conference held last week in Kigali, the country’s capital city, quotes radiologist Emmanuel Rudakemwa, MD. “The Rwandan government has come up with very many innovative solutions to circumvent the issues of low human resource capacity that we have,” the physician says. “We are trying to see how AI, computing or machine learning—be it machine-machine or man-machine, or deployment of the internet of things—can support the little human resource[s] that we have.”
     
  • The international AI community, if there is such a thing, now has a knight to call its own. It’s Demis Hassabis, CEO and co-founder of Google’s AI subsidiary DeepMind. His native U.K. awarded him the high honor for his “services to artificial intelligence.” And by the way, when still a lad in London, Sir Demis was a chess prodigy. Bet you didn’t know that till now. TechCrunch has more.
     
  • March 31 marked the 25th birthday of The Matrix. TechRadar looks back to consider the foresight of its makers. “The chilling plot at its heart—namely the rise of an artificial general intelligence (AGI) network that enslaves humanity—has remained consigned to fiction more so than it’s ever been considered a serious scientific possibility,” writes channel editor Keumars Afifi-Sabet. “With the heat of the spotlight now on AI, however, ideas like the Wachowskis’ are beginning to feel closer to home than we had anticipated.” Read the piece.
     
  • Research headlines of note:
     
Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.