News You Need to Know Today

Who’s afraid of AI in healthcare? | AI newsmakers

Wednesday, May 29, 2024

In cooperation with Northwestern and Nabla


The only thing to fear from AI in healthcare is fear of AI in healthcare itself

As it continues to grow in proficiency and increase its reach, healthcare AI will disappoint both those who expect it to produce miracles and those who fear it will cause catastrophes.

That’s one takeaway from two thought leaders in healthcare and technology. Writing about healthcare AI in the Chicago Tribune May 23, Sheldon Jacobson, PhD, and Janet Jokela, MD, MPH, ask:

Will the threats associated with AI in healthcare be as bad as some fear? Or will healthcare AI be relatively benign?

The answer, they suggest, will probably fall somewhere between the two.

Jacobson is a professor of computer science at the University of Illinois at Urbana-Champaign. Jokela is the senior associate dean of engagement for the Carle Illinois College of Medicine at the same university. Here are five of their supporting arguments.

1. AI has no feelings and therefore cannot replace functions that demand human interactions, empathy and sensitivity.

AI can neither feel emotion nor exercise moral agency. But it doesn’t need those qualities to help produce welcome outcomes and support sound, evidence-based judgments. “What patients want and certainly need from their physicians is their time and their attention,” Jacobson and Jokela write, “which demands patience—something that AI systems have in abundance.” More:

‘Indeed, patience may be construed by some as a surrogate for human empathy and sensitivity, while impatience may be interpreted as the antithesis of such human characteristics.’

2. AI medical systems can process massive stores of information far more quickly and thoroughly than any human clinician.

Thanks to its vast capacity for spotting patterns and connections, healthcare AI “may spot an unusual condition that could expedite a diagnosis, identify an appropriate treatment plan and save lives—all at a lower cost,” Jacobson and Jokela point out.

‘AI models may even identify a novel condition by exhaustively eliminating the possibility of all possible known diseases, effectively creating new knowledge by a process of elimination.’

3. On the other hand, AI medical systems have limitations and risks.

“The plethora of data being used to train AI medical systems has come from physicians and human-centric healthcare delivery,” Jacobson and Jokela note. “If such sources of data are overwhelmed by AI-generated data, at some point, AI medical systems will be primarily relying upon data generated from AI medical care.”

‘Will this compromise the quality of care that AI medical systems deliver?’

4. Few if any healthcare personnel understand the complex statistical associations that yield medical AI outputs.

“Of course, much of clinical medicine is evidence-based, which in turn is based on clinical trials or extended observational experience,” Jacobson and Jokela write.

‘When viewed in this context, AI medical systems are taking a similar approach, with the time window to glean insights infinitesimally compressed.’

5. Anything that cannot be easily understood may elicit fear.

Healthcare AI certainly qualifies as a thing not readily comprehended, Jacobson and Jokela state. “In a world filled with uncertainty and risk, AI systems of all kinds offer tremendous benefits,” they remark. “Yet the uncertainty and risk that surround us will not miraculously go away with AI. There are no free lunches in this regard.”

‘Prudence and caution are reasonable. Efforts to stop or even slow AI advances are what we should really fear.’

Full piece here


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • OpenAI is gathering an internal group to ride herd on risk management. The more aggressive stance on safety and security is almost certainly connected to the widely suspected training of the successor model to GPT-4. Will GPT-5, or whatever name it takes, represent a first real shot at achieving artificial general intelligence, aka AGI? People are asking. At Ars Technica, seasoned tech reporter Benj Edwards says that, in the case of OpenAI’s new safety and security committee, safety “partially means the usual ‘we won’t let the AI go rogue and take over the world.’” But it also connotes the broader set of “processes and safeguards” that OpenAI itemized in a May 21 update. Ars Technica piece here, lots more coverage here.
     
  • Suddenly Nvidia is not only a masterful chip seller but also a ravenous cloud buyer. The AI-market powerhouse went from budgeting $3.5 billion for cloud services in January to almost $9 billion in May. The Information points out that much of that spending is sure to land in accounts receivable at cloud giants Amazon, Microsoft, Google and Oracle. However, there’s more afoot than simple cash-register transactions. Tech journalist Anissa Gardizy reports that Nvidia’s planned spending is to support R&D and its DGX Cloud service, as the company wants to sell its own cloud services along with its chips. DGX Cloud “helps Nvidia get closer to some of its customers and guard against a future where its largest customers compete with it,” she writes in a piece published May 28. “Amazon, Google and Microsoft are working on their own AI chips that could lessen their dependence on Nvidia.” Meanwhile, Nvidia will pitch DGX Cloud as a star cloud performer. Article here (behind paywall).
     
  • Not to be outdone, Elon Musk’s xAI has raised $6 billion to sock into the AI supercomputer it’s working on. The beast will heavily draw from lessons learned by Grok, the large language model chatbot currently available to X Premium subscribers. According to The Information, xAI will “string together” 100,000 specialized semiconductors into what Musk is calling a “gigafactory of compute.” Plenty of non-paywalled coverage here.
     
  • Planning to use AI to recruit and/or hire new employees? Watch what happens in Colorado. There the governor just signed a bill addressing the use of AI in “consumer settings,” including employment. Set to kick in Feb. 1, 2026, the law includes provisions that require both developers and deployers of AI tools to “use reasonable care to avoid discrimination through the use of ‘high risk’ AI systems.” And what will constitute a “high-risk” AI system in the Centennial State? Any that “makes, or is a substantial factor in making, a consequential decision,” including decisions with respect to employment or employment opportunities. The law firm Foley & Lardner has posted a brief but informative analysis.
     
  • You would expect cybercriminals to use generative AI tools to make their jobs easier. But few seem to be availing themselves of the technology. Why is that? Two main reasons, according to Trend Micro, the American-Japanese cybersecurity software supplier. One, large language models are pretty good at recognizing and disregarding malicious commands. And two, criminals are “generally wary of directly accessing services like ChatGPT for fear of being tracked and exposed.” Learn more.
     
  • Congressional staffers are using new AI tools to prepare for a potential shooting war with China. It’s hard to say which is scarier—lawmakers’ aides doing that or any human guinea pigs on our side not doing that. “Our war game exercise, which will test human decision-making against an AI large language model, will tangibly illustrate for staff how AI might be incorporated into national security decision-making, including how it might support or modify human choices during a crisis,” explains Jamil Jaffer, founder of the National Security Institute. The Washington Times has more.
     
  • You should eat at least one small rock per day. So said Google’s AI Search when someone asked for the recommended daily rock allowance. The thing credited “geologists at UC Berkeley” with the advice, possibly using the news parody site The Onion as a primary source. This anecdote has been making the online rounds the past few days. Whether or not Google is being unfairly goofed on may be beside the point. After all, the delusional outputs would not be without precedent. CNET has the story.
     
  • Two AI chatbots walk into a bar … This really happened, sort of. A recent TikTok video shows a human video host introducing two GPT-4-powered ChatGPT bots, then stepping aside to let them converse with one another. As recounted by the U.K. news site The Focus, the talk quickly “escalated” from an exchange of pleasantries to a discussion on quantum computing. (The punchline I’d hoped for: “That’s no quantum computer. That’s my wife!”)
     
  • Recent research roundup:
     
  • AI funding news of note:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.



© Innovate Healthcare, a TriMed Media brand
