News You Need to Know Today

Overheard around hospital halls | AI news blog | Partner news

Friday, September 27, 2024

In cooperation with Northwestern and Nabla

AI in healthcare

Said and heard this week in and around healthcare

 

10 notable quotes about AI from the past 5 days

 

POINT: ‘[AI] is not perfect, but people are picking up dimes of productivity savings.’

—AI “bull” George Lee, co-head of Goldman Sachs Global Institute. (Source: “Will A.I. Be a Bust? A Wall Street Skeptic Rings the Alarm.” The New York Times, Sep. 23)

 

COUNTERPOINT: ‘Overbuilding things the world doesn’t have use for, or is not ready for, typically ends badly.’

—AI “bear” Jim Covello, head of stock research at Goldman Sachs. (Source: “Will A.I. Be a Bust? A Wall Street Skeptic Rings the Alarm.” The New York Times, Sep. 23) 

 

‘[T]echnology is increasingly core to our experience of healthcare. But it’s also true that when we leave technology to its own devices, we run the risk of technology really being an orchestra without a conductor.’

—Chris DeRienzo, MD, chief physician executive of the American Hospital Association. (Source: “AI Is the Only Unchecked US ‘Sector of Consequence,’ Says Healthcare Exec.” Newsweek, Sep. 26) 

 

‘AI technologies are quickly becoming the beating heart of modern medicine. The physicians, clinicians and medical specialists of tomorrow must learn to be adept users and knowledgeable about how AI works and what it can contribute.’

—Sharief Taraman, MD, chief executive officer of Cognoa. (Source: “AI in the Syllabus: Preparing Tomorrow’s Doctors Today.” Forbes, Sep. 24)

 

‘[T]he use of AI in the healthcare setting is exacerbating long-standing issues that healthcare professionals have bargained over, including staffing ratios and discretion in patient care.’

—Analysts Patrick Oakford, Josh Bivens and Celine McNicholas. (Source: “Federal AI legislation: An evaluation of existing proposals and a road map forward.” Economic Policy Institute, Sep. 25)

 

‘For as much good as AI can bring to healthcare organizations, there’s also the bad. It will and should be a long time before any provider/clinician accepts the output of an AI application at face value.’

—MJ Stojak, managing director of the data, analytics and AI practice for Pivot Point Consulting. (Source: “The Good, the Bad, the Ugly When Leveraging AI in Healthcare.” HIT Consultant, Sep. 25) 

 

‘AI applications in medical writing within pharmaceutical and drug production sectors promise increased efficiency, accuracy and innovation, contributing to the development of safer and more effective therapies for various medical conditions.’

—Research and Markets. (Source: “Artificial Intelligence in Medical Writing Market Research Report 2024.” News release, Sep. 25) 

 

‘Physicians and healthcare professionals often imagine AI as a futuristic, benevolent, childlike humanoid with a unique ability to love, as depicted in the 2001 Steven Spielberg movie A.I. Artificial Intelligence. However, the United States Department of Justice disagrees.’

—Muhamad Aly Rifai, MD, chief executive, chief psychiatrist and internist at Blue Mountain Psychiatry in Pennsylvania’s Lehigh Valley. (Source: “The use of artificial intelligence in the enforcement of healthcare regulations.” KevinMD, Sep. 24) 

 

‘AI companies offering products used in high-risk settings owe it to the public and to their clients to be transparent about their risks, limitations and appropriate use. Hospitals and other healthcare entities must consider whether AI products are appropriate and train their employees accordingly.’

—Ken Paxton, attorney general of Texas. (Source: “Texas attorney general, generative AI company settle over accuracy allegations.” Healthcare Dive, Sep. 23) 

 

‘Restructuring around a core for-profit entity formalizes what outsiders have known for some time: OpenAI is seeking to profit in an industry that has received an enormous influx of investment in the last few years. [So much for] OpenAI’s founding emphasis on safety, transparency and an aim of not concentrating power.’

—Sarah Kreps, director of Cornell University’s Tech Policy Institute. (Source: “OpenAI as we knew it is dead.” Vox, Sep. 26) 

 


The Latest from our Partners

Nabla Now Supports 35 Languages to Advance Culturally Responsive Care - Clinicians can now leverage AI-powered documentation in any of the 35 languages supported to cut down on charting time, focus on patient care and enjoy better work-life balance. Patients receive care instructions in their preferred language, ensuring clarity and compliance throughout their healthcare journey. Read more here
 


 


Industry Watcher’s Digest

Buzzworthy developments of the past few days. 

  • If you’re not yet waist-deep in development of GenAI products, wait no longer to wade further in. That’s a little complimentary advice for software vendors. It comes from Bain & Company. In an extensive new report, the big business consultancy notes how easy it is to get lost in the hype around generative AI. But look at the private equity investors who are excelling in the category. They’re doubling down on plans for GenAI tools known to bring measurable benefits. And they’re envisioning ways to “enhance or reimagine” product offerings without getting carried away by big dreams of lucrative—and quick—ROI. “AI needs to be part of long-term strategic planning for any software business, both in terms of offensive and defensive moves,” the authors write. “Right now, though, it is critical to get moving on piloting and deploying these technologies in the areas that will pay off today.” Bain has posted excerpts here and the full report here
     
  • There’s no shame in admitting you don’t have a good answer. Try telling that to the latest generation of large language model chatbots. A new study shows the current crop tends to give incorrect answers when it would do better to confess ignorance. Worse, the same research found that people are a little too eager to accept iffy answers as authoritative. The bots’ proclivity for offering opinions beyond the scope of their abilities “looks to me like what we would call bullshitting,” Mike Hicks, a philosopher of science and technology at the University of Glasgow, tells Nature. GenAI, he adds, “is getting better at pretending to be knowledgeable.”
     
  • Doctors using GenAI to draft messages for patients can be a pretty good thing. Doctors sending these messages without checking for accuracy can be a Very Bad Thing. Athmeya Jayaram, PhD, a researcher at the Hastings Center, a bioethics research institute in Garrison, N.Y., nails the nub of the problem. “When you read a doctor’s note, you read it in the voice of your doctor,” he tells the New York Times. “If a patient were to know that, in fact, the message that they’re exchanging with their doctor is generated by AI, I think they would feel rightly betrayed.” And if the message includes errors, inaccuracies or misleading advice—see item immediately above—the ultimate outcome could be a lot worse than feelings of betrayal. 
     
  • It’s been said before and will be said again: Data for training AI is a finite resource. That may not seem possible, given the mountains of multimodal content getting created and digitally posted every day. But when it comes to training AI, quality matters as much as—if not more than—quantity. “Access to quality data is the lifeblood of AI innovation,” Lisa Loud, executive director of the privacy and open-source advocacy group Secret Network Foundation, tells The Street. “Better data doesn’t just enhance AI, it ensures its relevance and fairness.” Read the article.
     
  • A database is a database. Unless it’s a vector database. In which case it can handle generative AI tasks with particular aplomb. That’s because vector databases “focus on the unstructured, feature-rich vectors that AI systems feed off,” a contributing writer at InfoWorld explains in a feature posted Sep. 23. “Driven by the growing importance of vector search and similarity matching in AI applications, many traditional database vendors are adding vector search capabilities to their offerings.” Read the whole thing
     
  • HHS is adding a division to offer technical expertise with special focus on AI. Micky Tripathi, PhD, whose HHS titles include acting chief AI officer, announced the change at a health IT summit this month. “We will have teams that will provide digital services and technical assistance to all of our operating and staffing divisions so that they don’t have to worry about going out and hiring teams for that kind of expertise,” Tripathi, who will oversee the new division, told attendees, according to GovCIO. “They will be on demand and will help with consulting and the enablement of technologies.”
     
  • The generative AI vendor whose feet are being held to the proverbial fire by Texas’s attorney general is speaking out. AG Ken Paxton alleged Pieces Technologies unlawfully exaggerated its algorithm’s accuracy at writing clinical notes and documentation, misleading several hospitals in the Lone Star State. Pieces agreed to terms that, while not punitive, must have smarted anyway. At the time, a Texas TV station called the case a “first-of-its-kind investigation into AI in healthcare.” Now comes a prepared statement from Pieces claiming that a press release from the AG’s office “misrepresents the Assurance of Voluntary Compliance (AVC) into which Pieces entered.” Pieces adds: “The AVC makes no mention of the safety of Pieces products, nor is there evidence indicating that the public interest has ever been at risk.” HIPAA Journal has more
     
  • What a week for OpenAI. The company announced it will partially restructure as a for-profit corporation and, in the process, extend equity to CEO Sam Altman. Maybe relatedly, or maybe not, one of Altman’s top lieutenants, CTO Mira Murati, resigned. Partially is a hedge word, as the plan seems to be keeping an arm in place as a nonprofit that will have a minority ownership stake in the for-profit corporation. OpenAI said in prepared remarks that it “remain[s] focused on building AI that benefits everyone, and we’re working with our board to ensure that we’re best positioned to succeed in our mission. The non-profit is core to our mission and will continue to exist.” Press coverage is everywhere you’d want it to be. 
     
  • Recent research in the news: 
     
  • Notable FDA Approvals:
     
  • Funding news of note:
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand

Innovate Healthcare