News You Need to Know Today

AI oddities of 2024 | Partner news & views | Newswatch: Medical AI reimbursement

Friday, December 27, 2024


2024 in AI weirdness: ‘Insane’ ChatGPT and mutant rat genitals were only the beginning

Before the waning year recedes into history, let us pause to appreciate that the past 12 months brought numerous exciting advances in AI and related emerging technologies. And while we're at it, let us take one last look at some of the more spectacular flops.

It’s a fitting year-end exercise. Besides, some of the best of the worst moments delivered doses of that safest and most efficacious medicine: laughter. 

Ars Technica senior AI reporter Benj Edwards lays out a baker’s dozen of his favorite 2024 news items in a piece posted Dec. 26. In introducing his selections, Edwards suggests the “weirdness” unifying the examples may owe much to the sheer novelty of the technology.

“Generative AI and applications built upon Transformer-based AI models are still so new that people are throwing everything at the wall to see what sticks,” he offers. “People have been struggling to grasp both the implications and potential applications of the new technology.”

Here’s a sampling of Edwards’s selections for 2024’s low-end AI highlights. 

1. ChatGPT went insane. 

Early in the year, things got off to “an exciting start when OpenAI’s ChatGPT experienced a significant technical malfunction that caused the AI model to generate increasingly incoherent responses, prompting users on Reddit to describe the system as ‘having a stroke’ or ‘going insane,’” Edwards writes. “During the glitch, ChatGPT’s responses would begin normally but then deteriorate into nonsensical text, sometimes mimicking Shakespearean language.”

From Edwards’s original coverage of the incident: 

‘It [was] like watching someone slowly lose their mind either from psychosis or dementia,’ wrote a Reddit user in response to a post about ChatGPT bugging out. ‘It’s the first time anything AI-related sincerely gave me the creeps.’

2. Mutant rat genitals exposed peer review flaws.

In February, Ars Technica senior health reporter Beth Mole covered a peer-reviewed paper published in Frontiers in Cell and Developmental Biology that “created an uproar in the scientific community” when researchers discovered it contained nonsensical AI-generated figures, Edwards notes.

From Mole’s February article: 

The figures contain gibberish text and, most strikingly, one includes an image of a rat with grotesquely large and bizarre genitals, as well as a text label of ‘dck.’

3. Robot dogs learned to hunt people with AI-guided armaments. 

“At some point in recent history—somewhere around 2022—someone took a look at robotic quadrupeds and thought it would be a great idea to attach guns [and other weaponry] to them,” Edwards recalls. “A few years later, the U.S. Marine Forces Special Operations Command (MARSOC) began evaluating armed robotic quadrupeds.” 

From Edwards’s April article: 

The Thermonator—what [private marketer] Throwflame bills as the first-ever flamethrower-wielding robot dog—is now available for purchase. The price? $9,420.

4. Google Search told people to eat rocks and glue cheese to pizza.

Google’s newly launched AI Overview feature “faced immediate criticism when users discovered that it frequently provided false and potentially dangerous information in its search result summaries,” Edwards writes. “Among its most alarming responses, the system advised humans could safely consume rocks, incorrectly citing scientific sources about the geological diet of marine organisms.”

From Ars Technica’s May coverage: 

Some of the funniest examples of Google’s AI Overview failings come, ironically enough, when the system doesn’t realize a source online was trying to be funny. An AI answer that suggested using non-toxic glue to stop cheese from sliding off pizza can be traced to a mischievous online troll.

5. San Francisco hosted a robotic car-horn symphony. 

In August, San Francisco residents got a “noisy taste of robo-dystopia when Waymo’s self-driving cars began creating an unexpected nightly disturbance in the South of Market district,” Edwards recounts. “In a parking lot off Second Street, the cars congregated autonomously every night during rider lulls at 4 a.m. and began engaging in extended honking matches at each other while attempting to park.”

From the August article: 

The absurdity of the situation prompted tech author and journalist James Vincent to write on X: ‘Current tech trends are resistant to satire precisely because they satirize themselves. A car park of empty cars, honking at one another, nudging back and forth to drop off nobody, is a perfect image of tech serving its own prerogatives rather than humanity’s.’

Get the rest straight from the source.

 


The Latest from our Partners

Assistant or Associate Dean, Health AI Innovation & Strategy - UCLA Health seeks a visionary academic leader to serve as its Assistant or Associate Dean for Health AI Innovation and Strategy and Director for the UCLA Center for AI and SMART Health. This unique position offers the opportunity to shape and drive AI vision and strategy for the David Geffen School of Medicine (DGSOM) and ensure translation of innovation in our renowned Health system. This collaborative leader will work with academic leadership, faculty, staff and trainees to harness the power of AI to transform biomedical research, decision and implementation science, and precision health. Learn more and apply at:

https://recruit.apo.ucla.edu/JPF09997 (tenure track)
https://recruit.apo.ucla.edu/JPF10032 (non-tenure track)

The Healthcare Leader’s Checklist for Choosing an Ambient AI Assistant with Strong AI Governance - As ambient AI for clinicians continues to evolve rapidly, how can governance protocols keep pace?

Nabla's latest whitepaper explores:

☑️ Key considerations when evaluating ambient AI solutions.
☑️ Proven strategies Nabla employs to ensure safeguards around privacy, reliability, and safety.

Access actionable insights to support your decision-making. Download the whitepaper here.


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Healthcare AI startups that earn FDA approval are like jobseekers after landing a new job. Think about it. Both celebrate their respective wins only to realize the work has just begun. As Healthcare Finance editor Jeff Lagasse notes, it can take seven years between FDA clearance and the time reimbursement starts rewarding providers for using the technology. Larger AI suppliers can ride out the lag, but startups and smaller players may wilt under the weight of the wait. And those who miss out the most might be the patients. Lagasse speaks with an executive at one of the fortunate few, Avenda Health, which is only the fifth AI startup to secure Medicare reimbursement for its products. “Unfortunately, the way reimbursement is set up in the U.S., it disincentivizes new technologies,” says Brit Berry-Pusey, PhD, Avenda’s COO. “If you’re really pushing the boundaries and creating something novel, it means you have to start from scratch from a reimbursement perspective.”
     
  • Fortunately, the sobering realities of AI reimbursement are little match for the high ideals of AI innovators. This comes through between the lines of an article posted by The Inscriber magazine. The writer, Afaque Ghumro, looks at 10 avenues of opportunity for healthcare software developers. Improving efficiency, deepening patient engagement and personalizing treatment plans all make the list. “The transformative impact of AI and machine learning on healthcare software development services cannot be overstated,” Ghumro writes. “As AI and ML continue to evolve, the possibilities for healthcare software development remain limitless,” he adds before suggesting the eventual outcome will be nothing less than “a more efficient and patient-centered healthcare ecosystem and a brighter, healthier future for all.”
     
  • A funny thing happened to a patient as he was getting examined by a physician using an ambient AI scribe. “While [Dr.] Sharp examines me, something remarkable happens,” the patient recounts in the Washington Post. “He makes eye contact the entire time. Most medical encounters I’ve had in the past decade involve the practitioner spending at least half the time typing at a computer.” The patient was a WaPo reporter, the doctor the chief medical information officer at Stanford Health Care. The article recounting the visit places the observations in the context of the good, the bad and the troubling around generative AI in healthcare. The strength of the piece owes much to the reportorial prowess of the patient, technology columnist Geoffrey Fowler. Read the whole thing.
     
  • Remember the research showing large language AI models going senile with age? At least one physician is taking solace in those findings. AI’s cognitive falloff, he reasons, underscores why human doctors remain essential. “I find comfort in the fact that while AI may excel in some areas, it may fall short in spatial abilities and other cognitive tasks,” writes Arthur Lazarus, MD, MBA, over at KevinMD. “Instead of fearing replacement, we should focus on integration, leveraging AI’s strengths to complement our own and creating a healthcare system that is both technologically advanced and deeply humane.”
     
  • Here’s a wise doctor who dreams of an AI tool that can protect her from her own human fallibility. “Like my patients, I too am filled with nuance and self-contradiction,” admits Permanente emergency physician Mary Meyer, MD, MPH. Publishing her ruminations in MedPage Today, she wonders: “Can future AI models warn me when I am engaged in dangerous multi-tasking? Or simply too exhausted to accurately treat my patients? Can it warn my supervisors when I am spread dangerously thin?” Meyer offers the thoughts after working for a time with software that functions like a combination scribe and administrative assistant. “My wish is for an AI tool that seeks to mitigate my Achilles’ heels,” she writes, “rather than a network that views me as a cog in a system that can always be made more efficient.” Hear her out.
     
  • Never confide in an AI chatbot with anything truly personal. That’s some heartfelt advice from the consumer tech aficionado and radio personality Kim Komando. “Even I find myself talking to ChatGPT like it’s a person,” Komando confesses in USA Today. “It’s easy to think your bot is a trusted ally, but it’s definitely not. It’s a data-collecting tool like any other.” The piece is organized around 10 things you should never say to AI bots.
     
  • The partisan energy debate rages on—even though AI will soon make it obsolete. It will do so not by outarguing environmentalists but by devouring electricity. Neil Chatterjee, a former head of the Federal Energy Regulatory Commission, makes the case in the New York Post. “Our only option is to use every energy source at our disposal,” he writes. “And I mean everything: natural gas, solar, geothermal, hydropower, energy storage, nuclear, you name it.” Of course, that line of reasoning won’t win over staunch opponents of fossil fuels. So Chatterjee, who served during the first Trump administration, lays down his trump card: “If we don’t win the AI race, China will—and we don’t want to live in a world where communist China dominates AI.” Yeah, that probably won’t settle the debate, either. Read the piece.
     
  • Recent research in the news: 
     
  • Funding news of note:
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand