The year 2023 saw a shift toward open-source AI models, surging investments in generative AI and increasing AI regulation. That year also witnessed the release of 149 foundation models—the most ever, up to that point—along with private investments in AI topping $67 billion in the U.S. alone. Meanwhile, AI reached or surpassed human-level performance on many benchmarks. Why look back at 2023 now, with 2025 nearly upon us? Because this week Stanford’s Institute for Human-Centered Artificial Intelligence—aka “HAI”—is revisiting its 10 most-read blog posts of 2024. And the 2023 findings went up in April 2024. Despite the advances noted above, “concerns about job security, AI product safety and the need for regulatory measures are on the rise,” writes Shana Lynch, HAI’s content chief, in introducing the list. “[Y]ounger and more educated demographics [are] particularly attuned to AI’s impact on employment.” Here are excerpts from five of the 10 items in Lynch’s Dec. 9 listicle.

1. Large language models in healthcare: Close but not yet there. Despite the promise of LLMs in healthcare, “we have some major challenges to overcome before they can safely be integrated into clinical practice,” Lynch writes, quoting Stanford scholars. Current evaluations of LLMs, for example, “often rely on curated data rather than real-world patient information, and evaluation efforts are uneven across healthcare tasks and specialties.” More: ‘The research team recommends more rigorous, systematic assessments using real patient data and suggests leveraging human-guided AI agents to scale evaluation efforts.’
2. Much research is being written by LLMs. Too much? Stanford’s James Zou and his team found that nearly 18% of computer science papers and 17% of peer reviews include AI-generated content, Lynch writes. The rapid adoption “underscores both the potential benefits and ethical challenges of LLMs in research,” she adds. More: ‘Zou argues for more transparency in LLM usage, noting that, while AI can enhance clarity and efficiency, researchers must remain accountable for their work to maintain integrity in the scientific process.’
3. GenAI generates erroneous medical references. Researchers found that even the most advanced LLMs frequently hallucinate unsupported claims or cite irrelevant sources, with retrieval-augmented ChatGPT-4 producing unsupported statements up to 30% of the time, Lynch writes. More: ‘As AI tools become increasingly common in healthcare, experts urge for more rigorous evaluation and regulation to ensure these systems provide reliable, evidence-based information.’
4. NLP helps detect mental health crises. As mental health needs surge, Stanford medical students Akshay Swaminathan and Ivan Lopez developed a natural language processing tool called Crisis Message Detector 1 (CMD-1) to improve response times for patients in crisis. “Tested on data from mental health provider Cerebral,” Lynch reports, “CMD-1 achieved 97% accuracy in identifying urgent cases and reduced patient wait times from over 10 hours to 10 minutes.” ‘The project highlights the potential of AI to support clinicians by streamlining workflows and enhancing crisis response in healthcare settings, and underscores the importance of collaborative, interdisciplinary development to meet clinical needs effectively.’ (A rough, hypothetical sketch of this kind of message triage follows below, after the list.)
5. Privacy in an AI era: How do we protect our personal information? Potential for misuse, particularly with LLMs, runs the gamut from web data scraped for training to AI-driven threats like voice cloning and identity theft. To address these risks, Stanford HAI’s Jennifer King and Caroline Meinhardt suggest that stronger regulatory frameworks are essential, Lynch notes. ‘They advocate for a shift to opt-in data sharing, a supply chain approach to data privacy, and collective solutions like data intermediaries to empower users in an era dominated by AI and vast data collection.’
Read the rest.
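CMD-1 itself isn’t public, and the write-up above includes no code, so the snippet below is only a minimal, hypothetical sketch of the general pattern such a triage tool follows: score each incoming message with a text classifier and push anything above a risk threshold to the front of the clinician queue. The toy training data, model choice and threshold are assumptions for illustration, not details from the Stanford project.

```python
# Hypothetical sketch of crisis-message triage -- NOT the actual CMD-1 implementation.
# Assumes a small set of labeled historical messages is available for training.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy examples; a real system would train on thousands of labeled messages.
train_texts = [
    "I want to hurt myself tonight",
    "Can I reschedule my appointment to Friday?",
    "I don't see the point in living anymore",
    "What is my copay for the next visit?",
]
train_labels = [1, 0, 1, 0]  # 1 = possible crisis, 0 = routine

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

def triage(message: str, threshold: float = 0.5) -> str:
    """Label an incoming message based on its predicted crisis risk."""
    risk = model.predict_proba([message])[0][1]
    return "URGENT: route to clinician now" if risk >= threshold else "routine queue"

print(triage("I can't stop thinking about ending it all"))
```

In a setup like this, the model only reorders the queue; a human responder still reviews every flagged message before any clinical action is taken.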
Buzzworthy developments of the past few days.

- IT departments are budgeting more for AI in 2025 than for any other single area of investment. The spending will cover not only technologies but also skills, as organizations move to train current staff and hire new talent. The projection is from the IT education company Skillsoft, which surveyed 5,200 IT leaders and employees across numerous industries between May and September. AI was named a top investment by 47% of respondents. It beat out other high priorities such as cybersecurity (42%), cloud (36%) and infrastructure (33%). “Quite simply, skilling, upskilling and reskilling are imperatives,” Skillsoft remarks in its survey report. “IT decisionmakers who invest in their people will lead the charge in AI today—and all other innovations tomorrow.”
- Star Wars healthcare is here. Well, it’s in China, anyway. At a Beijing facility dubbed “Robodoc Hospital,” the droid that treated semiconscious Luke Skywalker has nothing on robotic doctors and nurses “powered by AI.” According to coverage in Medium, the nonhuman care pros can diagnose and treat up to 3,000 patients per day without human intervention, interact with patients remotely via smartphone apps and continuously learn from their experiences. Oh, and they never get tired. Fortunately, they’re not intended to completely replace human healthcare workers. “Rather, the goal is to augment and empower the medical profession, freeing up human practitioners to focus on the most complex and high-touch aspects of care,” the outlet reports. “The AI systems can handle the routine tasks [so] doctors can devote more time to patient education, emotional support and advanced treatments.” Read the rest.
- Nurses who want to meld traditional skills with advanced AI know-how have a new track to pursue in higher education. The option is available at Florida State University, which has opened what it says is the country’s first master’s degree program in nursing with a concentration in AI applications. “We are seeing hospitals and clinics begin to implement artificial intelligence,” says FSU dean of nursing Jing Wang, PhD, MPH, RN. “Our master’s program will create a new generation of nursing professionals ready to navigate and leverage these innovative skills and knowledge.”
- AI will struggle to transform healthcare if it doesn’t receive an infusion of synthetic data. Because it lets algorithms be trained without tapping sensitive patient data, synthetic data could “vastly speed up the development and approval of new medical AI tools,” tech vendor CEO Dustin Salinas insists in Forbes. Salinas, whose company is called Just Going Viral, says he’s been working with the FDA to “secure the promise of synthetic data for future applications,” but progress has been understandably sluggish. Given a chance, synthetic data can go a long way toward pushing AI into healthcare safely and quickly, he maintains. “It lets us test AI systems in a controlled and risk-free environment. No real patients are involved, so we can put AI through all kinds of tests before it ever interacts with a live case.”
- What’s birthed in California often strays from California. So it’s worth re-noting nationally that, in September, Gov. Gavin Newsom signed into law the Golden State’s AI in Healthcare Services bill. This means providers will need to issue disclaimers and/or instructions when they use generative AI in various situations. Briefing stakeholders on the import of the development, attorneys at the Morgan Lewis law firm point out that the law, AB 3030, doesn’t apply to every GenAI-generated communication. “Most importantly, if a communication is ‘read and reviewed by a human licensed or certified healthcare provider’ before being disseminated, AB 3030 does not apply,” they write. “The law also does not impact GenAI communications unrelated to patient clinical information, such as communications for appointment scheduling or billing.” Read their news analysis.
- The murder of UnitedHealthcare CEO Brian Thompson has reignited interest in payers’ use of AI for coverage decisions. Drawing particular scrutiny is the lawsuit UHC and Humana are fighting over that application of the technology. Focusing on the challenges faced by legal professionals pursuing fact discovery involving digital data—aka “eDiscovery”—analysts from the computer forensics company HaystackID note “mounting pressure” to develop enhanced capabilities in AI system auditing. “Looking ahead, the precedents established in this case will likely inform eDiscovery practices for years to come,” they write. “As AI systems become more deeply embedded in critical decision-making processes across sectors, the lessons learned from this healthcare controversy will guide future investigations.”
- Generative AI has achieved penetration rates twice those of personal computers and the internet over the past two years. That’s according to a working paper from the National Bureau of Economic Research. “GenAI has a 39.5% adoption rate after two years, compared with 20% for the internet after two years and 20% for PCs after three years (the earliest we can measure it),” the paper’s authors write. “This is driven by faster adoption of generative AI at home compared with the PC, likely because of differences in portability and cost.”
- ‘The urgency to adopt AI in healthcare has never been greater. The challenges are immense, but so are the opportunities.’ That’s from the introduction to a new book, 100 AI Applications for Hospitals: A Practical Guide to Revolutionizing Healthcare Operations. The author is Malak Halawy, MPH, senior partner with a company called First Principles Healthcare in Dubai, U.A.E. Halawy self-published the work, so its quality is 100% the author’s own responsibility. That said, going by the first section as viewable at Amazon, it looks nicely organized and well worthwhile.
- Recent research in the news:
- Funding news of note:
- From AIin.Healthcare’s news partners: