News You Need to Know Today

Tech trends afoot / AI reporter’s notebook / Partner news

Tuesday, September 10, 2024

In cooperation with Northwestern and Nabla

Generative AI | LLM | SLM

AI technical trends to watch for (and not just in healthcare)

Many gen AI end users are finding that large language models (LLMs) defy easy infrastructure setup and affordable ongoing management. One budding option may be to go with small language models (SLMs) instead. 

A lot of people are likely to do exactly that over the next 12 months, according to the latest InfoQ Trends report. “Companies like Microsoft have released Phi-3 and other SLMs that [people] can start trying out immediately to compare the cost and benefits of using an SLM versus an LLM,” the report’s authors write. “This new type of language model is also perfect for edge computing-related use cases to run on small devices.”
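For readers who want to kick the tires, here is a minimal sketch of trying one such SLM locally. It assumes the Hugging Face transformers library is installed and uses the microsoft/Phi-3-mini-4k-instruct checkpoint as an example; any small model you are evaluating against an LLM could be swapped in, and the prompt is purely illustrative.

    # Minimal local SLM trial, assuming Hugging Face transformers is installed
    # (pip install transformers) and the Phi-3 mini checkpoint can be downloaded.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="microsoft/Phi-3-mini-4k-instruct",  # assumed checkpoint; swap in any SLM
        device_map="auto",                          # falls back to CPU if no GPU is present
        trust_remote_code=True,                     # needed for checkpoints that ship custom code
    )

    prompt = "Summarize the trade-offs between small and large language models."
    out = generator(prompt, max_new_tokens=128, do_sample=False)
    print(out[0]["generated_text"])

Pointing the same script at different checkpoints is a quick way to run the cost-versus-benefit comparison the report describes.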

InfoQ reaches something like 1.5 million readers around the world. Its content is written by and for software engineers and developers, but much of it, like the Trends report, is accessible to, and of interest to, general technology watchers. 

Here are five more trends to anticipate, as enumerated by software architect Srini Penchikala and co-authors at InfoQ. 

1. The future of AI is open and accessible. 

“We’re in the age of large language models and foundation models,” the authors write. “Most of the models available are closed source, but companies like Meta are trying to shift the trend toward open-source models.”

‘Even though most currently available models are closed source, companies are trying to shift the trend toward open-source models.’ 

2. Retrieval Augmented Generation (RAG) will become more important.

RAG techniques, which combine LLMs with external knowledge bases to optimize outputs, “will become crucial for [organizations] that want to use LLMs without sending them to cloud-based LLM providers,” Penchikala and co-authors explain.  

‘RAG will also be useful for applicable use cases of LLMs at scale.’
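At its simplest, the pattern is retrieve, augment, generate: find the most relevant passage in a local knowledge base, prepend it to the prompt, and let the model answer from that context. Below is a minimal sketch using only the Python standard library; the bag-of-words scorer and the generate_with_local_llm() placeholder are illustrative stand-ins, not any particular product's API.

    # Toy RAG loop: retrieve the best-matching document, then augment the prompt.
    from collections import Counter
    import math

    documents = [
        "Phi-3 is a small language model released by Microsoft.",
        "Retrieval augmented generation grounds model output in external documents.",
        "Edge devices often lack the memory to host a large language model.",
    ]

    def score(query: str, doc: str) -> float:
        """Cosine similarity over word counts (a real system would use embeddings)."""
        q, d = Counter(query.lower().split()), Counter(doc.lower().split())
        dot = sum(q[w] * d[w] for w in q)
        norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in d.values()))
        return dot / norm if norm else 0.0

    def generate_with_local_llm(prompt: str) -> str:
        # Hypothetical placeholder: call a self-hosted model here instead of a cloud provider.
        return "[model answer grounded in the retrieved context]"

    query = "Why pair a self-hosted model with retrieval?"
    context = max(documents, key=lambda doc: score(query, doc))    # retrieve
    prompt = f"Context: {context}\n\nQuestion: {query}\nAnswer:"   # augment
    print(generate_with_local_llm(prompt))                         # generate

Because retrieval and generation both run locally in this setup, no documents ever leave the organization, which is the privacy argument the authors make for RAG with self-hosted models.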

3. AI-powered hardware will get much more attention with AI-enabled GPU infrastructure and AI-powered PCs.

AI-integrated hardware is “leveraging the power of AI technologies to revolutionize the overall performance of every task,” the authors observe. “AI-enabled GPU infrastructure like Nvidia’s GeForce RTX and AI-powered PCs like Apple M4, mobile phones and edge computing devices can all help with faster AI model training and fine-tuning as well as faster content creation and image generation.”

‘This is going to see significant development in the next 12 months.’

4. AI agents, like coding assistants, will also see more adoption, especially in corporate application development settings.

Autonomous agents and GenAI-enabled virtual assistants are “coming up in different places to help software developers become more productive,” the authors remark, noting that examples of AI agents include GitHub’s Copilot, Microsoft Teams’ Copilot, Devin AI, Mistral’s Codestral and JetBrains’ local code completion. 

‘AI-assisted programs can enable individual team members to increase productivity or collaborate with each other.’

5. AI safety and security will continue to be important in the overall management lifecycle of language models. 

Tip: Train your employees in proper data privacy and security practices, and “make the secure path the path of least resistance for them so everyone within your organization easily adopts it.”

‘Self-hosted models and open-source LLM solutions can help improve the AI security posture.’

The article draws from a podcast hosted by the InfoQ editorial team. Read the piece or listen to the podcast

 


The Latest from our Partners

Nabla Now Supports 35 Languages to Advance Culturally Responsive Care - Clinicians can now leverage AI-powered documentation in any of the 35 languages supported to cut down on charting time, focus on patient care and enjoy better work-life balance. Patients receive care instructions in their preferred language, ensuring clarity and compliance throughout their healthcare journey. Read more here
 


 


Industry Watcher’s Digest

Buzzworthy developments of the past few days. 

  • The U.S. Commerce Department wants proof of strong safety and security measures from AI developers and cloud suppliers. In an announcement posted Sep. 9, Commerce’s Bureau of Industry and Security says the aim is to minimize the risk of serious damage from cyberattacks. The proposal would require the affected parties to report details of their work on frontier AI specifically. Commerce Secretary Gina Raimondo says the action will “help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security.” The move comes just a week after Raimondo was featured in a segment of “60 Minutes.” “If you think about national security in 2024, it’s not just tanks and missiles,” she told CBS News correspondent Lesley Stahl Sep. 1. “It’s technology. It’s semiconductors. It’s AI. It’s drones. And the Commerce Department is at the red-hot center of technology.”
     
  • When Michelle Mello, JD, PhD, thinks about rules for healthcare organizations that use AI tools, a lot of people wait to hear what she’ll say. And those who don’t should. The Stanford professor of law and health policy has authored more than 250 peer-reviewed articles on topics from AI to biomedical research ethics. Her research concentrates on the effects of law and regulation on healthcare delivery and population health outcomes. “Healthcare organizations are supposed to be investigating these [AI] tools to see if they’re biased, but I have doubts that many organizations will really know how to do that in a meaningful way,” Mello says in a Q&A with the Regulatory Review. “And no one’s offering up money or technical assistance to help them.” Read the rest
     
  • Performance drift and data generalization: 2 issues that can keep FDA product reviewers up at night. The first has to do with algorithms testing well enough to gain the agency’s OK only to slowly deteriorate in real-world clinical use. The second is about making sure training and validation datasets are “large, cleaned and representative” of the populations their respective models will serve. The insights are from FDA scientific reviewer Luke Ralston, who offered them at a medical conference last week that was covered by MedCity News. 
     
  • Here’s a dilemma you’re likely to hear more about. How capable might an AI model become of predicting care preferences for patients with cognitive impairment? The question came up at a meeting of the European Respiratory Society Sep. 7 in Vienna. Concerns arise “when the technology’s embedded ethical values do not align with the patient’s priorities for clinical decision-making,” reports journalist Hayden Klein, who covered the event for the American Journal of Managed Care. “For instance, AI could prioritize specific medical treatments and measurable outcomes over a patient’s quality of life, potentially undermining patient autonomy if they have different goals in mind.” More here
     
  • It’s time to lower expectations around AI in healthcare. Part of the dial-down is accepting that the charges of hype were at least partly right. That’s the view of Spencer Dorn, MD, MPH, MHA, vice chair and professor of medicine at the University of North Carolina. Stating his case in Forbes, Dorn suggests AI will eventually transform healthcare. But it’s going to take years. In the meantime, he recommends taking a few steps to better align hopes with reality. One of these: Accept incremental gains. “Organizations must resist audacious claims and ground in reality,” he writes. “And, most of all, they should reflect on who they are, what they do and how they can do it better—with or without AI.” Read the whole thing
     
  • Governance may be the least exciting of all AI duties and responsibilities. But it’s also among the most essential. This comes through in coverage of a recent webinar hosted by the Los Angeles-based law firm Sheppard Mullin Richter & Hampton. Partner Carolyn Metnick, JD, explained how a governance framework “serves to operationalize an organization’s values,” TechTarget’s Shania Kennedy reports. “By considering existing frameworks and regulations and how to best align with them, healthcare stakeholders can begin developing an AI governance program in line with their enterprise’s risk tolerance.” Unexciting? Maybe. Unskippable? For sure. 
     
  • The era of the AI PC is upon us. But healthcare providers need not rush to replace their existing desktop computers with shiny new ones packing AI inside. That’s not to say such a day won’t ever arrive. “For now, healthcare-specific capabilities are sparse,” CDW bloggers point out, “but organizations may want to keep an eye on the possibilities.”
     
  • The AI-ready iPhone 16 hogged the attention at Apple’s new product unveiling Monday. Lost in its shadow were two health features—sleep apnea support in Apple Watch and clinical-grade hearing aid technology in AirPods Pro 2. For more on those two straight from the source, click here
     
  • Time is out with its picks for the 100 most influential people in AI. I count four who are specific to healthcare—Insitro CEO Daphne Koller, Abridge co-founder Shiv Rao, Aeye Health CEO Zack Dvey-Aharon and Viz.ai co-founder Chris Mansi. Did I miss anyone?
     
  • Recent research in the news: 
     
  • AI funding news of note:
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand