China leads the world in business use of generative AI, with 83% of organizations saying they’re active adopters. However, using the technology and making it work for your purposes are two different things. And the U.S. leads all countries in terms of full implementation, at 24% (vs. 19% for China). Researchers with SAS and Coleman Parkes Research drew the findings from a survey of 1,600 decision-makers around the world who have responsibility for strategy around GenAI or data analytics. Respondents represented numerous sectors, healthcare among them. Surveyed organizations ranged in size from 500 employees to more than 10,000. Posting the results and an analysis July 9, SAS offers tips on meaningfully embedding the technology into existing or unfolding operations. Here are six.
1. Use data management tools to ensure that large language models (LLMs) are fed the highest-quality data and prompts—data that is both auditable and traceable. These tools can provide user privacy and security, with robust data protection measures, including data minimization, anonymization and encryption, the report’s authors note, ensuring that sensitive information remains safeguarded. (A minimal sketch of the minimization idea appears after this list.) “Furthermore, workflows can be automated for the shortest, most direct route to building or tuning an LLM.” More: ‘Organizations should refer to governance and compliance policies for an essential framework within which data management tools can be applied.’
2. Ensure that key decision-makers are AI-literate before they develop your comprehensive GenAI strategy. “This requires time and will most often involve hiring outside experts to advise your team,” the authors write. To this SAS’s executive VP and chief technology officer, Bryan Harris, adds: ‘With any new technology, organizations must navigate a discovery phase, separating hype from reality, to understand the complexity of real-world implementations in the enterprise. We have reached this moment with generative AI.’
3. Identify your best GenAI use case to deliver speedy return on investment. The first step in successfully deploying GenAI is to identify high-impact use cases for the technology, which helps deliver a measurable return on investment as quickly as possible, the authors suggest. Expounding on this point is Marinela Profi, strategic AI advisor at SAS: ‘LLMs alone do not solve business problems. GenAI is nothing more than a feature that can augment your existing processes, but you need tools that enable their integration, governance and orchestration. And most importantly, you need people who can use tools to ensure the appropriate level of orchestration.’
4. Make sure your GenAI software vendors can integrate with existing workflow and decisioning platforms. GenAI is an ideal contributor to hyper-automation, which facilitates the automation of all feasible tasks within an organization, the authors state before adding: ‘GenAI excels in summarizing vast amounts of data to support decisioning workflows, enabling real-time interactions aligned with your preferred business processes.’
5. To facilitate measurable outcomes, use a decisioning workflow system to infuse GenAI into existing business processes. LLMs on their own can execute only a few of a use case’s tasks, the authors point out. (A second sketch after this list illustrates the division of labor.) More: ‘Organizations still need an end-to-end process that orchestrates the AI life cycle while enhancing the transparency and governance of LLMs.’
6. Prepare for snags. “Across all organizations, GenAI use can create anxieties about data privacy, security and lack of governance—along with concerns about technology dependence and its potential for amplifying bias,” the authors write. “Many of these organizations have not fully prepared themselves to comply with regulations and do not have GenAI governance in place or ways to monitor the technology.” More: ‘Our research shows that businesses are rushing into GenAI before establishing adequate systems of governance, which could result in serious issues with quality and compliance later.’
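Tip 1’s data-minimization idea is concrete enough to sketch in code. What follows is a minimal, hypothetical Python illustration; the redaction patterns are placeholders of our own devising, not SAS tooling or the report’s method.

```python
import re

# Hypothetical redaction patterns; real deployments would use far more
# robust PII detection than these illustrative regexes.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),         # email-shaped strings
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),  # US-style phone numbers
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),              # SSN-shaped strings
]

def minimize(text: str) -> str:
    """Swap identifier-shaped substrings for placeholder tokens
    before the text reaches any model."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

note = "Patient can be reached at jdoe@example.com or 555-123-4567."
prompt = f"Summarize the following note:\n{minimize(note)}"
print(prompt)  # the model would see [EMAIL] and [PHONE], not the real values
```

A production pipeline would layer on named-entity redaction, encryption and the audit trail the report calls for; the point here is only that identifiers can be stripped before a prompt ever leaves the organization.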
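Tip 5’s division of labor can be sketched the same way. In this invented fragment the LLM performs exactly one task, summarization, while deterministic code makes the decision and keeps the audit trail; the Claim fields, rule and summarizer stub are illustrative, not any vendor’s API.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    summary: str = ""
    approved: bool = False
    audit_log: list = field(default_factory=list)

def summarize_with_llm(text: str) -> str:
    # Stand-in for a governed LLM call; tip 1's minimization would run first.
    return text[:200]

def decide(claim: Claim) -> Claim:
    claim.summary = summarize_with_llm(claim.text)  # the one GenAI task
    claim.audit_log.append("llm_summary_generated")
    # Every other step stays deterministic, testable and auditable.
    claim.approved = "urgent" not in claim.summary.lower()
    claim.audit_log.append("decision=" + ("approve" if claim.approved else "review"))
    return claim

result = decide(Claim(text="Routine follow-up visit."))
print(result.approved, result.audit_log)
```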
There’s more in SAS’s full research report (contact info needed for access) and interactive data dashboard.
Nabla Rolls Out at Carle Health Through Epic Integration
After a successful pilot program, Nabla will be implemented across Carle Health’s Illinois-based multi-specialty physician group practice and gradually expanded to 1,500 Carle Health providers throughout the year. Clinicians will use the integrated version of Nabla within Epic on their desktop or mobile devices, experiencing the full value of Nabla’s ambient AI.
Buzzworthy developments of the past few days.
- Ambient documentation AI has won over clinicians en masse at Mass General Brigham. After drafting notes with close to 90% accuracy in a pilot of 500 patient-clinician interactions, the smartphone-based technology is headed for broad adoption across a good chunk of the Harvard-affiliated enterprise. Its next phase will put it in the hands of 800 healthcare workers. That’s double the number that planners initially intended. “Because there was such overwhelming interest in participating and such positive feedback early on, [our] senior leadership committed to making this available to more clinicians,” explains Amanda Centi, PhD, the institution’s innovation manager for emerging technologies and solutions. The secret of the toolkit’s success in making so many friends so fast: It keeps clinicians in the room with the patient rather than “putting up a barrier like a lot of technology does in our lives.” That quote is from Rebecca Mishuris, MD, MPH, chief medical information officer and vice president. Read more from both here.
- Utah has set up a new office of AI policy. Healthcare is its first order of business. Specifically, the office will concentrate out of the gate on using generative AI to improve mental healthcare. State official Margaret Busse says they’re starting with that use case—even prioritizing it over AI in K-12 education—for three reasons. One, mental health issues are widespread in Utah. Two, resources to deal with the problem at scale are short. And three, the application will yield learnings on multiple AI issues, including data privacy. Announcing the launch July 8, Gov. Spencer Cox said the work will encourage collaborations between government and industry so as to balance technological innovation with consumer protections. “I’m proud of the ‘Utah Way’ that encourages us to do this,” Cox said. “Business and government can work side by side in a way that helps everyone and elevates our state in a powerful way.”
- Nurses have nothing to fear from AI. Not only are their jobs safe, but their input on AI is essential. A blog post at Nurse.com offers this reassurance while encouraging nurses to get involved. The integration of AI in nursing “must be approached thoughtfully, with a focus on augmenting rather than replacing the human elements of care,” the blogger writes. After reminding readers of some un-automatable care components—empathy, intuition, sensitive situational awareness—he issues something of a call to arms. “Ensuring that nurses are involved in the development and implementation of AI technologies,” the blogger writes, “is crucial for creating tools that truly support their work.”
- Young people have high expectations for AI. High enough that they believe it should be used to modernize healthcare. The findings are from the U.K., but they may well reflect the disposition of the young toward AI wherever it’s up and coming. Researchers from University College London and Great Ormond Street Hospital asked U.K. residents ranging in age from 6 to 23 about their views on AI. When the questions turned to how they’d like AI to be used in healthcare, the respondents expressed openness. However, they wanted the tools to be supervised by healthcare professionals “as the young people feel there are elements of care—such as empathy and ethical decision-making—that AI cannot mimic,” according to a news item posted by UCL. “When faced [with a choice] between a human and computer, they would be more willing to trust the human.”
- ‘In the infancy of the AI age, all physicians become kindergarten teachers, unwittingly molding AI models through our very interactions with it.’ And today’s kindergartner models are tomorrow’s trusted AI toolkits. Watch how you raise them up. The word picture is fleshed out with commendable thoughtfulness at HealthyDebate.ca by Angela (Hong Tian) Dong, MD, an internal medicine resident at the University of Toronto. “Physicians will need to understand the limitations of high-yield AI systems applied in a clinical setting,” she writes, “provide ongoing expert feedback to prevent post-market algorithmic drift, and recognize their role as canaries in the coal mine if healthcare AI systems drift away from patient-centered priorities and incentives.” Wait. Canaries or kindergarten teachers? No matter. The metaphors are mixed, but the point is well-made.
- The Coalition for Health AI has lost two board members. Troy Tazbaz, director of the FDA’s Digital Health Center of Excellence, and Micky Tripathi, PhD, the national coordinator for health IT at HHS, have resigned from their CHAI roles. Tazbaz has not publicly explained his departure. Tripathi tells Fierce Healthcare he made his decision after being appointed chief AI officer and co-chair of the Biden Administration’s AI task force. Tripathi says the latter positions have him formally working across numerous federal agencies, putting him into situations that could present conflicts. The resignation, he says, is “not a reflection at all on CHAI, their mission, the strength of the collaboration they’re building, and work that they’re doing to advance responsible and trustworthy AI.”
- The quality of GenAI’s outputs reflects not only a model’s training data but also the end-user’s query. Crafting and testing the latter is the work of professionals called “prompt engineers.” Here prompt has nothing to do with being on time and everything to do with the teeing up of the queries. VentureBeat offers a nice primer, with examples, by Vidisha Vijay, a data scientist at CVS Health and an aficionado of prompt engineering. (A toy illustration follows this roundup.) “Ethically designed prompts can reduce biases and promote fairness in LLMs,” Vijay writes. “It is also essential to continuously monitor AI outputs to identify and address new biases” that may emerge over time. Read the whole thing.
- From the AI hype vs. AI substance file: “AI—whether generative AI, machine learning, deep learning or you name it—was never going to be able to sustain the immense expectations we’ve foisted upon it,” writes Matt Asay, JD, at InfoWorld. “This doesn’t mean GenAI is useless for software development or other areas, but it does mean we need to reset our expectations and approach.” Hear him out.
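As a footnote to the prompt-engineering item above, here is a toy contrast between an unstructured query and an engineered one. Both prompts are invented for illustration; neither is drawn from Vijay’s primer.

```python
# Two prompts for the same request. The second spells out role, task,
# constraints and context, leaving the model far less room to guess,
# which is the craft the primer describes.
naive_prompt = "Tell me about this drug."

engineered_prompt = """You are a clinical pharmacist writing for nurses.
Task: summarize the drug named below in three bullet points.
Constraints: plain language; flag one common interaction; do not invent statistics.
Drug: metformin"""
```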
- Recent research roundup:
- Funding news of note:
- From AIinHealthcare’s news partners: