AI won’t fulfill its promise to transform American medicine if it isn’t appropriately integrated, step by step, across U.S. healthcare. This evolutionary process will have to be coaxed along with high levels of methodological rigor, risk awareness and nimble adaptability—not just from the government but from all interested parties.

The conviction comes through between the lines of a June 17 blog post authored by Troy Tazbaz, director of the FDA’s Digital Health Center of Excellence. Launched in 2020 as a branch of the agency’s Center for Devices and Radiological Health (CDRH), the DHCoE works to “foster responsible AI innovations in healthcare,” Tazbaz reminds readers, “while ensuring these technologies, when intended for use as medical devices, are safe and effective for the end-users, including patients.”

Noting the center’s desire to encourage collaboration between healthcare AI stakeholders and its own people, Tazbaz offers three observations to help foster the requisite harmony.

1. Life-cycle planning for AI models can reduce risk. By adopting agreed-upon standards and best practices covering the various phases of AI models’ lifespans, stakeholders can actively help mitigate risks for the long term, Tazbaz suggests. “This includes, for instance, approaches to ensure that data suitability, collection and quality match the intent and risk profile of the AI model that is being trained,” he writes. More: ‘The healthcare community together could agree on common methodologies that provide information to a diverse range of end users, including patients, on how the model was trained, deployed and managed through robust monitoring tools and operational discipline.’
2. Quality-assurance measures can positively impact clinical outcomes. Continuous performance monitoring before, during and after deployment is one way to carry QA through an AI model’s life cycle, Tazbaz points out. Meanwhile, transparency and accountability “can help stakeholders feel comfortable with AI technologies.” More: ‘Quality assurance and risk management, right-sized for healthcare institutions of all sizes, can help provide confidence that AI models are developed, tested and evaluated on data that is representative of the population for which they are intended.’
3. Shared responsibility can help ensure success. Efforts around AI quality assurance “have sprung up at a grassroots level across the U.S. and are starting to bear fruit,” Tazbaz writes. “Solution developers, healthcare organizations and the U.S. federal government are working to explore and develop best practices for quality assurance of AI in healthcare settings.” More: ‘These efforts, combined with FDA activities relating to AI-enabled devices, may lead to a world in which AI in healthcare settings is safe, clinically useful and aligned with patient safety and improvement in clinical outcomes.’
Read the whole thing.
Buzzworthy developments of the past few days.

- Have healthcare AI suppliers really ‘overindulged’ in generative AI? Market evaluators at HFS Research could make the case. In a June report covering 36 vendors, HFS executive research leader Rohan Kulkarni and co-authors note that, sure, healthcare providers are “more open” to tech-enabled innovation to improve productivity, clinical outcomes and financial performance. However, the researchers imply, vendors offering AI-packed products probably could do a better job of reading the proverbial room. “The investments, pilots, proofs-of-concept, accelerators and more are the tactical manifestations of GenAI,” they write. “It is likely the GenAI hand has been overplayed relative to outcomes.”
- Either way, it’s remarkable what a healthcare AI marketer has to do these days. Looking at healthcare AI vendors from a parallel perspective, another specialist agrees the industry faces new challenges. But Mike White of the Alexander Group business consultancy calls for more of a fine-tune than a re-think. Pointing to his firm’s customer experience research, White says 90% of healthcare providers rank on-site case coverage and clinical education as the top factors when selecting med-tech vendors for products of high clinical complexity. In this environment, he finds, companies “are being compelled to reevaluate their go-to-market strategies to effectively navigate the evolving landscape caused by the swift adoption of AI in healthcare.” Spiceworks has it.
- Healthcare translators may be wise to worry about job security. In the cultural bellwether state of California, state health officials are taking bids from GenAI vendors with expertise in translation. Why wouldn’t they? One of every three Golden State residents speaks a language other than English. And many patients grow impatient waiting for human translators—first to arrive and then to get things right. It’s probably safe to assume Spanish will come first, but the officials aren’t offering many specifics. And more than 200 languages are spoken in California. A medicolegal interpreter who specializes in Khmer, the language of Cambodia, tells the Los Angeles Times that AI “cannot replace human compassion, empathy and transparency, meaningful gestures and tones.” Maybe not, says a health equity advocate, but “in good hands it has many opportunities to expand the translation capability to address inequities.” Read the rest.
- Meanwhile nurses have reason to welcome rather than resist AI. Brian Weirich, chief nursing officer at Banner Thunderbird Medical Center in Arizona, suggests the technology will make nurses better at their jobs. It promises to do so, he explains, by supporting medical diagnostics, helping develop treatment plans and streamlining nursing workflows. “By embracing AI,” Weirich writes in Unite.AI, “the nursing field can evolve, ensuring that healthcare delivery becomes more efficient, personalized and effective.” Hear him out.
- Over in revenue cycle management (RCM), department staff should be feeling similarly reassured. If not, they ought to read an article published in ICD10 Monitor June 19. “For AI to function appropriately in a complex RCM environment, humans must be in the loop,” contends subject matter expert and industry CEO Ritesh Ramesh. “Without question, AI can transform healthcare RCM. But doing so requires that healthcare organizations augment their technology investments with human and workforce training to optimize accuracy, productivity and business value.”
- You might need a chief AI officer if … Your organization is home to multiple AI initiatives across different departments or divisions, none of which are doing much to coordinate efforts or share resources … Your leadership can’t get its head around advances involving AI and their potential applications in healthcare … Your IT people are struggling to scale AI solutions and integrate them into existing systems and workflows. OK, that’s enough examples for now. Forbes has more from Andrei Kasyanau, cofounder and CEO at Glorium Technologies.
- Nvidia has brilliantly surfed the AI wave to become the world’s most valuable public company. That’s subject to change as companies like Apple and Microsoft do this or that to get back in the leader’s saddle. Regardless, the ascent has rocketed Nvidia’s top gun, Jensen Huang, toward a spot on the 10 Richest People in the World list. Forbes had him at No. 11 as of Tuesday afternoon, when his net worth swelled to something like $119 billion—up from a mere $77 billion at the start of this year. Just a fun AI-related fact to know and share.
- An AI startup just raised $15 million from investors to modernize sewer inspections. There’s not a lot of healthcare in this AI news from the company, SewerAI. Then again, “enhancing efficiency and accuracy in inspecting the 6.8 billion feet of sewer pipes across the nation” may yield significant health benefits down the road. Besides, it’s edifying enough to learn there’s a high-tech way to do things like “powering 3D manhole inspections.”
- AI funding news of note:
- From AIin.Healthcare’s news partners: