News You Need to Know Today

AI code of conduct | Industry watcher’s digest | Partner news

Thursday, April 11, 2024

In cooperation with Northwestern and Activeloop

Healthcare AI Code of Conduct

Submitted for consideration by all healthcare AI stakeholders: 10 principles, 6 commitments, 1 direction

Key collaborators across the healthcare AI life cycle now have a common set of principles to which they can hold each other. And that means everyone from developers and researchers to providers, regulators and even patients.

The group defining the code of conduct, an AI steering committee of the National Academy of Medicine (NAM), says it hopes the guidance will

provide touchstones around which health AI governance—facilitative and precautionary—can be shaped, tested, validated and continually improved as technology, governance capability and insights advance.

NAM senior advisor Laura Adams and colleagues present the organization’s thinking in a draft posted April 8. The team’s recommended code-of-conduct principles, 10 in number, urge healthcare AI stakeholders to help make sure the technology is unfailingly:

  1. Engaged: ‘Understanding, expressing, and prioritizing the needs, preferences, goals of people, and the related implications throughout the AI life cycle.’
     
  2. Safe: ‘Attendance to and continuous vigilance for potentially harmful consequences from the application of AI in health and medicine for individuals and population groups.’
     
  3. Effective: ‘Application proven to achieve the intended improvement in personal health and the human condition, in the context of established ethical principles.’
     
  4. Equitable: ‘Application accompanied by proof of appropriate steps to ensure fair and unbiased development and access to AI-associated benefits and risk mitigation measures.’
     
  5. Efficient: ‘Development and use of AI associated with reduced costs for health gained, in addition to a reduction, or at least neutral state, of adverse impacts on the natural environment.’
     
  6. Accessible: ‘Ensuring that seamless stakeholder access and engagement is a core feature of each phase of the AI life cycle and governance.’
     
  7. Transparent: ‘Provision of open, accessible, and understandable information on component AI elements, performance, and their associated outcomes.’
     
  8. Accountable: ‘Identifiable and measurable actions taken in the development and use of AI, with clear documentation of benefits, and clear accountability for potentially adverse consequences.’
     
  9. Secure: ‘Validated procedures to ensure privacy and security, as health data sources are better positioned as a fully protected core utility for the common good, including use of AI for continuous learning and improvement.’
     
  10. Adaptive: ‘Assurance that the accountability framework will deliver ongoing information on the results of AI application, for use as required for continuous learning and improvement in health, healthcare, biomedical science and, ultimately, the human condition.’

In addition, the draft offers a set of six proposed commitments stakeholders could make to “broadly direct the application and evaluation of the code principles in practice.” The commitments:

  1. Focus. Protect and advance human health and human connection as the primary aims.
  2. Benefits. Ensure equitable distribution of benefit and risk for all.
  3. Involvement. Engage people as partners with agency in every stage of the life cycle.
  4. Workforce well-being. Renew the moral well-being and sense of shared purpose of the healthcare workforce.
  5. Monitoring. Monitor and openly and comprehensibly share methods and evidence of AI’s performance and impact on health and safety.
  6. Innovation. Innovate, adopt, collaboratively learn, continuously improve and advance the standard of clinical practice.

NAM suggests its 10 principles and six commitments “reflect simple guideposts to guide and gauge behavior in a complex system and provide a starting point for real-time decision making and detailed implementation plans to promote the responsible use of AI.” More:

Engagement of all key stakeholders in the co-creation of this Code of Conduct framework is essential to ensure the intentional design of the future of AI-enabled health, healthcare and biomedical science that advances the vision of health and well-being for all.

Read the whole thing.

 


The Latest from our Partners

Bayer Radiology uses Activeloop's Database for AI to pioneer medical GenAI workflows. Bayer Radiology collaborated with Activeloop to make its radiological data AI-ready faster. Together, the two companies developed a 'chat with biomedical data' solution that lets users query X-rays in natural language. The collaboration significantly reduced data preparation time, enabling efficient AI model training. The Intel® Rise Program further bolstered Bayer Radiology’s collaboration with Activeloop, with Intel® technology used at multiple stages of the project, including feature extraction and processing large batches of data. For more details on how Bayer Radiology is pioneering GenAI workflows in healthcare, read more.
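
For readers curious what such a pipeline can look like in code, below is a minimal sketch of a "chat with your data" retrieval flow built on Deep Lake and LlamaIndex. It is not Bayer's implementation: the import paths reflect current llama-index integration packages, and the dataset path and report texts are placeholder assumptions.

```python
# Minimal "chat with biomedical data" sketch (NOT the Bayer/Activeloop system).
# Assumes: pip install llama-index llama-index-vector-stores-deeplake
# and an OpenAI API key for the default embedding/LLM backends.
from llama_index.core import Document, StorageContext, VectorStoreIndex
from llama_index.vector_stores.deeplake import DeepLakeVectorStore

# Stand-in report texts; in practice these would be features or captions
# extracted from the X-ray images themselves.
docs = [
    Document(text="Chest X-ray 0141: mild cardiomegaly, no focal consolidation."),
    Document(text="Chest X-ray 0142: right lower lobe opacity, possible pneumonia."),
]

# Persist embeddings in a Deep Lake dataset (local path here; could be cloud).
vector_store = DeepLakeVectorStore(dataset_path="./xray_index", overwrite=True)
storage = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(docs, storage_context=storage)

# Natural-language querying over the indexed studies.
print(index.as_query_engine().query("Which studies suggest pneumonia?"))
```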

How to Build a Pill Identifier GenAI App with Large Language Models and Computer Vision. About 1 in 20 medications is administered incorrectly due to mix-ups. Learn how to combine LLMs and computer vision models like Segment Anything and YOLOv8 with Activeloop Deep Lake and LlamaIndex to identify and chat with pills. The Activeloop team tested advanced retrieval strategies and benchmarked them so you can pick the most appropriate one for your multimodal AI use case. Find the GitHub repository and the article here.
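
For a taste of the detection half of such a pipeline, here is a minimal sketch using the ultralytics YOLOv8 API. The stock COCO weights and image filename are placeholder assumptions; a real pill identifier would use weights fine-tuned on pill imagery, with detected crops then matched against an indexed pill database as the article describes.

```python
# Minimal detection sketch (a sketch, not the Activeloop reference app).
# Assumes: pip install ultralytics; "pills.jpg" is a placeholder photo.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")      # stock pretrained weights; fine-tune for pills
results = model("pills.jpg")    # run inference on one image

# Each detected box would next be cropped, embedded and matched against a
# Deep Lake index of known pills, with an LLM answering questions over hits.
for box in results[0].boxes:
    label = model.names[int(box.cls)]
    print(f"{label}: conf={float(box.conf):.2f}, xyxy={box.xyxy[0].tolist()}")
```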


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • It takes time for a mental health therapist to earn and sustain a patient’s trust. GenAI can help. “Rather than draining humanity from therapy, AI will flood the system with more time,” predicts Ross Harper, PhD, in Psychology Today. In his scenario, the technology would query the patient between visits, sending notes to the therapist ahead of the next face-to-face. This would cut the time needed for catch-up talk on the clock: no more “So tell me what’s happened since we last saw each other.” A one-hour session could dive straight into the productive here and now, Harper suggests. The saved time would let the professional focus more fully on “building a real human connection [with] empathy, active listening, relationship-building, trust and expectation management.”
     
  • Extra forethought may be in order when the patient receiving talk therapy is a child or teen. The heads-up carries considerable weight when it comes from legal eagles, as it does in a brief commentary from representatives of the D.C.-based ArentFox Schiff law firm. “When using AI to address mental health concerns among K-12 students, policy implications must be carefully considered,” write partner David Grosso, JD, and government relations coordinator Starshine Chun. “Moving forward, school leaders, policymakers and technology developers need to consider the benefits and risks of AI-based mental health monitoring programs.” Read the commentary here.
     
  • Bringing order to messy data, filling gaps in technological readiness and clearing regulatory hurdles. These are a few of the things the world’s largest maker of medical devices must do to make AI work for it. That’s according to the company’s chief technology and innovation officer. “The data readiness work we have to do is significant, but we know how to do it,” says the exec, Ken Washington of—wait for it—Medtronic. “We just need to get on with it.”
     
  • Happy first anniversary to Mayo Clinic Proceedings: Digital Health. The open-access journal is celebrating by spotlighting a few of its most downloaded articles, including “Diagnostic Accuracy of Artificial Intelligence in Virtual Primary Care.” Mayo’s news operation says the publication has so far posted almost 100 peer-reviewed articles on healthcare’s digital transformation. Read more about the milestone here.
     
  • Investment intelligencer CB Insights is out with its picks for the 100 most promising AI startups of the present year. Seven of the hot numbers are in healthcare. In alphabetical order: Bioptimus, Charm Therapeutics, Genesis Therapeutics, Gesund.ai, Iambic, Isomorphic Labs and OpenEvidence. Full list here.
     
  • Healthcare AI promises to improve care quality while lowering care costs. (No kidding.) But first it will have to bust through barriers involving incentives, data and regulation. (Duh.) Now comes a scholarly tome analyzing the pickle. It’s got content contributed by health economists, physicians, philosophers and scholars in law, public health and machine learning. It’s pricey to own but reasonable to rent in digital format—$12.50 for 45 days. Description and table of contents here.
     
  • The Australian government is investigating the possibly inappropriate use of AI in the country’s health system. Officials in charge of the probe took notice when complaints spiked about the suspected use of AI during telehealth drug prescribing. Evidently more than a few patients obtained prescriptions without ever speaking to a human. The Guardian has the story.
     
  • Recent research roundup:
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.



© Innovate Healthcare, a TriMed Media brand
