News You Need to Know Today

AI activity at the United Nations | AI watcher’s digest | Partner voice

Friday, September 20, 2024



UN effort afoot to address the world’s ‘governance deficit with respect to AI’

The AI advisory board of the United Nations is calling for the creation of a global AI data framework. 

The board would like to see the framework “developed through a process initiated by a relevant agency such as the U.N. Commission on International Trade Law and informed by the work of other international organizations.” 

This is just one of seven recommendations laid out in the 100-page report Governing AI for Humanity, which the AI advisory board released this month. Here are four more of the report’s recommendations. 

 

1. An international scientific panel on AI. 

“We recommend the creation of an independent scientific panel on AI, made up of diverse multidisciplinary experts in the field serving in their personal capacity on a voluntary basis,” the report’s authors write. Supported by the proposed United Nations AI office and other relevant United Nations agencies, and partnering with other relevant international organizations, this panel’s mandate would include:

a.) Issuing an annual report surveying AI-related capabilities, opportunities, risks and uncertainties, identifying areas of scientific consensus on technology trends and areas where additional research is needed;

b.) Producing quarterly thematic research digests on areas in which AI could help to achieve the Sustainable Development Goals (SDGs), focusing on areas of public interest which may be under-served; and

c.) Issuing ad hoc reports on emerging issues, in particular the emergence of new risks or significant gaps in the governance landscape.

 

2. Policy dialogue on AI governance. 

“We recommend the launch of a twice-yearly intergovernmental and multi-stakeholder policy dialogue on AI governance on the margins of existing meetings at the United Nations.” This dialogue’s purpose would be to:

a.) Share best practices on AI governance that foster development while furthering respect, protection and fulfillment of all human rights, including pursuing opportunities as well as managing risks;

b.) Promote common understandings on the implementation of AI governance measures by private and public sector developers and users to enhance international interoperability of AI governance;

c.) Voluntarily share significant AI incidents that stretched or exceeded the capacity of State agencies to respond; and

d.) Discuss reports of the international scientific panel on AI, as appropriate.

 

3. AI standards exchange. 

“We recommend the creation of an AI standards exchange, bringing together representatives from national and international standard-development organizations, technology companies, civil society and representatives from the international scientific panel.” The exchange would be tasked with:

a.) Developing and maintaining a register of definitions and applicable standards for measuring and evaluating AI systems;

b.) Debating and evaluating the standards and the processes for creating them; and

c.) Identifying gaps where new standards are needed.

 

4. Global fund for AI. 

“We recommend the creation of a global fund for AI to put a floor under the AI divide,” the authors state. “Managed by an independent governance structure, the fund would receive financial and in-kind contributions from public and private sources and disburse them, including via the capacity development network, to facilitate access to AI enablers to catalyze local empowerment for sustainable development goals (SDGs).” These would include: 

a.) Shared computing resources for model training and fine-tuning by AI developers from countries without adequate local capacity or the means to procure it;

b.) Sandboxes and benchmarking and testing tools to mainstream best practices in safe and trustworthy model development and data governance;

c.) Governance, safety and interoperability solutions with global applicability;

d.) Data sets and research into how data and models could be combined for SDG-related projects; and

e.) A repository of AI models and curated data sets for the SDGs.

In making their case for the framework, the authors note the world’s “governance deficit with respect to AI.” 

“Despite much discussion of ethics and principles, the patchwork of norms and institutions [under consideration] is still nascent and full of gaps,” they add. “AI governance is crucial—not merely to address the challenges and risks [inherent in AI], but also to ensure that we harness AI’s potential in ways that leave no one behind.”

The goal is noble, and the report is worth a look.

 


The Latest from our Partners

Nabla Now Supports 35 Languages to Advance Culturally Responsive Care - Clinicians can now leverage AI-powered documentation in any of the 35 supported languages to cut down on charting time, focus on patient care and enjoy better work-life balance. Patients receive care instructions in their preferred language, ensuring clarity and compliance throughout their healthcare journey. Read more here.
 


 


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Royal Philips CEO Roy Jakobs has healthcare AI and a lot else on his mind. In a long and wide-ranging interview with The Verge, he talks about using AI not only to assist radiologists reading scans but also to cut scan times for better patient satisfaction and faster case throughput. Plus he reassures rads that AI will not replace them but will ease their workloads as the physician shortage worsens. “I still believe that we will have radiologists in the future, but the one thing I know for sure is we will not have enough,” Jakobs says. “It’s for us as a technology company to make sure that technology [doesn’t] make their job harder but actually really helps them do it better and faster.” Verge podcaster and editor-in-chief Nilay Patel also asked him about the damaging recall Philips suffered in 2021 involving millions of breathing machines. Knowing what the company knows now, would Philips have handled the situation differently? “Yes,” Jakobs replies. “We might even have done the recall differently.” Read (or listen to) the whole thing.
     
  • Congress is swamped with AI bills. More than 120 are wending their way through committees in one chamber or the other, House or Senate. More than two-thirds are the brainchildren of Democrats. And four deal specifically with healthcare (two introduced by R’s, two by D’s). The tally comes courtesy of some fine in-depth reporting at MIT Technology Review. AI reporter Scott Mulligan scared up quotes from several close watchers and stakeholders. One is David Evan Harris of UC Berkeley. “Industry lobbyists are in an interesting predicament—their CEOs have said that they want more AI regulation, so it’s hard for them to visibly push to kill all AI regulation,” says Harris, who teaches AI ethics. “On the bills that they don’t blatantly try to kill, they instead try to make them meaningless by pushing to transform the language in the bills to make compliance optional and enforcement impossible.” Read the rest.
     
  • Untruth in healthcare AI advertising? That was the alleged wrongdoing when Texas Attorney General Ken Paxton investigated Pieces Technologies over complaints the company unlawfully exaggerated its algorithm’s accuracy at writing clinical notes and documentation. The product has been used by at least four major hospitals in the Lone Star State. This week the AG’s office settled with Pieces, which denied it committed any fouls but agreed to terms. These include informing customers of the software’s true accuracy, instructing them in how to properly use it and warning them of potential harms. A Texas TV station called the case a “first-of-its-kind investigation into AI in healthcare.” Fierce Healthcare has additional details.
     
  • Healthcare AI has made the leap from lab concept to real practice. It’s that widely used, notes the American Medical Association. The messaging comes by way of promoting the fourth and latest learning module in the group’s “Ed Hub” CME series. Module excerpt: “Clinician roles in healthcare are evolving due to the integration of AI with a shift from bi-directional”—meaning between patients and the care team—“to a more complex tri-directional interaction that actively involves AI, empowers patients and requires clinicians to adapt to this evolving landscape.” Learn more.
     
  • Too many older folks skip annual checkups. One survey has the rate as high as 82%. Common reasons—or excuses—for the senior absenteeism from primary-care offices include misperceptions of cost, lack of transportation and other easily addressed concerns. AI chatbots can help solve all those problems and more, CNET points out. “This is a population with limited income and significant health issues,” a healthtech CEO tells the outlet. “Health technology designed for seniors and their caregivers,” the CNET reporter adds, “can simplify their lives by addressing today’s challenges and improving the experience for future generations.” 
     
  • AI watchers can never get too many good primers on AI in healthcare. The NHS Confederation in the U.K. is out with a very good basic guide indeed. They almost could have called it “Everything You Ever Wanted to Know About Healthcare AI But Were Too Scared to Ask (Because Doing So Might ‘Out’ You as a Perpetual Newbie).” Check it out. We won’t tell anyone. 
     
  • Healthcare leaders struggle to wring value out of digital investments made to improve service. Why is that? Because their organizations lack service-focused things like a mission-led roadmap, a modernized workforce strategy, up-to-date technology and other little niceties. OK, snark switch turned off. For some serious thinking on “reimagining healthcare industry service operations in the age of AI,” let McKinsey be your guide.
     
  • Israel is at war, but it remains a hub of healthcare AI innovation. The determination of the nation’s tech sector comes through in coverage of the 2024 ARC (Accelerate, Redesign, Collaborate) Summit, hosted this month in Tel Aviv at Sheba Medical Center. “The healthcare industry in general, and certainly in Israel, is very resilient,” says Avner Halperin, CEO of Sheba Impact, according to the Jerusalem Post. “We see that crises like COVID and the current war actually accelerate innovation. We’ve had dozens of new inventions, and we’re building startups around them. So, while there is the pain of the crisis, there is also excitement and hope, as the innovations born out of this time are literally saving lives.” Read the rest.
     
  • Recent research in the news: 
     
  • Notable FDA Approvals:
     
  • AI funding news of note:
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand