News You Need to Know Today

AI regulation guidance | AI news watcher’s blog | Partner voice

Friday, November 8, 2024

In cooperation with Northwestern and Nabla

AI regulation

A modest proposal to rally AI regulators around a simple game plan

As debate simmers over how best to regulate AI, experts continue to offer guidance on where to start, how to proceed and what to emphasize. A new resource models its recommendations on what its authors call the “SETO Loop.”

The acronym derives from a four-part formula for establishing a regulatory framework: identify the Scope of what is meant to be protected, assess the Existing regulations across nodes in the relevant stack, select the Tools of regulation, and choose the Organization meant to enact the regulations.

Published by the Brooks Tech Policy Institute at Cornell University, the report is bylined to political scientist Sarah Kreps, PhD, and poli-sci doctoral candidate Adi Rao.

To write the paper, the duo analyzed existing and proposed regulatory schemes for governing AI. Their work is intended to help develop “a new theoretical roadmap for U.S. policymakers as they grapple with the promises and potential perils of generative AI.”

In a section looking at the third leg of the SETO plan, the authors consider tools available for incentivizing regulatory compliance. They describe five feasible options:

1. Total bans.

The benefit of a total ban is simplicity, Kreps and Rao suggest. “Once very clearly outlawed by the bureaucrat, the use of a technology will lead to criminal punishment. However, a prohibition approach might get in the way of U.S. attempts to cultivate AI technological edge over peers.” More: 

‘Firms might relocate to friendlier jurisdictions for a certain set of AI products, especially civil technologies much less military technologies. Component-based “blueprint” regulations might garner more broad-based appeal as they target precisely the main issue under contention, with a tradeoff of greater regulatory costs.’

2. Taxes and punishments.

Countries or jurisdictions may use economic incentives to make a certain behavior more likely, the authors point out. This strategy includes both punitive measures—taxes, fines, jail time—and positive reinforcement through subsidies and grants, they write. “Taxation is a mainstay of the incentive regime,” they note, while financial and business regulations “also make considerable use of punishments.” More: 

‘However, the use of taxes and punishments may be complicated in the context of AI. The causal chain of malfeasance represents a sort of “Whodunnit?” If an autonomous car swerves left versus right and strikes one individual rather than five, is it the fault of the car manufacturer, the driver—or no one at all?’

3. Blueprint manipulation regulations.

One example of manipulation regulation is in the realm of gun control, Kreps and Rao explain. For example, certain gun-related regulations “ban the use of features or accessories that can be attached to firearms, rather than firearms per se.” They name bump stocks, which enable semi-automatic firearms to simulate automatic fire, as a case in point. More: 

‘Potential restrictions in the context of AI could include bans on the collection of certain types of sensitive data, or of systems making decisions based on identity factors such as race or gender. The benefit of blueprint manipulation is in its moderacy: The specific regulatory issue is targeted with precision.’

4. Information revelation regulation.

Rather than banning technology outright or manipulating its production, regulations “could be put in place to force producers to reveal information so as to inform consumers of potential risks or harms,” the authors comment. “These transparency measures are perhaps most salient in the pharmaceutical sector, but also exist in other industries.” More:  

‘In the context of AI, information revelation might entail demanding that companies disclose what sort of information is fed into their algorithms. However, requiring companies to share information about the algorithms themselves would be more complicated: While we may possess weights about the algorithmic decision-making process, how [outputs] are generated is not possible to readily understand.’

5. Voluntary rules. 

The space industry operates on a global scale, involving countries with diverse interests and priorities, Kreps and Rao observe, adding that imposing strict regulations that do not align with a particular nation’s goals “can lead to conflicts and hinder cooperation.” By contrast, voluntary rules “offer a framework for best practices that multiple countries and private entities can agree upon and adhere to” as they are able. More: 

‘In the AI domain, the White House has worked with the several main developers on a set of voluntary measures to develop safe, secure and transparent development of AI technology. This approach fosters a spirit of cooperation, allowing countries to work together toward shared objectives, such as space exploration, scientific research and environmental monitoring.’

Read the full report. 

 


The Latest from our Partners

Nabla is now available as a mobile app for iOS and Android! After months of real-world testing with our community of clinicians, we’re thrilled to introduce a faster way to document patient encounters on the go. With just one tap, healthcare practitioners will have instant access to Nabla, enhanced audio capture and seamless EHR integration. Download now from the App Store and Google Play.


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Baptist Health is applying value-based care principles to AI integration. Meanwhile, Geisinger is getting more patient-focused time with help from AI. And Henry Ford Health is using AI for adaptive radiotherapy in cancer care. These are just three of seven health systems held up as role models by the American Medical Association this week. What all seven have in common is membership in an AMA enterprise-solutions program and, evidently, a knack for making AI pop. Get the rest.
     
  • Does your health system have an AI-friendly culture? If not, don’t be surprised when AI fails to catch on across the enterprise. Gallup encourages taking stock by asking reflective questions like, “Is our workforce optimistic about the impact of AI on individual, team and organizational performance?” And “Do we have enough agility to adapt our vision as we adopt ever-more AI tools and applications?” Read it all.
     
  • U.K. healthcare is not ready for generative AI. So states a professor of safety science at the University of York in a piece published by The Conversation. “Healthcare could benefit tremendously from the adoption of GenAI and other AI tools,” writes computer scientist Mark Sujan, PhD. “But before these technologies can be used in healthcare more broadly, safety assurance and regulation will need to become more responsive to developments in where and how these technologies are used.” Hear him out.
     
  • The VA is throwing in with the FDA to launch a cross-agency AI testing facility. Called HAIL, for Health AI Laboratory, the facility will use a virtual lab environment. The plan is to assist not only government agencies but also private parties as they develop medical AI models for veterans and, indeed, all patients. The VA’s undersecretary for health, Shereef Elnahal, MD, MBA, announced the news last week. The FDA “saw our potential to do this work,” he said, “and they are selecting us as their main clinical partner in assessing the adherence to trustworthy AI principles for anyone who wants to test their interventions.” 
     
  • Watch for President-elect Donald Trump to waste little time before pushing the U.S. to regain clear AI preeminence. He may not even wait to move back into the White House before issuing the call. “Under a [second] Trump Administration, we would expect major AI initiatives within the U.S. government including the Department of Defense that would also be a major tailwind (for) AI players,” says securities analyst Daniel Ives, as reported by Investor’s Business Daily. “We would expect significant AI initiatives from the Beltway within the U.S. that would be a benefit for Microsoft, Amazon, Google and other tech players.” 
     
  • How do Islamic teachings approach the use of AI in healthcare? That may be a question not many in the U.S. are asking. But it’s important to consider, isn’t it? Muslims number some 1.8 billion, nearly a quarter of the global population. “In order to make this encounter as ethical as possible, we put more responsibility on the powerful, that’s the physician, and more protection for the vulnerable, that’s the patient,” biomedical ethicist Mohammed Ghaly of Hamad Bin Khalifa University in Qatar tells Wired. “This is based on a basic assumption that is increasingly being disrupted and challenged by AI.” 
     
  • As it happens, that take aligns rather nicely with Pope Francis’s views on the subject. “Technology is born for a purpose and, in its impact on human society, always represents a form of order in social relationships and a disposition of power, which enables someone to take action and prevents others from doing so,” the leader of the world’s 1.3 billion Catholics said this past summer. 
     
  • Recent research in the news: 
     
  • Funding news of note:
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand