News You Need to Know Today

Healthcare pros raise voices against AI recklessness | Newsmakers: IBM, Mayo, AC/DC, more

Thursday, May 11, 2023

In cooperation with Northwestern


Int’l panel: Safeguard human health against AI now—or risk losing the chance forever

As the world grapples with the potential downsides of generative AI, a global group of healthcare researchers is warning of health perils that could emanate from well beyond clinical settings.

BMJ Global Health posted the team’s analysis May 9. Its senior author is David McCoy, a physician and public health researcher with the International Institute for Global Health, which is part of the United Nations University headquartered in Tokyo.

Unless regulatory measures are undertaken, and soon, the authors foresee advanced AI potentially causing indirect yet serious health threats across three categories:

  1. Work and livelihoods. As AI automation pushes people out of jobs at scale, watch out for declining physical fitness and rising mental health problems combined with reduced healthcare access, McCoy and co-authors warn.
  2. Democracy, liberty and privacy. Superfast collection, scrubbing and analysis of massive datastores could allow tightly targeted marketing, misinformation and surveillance campaigns. Dire health-related outcomes might include worsened socioeconomic inequities aggravated by heightened political polarization.
  3. Peace and safety. AI is likely to help militaries develop and deploy lethal autonomous weaponry, including cheap and selective weapons of mass destruction, the authors suggest. Population health can only suffer from the spread of dehumanized military forces.

And then there are the dangers of self-training AGI (artificial general intelligence). Should AGI be permitted to proliferate unimpeded, McCoy and colleagues suggest, the technology could heighten all the above threats while also disrupting critical infrastructures, consuming scarce resources and being used to outright attack or subjugate people.

What can—and should—medical professionals do?

  • Sound the alarm about risks and threats posed by AI. “Make the argument that speed and seriousness are essential if we are to avoid the various harmful and potentially catastrophic consequences of AI-enhanced technologies being developed and used without adequate safeguards and regulation,” the authors urge.
  • Identify those who are driving AI development too quickly or carelessly. “If AI is to ever fulfil its promise to benefit humanity and society, we must protect democracy, strengthen our public-interest institutions and dilute power so that there are effective checks and balances.”
  • Help deploy clinical and public health expertise in evidence-based advocacy for a fundamental and radical rethink of social and economic policy in an AI-everywhere world. The goal must be to “enable future generations to thrive in a world in which human labor is no longer a central or necessary component to the production of goods and services.”

Governmental regulation of the development and use of artificial intelligence is needed to avoid harm, the authors insist. Further, until effective regulation is in place, “a moratorium on the development of self-improving AGI should be instituted.”

Read the whole thing.


Industry Watcher’s Digest

Buzzworthy developments of the past several days.

  • Watson rises and shines anew. Less than a year after IBM sold the operative guts of its Watson Health business for $1 billion, the corporation is introducing a new AI platform. Announcing the move May 9, IBM says the new incarnation, watsonx, supports foundation models, generative AI and machine learning. The full package includes a studio, data store and governance toolkit. Announcement here, website here.
     
  • Mayo all in with Lucem. A clinical AI startup focused on early disease detection and point-of-care guidance has received a $7.7 million cash infusion from Mayo Clinic and other influential investors. Lucem Health of Davidson, N.C., says it will use the Series A funds to further develop its platform, expand its product line and build its marketing might. Announcement.
     
  • AI in the endoscopy suite. Colonoscopists are about to be offered FDA-cleared AI assistance for detecting polyps. To be marketed by Iterative Health of Cambridge, Mass., the tool, called Skout, increased detection of adenomas (potentially cancerous polyps) by 27% in a randomized trial. Read more.
     
  • Large-language-model versatility. DiagnaMed of Toronto has launched a generative AI product that collects and analyzes healthcare data for administrative as well as clinical aims. The company says its FormGPT.io helps providers use ChatGPT to create customized forms and surveys but can also assist with patient feedback, progress monitoring and clinical decisions. Announcement.
     
  • AI’s eyes on the pancreas. Researchers have demonstrated the utility of AI-based population screening for patients at elevated risk for pancreatic cancer. In trials led by investigators at Harvard Medical School and the University of Copenhagen, the tool showed it could flag affected patients up to three years ahead of diagnosis. And the patients’ clinical histories were all it had to work with. Journal study here, Harvard coverage here.
     
  • If you’re not using healthcare AI yet, just you wait. Some 66% of healthcare professionals are aware that AI technologies like ChatGPT and Med-PaLM 2 are being used in American medicine. Meanwhile, more than 10% already use some form of AI, and almost 50% expect to do so before long. So found the healthtech company Tebra when it surveyed 1,000 healthcare consumers and 500 healthcare workers. Survey results and analysis here.
     
  • And a megadollar award goes to … an AI-enabled toolkit that smartly guides cancer patients through their care journeys. The clever software has won its creators a share of $1 million in prize money at Northwell Health. The health system based in New Hyde Park, N.Y., annually holds an internal innovation challenge. The cancer AI designers share 2023 honors with colleagues who came up with a novel bioelectronic treatment for stroke. Details here.
     
  • Did you say the check is in the mail? Outbound AI of Seattle is touting the ability of its virtual agents to navigate recorded phone interactions, wait on hold and speak with actual humans. The technology does all this and more, the company says, to help providers get paid for their services promptly while “elevating the daily job experience for human [administrative] talent.” News release here.  
     
  • Coronary assessment made easy with AI. AI can supply important insights into heart function during fairly routine chest X-rays enhanced with contrast media, heading off the need for more involved tests. Researchers demonstrated the technique, AI-aided coronary angiography, at UC-San Francisco and described it in JAMA Cardiology. News summary in Cardiovascular Business.
     
  • Keep on rockin’ in the O.R. No surgery team knew they needed it, but here comes an AI radio station playing rock music “clinically shown to improve surgical accuracy and efficiency.”  Brought to life on Spotify by NextMedHealth of San Diego, the gig features—for starters—such instant classics as “Surgeries Done Dirt Cheap,” “You Sewed Me All Night Long,” “Highway to Heal” and other songs played in the style of AC/DC. Announcement. Sample tunes.  

Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.



© Innovate Healthcare, a TriMed Media brand
