News You Need to Know Today

Who will control the AI world—US or China? | Healthcare AI newsmakers

Thursday, August 17, 2023


Race for AI dominance: 5 points to ponder about US v. China

AI that’s intended to let cars drive themselves can be repurposed to let tanks level cities. And AI can just as easily weaponize a virus as diagnose it. These are not secrets. However, the boundaries between the “safely civilian” and the “militarily destructive” are “inherently blurred.” That’s largely why the U.S. has clamped down on exports of advanced semiconductors to China.

The observation is from the political scientist Ian Bremmer and the serial tech entrepreneur Mustafa Suleyman. The unlikely pair has authored an essay asking whether governments of tech-forward nations can summon the will to effectively bridle AI “before it’s too late.” Foreign Affairs published the piece online Aug. 16.

A recurring touchpoint in the article is the clash between the world’s top two AI titans. Here are five excerpts from those portions.

1. China and the United States both view AI development as a zero-sum game.

“From the vantage point of Washington and Beijing, the risk that the other side will gain an edge in AI is greater than any theoretical risk the technology might pose to society or to their own domestic political authority. … In their view, a ‘pause’ in development to assess risks, as some AI industry leaders have called for, would amount to foolish unilateral disarmament.”

2. With hooks deeply set in its ostensibly ‘free market’ companies, the Chinese Communist Party probably could rein in AI within its borders if it really wanted to.

Alas, it probably doesn’t really want to—and replicating that degree of control would be difficult to pull off in the West anyway. “Because [private enterprises] jealously guard their computing power and algorithms, they alone understand (most of) what they are creating and (most of) what those creations can do,” Bremmer and Suleyman point out. “A few big firms may retain their advantage—or they may be eclipsed by smaller players as low barriers to entry, open-source development and near-zero marginal costs lead to uncontrolled proliferation of AI.”

3. Nuclear-style arms control is probably a fanciful model for reining in AI.

“AI systems are infinitely easier to develop, steal and copy than nuclear weapons. As the new generation of AI models diffuses faster than ever, the nuclear comparison looks ever more out of date. Even if governments can control access to the materials needed to build the most advanced models, they can do little to stop the proliferation of those models once they are trained and therefore require far fewer chips to operate.”

4. The rest of the world could help manage tensions between the two AI superpowers.

In the process, other nations might keep advanced AI systems from multiplying and running amok. “One area where Washington and Beijing may find it advantageous to work together is in slowing the proliferation of powerful systems that could imperil the authority of nation-states. At the extreme, the threat of uncontrolled, self-replicating artificial general intelligence (AGI) models—should they be invented in the years to come—would provide strong incentives to coordinate on safety and containment.”

5. Some level of online censorship is going to be necessary.

For that task, the world will need some sort of “geotechnology stability board,” Bremmer and Suleyman suggest. “If someone uploads an extremely dangerous model, this [international] body must have the clear authority—and ability—to take it down or direct national authorities to do so,” they add. “This is another area for potential bilateral cooperation.”

There’s more. Read the rest.


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • HHS’s inventory of AI use cases is soaring. The agency’s count, which reflects uses of the technology in various categories, spiked from 50 in fiscal 2022 to 163 this fiscal year (with September’s tally yet to come). The new-this-year instances include tools used by NIH to classify HIV-related grants and predict subcategories in stem-cell research applications. FedScoop has the scoop.
     
  • Cedars-Sinai Health System is showcasing its embrace of healthcare AI. In the process the Los Angeles institution, a care site for many an entertainment icon over the years, lists examples of its early adoption of AI for clinical indications. These include pancreatic cancer, heart health, Alzheimer’s research and spine surgery. CIO Craig Kwiatkowski says the org is “only at the very beginning of understanding what AI can do to improve healthcare.”
     
  • Morgan Stanley advises AI investors to watch four broad areas within healthcare for growth and thus good ROI. Three of the four may strike some as a little too airy to guide any serious strategizing—healthcare “services and technology,” life sciences “tools and diagnostics” and “medical technology.” Then again, the advice is free. The global investment bank and wealth-management firm offers it in an Aug. 16 post fleshing out a few details and tipping off readers to the availability of a fuller report. (The fourth growth area Morgan Stanley flags within healthcare AI? Biopharma.)
     
  • Did you know 2023 is the 200th birth year of The Lancet? It’s true. Interesting to consider, then, what the founders of the venerable British journal would have made of an editorial posted by the current leadership Aug. 12. The subject is none other than AI in medicine. The hope is palpable. But so are the fears. The piece wastes little time before quoting United Nations Secretary General António Guterres’s July speech warning of the “horrific levels of death and destruction” that malicious use of AI could cause. The editors ask: “How can the medical community navigate AI’s substantial challenges to realize its health potential?” For their answer, read the piece.
     
  • Milbotix (Oxfordshire, England) has been getting all kinds of positive press for its AI-powered socks. As well it should. The company worked with the University of Exeter to come up with a design that looks and feels like comfy crew-length hosiery but helps people with dementia live more independently. The socks pull this off by monitoring and relaying the patient’s heart rate, perspiration and worrisome movements. Fox News has backstory from the creator.
     
  • Alaffia Health (New York) has launched a text-based AI chatbot that helps payers process claims. The system combines OpenAI’s GPT-4 with Alaffia’s own algorithms. As shown in an online demonstration, the assistant can summarize a lengthy medical record in seconds, answering specific questions posed by claims-management staff and other end users. Announcement. Demonstration.
     


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.




© Innovate Healthcare, a TriMed Media brand