News You Need to Know Today

AI for hospital boards | Healthcare AI newsmakers

Tuesday, February 13, 2024



5 terms every hospital trustee—and healthcare AI stakeholder—should know

As AI continues infiltrating healthcare at nearly every level, the technology’s potential for good and ill must become—or remain—a preeminent concern for hospital boards of trustees.

This can be difficult since many trustees are volunteers hailing from lines of work that, traditionally, have had little to do with advanced data science. More than a few of these leaders don’t have direct backgrounds in healthcare, either.

As acknowledged in a piece posted by the American Hospital Association this month: “The [hospital] trustee is faced with a double challenge: understanding the implications of AI in one’s own field as well as in the healthcare professions.”

The commentary is penned by Steven Berkowitz, MD, a healthcare consultant and former hospital and health-system CMO. If trustees are to stay on top of AI for the good of the healthcare institutions they serve, he suggests, they should know their way around five key concepts and controversies. These are:

1. Generative pretrained transformer (GPT). With more than 180 million users, OpenAI’s ChatGPT is the most familiar GPT model. As you read this, ChatGPT is being used to write articles, code programs, summarize research and analyze images textually. In fact, Berkowitz reminds, when it went head to head against physicians answering medical questions, it frequently outperformed the doctors on both accuracy and empathy. More Berkowitz:

The possibilities of GPT applications seem endless. Vastly more powerful updates are on the horizon. Multiple vendors are now entering this space. GPT will be embedded in many processes in all industries. Its potential in healthcare is overwhelming.

2. Deep fakes. To be sure, these are more likely to catch trustees’ attention as harmless amusements from the entertainment sector than as, say, fraudulent prescriptions for drugs from phony physicians—or heartfelt pleas for money from incredibly convincing “loved ones.” Still, Berkowitz points out, it’s ground worth exploring for future reference.

It remains to be seen where this will land, but it is an area of legitimate concern. Vendors offer the ability to separate real versus AI-generated material. Meanwhile, the “bad guys” continue to produce more sophisticated ways to evade detection.

3. Inherent bias. AI is only as well-rounded, and thus as objective, as the data on which it’s trained. What’s more, algorithms can inherit biases from their developers. Berkowitz:

A recent article gave ChatGPT the Political Compass quiz, and it came out significantly on the left and libertarian side. It is fair to assume that any AI output could contain biases from numerous etiologies, and specific results should always be assessed for this possibility.

4. AI and consciousness. Is AGI—artificial general intelligence—a real possibility? Or is it just the stuff of overactive imaginations, now and for the foreseeable future? Either way, the debate is a surefire high-level conversation starter. And it’s one for which trustees need only know the questions to ask, not the answers to supply.

Given the rapid expansion of the technology, the potential of computers crossing over that barrier into full self-awareness and consciousness must be considered.

5. Technological singularity. In the context of AI, this term refers to a hypothetical point at which machine intelligence becomes superintelligent and its growth uncontrollable and irreversible. If such a singularity were ever to occur, AI could theoretically “take over the world,” Berkowitz writes. “Is this media hype, or is it our fate?”

One of the most primal instincts of a living organism is the need to survive. If the computer perceives a human as a threat, would it then feel compelled to destroy that human? Presently, this is the fodder of science fiction novels and movies. However, many respected AI researchers have expressed concern.

Read the whole thing.


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • 2024 will see stepped-up legislative activity around healthcare AI. It’ll come in state assemblies around the country as well as on Capitol Hill. That’s the prediction of attorneys from the Boston-based Mintz firm. Writing in the National Law Review, Daniel A. Cody, JD, and colleagues advise watching for elected reps to move on protecting patient data and privacy, heading off health inequities, warding off AI encroachment in clinical areas and, not least, keeping payers from using AI to deny care. Quick read here.
     
  • Also this year, AI will continue to change the way healthcare works. No kidding, right? But exactly how the tech will work its magic is open to conjecture falling everywhere along the continuum from safe forecasts to wild guesses. To hedge against the uncertainty, smart investors will get behind pretty sure bets. Companies like Nvidia, AMD and Palantir come instantly to mind for a contributing writer at stock market advisory outfit The Motley Fool. Get the investing prognosticator’s thinking here.
     
  • ‘If I want to benefit from other people’s data, is it not my duty to share my own?’ Nigam Shah, MBBS, PhD, chief data scientist at Stanford Health Care, borrows the thought from a philosophy professor to make a point about balancing data security with feeding medical science. AI is, of course, one of the feeder mechanisms. “I don’t want my [medical] record being leaked out on the internet, but I do want my information to benefit the care of hundreds or thousands of other people,” Shah says in an interview with Jeremy Faust, MD, editor-in-chief of Medpage Today. “If we want the learning health system—if we want decision support that is informed by the past experiences of patients like mine—we have to get over this privacy block and insist on secure sharing of data.” Video and transcript here.
     
  • The FDA has been a pioneer in issuing regulations around AI in healthcare. In fact, it’s “really the benchmark for other countries to follow. [FDA] has been very innovative in ensuring that there is a specialized pathway for AI-based software as a medical device, even though it takes some time and money to progress through this pathway.” This is the observation of an academic Aussie admirer, Sandeep Reddy. Director of the master’s in healthcare management program at Deakin University in Australia and chairman of a healthcare AI startup, Healea, Reddy gives a wide-ranging interview on healthcare AI to Inside Precision Medicine. Read it here.
     
  • Stroke survivors whose care is guided by AI do better than those without algorithmic recommendations. The improved outcomes show up as fewer recurrent strokes, heart attacks and vascular deaths. The research was conducted in China and presented in Arizona last week at an international meeting of the American Stroke Association. ASA’s own coverage here.
     
  • AI proponents specifically interested in nonclinical healthcare AI have a new online presence to call their own. Launched by the American Health Information Management Association (AHIMA), the AI Resource Hub debuts on the strength of a white paper featuring input from “implementers and experts.” Announcement with link to white paper here.
     
  • Which of America’s flowing waters are owed protection under the Clean Water Act? The question is harder to answer than one might think. Fortunately, there’s now an AI app to help. Developed by agricultural economists at UC-Berkeley, the tool combines aerial imagery, soil data, weather patterns and other variables to gauge the likelihood that any given stretch of H2O falls under CWA protection. Like AI used to predict wildfires, a public health risk, the water-identifier algorithm might be considered healthcare-adjacent AI. The NIH seems to think so too, or it wouldn’t have covered the research as it does here.
     
  • Beware Valentine’s Day con artists armed with AI. The FTC reports that almost 70,000 people were snookered out of $1.3 billion by “romance scammers” in 2022. These are the cold-hearted operators who trick lonely victims into sending money for the sake of long-distance “love.” This year an FBI expert is warning that emerging technologies could make the damage even worse. “What we picture is one person doing this whole work,” Special Agent Brett King tells Alabama TV station CBS42 (via the New York Post). “No, it’s a whole assembly line of people following a script, and they have really turned it into a science.” Elderly people are favorite targets. Valentine’s Day is probably no worse than any other day, but it’s a perfect time to alert someone who may be vulnerable and is close to your, you know, heart.
     

Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.



© Innovate Healthcare, a TriMed Media brand