News You Need to Know Today

Google’s Gemini does not compute | Healthcare AI newsmakers

Tuesday, February 27, 2024



What the heck? (And what else tech watchers are saying about Google’s glaring fumble with Gemini)

Did you witness last week’s commotion over Google’s new AI chatbot Gemini? When asked to depict people by various descriptors, the image generator delivered lots of results that were unintentionally hilarious for their wild inaccuracy.

The Pope: Female. The Vikings: Black. German soldiers of World War II? A mixed-race hodgepodge. And so on.

Amusing as it all was, some Google detractors suggested Gemini—formerly Bard, which had embarrassing problems of its own—hadn’t glitched at all. Instead, said the sternest critics, it had worked precisely as intended by its creators. In a word, they said, Gemini is “woke.”

Whether or not they’re right about that, Gemini’s very visible pratfall may set back the public’s confidence in AI generally. That could mean a knock on healthcare AI too. With that as food for thought, here’s a roundup of noteworthy reactions to Gemini’s introductory faceplant.

  1. “Gemini’s racially diverse image output comes amid longstanding concerns around racial bias within AI models, especially a lack of representation for minorities and people of color. Such biases can directly harm people who rely on AI algorithms, such as in healthcare settings, where AI tools can affect healthcare outcomes for hundreds of millions of patients.”—Kat Tenbarge, tech and culture reporter at NBC News
     
  2. “‘Inaccuracy,’ as Google puts it, is about right. [A] request for ‘a US senator from the 1800s’ returned a list of results Gemini promoted as ‘diverse,’ including what appeared to be Black and Native American women. (The first female senator, a white woman, served in 1922.) It’s a response that ends up erasing a real history of race and gender discrimination.”—Adi Robertson, senior tech and policy editor at The Verge
     
  3. “The backlash was a reminder of older controversies about bias in Google’s technology, when the company was accused of having the opposite problem: not showing enough people of color, or failing to properly assess images of them. In 2015, Google Photos labeled a picture of two Black people as gorillas. As a result, the company shut down its Photo app’s ability to classify anything as an image of a gorilla, a monkey or an ape, including the animals themselves. That policy remains in place.”—Nico Grant, tech reporter at the New York Times
     
  4. “The embarrassing blunder shows how AI tools still struggle with the concept of race. OpenAI’s Dall-E image generator, for example, has taken heat for perpetuating harmful racial and ethnic stereotypes at scale. Google’s attempt to overcome this, however, appears to have backfired and made it difficult for the AI chatbot to generate images of White people.”—Catherine Thorbecke and Clare Duffy, business and tech reporters at CNN
     
  5. “Solving the broader harms posed by image generators built on generations of photos and artwork found on the internet requires more than a technical patch,” says University of Washington researcher Sourojit Ghosh, who has studied bias in AI image generators. “You’re not going to overnight come up with a text-to-image generator that does not cause representational harm. [These tools] are a reflection of the society in which we live.”—Kelvin Chan and Matt O’Brien, business and tech reporters at AP News
     
  6. “This wasn’t what we intended. We did not want Gemini to refuse to create images of any particular group. And we did not want it to create inaccurate historical—or any other—images. So we turned the image generation of people off and will work to improve it significantly before turning it back on. This process will include extensive testing.”—Prabhakar Raghavan, Google senior VP of knowledge & information

 


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Beatles vs. Stones. Amazon vs. Walmart. OpenAI vs. Nvidia. Classic rivalries all, and the fierceness of the latter battle will—like those earlier struggles for dominance—bring out the best in both. As put by Pymnts.com: The dynamics between OpenAI and Nvidia “will ultimately transform commercial and consumer AI. After all, AI, which requires accelerated computing to run, is seen by many as a natural progression, or a phase step up, of the internet’s own foundational computing and information layers.” Read the whole thing.
     
  • Apps are direct-to-consumer health technologies. As such, they represent a new folk medicine. That’s the view of academic psychologist Jordan Richard Schoenherr, PhD. “Users adopt these [healthcare AI] technologies based on trust rather than understanding how they operate,” Schoenherr writes at The Conversation. “App store ratings and endorsements can replace the expert review of healthcare professionals.” Read the rest.
     
  • Regulatory frameworks to govern AI in healthcare are works in progress. It can be no other way, because, as noted repeatedly in this space, regulators can’t keep pace with the innovators they’re charged with regulating. What they can do, suggests Dubai-based business journalist Praseeda Nair, is prioritize the risks that call for immediate attention. “As the regulatory landscape evolves, legal teams and industry players must navigate complexities,” Nair writes at Omnia Health Insights. “Pioneering regulation introduces short-term compliance burdens but can offer clarity, reduce litigation risks and instill [long-term] confidence in the technology.”
     
  • An AI startup that’s still three months from celebrating its first birthday is already valued at more than $2 billion. And it just got hitched to Microsoft as a preferred partner. Paris-based Mistral is led by a 31-year-old CEO named Arthur Mensch who worked for Google until leaving to do something different with two like-minded 30-something engineers from Meta Platforms’ AI lab in Paris. The Wall Street Journal reports the trio and their colleagues plan to “outmaneuver Silicon Valley titans” on nimbleness. One way they’ll do this is by making their AI software open-source and handing it out for free. “We want to be the most capital-efficient company in the world of AI,” Mensch tells WSJ. “That’s the reason we exist.”
     
  • Training surgeons-to-be in laparoscopic procedures they’ll need to master: There’s an AI for that. The teaching module is more than a computerized version of the classic board game Operation, explains a professor at the New Jersey Institute of Technology, where the program is under development. The project “steps outside the bounds of the generative AI hype and into the domain of helping humans learn better,” says the professor, Usman Roshan, PhD. “We expect broader usage of our software in surgical training programs nationwide and ultimately into other areas of human learning where physical activity is involved.” Details here.
     
  • Amazon tycoon Jeff Bezos is leading a $675M funding round to help an AI startup get its humanoid robots on their own two feet. The startup, Figure AI, is unique for its “aggressive incorporation of AI” into its robots, which bear a passing resemblance to C-3PO of Star Wars fame, as Inc. magazine notes. Figure AI figures the droid-like machines will relieve human workers of all sorts of workplace tasks. “The dexterity with which the robot picks up [a coffee] capsule and can wiggle it if it hasn't fallen into place properly is very impressive—and very human-like,” Inc. reports. More here.
     
  • From the AI research beat:
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.



© Innovate Healthcare, a TriMed Media brand