News You Need to Know Today

AI’s big guns promise to behave themselves | AI names in the news

Tuesday, July 25, 2023

In cooperation with Northwestern

artificial intelligence leaders

7 AI powerhouses make 8 commitments on model development

Seven of the most competitive companies in tech have publicly pledged to follow a specific set of guidelines when advancing AI technologies.

Amazon, Google, Meta, Microsoft, OpenAI, Anthropic and Inflection agreed to the ideals and constraints July 21.

The harmony among alpha rivals was orchestrated by the Biden Administration. The agreement is organized around three principles that all seven players agree “must be fundamental to the future of AI”: safety, security and trust.

By saying yes to the White House’s proposal, the seven have promised to:

  1. Commit to internal and external red-teaming of models or systems in areas including misuse, societal risks and national security concerns such as bio, cyber and other safety areas.
  2. Work toward information sharing among companies and governments regarding trust and safety risks, dangerous or emergent capabilities and attempts to circumvent safeguards.
  3. Invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.
  4. Incent third-party discovery and reporting of issues and vulnerabilities.
  5. Develop and deploy mechanisms, such as robust provenance tracking, watermarking or both, that enable users to understand when audio or visual content is AI-generated (see the toy sketch after this list).
  6. Publicly report model or system capabilities, limitations and domains of appropriate and inappropriate use, including discussion of societal risks such as effects on fairness and bias.
  7. Prioritize research on societal risks posed by AI systems, including on avoiding harmful bias and discrimination, and protecting privacy.
  8. Develop and deploy frontier AI systems to help address society’s greatest challenges.
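
Commitment No. 5 is the most concretely technical of the eight. As a toy illustration of the underlying idea, here is a minimal Python sketch of least-significant-bit (LSB) image watermarking, one of the simplest invisible-marking techniques. It is emphatically not any signatory’s actual method, and every name in it is hypothetical; production provenance schemes (statistical watermarks in model outputs, cryptographically signed metadata such as C2PA) are built to survive compression and editing, which this toy is not.

# Toy sketch only: LSB watermarking. Hypothetical tag; not a production scheme.
import numpy as np

TAG = "AI-GENERATED"

def embed(pixels: np.ndarray, tag: str = TAG) -> np.ndarray:
    # Write the tag's bits into the lowest bit of the first len(tag)*8 pixels.
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    out = pixels.flatten().copy()
    out[:bits.size] = (out[:bits.size] & 0xFE) | bits
    return out.reshape(pixels.shape)

def extract(pixels: np.ndarray, nbytes: int = len(TAG)) -> str:
    # Read the lowest bit of the first nbytes*8 pixels back into a string.
    bits = pixels.flatten()[:nbytes * 8] & 1
    return np.packbits(bits).tobytes().decode(errors="replace")

image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in grayscale image
marked = embed(image)
assert extract(marked) == TAG  # the tag is recoverable
assert int(np.abs(marked.astype(int) - image).max()) <= 1  # pixels changed imperceptibly

A single JPEG re-encode would scrub this mark, which is exactly why the commitment specifies “robust” provenance and watermarking.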

What close watchers are noticing: 

  • The voluntary commitments announced [July 21] are not enforceable—Paul Barrett of New York University in The New York Times
     
  • The guidelines outlined Friday don’t require companies to disclose information about their training data—Sabrina Siddiqui and Deepa Seetharaman of The Wall Street Journal
     
  • Lawmakers on both sides of the aisle have introduced legislation to regulate the tech in the weeks since Senate Majority Leader Chuck Schumer (D-NY) began trying to corral bipartisan action on AI in June—Makena Kelly of The Verge
     
  • Critics argue that [OpenAI CEO] Sam Altman and other AI doomsayers are incentivized to push for regulation because it would raise the barrier of entry for potential rivals and make it harder for them to compete with deep-pocketed industry leaders—Thomas Barrabi of the New York Post
artificial intelligence industry

Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Dredging motherlodes of data from the briny deep to the nettable surface is likely to rank high among generative AI’s top contributions. At least CIOs seem to think so. After surveying 600 of them, MIT Technology Review and Databricks report this capability will power “extraordinary new advances” across enterprises. Where previous AI initiatives had to focus on use cases in which structured data was ready and abundant, the authors point out, “a trove of unstructured and buried data is now legible, unlocking business value.” Summary here, full report available here in exchange for contact info.
     
  • In many cases, physicians plus AI are better than either one alone. By now that’s a given. But the “many” in the first clause is doing a lot of the work here. And now another physician has translated the vague abstraction into a vivid firsthand account. The incident happened in the ER. “Based on nearly every algorithm and clinical decision rule that providers like me use to determine next steps in cases like this, my patient was safe for discharge. But something didn’t feel right,” writes Craig Spencer, MD, MPH, in STAT. “My gut instinct compelled me to do more instead of just discharging her.” Sure enough, the patient was quietly suffering a potentially fatal medical emergency. Read the whole thing.
     
  • Will AI become the crack cocaine of the digital age? It may well, if the author and public intellectual Joel Kotkin has a bead on it. His case against AI has nothing to do with the technology’s potential for wiping out humanity. It’s about the money. Like crack, AI offers “the highs of facility and speed to the masses without giving most of us anything good,” he writes in The Spectator. “Meanwhile, the dealers—the tech giants and autocratic regimes—will become ever more rich and powerful.” He’s not done yet. On its current trajectory, Kotkin predicts, AI will probably function as a “force multiplier for bad things.” Read it and think.
     
  • And could this be one of the bad things that AI stands to forcibly multiply? Some people long for a future in which none of their loved ones ever completely dies. The decedents just become AI-powered bots. ABC News talks to the enthusiasts and raises the right questions.
     
  • Artificial intelligence is changing our comprehension of the real thing. In fact it may end up changing the shape and nature of human intelligence itself. The political scientist and educator Anne-Marie Slaughter considers the potential in a piece published July 24 in the Financial Times. “The significance of generative AI is less that it is artificial than that it has replicated … specific strands of intelligence that we must now integrate into our understanding of our abilities.” Every time you turn to Google for help remembering something, for example, you’re using it as “an instantly searchable external organ.” Read the rest.
     
  • Meta is making its Llama 2 large language models available through Amazon SageMaker JumpStart. The announcement from AWS coincides with last week’s word from Meta that Llama 2 is now free and open-sourced for all comers. Amazon details here. (For the curious, a minimal deployment sketch follows this digest.)
     
  • Intel is offering almost three dozen open-source AI ‘reference kits’ created to help developers work faster and more easily. The resources are fruits of Intel’s long-running collaboration with Accenture. Intel says each kit contains model code, training data, instructions for the machine learning pipeline and more. Details.
     
  • Pharmacy benefits manager NirvanaHealth (Southborough, Mass.) is migrating Troy Medicare (Charlotte, N.C.) onto the former’s AI-equipped cloud platform. The plan is to leverage the might of independent pharmacies for improving patient service while holding the line on costs. Announcement.
     
  • Cardiac blood-test supplier Prevencio (Kirkland, Wash.) is showcasing data supporting the accuracy of the company’s AI-powered blood tests. The facts and figures are from research conducted at Massachusetts General Hospital. Prevencio has concentrated on cardiovascular disease, but its platform is disease agnostic. Announcement.
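
About the Llama 2 item above: here is what deployment through SageMaker JumpStart looks like, as a minimal sketch. It assumes the SageMaker Python SDK’s JumpStartModel interface and the meta-textgeneration-llama-2-7b model ID; the payload fields, the EULA flag and the IAM role and GPU quota behind the call are assumptions to verify against Amazon’s documentation, not a definitive recipe.

# Minimal sketch, not a definitive recipe: deploy and query Llama 2 7B via
# SageMaker JumpStart. Assumes AWS credentials, a SageMaker execution role
# and instance quota are already configured.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="meta-textgeneration-llama-2-7b")
predictor = model.deploy(accept_eula=True)  # Llama 2 is gated behind Meta's license terms

# Payload schema follows JumpStart's text-generation convention; it can
# differ across model versions, so check the model card.
response = predictor.predict({
    "inputs": "In one sentence, what is a large language model?",
    "parameters": {"max_new_tokens": 64, "temperature": 0.2},
})
print(response)

predictor.delete_endpoint()  # endpoints bill while running; tear down when done

The hosted-endpoint route trades per-hour instance cost for never touching the weights yourself; downloading Llama 2 directly from Meta is the do-it-yourself alternative.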

Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.



© Innovate Healthcare, a TriMed Media brand
