News You Need to Know Today

5 steps to AI maturity | AI newsmakers | Partner news

Thursday, June 6, 2024

In cooperation with Northwestern and Nabla


5 questions to guide the AI voyage from skepticism to maturity across the enterprise

Knowledge workers are using generative AI to write emails, summarize information, generate content, draft technical copy, brainstorm ideas and analyze data. Yet fewer than a third of companies that employ such workers have a formal AI strategy in place. And “dangerous divides” separate knowledge workers from the leaders to whom they ultimately report.

The figures and the warning come from a survey conducted by the work management platform supplier Asana and the AI safety and research startup Anthropic, which together collected completed responses from around 5,000 knowledge workers in the U.S. and U.K.

Noting that more than half of knowledge workers now use gen AI, the report authors observe that organizations move through five stages along the path to full AI integration—skepticism, activation, experimentation, scaling and maturity. Along the way, they suggest, organizations do well to ask five guiding questions—each starting with a “C”—to successfully traverse the five stages.

1. AI Comprehension: How well do your employees understand how to use AI?

To reach AI maturity, employee understanding of the technology is crucial, and it varies significantly across stages, the authors write. At stage 1, “AI skepticism,” only 2% are familiar with generative AI basics, 6% have a strong understanding of its capabilities for their work, and 20% use AI weekly at work. More:

By stage 5, “AI maturity,” familiarity with generative AI surges to 35%, 53% have a strong grasp of its capabilities, and 93% engage with AI each week.

2. AI Concerns: What issues are top-of-mind for employees regarding AI?

As workers develop basic AI proficiency in stages 2 and 3—“AI activation” and “AI experimentation”—new fears emerge, Asana and Anthropic point out. Contributors become increasingly concerned about others’ perceptions of their new AI use, worrying that relying on AI might be seen as taking shortcuts or producing inauthentic work. More:

Data shows 29% of workers worry about being perceived as lazy and 25% feel like frauds for relying on AI to complete tasks.

3. AI Collaboration: How do employees work together with AI?

Workers who see AI as a teammate are 33% more likely to report productivity gains from using AI at work, compared to those who consider it a tool. The authors further note that, as workers use AI more frequently, they begin to recognize its potential for collaboration and its capacity to assume more complex roles within their workflows. More:

Workers who interact with AI on a daily basis are significantly more likely to prefer AI to act like a teammate (22%) compared to those who use it monthly (16%).

4. AI Context: What AI policies, guidelines and principles frame the organization’s outlook on AI?

At stage 1 (AI skepticism), only 2% of employees report that their organizations have defined AI principles, compared to 34% at stage 5 (AI maturity). By stage 5, organizations not only recognize that well-defined AI policies and principles are crucial for regulatory compliance, but they also view them as strategic assets that can differentiate them in the market.

These policies and principles provide a clear framework for employees to follow when using AI, ensuring ethical, consistent and responsible usage across the organization.

5. AI Calibration: How are AI effectiveness and value measured in your organization?

To effectively calibrate AI, organizations must engage their workforce in the evaluation process, the authors state. At stage 1 (AI skepticism), only 17% of workers say their organizations actively collect employee feedback on AI tools, which may contribute to the modest productivity improvements observed at this stage.

By contrast, at stage 5 (AI maturity), this practice becomes well-established, with 91% of workers reporting that their organizations actively incorporate employee feedback into the AI calibration process.

Download the full report.

 


The Latest from our Partners

Freeing Clinicians from the Documentation Burden with Nabla - Curious about real-world data from ambient AI implementation? Check out this NEJM Catalyst article to see how Nabla has reduced documentation burden for one of the largest medical groups in the US, fostering more personal and effective patient interactions.


Industry Watcher’s Digest

Buzzworthy developments of the past few days.

  • Physicians aren’t complicated. When it comes to medical technology, at least. “We need to know: Does it work? Will it work in my practice? Will insurance cover its use? And, importantly, who is accountable if something goes wrong?” The perspective is from Jesse Ehrenfeld, MD, MPH. The president of the American Medical Association makes the point on behalf of all physicians by way of reiterating what he believes it will take for healthcare AI to live up to its potential. “For health AI to work,” he maintains, “physicians and patients have to trust it.” The implication is that the technology—along with its regulation, liability loopholes and transparency assurances—is not there yet. Read the rest.
     
  • ‘Garbage in, garbage out.’ Another close watcher of AI in healthcare makes much the same point as Dr. Ehrenfeld. This commentator just comes at it from a different angle. “I know that AI can and will have value in healthcare at some point,” writes Jeff Gorke, MBA, a healthcare specialist with the tax, assurance and consulting firm Elliott Davis. “But I also know that there are very real landmines defined and built by coders/programmers. Bad inputs drive inaccurate, poor-quality and dubious outputs that cannot be trusted.” Forbes published the piece June 4.
     
  • Nurses have a responsibility to be knowledgeable about emerging healthcare technologies. As for AI in particular, they should embrace it and not be frightened of it. That’s the opinion of Maura Buchanan, a past president of the Royal College of Nursing in the U.K. Nurse Buchanan made the remarks this week at the RCN’s 2024 annual congress in Wales. Another U.K. nursing leader, RCN safety rep Emma Hallam, encouraged attendees to represent the profession wherever and whenever healthcare AI tools are being designed. “Nurses at every stage of their career, of all disciplines and diverse characteristics,” Hallam said, “should be at the forefront of the thoughtful development, monitoring and regulation of AI in healthcare.” Event coverage here.
     
  • By their distinguishing characteristics will you know tomorrow’s CIOs. A survey by Deloitte’s CIO program finds only 35% of technology leaders ranking AI, machine learning and/or data analytics as their No. 1 priority. More common is shaping, aligning and delivering a unified tech strategy and vision. No less interestingly, the survey shows perceptions of the CIO role splitting between the conventional-minded and their contemporary counterparts. Old: “A technical guru.” New: “A change agent.” And so on. More here.
     
  • OpenAI continues taking it on the chin. This week the maker of ChatGPT absorbed a punch from a dozen or so current and former employees. The ad hoc group posted an open letter demanding protections for individuals who raise safety concerns from inside OpenAI and other purveyors of advanced AI systems. The former staffers signed their names while the current employees went with “Anonymous.” Their signatures are joined by two names of some note from Google DeepMind and three AI pioneers—Yoshua Bengio, Geoffrey Hinton and Stuart Russell. Even though the letter isn’t aimed solely at OpenAI, it comes after the company has endured high-level resignations over safety concerns, barbs from former executives, a brewing legal battle with Scarlett Johansson and anger over what some would call its stifling non-disparagement policies. The letter’s posting has re-raised heated chatter about all that and more.
     
  • How far should the government go in its efforts to thwart deep fakers of audio content? The question takes on new urgency in an election year. Precedent for answering it may come from the kerfuffle over a recording of a special counsel’s interview with President Joe Biden. The interview had to do with his handling of those classified documents that turned up in unsecured places. But that’s beside the point. A DOJ official gets to the heart of the matter from the audio-withholders’ perspective. “If the audio recording is released, malicious actors could create an audio deepfake in which a fake voice of President Biden can be programmed to say anything that the creator of the deepfake wishes.” On the other side are Biden’s political rivals accusing DOJ of trying to protect Biden from the embarrassment of sounding elderly and confused in the recording. Get the rest.
     
  • Elvis, meet Elon. The latter wants to build his multibillion-dollar AI supercomputer plant in Memphis, Tenn. Somehow that seems fitting, given the city’s eternal association with the de facto king of another lucrative realm, rock and roll. The building on which Musk has set his sights used to house a factory owned by Electrolux, the Swedish multinational home appliance manufacturer. Memphis’s mayor tells the Memphis Business Journal he’s excited about the opportunity. Meanwhile some local leaders are raising concerns about the massive drain on power and water that supercomputers require. Good local coverage here.
     
  • Recent research roundup:
     
  • Funding news of note:
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.

Interested in reaching our audiences? Contact our team.



© Innovate Healthcare, a TriMed Media brand