News You Need to Know Today

Executive inattention to workplace AI | AI reporter’s blog | Partner news

Wednesday, September 25, 2024

In cooperation with Northwestern and Nabla

Artificial Intelligence | AI in the C-Suite

It’s 2024. Does the C-suite know—or care—what workers are doing with generative AI?

In the rush to do something, anything, with AI, are America’s business leaders playing fast and loose with the risks?

Unease wouldn’t be unreasonable, as a new survey of 330 C-suiters shows fewer than half of organizations have policies in place to mitigate AI’s inherent risks. And even among those that have codified their concerns, the policies “lack the teeth and internal alignment needed to make them most effective.”

The finding and the remark come from the international law firm Littler, which specializes in labor and employment law. The firm published its report on the C-suite survey Sept. 24. The paper offers a top-down look at how executives are weighing AI’s risks against its rewards.

While healthcare was not a discrete focus in the work, the report’s content is broadly relevant to executives across various sectors of the U.S. economy. Here are some highlights. 

1. AI-related lawsuits are expected to rise alongside heightened regulatory risks.

Watch for the suits to span issues from privacy to employment law to copyright and trademark violations, Littler advises. “A complex patchwork of local and state laws is emerging in the U.S.,” the report’s authors write. “In the 2024 legislative session, at least 40 states introduced AI bills related to discrimination, automated employment decision-making and more.”

‘C-suite executives are taking note: Nearly 85% of respondents tell us they are concerned with litigation related to the use of predictive or generative AI in HR functions and 73% say their organizations are decreasing their use for such purposes as a result of regulatory uncertainty.’

2. Positive sign: Among respondents whose organizations have a generative AI policy in place, nearly three-quarters say employees are required to adhere to it.

About seven in 10 are relying on “expectation setting” to track compliance, Littler reports, while more than half use access controls and employee reporting. “Given that training and education about generative AI (and indeed, all AI) goes hand in hand with successful expectation setting, it is notable that only 46% of employers are currently offering or in the process of offering such programs.” More:

‘However, high percentages of those who do [offer such programs] include several important components in these trainings, such as AI literacy, data privacy, confidentiality and ethical use.’

3. Risks associated with generative AI are rising, not least because the tech is easy for employees to use of their own volition. 

Despite this reality, only 44% of organizations have a specific policy in place for employee use of the technology, Littler found. Some 48% of respondents cited a perception of low risk as a major reason for not having such a policy.

‘The perception of low risk may be understandable, particularly for smaller organizations in less-regulated industries. The number of lawsuits and regulatory enforcement actions has not yet reached a fever pitch—though that’s expected to change in the months and years to come.’

4. Chief legal officers (CLOs) and general counsels (GCs) are less certain that employee-use components are part of their organizations’ policies than their CEO and chief HR officer counterparts. 

For instance, Littler found, 84% of CEOs and CHROs believe their policies include employee review and acknowledgement, while only 57% of legal executives say the same. Additionally, 66% of CEOs and CHROs say employees must clear AI uses with managers or supervisors, compared with 30% of CLOs and GCs.

‘Some of this dissonance may be driven by the rapid rate of change. Legal teams, for example, may not be involved in policy elements until there is a problem—and, depending on the organization, may not be part of the centralized AI decision-making group.’

5. HR-related AI litigation may not seem like a significant risk today—but that doesn’t mean it won’t be tomorrow. 

‘So far, claims have mostly been brought against software vendors themselves—including class actions in California, Illinois, and Massachusetts—though this could change as more organizations put these tools into practice and more regulations are established.’

The report is available in full for free.

 


The Latest from our Partners

Clinical Pioneer University of Iowa Health Care Rolls Out Nabla to All Clinicians - UI Health Care, a leader in clinical innovation, partnered with Nabla to alleviate administrative burdens and enhance provider well-being by optimizing clinical documentation processes. During a five-week pilot program, clinicians reported a 26% reduction in burnout. Building on this success, the ambient AI assistant will now be deployed to over 3,000 healthcare providers, including nurses, with customized features specifically designed to support nursing workflows.

Artificial Intelligence | AI in Healthcare

Industry Watcher’s Digest

Buzzworthy developments of the past few days. 

  • Healthcare AI has a checklist manifesto all its own. The document is the brainchild of an international team of researchers. They were led in the effort by a biostatistician and a quantitative medicine specialist at Duke-NUS Medical School. That’s the academic collaboration between Duke University and the National University of Singapore. The Duke-NUS duo says the team designed the work mainly for researchers working with GenAI, along with scientific journal publishers, institutional review boards, funders and regulators. They built the paper on nine widely accepted ethical principles—accountability, autonomy, equity, integrity, privacy, security, transparency, trust and beneficence. “As far as we are aware, our checklist is the first attempt at creating a practical solution to the ethical issues raised in the papers we included in our review [of existing ethical discourse],” the Duke-NUS researchers, Ning Yilin, PhD, and Liu Nan, PhD, tell the school’s news division. “While the checklist will not close all these gaps, it is a tool that can mitigate ethical concerns by guiding users in making comprehensive ethical assessments and evaluations.”
     
  • Is it just me, or has talk about the Internet of Healthcare Things quieted? No matter. At least one expert in healthcare technology sees big things ahead from the combo of IoT and AI. He even names it as the AI innovation about which he’s most excited. In a Q&A with Intelligent CIO, Herat Joshi, PhD, MBA, also predicts quantum computing could “further transform healthcare by accelerating data processing for drug discovery, while federated learning will expand AI’s reach by enabling models to learn from decentralized data across multiple health systems, all while maintaining patient privacy.”
     
  • LinkedIn, we have a good mind to fire you. Everybody’s favorite online networking platform recently started tapping users’ uploaded content to train its algorithms. It hasn’t been hard to opt out, but you had to know that you had to. You also had to noodle around to find the right click sequence. LinkedIn is paying the price now, albeit with nothing stronger than some bad press. “Hard-to-find opt-out tools are almost never an effective way to allow users to exercise their privacy rights,” F. Mario Trujillo, a staff attorney at the Electronic Frontier Foundation, tells the Washington Post. “If companies really want to give users a choice, they should present users with a clear ‘yes’ or ‘no’ consent choice.” Additional coverage here.
     
  • The AI Gold Rush isn’t a onetime money flood. It’s more like a series of moon tides. The word picture is back, floated to the fore by OpenAI’s reported bid to raise $6.5 billion at a $150 billion valuation. This development “could signal a new era in AI commercialization, potentially reshaping entire industries and sparking a fierce battle for market dominance,” write the editors at Pymnts. The implications “could be far-reaching, with businesses across sectors rushing to integrate advanced AI capabilities into their operations. Companies successfully leveraging OpenAI’s technology may gain substantial competitive advantages, potentially disrupting traditional business models and reshaping entire markets.” Read the rest.
     
  • Primary care could use a break. AI might be able to supply it. By having bots and other AI agents interview and inform patients before they’re seen by overbooked clinicians, the technology could free one of the most perpetually time-pressed specialties to spend a full 15 minutes paying attention to the patient. Once the human-to-human interface finally happens, of course. Forbes contributor Sai Balasubramanian, MD, JD, takes a quick but fresh look at the potential. Companies seeking to AI-ify the space for service and profit “are indeed learning that care delivery, and specifically primary care delivery, is not easy,” he writes. “This is where AI tools and bots are being considered as potentially helpful.”
     
  • Mental healthcare is similarly ripe for AI. One of the things it might do is ease the transition individuals must make. Many have to change from a private person hiding their struggles to a patient willing to share such secrets with a stranger. AI can ease this difficult turnabout by helping to, believe it or not, make mental healthcare more human. “Ironically, by removing humans from the intake process, the non-judgmental nature of AI can create a more welcoming environment for those who might otherwise feel stigmatized or uncomfortable seeking help from an individual,” explains computational neuroscientist Ross Harper, PhD, in MedCity News. “In mental healthcare, we are increasingly thankful to AI for doing those things that allow us to be more … human.”
     
  • Price transparency meets AI simplicity. UnitedHealthcare hopes to find that good things come to those who combine the two. To that end, the country’s biggest health insurance company by revenue recently introduced its new Find Care & Costs tool. The algorithm steers enrollees to the right clinicians while also letting them know how much they’ll probably have to pay out of pocket for the visit. Data and analytics exec Craig Kurtzweil tells Fierce Healthcare the company is already seeing “better patterns of care, more utilization of preventive care, things like lower utilization of ER, those types of things start to pop out for members that can leverage this experience. So making it really convenient and something that they actually enjoy using is a win-win.”
     
  • I tried and failed to guess the company here: “Investing in This Healthcare Stock Could Be Like Catching Nvidia at the Dawn of the AI Boom.” Now you guess. Then see how you did.
     
  • Recent research in the news: 
     
  • Notable FDA Approvals:
     
  • Funding news of note:
     
  • From AIin.Healthcare’s news partners:
     

 


Innovate Healthcare thanks our partners for supporting our newsletters.
Sponsorship has no influence on editorial content.


You received this email because you signed up for newsletters from Innovate Healthcare.
Change your preferences or unsubscribe here

Contact Us  |  Unsubscribe from all  |  Privacy Policy

© Innovate Healthcare, a TriMed Media brand
