It’s 2024. Does the C-suite know—or care—what workers are doing with generative AI?

In the rush to do something, anything with AI, are America’s business leaders playing fast and loose with the risks? 

Unease wouldn’t be unreasonable, as a new survey of 330 C-suiters shows fewer than half of organizations have policies in place to mitigate AI’s inherent risks. And even among those that have codified their concerns, the policies “lack the teeth and internal alignment needed to make them most effective.”

The finding and the remark come from the international law firm Littler, which specializes in labor and employment law. The firm published the report on its C-suite survey Sept. 24. The paper offers insights into the balance between risk and reward as viewed from the top down.

While healthcare was not a discrete focus of the survey, the report’s content is broadly relevant to executives across sectors of the U.S. economy. Here are some highlights.

1. AI-related lawsuits are expected to rise alongside heightened regulatory risks.

Watch for the suits to span issues from privacy to employment law to copyright and trademark violations, Littler advises. “A complex patchwork of local and state laws is emerging in the U.S.,” the report’s authors write. “In the 2024 legislative session, at least 40 states introduced AI bills related to discrimination, automated employment decision-making and more.”

‘C-suite executives are taking note: Nearly 85% of respondents tell us they are concerned with litigation related to the use of predictive or generative AI in HR functions and 73% say their organizations are decreasing their use for such purposes as a result of regulatory uncertainty.’

2. Positive sign: Nearly three-quarters of respondents whose organizations have a generative AI policy in place require employees to adhere to it. 

About seven in 10 are relying on “expectation setting” to track compliance, Littler reports, while more than half use access controls and employee reporting. “Given that training and education about generative AI (and indeed, all AI) goes hand in hand with successful expectation setting, it is notable that only 46% of employers are currently offering or in the process of offering such programs.” More:

‘However, high percentages of those who do [offer such programs] include several important components in these trainings, such as AI literacy, data privacy, confidentiality and ethical use.’

3. Risks associated with generative AI are rising, not least because the tech is easy for employees to use on their own initiative.

Despite this reality, only 44% of organizations have a specific policy in place for employee use of the technology, Littler found. Some 48% of surveyed executives cited a perception of low risk as a major reason for not having such a policy.

‘The perception of low risk may be understandable, particularly for smaller organizations in less-regulated industries. The number of lawsuits and regulatory enforcement actions has not yet reached a fever pitch—though that’s expected to change in the months and years to come.’

4. Chief legal officers (CLOs) and general counsels (GCs) are less certain than their CEO and chief human resources officer (CHRO) counterparts that employee-use components are part of their organizations’ policies.

For instance, Littler found, 84% of CEOs and CHROs believe their policies include employee review and acknowledgement, while only 57% of legal executives say the same. Additionally, 66% of CEOs and CHROs say that employees approve uses with managers or supervisors, compared with 30% of CLOs and GCs.

‘Some of this dissonance may be driven by the rapid rate of change. Legal teams, for example, may not be involved in policy elements until there is a problem—and, depending on the organization, may not be part of the centralized AI decision-making group.’

5. HR-related AI litigation may not seem like a significant risk today—but that doesn’t mean it won’t be tomorrow. 

‘So far, claims have mostly been brought against software vendors themselves—including class actions in California, Illinois, and Massachusetts—though this could change as more organizations put these tools into practice and more regulations are established.’

The report is available in full for free.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.