Legal, compliance and privacy leaders rank rapid Generative AI adoption as their top issue in the next two years

Industry Updates Published 16th January 2024

Rapid generative AI (GenAI) adoption is the top-ranked issue for the next two years for legal, compliance and privacy leaders, according to a survey by Gartner. In a September 2023 survey of 179 legal, compliance and privacy leaders, 70% of respondents reported rapid GenAI adoption as a top concern for them.


‘Increases in capability and usability have prompted rapid and widespread company adoption of GenAI’, said Stuart Strome, director, research in the Gartner Legal, Risk & Compliance Practice. ‘While AI regulation is still being developed, uncertainties and unforeseen risks abound. Businesses will have to contend with these challenges to ensure ethical and legal use of this powerful new technology.’


Gartner has identified four key areas that legal, compliance and privacy leaders need to address.

1) Limited visibility into key risks

The ease of adoption, widespread applicability, and ability of GenAI tools to perform a range of different business tasks mean that assurance teams will have limited visibility into new risks.

New processes to detect and manage these risks will take time to roll out, leaving businesses exposed in the interim

‘New processes to detect and manage these risks will take time to roll out, leaving businesses exposed in the interim’, said Strome. ‘Legal leaders should adapt preexisting, well-established and widely distributed risk monitoring and management practices until new processes can be implemented. For example, they might modify data inventories, records of processing activities or privacy impact assessments to track GenAI usage.’

2) Lack of employee clarity on acceptable use

Employees will lack clarity on what constitutes acceptable use of the technology due to unfamiliarity with the rules governing it. Legal leaders should work to build consensus on “must avoid” outcomes and institute controls to minimise the likelihood of those outcomes while championing acceptable use cases in policies and guidance.

Consider working with IT to develop embedded controls

‘Legal leaders need to institute a mandatory human review of GenAI output, prohibit entering enterprise IP or personal information into public tools such as ChatGPT, and develop policies that require clear indication of GenAI provenance on any public-facing output’, said Strome. ‘It’s important to include real-world examples of prohibited and acceptable GenAI usage in policy guidance and alert employees when policies are updated. Further, consider working with IT to develop embedded controls, such as popups in GenAI tools that require users to attest they are not using the tools for prohibited cases.’

3) Need for AI governance

As GenAI tools rapidly become more ubiquitous, poor accountability for negative outcomes could create unacceptable legal and privacy risks. Yet for most companies AI governance will not fit neatly into existing functional organisational structures, and the expertise needed may be scattered throughout the business or may not exist at all. Legal leaders need to clearly document roles and responsibilities for approvals, policy management, risk management and training for GenAI.

Advocate for establishing a cross-functional steering committee

‘Legal leaders should advocate for establishing a cross-functional steering committee, or for modifying the mandate of an existing committee, to establish principles and standards for use, and to align on roles and responsibilities related to AI governance,’ said Strome.

4) New opportunities to scale repetitive legal tasks

GenAI’s capacity to produce natural language output lends itself to several uses within legal departments, with the potential to minimise the time lawyers spend on low-value work. While GenAI tools can assist with time-consuming, repetitive tasks such as conducting legal research, drafting contracts and producing summaries of legislation, their output often includes errors, so legal leaders must ensure it is reviewed for accuracy.

It’s no surprise that rapid GenAI adoption is the most referenced risk for legal leaders this year

Legal leaders should develop an internal pilot programme to test GenAI automation or augmentation for low-risk, repetitive, time-consuming tasks that involve producing written deliverables. They should also compare pilot results on time spent and output quality against conventionally produced work.

‘Given GenAI’s ease of use and flexibility of application for enterprises, it’s no surprise that rapid GenAI adoption is the most referenced risk for legal leaders this year. However, legal leaders should not simply react by instituting draconian policies that restrict its use’, said Strome. ‘That approach will likely impact business competitiveness and encourage employees to illicitly use GenAI tools on their personal devices. Progressive legal leaders accept that GenAI can drive value, and they are working with others in their organisation to develop governance and policies that nudge employees and business partners toward high-benefit, low-risk use cases.’