Should the Legal industry embrace AI?
We asked ChatGPT.
Artificial intelligence (AI) tools, such as OpenAI’s ChatGPT, have generated a great deal of discussion in the media since ChatGPT’s launch in November 2022. Despite the excitement, many questions about AI and its impact on the legal sector and executive search remain to be explored: can this tool be deployed usefully, and what are the implications for ethics, accuracy and employment?
The rise of AI has huge implications for us all. In the legal industry, where clients have rated law firms only six out of ten for their use of technology, the onboarding of such AI technologies could happen at great speed. A fifth of the world’s largest firms are already using AI; leading international law firm Allen & Overy, for example, has implemented the AI platform Harvey after a successful trial.
In our recent poll on LinkedIn, 49% of respondents said that they would use AI in their paralegal work, and 15% would use it for more complex contract work (the results can be found here). To explore the capabilities of ChatGPT, we asked it to discuss the potential benefits of AI to the legal industry and the risks it anticipated for lawyers.
The first question we asked ChatGPT was whether it thought law firms should use AI in their practice. Within seconds, it generated three paragraphs outlining the benefits and, interestingly, some of the potential risks. Notably, ChatGPT refers to itself in the first person, ‘I’. From this, we asked a series of follow-up questions to elaborate further; below are the answers:
While ChatGPT shared numerous benefits with us, two were especially pertinent within the legal context: improved efficiency and accuracy. We asked ChatGPT to elaborate on each.
Follow-up question: ‘You mentioned a benefit of AI could be improved efficiency. Can you elaborate?’
From ChatGPT: ‘AI can automate routine tasks such as document review, allowing lawyers to focus on more complex and value-added work. AI can help lawyers conduct research more quickly and effectively.’
It is true that computers can process thousands of lines of research in a fraction of the time it would take a person. The ability to shorten the time it takes to research and review matters is extremely beneficial. Lawyers are under increasing pressure from businesses to keep their billing timesheets reasonable, so this may help legal teams to meet internal and external deadlines more quickly, keeping the cost to the client down.
Follow-up question: ‘Explain to us how AI can be both efficient and accurate.’
From ChatGPT: ‘AI can be both efficient and accurate because it uses algorithms and models that can detect patterns, and in turn use this information to make predictions based on large sets of data, without human bias and error limitations. AI-powered legal research tools can quickly identify relevant cases and statutes that human lawyers may have missed, and can do so with a high degree of accuracy.’
The average lawyer must create, review, and send large amounts of documentation. This daily task can become monotonous and prone to error, which, of course, introduces risk. Technology-assisted review (TAR) tools also enable lawyers to identify risks at an earlier stage. This enables a more pre-emptive approach, where clients and employers can be advised on relevant legal issues before they occur.
While the benefits of using AI are compelling, ChatGPT did highlight the risks associated with its adoption, which need to be considered by any firm or company.
Interestingly, one of the first risks highlighted is also a benefit ChatGPT initially shared: accuracy.
From ChatGPT: ‘There is a risk that AI systems may produce incorrect results or make errors in their analysis.’
When asked for examples of caselaw, ChatGPT returned three cases which, upon closer examination, were not real. At this stage, AI tools like ChatGPT lack the nuance required to produce consistently correct answers.
Legal & Ethical Implications
From ChatGPT: ‘There may be legal and ethical implications associated with the use of AI in the legal profession, such as questions about the responsibility for errors or omissions made by AI systems.’
As ChatGPT pointed out, if there are errors from an AI tool, who carries the liability? Regarding the ethical handling of sensitive information, lawyers should be aware that ChatGPT stores personal and conversation data. Lawyers are responsible for monitoring the training and application of AI tools and their algorithms, which creates an extra element of liability within the business. This in itself requires a great deal of work to monitor and safeguard against error, potentially undermining any time saved by using AI.
The Use of ChatGPT and Similar AI Tools in Executive Legal Search
With their ability to generate a wealth of data-driven, relevant information, AI tools such as ChatGPT and Harvey (also developed by OpenAI) have the potential to optimise the search process. One pertinent example is the mapping stage, where key criteria from the job description (such as years of experience, location and qualifications) are used to create a list of potential candidates on software like LinkedIn Recruiter. As a result, the delivery of candidate longlists and shortlists is significantly expedited.
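For readers curious what this criteria-based mapping step looks like in practice, here is a minimal sketch of filtering a candidate pool against job-description criteria. The field names, criteria and candidate records are purely illustrative assumptions, not drawn from LinkedIn Recruiter, Harvey or any real platform:

```python
# Hypothetical sketch of the "mapping" stage: filter a candidate pool
# against key criteria from a job description, then order the result
# by seniority to form a longlist. All names and fields are invented.

def matches(candidate, criteria):
    """Return True if a candidate meets every hard criterion."""
    return (
        candidate["years_experience"] >= criteria["min_years"]
        and candidate["location"] == criteria["location"]
        and criteria["qualification"] in candidate["qualifications"]
    )

def build_longlist(candidates, criteria):
    """Keep only matching candidates, most experienced first."""
    shortlisted = [c for c in candidates if matches(c, criteria)]
    return sorted(shortlisted, key=lambda c: c["years_experience"], reverse=True)

# Illustrative candidate pool and criteria.
candidates = [
    {"name": "A", "years_experience": 8, "location": "London", "qualifications": ["Solicitor"]},
    {"name": "B", "years_experience": 3, "location": "London", "qualifications": ["Solicitor"]},
    {"name": "C", "years_experience": 10, "location": "Leeds", "qualifications": ["Solicitor"]},
]
criteria = {"min_years": 5, "location": "London", "qualification": "Solicitor"}

longlist = build_longlist(candidates, criteria)
```

In this toy example only candidate "A" survives the filter: "B" lacks the required experience and "C" is in the wrong location. Real mapping adds many softer signals, which is exactly where the limitations discussed below come in.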
That said, given the people-facing nature of executive search as an industry, it is possible that AI tools such as ChatGPT may overlook more unconventional candidate profiles that could also be suitable. Their capacity to assess how a candidate presents themselves and articulates their sector-specific experience and knowledge must also be examined more closely.
On a broader level, as highlighted in Harvard Business Review, the use of such tools raises questions about data privacy, such as the storage of candidate data on providers’ systems. The same is true of psychometric test results, which are used to differentiate between potential new hires and are especially sensitive where an individual may have learning difficulties, for example. As technological development streamlines the hiring process, the ethics of what information recruiters and hiring managers have access to must be more clearly defined. In Harvard Business Review, the author uses the example of social media activity correlating positively with an individual’s intelligence and personality, including their “dark side” traits. This raises the question of whether it is ethical to mine such data for hiring purposes, when users generally use apps like Twitter for personal, not professional, purposes.
In summary, the decision to adopt AI technologies requires time and careful consideration. Some may not yet be ready to embrace AI fully, given its risks. However, within the legal sector it is already being deployed in everyday work. In a wider context, a recent study showed that 27% of professionals already use generative AI tools in their work; of those, 68% had yet to tell their bosses.
As a closing question, we asked ChatGPT who would make a better lawyer, a robot or a human?
Its response was reassuring and complimentary: ‘it’s unlikely that AI and robots will ever completely replace human lawyers, as there will always be a need for the judgment, creativity, and personal touch that human lawyers can provide.’