AI holds incredible potential for the legal industry, driving new levels of efficiency across processes such as case management, document collation, and compliance verification.
When used in the right way, AI tools can pave the way to major productivity gains. But while AI can be your ‘copilot’ to help bring down administrative burdens, it’s essential for legal practices to use it correctly. Misunderstand or misuse these tools and you risk falling foul of AI hallucinations, and could even contribute to a miscarriage of justice.
How do AI hallucinations occur in legal contexts?
In simple terms, when AI is not implemented and used correctly, hallucinations can occur. An AI hallucination is where AI presents false, misleading or fabricated information as fact. In a legal context, that can naturally be incredibly problematic, with hallucinations arising in several different ways:
- Inaccurate or irrelevant data
If a legal practice allows an AI model to extract information from both the firm’s internal case management system and the wider web, it could end up using unreliable sources.
- Conflicting or outdated information
Even if AI models have been told to only draw upon information in your case management system, issues can still arise. There might, for example, be multiple versions of key documents, and the AI draws information from the wrong version.
- Poorly framed prompts
In instances where your AI is leveraging the right datasets, it can still provide wrong, vague or incomplete answers if you don’t define your prompts and questions clearly enough.
- Fabrication
Finally, without the right guardrails in your prompts, AI may completely fabricate information, as the sketch after this list illustrates.
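To make the last two points concrete, here is a minimal sketch of what a guard-railed prompt might look like. The document reference, the example question and the build_prompt helper are hypothetical stand-ins; the resulting prompt would be sent to whichever approved model a firm actually uses, not to a specific product named here.

```python
# Minimal sketch: a guard-railed prompt that restricts the model to supplied
# documents and forbids invented citations. Names below are illustrative only.

SYSTEM_RULES = """
You are a drafting assistant for a law firm.
- Answer ONLY from the documents provided below.
- Quote the document reference for every factual claim or citation.
- If the answer is not in the documents, reply exactly: "Not found in the supplied documents."
- Never invent case names, citations, or quotations.
"""

def build_prompt(question: str, documents: dict[str, str]) -> str:
    """Combine the guardrails, the validated documents, and the question."""
    doc_block = "\n\n".join(f"[{ref}]\n{text}" for ref, text in documents.items())
    return f"{SYSTEM_RULES}\nDOCUMENTS:\n{doc_block}\n\nQUESTION: {question}"

if __name__ == "__main__":
    docs = {
        "smith_v_jones_agreed_bundle_v3": "…extracted text of the latest agreed bundle…",
    }
    prompt = build_prompt(
        "Summarise the limitation arguments raised by the defendant.", docs
    )
    print(prompt)  # this prompt would then go to the firm's approved model
```

The point of the sketch is simply that a well-framed prompt narrows the model's sources, demands references, and gives it an explicit way to say it doesn't know, rather than leaving it free to fill gaps with fabrication.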
In a legal context, any of these hallucinations can have severe consequences. In an £89 million damages case against the Qatar National Bank earlier this year, 18 out of 45 case-law citations submitted by the claimants were found to be fictitious.
Preventing AI hallucinations from arising in legal cases
It’s important to understand that AI is a ‘tool’ for legal professionals. It’s not there to replace people – it’s there to assist.
Legal practices need to ensure that AI is being used in the right way. To achieve this, firms need to develop and implement AI usage policies, outlining which tools are appropriate for use and which ones are not.
According to Microsoft, 78% of AI users are bringing their own AI tools to work. For legal practices, that’s potentially a lot of case managers using unsanctioned tools that aren’t fit for purpose.
To reduce this risk, firms need to ensure that only approved tools are used, such as dedicated legal AI agents, or work with an IT solutions partner to develop proprietary tools built on the practice’s own validated data sources.
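What “validated data sources” can mean in practice is filtering material before it ever reaches a model. The sketch below assumes a hypothetical Document record and approval flag; a real implementation would draw these from the firm’s case management system.

```python
# Minimal sketch: pass only approved, current documents to an AI tool.
# The Document fields and sample data are hypothetical.

from dataclasses import dataclass

@dataclass
class Document:
    ref: str        # matter/document reference
    version: int    # version number in the case management system
    approved: bool  # set by a fee earner or knowledge lawyer
    text: str

def select_validated(documents: list[Document]) -> list[Document]:
    """Keep only approved documents, and only the latest version of each reference."""
    latest: dict[str, Document] = {}
    for doc in documents:
        if not doc.approved:
            continue  # never pass unapproved material to the model
        current = latest.get(doc.ref)
        if current is None or doc.version > current.version:
            latest[doc.ref] = doc
    return list(latest.values())

if __name__ == "__main__":
    docs = [
        Document("witness_statement_A", 1, True, "…draft…"),
        Document("witness_statement_A", 2, True, "…signed final…"),
        Document("downloaded_case_note", 1, False, "…unverified web source…"),
    ]
    for doc in select_validated(docs):
        print(doc.ref, "v", doc.version)  # only witness_statement_A v2 survives
```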
Checking and validating AI systems is crucial in legal settings where the consequences can be severe. Get it right, and AI can drive huge efficiency gains, accelerate case reviews, and enhance legal analysis and documentation. Get it wrong, however, and AI hallucinations stemming from invalid data or poor prompts could cause havoc. If you don’t have the right AI expertise in-house to do this, work with AI experts to keep you on the right track.
In collaboration with Fraser Dear, Head of AI and Innovation, BCN
