Public artificial intelligence tools and online chatbots such as ChatGPT, Claude, Gemini and Grok have quickly become part of everyday life. We use them to answer questions, explain complex topics and help draft emails and other documents.
It’s no surprise that people involved in a dispute may be tempted to use them early on to understand their legal position or decide what to do next.
These tools can seem helpful, but using public AI for anything connected with a legal dispute carries real risks. In some cases, those risks can seriously damage your position.
AI can have a place in dispute resolution, but it’s important to understand the risks of using public AI chatbots.
Misplaced confidence
Public AI systems work by drawing on internet content and spotting patterns in existing text. They don’t understand the law, your situation or the strength of your case. Instead, they predict what a convincing answer looks like.
AI responses often sound polished and authoritative. They can appear to be sound legal advice even when they’re wrong or misleading.
Public AI chatbots may:
- Rely on outdated or incorrect legal information without telling you where it came from
- Confuse English law with the law of other countries
- Miss important facts or procedural rules
- Tell you what you want to hear depending on how you ask the question (and a small change in wording can produce a different answer).
AI chatbots can also invent legal principles or court cases that don’t exist. This tendency to generate false information is called ‘hallucination’. There have been cases in England and Wales, and around the world, where people (including lawyers) have been caught citing fictitious cases hallucinated by AI.
If you rely on public AI output when deciding whether to take a certain step, run an argument or issue or settle a claim, you may be working from an incomplete or inaccurate view of the facts and the law. That can undermine your position and affect the outcome of your dispute.
Lack of expert knowledge and experience
Public AI tools draw only on information that's publicly available online. They can't access specialist legal textbooks, subscription research databases or the many court decisions that sit behind paywalls.
They also lack context. They don’t have experience of how courts typically deal with cases like yours, how procedural rules are applied in practice or what litigation strategies tend to succeed or fail. They don’t know (and won’t ask about) the underlying context of your dispute or the motivations of the people involved.
A serious warning: confidentiality and privilege
Another serious risk of using public AI chatbots was addressed in a recent immigration case, UK v Secretary of State for the Home Department. In that case, a legal adviser entered case information into ChatGPT, and fictitious cases generated by the tool were later cited in court.
Most public AI chatbots state in their terms of use that information you enter isn’t confidential, and that prompts and responses may be stored or used by the provider.
So, if you enter details of your dispute into a public AI tool (what happened, who said what, draft documents, witness accounts or your strategy and concerns), you may be sharing that information with a third party.
The risk that the chatbot provider will directly use or disclose your information is probably low. However, sharing information in this way can have serious consequences for legal privilege.
Privilege is a principle that protects certain confidential communications, meaning they don't have to be disclosed in legal proceedings or used as evidence against you. It usually covers communications between you and your lawyers, and confidential communications with third parties made in contemplation of litigation.
However, privilege depends on confidentiality. If you make information public, it may lose its confidential quality and privilege may be lost. Once privilege is lost, it can’t be recovered.
In the case referred to above, the court held that placing information into ChatGPT put it into the public domain. As a result, it was no longer confidential and any privilege was lost.
Why does this matter? In most disputes, the court will order the parties to disclose documents (including electronic documents) that are relevant to the dispute. Privileged documents can be withheld, but not if privilege has been lost.
That means you may have to disclose prompts, documents and responses from public AI chatbots to the other side. That could expose your thinking, weaken your position or harm your credibility.
The law in this area is still developing, but the risk of waiving privilege should be considered from the outset of a dispute.
AI does have a place in dispute resolution
By contrast, closed AI tools can be used more safely in disputes with the right safeguards. They can also help lawyers handle cases more efficiently.
Using AI safely in disputes means using a secure system with strict access controls, clear rules on what information can be entered and assurances that prompts and outputs aren’t used to train public models. Even then, AI should support (not replace) professional judgment, and outputs should be reviewed by someone with the relevant expertise.
Conclusion
Public AI chatbots aren’t a safe substitute for legal advice in disputes. They can:
- Sound persuasive while being wrong
- Encourage risky or unrealistic decisions
- Strip away confidentiality and legal privilege.
You take serious risks if you rely on public AI chatbots to assess the strength of your case, develop strategy, prepare documents or enter confidential or sensitive information.
If you have concerns about a dispute, speaking to a qualified lawyer remains the safest way to protect your position. Where appropriate, lawyers can use AI in the right way to handle your case more effectively, while ensuring your position remains protected.