Machines are learning – your texts can be predicted; your navigation system can advise you to take a different route if there is traffic ahead; your car knows to brake to avoid a collision; your connected lights at home know when it's dusk; your health app can tell whether your symptoms mean you should go to A&E or simply drink plenty of fluids. In the UK during 2020, 80% of adults owned at least one connected device and relied on the AI (Artificial Intelligence) within it to make decisions.
“The justice system works swiftly in the future now that they’ve abolished all lawyers” – Doc Brown, Back to the Future Part II (set in 2015)
Asking your device if you need an umbrella is a far cry from asking what the likely outcome of a dispute will be, so HCR will be around longer than Doc Brown suggested. However, many law firms (including HCR) are adopting technologies powered by artificial intelligence and integrating them into the way legal services are delivered. The LawTech Adoption and Training Survey (2020) found that AI-based technologies were predominantly used for legal research, due diligence, eDiscovery/document review and regulatory compliance. A few firms have used them for predictive analysis in litigation scenarios – assessing the prospects of success of a given claim.
Reshaping the legal profession?
AI has been defined by the government as ‘the use of digital technology to create systems capable of performing tasks commonly thought to require intelligence.’
Those advising on disputes will regularly be asked what they think the likely outcome of litigation might be. To provide an answer, all known facts are evaluated, legislation and case law are analysed, experience of previous similar cases is drawn upon, information on the opponent is considered, conclusions are drawn and prospects provided. This process will of course be influenced by the available information and the adviser’s personal experience, IQ and EQ. If the adviser is risk averse, they may lower the prospects of success compared to an adviser who is less concerned by certain factors. As such, it is an imperfect process, flawed by the human element.
Software originally designed for risk analysis – predicting life expectancy based on environmental, health and wealth factors – has been redeployed to analyse statutes and the huge body of case law precedents within the legal system, identifying the trends and factors that lead to decisions. Clearly the data capacity of the AI systems running such software will vastly exceed the mental capacity of any human lawyer. The facts of a given case can then be input to the system, and the software’s algorithms set to work predicting the likely outcome: by analysing the statistical relevance of earlier cases and identifying the probability of the same outcome, they provide a prospect of success.
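The underlying idea can be sketched very simply. The toy below is purely illustrative – it is not any vendor’s or firm’s actual system, and all the feature names and figures are invented for the example – but it shows the principle described above: each past case is weighted by how closely its features overlap with the new claim, and the prospect of success is the overlap-weighted rate at which similar cases were won.

```python
# Toy illustration only: estimate a claim's prospect of success from the
# "statistical relevance" of earlier cases. Each past case is a set of
# features plus its outcome; relevance is measured by feature overlap.

def prospect_of_success(new_case, past_cases):
    """Weight each past case by its feature overlap with the new case,
    then return the overlap-weighted success rate (0.0 to 1.0)."""
    total = won = 0.0
    for features, outcome in past_cases:
        overlap = len(new_case & features)  # shared features = relevance
        total += overlap
        if outcome == "won":
            won += overlap
    return won / total if total else 0.5  # no comparable cases: even odds

# Invented precedent data for the sketch
past = [
    ({"breach of contract", "written agreement", "paid deposit"}, "won"),
    ({"breach of contract", "oral agreement"}, "lost"),
    ({"written agreement", "late delivery"}, "won"),
]
claim = {"breach of contract", "written agreement"}
print(round(prospect_of_success(claim, past), 2))  # prints 0.75
```

A real system would of course use far richer features and statistical models trained on thousands of decisions, but the principle – probability derived from the outcomes of the most similar precedents – is the same.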
AI vs human emotions
However, AI systems are yet to grasp fully the nuances of expression and language (how many times do you repeat a simple request to a connected device, only to be told “I can’t help you with that right now”?). The algorithms may disregard certain aspects of a case for lack of statistical relevance, yet those very aspects might have unlocked its success. Presently (at least), AI systems are also flawed – because they lack the human element.
Augmented legal services are already a reality, so the question is not so much whether to take advice from an AI lawyer, but whether you would prefer to be advised by a lawyer who uses AI to give you a better service.
In a speech made in March last year the Master of the Rolls, Sir Geoffrey Vos, predicted that use of AI and other digital tools will become a core part of the UK’s justice system over the next 20 years, confirming that there is huge scope for technology to help with the efficiency, speed and scale of court processes. The adoption of digital tools (Teams Court Hearings and eTrial Bundles) was rapid during the pandemic, with one judge in Bristol commenting that more technological changes were implemented in their court during the first three months of the lockdown than in the previous three years.
If Vos is correct, “those seeking justice in 2040 would do so through an integrated online digital justice system composed of pre-action dispute resolution portals resolving different kinds of disputes backed by a court-based online dispute resolution system, across civil, family and tribunals”. So maybe Doc Brown was only 25 years out?!
AI Act proposal
Whatever the next 20 years brings, it is clear that reliance on AI systems across all sectors will increase. In anticipation of this, legislators in Europe have already been busy establishing regulations and legislative safeguards. The AI Act proposal is the world’s first comprehensive attempt to regulate AI. It seeks to address issues such as algorithmic social scoring, remote biometric identification and the use of AI systems in law enforcement, education and employment.
The extent to which the UK will replicate the AI Act (as it did the GDPR) remains to be seen. The current draft has been subject to some significant criticism, so a revised form with the same goals should be expected. In any event, under Art. 2 the AI Act will still apply to UK companies that place AI systems on the market or put them into service in the Union, have users of their AI systems located within the Union, or whose systems produce output used in the Union.
The AI Act is still going through the legislative approval process but could come into force later this year, possibly as early as 30 March. Under Art. 85 of the current draft, the AI Act will then apply 24 months after that date, so you have a little time left to prepare – but organisations (including law firms) that don’t move with the times will lose any competitive advantage they may currently have.