The article below has been adapted from a presentation given by Andrew Goddard KC (with assistance from Laura Hussey) at “TCC 150: Past, Present and Future: The TCC in Wales and the South East” in Bristol on Thursday 18 May to mark the Technology and Construction Court’s 150th Anniversary (hosted by Burges Salmon).
There have been numerous procedural innovations in civil litigation which we now take for granted, and for which the TCC, through its judges, lawyers and users, can take enormous credit. Many of these developments, all of which most practitioners would consider progressive, derived from an impetus for efficiency. They were also informed by rationality: in other words, they made sense when judged against objective criteria. For example, Written Openings, served in advance of trial, allow the parties and the judge to know what the real issues are considered to be and where the likely battleground will lie. The same can be said of Witness Statements and Lists of Issues, whilst a core bundle limits the need to trawl through endless lever arch files searching for the relatively few documents around which much of the evidence and most of the issues are likely to revolve.
These and other changes in procedure were motivated by what might be termed internal forces. That is to say, they sprang from a review of the courts’ own procedures by both lawyers and judges, concerned to see whether those procedures required updating to improve efficiency without adversely affecting the quality of the litigation process, and, indeed, with the intention of improving it.
However, there now exist what one might term external forces, which have the capacity radically to alter the litigation process and the legal landscape more generally. Those external forces can be summed up by two letters: A.I.
For many of us A.I., or Artificial Intelligence, is still something about which we have little more than a sketchy understanding. Indeed, it is a relatively unspecific term, in that it covers numerous systems from word processing, at one end of the spectrum, to wholly autonomous vehicles or weaponry at the other. Few of us, it is suggested, understand to any significant level of detail how Large Language Models (LLMs) or machine learning work, or how various A.I. models reach conclusions or offer predictions. Given the speed of development of A.I. systems, it is necessary to consider how A.I. may come to play a role in the litigation process (indeed, this is already happening) and how it may come to play a role in the judicial decision-making process.
Computer programmes running on processors containing billions of transistors allow astronomically large amounts of data to be searched or processed almost instantaneously. These programmes can enable litigation research to be conducted in a fraction of the time that a manual search would need, whether it be for topic-specific documents or for authorities considering a particular point of law or a particular standard form contract provision, and so on. The massive computing power now available provides a quicker means of doing what was already being done manually; but it does not change the nature of the task, any more than Westlaw has changed the nature of the task of legal research.
Thus, research tools like Westlaw still require the lawyer to review the authorities that a particular search may return and to consider what the textbooks may have to say on a particular point. Similarly, programmes that can search databases of documents and identify and order those that fit a given search profile do not obviate the need for the lawyer to evaluate the returned documents and to consider how they may support, or otherwise, the narrative that a party’s case is promoting. These systems involve a low level of A.I., but essentially provide the automation of a manual process.
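For illustration only, the kind of task such systems automate can be sketched in a few lines of code. This is not the method of any particular product; the documents, search terms and the matches helper are entirely hypothetical:

```python
# Hypothetical illustration: automating a manual document search.
# The "database" and the search profile are invented for the example.
documents = [
    {"id": "DOC-001", "text": "Interim certificate issued under clause 4.9 of the contract."},
    {"id": "DOC-002", "text": "Minutes of progress meeting, no reference to payment."},
    {"id": "DOC-003", "text": "Notice of adjudication concerning the interim certificate."},
]

search_terms = ["interim certificate", "adjudication"]

def matches(doc, terms):
    """Return the search terms that appear in a document's text."""
    text = doc["text"].lower()
    return [term for term in terms if term in text]

# Rank documents by how many search terms they contain.
results = sorted(
    ((doc, matches(doc, search_terms)) for doc in documents),
    key=lambda pair: len(pair[1]),
    reverse=True,
)

for doc, hits in results:
    if hits:
        print(doc["id"], "->", hits)
```

Nothing in the sketch reasons about the documents: it merely filters and orders them, leaving the evaluation to the lawyer.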
Generative A.I., of which ChatGPT is the best-known example, however, involves a level of autonomy going beyond mere automation. By autonomy is meant the system’s capacity to generate its own reasoning, with the capability of learning and changing over time. Used in the field of law, this technology, it is suggested, does have the potential to change quite radically the nature of the legal function, including the role of lawyers and of the courts and judiciary.
Note that the reference to ‘generative’ A.I. is not a reference to ‘General’ A.I. Generative A.I. refers to an A.I. system that is capable of generating new content that is similar to the data it has been trained on. For example, a generative A.I. model trained on images of birds could generate new images of birds that it has never seen before. Generative A.I. is often used in applications like image generation, text generation, and even music generation. On the other hand, General A.I., also known as “Artificial General Intelligence” (AGI), refers to an A.I. system that can perform a wide range of intellectual tasks that are typically associated with human intelligence. AGI systems would be able to reason, solve problems, and learn in a way that is similar to human intelligence. In summary, while generative A.I. focuses on generating new content based on existing data, general A.I. aims to create a machine that can replicate human-like intelligence across a range of domains and tasks.
Now, we can be confident that the description just given of the distinction between generative and general A.I. is correct, because it is the response that ChatGPT gave when it was prompted. And it should know.
In the context of the practice of law, this distinction can be seen as being between, on the one hand, the augmentation of human legal reasoning efforts by use of outputs generated from existing data; and on the other, the achievement of autonomous legal reasoning by computer-based systems able to perform legal reasoning unaided by human legal reasoners.
Many in the A.I. community think that it is only a matter of time before fully autonomous reasoning is achieved, although the predicted timescale varies between 5 and 25 years, depending upon whom you ask.
So, we are not there yet. But it is interesting to consider how generative A.I. is being and might be used in the legal context and what impact that may have on the role or position of lawyers and the courts.
There are several law firms in England which have already developed their own in-house A.I. systems that go beyond automation and have a level of generative autonomy. When given the relevant facts and parameters specific to a real-life issue, the A.I. can provide its prediction of the likely resolution or outcome of that issue. It can also justify and explain its reasoning by reference, inter alia, to case law and to what the A.I. considers to be, for example, trends in judicial decision making. Like ChatGPT, this type of A.I. uses Large Language Model (LLM) technology and deep learning to produce human-like text in response to prompts or questions.
ChatGPT’s Version 4 has recently been made available and is a significant advance on Version 3.5. By way of example, when Version 3.5 was made to ‘sit’ the American Uniform Bar Examination, its score placed it in the bottom 10% of test takers; Version 4’s score placed it in the top 10%. That is an amazing improvement in about 18 months.
As the functionality and reliability of these systems increases it seems likely that lawyers’ use of such tools will increase. If a reasoned analysis of a problem can be returned by the A.I. in a matter of seconds or minutes, why, one may ask, should one not take advantage of such a service? In some ways it could be said to be little different from a Partner asking their associate solicitors to do some research, or Leading Counsel asking their Junior to do the same.
And what of the TCC judge? They can be expected to consult Hudson or Keating and other texts to inform the decision-making process, so why not an LLM system; why not a law-oriented version of ChatGPT? Indeed, practices in the USA are already advertising that they use such systems to increase their success rates and reduce client costs.
Let us pause here and consider the implications of the use of this new tool by lawyers and by judges, and what it may mean for experts and for clients.
Lawyers
Lawyers are using, and increasingly will use, LLM systems to inform the advice given to a client or a submission made to a court or arbitral tribunal.
But A.I. based upon LLM technology can ‘hallucinate’ (to use the current terminology), meaning that it can make mistakes and even make things up. A recent report in the Guardian newspaper revealed how a chatbot cited more than one article by a Guardian journalist which the journalist had never written and which had never been published, because they did not exist. US experience suggests that there is a real risk of phantom authorities being hallucinated by the A.I.
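By way of a purely illustrative sketch of how such phantom authorities might be screened for, the code below compares A.I.-suggested citations against a list of known reports. The citations, the known_reports set and the verify helper are invented for the example; a real system would consult an authoritative citation database:

```python
# Hypothetical illustration: verifying A.I.-suggested authorities before relying on them.
# The "known_reports" set stands in for an authoritative citation database.
known_reports = {
    "[2015] EWHC 1234 (TCC)",
    "[2018] EWCA Civ 567",
}

ai_suggested_citations = [
    "[2015] EWHC 1234 (TCC)",   # genuine (in this invented example)
    "[2021] EWHC 9999 (TCC)",   # hallucinated: not in the database
]

def verify(citations, database):
    """Split A.I.-suggested citations into verified and unverified lists."""
    verified = [c for c in citations if c in database]
    unverified = [c for c in citations if c not in database]
    return verified, unverified

verified, unverified = verify(ai_suggested_citations, known_reports)
print("Verified:", verified)
print("Requires manual checking (possible hallucination):", unverified)
```

The check is only as good as the database behind it; the underlying point is that A.I. output currently needs independent verification before it is relied upon.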
Now let us suppose that legal advice is given on the basis of erroneous output from A.I. To what extent will lawyers be entitled to rely on what the algorithms tell them? Will the lawyer concerned be in breach of their professional duties? But what of the service provider, or the A.I. developer? Would they also be accountable, and if so, on what basis? And accountable to whom – the unfortunate lawyer? The unfortunate litigant?
And what of the lawyer who does not use the available technology, and so misses a line of argument that the A.I. would have suggested, and which might have won the argument in court? Or the lawyer who does not use the available technology and so does not receive the A.I.’s prediction that the case would be likely to fail, and who ploughs on to lose in court at great cost to their client?
Perhaps the future superstars of the legal world will not be those with the best ‘legal skills’ as we know them now, but instead those best at understanding how A.I. works, at creating the most useful prompts and at sifting out hallucinations. And will the make-up of legal teams change so that a team always includes an individual adept at dealing with A.I.? Indeed, it may be that this is happening already in some instances.
Let us now turn the clock forward 10 or 20 years. Computing power will have increased exponentially. Today’s ChatGPT and similar LLM technology may appear quite quaint in comparison to what will then be the state of the art. A.I. legal reasoning may generally outperform that of human lawyers, aided by an ability to search vast amounts of material and detect patterns and trends that are beyond the human mind’s ability to process.
Consider also that the use of A.I. in diverse fields of human activity will have increased dramatically – such as health, education, the environment, security and so on. Each of these fields is perfectly capable of giving rise to litigation concerning the use of A.I.
Moreover, as is well accepted, advanced A.I. systems can produce results in respect of which the process by which the result was produced cannot be explained. By way of example, consider the system developed to play the game Go. Go is an abstract strategy board game for two players in which the aim is to surround more territory than the opponent. The game was invented in China more than 2,500 years ago and is believed to be the oldest board game continuously played to the present day. The A.I. (AlphaGo) was trained on over 30 million moves and was able to come up with moves that no Go expert had imagined when it defeated the world champion Lee Sedol four games to one. How the A.I. did this is not explainable, in the sense that the system designers cannot explain by what series of steps the A.I. reached the conclusion that the moves it made were the correct ones to win the game. The A.I. was obviously correct, because its previously unimagined moves won the match, but how it ‘imagined’ such moves is unknown and, it would seem, unknowable.
So, consider this: what if a party to litigation wants to rely on the output of the A.I., not by adopting or borrowing from its arguments, but rather by advancing its output as conclusive on the basis that what the algorithms have determined must, ipso facto, be correct, because the algorithms have a track record of being correct? Such a proposition, which today seems extreme, raises numerous questions as to the design of the algorithms and their appropriateness, whether they are biased towards a particular result and so on – questions which could be answered, if at all, only by the manner of the A.I.’s performance being open to scrutiny, which, of course, it may not be. And in any event, how the A.I. reached its conclusion may simply not be explainable. How a TCC judge in the future should or would deal with such a situation remains to be seen.
Judges
It was noted earlier that we expect judges to consult textbooks to assist them in the decision-making process, so why should they not consult an A.I.? A few matters immediately seem worthy of consideration. Firstly, if the decision-making process may be influenced by the output of A.I., the parties may wish to see, and may consider that they have a right to see, what the judge put into the A.I. and what it returned. This also raises the point mentioned earlier about being able to scrutinise the design of the algorithms, which may simply not be possible (they may be the confidential proprietary product of a developer) or feasible.
Next, if parties were allowed some insight into how the judge had used the A.I., when would that be provided? If after judgment, that might be thought too late, and might also open the door to unmeritorious appeals. If insight into the judge’s use of the A.I. were provided prior to judgment, it might prompt further submissions and argument, thus adding to the time and cost of the proceedings. Moreover, judges might not (perhaps understandably) feel comfortable about their decision-making processes being subject to such scrutiny.
And what if a judge uses A.I. but does not inform the parties? Or what if a party suspects that the judgment has been influenced by undisclosed use of A.I.? An internet search reveals that some judges, although it is believed not in England and Wales, have already used the technology to assist their decision making. It may be that consideration should be given to making rules to govern such situations, or to giving guidance to judges as to how, if at all, their own use of the technology is to be dealt with, to the extent such guidance does not currently exist.
It seems all but inevitable that TCC judges will have to deal with A.I. on at least two fronts. Firstly, there will be litigation concerning the accuracy and effectiveness of A.I. used in the design and building of construction and engineering projects and in their operation. In other words, there will be claims centred upon allegedly defective A.I. itself. Secondly, TCC judges will have to grapple with the use made of A.I. in the prosecution and defence of claims. Indeed, if, as may be likely, the use of A.I. proliferates and its capabilities become greater and more complex, query whether a specialist branch of the TCC devoted to A.I. cases may be required.
And if A.I.-related litigation proliferates in other fields – the aforementioned health, education, the environment etc – the case for a specialist A.I. court may become even stronger.
Judge Analytics
And what of ‘Judge Analytics’, that is, A.I. which purports to analyse every available judgment or other pronouncement by a judge and to predict how they are likely to rule in a particular situation? Whether this is simply an example of greater open justice, and as such to be applauded, or instead a form of potentially destabilising surveillance, is yet another debate which is sure to be had.
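For illustration only, the sort of aggregation that such analytics perform can be sketched as follows; the judges, outcomes and figures are entirely invented:

```python
# Hypothetical illustration: tallying outcomes by judge from past decisions.
# The data below is invented; real analytics tools mine published judgments.
from collections import Counter

past_decisions = [
    {"judge": "Judge A", "outcome": "claimant"},
    {"judge": "Judge A", "outcome": "defendant"},
    {"judge": "Judge A", "outcome": "claimant"},
    {"judge": "Judge B", "outcome": "defendant"},
    {"judge": "Judge B", "outcome": "defendant"},
]

def outcome_rates(decisions, judge):
    """Return the proportion of each outcome for a given judge."""
    outcomes = Counter(d["outcome"] for d in decisions if d["judge"] == judge)
    total = sum(outcomes.values())
    return {outcome: count / total for outcome, count in outcomes.items()}

rates = outcome_rates(past_decisions, "Judge A")
print({outcome: round(rate, 2) for outcome, rate in rates.items()})
# {'claimant': 0.67, 'defendant': 0.33}
```

The aggregation itself is trivial; the debate is about whether such profiling of judges should be permitted at all.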
It is worthy of note that under recently enacted French law judicial analytics is prohibited, and punishable by up to five years’ imprisonment: see Article 33 of the Justice Reform Act. The provision is aimed at preventing anyone – but especially legal tech companies focused on litigation prediction and analytics – from publicly revealing the pattern of judges’ behaviour in relation to court decisions.
A key passage of the new law states:
‘The identity data of magistrates and members of the judiciary cannot be reused with the purpose or effect of evaluating, analysing, comparing or predicting their actual or alleged professional practices.’
Expert Evidence
And what about the role of the technical or quantum expert in the TCC? Will the expert’s role shift from being an expert in Discipline ‘X’ to being an expert in ‘X and in how to create the best prompts and input data into an A.I. programme’? As lawyers, will we start to consider less an expert’s prowess at explaining technical issues in reports and in cross-examination, and more their ability to utilise A.I.? One may query what the implications of this might be for the independence of the expert.
Clients
A word about clients: the people who pay our bills. For many clients, mediation has been a life-saver. Where successful it reduces or avoids the costs, time and stress of litigation and leads to a result which they feel they can live with. It is not yet mandatory under the TCC Rules, although it is certainly vigorously encouraged. And it may indeed soon become mandatory.
So what if, when A.I. is sufficiently advanced, parties are encouraged (perhaps directed) to submit their claims and defences to early A.I. evaluation? If a party continues to litigate in the face of A.I.’s predicted result, and loses or does less well than predicted, should it face some special costs order? This may sound somewhat fantastical today, but it may be the norm in the years to come.
Regulation
Donald Rumsfeld famously said:
“We … know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know. And if one looks throughout the history … it is the latter category that tends to be the difficult one.”
There is good reason to think that we are in the territory of both ‘known unknowns’ and ‘unknown unknowns’ when it comes to how A.I. will play out in the context of the practice of law. And, of course, not only with regard to the practice of law – witness the recent resignation of Geoffrey Hinton, the ‘Godfather of AI’, from Google so that he could speak freely about the ‘dangers’ and ‘risks’ of the same technology that he was instrumental in creating.
The questions pondered above are not insignificant and the legal community needs to grapple with the potential effects of A.I. as a matter of urgency. The Law Society and the Bar Council have established working groups to examine the impact of technology, including A.I., on the legal sector. However, it would be reasonable to suggest that in terms of regulation of A.I. we are way behind where we should be: the machines are ahead of us, and disappearing further into the distance.
Writing recently in The Economist, the historian and philosopher Yuval Noah Harari said:
“We can still regulate the new A.I. tools, but we must act quickly. Whereas nukes cannot invent more powerful nukes, A.I. can make exponentially more powerful A.I. The first crucial step is to demand rigorous safety checks before powerful A.I. tools are released into the public domain. Just as a pharmaceutical company cannot release new drugs before testing both their short-term and long-term side-effects, so tech companies shouldn’t release new A.I. tools before they are made safe. We need an equivalent of the Food and Drug Administration for new technology, and we need it yesterday.”
Respectfully, that view is echoed.
To begin the dive into the world of A.I. and its relationship with the law is to go down the proverbial rabbit hole. For those interested in doing so, the articles below may be of interest:
What Is ChatGPT Doing … and Why Does It Work? – Stephen Wolfram (https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/)
Will ChatGPT make lawyers obsolete? (Hint: be afraid) – Reuters
Former Go champion beaten by DeepMind retires after declaring AI invincible – The Verge
An AI robot lawyer was set to argue in court. Real lawyers shut it down. – NPR
What can the Legal industry Reasonably Expect out of ChatGPT? – Thomson Reuters
Colombian judge uses ChatGPT in ruling on child’s medical rights case – CBS News