The Master of the Rolls, Sir Geoffrey Vos, gave the plenary lecture at the SLSA 2022 conference in York, 7th April 2022. Socio-Legal Studies Association / YouTube

(LifeSiteNews) — England’s second highest judge stated that he thinks Artificial Intelligence (AI) programs will decide court cases in the future. 

The Master of the Rolls, the second highest judge in England and Wales, Sir Geoffrey Vos, said: “I believe that it [AI] may also, at some stage, be used to take some (at first, very minor) decisions.”

Vos made these comments during the Law and Technology Conference, according to the Daily Mail. 

“I have already said that I have no doubt that lawyers will not be able to stand aside from the uses of generative AI,” the high-level judge said. 

“Clients will insist that all tools available are at least considered for application within the delivery of legal services.” 

“But will judicial decisions be taken by machines rather than judges? As many of you will know, we are introducing in England and Wales a digital justice system that will allow citizens and businesses to go online to be directed to the most appropriate online pre-action portal or dispute resolution forum.”

“That digital justice system will ultimately culminate at the end of what I regard as a ‘funnel’ in the online court process that is already being developed for pretty well all civil, family and tribunal disputes.” 

Vos said that in order for AI to make decisions in court cases, certain control mechanisms will be required, namely, “(a) for the parties to know what decisions are taken by judges and what by machines, and (b) for there always to be the option of an appeal to a human judge.”

READ: ‘Godfather of AI’ warns of difficulty in stopping ‘bad actors’ from exploiting it 

While Vos was confident that AI would take over the decision-making in certain legal disputes, he predicted that cases involving serious criminal offenses or custody of children will still be judged by humans because people would not accept judgments by robots in these instances. 

“The limiting feature for machine-made decisions is likely to be the requirement that the citizens and businesses that any justice system serves have confidence in that system,” he said.

“There are some decisions – like for example intensely personal decisions relating to the welfare of children – that humans are unlikely ever to accept being decided by machines.” 

“But in other kinds of less intensely personal disputes, such as commercial and compensation disputes, parties may come to have confidence in machine made decisions more quickly than many might expect,” Vos stated. 

Addressing the danger of letting computer programs make important decisions, rather than using them as tools to help humans decide, Gary Smith, author of the book The AI Delusion, warned: “The real danger is that we think computers are smarter than us and we let them make important decisions that should not be made.”

Moreover, glitches that can be very hard to spot in increasingly complex AI models could have catastrophic results if AI programs are left to make important decisions on their own.

READ: The simple reason why AI will never be smarter than human beings