Can Artificial Intelligence Replace Judges? 

Written by Russell Alexander ria@russellalexander.com / (905) 655-6335

In our last Blog, we talked about how the lawyer-client relationship cannot be truly replaced by Artificial Intelligence (AI).

In this final installment in our three-part series, we turn our focus to another intriguing question:  Is AI capable of replacing judges?

Many would argue “Yes”. Erik Brynjolfsson and Andrew McAfee observe in the Harvard Business Review that:

[we] sometimes hear “Artificial intelligence will never be good at assessing emotional, crafty, sly, inconsistent human beings — it’s too rigid and impersonal for that.” We don’t agree. [Machine Learning] ML systems like those at Affectiva are already at or beyond human-level performance in discerning a person’s emotional state on the basis of tone of voice or facial expression. Other systems can infer when even the world’s best poker players are bluffing well enough to beat them at the amazingly complex game Heads-up No-Limit Texas Hold’em. Reading people accurately is subtle work, but it’s not magic. It requires perception and cognition — exactly the areas in which ML is currently strong and getting stronger all the time.

There are some signs that AI could do a better job than many judges.  Judicial “discretion” allows two different judges, faced with the same set of facts, to arrive at two different, and at times seemingly contradictory, results.  Examples of injustice in our system, some would argue, are sadly far too common.  AI, proponents say, would bring increased certainty and predictability to judicial outcomes.

However, our answer to the question “Is AI capable of replacing judges?” is a “No” (at least not yet).

True, AI is being used to replace or supplement human judgment in some areas.  For example, it is being used in medicine to review patients’ brain scans; in that narrow role, it has proven to do a better job, with greater accuracy, than human doctors performing the same kind of medical evaluation.

So – as the argument goes – if AI can surpass the ability of humans to scrutinize and assess data in some areas, why can’t it replace judges too?

It’s Not a Numbers Game

From the perspective of Family litigants especially, it’s a compelling thought that in some futuristic “someday”, divorce will be a push-button affair:   Spouses simply log in to an AI-fuelled justice website, enter some personal details and data about their dispute, and a few mouse-clicks later … Presto!  Out comes a legally-fair, cheap, and binding decision in their divorce.

But for those of us who are in the trenches of the Family justice system, we know that AI will never do what judges do on a day-to-day basis.   That’s mainly because there are esoteric, discretion-based “human” elements to the law that AI systems will always struggle to incorporate.

These elements arise from how the Canadian justice system actually works.  Admittedly, at its core, the law is immutably rational.  But there are other simple truths that cannot be overlooked, including:

  • Not all legal concepts and principles can be quantified in data;
  • Legal principles are also tied to complex issues of ethics, morality, and philosophy;
  • Different areas of the law can apply different approaches to legal decision-making; and
  • The law aims to address human needs, and to serve the public good, above all.

For a machine, these are difficult concepts to grasp.

[Will artificial intelligence (AI) replace Family Lawyers? In this special series, “30 Days of AI”, we examine the evolution of AI and its potential impact on clients, family lawyers and legal commentary.  By publishing legal content generated by AI we aim to gauge its effectiveness through user experience and commentary. It will be interesting to test the AI and determine whether the answers and commentary it generates remain static or evolve over time.]

Certainly AI-driven machines can try to put mathematical figures on concepts like probative value, prejudicial effect, credibility, and the balance of probabilities.  They can be fed an algorithm or decision-tree for “triaging” certain types of straightforward claims. Or – based on data in the form of case precedent and actuarial information – they might be used to guide outcomes on simple mathematical questions, like estimating the damages or costs associated with a personal injury.
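To make the “triaging” idea concrete, here is a minimal sketch of what such a decision-tree might look like in code. It is purely illustrative: the claim fields, the $35,000 threshold, and the routing categories are hypothetical examples, not rules drawn from any actual court process.

    from dataclasses import dataclass

    @dataclass
    class Claim:
        amount: float               # dollar value in dispute (hypothetical field)
        facts_disputed: bool        # do the parties disagree on the facts?
        credibility_at_issue: bool  # does the outcome turn on whom to believe?

    def triage(claim: Claim) -> str:
        """Route a claim down a simple, rule-based decision tree."""
        if claim.credibility_at_issue:
            # Assessing credibility is exactly where machines fall short.
            return "refer to a judge"
        if claim.facts_disputed:
            return "refer to a case conference"
        if claim.amount <= 35_000:
            # Undisputed, low-value claims might be resolved formulaically.
            return "eligible for automated assessment"
        return "refer to a judge"

    print(triage(Claim(amount=12_000, facts_disputed=False, credibility_at_issue=False)))
    # prints: eligible for automated assessment

Note how quickly the tree bails out to a human: the moment credibility or disputed facts enter the picture, there is nothing left for the numbers to decide – which is precisely the point of the sections that follow.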

In other words: AI machines do numbers well.  But when it comes to judging actual cases involving real people, and bringing them to a legally-fair and compassionate conclusion, that fixation on data is the most prominent flaw in AI systems generally.

The Monkeywrench in the Machine

For one thing, if the data that is fed into the AI system is biased, or incomplete, then the results will be skewed accordingly.  But more importantly, once an AI system has identified the facts and governing law, calculated the probabilities, and has done its predictive analysis – essentially, “crunched the numbers” – the end result is merely a data-driven prediction, not a judgment.

Rendering a fair, binding judgment requires the application of a judicial reasoning process – one that simply cannot be approximated by a machine.  Computer engineers might be able to write an algorithm that covers off statistics- and probability-based analysis, but the machine will grind to a halt when it comes to a true assessment of more “human” characteristics – like credibility.

Family Law is Steeped in Discretion – Not Data

AI systems also do a poor job of discerning the subtleties of a legal case and applying discretion where warranted.  And this is precisely what judges do best – when asked to rule on a matter or issue, they exercise their discretion based on facts, on an intuitive reading of the governing law, and on an assessment of the parties’ credibility.

Critically, those decisions by judges are also informed by public policy concerns, which means they are accountable to the public in a way that machine-driven AI is not.

This is particularly important in Family Law cases, where legislation like the Divorce Act and the Family Law Act establishes a framework to guide judges’ decision-making, but also grants them enormous discretion in determining the outcome that will foster the “best interests of the child”, for example.  No matter how advanced, AI systems simply cannot do this kind of nuanced thinking.

Depriving Litigants of their “Day in Court”

Finally, AI systems also cannot give people the “day in court” experience that they deserve, or a sense that their matter was given full human consideration.  When Family litigants go before a judge, they want a customized determination that is appropriate to their case, and one that includes an element of compassion, where warranted.  They do not want – or deserve – an outcome that is generated by a machine, based on prior similar cases.

That is simply not what the Canadian justice system is all about.

Stay in Touch

Keep learning about the latest issues in Ontario family law! Subscribe to our newsletter, have our latest articles delivered to your inbox, or listen to our podcast, Family Law Now.

Be sure to find out more about the "new normal", by visiting our Covid-19 and Divorce Information Centre.

About the author

Russell Alexander

Russell Alexander is the Founder & Senior Partner of Russell Alexander Collaborative Family Lawyers.