By now you’ve probably heard plenty about Artificial Intelligence (AI). You may have even grown tired of hearing about it. You already know that it has myriad and seemingly endless applications, and that its use is growing at an exponential rate around the world. You’re aware that it’s already revolutionized many industries and sectors – including the practice of law. You may have even given some thought to how using AI might facilitate your own law practice, and especially how it might benefit your clients.
But what you may not have given much (if any) consideration to are the ethical duties and professional responsibilities that spring from the use of AI, both in your own day-to-day practice and in connection with your representation of those same clients.
In this first instalment of a multi-part series, we will take a look at some of the issues that have developed on this front, beginning with a glimpse at some of the broader principles adopted by the governing body for the province’s lawyers, namely the Law Society of Ontario (LSO).
What are My General Obligations When Using AI?
The answer to this is easy to state, but not straightforward to apply: We already know that the conduct of every Ontario lawyer is subject to strict professional responsibility rules and ethical obligations, which are overseen and enforced by the LSO. But when those various rules are applied to AI, the path is uncharted, and can get muddy very quickly.
By now we’ve all heard the story of the pair of New York lawyers who were sanctioned by the court for submitting a legal brief containing six fictitious cases that had been “hallucinated” by ChatGPT. Closer to home, a B.C. family lawyer named Chong Ke likewise ran afoul of that province’s law society for submitting materials that contained two bogus decisions conjured up by ChatGPT. Although the cases were never included in the lawyer’s formal arguments (she withdrew them once their fake nature was uncovered), she was not only reprimanded, but also ordered to personally compensate opposing counsel for the added time it took them to uncover the falsehood. As the judge noted in his ruling in the parties’ case, “Competence in the selection and use of any technology tools, including those powered by AI, is critical.”
The Law Society of Ontario’s Approach to AI and Professional Responsibility
This reference to “competence” brings us to the question of how the use of AI is regulated. As you know, the LSO’s expectations are clearly set out in its Rules of Professional Conduct, which provide guidance on how all lawyers should act in various situations. They are based on principles of integrity, confidentiality, competence, and professionalism.
Although we’ll cover the specific governing Rules in Part II of this series, it’s important to note that the LSO has formally recognized the potential for AI use in legal practice – but it has also warned of the risks. The LSO has taken clear steps to address the professional responsibility of lawyers when using generative AI, using an approach based on the following principles:
- The use of AI must be consistent with a lawyer’s professional obligations.
- Lawyers must be competent to use AI in legal practice.
- Lawyers must understand the limitations of AI and be transparent with clients about its use.
- Lawyers must take steps to ensure that AI is used ethically and in a non-discriminatory manner.
The LSO also offers extensive guidance to lawyers on how to comply with their professional responsibility obligations when using generative AI. Those guidelines cover topics such as:
- How to ensure that AI is used in a competent and ethical manner, and in keeping with professional responsibilities.
- How to address the potential for bias in AI algorithms.
- How to protect client confidentiality when using AI.
- How to ensure transparency when using AI.
In April 2024, the LSO’s Futures Committee also issued a White Paper, which provides a comprehensive overview of AI concepts, along with what it terms “guidance and considerations for licensees on how the professional conduct rules apply to the delivery of legal services empowered by generative AI.”
Similarly, the Federation of Law Societies of Canada, which is the national association of the 14 provincial and territorial law societies across the country, has issued its own guidelines for the use of AI in legal practice, including the following:
- Competence. Lawyers must ensure that they are competent to use AI in legal practice. This means that they must understand the technology, how it works, and its limitations. Lawyers must also ensure that they have the necessary skills and knowledge to use AI effectively and ethically.
- Confidentiality. Lawyers have a duty to maintain client confidentiality. When using AI in legal practice, lawyers must take steps to ensure that client confidentiality is protected. This may include limiting access to data and ensuring that data is encrypted and secured.
- Bias. AI algorithms can be biased, leading to unfair treatment and discrimination. Lawyers must be aware of the potential for bias in AI algorithms and take steps to ensure that AI is used in an ethical and non-discriminatory manner.
- Transparency. Lawyers must be transparent about the use of AI in legal practice. This means they must provide clients with information about how AI is being used and how it may impact their case. Lawyers must also be transparent about the limitations of AI and ensure that clients understand that AI assists lawyers, but does not replace the lawyer’s judgment.
So it’s clear that when it comes to the actual use of AI in daily legal practice involving client files, lawyers have many broad principles to keep in mind. They must also consider the wider implications of AI across all aspects of their legal practice, to ensure that its use supports ethical and fair processes.
But what does all that mean to a typical lawyer, on a more practical day-to-day basis? What are the specific Rules of Professional Conduct that lawyers must be alert to? And what are the common pitfalls to be wary of?
In Part 2 of this series, we’ll cover those topics, along with some practical points on matters such as the extent of disclosure about AI use that should be given to clients.
*This article was originally published by Russell Alexander in Law360 on August 30, 2024.