In a recent blog post, we discussed several instances where Canadian lawyers – and in one case, a Canadian judge – were accused of misusing Artificial Intelligence (AI) tools like ChatGPT.
The lawyers had used AI to generate legal arguments and court filings that turned out to contain “hallucinations” – references to fake cases, or to quoted passages that did not actually exist. The judge, for his part, is accused of having used AI to write portions of his ruling in a fraud case, as it referred to cases and evidence that were not real.
To be clear, this does not mean the use of AI should be banned outright. It can be a useful tool – much like spellcheck, precedent collections, online caselaw databases, and Google searches. But above all, lawyers are mandated by their regulator to operate under a detailed framework of professional and ethical duties that govern how legal work is done – whether that work involves traditional research methods or new tools like AI.
In Ontario, this framework comes from the Law Society of Ontario, which imposes binding obligations through its Rules of Professional Conduct. These rules require all lawyers to be competent, to provide conscientious and thorough service, and to ensure that all legal work – especially legal research and writing – is accurate and reliable. Lawyers also cannot delegate responsibility for their work to a junior, a template found on the internet, or a new tech tool. The duty remains personal and non-transferable.
In Canada, judges are not governed by that same set of rules. But they are held to equally stringent standards through judicial ethical principles that, by implication, cover any use of AI. Their role requires:
- Independence
- Impartiality, and
- Careful reasoning based solely on the evidence and law properly before the court.
For lawyers and judges alike, any use of AI must be approached carefully, critically, and with integrity.
Our Firm-Wide Approach
Here at Russell Alexander Collaborative Family Lawyers, we take our duties to clients and the court very seriously, and are constantly mindful of our many professional responsibilities. These include a duty of candour, which demands honesty and transparency when dealing with the court. They also encompass a duty of competence, which requires us to understand the capabilities – and the limitations – of any tool we use, including AI.
We also know that falling short of those duties can have drastic consequences: it can not only jeopardize a client’s legal matter, but can also expose us to individual allegations of professional misconduct as lawyers, especially where the court has been misled.
This means that whenever we do legal research and prepare documents for clients, we are especially conscientious about our duty to:
- Independently verify all authorities before relying on them
- Confirm that cited cases exist and support the stated proposition
- Ensure the law is current and has not been overturned or limited
- Review original decisions rather than relying solely on summaries
- Scrutinize quotations for accuracy and completeness
- Apply independent legal judgment to all research results
- Ensure AI-assisted content reflects the client’s actual facts
- Identify and resolve gaps or inconsistencies in the analysis
- Protect client confidentiality when using any digital tools
- Avoid misleading the court, including by omission or overstatement
It’s a long list. And if we use AI tools at all (typically by agreement, and to save the client costs), we know that the ultimate responsibility for the accuracy and substantive integrity of the final work product rests solely with us.
For any additional questions or support needed, please visit FamilyLLB.com or contact us at Russell Alexander Collaborative Family Lawyers.
