In the recent Ontario Superior Court decision, Ko v. Li (2025 ONSC 2766), the court addressed the risks of lawyers relying on artificial intelligence (AI) to prepare legal submissions. Justice Myers' decision offers an important lesson about the professional obligations of lawyers, particularly the responsibility to ensure the accuracy of all legal citations and references, even when using new technology.
AI-Generated Errors in Legal Documents
The central issue arose from the legal submissions of the applicant's counsel, Ms. Jisuh Lee of ML Lawyers. Her factum cited two cases: Alam v. Shah, 2023 ONSC 1772, and DaCosta v. DaCosta, 2010 ONSC 2178. On examination, the court could not locate either case in any recognized legal database, and the absence of pinpoint page and paragraph references made the citations even harder to verify.
Justice Myers concluded that these nonexistent cases were likely the product of AI "hallucinations," a phenomenon in which AI tools fabricate plausible but entirely fictitious information. Such inaccuracies in court submissions undermine the integrity of the judicial process and raise serious ethical concerns about lawyers' use of AI technology.
Court’s Emphasis on Lawyer Accountability
The court underscored that legal professionals must thoroughly verify their materials, regardless of whether AI or other technological tools were used to prepare them. Justice Myers stressed that errors of this nature cannot be taken lightly, as they directly undermine the credibility and efficiency of court proceedings.
Although the court chose not to impose immediate disciplinary measures in this instance, the ruling made clear that repeated incidents could lead to serious professional repercussions for practitioners.
Navigating AI Responsibly
This ruling is a reminder that, while AI can substantially streamline certain legal tasks, it is not infallible. Lawyers must diligently cross-check and verify AI-generated information against authoritative legal databases and resources. The integration of AI into legal practice demands careful scrutiny and constant vigilance from practitioners.
Additionally, the ruling suggests a need for legal institutions and professional regulators to consider developing clear guidelines or training resources. Such measures could help lawyers effectively and ethically integrate AI tools into their practice, preventing similar occurrences in the future.
The case of Ko v. Li highlights critical ethical and professional responsibilities associated with emerging technology in legal practice. Lawyers must balance innovation and efficiency gains from AI tools with their fundamental duty to ensure accuracy and reliability in all submissions to the court.
By maintaining rigorous standards of verification and responsibly employing AI, legal professionals can leverage technological advancements without compromising the essential integrity of the judicial system.
For further details, see the full decision: Ko v. Li, 2025 ONSC 2766.