In Parts 1 and 2 of this three-Part series, we covered the basics of how the use of Artificial Intelligence (AI) in the legal profession is being governed – at least up to this point in time.
First, we looked at the general AI-related obligations imposed by the Law Society of Ontario (LSO), and the various guidelines that the regulator has offered so far in this nascent area. Next, we explored, Rule by Rule, the specific provisions of the LSO’s Rules of Professional Conduct that are relevant to the use of AI in legal practice.
So far, so good. As a lawyer, you now have a good start: a principle-based roadmap for how to approach the use of AI in your law practice. You may even have referred to the LSO’s own “Quick-start Checklist” on how to integrate AI processes into your legal workflow.
So now you’re ready to go. But you still have questions.
That’s what this third and final Part of the series will cover – the Frequently Asked Questions. Here are some of the potential questions you may be asking yourself when dealing with clients, opposing counsel, and the courts on a day-to-day basis. (Note: These answers are extrapolated from the LSO materials; make sure to do your own verification).
Q. Can I use AI tools for my client intake and initial consultations?
- Yes. But you should review and confirm any information collected, before you make any legal decisions or provide advice about your new client’s specific matter.
Q. Can I use AI to provide my clients with legal advice about their case?
- It depends, and only with good judgment, moderation, and – above all – careful and independent review. The LSO expects lawyers to provide competent advice, always. While this may incorporate AI in some respects, it should not be viewed as a substitute; lawyers remain responsible for ensuring the advice is accurate and reliable.
With that said, some legal tasks simply cannot – or should not – be delegated to AI systems. Tasks such as giving tailored legal advice based on a described fact scenario require the experienced input and competence of a trained lawyer, and cannot be outsourced. There is too great a risk that AI will offer up incorrect or inappropriate legal advice, or create other confusion.
Q. What about a simple opinion letter to my client? Can I use AI for that?
- Yes, but only with a thorough and detailed review of the results, together with an independent legal assessment of the conclusions and the reasoning behind them. Remember that AI tools are designed to be an aid – not a substitute – for professional assessment and legal judgment in your client’s matter. As always, you are obliged to maintain the highest standards of integrity, confidentiality, competence, and professionalism – and using AI does not change that.
Q. Can I rely on AI tools for my legal research?
- Yes, but again this should not be a standalone solution. You must always conduct stringent double-checks on the AI work product you generate, and should augment it with your own legal research using traditional methods.
As discussed in earlier Parts of this article series, some lawyers in both Canada and the U.S. have famously fallen afoul of their professional obligations in this regard. Even in those Canadian jurisdictions where the courts do not require lawyers to attest to having double-checked their AI-produced work, it is vital to do this kind of verification.
Q. Can I use AI to help draft my legal documents?
- Yes. However, you remain responsible for reviewing and finalizing the resulting documents. They must: 1) meet legal standards; and 2) be tailored to the specific needs of your client.
Q. Do I have to tell a client that I used AI to draft my materials, for court or otherwise?
- The short (and safe) answer is “Yes”. Admittedly, there is currently no strict requirement on lawyers that they disclose to the client each and every situation where they have used AI. Rather, it’s a case-by-case judgment call, and at this early stage it tends to be a gray area.
However, as a lawyer you must always be transparent with clients – and this includes how you use AI in your practice. Ideally, you should not only provide clients with information about how AI is being used in their matter, but also tell them how it may impact their case.
Fortunately, the LSO has provided a White Paper on Licensee Use of Generative Artificial Intelligence, and it serves as an excellent guide in this regard. It provides input on what lawyers ought to consider when choosing whether to disclose their AI use to clients in the course of providing legal services.
According to the White Paper, there are many factors to be considered, including:
- Will the fact that AI was used necessarily be disclosed publicly? (e.g. in situations where the relevant court’s Practice Directions already mandate such disclosure).
- Would the client reasonably expect that the material being prepared would actually be prepared by the lawyer, rather than with the help of AI?
- Does the use of AI in the circumstances require the inputting of the client’s personal or proprietary information?
- Is there any reputational or other form of risk to the client that might arise from the lawyer’s use of AI?
In short: There will be some scenarios where a client would very reasonably expect you to disclose your AI use to them. In others, you may be perfectly entitled not to make the disclosure.
Q. What about the judge? Must I tell the court that I used AI to help with drafting?
- Prudence is always the best course here. In the Ontario justice system, there is currently no Practice Direction or mandate from the court absolutely requiring lawyers to disclose their AI use in court-filed materials. However, this may change.
In Manitoba, for example, the Chief Justice of the Court has issued a Practice Direction obliging lawyers and self-represented litigants in that province to disclose if they have used AI to prepare court documents in the Court of King’s Bench. Their court submissions must include a statement on whether AI was used in their preparation, and precisely how it was applied. Likewise, the Federal Court has issued a Notice to the Parties and the Profession on the use of Artificial Intelligence in Court Proceedings.
If you are an Ontario lawyer, erring on the side of disclosure is the safest course, but again you should determine this for yourself on a case-by-case basis.
Q. Are there any specific LSO rules about using AI to communicate with opposing counsel?
- This simply falls under the specific Rules of Professional Conduct dealing with how you should communicate with opposing counsel in general. Using AI does not change the level of honesty, integrity and courtesy that you must display. Further, you should never use AI tools to deceive or mislead.
Q. If I use AI for various tasks in my law practice, can I still bill my clients at my regular rates for that work?
- No. Even if you are using AI to optimize your efficiency and productivity, you may still only charge for the time you actually spend on a file. For example, you cannot charge a full two hours for drafting a document, when it took half that time because you used AI to get a head start. (It’s the same principle as for any productivity tool: You do not charge a client four hours of billed time, because that is what it would have taken you to write out, long-hand with a pencil, a document that took you less than an hour using a word processor).
However, this does not mean that you cannot use alternative fee arrangements that have built-in adjustments for the use of AI technology, just like any other tool you may use. As long as the arrangement is fair and reasonable – and made with the consent of the client – it can still comply with your professional obligations around transparent and reasonable billing.
Q. Is it ethical to use AI tools to predict case outcomes or jury behavior?
- Yes, but use the results of the analysis with caution. Predictive analytics can be useful, but you must use it responsibly, and must never present it to your client as a “guarantee.” Always consider the ethical implications of relying on such tools.
Q. What happens if my use of AI is tantamount to a breach of the Rules of Professional Conduct?
- As with any other breach or act of non-compliance, the result could be disciplinary action, including fines, suspension, or disbarment. You must always ensure that you comply strictly with your ethical and professional standards when using AI tools – or any other technology, for that matter.
The Takeaway
If there is one pervading theme in this three-Part series, it’s that AI is a valuable tool for lawyers – but it is also one that must be used with extreme caution, and with one eye kept firmly on the professional obligations that are a key part of a lawyer’s role.
Although the LSO has made strides in addressing the nuanced complexities of AI, we are only at the beginning stages of the journey. As its use grows by leaps and bounds in all industries, the LSO will have to try to keep pace – and so will lawyers, in connection with the growing list of new scenarios, and the many burgeoning opportunities that AI technology provides.
*Originally published in Law360 on September 5th, 2024.