
Ethics of AI ChatGPT

Presented by Cari Sheehan

(12,591 Ratings)
LexVid


Course Description

Length: 1h 0min    Published: 3/12/2024    
Do you use ChatGPT in your practice? Have you thought about which ethical rules apply? This seminar explores the ethical risks lawyers face when using AI, particularly ChatGPT, including Rules of Professional Conduct 1.1 (Competence), 1.6 (Confidentiality), 1.5 (Fees), 5.2 and 5.3 (Supervision), and much more. It also includes a demonstration of ChatGPT's capabilities. This is a seminar you will not want to miss if you intend to stay competent on this emerging technology.
Learning Objectives
* Understand how ChatGPT is being used in the practice of law
* Know what ethical issues emerge with the use of ChatGPT
* Explore how ChatGPT can impact your duty of competence
Read the course transcript.

Speaker Q&A

Question
Hello Cari. Thank you for this overview of ChatGPT. I just have one question. ChatGPT has an option in the Data Control settings to stop feeding your information back to the AI model, which should keep the information you put in and get back confidential and available only to you. Is there a reason you did not bring it up in the course? Does this solve the confidentiality problem? I still would not input clients' names; however, I think that if I change my Data Control settings and turn the "Improve the Model for Everyone" option off, the information I input and get back should remain confidential. I'd like to hear your thoughts on this. Thank you! Marina
- MarinaB
Answer
Great question. Unfortunately, no, it does not meet our requirements, because the platform still stores our information and we have no control over it. We need full control over our information at all times and need to know what is done with it. That setting would not, by itself, solve the issue with the platform.
- Cari Sheehan
Question
Is there an AI program that is free and that you recommend as the best?
- FernandoC
Answer
No. So long as AI programs are open to the public, they rarely, if ever, comply with Rule 1.6. You will need to vet each program, and usually pay a subscription, to get a closed-circuit AI program that will not share or store your information.
- Cari Sheehan
Question
Is an attorney required to disclose the use of ChatGPT to the client?
- GaryB
Answer
Yes. Under ABA Formal Opinion 512, an attorney needs to seek specific consent from a client prior to using any GenAI. The consent should be informed and cite the risks and benefits of the GenAI the attorney plans to use.
- Cari Sheehan
Question
Cari, first let me thank you for an extraordinary program. I must say you scared the living daylights out of me by awakening me to the ethical dangers of ChatGPT (and AI technology in general), particularly with respect to the sharing of client confidences! I was wondering about the use of aliases to avoid problems of sharing clients' confidential information. A hypothetical for you: Karen Jones and Bob Smith come to me to form a business entity, which they wish to call Euphoria Enterprises. I determine that an LLC would be the most appropriate choice of entity. Other problems/limitations of ChatGPT aside, what if I were to write a ChatGPT prompt asking that it prepare an operating agreement for two people with a fictitious entity name, fictitious members' names, and bogus addresses, with the intention to modify it to contain the correct information for the clients and thus to use it as the operating agreement for Euphoria Enterprises, LLC?
- WilliamH
Answer
Thank you for your kind note; I'm glad the program was useful, even if a bit unsettling. A healthy level of concern is exactly where most ethics guidance is trying to land lawyers right now. Your hypothetical is a good one, and it's a question I hear often. Using fictitious names, entity titles, and addresses does meaningfully reduce the risk of disclosing client confidences, and that approach is generally far safer than inputting real identifying information into a public AI tool. From a confidentiality perspective, prompting an AI system with anonymized or fictionalized facts is one of the recognized risk-mitigation strategies.

That said, a few important cautions remain.

First, even when names and addresses are fictional, lawyers must be careful not to include other information that could indirectly identify the client or their matter, such as highly specific deal terms, uncommon structures, sensitive business strategies, or unique factual circumstances. Confidentiality is not limited to names alone.

Second, the output should be treated as a starting point only, much like a generic form pulled from a practice guide. You would still need to independently review, revise, and exercise professional judgment to ensure the operating agreement complies with applicable state law, reflects the clients' actual intentions, and does not omit or misstate material terms. Over-reliance on AI-generated drafting without verification raises competence and supervision concerns.

Third, while anonymization reduces confidentiality risk, it does not eliminate other ethical obligations, particularly the duty of competence. If the AI-generated agreement contains errors, outdated law, or inappropriate provisions, responsibility for those deficiencies rests entirely with the lawyer, not the tool.

Finally, firm policies and client expectations matter. Some firms restrict the use of public AI tools altogether, and some clients may reasonably expect disclosure if AI is used in a material way in drafting core documents, even if anonymized. Those considerations should factor into the decision.

So, in short: your approach is far more defensible than using real client information, and many lawyers use a similar technique. But it should be paired with careful judgment, rigorous review, and compliance with firm policy and client expectations. AI can assist the process, but it cannot replace the lawyer's role.
- Cari Sheehan
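
To make the alias approach above concrete, here is a minimal Python sketch of one way identifying details might be swapped for placeholders before a prompt goes to a public AI tool, then restored in the returned draft. The names, alias table, and helper functions are illustrative assumptions added for this page, not part of the course, and simple string substitution is not a vetted redaction tool.

```python
# Illustrative sketch only: substitute client identifiers with fictitious
# placeholders before prompting a public AI tool, then restore them in the
# returned draft. This does NOT by itself guarantee confidentiality, since
# indirect identifiers (deal terms, unusual structures) can still reveal a
# client. All names below are hypothetical examples from the Q&A above.

ALIASES = {
    "Euphoria Enterprises, LLC": "Acme Ventures, LLC",
    "Karen Jones": "Member One",
    "Bob Smith": "Member Two",
}

def anonymize(text: str) -> str:
    """Replace real identifiers with fictitious placeholders."""
    for real, alias in ALIASES.items():
        text = text.replace(real, alias)
    return text

def restore(text: str) -> str:
    """Swap placeholders back to the real identifiers in the AI output."""
    for real, alias in ALIASES.items():
        text = text.replace(alias, real)
    return text

prompt = anonymize(
    "Draft an LLC operating agreement for Euphoria Enterprises, LLC "
    "with two members, Karen Jones and Bob Smith."
)
# draft = some_ai_tool(prompt)  # placeholder for whatever vetted tool is used
# final = restore(draft)        # the lawyer still reviews and revises
```

Note that plain string replacement misses variants such as "Ms. Jones" or misspellings, which is one more reason the answer above stresses that anonymization alone does not discharge the duties of confidentiality and competence.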
Question
This is a very informative presentation, especially with regard to the rules of professional conduct. Thanks! Since this presentation was made in early 2024, some state bars (like California) have issued formal ethics opinions suggesting that "technological competence" now includes a duty to understand the risks associated with AI. Given the rapid advancement of AI, does a lawyer today risk an ethical violation simply by NOT using AI if it has become the standard for efficient practice? Or is the risk still primarily associated with misusing it? Also, two years ago, the primary concern was "leaking" data into an AI system's training set. Today, many firms use "walled-garden" or enterprise versions of ChatGPT. Do such systems change a lawyer's duty of disclosure to clients?
- PaulS
Answer
1. Does a lawyer risk an ethical violation by not using AI if it becomes the standard for efficient practice?

At this point, the ethical risk remains far more closely associated with misuse of AI rather than non-use. While some jurisdictions (including California) have clarified that “technological competence” includes understanding the risks and limitations of AI, none have gone so far as to require affirmative use of AI tools. The duty is one of competence, not adoption. Lawyers must understand enough about AI to evaluate whether its use would be appropriate, safe, and accurate for a given task, but exercising professional judgment to decline its use does not, by itself, create an ethical violation. If that changes, it will likely be through incremental guidance tied to specific practice contexts rather than a blanket obligation to use AI.

2. Is the ethical risk still primarily tied to misuse of AI?

Yes. The core risks remain the same: confidentiality, supervision, accuracy, bias, over-reliance, etc. Using AI without understanding how it works, failing to verify outputs, inputting confidential information into an unsecured system, or allowing AI to substitute for professional judgment are where lawyers continue to face the greatest exposure. In other words, the ethical concern is not whether AI is used, but how it is used.

3. Do “walled-garden” or enterprise AI systems change a lawyer's duty of disclosure to clients?

They can affect the analysis, but they do not eliminate the duty altogether. Secure, enterprise-level AI tools that do not train on user data meaningfully reduce confidentiality risks, which is an important development from where the conversation stood two years ago. That said, disclosure obligations are still fact-specific. If AI use is material to the representation (for example, if it meaningfully affects how legal services are performed, billing practices, or the handling of sensitive data), client disclosure may still be required or, at minimum, advisable. Transparency remains best practice, even where the underlying technology is more secure.
- Cari Sheehan

Presented By:

Cari Sheehan

Indianapolis, IN

(812) 239-4187

csheehan@taftlaw.com

Featured Reviews

"Excellent, informative, timely. She rocks!"

   Dean P

"Very good course; strong presentation. This course made me think beyond the presented material, and made me consider how to help clients with this information as well. Great job!"

   Tracy W