Generative to Agentic: The Legal Ethics of AI Evolution
Artificial intelligence has moved well beyond the theoretical. It is now an active force in legal practice, and the profession is still catching up. Generative to Agentic: The Legal Ethics of AI Evolution, a Lawline course presented by Khasim Lockhart of Frankfurt Kurnit Klein & Selz, offers attorneys a forward-looking framework for navigating the ethical landscape of both generative and agentic AI, the two dominant forms of AI reshaping how legal work gets done.
Whether you are already integrating AI into your practice or still evaluating the risks, this course provides the tools to stay on the right side of your professional obligations.
What Is the Difference Between Generative and Agentic AI?
Not all AI tools are created equal, and understanding the distinction matters for lawyers.
Generative AI refers to the tools most attorneys have already encountered: systems like ChatGPT or Microsoft Copilot that respond to prompts by generating text, summaries, or analysis. These tools assist with discrete tasks: drafting a motion, summarizing a deposition, or researching a legal issue. The attorney remains in direct control of the process, reviewing and acting on whatever the system produces.
Agentic AI is a different and more consequential development. These systems are capable of autonomous planning, decision-making, and execution across multiple steps, without requiring human input at each stage. An agentic AI might not just draft a document; it might gather information, analyze it, draft a response, send communications, and update records, all with minimal human intervention. That autonomy is precisely what makes agentic AI so powerful, but also so legally fraught.
How Does AI Intersect With the Model Rules of Professional Conduct?
The Model Rules were not written with AI in mind, but they apply regardless. This course walks through the five rules most directly implicated by AI use in legal practice.
Rule 1.1 (Competence) requires attorneys to keep pace with changes in the law and practice, including the benefits and risks of relevant technology. Using AI tools without understanding how they work, or failing to verify their output, may fall short of the competence standard.
Rule 1.6 (Confidentiality) becomes particularly sensitive when client information is entered into third-party AI platforms. Many generative AI tools are trained on or retain user inputs, raising real concerns about whether confidential client data is being exposed.
Rule 5.3 (Supervision of Nonlawyers) has been interpreted to extend to AI tools. If an attorney relies on AI-generated work product without adequate review, the professional responsibility for any errors still rests with the attorney.
Rule 5.5 (Unauthorized Practice of Law) asks a harder question: at what point does an AI system, particularly an agentic one capable of independent legal reasoning and action, cross the line into practicing law, and what is the attorney's role in that determination?
Rule 3.3 (Candor Toward the Tribunal) is the one that has already generated the most high-profile consequences. AI systems can and do hallucinate, producing citations to cases that do not exist, misrepresenting holdings, or generating fabricated quotations. Attorneys who submit AI-generated content without verification risk violating their duty of candor, as several courts have made clear in recent years.
What Are the Most Common Ethical Risks of AI in Legal Practice?
There are three categories of risk that attorneys should be actively managing.
Hallucinated output is the most immediately dangerous. Generative AI systems produce text that sounds authoritative even when it is factually wrong. In legal contexts, this can mean fabricated case citations, incorrect statutory references, or invented procedural history. The risk is not that AI gets things wrong occasionally; it is that it does so convincingly, making errors easy to miss on a quick review.
Data privacy concerns arise whenever client information enters an AI system. Attorneys must understand whether the platforms they use store, share, or train on inputs, and whether their use agreements adequately protect confidential information.
Inadequate oversight is the systemic risk. As AI tools become faster and more capable, there is a natural tendency to rely on them more heavily and review them less critically. For agentic systems in particular, the speed and autonomy of the tool can outpace an attorney's ability to meaningfully supervise the work.
Does AI Use Raise Unauthorized Practice of Law Concerns?
This is one of the more unsettled questions this course addresses. With generative AI, the concern is largely hypothetical: the attorney is still the one making decisions and applying judgment. But with agentic AI, the calculus changes. A system that autonomously researches an issue, drafts a legal strategy, communicates with counterparties, and executes steps in a representation is doing things that look a lot like practicing law.
The question of whether agentic AI implicates Rule 5.5 does not have a definitive answer yet, but attorneys who deploy these tools need to think carefully about where the line is and how to ensure their own independent judgment remains genuinely in the loop.
What Does Responsible AI Use Look Like in Practice?
This course offers practical guidance rather than just theoretical warnings. A few core principles emerge from the material.
Verification is non-negotiable. Any AI-generated content that will be used in a filing, brief, or client communication must be independently reviewed and confirmed. This is especially true for citations and case holdings.
Platform selection matters. Before using any AI tool with client data, attorneys should review the platform's terms of service, data retention policies, and any available enterprise agreements. Not all tools are appropriate for all uses.
Supervision policies should be explicit. Firms that use AI tools should have written policies that define how AI output is reviewed, who is responsible for verification, and how agentic tools in particular are overseen.
Documentation protects attorneys. Keeping records of how AI tools were used in a matter, what was reviewed, and what oversight was applied provides a defensible record if questions arise later.
Key Takeaways
- Generative AI assists with discrete tasks; agentic AI acts autonomously across multiple steps, a distinction with significant ethical implications.
- The Model Rules of Professional Conduct apply to AI use, whether or not the rules explicitly mention technology.
- Competence, confidentiality, supervision, candor, and the unauthorized practice of law are all live concerns.
- Hallucinated output remains the most immediate courtroom risk, and several attorneys have already faced sanctions as a result.
- As AI systems become more capable, the burden on attorneys to supervise, verify, and document their use only increases.
For a deeper dive on this topic, watch the full Lawline course Generative to Agentic: The Legal Ethics of AI Evolution presented by Khasim Lockhart of Frankfurt Kurnit Klein & Selz.
The AI Revolution Is Already in Your Courtroom
This course reveals a central truth: the ethical challenges of AI in legal practice are already here. The attorneys navigating this moment most effectively are not necessarily the most tech-savvy; they are the ones who take the time to understand what these tools actually do, ask hard questions about their risks, and refuse to outsource their professional judgment to an algorithm.
Ready to join the thousands of legal professionals who are preparing for the AI-powered future of law? Explore Lawline's comprehensive AI Hub and discover the knowledge that will position your practice at the forefront of legal innovation.