
Human Judgment Required: The Ethics of Practicing Law in the Age of Generative AI

Written by Lawline Staff Writer | Apr 30, 2026

Generative AI has moved from a novelty to a daily fixture in legal practice faster than almost any technology before it. Attorneys are using it to draft documents, summarize materials, conduct research, and manage knowledge across entire firms. And yet, for all its utility, the ethical landscape around these tools remains unsettled, and the stakes for getting it wrong are high.

Human Judgment Required: The Ethics of Practicing Law in the Age of Generative AI, a Lawline course presented by John Koss, Head of Innovation, AI, and Data Consulting at Mintz, offers attorneys a rigorous and practical orientation to what these tools are, what the rules require, and where lawyers have already stumbled. Koss draws on real disciplinary cases, evolving ABA guidance, and the hard-won perspective of someone who works at the intersection of law and technology every day.

Not All AI Is the Same, and the Distinction Matters

One of the most useful contributions of this course is its insistence on precision. AI is not one thing. Koss opens by separating traditional AI, which has been present in legal practice for more than a decade in the form of document review tools, pattern-matching search, and contract extraction, from generative AI, which is what most people mean when they talk about AI today.

Generative AI tools, including the ChatGPTs, Claudes, and Geminis of the world, are trained on vast quantities of text and assemble responses to prompts by predicting which sequences of words are most likely to be useful and coherent. They are not retrieving factual answers from a verified database. They are, in a real sense, making educated guesses based on pattern recognition across enormous datasets. Understanding that distinction is not merely academic: it is the foundation for understanding why these tools fail in the specific ways they do, and why blind reliance on their outputs is a professional responsibility problem, not just a technical one.
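
To make that concrete, here is a deliberately tiny sketch of what "predicting the next word" means: a toy bigram model in Python, not anything resembling a production system. The miniature corpus and every name in it are invented for illustration.

    import random
    from collections import defaultdict

    # A miniature corpus standing in for the vast text a real model trains on.
    corpus = ("the court held that the motion was denied because "
              "the court found that the claim was barred").split()

    # Record which word tends to follow which: a crude stand-in for the
    # statistical patterns a large language model learns at enormous scale.
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def generate(start, length=8):
        """Assemble fluent-looking text by repeatedly predicting a next word."""
        words = [start]
        for _ in range(length):
            options = follows.get(words[-1])
            if not options:
                break
            # Choose a statistically plausible continuation. Nothing here
            # consults a verified database of facts or case law.
            words.append(random.choice(options))
        return " ".join(words)

    print(generate("the"))
    # Possible output: "the court found that the motion was denied because"
    # Fluent and grammatical, but assembled from patterns, not retrieved facts.

Real systems are vastly more sophisticated, but the core move, assembling statistically likely text rather than retrieving verified answers, is the same.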

Koss maps the generative AI landscape into four categories that attorneys encounter in practice.

  • General-purpose tools are the publicly available LLMs that most people use.

  • Legal-specific tools are built on those same underlying models but are layered with additional legal training, workflows, or curated datasets to make them more reliable for research and drafting.

  • Search and experience management tools leverage generative AI to help organizations find and reuse their own work product and institutional knowledge.

  • Legal task-specific tools are purpose-built for particular functions, such as patent prosecution, due diligence, litigation strategy, or trust and estate document generation.

Each category carries its own capabilities and risks, and knowing which type of tool you are working with is itself part of your ethical obligation.

Competence Now Includes Understanding the Tools You Use

The ABA's position on AI is not ambiguous. Under Rule 1.1, Comment 8, attorneys are required to keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology. In 2026, that requirement encompasses generative AI. Formal Opinion 512, issued in 2024, addresses generative AI specifically and makes clear that lawyers must understand not just how to use these tools, but what they are actually doing under the hood.

Koss breaks down what competence looks like in practice across several dimensions. Attorneys must understand the capabilities and limitations of any AI tool they use before deploying it in client work. They must ensure that client data remains confidential, which means understanding what happens to information once it is submitted to a given tool. They must communicate clearly with clients when AI is being used, explain the relevant risks, and obtain consent where required. They must also ensure that their billing practices accurately reflect the efficiency gains that AI provides, because billing a client for eight hours of work when AI reduced it to two raises its own set of problems.

These are not new duties. They are the same duties of competence, communication, and candor that have always governed legal practice, applied to a new context. But that context creates real and specific new risks that the course addresses directly.

When the Privilege Walks Out the Door

Client confidentiality is one of the foundational ethical obligations in law, and generative AI introduces a set of risks to it that many practitioners have not fully considered. Koss uses a recent case from the Southern District of New York, the Hefner matter, to illustrate exactly how those risks materialize.

In that case, a lawyer was inputting client documents and firm materials into a publicly available LLM, uploading them as attachments, asking the tool to summarize and respond, and relying on the outputs. When those inputs and outputs were later discovered, the court found that the attorney-client privilege had been waived as to those communications, because the tool was public-facing and its terms and conditions permitted the provider to use submitted data for training and storage.

The lesson is not to stop using AI. It is to understand what you are using. Publicly available versions of AI tools, even ones that require a subscription, are not the same as enterprise deployments governed by agreements that contractually prohibit training use and create a closed loop in which submitted data is used only for the specific inquiry at hand. Most large law firms and in-house departments now contract directly with AI providers to secure exactly that kind of protection. The responsibility for understanding the difference falls on every attorney who uses these tools.

What Hallucinations Actually Are, and Why They Are Not Going Away

The fabricated citation problem has generated significant press coverage, and Koss does not shy away from it. But he makes an important point that reframes how attorneys should think about hallucinations: they are not a malfunction. They are an expected outcome of how these tools are designed.

Generative AI is trained to be helpful and responsive. It is optimized to answer questions, not to decline them. When a tool is asked for legal authority and cannot reliably locate any, it does not say so. Instead, it assembles something that looks and sounds like legal authority, because assembling plausible text is exactly what it is built to do. A legal citation is, from the model's perspective, just a predictable sequence of words and numbers, easy to pattern-match and reassemble into something that looks correct without being correct.
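
The same point can be made in a few lines of code. In this sketch, the case name, the pattern, and both helper functions are invented for illustration: checking whether text has the shape of a citation is trivial pattern matching, while verifying that the authority exists is a separate step no generative model takes on its own.

    import re

    # An invented citation for illustration only. No such case exists.
    fabricated = "Smith v. Acme Airlines, Inc., 123 F.3d 456 (2d Cir. 1998)"

    # Matching the *shape* of a federal reporter citation is simple pattern
    # work, exactly the kind of word-and-number sequence a model reproduces.
    CITE_SHAPE = re.compile(r".+ v\. .+, \d+ F\.\d?d \d+ \(\d+(?:st|d|th) Cir\. \d{4}\)")

    def looks_like_a_citation(text):
        return bool(CITE_SHAPE.search(text))

    def is_verified_authority(text):
        # Real verification means finding the case in a reporter or a trusted
        # research service. Left unimplemented because no generative model
        # performs this step for you; it remains the attorney's job.
        raise NotImplementedError("check against an authoritative source")

    print(looks_like_a_citation(fabricated))  # True: the form is flawless
    # The form being flawless proves nothing about the case being real.

A fabricated citation can pass every surface-level check and still correspond to no case at all, which is precisely the gap the cases below illustrate.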

The Mata v. Avianca case from the Southern District of New York in 2023 is the reference case here. Counsel submitted a brief containing fabricated AI-generated citations that sounded entirely plausible, involving airlines, personal injury claims, and familiar legal frameworks. When opposing counsel and the court could not locate the cases, counsel had to admit they had used ChatGPT and had not verified the output. The result was Rule 11 sanctions and a very public lesson about the gap between confident-sounding AI output and verified legal authority.

The Noland case, out of a California appeals court, pushes the lesson further. Not only was the submitting counsel criticized for relying on fabricated citations, but opposing counsel was also called out for failing to identify and flag the fabricated authorities. The court treated it as a failure on both sides. The takeaway for practitioners: every attorney involved in a proceeding has an obligation to verify cited authority, not just the one who submitted the brief.

Koss is clear that using AI is not inherently improper, and that courts and the ABA have acknowledged as much. The problem is not the tool. The problem is relying on the tool without applying professional judgment and verification to everything it produces.

The Bias Problem Is Real, Even When It Is Invisible

Algorithmic bias receives less attention in the popular discourse around AI than hallucinations, but Koss argues that it may ultimately pose an equally serious ethical risk. The large language models that underpin most generative AI tools are trained on vast quantities of human-generated text scraped from the internet and other sources. That text reflects the full range of human bias, misinformation, and historical inequity present in those sources. When a model learns patterns from biased training data, those patterns show up in its outputs.

This matters most when AI is used for tasks that involve classification, prediction, or analysis of categories that track protected characteristics, such as race, gender, or national origin. A tool used for employment matters, sentencing analysis, or anything touching on demographic data can produce biased outputs that a lawyer then relies on and passes on to a client without recognizing the distortion. The attorney's duty not to engage in discriminatory practice does not disappear because the bias originated in an AI model.

There is also the black box problem. Even experts struggle to explain with precision how these models arrive at specific outputs. That opacity makes it very difficult to audit for bias, catch errors, or explain your reasoning if challenged. And as more and more online content is itself AI-generated, there is an emerging feedback loop in which models are being trained partly on AI-generated material, with uncertain implications for accuracy and bias over time.

Building an Ethical AI Framework

The last section of the course shifts from risk identification to practical guidance. Koss presents four criteria for evaluating any AI tool before relying on it in practice:

  • Precision: whether the tool can explain its reasoning and show where its conclusions come from.

  • Truth: whether its outputs are verifiable and accurately cited.

  • Clarity: whether the tool and its vendor are transparent about how it works and what it does with data.

  • Trackability: whether the process can be verified, reversed, or explained when challenged.

He also makes the case for written generative AI permissible use policies, whether at the firm level or for individual practitioners. A policy of this kind should address what the tools are used for, what data can and cannot be submitted to them, how client and firm confidential information is protected, and what process is required to verify AI-generated outputs before they are relied on professionally.

Throughout, Koss returns to a phrase that captures the entire argument: the human in the loop. AI should function as a tool that enhances legal capability, not a replacement for professional judgment. The efficiency gains are real and significant: a first draft generated in minutes instead of hours, a document set uploaded for analysis and returned as a structured summary, a research question answered with a well-organized starting point. But those gains represent value only when an attorney applies trained legal judgment to everything that comes out of these tools, checks every citation, questions every conclusion, and takes personal responsibility for the accuracy of anything that goes to a client or a court.

That has always been the obligation. Generative AI makes it more visible and more consequential than ever.

The Tools Are Only as Good as the Judgment Behind Them

The tools are not going away, and no serious practitioner should want them to. But using them well requires something that no model can provide on its own: the judgment, accountability, and professional responsibility that define what it means to practice law. This course is a practical guide to bringing exactly that to every AI-assisted decision you make.

Ready to understand the ethics of AI in legal practice at a deeper level? Watch Human Judgment Required: The Ethics of Practicing Law in the Age of Generative AI on Lawline today.

 

The AI Works for You. You Are Still the Lawyer.

This course makes one thing clear: the ethical challenges of AI in legal practice are not on the horizon. They are already here, showing up in courtrooms, disciplinary proceedings, and client relationships right now. The attorneys navigating this moment most effectively are not necessarily the most tech-savvy. They are the ones who take the time to understand what these tools actually do, ask hard questions about the data they touch, and refuse to let a confident-sounding output substitute for professional judgment.

Ready to deepen your understanding of AI ethics in legal practice? Explore Lawline's comprehensive AI Hub and find the courses that will help you use these tools well and keep your practice on solid ethical ground.

 


Further Your Legal Education With Lawline Courses

Interested in learning more strategies and tips for becoming a better lawyer? Lawline offers a wide assortment of informative continuing education courses that allow you to develop your expertise and take charge of your professional growth. Check out our programs covering more than 60 practice areas and sign up for a free trial today.