Legal Currents and Futures: ChatGPT’s Limitations: A Closer Look

Jeanne Eicks, J.D., associate dean of Graduate and Lifelong Learning Programs at The Colleges of Law, explores the limitations of ChatGPT when applied to the legal industry.

Readers who saw last week’s blog may recall the impressive list of ways that large language models, such as ChatGPT, can support the work of legal professionals. In short, ChatGPT can draft legal documents, conduct legal research, draft client communications, attribute sentiment to text, and predict legal outcomes. OpenAI reports that the latest version, GPT-4, scored better than 90 percent of human test takers on the Uniform Bar Examination. Yet while ChatGPT offers substantial benefits to legal professionals, it also has shortcomings. For the unaware, ChatGPT’s flaws could erode confidence in the legal professional or even lead to malpractice.

What are GPT-4’s flaws and limitations? When asked that question, ChatGPT responds that it 1) lacks legal expertise; 2) has limited access to legal resources; 3) may demonstrate bias; 4) has no empathy; and 5) can be incorrect. ChatGPT also seems aware that it cannot represent legal clients. I’d add that ChatGPT has spotty knowledge of events after 2021, and that it can hallucinate. Let’s consider these concerns one at a time and how each may affect a lawyer’s use of the tool.

ChatGPT’s limited access to law (even public domain law can be difficult for both AI systems and the public to access), paired with its limited knowledge of events after its 2021 training cutoff, curtails the tool’s utility. Imagine an attorney advising a woman whose access to abortion has been legally blocked in her state since the Dobbs decision. Because ChatGPT’s training data predates Dobbs, it would lack the legal precedent necessary to provide accurate guidance. When ChatGPT lacks the necessary information, any legal research, analysis, and arguments built on its output will be fundamentally flawed.

Compounding these limitations, ChatGPT may fabricate answers when it does not know the correct response, and it presents those answers to users without indicating that it guessed. Computer scientists refer to this phenomenon as a hallucination. In a hallucination, the AI cannot separate its invented facts from reality; in short, it cannot assess the truth of its own statements. Hallucinations can sow significant doubt and have far-reaching consequences if a legal professional relies on these fabricated assertions.

Along with false assertions, ChatGPT may produce biased responses. When asked a question that directly probes for bias, ChatGPT’s guardrails will usually prevent it from supplying an overtly biased answer. But as Timnit Gebru, the former co-lead of Google’s ethical AI team, noted in a paper, the bias in large language models is more insidious. Models like ChatGPT are trained on data that reflects decades, if not centuries, of discriminatory practices, and that data overwhelmingly presents a view of the world with bias embedded in its corpus of knowledge. Responses generated from these inherently flawed training datasets will carry that bias forward. For legal professionals, the risk is that results reinforce existing biases in the legal system rather than encouraging practitioners to question the foundations of laws that disparately affect minorities or underrepresented groups. In short, ChatGPT’s answers cannot help but reinforce the status quo, and they risk leading legal professionals who rely on them unawares down the same path.

Finally, ChatGPT states that it lacks legal expertise and empathy. Legal professionals understand the complexities and nuances inherent in legal subject matter, and practicing attorneys combine that knowledge with experiential insight into a jurisdiction and its specific courts to offer strategic advice. ChatGPT cannot provide legal judgment informed by human needs, an empathic understanding of local courts and judges, or the other environmental and experiential factors that shape the best course of action.

ChatGPT has critical limitations, but legal professionals who stay informed about those constraints can still benefit from using the tool responsibly. While ChatGPT can increase efficiency, it cannot replace sound judgment and empathic, human-centric advice. Before it can truly supplant legal professionals, ChatGPT has significant hurdles to overcome.

To learn more about the Juris Doctor, the Hybrid Juris Doctor, or Master of Arts in Law program at The Colleges of Law, fill out the form below.