Over the next several months of Legal Currents and Futures posts, The Colleges of Law faculty and students are sharing a series of thought pieces about artificial intelligence and the law. Massimo Genovese penned this introduction to the series.

Artificial intelligence (AI) ignites the human imagination, from academia to critically acclaimed entertainment, including Isaac Asimov’s novels and films such as “Blade Runner.” This publication and the symposium hosted by the Santa Barbara County Bar Association and The Colleges of Law ground AI discussions in law today. The motivation to address AI issues has been accelerated by the recent surge of notable breakthroughs in generative AI, like OpenAI’s ChatGPT.[1] The surge has caused numerous novel concerns to erupt across legal fields.[1] With the ubiquity of AI across homes and businesses, we must face new imperatives by adding AI to both the practice and the business of law.[1]

Written broadly, this introductory article lays a foundation of AI knowledge upon which the following articles in this series rest. Initial topics include definitions and characterizations of AI, a discussion of current applications across the globe, future technological implications, and ethical, legal, and practical dilemmas. Ultimately, this introduction aims to orient readers for the articles that follow and for future reading. To showcase the breadth of scholarship in the field, concepts may conflict or be presented as multifaceted, the better to reflect diverse opinions and the community of authors who have contributed to this collection.

Definitions: What is Artificial Intelligence?

For much the same reason that judges interpret and define abstract concepts and provide precedent for common legal understanding, AI must be demystified for legal utility, clarity, and predictability. Fortunately, the legal community does not face this task alone. The demystification of AI invigorates philosophers, sociologists, computer scientists, and popular culture. Across disciplines, artificial intelligence represents provocative concepts that elude a universally agreed-upon definition or characterization. Artificial intelligence sometimes refers to a singular product or entity, such as ChatGPT. However, it can also describe a goal-driven field of study, typically in computer science or philosophy, which aims to build or understand a particular type of system. A meaningful portion of the literature places AI within a quartet of categories, providing a starting point for further investigation: systems that think like humans, systems that act like humans, systems that think rationally, or systems that act rationally.[2] The law may require multiple definitions based on an AI’s functions or form of computation.[1] Fully exploring the range of AI definitions exceeds the scope of this article; instead, a basic definition will foster mutual understanding and create a basis for what follows.[3] In pursuit of a utilitarian definition of AI for the purposes of this article, we turn to law professor Harry Surden, who has demystified the term for lawyers. He describes AI as any technology automating a task typically requiring human intelligence.[4] Professor Surden’s definition fits somewhere in the quartet; more importantly, it can be used to address policy issues while avoiding meaningful but speculative futuristic discussion.[4] This definition forms a foundation for the collection of articles in this issue.

Categorization: AGI, Top-Down, Bottom-Up, & LLMs

Even the most impressive AI applications, capable of producing lawyer-like results, lack computation comparable to or exceeding lawyer-like intelligence.[4] The awe-inspiring artificial general intelligence, which equals or transcends human capacity, seen in many films, does not currently exist. Without a “true” artificial general intelligence, we turn to the bifurcation of existing systems. Current classifications describe two central paradigms—top-down (rules-driven AI) and bottom-up (data-driven AI). This is a helpful initial dichotomy, further complicated by the existence of hybrid systems and large language models.

Roughly speaking, a top-down program takes relevant rules from an expert domain (tax codes, statutes, etc.) and inputs their computational equivalents into a machine. Top-down systems use representations and logic, often if/then/else structures; their output can be precise and representative of binary logic. In essence, top-down machines require machine-readable information. A good example of machine-readable information is eXtensible Business Reporting Language (XBRL), a structured, machine-readable standard used to report business operations to the Federal Deposit Insurance Corp., the Securities and Exchange Commission, and the Federal Energy Regulatory Commission.[5] The top-down model’s efficacy depends on unambiguous, clear rules, the user’s competence, and the initial programmer’s understanding of the law. While these systems have limited uses, their deductive capabilities make them noteworthy, allowing them to accomplish tasks that are either too time-consuming or too complex for a human expert.[6] Personal income tax provides an excellent example of the utility of an expert top-down system, since specific tax laws are axiomatically clear, making computational equivalents possible. A familiar and successful top-down example comes from Intuit’s TurboTax software, an expert top-down AI system. To put the system’s success into perspective, Intuit, drawing on the expert knowledge of attorneys, accountants, engineers, and others, built a top-down AI so accurate that 60 million Americans use it, and the IRS often defaults to the model’s judgment.[7]
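The deductive, rules-driven pattern described above can be sketched in a few lines. This is a minimal illustration only; the brackets and rates below are invented and do not reflect any real tax code or TurboTax’s implementation:

```python
# Hypothetical top-down (rules-driven) system: expert-supplied rules
# encoded as explicit if/then/else logic. Brackets are illustrative.

def tax_owed(taxable_income: float) -> float:
    """Apply hard-coded rules deductively; same input, same output."""
    if taxable_income <= 10_000:
        return taxable_income * 0.10
    elif taxable_income <= 40_000:
        return 1_000 + (taxable_income - 10_000) * 0.12
    else:
        return 4_600 + (taxable_income - 40_000) * 0.22

print(tax_owed(50_000))
```

The system is precise and auditable within its rules, but it has no way to handle a scenario its programmers did not anticipate, which is exactly the limitation the next paragraph discusses.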

Top-down systems have significant limitations: their deductively closed loops mean they struggle with, or cannot independently adapt to, novel scenarios. On the other hand, bottom-up (data-driven) systems are flexible and come in various forms, the most notable falling under the umbrella term of machine learning. Machine learning encompasses a family of techniques (neural nets, Bayes classifiers, etc.) and is best thought of as a sub-category of AI. Various terms in this space, such as “neural” or “learning,” borrow typical descriptors of the human mind, which serve as helpful metaphors for computational concepts.[4] These metaphors refer to the system’s use of algorithms to recognize patterns, act independently, and potentially make better future decisions. Bottom-up (data-driven) systems are the major AI systems most significantly impacting society today.[4] Many bottom-up examples exist, such as Google’s spam filters, Tesla’s self-driving cars, and PayPal’s fraud detection systems. PayPal’s machine learning system detects fraud in microseconds by comparing patterns and applying linear and nonlinear algorithms to make a final assessment, beating industry standards and keeping lost revenue under a third of 1%.[8]
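The bottom-up idea of learning a decision rule from examples, rather than hand-coding it, can be sketched with a deliberately tiny classifier. This is not PayPal’s system; the nearest-centroid method, the feature names, and the data below are all invented for illustration:

```python
# Minimal bottom-up sketch: a nearest-centroid classifier that derives
# its decision rule from labeled examples instead of expert-written rules.

def centroid(points):
    """Average of a list of equal-length feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(examples):
    """examples: {label: [feature vectors]} -> one centroid per label."""
    return {label: centroid(vecs) for label, vecs in examples.items()}

def predict(model, x):
    """Label whose centroid is closest (squared distance) to x."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], x))

# Hypothetical transaction features: (amount in $1000s, country mismatch)
model = train({
    "legit": [(0.1, 0.0), (0.3, 0.0), (0.2, 0.0)],
    "fraud": [(5.0, 1.0), (7.0, 1.0), (6.0, 1.0)],
})
print(predict(model, (6.5, 1.0)))  # classified by learned patterns, not rules
```

Feeding the system more labeled examples moves the centroids and changes future predictions, which is the sense in which such a system “learns” rather than merely executes pre-written rules.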

Another significant and popularly successful technology is generative AI, specifically large language models like Bard or ChatGPT. Large language models are a recent development in AI technology, and their proliferation and utility cannot be attributed to mere “hype.” Large language models can change the nature of work previously produced only by humans.[1] A key feature of large language models is that they can create new content at incredible speeds, including more than just text.[1] Their capabilities include creating executable machine code, audio, images, text, 3D objects, simulations, and videos.[1] Large language models like ChatGPT operate partly via billions of values, called weights, defining connections between “neurons,” and partly via analyzing vast amounts of unstructured data.[1] Neurons act as small mathematical functions that process information and transmit it to another layer. Weights are the numerical values assigned to the connections between those neurons, determining the relative strength of the signal passing from neuron to neuron and allowing the model to learn patterns in the data it processes. Adjusting these weights during training enables the large language model to optimize its ability to generate meaningful and contextually accurate responses. Large language models have already affected the legal marketplace: up-and-coming legal tech companies like Spellbook advertise that their legal programs are trained using ChatGPT, and large firms have adopted GPT-4 programs, like Allen & Overy’s Harvey, trained on legal data and specialized in legal work.[1] Large language models’ effects are only increasing; whether in law, art, or other business, the legal community will be forced to continue interacting with them.

In all categories, these programs struggle or fail at critical lawyerly skills involving abstract concepts or functions such as intuition, normativity, creativity, etc.[4] The efficacy of many of these systems depends on large amounts of high-quality data organized in a way a machine can read; within the legal field, such freely available and nonproprietary data is comparatively scarce.[4] Even with quality data, many areas of law remain outside the purview of current AI, partly because so many legal issues are grey or fact-contingent; the quintessential catchphrase of legality remains “it depends.”

Consumer Products: Law Firms of the Future?

AI’s influence in the U.S. legal sector is evident in applications too numerous to list here. Westlaw and Lexis have acquired AI companies, such as Casetext and Lex Machina, and startups continue to emerge, such as Spellbook, Lawgeex, and Disco. Some law firms leverage AI for e-discovery, contract review and analysis, internal metrics, key performance indicators, and research and compliance, and, beyond those foundational tasks, for predictive analytics to develop litigation strategy.[9] Future research on the efficiency and impact of AI products is paramount to ensure ethical development, optimize resource allocation, drive innovation, understand societal consequences, enhance safety and security measures, inform policymaking, and improve overall user experience.

Global Perspectives: From Word to Artificial Adjudication

Imagine legal AI on a spectrum, from tools that elicit little to no policy concern to those that generate substantial and compelling policy concerns. On the mundane end sit consumer technologies like Grammarly or Microsoft Word; on the other, algorithmic adjudication schemes such as those deployed in the Internet Courts of China.[10] Against a complex history, Chinese courts have tried integrating AI and governance to improve their ability to defuse social conflict, monitor society, increase oversight, reduce malfeasance, and boost efficiency and consistency.[10] Public interest focuses on their purported autonomous “robot” judges, but the major pilot systems are AI clerical, assistive, and recommendation systems.[11] Nevertheless, artificial adjudicators do exist, such as a system named Xiao Zhi, which helped adjudicate matters like a dispute over private lending in the Hangzhou region.[10] Xiao Zhi assisted a human judge in concluding that dispute in 30 minutes, deploying argument summarization, evidence evaluation, and award recommendations.[10] It is pertinent to note that these systems all operate under close human supervision and require human approval, and the stance of China’s current chief justice is that AI can only ever serve as an assistant, never entirely replacing a human judge.[10]

To some, Chinese “robot judges” might seem a distant, immaterial edge case best left to the gaze of comparative-court watchers, but this is not true. Nation-states are competing, and AI technologies related to adjudication and other public policy exist across many countries, such as Estonia’s small-claims AI judge or “robot mediators” in Canada and the U.K.[12] Large companies like Tyler Technologies also operate in the U.S., as do courtroom adjudicatory technologies like COMPAS.

Artificial Adjudication: The United States of America

Artificial adjudication processes exist in various parts of the U.S. Michigan deployed a $46 million adjudication platform called MiDAS, which decided whether an application for unemployment benefits was fraudulent.[13] The system proved so faulty that Deloitte took on the task of replacing it with a system called UFacts, and the Supreme Court of Michigan ruled that falsely accused workers can sue Michigan for violating their constitutional rights.[12] American courts and corrections departments utilize several risk-assessment algorithms, including the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS). COMPAS is a proprietary enigma, also known as a black box, but its output clearly informs decisions about bail, sentencing, and parole.[14] It has been reported that COMPAS works in part from a 137-item questionnaire, giving judges a risk score on a scale of 1-10.[15] Controversies exist, such as a 2016 report from ProPublica, which analyzed more than 7,000 cases and purported racial bias in COMPAS output, garnering attention from both advocates and detractors of the system.[14] Both academics and Equivant, the maker of the COMPAS system, have offered counter-evidence,[14] including arguments distinguishing needs assessments from risk assessments and counter-research finding that the system meets the .70 AUC standard.[14] Some studies show algorithms in the justice system have a positive effect, but the opposite could easily have been found, raising issues such as transparency and constitutional due process.[14]

Conclusion: Future Topics

Exploring the use of artificial intelligence within the law reveals a diverse landscape, ranging from top-down expert systems to data-driven machine learning and culminating in advanced generative AI models like ChatGPT. AI already affects the legal field, encompassing areas from contract review for a solo practitioner to algorithmic adjudication by the government. This article and those that follow are a brief initial survey of a limited selection of AI-related subjects. Subsequent inquiries may delve into pivotal topics such as the economic implications and the evolving landscape of the legal professions; safety considerations, including the alignment problem and AI regulation; and a more thorough examination of consumer applications. Given the ongoing evolution of AI within the legal field, professionals must constantly refresh their knowledge to adapt, ensuring their expertise remains current and applicable. Research, debate, and dialogue will be essential to navigating the intersection of AI and law and shaping a responsible and realistic future for the legal profession and society.


[1] Artificial Intelligence Toolkit, Practical Law Toolkit w-019-1426

[2] “Artificial Intelligence.” Stanford Encyclopedia of Philosophy, 12 Jul. 2018, plato.stanford.edu/entries/artificial-intelligence/#WhatExacAI. Accessed 15 Sept. 2023.

[3] Khani, Ali H. “The Indeterminacy of Translation and Radical Interpretation.” The Internet Encyclopedia of Philosophy, 10 Mar. 2021, iep.utm.edu/indeterm/#:~:text=The%20indeterminacy%20of%20translation%20is,translation%20is%20the%20right%20one. Accessed 15 Sept. 2023.

[4] Surden, Harry, Artificial Intelligence and Law: An Overview (June 28, 2019). Georgia State University Law Review, Vol. 35, 2019, U of Colorado Law Legal Studies Research Paper No. 19-22, Available at SSRN: https://ssrn.com/abstract=3411869

[5] XBRL US welcomes enactment of the Financial Data Transparency Act (FDTA). XBRL US. (2022, December 26). https://xbrl.us/news/xbrlus-welcomes-fdta/

[6] Hutson, Matthew. “Computers Are Starting to Reason like Humans.” Science.Org, 15 Sept. 2023, www.science.org/content/article/computers-are-starting-reason-humans.

[7] “Artificial Intelligence and Law – An Overview and History.” YouTube, uploaded by Stanford Law School, 15 Sept. 2023, www.youtube.com/watch?app=desktop&v=BG6YR0xGMRA.

[8] “Paypal Vs. Fraud – Have No Fear, Machine Learning Is Here !” Harvard.Edu, 11 Nov. 2018, d3.harvard.edu/platform-rctom/submission/paypal-vs-fraud-have-no-fear-machine-learning-is-here/#. Accessed 15 Sept. 2023.

[9] Using Artificial Intelligence in Law Departments, Practical Law Practice Note w-012-7887

[10] Stern, Rachel E. and Liebman, Benjamin L. and Roberts, Margaret E. and Wang, Alice, Automating Fairness? Artificial Intelligence in the Chinese Courts (August 1, 2021). Columbia Journal of Transnational Law, No. 59, 2021, Columbia Public Law Research Paper , Available at SSRN: https://ssrn.com/abstract=4026798

[11] Wang N, Tian MY. “Intelligent Justice”: human-centered considerations in China’s legal AI transformation. AI Ethics. 2023;3(2):349-354. doi: 10.1007/s43681-022-00202-3. Epub 2022 Aug 23. PMID: 36032775; PMCID: PMC9396564.

[12] Vasdani, Tara. “From Estonian AI Judges to Robot Mediators in Canada, U.K.” Lexisnexis.Ca, www.lexisnexis.ca/en-ca/ihc/2019-06/from-estonian-ai-judges-to-robot-mediators-in-canada-uk.page. Accessed 15 Sept. 2023.

[13] Angwin, Julia. “The Seven-Year Struggle to Hold an Out-of-Control Algorithm to Account.” Themarkup.Org, 8 Oct. 2022, themarkup.org/newsletter/hello-world/the-seven-year-struggle-to-hold-an-out-of-control-algorithm-to-account. Accessed 15 Sept. 2023.

[14] Tashea, Jason. “Courts Are Using AI to Sentence Criminals. That Must Stop Now.” Wired.Com, 17 Apr. 2017, www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now. Accessed 15 Sept. 2023.

[15] Yong, Ed. “A Popular Algorithm Is No Better at Predicting Crimes Than Random People.” TheAtlantic.Com, 17 Jan. 2018, www.theatlantic.com/technology/archive/2018/01/equivant-compas-algorithm/550646/. Accessed 16 Sept. 2023.