ChatGPT and Other AI Programs Aid and Muddle Access to Justice as Non-Lawyers Seek Their Advice
Edited by: JURIST Staff

Can access to justice be enhanced via the advent of generative AI such as the widely and wildly popular ChatGPT app?

I get asked this pointed question quite frequently. The straightforward answer is that generative AI provides a mixed bag: in some respects this type of AI will indeed enable greater access to legal information and bolster access to justice, though there are downsides that muddle the otherwise hoped-for benefits. Great care and mindful attention are needed when considering how AI comes into play for legal tasks.

To address this topic, I will first cover the essentials of generative AI so that you will be familiar with what the technology is all about. I next discuss how lawyers can make use of generative AI. This then serves as a vital foundation for showcasing the use of generative AI by non-lawyers and the tradeoffs involved when the general public seeks lawyering insights and legal advice from an AI app.

Generative AI And ChatGPT: An Overview

You almost certainly have heard about or seen banner headlines proclaiming the wonders of an AI app known as ChatGPT. This particular AI app was devised by a company called OpenAI and was released in late November 2022. ChatGPT is considered a generative AI application because it takes some text from a user as input and then generates an output that consists of an essay. The AI is a text-to-text generator, though I describe it as a text-to-essay generator since that more readily clarifies what it is commonly used for. For my detailed explanation of generative AI and ChatGPT, see my recent Forbes column.

You can use generative AI to compose lengthy compositions or you can get it to proffer rather short pithy comments. It’s all at your bidding. All you need to do is enter a prompt and the AI app will generate for you an essay that attempts to respond to your prompt. The composed text will seem as though the essay was written by the human hand and mind. If you were to enter a prompt that said “Tell me about Abraham Lincoln,” the generative AI would provide you with an essay about Lincoln. There are other modes of generative AI, such as text-to-art and text-to-video. I’ll be focusing herein on the text-to-text variation.

Your first thought might be that this generative capability does not seem like such a big deal in terms of producing essays. You can easily do an online search of the Internet and readily find tons and tons of essays about President Lincoln. The kicker in the case of generative AI is that the generated essay is relatively unique and provides an original composition rather than a copycat. If you were to try to find the AI-produced essay online someplace, you would be unlikely to discover it.

Generative AI is pre-trained and makes use of a complex mathematical and computational formulation that has been set up by examining patterns in written words and stories across the web. As a result of examining many millions of written passages, the AI can spew out new essays and stories that are a mishmash of what was found. By adding various probabilistic functionality, the resulting text is pretty much unique in comparison to what was used in the training set.
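To make the pattern-matching and probabilistic aspects a bit more concrete, here is a deliberately toy sketch in Python. It is purely illustrative and an assumption on my part for exposition: real generative AI uses vast neural networks trained on millions of passages, not a tiny hand-built table like the one below, but the core idea of probabilistically choosing each next word is the same.

```python
import random

# Toy "training data": which words tend to follow which.
# A real generative AI learns such patterns from millions of passages.
next_words = {
    "the": ["court", "lawyer", "case"],
    "court": ["ruled", "held"],
    "lawyer": ["argued", "filed"],
}

def generate(start, length, seed=None):
    """Produce a short word sequence by probabilistic next-word choice."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length):
        choices = next_words.get(word)
        if not choices:
            break
        # The random choice is why each generated passage tends to vary
        # rather than copying any single training passage verbatim.
        word = rng.choice(choices)
        output.append(word)
    return " ".join(output)

print(generate("the", 2, seed=1))
```

Running the sketch with different seeds yields different word sequences from the same "training" table, which mirrors (in miniature) why AI-generated essays are relatively unique rather than copycats.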

There are numerous concerns about generative AI.

One crucial downside is that the essays produced by a generative-based AI app can contain various errors, falsehoods, biases, and portrayed “facts” that are entirely fabricated. Those fabricated aspects are often referred to as a form of AI hallucinations, a catchphrase that I disfavor due to the anthropomorphizing that the catchphrase imbues.

Another set of overall qualms consists of the potential privacy loss and lack of data confidentiality. By and large, most generative AI apps have licensing terms that allow the AI maker to make use of whatever prompts are entered. Thus, your entered data or personal information can become fodder for further data training of the AI app.

Use Of Generative AI By Lawyers For Legal Tasks

Lawyers can readily make use of generative AI such as ChatGPT.

For example, when seeking to draft a legal document, a lawyer can enter a prompt that loosely describes the needed legal contents and the AI app will attempt to generate a draft. This can immensely expedite the writing process. A key caveat is that the lawyer will need to closely review the produced draft, making sure to find and correct any errors, falsehoods, and the like. In other recent analyses, I explained key ways lawyers can sensibly use generative AI, as well as how law practices can best implement generative AI.

There are some additional and quite significant limitations with existing generative AI such as ChatGPT that are important to consider when used in a distinctly legal context. The data training for these AI apps consists of wide-ranging content across the Internet. The chances of pattern-matching on legal myths and legal fiction are somewhat high. Furthermore, there are almost no bona fide legal materials used for the data training, such that the AI app has not encountered legal court cases, legal briefs, and the like.

A somewhat mind-bending puzzle surprisingly arises from this lack of legal training data: generative AI can potentially pass a state bar exam (just squeaking through via the minimum scores allowed).

How so?

If the generative AI wasn’t trained as human lawyers are, the possible passing of a state bar exam seems unimaginable. The answer is that by having pattern-matched on publicly posted bar exams (ones used for practice, and ones posted after having been formally administered), the generative AI can do sufficient word matching to answer relatively typical bar exam questions.

To make this abundantly clear: generative AI is not sentient, nor are we close to attaining AI sentience. The common phrase in the AI field is that generative AI is a stochastic parrot. This label denotes that the AI merely mimics human natural language, primed via a vast scan of human writing and leveraging modern-day algorithms boosted by large-scale computing capabilities.

The General Public Using Generative AI For Legal Advice

Access to justice is a notable concern that pervades local, national, and international constituencies. The American Bar Association (ABA) has emphasized the importance of access to justice as a basic human right:

“International standards recognize access to justice as both a basic human right and a means to protect other universally recognized human rights. Too often, even when rights exist on paper, enforcement of these standards is weak. Where human rights protections are lacking, marginalized groups are often vulnerable to abuses and face significant challenges to realizing their rights, including within the formal justice system.”

In the United States, the disconcerting facet of legal deserts is well-documented, whereby many geographical pockets throughout the U.S. lack a sufficient availability of human lawyers. I’ve explored how the appropriate use of AI can aid in coping with the paucity of legal aid in legal deserts and other jurisdictions.

With widespread and ready access to generative AI such as ChatGPT, we now stand on a somewhat dicey precipice. Non-lawyers can readily interact with generative AI and ask any manner of legal questions. The generative AI is likely to provide an answer, doing so without much if any indication that the answer is shaky and lacks a proper legal foundation. Some of the AI makers have been adding a warning clause to their generative AI that simply says that legal advice is not to be given weight and that the user should seek a human lawyer. But some doubt that this is sufficient protection for those who might otherwise construe the AI-generated essays as rock-solid legal advice.

As mentioned earlier, besides potentially lacking a proper legal foundation, the AI-generated essays can contain legal errors, falsehoods, biases, and AI hallucinations. It seems unlikely that a non-lawyer would readily be able to discern the valid legal insights from the fabricated ones.

All of that seems to suggest that generative AI is doing a disservice when used by non-lawyers for pursuing legal information. Let’s consider the other side of the coin.

Today, via the use of just about any Internet search engine, a non-lawyer can easily search for and find legal materials and legal information. Some of the legal content is correct and usable, while some of it is possibly error-prone and suspect. The gist is that the legal information the general public can already access today is likewise replete with problematic aspects.

Generative AI has entered a world in which online access to legal content already exists. The added advantage of generative AI is its interactive conversational capability. A person can ask the generative AI to aid in identifying legal issues. They can ask it to translate arcane legalese into plain language. They can have it explain legal topics, doing so interactively and personalized to the user’s requests.

Overall, the implication is that via generative AI, people at large will at least have a fighting chance of learning about legal matters and potentially becoming more aware of their legal rights. This can be done in an engaging and comprehensible manner. This can be done anywhere at any time, using generative AI available 24×7.

Some liken this to the realm of medical advice. Via generative AI, people can interact with the AI app and ask questions of a medical nature. They can indicate their ailments and seek presumed medical advice. The medical profession faces a similar concern as do lawyers, namely that people not versed in these specialized domains are potentially getting misleading or outright wrong advice.

Like it or not, generative AI is here, and its use by the general public will increasingly advance and expand. Medical professionals are gradually getting accustomed to patients who arrive ready to discuss their medical conditions or who challenge medical advice based on their use of generative AI. Lawyers can expect the same. Clients will tend to have formed legal assumptions or might challenge legal advice based on their personal use of generative AI.

I’ve urged lawyers to consider these impacts on their proffering of legal services and the structure and nature of their legal practices. The bottom line is that lawyers making use of generative AI are going to have an advantage over those who do not. By being comfortable with generative AI, lawyers can use the AI apps in provisioning their legal services, plus they will be ready for the emergent avalanche of non-lawyers arriving “legally informed” as a result of using generative AI (including coping with legal misinformation and disinformation that their prospective or existing clients might have picked up along the way).

The AI-Savvy Lawyer of the Future

The next step in using generative AI for legal tasks encompasses using AI in real time while in a courtroom or when interactively engaged in formal legal discussions with, say, a judge or legal mediator. A flurry of news stories arose recently when a vendor that makes use of generative AI indicated it was going to outfit non-lawyers with headphones so that they could represent themselves in court while being guided by the AI.

The effort was ultimately nixed.

Part of the basis for the pushback against non-lawyers using generative AI to perform explicit and formal legal tasks entails the prevailing APL (Authorized Practice of Law) and UPL (Unauthorized Practice of Law) provisions. I have closely analyzed the APL/UPL provisions and pointed out that change will almost inevitably occur, since it is highly likely that generative AI will improve and eventually be able to perform legal work in a robust lawyering capacity. The work of my AI & Law research lab is steadily making progress, as are many others, toward semi-autonomous and ultimately fully autonomous AI-lawyering capabilities.

But we aren’t there yet.

An oft-repeated quote about access to justice is worthy of attention and inspiration: “We educated, privileged lawyers have a professional and moral duty to represent the underrepresented in our society, to ensure that justice exists for all, both legal and economic justice,” per Associate U.S. Supreme Court Justice Sonia Sotomayor, in November 2002.

Lawyers are doing their best to provide pro bono legal services. Some would compellingly contend, though, that human lawyering alone will never be sufficient to meet the public need for justice. The use of generative AI and other AI elements might be a viable means to provide access to justice at scale.

This does not necessarily suggest that human lawyers will be hung out to dry. The more prudent notion in the near term is that lawyers will encounter a greater need for their legal services, sparked by the general public becoming more familiar with their legal rights as a result of utilizing AI.

The AI-savvy lawyer is where the future is hurriedly heading.

Dr. Lance Eliot is a global expert on AI & Law and serves as a Stanford Fellow affiliated with the Stanford Law School (SLS) and the Stanford Computer Science Department via the Center for Legal Informatics. His popular books on AI & Law are highly rated and he has been an invited keynote speaker at major law industry conferences. His articles have appeared in numerous legal publications including MIT Computational Law Journal, Robotics Law Journal, The AI Journal, Computers & Law Journal, Oxford University Business Law (OBLB), New Law Journal, The Global Legal Post, Lawyer Monthly, Legal Business World, LexQuiry, The Legal Daily Journal, Swiss Chinese Law Review Journal, The Legal Technologist, Law360, Attorney At Law Magazine, Law Society Gazette, and others. Dr. Eliot serves on AI & Law committees for the World Economic Forum (WEF), United Nations ITU, IEEE, NIST, and other standards boards, and has testified for Congress on emerging AI high-tech aspects. He has been a professor at the University of Southern California (USC) and served as the Executive Director of a pioneering AI research lab at USC. He has been a top executive at a major Venture Capital (VC) firm, served as a corporate officer in several large firms, and been a highly successful entrepreneur.


This research is part of an ongoing initiative on AI & Law and thanks go to the Stanford University CodeX Center for Legal Informatics, a center jointly operated by the Stanford Law School (SLS) and the Stanford Computer Science Department. CodeX’s emphasis is on the research and development of computational law—the branch of legal informatics concerned with the automation and mechanization of legal analysis.

Suggested citation: Lance Eliot, As Non-Lawyers Increasingly Seek Legal Advice From ChatGPT, Other AI, Access to Justice is Both Aided and Muddled, JURIST – Academic Commentary, March 7, 2023,

This article was prepared for publication by JURIST staff. Please direct any questions or comments to them at

Opinions expressed in JURIST Commentary are the sole responsibility of the author and do not necessarily reflect the views of JURIST's editors, staff, donors or the University of Pittsburgh.