Confronting the Pitfalls and Possibilities

ChatGPT and the Law

MORE THAN 100 MILLION USERS have tested the capabilities of Chat Generative Pre-trained Transformer (ChatGPT) to do everything from composing song lyrics to writing code—absolutely free of charge.

At a minimum, ChatGPT is entertaining to use. And some executives have claimed that it will eclipse the impacts of both the Industrial Revolution and the internet.

Forget the stilted chatbots that have been helping consumers make retail purchases and book flights (when all they really wanted was to talk to a human, anyway). ChatGPT is a whole new class of machine-to-human interaction.

OpenAI launched ChatGPT in November 2022. People were so eager to try it that more than 1 million users signed up in the first week. ChatGPT has supreme confidence in its abilities. It has style. It sounds like a real person: a really smart person who wins at trivia, excels in business, and has the arts & culture section of every newspaper ever written memorized. In fact, ChatGPT does all of these things, in a way.

Learn what ChatGPT is and understand its capabilities and limitations, and it starts to become evident why this tool has incredible potential. Reflect a moment longer, and its inherent challenges begin to crystallize.

Emory Law's Matthew Sag, professor of law in artificial intelligence, machine learning, and data science, and Jennifer Murphy Romig, professor of practice, spoke about the technology as it relates to current Emory students and the legal industry. Yandong Liu 09G, one of Emory's first computer science PhD students and the co-founder and CTO of Connectly, explained how ChatGPT will affect our lives. Their insights offered clarity and optimism as they predicted how ChatGPT might serve law students and legal professionals. 

What Is ChatGPT?

First, what it is not: an information retrieval device. Professor Sag was very insistent about that, and Liu emphasized that ChatGPT has no access to the internet. "ChatGPT is this snapshot of knowledge in a moment," Liu said.

Now, what it is: a large language model. It has been programmed to make an educated guess about what text should look like in response to a prompt. Think of the text completion features on messaging applications or email. It's that, on steroids. ChatGPT has been constructed with massive amounts of electronic texts: "books, websites, probably the whole of Wikipedia," Professor Sag said. But how was it made? "This is where things get difficult to understand because of the scale. They used machine learning to train a model so it could guess the next word from context. It has 175 billion parameters, a scale that defies comprehension," Sag said.
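
For readers who want to see the "educated guess" mechanic in action, the short Python sketch below asks a small, openly downloadable language model, GPT-2, to rank its guesses for the next word of a prompt. This is only an illustration of the same underlying technique: GPT-2 is a far smaller predecessor of the model behind ChatGPT (which is not publicly downloadable), and the prompt, model choice, and use of the Hugging Face transformers library are our assumptions, not anything the professors described using.

    # Illustrative only: next-word prediction with GPT-2, a small public
    # predecessor of ChatGPT's model (assumes: pip install torch transformers).
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "The Supreme Court held that the"  # hypothetical example prompt
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # a score for every token in the vocabulary

    # Scores for whatever token would follow the last word of the prompt.
    next_token_scores = logits[0, -1]

    # The model's five most probable guesses for the next word.
    for token_id in torch.topk(next_token_scores, k=5).indices.tolist():
        print(repr(tokenizer.decode([token_id])))

Scale this guessing game up to 175 billion parameters and wrap it in a chat interface, and the result is ChatGPT-style text that sounds right whether or not it is right.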

The training data matters a lot. Information that is widely understood because it has been in texts for a long time is represented frequently and substantively in the data. Professor Sag offered the example of Marbury v. Madison. Ask ChatGPT to summarize the case, and its summary is decent. However, ask about a recent Supreme Court case that is underrepresented in the training data, and the summary will be bad or inaccurate. Professor Sag said that ChatGPT is not striving for accuracy; it is striving to mimic our language. "It's good at sounding confident and mimicking style, but often for the things that are more technical or advanced or not represented in the training data, it looks impressive but is just wrong," Sag said.

What use is a tool that gets basic facts wrong? More important than the tool itself is the technology powering it. Liu said that even if it is not evident, AI is behind many of the choices people make every day. At some point soon, if not already, ChatGPT is going to be unavoidable. "Everything behind the scenes at ChatGPT is driven by large language models. They're very big ... and people are constantly dumping more data to it. That's why it's outperforming everything we had previously." What's more, Liu said, it has a conversational interface, and everyone has access to it, whereas other models were restricted to developers, whose job it was to integrate the technology into various applications.
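
To picture the distinction Liu is drawing, here is a minimal, hypothetical sketch of the developer route he describes: rather than typing into the chat window, a programmer wires the model into an application through OpenAI's API. It uses the pre-1.0 openai Python library current when this piece was written; the API key, model name, and prompt are placeholder assumptions.

    # Illustrative only: integrating the model into an application via
    # OpenAI's developer API (pre-1.0 openai Python library style).
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; requires a developer account

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # placeholder name from the ChatGPT model family
        messages=[
            {"role": "user",
             "content": "Summarize Marbury v. Madison in two sentences."},
        ],
    )
    print(response.choices[0].message.content)

The chat website essentially performs this kind of call on the user's behalf; in Liu's telling, handing that conversational interface to everyone, not the model alone, is what made ChatGPT unavoidable.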

Ethical challenges

Much initial excitement around ChatGPT seemed to be about simply testing the tool. Can it give advice? Yes, but it is not a licensed therapist. Can it write a recipe? Yes, but not one developed by a test kitchen. Can it write an essay? Well, yes. And can anyone tell if the document was written by ChatGPT? Here's where things get a little dicey.

Cheating and plagiarizing

There are major ethical challenges to using ChatGPT in schools. In short, ChatGPT makes it really easy to cheat and plagiarize. ChatGPT has even passed law exams with multiple-choice and essay questions.

While some schools have outright banned its use (New York City and Seattle public schools were among the first on a growing list), others are determining how it can be used properly if it cannot be banned. "A number of individuals [at Emory] have amended their syllabi to prohibit the use of ChatGPT or limit it to appropriate circumstances," Sag said. For example, using it to generate answers to exam questions would be inappropriate; using it as a writing tool in the draft stage would be appropriate.

Professor Romig described an experiment she did with ChatGPT. 

"I plugged in some simple facts and prompted ChatGPT to write two versions of a demand letter-harsh and legalistic, and the other version conciliatory and reasonable-and I was presented the same facts in two very different styles and tones," Romig said. "ChatGPT has some good potential to help new lawyers experiment with some of the subtleties of how they might need to write and communicate." In her Introduction to Legal Advocacy, Research, and Communications (ILARC) course, Romig teaches first-year students how to write common legal docu­ments. The style and tone she mentioned are significant; each situation and document call for variations in tone. Students will continue sharpening their writing skills in the work­place (both with their colleagues and opposing counsel), but in the meantime, ChatGPT can help them find their voice as writers.

If a student uses ChatGPT as a writing tool, that's not always evident in the final product, especially if the user polishes the language, fact-checks information, and adds citations. "First-year students are stunned to know how heavily and frequently they must cite," Romig said. While a student could very well use ChatGPT to take a pass at a first draft, ChatGPT currently cannot create a final written product that meets Emory Law's standards for sourcing, attributing, and citing.

Professor Sag said that he has used ChatGPT for summarization. Importantly, he knows a lot about the subjects he is asking ChatGPT to summarize. "It can be a really good time-saving device if you are not relying on it to tell you the truth," Professor Sag said. Reading a summary by ChatGPT is not too different from consulting a Wikipedia entry, and students certainly have the freedom to do that.

Take it one step further and ask ChatGPT to think critically about some aspect of the topic, and that looks like cheating. The student's job is to think critically, and ChatGPT should not and cannot. "Think about the goal of these big models. It's to engage and continue the conversation to mimic human behavior. ... It doesn't try to get it right. Actually, it has no sense of what is right or what is wrong, so you can't really rely on it for anything that's important," Liu said.

Perpetuating bias and misinformation

As ChatGPT continues to train, it will undoubtedly absorb biased information and misinformation that exists on the internet. "If the model hasn't seen it, it won't reproduce something it doesn't know," Liu said. But who is to say that people won't realize the potential of ChatGPT and other similar technologies to spread misinformation? Filters can only do so much; there must be standards. "You really have to be able to tell what's bad content, and from different perspectives, the definition is nuanced," Liu said. "It's going to be a constant battle, and we have to start from the policy side to define what is considered bias." There is no going back for a redo at this point. "We have to find a new way to coexist, adapt, and make use of it. There's no way to stop this anyway. It's going to continue. It's really up to the researchers and policymakers to proactively seek transparent policies and safeguards to make sure the AI efforts are well-aligned with our values," Liu said.

Beyond intentional bias and misinformation worming their way into training data, there are seemingly regressive perspectives coming from ChatGPT. Professor Sag explained that in the troves of training data, there are more discussions of, for example, male doctors and female nurses. Consider the sheer amount of text that existed before there was broad representation of gender, race, and other characteristics. "It's not that machine learning is biased," Sag said. "It's that it's an accurate reflection of the world we live in that's biased. Our world has been shaped by injustice and inequality, and so is the model you just trained."

Automating the law

Given its unavoidable bias, should ChatGPT be trusted to help with matters of justice? In early February 2023, a judge in Colombia consulted ChatGPT while writing a ruling for a case. This caused a stir. Faced with a backlog of cases, the judge insisted that the tool helped him work faster but that the ultimate judgment was his. Was this an appropriate use of ChatGPT as a time-saving tool used in parallel with human judgment? Is this the first step toward automating certain types of cases so that robot lawyers become a reality?

No one thinks that robot lawyers are an imminent threat (and certainly no jurisdiction has authorized a ChatGPT device to practice law), but there is no doubt that certain legal processes and practices could benefit from an AI turbo-charge. "Computers don't know standards for what's right or wrong," Liu said. "But in the short term, you can see how assistance from AI could do some basic groundwork. Get the facts and present them to me so I can prepare my judgment."

Professor Romig does not need to be convinced that AI tools can enhance efficiencies in workflow and project management. "Efficiency is always important to corporate clients, institutional clients, all clients," Romig said, adding that while ChatGPT can provide information, it cannot analyze it. Extracting the useful information requires a strategy, Professor Romig said. "We're in the infancy of generative AI as a tool for lawyers and behind-the-scenes legal services. We have to figure out how to use it without diminishing all the work that lawyers do. There is a craft to working with the facts. I know that AI can help with that, but we have to keep what's great about human judgment," Romig said.

Liu pointed out that, in his industry, AI has the potential to expedite processes, too. "ChatGPT could give me a boilerplate computer program to work on so I can flesh out the details. That will be highly likely to happen, but I don't think it will be a simple replacement to any serious job," Liu said.

"Even the model itself is pointing out through its paraphrased lan­guage that good legal advice is necessary to take that next step," Romig said. This is because the technology has no conscience or reasoning capabilities. It is simply replicating human language. This is yet another reason to be unconcerned about ChatGPT automating legal jobs. Technology does displace certain types of labor-and sometimes entire jobs, Sag said. "But most of the time, tech that people think will replace work just makes it different. "Did spreadsheets replace accountants? 

"They write numbers in series of columns and then give you a report," Professor Sag said. And yet, "Accountants are doing just fine. They are working at a higher value-add stage."

Reasons to be optimistic

In the most promising version of living and working with ChatGPT, the tool absorbs the thoughtless, redundant aspects of work. This frees up humans for more important tasks that require careful, thoughtful consideration. "If it can save me some time and energy by generating the repetitive work, then it seems like a good thing to me. It's not creating a fake patent application so you can throw everything at the wall to see which one gets accepted. That's obviously a terrible, terrible idea," Liu said.

"It is easy to foresee some of the more mundane activities that it can automate, but it is hard to predict the new opportunities it will create," Sag said. He is a self-proclaimed tech optimist but is cautious about some grandiose claims within the industry. "A lot of [ the hype is] just ridiculous, but that doesn't mean [ the machine learning] isn't incred­ibly impressive and potentially very important," Sag said. He predicts that the technology will have the biggest impact on search engines' ability to understand what people are really looking for.

Romig acknowledged that it might be idealistic, but tech tools like ChatGPT could serve the legal profession by serving the underserved with legal information and advice. "Entrepreneurs are touting AI as a potentially revolutionary tool to enhance access to justice. The legal industry and the legal profession are trying to vet those claims so that real enhancements are offered and revised regulation is instituted where it will be helpful," Romig said. Cautiously, she added, "But hype that doesn't live up to the promise should not become the product."
