ChatGPT should remain in the trust-and-then-verify class of tools for now.
It's a story as old as time: a man claims he was injured by a serving cart while flying on Avianca Airlines and ends up suing. Avianca asks the court to toss out the case, only to get a 10-page missive in response from opposing counsel with scathing and vehement objections citing more than half a dozen relevant court cases.
There was just one problem: none of the cases existed. ChatGPT made it all up.
Just 4 days ago, another attorney, this time in Colorado Springs, leveraged ChatGPT to write and file his first motion for summary judgment. The AI did the same thing, hallucinating fake cases that the attorney then included in his filing.
Everyone keeps saying ChatGPT and generative AI are going to be the end of attorneys, but with stories like this, I would imagine most in the legal profession are at best skeptical. And they should be.
Attorneys have used a variety of software tools over the years to help prepare for legal proceedings. With the advent of ChatGPT and its meteoric rise over the last 6 months, however, the technology has taken center stage, showcasing not only what it can do but also some of the scarier parts that lurk when you don't fully understand what it is or how it works.
A Large Language Model, such as ChatGPT or its underlying GPT-4 architecture, is a type of artificial intelligence trained on a vast amount of text data. During training it learns patterns in that text: grammar, facts, some reasoning ability, and even a degree of creativity. Once trained, it can generate human-like text that can seem surprisingly cogent and well-informed, and it can be used for many applications, such as drafting emails, writing essays, answering questions, translating languages, and much more.
The term "hallucination" in the context of AI refers to the model generating information that isn't grounded in its training data. This can often occur when the model is generating a long sequence of text or when it's asked about very specific or novel topics that it doesn't have much precise training data on. In the case of legal matters, when asked to generate a response or create content, the AI might "hallucinate" case law - that is, it might generate legal scenarios or legal decisions that sound reasonable but are actually completely fictional. This is, of course, completely unacceptable if you want to use ChatGPT for drafting legal documents.
This happens because the AI has learned the patterns and structures of legal argumentation and case law, but it has no inherent understanding of the law itself and no way to verify the real-world accuracy of the specific cases it generates. It is striving to create a coherent, contextually appropriate response based on patterns it has seen before. That can lead to creative, seemingly knowledgeable answers, but it can also produce output that is easily mistaken for actual, factual case law, causing exactly the kind of trouble described above.
For both of the attorneys mentioned above, ChatGPT hallucinated new case law, likely because it did not have enough training on the case law that would actually have been helpful. That risk can be reduced with the right training and grounding, but the biggest mistake here wasn't using ChatGPT; it was trusting it blindly.
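For the technically curious, here is a minimal sketch of what "trust and then verify" can look like in practice. It is written in Python against the 2023-era openai ChatCompletion interface; the prompt, the helper names, and the simplified citation pattern are hypothetical illustrations rather than a production workflow. The point is simply that every citation the model produces is pulled out and handed to a human to confirm against a real legal database before anything is filed.

```python
# A minimal "trust and then verify" sketch: draft with the model,
# then surface every citation for manual verification before filing.
# Assumes the openai Python package (2023-era ChatCompletion API) and an
# OPENAI_API_KEY in the environment; the regex is a simplified,
# hypothetical approximation of U.S. reporter citations.
import os
import re

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Roughly matches strings like "550 U.S. 544" or "925 F.3d 1291".
CITATION_PATTERN = re.compile(r"\b\d{1,4}\s+[A-Z][\w.]*(?:\s[\w.]+)?\s+\d{1,4}\b")


def draft_motion(prompt: str) -> str:
    """Ask the model for a first draft. Nothing here is filed as-is."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are drafting a legal memo. Cite only real cases."},
            {"role": "user", "content": prompt},
        ],
        temperature=0.2,
    )
    return response["choices"][0]["message"]["content"]


def citations_to_verify(draft: str) -> list[str]:
    """Pull out anything that looks like a case citation so a human can
    confirm it actually exists in Westlaw, Lexis, or PACER before filing."""
    return sorted(set(CITATION_PATTERN.findall(draft)))


if __name__ == "__main__":
    draft = draft_motion("Outline an opposition to a motion to dismiss in a passenger-injury case.")
    print(draft)
    print("\nCitations to verify by hand before filing:")
    for cite in citations_to_verify(draft):
        print(" -", cite)
```

The verification step is deliberately manual: the script only flags what needs checking, because the model itself cannot be trusted to confirm its own citations.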
Quick Tips for Attorneys and ChatGPT
Here at CaseMark, we're focused on helping people in the legal profession stay up to date with the fast-paced world of generative AI. We host a weekly webinar covering the basics of ChatGPT, how you can start leveraging it safely and effectively today, and some of the emerging tools that can further streamline your workflow.