AI and the Law

Long ago, when I was a third-year law student, I taught legal writing to first-year law students. I corrected spelling errors on a paper one of the students submitted (pre-spell-check era!) and returned it. The student said, “that’s what my secretary will be for.” My response was, “if you rely on your secretary to catch your mistakes, you’ll be in trouble.”

What I meant is that an attorney responsible for client work cannot abdicate that responsibility to someone else. At a minimum, the attorney needs to know that anybody contributing to the client’s work is competent and trustworthy, and that the work they have contributed appears to have been done correctly. Any attorney who doesn’t do this risks substandard work, a bad outcome, and possibly malpractice.

Fast forward to today and we have the modern-day equivalent of attorneys imprudently abdicating responsibility. But now, instead of abdicating to a person, they are abdicating to AI. There have been many widely reported instances of attorneys filing documents with courts containing AI-generated content that proved to be completely erroneous. Specifically, the documents cited court decisions that simply do not exist. The AI made them up. The consequences for the attorneys, and their clients, weren’t good.

Somewhat inexplicably, even after these embarrassing episodes have been exposed, there seems to be a prevailing notion among the public that attorneys are going to be obsolete because AI will replace them, or at least much of what they do. People and businesses won’t need attorneys nearly as much; all they’ll need to do is go to AI and get whatever they need. Legal training and experience will be unnecessary, because anybody can write a prompt and get exactly what they need.

We are now regularly presented with AI-generated material by prospective clients, clients, and opposing parties, both represented and unrepresented. Generally, they seem to be impressed with what they’ve gotten and believe we will be too. Much of it comes from ChatGPT, likely the culprit in the attorney failures referenced above.

There are numerous problems with ChatGPT for legal matters. Sometimes it just tries too hard. So much so that it makes things up. It is notorious for that. Like court opinions.

Also, if the prompt isn’t framed properly, it isn’t going to provide the correct response. The prompter may believe they’ve posed the right prompt, but they don’t know what they don’t know.

My observation is that ChatGPT often provides a long list of possibilities which are contingent on the specific circumstances and may or may not be applicable. Unfortunately, many naively accept what they’re given as “on point” and run with it. Moreover, ChatGPT doesn’t provide much guidance on which options may be better than others, or what the prospects are of succeeding on each. That’s important, and it is something a competent attorney can certainly do.

In addition, ChatGPT’s training data is not current, so if something has happened recently that affects the correct response to a prompt, it won’t know it. Finally, ChatGPT does not keep the user’s information confidential. As attorneys, we can’t divulge clients’ confidential information.

We use AI all the time, but we use it with tools designed for attorneys where the information provided is verified, and what we do is kept confidential. We won’t be using any non-existent cases in our work.

These tools are very high quality and useful. They help us to do our work quicker and better. Attorneys who are not using AI the right way are at a disadvantage.

And the right way means they must take the time, and put forth the effort, to construct a solid prompt, tweak the prompt as necessary, and then carefully assess the results of the prompt. Ultimately, they must choose what, if anything, in the results is useful, and almost surely enhance it into a quality finished product. It is very rare to do a prompt and receive exactly what is needed. And only someone with the proper expertise would be able to discern whether they have done the right prompt and have received what they need.

So, back to all the AI that is being presented to us. Here is a representative example. An individual, along with his wife, sued a business client of ours on his own, without an attorney. We consider the case completely without merit. We contacted the parties, advised them why we believe the case has no merit, requested that they dismiss it, and warned them that if they didn’t, we would pursue sanctions against them. Mind you, this is all over a $75 charge and an $8 late fee.

Rather than comply with our request, they turned to AI and sent us an aggressive response to the effect that, if we were to pursue sanctions, they would “produce an evidentiary hearing at which our clients would be required to explain to the court why they” are not cashing the check the plaintiffs sent to pay the very invoice and late fee they dispute, a check they had asked us not to cash. I’m not making this up.

Unfortunately for these people, it appears AI has given them a false sense of security. What they are perhaps going to learn the hard way is that our client has done nothing wrong, isn’t liable, and is very unlikely to be found liable. What is likely is that they are going to lose, the money they spent bringing the case will be wasted, and they may end up paying our client’s attorney fees, which will run into thousands of dollars. And, again, over $83. A motion to dismiss the case has been filed, a motion for sanctions has been served on them, and a hearing on the motion to dismiss will be held soon.

This is typical. Even if the AI is right, those who are untrained in the law usually don’t fully understand what they have. It looks great, and they think they’ve accomplished something, but usually it is not what they think it is. When clients, or prospective clients, come to us with this, we try to let them down easily. We do appreciate that they are attempting to be educated and helpful. That’s a good thing. But in our experience so far, there has been very little of value in what they’ve provided. Competent attorneys already understand the law, and what needs to be done for the client or prospective client, at least in a general sense, subject to further analysis. If your attorney needs your help to learn the applicable law, or other technical things applicable to your matter, then you probably have the wrong attorney.

The bottom line is that AI is rarely a good substitute for a qualified attorney. For some simple, routine sorts of things, yes. And to the extent it can help people who can’t afford an attorney navigate the legal world successfully, that is a plus. But if the needs are the least bit complex, and the proper outcome is important, you are far better off with an attorney and may be risking a catastrophe by not using one. Use AI to become educated, to ask questions of the attorney, and to understand things the attorney is doing or recommending, but not to be the attorney, or to think you will instruct the attorney.

While AI can be fun, and very helpful, assuming that AI is all you need for important legal matters is just foolish. This is reality. And we feel obligated to spread the word before too many people, and businesses, fall victim to AI delusions.
