What I find really strange about this is that I use AI a lot as a “smart friend” to work through explanations of things I find difficult, and I am currently preparing for some exams, so I will often give the AI a document and ask for supporting resources to take the subject further. It almost always produces something that is plausibly close to a real thing but wrong in the specifics. When you ask for a reference, it is almost invariably a hallucination. So it just amazes me that anyone would stick that in a brief and ship it without checking it even more carefully than they would check the work of a human underling (which they should obviously also check for something this important).
For example, yesterday I got a list of study resources for abstract algebra. Claude referred me to a lecture series by Benedict Gross (which is excellent, btw), but the link it gave to Harvard’s website was a 404, and it was only with further searching that I found the real thing. It also suggested a YouTube playlist by Socratica (again, it exists, but the URL was wrong) and one by Michael Penn (same deal).
Literally every reference was almost right but actually wrong. How does anyone have the confidence to ship a legal brief that an AI produced without checking it thoroughly?
People are lazy. I’m enrolled in a language class in a foreign country right now - so presumably people taking that class want to actually get good at the language so they can live their lives here - yet a significant portion of students just turn in ChatGPT essays.
And I don’t mean essays edited with ChatGPT, but essays that are clearly verbatim output. When the teacher asks the students to read them out loud to the class, they stumble over words and grammar that are obviously way beyond anything we’ve studied. The utter lack of self-awareness is both funny and really sad.
I think it's easy to understand why people are overestimating the accuracy and performance of LLM-based output: it's currently being touted as the replacement for human labor in a large number of fields. Outside of software development there are fewer optimistic skeptics and much less nuanced takes on the tech.
Casually scrolling through TechCrunch, I see over $1B in very recent investments into legal-focused startups alone. You can't push the message that the technology to replace humans is here and expect people to also know, intrinsically, that they need to do the work of checking the output. It runs counter to the massive public rollout of these products, which have a simple pitch: we are going to replace the work of human employees.
I asked ChatGPT to give Wikipedia links in a table. Not one of the 50+ links was valid.
Which version of GPT? I've found that 4o has actually been quite good at this lately, rarely hallucinating links any more.
Just two days ago, I gave it a list of a dozen article titles from a newspaper website (The Guardian), asked it to look up their URLs and give me a list, and to summarise each article for me, and it made no mistakes at all.
Maybe your task was harder in some way, maybe you're not paying for ChatGPT and are on a less capable model, or maybe it's a question of learning how to prompt - I don't know. I just know that for me it's gone from "assume sources cited are bullshit" to "verify each one still, but they're usually correct".
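For what it's worth, the "verify each one" step is trivially scriptable, at least for whether the links resolve at all. A minimal sketch in Python, assuming the `requests` library is available (the example URLs are made up; a 200 only tells you the page exists, not that it's the page the model claimed it was):

    import requests

    def check_links(urls):
        # Print an HTTP status (or error) for each URL.
        # Note: some servers reject HEAD requests; fall back to GET if needed.
        for url in urls:
            try:
                resp = requests.head(url, allow_redirects=True, timeout=10)
                status = resp.status_code
            except requests.RequestException as exc:
                status = f"error: {exc}"
            print(f"{status}\t{url}")

    # Example with a made-up list of model-suggested links:
    check_links([
        "https://en.wikipedia.org/wiki/Abstract_algebra",
        "https://www.math.harvard.edu/nonexistent-page",
    ])

Anything that comes back 404 goes straight in the bin; the rest you still have to actually read.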
I use it in much the same way as you, and it's been extremely beneficial. But I also would not dream of signing my name on something that has been independently produced by AI, it's just too often blatantly wrong on specifics.
I think people who do are simply not aware that AI is not deterministic the same way a calculator is. I would feel entirely safe signing my name on a mathematical result produced by a calculator (assuming I trusted my own input).
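To make that contrast concrete, here's a toy sketch (everything in it is invented for illustration): a calculator is a pure function, while an LLM samples from a probability distribution, so identical inputs can produce different outputs.

    import random

    def calculator(a, b):
        # Deterministic: the same inputs always give the same output.
        return a * b

    def toy_llm(prompt, temperature=0.8):
        # Stand-in for sampling: with temperature > 0, repeated calls
        # with the same prompt can return different completions.
        completions = ["Smith v. Jones, 123 F.3d 456 (1997)",
                       "Smith v. Jones, 456 F.2d 123 (1972)"]
        return completions[0] if temperature == 0 else random.choice(completions)

    assert calculator(6, 7) == calculator(6, 7)            # always holds
    print(toy_llm("cite a case"), toy_llm("cite a case"))  # may differ

Signing your name on the calculator's output is safe; signing it on the sampler's output is a gamble.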
Reading your comment, I'd like to coin the "AI-enhanced Dunning-Kruger".
Lindell's lawyer claimed that somehow the preliminary copy (before human editing) got submitted to the court - that they actually did the work to fix it, but then slipped up in submitting it.
I could see that, especially with sloppy lawyers in the first place. Or, I could see it being a convenient "the dog ate my homework" excuse.
Having not looked into it, I would guess that his lawyers know they aren’t going to get paid any time soon.
> Wang ordered attorneys Christopher Kachouroff and Jennifer DeMaster to show cause as to why the court should not sanction the defendants, law firm, and individual attorneys. Kachouroff and DeMaster also have to explain why they should not be referred to disciplinary proceedings for violations of the rules of professional conduct.
Glad to see that this is the outcome. As with bribes and similar issues, the hammer has to be big and heavy so that people stop considering this as an option.
What is it with the American far-right and hiring the most _incompetent possible lawyers_? Like, between this and Giuliani...
The problem is that Trump, Musk, Lindell, etc are all extremely arrogant and constantly disregard sound legal advice. Their lawyers aren't merely associated with a controversial client; their professional reputation is put at risk because they might lose easily winnable cases due to a client's dumb tweet. You have to be a crappy lawyer (or an unethical enforcer like Alex Spiro and Roy Cohn) to even want to work with them.
Selection bias on your part. There's plenty of incompetence (and outright fraud) on the other side as well.
Remember Michael Avenatti?
I wonder what the effects of an echo chamber in a forum like this would be... maybe something similar to what Reddit has become.
Some of the prominent people on the right have tried to ignore the law, to not let the law modify their behavior, fighting off lawsuit after lawsuit, and adverse ruling after adverse ruling. If you're going to do that, you have to file a lot of motions. That seems to drive an emphasis on volume rather than quality of motions in reply. At least, that's my perspective as an outside observer.
If their goal is to hire people who believe in their cause, their hands are tied
It's not like there are many lawyers left who are willing to represent them. Either the client has behaved as utterly vilely as Alex Jones; or the case is so clear-cut, due to their own behavior, that there is zero chance of achieving more than a token reduction in sentence (while risking the ire of the clueless fanbase for a "bad defense job"), as in this case; or they have a history of not paying their bills, like Trump.
That leaves only lawyers who have zero reputation left to lose, who want to make a name for themselves in the far-right scene, who are members of the cult themselves, or who think they can milk an already dead/insolvent horse.
These are often also simply hard clients.
Jones is a good example of this. He cycled through about 20 different lawyers during the Sandy Hook trials. The reason he was defaulted is that whenever he was required to produce something, he'd fire the lawyers (or they'd quit) and hire new ones, and invariably in the depositions the answer to "did you bring this document the court mandated that you produce?" was "oh, sorry, I'm brand new to this case and didn't know anything about that."
Jones wasn't cooperating with his lawyers.
There are plenty of good lawyers that have no problem representing far right figures. The issue really comes down to those figures being willing to follow their lawyer's advice.
The really bad lawyers simply don't care if their clients ignore their advice.
I don't understand how a lawyer can use AI like this and not just spend the little time required to check that the citations actually exist.
I constantly see people reply to questions with "I asked ChatGPT for you and this is what it says" without a hint of the shame they should feel. The willingness to just accept plausible-sounding AI spew uncritically and without further investigation seems to be baked into some people.
I've seen this as well and I've seen pushback when pointing out it's a hallucination machine that sometimes gets good results, but not always.
Way too many people think that LLMs understand the content in their dataset.
That sort of response seems not too different from the classic "let me google that for you". It seems to me that it is a way to express that the answer to the question can be "trivially" obtained yourself by doing research on your own. Alternatively it can be interpreted as "I don't know anything more than Google/ChatGPT does".
What annoys me more about this type of response is that I feel there's a less rude way to express the same.
It's worse, because the magic robot's output is often _wrong_.
Well, wrong more often. It's not like Google et al. have a monopoly on truth.
At least those folks are acknowledging the source. It's the ones who ask ChatGPT and then give the answer as if it were their own that are likely to cause more of a problem.
Shame? It's often constructive! Just treat it for what it is, imperfect information.
It's not constructive to copy-paste LLM slop to discussions. I've yet to see a context where that is welcome, and people should feel shame for doing that.
Go look at "The Credit Card Song" from 1974. It's intended to be humorous, but the idea of uncritically accepting anything a computer said was prevalent enough then to give the song an underlying basis.
You could probably use AI to check that the citations exist
The multiplying of numbers less than 1 together will continue until 1 is reached.
And if they don't, the AI will make up some for you.
Maybe someone can make a browser extension that does not take 404 for an answer but just silently makes up something plausible?
It's not "a little time"
The Judge spent the time to do exactly this. Judges are busy. Their time is valuable. The lawyer used AI to make the judge do work. The lawyer was too lazy to do the verification work that they expected the judge to perform. This speaks to a profound level of disrespect.
Perhaps not, but it is the time required to discharge their obligation under Rule 11 of the Federal Rules of Civil Procedure (IANAL).
It’s “paralegal time” which is nearly free …
First, you're confusing time with money
Second, the mistakes weren't just incorrect citations any paralegal could check
> Second, the mistakes weren't just incorrect citations any paralegal could check
... Some of the 'mistakes' (strictly speaking they are not mistakes, of course) are _citations of cases which do not exist_.
... just ...
This is just Mata v. Avianca again
As an attorney, I’ve found that this isn’t the issue it was a year ago.
1. Use reasoning models, and include in the prompt an instruction to check the cited cases and verify the holdings.
2. Take the draft, run it through ChatGPT Deep Research, Gemini Deep Research, and Claude, and tell each to verify the holdings.
I still double check, for now, but this is catching every hallucination.
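Mechanically, step 2 can be as simple as piping the draft back through a second model with a verification prompt. A rough sketch using the OpenAI Python client (the model name and prompt wording are placeholders, and the output is a checklist for human review, not a verdict):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def verify_holdings(draft_text):
        # Ask a second model to flag citations it cannot verify.
        prompt = (
            "For each case cited in the brief below, state whether the case "
            "exists and whether it supports the stated holding. Flag anything "
            "you cannot verify.\n\n" + draft_text
        )
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder; use whichever model you trust
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

Repeat with a second provider and compare; disagreements between the models are exactly the citations to pull up yourself.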
That’s so stupid, he almost deserves to lose the case just for that
He needs punishment for himself, not for the people or entity he's representing.
Everything about this situation is comically dumb, but it shows how far the US has degraded that this is meaningful news. If this were a work of fiction, people would dismiss it as lazy writing: an ultra-conservative CEO of a pillow company spreads voting conspiracies, leading to a lawsuit in which he hires lawyers who risk losing the case because they relied on AI.
Because this sort of thing is totally geographically bound.
Quite dumb. If it were a book it would be "Infinite Jest", and the receipts of everyone who bought the pillows could be used to enter into some inane raffle.