Two federal judges were caught using AI to draft documents

Senate Probe Targets Judicial AI Errors

Senate Judiciary Committee Chairman Charles E. Grassley has initiated an investigation into two federal district judges who retracted their opinions this summer due to errors attributed to artificial intelligence use.

The inquiry focuses on Judge Henry T. Wingate in Mississippi and Judge Xavier Neals in New Jersey, both of whom maintained that the inaccuracies did not impact the substance of their rulings but provided limited details on the origins of the issues.

As the committee’s leader responsible for judicial oversight, Grassley emphasized the need for greater transparency from the judges.

“No less than the attorneys who appear before them, judges must be held to the highest standards of integrity, candor, and factual accuracy,” the Iowa Republican said.

“Indeed, Article III judges should be held to a higher standard, given the binding force of their rulings on the rights and obligations of litigants before them.”

Grassley requested that the original erroneous opinions be restored to public dockets and directed the judges to explain the errors, including any involvement of generative AI by themselves or their clerks.

He also asked for details on steps taken to prevent future occurrences.

Details of the Retracted Rulings

Judge Neals, appointed by President Biden, released an opinion in June that included incorrect citations of case outcomes, fabricated quotes from non-existent rulings, and misattributed statements to parties in the case.

Andrew Lichtman, a lawyer involved in the case, submitted a letter highlighting these issues and noting that the opinion had already been cited as precedent in other proceedings.

Judge Wingate, a Reagan appointee, issued a July restraining order against a Mississippi law restricting diversity, equity, and inclusion teachings in schools.

The opinion referenced non-parties as litigants and invented sections of state law. Wingate described the mistakes as “clerical errors,” and both judges declined to provide further explanation.

Legal experts identified the errors as typical AI “hallucinations,” or invented details, which have increasingly appeared in legal filings.

Grassley seeks comprehensive responses from the judges to address how such tools may have been employed and to outline preventive measures, aiming to maintain public confidence in federal court proceedings.

Are you worried about the use of AI in the judicial system? Sound off with your thoughts in the comments below!

