
THE COURT: Was this motion generated by generative artificial intelligence?
MR. KACHOUROFF: Not initially. Initially, I did an outline for myself, and I drafted a motion, and then we ran it through AI. And I --
THE COURT: And did you double-check any of these citations once it was run through artificial intelligence?
MR. KACHOUROFF: Your Honor, I personally did not check it. I am responsible for it not being checked.
THE COURT: And you understood, as an officer of the court, pursuant to Rule 11 --
MR. KACHOUROFF: I did, Your Honor.
THE COURT: -- if you’re going to use generative artificial intelligence that that did not excuse you from the obligations of Rule 11?
MR. KACHOUROFF: Absolutely not. Absolutely not.
THE COURT: You understood that, correct?
MR. KACHOUROFF: Yes, I did, Your Honor.
THE COURT: And that doesn’t seem to have happened here, does it?
MR. KACHOUROFF: No, Your Honor.
Somehow this feels like a primary school moment, the teacher catching a kid cheating... interesting times?
What is the crime or transgression?
Sounds like judicial activism
reply
According to Rule 11, one shall not make up laws.
reply
Can you provide a citation? Is it in the CourtListener doc?
reply
I linked it above in the text?
reply
I consulted Copilot...
The short answer is: Rule 11 doesn’t directly apply to AI or AGI, but it does apply to the humans who use them in court filings.

⚖️ Rule 11’s Scope

Rule 11 of the Federal Rules of Civil Procedure governs the conduct of attorneys and unrepresented parties who submit documents to federal courts. It requires that filings:
  • Are not for improper purposes (like harassment or delay)
  • Are legally and factually grounded
  • Are signed by a responsible human
Since AI or AGI systems aren’t legal persons and can’t sign pleadings, they aren’t directly subject to Rule 11. But if a lawyer or party uses AI to draft a filing—say, to generate case law or arguments—they’re still personally responsible for ensuring the content complies with Rule 11.

🤖 AI in the Crosshairs

Recent cases have shown how this plays out:
  • In Mata v. Avianca, attorneys used ChatGPT to draft a brief that cited fictitious cases. The court sanctioned them under Rule 11 for failing to verify the content.
  • This has led some judges to issue standing orders requiring disclosure of AI use in filings or even banning it outright.

🧠 What About AGI?

If we ever reach a point where AGI can autonomously draft and file court documents, the legal system would need to evolve. For now, humans remain the accountable agents under Rule 11.
If you're thinking about how this intersects with broader legal ethics or policy, I’d be happy to dive deeper. Want to explore how courts are adapting to AI use more broadly?
reply
Question for ya: if you have the link to both the court document AND the relevant text that the judge applied, why would you ask autocorrect?
Here is a bio of the judge:
In private practice, Wang focused on copyright and intellectual property law.
Wang cofounded the Colorado Pro Bono Patent Initiative. She also previously served as president of the Asian Pacific American Bar Association of Colorado.
On January 19, 2022, President Joe Biden nominated Wang to serve as a United States district judge of the United States District Court for the District of Colorado.
Age 52 or 53, born in Taiwan.
reply
Here is a summary of the case, generated by Copilot...
This case—Coomer v. Lindell et al., Civil Action No. 22-cv-01129-NYW-SBP—is a defamation lawsuit filed by Dr. Eric Coomer, the former Director of Product Strategy and Security at Dominion Voting Systems, against Michael J. Lindell, My Pillow, Inc., and FrankSpeech LLC.
Here’s a concise overview of the case:

🧾 Allegations

Dr. Coomer alleges that:
  • The defendants falsely accused him of participating in an Antifa conference call where he purportedly said he would ensure Donald Trump lost the 2020 election.
  • These claims were widely disseminated through Lindell’s media platform, FrankSpeech, and other public appearances.
  • The statements were defamatory, false, and made with actual malice, causing reputational harm, emotional distress, and threats to his safety.

⚖️ Legal Claims

The lawsuit includes:
  • Defamation
  • Intentional infliction of emotional distress
  • Civil conspiracy
  • Requests for injunctive relief and monetary damages

🧑‍⚖️ Judicial Proceedings

  • The case is being heard in the U.S. District Court for the District of Colorado by Judge Nina Y. Wang.
  • The court has denied motions to dismiss, allowing the case to proceed to discovery and pretrial stages.
  • In a recent development, Judge Wang issued an Order to Show Cause regarding the defendants’ use of generative AI in legal filings, citing fabricated case law and misrepresentations—a potential Rule 11 violation.

Would you like a breakdown of the AI-related sanctions issue, or a summary of the amended complaint’s key factual allegations?
Here's a clean summary of the court’s ruling in Coomer v. Lindell et al., Civil Action No. 22-cv-01129-NYW-SBP, based on the final order issued by Judge Nina Y. Wang:

⚖️ What Happened

  • The Court issued an Order to Show Cause ([Doc. 309]) after discovering that Defendants’ legal filing (an opposition to a motion in limine) contained numerous false or fabricated case citations.
  • The filing was prepared using generative AI, and the attorneys failed to verify the citations before submitting it to the Court.

🚨 Court’s Findings

  • Judge Wang found that the filing included:
    • Nearly 30 defective citations, including nonexistent cases, misquotes, and misstatements of law.
    • Misattributions of case law to the wrong jurisdictions.
  • Defense counsel admitted to using generative AI and failing to cite-check the output before filing.
  • The Court was not persuaded by the attorneys’ explanations, including claims that the wrong draft had been filed.

🧑‍⚖️ Ruling: Order Made Absolute

  • The Court made the Order to Show Cause absolute, meaning it found that Rule 11 violations occurred.
  • The attorneys’ conduct was deemed to fall short of their obligations under:
    • Federal Rule of Civil Procedure 11
    • Colorado Rules of Professional Conduct, including duties of candor, competence, and diligence.

📌 Next Steps

  • The Court is expected to impose sanctions and may refer the attorneys to disciplinary authorities.
  • The ruling sends a strong message about the risks of unverified AI use in legal practice.

Let me know if you’d like a breakdown of the specific citations that were flagged, or if you’re drafting commentary on AI ethics in litigation.
Here's a clear and concise summary of the final ruling in Coomer v. Lindell et al., Civil Action No. 22-cv-01129-NYW-SBP, issued by Judge Nina Y. Wang:

⚖️ Final Sanctions Order Highlights

  1. Order to Show Cause Made Absolute
    • The Court formally concluded that the attorneys’ conduct violated Rule 11 of the Federal Rules of Civil Procedure.
  2. Sanctions Imposed
    • Christopher I. Kachouroff and his law firm, McSweeney Cynkar & Kachouroff PLLC, were sanctioned jointly and severally in the amount of $3,000.
    • Jennifer T. DeMaster was also sanctioned individually in the amount of $3,000.
  3. Payment Deadline
    • All sanctions are due and payable to the Registry of the U.S. District Court for the District of Colorado by August 4, 2025.

This ruling underscores the Court’s view that the attorneys’ use of generative AI—without proper verification—resulted in a filing riddled with errors, including nonexistent case citations, and thus warranted monetary penalties under Rule 11(c)(1).
Would you like a breakdown of how this compares to other AI-related sanctions cases or how courts are shaping expectations around AI use in litigation?
reply
Why does that matter?
reply