An anonymous reader quotes a report from TechCrunch: A lawyer representing Anthropic admitted to using an erroneous citation created by the company's Claude AI chatbot in its ongoing legal battle with music publishers, according to a filing made in a Northern California court on Thursday. Claude hallucinated the citation with "an inaccurate title and inaccurate authors," Anthropic says in the filing, first reported by Bloomberg. Anthropic's lawyers explain that their "manual citation check" did not catch it, nor several other errors caused by Claude's hallucinations. Anthropic apologized for the error and called it "an honest citation mistake and not a fabrication of authority." Earlier this week, lawyers representing Universal Music Group and other music publishers accused Anthropic's expert witness, company employee Olivia Chen, of using Claude to cite fake articles in her testimony. Federal judge Susan van Keulen then ordered Anthropic to respond to these allegations. Last week, a California judge slammed a pair of law firms for the undisclosed use of AI after he received a supplemental brief with "numerous false, inaccurate, and misleading legal citations and quotations." The judge imposed $31,000 in sanctions against the law firms and said "no reasonably competent attorney should out-source research and writing" to AI.
Anthropic’s Lawyer Forced To Apologize After Claude Hallucinated Legal Citation