Lawyer Faces Sanctions for Citing Fake Cases Generated by ChatGPT in Court

A lawyer’s use of the artificial intelligence chatbot ChatGPT in a routine personal injury lawsuit has sparked controversy after the tool produced fake legal cases that were then cited in court. The incident has raised concerns about AI “hallucinations” influencing legal proceedings. In the suit against Avianca Airlines, the attorney cited non-existent cases as precedent, leading the judge to consider imposing sanctions.

The lawyer, Steven Schwartz, admitted to using ChatGPT for legal research, saying he had not understood that it was a generative language tool rather than a search engine. Schwartz maintains that he had no intention of deceiving the court and has argued against sanctions. The judge, however, expressed skepticism, pointing to shifting explanations and dishonesty from Schwartz and a second attorney involved in the case, Peter LoDuca.

The situation has prompted discussions about the limitations and risks associated with AI tools like ChatGPT. While such platforms can be powerful, they are prone to generating false information and exhibiting bias. Some courts are taking precautions, with one federal judge issuing a standing order requiring attorneys to confirm whether any part of their filings was drafted by generative artificial intelligence.
