
ChatGPT’s maker, OpenAI, has been hit with its first defamation lawsuit.
Mark Walters, a radio host based in Georgia, is suing the company after ChatGPT stated that he had been accused of defrauding and embezzling funds from a non-profit organization.
The system generated the information in response to a request from a third party, a journalist named Fred Riehl.
Walters is seeking unspecified monetary damages from OpenAI.
Notably, ChatGPT and other chatbots have no reliable way to distinguish fact from fiction, and when asked for information they frequently invent dates, facts, and figures.
Notable cases of misled users include a professor who threatened to flunk his class after ChatGPT falsely claimed his students had used AI to write their essays, and a lawyer facing possible sanctions after using ChatGPT for legal research and citing cases the chatbot had fabricated.
OpenAI includes a small disclaimer on ChatGPT’s homepage warning that the system “may occasionally generate incorrect information.”
OpenAI’s own CEO, Sam Altman, has said on numerous occasions that he prefers learning new information from ChatGPT rather than from books.
Traditionally in the US, Section 230 shields internet firms from legal liability for information produced by a third party and hosted on their platforms. It’s unknown whether these protections apply to AI systems, which do not simply link to data sources but generate information anew, a process that also leads them to produce false information.