Using AI Exposes Companies to Lawsuits, Experts Say
By Greg Beaubien
May 2024
Companies that use artificial intelligence will likely be liable for whatever the technology generates, especially when it makes mistakes.
As The Wall Street Journal reports, every company that uses generative AI faces not only reputational risk but also potential liability under laws governing defective products or harmful speech, such as when the technology introduces bias into hiring, gives bad advice or fabricates information that inflicts financial damage on someone.
Artificial intelligence now generates text, images, music and video. The technology sometimes churns out “hallucinations,” the industry term for when AI gives fictional answers to sincere questions. During Supreme Court oral arguments in early 2023, Justice Neil Gorsuch suggested that current laws may not protect AI companies or the companies that use AI. As a result, organizations of all sizes could face a flood of lawsuits.
Section 230 of the Communications Decency Act of 1996 provides immunity for providers and users of interactive computer services. However, Graham Ryan, a litigator at Jones Walker, says Section 230 doesn’t cover speech that a company’s AI generates. “Generative AI is the Wild West when it comes to legal risk for internet technology companies, unlike any other time in the history of the internet since its inception,” Ryan says.
OpenAI, the maker of ChatGPT, is being sued for defamation in at least two cases, the Journal reports. In one, a Georgia radio host alleges that the company’s chatbot falsely accused him of embezzlement. OpenAI has argued that it isn’t responsible for what ChatGPT creates, likening the product to a word processor that people use to create content. But that argument is likely to fail, says Jason Schultz, director of New York University’s Technology Law & Policy Clinic.
Michael Karanicolas, executive director of the Institute for Technology, Law & Policy at UCLA, says that if “large volumes of people are doing dangerous things as a result of receiving garbage information” from AI, then “it isn’t necessarily a bad thing to assign cost or liability as a result of these harms.”