US Federal Trade Commission investigates OpenAI over deceptive practices

The ChatGPT page open on a smartphone in front of a colourful background

The U.S. Federal Trade Commission (FTC) has launched an investigation into ChatGPT’s creator, OpenAI, over potentially unfair or deceptive privacy and data security practices.

In a 20-page document first obtained by The Washington Post, the FTC said it is opening an investigation in connection with OpenAI offering or making available products and services incorporating large language models (LLMs), to determine whether the San Francisco startup has “engaged in unfair or deceptive privacy or data security practices or engaged in unfair or deceptive practices relating to risks of harm to consumers, including reputational harm”.

The Commission has issued a demand for records about how OpenAI addresses risks related to its AI models, making the investigation the most severe regulatory threat the company has faced to date. OpenAI became widely known after ChatGPT’s launch in November 2022 fuelled the rapid spread of generative AI and sparked broad debate about its impact.

The FTC specifically requested that OpenAI share detailed descriptions of all complaints it had received about its products making “false, misleading, disparaging or harmful” statements about people, as well as records related to the security incident that the company disclosed in March.

The FTC is particularly focusing on whether OpenAI’s data security practices violate consumer protection laws. 

In a tweet responding to the recent development, Sam Altman, CEO of OpenAI, said that “it is very disappointing to see the FTC’s request start with a leak and does not help build trust”, while also adding:

“That said, it’s super important to us that our technology is safe and pro-consumer, and we are confident we follow the law. Of course we will work with the FTC.

“We built GPT-4 on top of years of safety research and spent 6+ months after we finished initial training making it safer and more aligned before releasing it. we protect user privacy and design our systems to learn about the world, not private individuals.

“We’re transparent about the limitations of our technology, especially when we fall short. and our capped-profits structure means we aren’t incentivized to make unlimited returns.”

Commenting on the recent development, Laura Petrone, principal analyst in thematic research at GlobalData, pointed out that the FTC is not the only regulator to go after tech companies for their use of AI, adding:

“Italy’s privacy watchdog temporarily banned ChatGPT while it examined the U.S. company’s collection of personal information and only reinstated it when it found that OpenAI changed its privacy policy and introduced a tool to verify users’ ages. 

“As large language models like ChatGPT are used more and more widely worldwide, we’ll see more and more cases of hallucinations, inaccurate and unreliable information and a large volume of fake and harmful content. These models are inherently risky because they are so large. Therefore, it is hard to audit them to check for inaccuracies, biases or misinformation.”

As AI grows in popularity and experts and consumers debate whether the technology needs more robust regulation to prevent harmful applications, U.S. Senate Majority Leader Charles E. Schumer has previously said that new AI legislation will soon be introduced.

OpenAI has been approached for comment.