The FTC is concerned about the potential for OpenAI's technology to be used to spread misinformation or harm consumers.
OpenAI, an AI company already facing congressional scrutiny, now confronts an investigation by the Federal Trade Commission (FTC) regarding its language model, ChatGPT, and its potential to generate false information.
The FTC is concerned that ChatGPT could be used to generate false and misleading statements, including potentially defamatory ones. The agency is also asking OpenAI how it handles personal information in the data used to train ChatGPT. The FTC recently sent OpenAI a detailed 20-page letter requesting information on matters such as how OpenAI handles fabrications by the chatbot, its data acquisition and vetting practices, and its selection of training data.
Inquiries also address OpenAI’s approach to handling statements made by ChatGPT about individuals, including potentially defamatory content. The FTC seeks a comprehensive explanation of the steps taken to address the risks associated with false, misleading, or disparaging statements about real individuals.
The letter also raises privacy-related concerns, asking about OpenAI's data handling practices, including whether the chatbot can generate statements containing real and accurate personal information.
This investigation occurs amid increasing calls for AI regulations from lawmakers, consumer advocates, and business groups. OpenAI is concurrently facing lawsuits relating to copyright infringement, privacy, and defamation.
In March, the Center for AI and Digital Policy, an advocacy group, submitted a petition to the FTC urging it to halt further commercial releases of GPT-4, the most recent model powering the chatbot. The group highlighted concerns about bias, deception, and risks to privacy and public safety, emphasizing how the model could reinforce harmful stereotypes, aid malware development by hackers, and enable propagandists to generate highly realistic and deceptive content.
The organization further noted that several countries, including Canada, France, Italy, Germany, Spain, Australia, and Japan, have initiated their own investigations into OpenAI and ChatGPT.
The group stressed the urgency of FTC action, warning that delay would impede the establishment of essential safeguards.