The maker of ChatGPT has been investigated by US regulators over AI risks


The risks posed by artificially intelligent chatbots are being officially investigated by US regulators for the first time, after the Federal Trade Commission launched a wide-ranging investigation into ChatGPT maker OpenAI.

In a letter to the Microsoft-backed company, the FTC said it wanted to examine whether people had been harmed by the AI chatbot creating false information about them, and whether OpenAI had engaged in “unfair or deceptive” privacy and data security practices.

AI chatbots are in the crosshairs of regulators around the world, with experts and ethicists raising concerns over the vast amount of personal data consumed by the technology and its potentially harmful outputs, ranging from misinformation to sexist and racist comments.

In May, the FTC issued a warning to the industry, saying it was “paying close attention to how companies may choose to use AI technology, including new generative AI tools, in ways that can have an actual and substantial impact on consumers”.

In its letter, the US regulator asked OpenAI to share material ranging from how the group retains user information to the steps the company has taken to address the risk of its model producing statements that are “false, misleading or defamatory”.

The FTC declined to comment on the letter, which was first reported by The Washington Post. Writing on Twitter later on Thursday, OpenAI chief executive Sam Altman called it “very disappointing to see the FTC’s request start with a leak”, adding that it “does not help build trust”. He added: “It’s super important to us that our technology is safe and pro-consumer, and we are confident we follow the law. Of course we will work with the FTC.”


FTC chair Lina Khan testified before the House judiciary committee on Thursday morning and faced sharp criticism from Republican lawmakers over her tough enforcement stance.

When asked about the investigation during the hearing, Khan declined to comment, but said the regulator’s broader concern was that ChatGPT and other AI services “provide a huge amount of data” to the companies behind them.

She added: “We have heard reports of people’s sensitive information showing up in response to an inquiry from somebody else. We have heard of libel, defamatory statements, and flat untruths coming out. That is the kind of fraud and deception we are concerned about.”

Khan was peppered with questions from lawmakers about her mixed record in court, after the FTC suffered a major setback this week in its bid to block Microsoft’s $75bn acquisition of Activision Blizzard. The FTC appealed against the decision on Thursday.

Meanwhile, the committee’s chairman, Republican Jim Jordan, accused Khan of “harassing” Twitter after the company alleged in a court filing that the FTC had engaged in “irregular and improper” conduct while enforcing a consent order it imposed last year.

Khan did not comment on Twitter’s filing, but said the FTC was “concerned about whether the company is complying with the law”.

Experts are concerned about the large amount of data being hoovered up by the language models behind ChatGPT. The service attracted more than 100 million monthly active users within two months of its launch. Microsoft’s new Bing search engine, powered by OpenAI technology, was used by more than 1 million people in 169 countries within two weeks of its release in January.


Users have reported that ChatGPT fabricates names, dates and facts, as well as fake links to news websites and references to academic papers.

The FTC’s investigation digs into the technical details of how ChatGPT was designed, including the company’s work on tackling “hallucinations”, in which the model fabricates information, and its oversight of the human reviewers whose work directly affects consumers. It also asked for information on consumer complaints and the company’s efforts to assess consumers’ understanding of the chatbot’s accuracy and reliability.

In March, Italy’s privacy watchdog temporarily banned ChatGPT while it investigated the US company’s collection of personal information following a cyber security breach. The service was reinstated a few weeks later, after OpenAI made its privacy policy more accessible and introduced a tool to verify the age of users.

Echoing earlier acknowledgments of ChatGPT’s fallibility, Altman tweeted: “We’re open about the limitations of our technology, especially when we fall short. And our capped-profit structure means we aren’t incentivised to make unlimited returns.” He said the chatbot was built on “years of safety research”, adding: “We protect user privacy and design our systems to learn about the world, not private individuals.”
