Sam Altman steps down as CEO of OpenAI

Altman’s departure follows a review by the company’s board of directors.

Sam Altman is stepping down as CEO of OpenAI, the company announced Friday.

The departure follows a review process by the company’s board of directors, according to OpenAI, the maker of popular chatbot ChatGPT.

“Mr. Altman’s departure follows the board’s deliberative review process, which found that he had not been consistently candid in his communications with the board, hindering its ability to carry out its responsibilities,” OpenAI said in a statement. “The board no longer has confidence in his ability to continue leading OpenAI.”

The company’s chief technology officer, Mira Murati, will take over as CEO on an interim basis, OpenAI said.

Founded as a non-profit organization in 2015, OpenAI has grown in prominence since ChatGPT became publicly available a year ago. The chatbot now has more than 100 million weekly users, Altman announced earlier this month.

Meanwhile, the company has grown dramatically. As of October, OpenAI was on track to bring in more than $1 billion in revenue over the course of a year from sales of its artificial intelligence products, The Information reported.

In January, Microsoft announced a $10 billion investment in OpenAI. The move deepened the longstanding relationship between the two companies, which began four years ago with a $1 billion investment. Microsoft’s search engine, Bing, gives users access to ChatGPT.

Speaking to ABC News’ Rebecca Jarvis in March, Altman said AI has the potential to profoundly improve people’s lives but also poses serious risks.


“We have to be careful here,” Altman said. “I think people should be happy that we’re a little scared about it.”

In May, Altman testified before Congress with a similarly sober message about AI products, including the latest version of ChatGPT, called GPT-4. He called on lawmakers to impose restrictions on AI.

“GPT-4 is more likely to respond helpfully and truthfully, and to reject harmful requests, than any other widely used model with similar capabilities,” Altman said.

“However, we think regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” he added, suggesting the adoption of licensing or safety requirements for the operation of AI models.

This is a developing story. Check back for updates.
