Elon Musk and top AI researchers call for pause on ‘giant AI experiments’

A number of well-known artificial intelligence researchers, along with Elon Musk, have signed an open letter calling on AI labs around the world to pause development of large-scale AI systems, citing the “profound risks to society and humanity” that they claim this software poses.

The letter, published by the nonprofit Future of Life Institute, notes that AI labs are currently locked in an “out-of-control race” to develop and deploy machine learning systems “that no one — not even their creators — can understand, predict, or reliably control.”


“Therefore, we call on all AI labs to immediately pause, for at least 6 months, the training of AI systems more powerful than GPT-4,” the letter reads. “This pause must be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and impose a temporary moratorium.”

Signatories include author Yuval Noah Harari, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, politician Andrew Yang, and a number of well-known AI researchers and CEOs, including Stuart Russell, Yoshua Bengio, Gary Marcus, and Emad Mostaque. The full list of signatories can be viewed here, though new names should be treated with caution, as there have been reports of names being added to the list as a joke (e.g. OpenAI CEO Sam Altman, an individual partly responsible for the current race dynamic in AI).

The letter is unlikely to have any impact on the current climate in AI research, in which tech companies like Google and Microsoft have rushed to ship new products, often sidelining previously stated concerns about safety and ethics. But it is a sign of growing opposition to the “ship it now and fix it later” approach — opposition that could make its way into the political arena for actual legislators to consider.


As the letter notes, even OpenAI itself has acknowledged the potential need for an “independent review” of future AI systems to ensure they meet safety standards. The signatories say that time is now.

“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” they write. “These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”

You can read the full letter here.
