Leading AI experts, including executives from OpenAI and Google DeepMind, signed a brief statement citing an urgent need to address "the risk of extinction from AI," comparing its threat to nuclear war and pandemics. In the open statement, more than 350 experts stressed that mitigating AI's risks should be a global priority. Released by the Center for AI Safety, the statement was signed by prominent figures such as OpenAI's Sam Altman, DeepMind's Demis Hassabis, and Anthropic's Dario Amodei.
An earlier letter in March, signed by over 31,000 people including Elon Musk and Steve Wozniak, called for a six-month pause on advanced AI development, citing the substantial risks it poses to society and humanity.