- Sam Altman, CEO of OpenAI, the maker of ChatGPT, as well as executives from Google’s artificial intelligence arm DeepMind and Microsoft were among those who supported and signed the short statement.
- “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement reads.
- Other tech leaders, like Tesla’s Elon Musk and former Google CEO Eric Schmidt, have warned about the dangers artificial intelligence poses to society.
The Microsoft Bing app running on an iPhone is seen in this photo illustration on May 30, 2023 in Warsaw, Poland.
Jaap Arriens | NurPhoto | Getty Images
Artificial intelligence could lead to the extinction of humanity, and reducing the risks associated with the technology should be a global priority, industry experts and technology leaders said in an open letter.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement, published Tuesday, said.
Sam Altman, CEO of ChatGPT maker OpenAI, as well as executives from Google’s artificial intelligence arm DeepMind and from Microsoft, were among those who supported and signed the short statement, which was published by the Center for AI Safety.
Development of the technology has accelerated in recent months after the chatbot ChatGPT was released for public use in November and subsequently went viral, reaching 100 million users within two months of launch. ChatGPT has amazed researchers and the general public with its ability to generate human-like responses to users’ prompts, raising concerns that AI could replace jobs and imitate humans.
Tuesday’s statement said there has been increasing discussion about “a broad spectrum of important and urgent risks from AI.”
But the statement said it can be “difficult to voice concerns about some of advanced AI’s most severe risks,” and that its aim was to overcome that obstacle and open up the discussion.
ChatGPT has arguably sparked much greater awareness and adoption of AI, as major companies around the world have raced to develop rival products and capabilities.
Altman admitted in March that he is “a little scared” of AI because he fears authoritarian governments could develop the technology. Other tech leaders, such as Tesla’s Elon Musk and former Google CEO Eric Schmidt, have warned about the risks that artificial intelligence poses to society.
In an open letter in March, Musk, Apple co-founder Steve Wozniak and a number of tech leaders urged AI labs to stop training systems to be more powerful than GPT-4, OpenAI’s latest large language model, and called for a six-month pause on such advanced development.
“Contemporary AI systems are now becoming human-competitive at general tasks,” the letter said.
“Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” the letter asked.
Last week, Schmidt separately warned of the “existential risks” associated with AI as the technology advances.