- Experts and industry leaders, including Sam Altman of OpenAI and Geoffrey Hinton, known as the “father of AI,” have issued a warning urging global leaders to address the risks posed by artificial intelligence (AI), equating them with other major threats such as pandemics and nuclear war.
- Concerns center on artificial general intelligence (AGI), the point at which machines could perform a wide variety of tasks and even write their own programming. Humans could then lose control over such AI systems, with potentially catastrophic consequences for humanity.
A collective of industry chiefs and experts issued a stark warning on Tuesday, urging global leaders to focus their efforts on mitigating the “risk of extinction” posed by artificial intelligence (AI) technology. They likened the threat level of AI to that of other major societal-scale risks, including pandemics and nuclear war.
Dozens of specialists endorsed the plea, including Sam Altman, whose firm OpenAI developed the revolutionary ChatGPT bot. The statement underlined that addressing AI’s risks should be a “global priority.”
ChatGPT made headlines late last year with its powerful capabilities, including generating essays, poems, and conversations from the simplest prompts. The AI breakthrough triggered billions of dollars of investment into the sector.
However, the rapid advancement of AI has sparked concerns among critics and industry insiders alike, with issues ranging from biased algorithms to the potential for substantial job losses as AI-powered automation pervades daily life.
The warning was published on the website of the US-based non-profit organization, the Center for AI Safety. While the statement did not delve into specifics about the potential existential threat posed by AI, several signatories have issued similar warnings previously.
Notably, one of the signatories is Geoffrey Hinton, a pioneering figure in the AI industry often referred to as its “father.” He and others have expressed their apprehension about artificial general intelligence (AGI), a concept that refers to the point where machines gain the ability to perform diverse tasks and develop their own programming.
Such a development, they fear, would wrest control from humans, potentially leading to catastrophic consequences for humanity.
The latest letter was signed by numerous academics and experts from tech giants such as Google and Microsoft. This call for action comes two months after billionaire entrepreneur Elon Musk and others called for a pause in the development of such technology until its safety could be assured.