Trailblazers in AI technology, including OpenAI and Google DeepMind, have warned that artificial intelligence could lead to the extinction of humanity.
More than 350 CEOs, experts, scientists and celebrities have warned of the existential risk posed by the advancing technology.
In a joint statement, they called for the mitigation of AI risk to be a “global priority” alongside pandemics and nuclear war.
The statement was published by the Centre for AI Safety.
Signatories include Sam Altman, chief executive of OpenAI (the maker of ChatGPT), the CEOs of AI firms DeepMind and Anthropic, executives from Microsoft and Google, and the so-called ‘godfather’ of AI, Geoffrey Hinton.
The statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
It comes as the success of ChatGPT has raised alarm from AI experts, journalists, policymakers and the public about the potential uses of AI.
The Centre for AI Safety’s website says AI has been compared to electricity and the steam engine in terms of its ability to transform society.
“But it also presents serious risks, due to competitive pressures and other factors,” says the website.
“While AI has many beneficial applications, it can also be used to perpetuate bias, power autonomous weapons, promote misinformation, and conduct cyberattacks.
“Even as AI systems are used with human involvement, AI agents are increasingly able to act autonomously to cause harm.”
The centre warns that when AI becomes more advanced, “it could eventually pose catastrophic or existential risks.
“There are many ways in which AI systems could pose or contribute to large-scale risks including:
- Weaponised: Malicious actors could repurpose AI to be highly destructive.
- Misinformation: A deluge of AI-generated misinformation and persuasive content could make society less equipped to handle important challenges of our time.
- Proxy Gaming: Trained with faulty objectives, AI systems could find novel ways to pursue their goals at the expense of individual and societal values.
- Enfeeblement: Humanity could lose the ability to self-govern if important tasks are increasingly delegated to machines.
- Value Lock-in: Highly competent systems could give small groups of people a tremendous amount of power, leading to a lock-in of oppressive systems.
- Emergent Goals: The sudden emergence of capabilities or goals could increase the risk that people lose control over advanced AI systems.”
Recent developments in AI have created tools supporters say can be used in applications from medical diagnostics to writing legal briefs.
But this has sparked fears the technology could violate privacy, power misinformation campaigns and raise issues with “smart machines” thinking for themselves.
The warning comes two months after the non-profit Future of Life Institute issued a similar open letter, signed by Elon Musk and hundreds more, demanding an urgent pause in advanced AI research, citing risks to humanity.
“Our letter mainstreamed pausing, this mainstreams extinction,” said FLI president Max Tegmark, who also signed the more recent letter.
“Now a constructive open conversation can finally start.”
AI pioneer Mr Hinton earlier told Reuters that AI could pose a “more urgent” threat to humanity than climate change.
Last week Mr Altman, OpenAI’s CEO, described the draft EU AI Act, the bloc’s first attempt to regulate AI, as over-regulation and threatened to leave Europe. He reversed his stance within days after criticism from politicians.
Mr Altman has become the face of AI after his ChatGPT chatbot stormed the world.
-with AAP