Extract from The New Daily
In the movie, Tom Cruise as Ethan Hunt battles an AI thingy called The Entity, which becomes sentient, takes over a submarine, kills everyone aboard and then threatens to use its super intelligence to control the world’s militaries. Luckily there’s a two-piece key that can turn off the Entity, which Mr Cruise manages to put together and … well, that seems to be for Part Two.
White House deputy chief of staff Bruce Reed, who watched the movie with the President, told Associated Press in an interview: “If he hadn’t already been concerned about what could go wrong with AI before that movie, he saw plenty more to worry about.”
Really? Didn’t he watch Terminator 2: Judgment Day, 32 years ago? Or Terminator 3: Rise of the Machines, 20 years ago?
Either of those movies would have given the President plenty to worry about long ago. But timing is everything in politics, and the time for worrying about AI is 2023, not 1991 or 2003. Then it was science fiction, now ChatGPT has made it real.
Five months ago, on May 30, 352 of the world’s leading AI scientists and other “notable figures” (now it’s up to 662) signed the following succinct ‘Statement on AI Risk’: ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war’.
‘Extinction’ anxiety
A gulp went around the world; everyone looked up from their phones briefly at the word “extinction” – and then went back to looking at their phones. Most of those who signed the statement warning about extinction went back to developing AI as fast as they could.
But it seems to have had quite an impact on the White House. Teams were set up to craft something, and it was decided that it should be a presidential executive order.
The President was alarmed by the evil, sentient Entity, probably not long after the movie came out in mid-June, and put a rocket up the AI policy teams. Last Monday, four months later, a gigantic 19,704-word executive order was emitted from the White House to deal with the risks of AI.
Will it do that? Well, it certainly is very long, and very prescriptive, so it might stunt AI’s growth a bit. But regulation usually favours incumbents, so if nothing else it will probably help to entrench the technology oligopoly of the big six – Microsoft, Apple, Meta, Alphabet, Nvidia and Amazon.
A day after the executive order was issued, UK Prime Minister Rishi Sunak opened the two-day global AI Safety Summit at Bletchley Park in Buckinghamshire, the headquarters of Britain’s code-breaking efforts in WWII.
Many of those who signed the ‘Statement on AI Risk’ five months ago were there, along with a rare double from Australia: Deputy Prime Minister Richard Marles and Minister for Science Ed Husic. They all signed the Bletchley Declaration, “affirming that AI should be designed, developed, deployed, and used in a manner that is safe, human-centric, trustworthy and responsible”, as the subsequent media release from Marles and Husic put it.
Perhaps the most telling sentence in the declaration was the last: “We look forward to meeting again in 2024.” In other words, this first meeting won’t achieve much apart from making a start.
After the meeting, Rishi Sunak had a live-streamed conversation with Elon Musk on the subject, in which Musk observed that “the pace of AI development is the fastest of any technology in history by far. It’s developing at five-fold, ten-fold per year. Governments aren’t used to moving at that speed”.
The Musk perspective
He added that AI was the most disruptive force in history, that we will have for the first time something that is smarter than the smartest human, and “there will come a point where no job is needed. You can have a job if you want a job, for personal satisfaction, but AI will be able to do everything”.
“I don’t know if that makes people comfortable or uncomfortable”, he said with a smirk.
Probably uncomfortable, Elon, although not as uncomfortable as the idea that they won’t just be unemployed, they’ll be extinct.
Musk didn’t sign the May 30 Statement on AI Risk that talked about extinction. But those who did sign it, like Sam Altman, CEO of ChatGPT developer OpenAI, and Demis Hassabis, CEO of Google DeepMind, did not down tools just because what they were doing posed an existential risk like a pandemic or nuclear war.
They ploughed on doggedly, heroically forging mankind’s path into technology’s next era. In September, OpenAI announced that ChatGPT “can now see, hear, and speak”, Google has launched an AI feature in Gmail, and new AI “entities” are being launched every day, each one smarter than the one before and a little bit closer to being sentient. The industry is now talking about 2024 being the biggest year yet for AI.
It’s all a bit reminiscent of the early warnings about the greenhouse effect of fossil fuels.
Climate crunch
My trusty AI assistant, Google Bard, tells me that in 1824, French physicist Joseph Fourier proposed that the atmosphere acts like a greenhouse, trapping heat from the sun and preventing it from escaping back into space, and in 1896, Swedish scientist Svante Arrhenius calculated that human emissions of carbon dioxide could lead to global warming.
Bard continued: “Scientists started getting really worried in the mid 20th century. In 1957, American scientist Roger Revelle published a paper in which he warned that human activities were increasing the level of carbon dioxide in the atmosphere, and that this could lead to significant global warming and catastrophe.”
And in 1972, the National Academy of Sciences issued the equivalent of the May 30 Statement on AI Risk, concluding that “human activities were likely to produce an increase in the average surface temperature of the Earth”, although admittedly they didn’t use the word “extinction”.
The first global summit meeting about climate change, like the one in Bletchley Park last week, was held in Berlin in 1995, and two years later in Kyoto, a declaration was issued called the Kyoto Protocol in which everyone agreed to do something.
And here we are.
Alan Kohler writes twice a week for The New Daily. He is finance presenter on ABC News and founder of Eureka Report