Extract from The New Daily
When Science Minister Ed Husic was announcing a new expert panel to advise the government about artificial intelligence last Wednesday, two other things were about to happen that should sharpen its focus and urgency, but also perhaps its sense of futility.
By the way, eight of the 12 members of the Artificial Intelligence Expert Group revealed on Wednesday are professors; none is the CEO of an AI business. By contrast, half of the US government’s 24-member AI advisory panel, established two years ago, are AI business people, although admittedly they’re easier to find in the US than here.
They’ll be just a little bit harder to find after Australian tech startup Altium was sold on Thursday, the day after Ed Husic’s announcement, to a Japanese firm for $9.1 billion.
Altium is a sort of AI business: It sells tools, invented at the University of Tasmania in the mid-1980s, for designing the printed circuit boards that semiconductors sit on – boards that are essential for AI to operate. In other words, Altium is part of the “picks and shovels” of the AI gold rush. The company moved to California in 1991, but it’s still sort of an Australian company, listed on the ASX. But not for long.
And then on Friday, OpenAI, the developer of AI sensation ChatGPT, announced its latest product – Sora, which can create realistic videos from simple text instructions. It’s a “research product” for now, being tested for safety and potential harms, and likely not widely available for some time (GPT-4 was tested for six months before release).
But the demonstration videos released on Friday look incredible. When Sora is eventually released, two things will happen – the jobs of millions of people associated with producing films and videos will be in serious trouble, and the potential for realistic fake videos, especially during election campaigns, will go to a new and far more dangerous level.
‘Things just go horribly wrong’
As it happens, around the same time as Ed Husic was announcing his panel of professors, the CEO of OpenAI, Sam Altman, was giving an interview to the Associated Press in which he warned of the societal havoc that could be wrought by AI.
“I’m not that interested in the ‘killer robots walking on the street’ direction of things going wrong,” Altman said. “I’m much more interested in the very subtle societal misalignments where we just have these systems out in society and through no particular ill intention, things just go horribly wrong.”
Altman called for a body like the International Atomic Energy Agency to oversee AI, which is advancing faster than the world expected or governments can deal with.
“We’re still in the stage of a lot of discussion … everybody in the world is having a conference. Everyone’s got an idea, a policy paper, and that’s OK. I think we’re still at a time where debate is needed and healthy, but at some point in the next few years, I think we have to move towards an action plan with real buy-in around the world.”
The IAEA parallel is superficially valid and optimistic, but irrelevant. It’s true that humanity hasn’t blown itself up since the UN body was created in 1957 to encourage the peaceful use of nuclear energy, but that was only because of another acronym – MAD, for mutual assured destruction.
A better acronym analogy for what will likely end up happening with AI is IPCC – the UN’s Intergovernmental Panel on Climate Change, established in 1988. It has met every year since 1995, and made absolutely no difference to rising greenhouse gas emissions.
The asbestos lesson
Or better still, smoking. The other night I re-watched the first episode of Mad Men, the TV series about advertising in New York in the 1960s. The episode was about the challenge of advertising a brand of cigarettes (Lucky Strike) when there was growing awareness that the product kills its customers, and the government had just banned false advertising, so you weren’t allowed to lie any more.
Don Draper came up with the genius slogan “It’s Toasted”, which is true of all cigarettes, but what struck me was that those involved in the business – manufacturer, advertising agency, media company – weren’t remotely interested in, let alone guilty about, the dangers of the product; they were just interested in making money. And we know the same was true of asbestos, and now gambling and, of course, fossil fuels.
As always, governments were slow to do something: TV ads for smoking were eventually banned in the US on January 2, 1971, in 1976 in Australia, and in 1999-2000 across Europe. Advertising in newspapers, magazines and billboards continued for years after that.
Unlike cigarettes, AI is not entirely harmful – far from it. The potential to benefit humanity is enormous: To increase the productivity of humans and industry and create more prosperity and leisure.
In some ways, it could be seen as the birth of a virtuous form of slavery – unpaid labour, that is – but without the inhumanity, since the slaves are machines, not fellow humans. And just as slavery created colossal wealth – for some – in Britain, Europe and the United States before it was abolished in the mid-19th century, AI is already creating colossal wealth for some today.
The market value of AI chip maker Nvidia has increased sixfold in 15 months to $2.8 trillion – more than Australia’s GDP, and more than the combined value of every company on the ASX. The companies involved in AI have been responsible for all of the performance of the US sharemarket over the past few years. The CEO of Altium, Aram Mirkazemi, who arrived in Australia in 1985 as an 18-year-old refugee from Iran, is about to bank $650 million in cash.
AI is a product that companies are selling to other companies and individuals to replace human labour. Some people will get very rich from it, and already are; some will be impoverished by losing their jobs or by having their bank accounts cleaned out. Governments will commission studies and hold conferences about it, but in a capitalist society founded on products sold freely at a market price, there is a profound reluctance to interfere.
But if you’re having trouble seeing the potential harm from AI, watch this eight-minute sci-fi film about slaughterbots.
Short of the horror of cheap miniature drones that identify and kill specific people, there is simply the idea that we are creating entities that are smarter, faster and stronger than humans, and might eventually become sentient.
We just don’t know, and we don’t know how to stop it.
Alan Kohler writes twice a week for The New Daily. He is finance presenter on ABC News and also writes for Intelligent Investor.