Extract from Eureka Street
- Vol 32 No 2
- The rise of the machines
- David James
- 31 January 2022
There is a great deal of commentary about the growing importance of artificial intelligence, or AI, especially in business circles. To some extent this is a self-fulfilling prophecy — if people think something will have a seminal effect then it probably will. But if the supposed commercial benefits are significant, the dangers are potentially enormous.
Elon Musk, a person intimately familiar with AI, frets that it could become ‘an immortal dictator from which we would never escape’, and suggests that it will overtake human intelligence within five years. He and many others envisage a ‘technological singularity’: a point when machine intelligence surpasses human intelligence and computers accelerate at an incomprehensible rate.
At one level these claims are nonsense, and they reveal just how degraded our understanding of ourselves has become. The first and most obvious problem is that AI can never replicate the complexity and range of human intelligence. At best, it can improve on a small part of our thinking: computation. But computation is only one part of our cognition, and cognition is only one slice of the range and depth of human thought.
There are other errors, which suggest that rather more intelligence is needed in our thinking about AI. Computers do not have intentionality (will), which is self-evidently necessary to thinking. They have no sense of their own mortality. Anything that involves our understanding of qualities rather than quantities, such as the beauty of a painting or a piece of music, is outside the range of AI, or of any computer. Computers cannot think, and to call what they do ‘intelligence’ only confirms how narrow our measurements of thinking are (IQ measures, basically).
Then there is the problem of consciousness: humans’ ability to be aware of their own thoughts and of themselves. It is possible to program software that can continuously produce new software configurations in response to the computer’s interaction with its environment. That is what using AI to get a computer to ‘learn’ means. But no machine will ever be aware of the experience of having learned. It is a machine. It is not merely lacking in self-consciousness, it is inanimate.
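To make concrete what that mechanical ‘learning’ amounts to, consider a minimal sketch of one of the oldest learning procedures, the perceptron (the code below is purely illustrative). It reconfigures its own parameters in response to each observation it encounters, and at no point is anything aware that learning has occurred:

```python
# A perceptron adjusts its own parameters (weights) in response to
# each observation it meets. Nothing here is aware of anything;
# it is arithmetic, repeated.

def train_perceptron(samples, labels, lr=0.1, epochs=10):
    """samples: list of feature tuples; labels: +1 or -1."""
    n = len(samples[0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            prediction = 1 if activation >= 0 else -1
            if prediction != y:  # wrong: nudge parameters toward the label
                weights = [w + lr * y * xi for w, xi in zip(weights, x)]
                bias += lr * y
    return weights, bias

# A toy "environment": learn the logical AND of two inputs.
data = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [-1, -1, -1, 1]
w, b = train_perceptron(data, labels)
print(w, b)  # the new "configuration" is just a few adjusted numbers
```

The ‘new configuration’ that results is a handful of numbers adjusted by arithmetic; to call this ‘experience’ is a metaphor.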
"The danger is that it will lead to a massive degradation of our humanity, reduce us to nothing but industrial outputs, transactions and binary behaviours."
Human self-awareness is impossible to capture in mathematical terms because it involves an infinite regress. There will never be an algorithm that models self-consciousness, because such an algorithm could never include awareness of the algorithm itself, which must always lie outside it.
Despite all these obvious absurdities, there is no doubt that AI will become far more intrusive because it can be applied to repetitive industrial production. As bored workers throughout the ages will attest, self-awareness is often a disadvantage in the workplace, not an asset. AI machines do not have that problem.
AI can be readily applied to market behaviour, which works off a simple binary: buy/not buy; sell/not sell. That is what turned the social media companies, which surveil our every move, into global behemoths. Dubbed surveillance capitalism, it works because the human behaviour involved is binary. AI is also being applied to war, another binary: kill/not kill (an effort appallingly called ‘human augmentation’).
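To see how comfortably that binary fits computation, consider a hypothetical scoring routine of the kind such systems run (the feature names and weights below are invented for illustration; no real system is quoted):

```python
# Sketch: reducing a person to a binary "buy / not buy" prediction.
# Features and weights are hypothetical, for illustration only.

profile = {
    "pages_viewed": 14,
    "time_on_site_min": 9.5,
    "past_purchases": 2,
}

# Weights a system like this might learn from surveillance data.
weights = {"pages_viewed": 0.05, "time_on_site_min": 0.08, "past_purchases": 0.4}
bias = -1.2

score = bias + sum(weights[k] * v for k, v in profile.items())
decision = "buy" if score > 0 else "not buy"  # the whole person, in one bit
print(score, decision)
```

Everything the system ‘knows’ about a person collapses into a single number compared against a threshold.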
Yet apply AI to something more complex, like writing a poem, and the outcome will be very different. It would take a legion of good poets, doing the programming, just to get an AI computer to generate bad poetry. You may as well hire a poet instead; they should be cheap.
Proponents of AI like to claim that it will improve humanity. It is more likely that the opposite is true. The danger is that it will lead to a massive degradation of our humanity, reducing us to nothing but industrial outputs, transactions and binary behaviours. Such computer technology may help us produce more stuff to consume, kill our enemies more efficiently, or create more financial activity, but it will come at a terrible price.
The enormity of the threat was described with startling prescience by CS Lewis in his book The Abolition of Man (the abolition of man is exactly the risk). Lewis said that human nature would be the ‘last part of Nature to surrender to Man’. That is the very thing that AI proponents are aiming at in their efforts to create what they call human 2.0. He wrote: ‘The battle will then be won ... but who, precisely, will have won it? For the power of Man to make himself what he pleases means, as we have seen, the power of some men to make other men what they please.’ As Lewis prophetically explained, technology will not liberate humans; it will enslave and diminish them, except for the select few.
Ignoring the human will also lead to catastrophic breakdown of human systems at some point. Witness the fate of Long-Term Capital Management (LTCM), a hedge fund in the 1990s that used a mathematical model for pricing risk, the Black-Scholes option pricing model, to make large investments. One of LTCM’s directors, Myron Scholes, won a Nobel Prize for it.
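For the curious, the model’s best-known result is the closed-form price of a European call option, shown here in its standard textbook form:

```latex
% Black-Scholes price C of a European call option:
% S_0 = current asset price, K = strike price, r = risk-free rate,
% \sigma = volatility, T = time to expiry, N = standard normal CDF.
C = S_0\,N(d_1) - K e^{-rT} N(d_2),
\qquad
d_1 = \frac{\ln(S_0/K) + (r + \sigma^2/2)\,T}{\sigma\sqrt{T}},
\qquad
d_2 = d_1 - \sigma\sqrt{T}
```

Its elegance rests on assumptions: chiefly, that volatility is stable and that prices follow a well-behaved random walk.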
The model’s mathematics were brilliant, but in 1998 it went so badly wrong that the losses were almost enough to bring down the entire Western banking system. That is what happens when you try to model inherently unpredictable human behaviour. The catastrophe required the US Federal Reserve, under its then chairman Alan Greenspan, to organise a massive bailout, and it demonstrated how dangerous self-impelling computers can be. Musk is right. AI represents perhaps the biggest danger humankind has ever faced.