Thursday 26 October 2023

AlphaGo marked the birth of modern AI. This is the moment the world changed.

Extract from ABC News

Lee Sedol analysing the game during the series with AlphaGo in 2016.

It was billed as a battle of man versus machine. Lee Sedol versus AlphaGo. Computers versus humanity.

For seven days in 2016, the world's top player of an ancient board game battled a new kind of artificial intelligence (AI).

And when the week was up, the world had changed forever.


Looking back now, as machines prove they can do creative tasks we thought were exclusively human, AlphaGo showed us what was coming.

It didn't create this future, but it did announce it. And the story of what happened can help us make sense of where we are now.

AlphaGo is when the relationship of humans and AI got complicated. The line blurred. Science fiction became reality.

And in the beginning, some of the first people to witness this change were the small team of experts building the game-playing AI, and the board-game professional who faced it, alone and on stage.

The quest for the 'Holy Grail' of AI

These days, talk of AI is everywhere. Hardly a week goes by without some new product announcement or prophecy of doom.

But in 2016, excitement about AI was mostly limited to tech circles.

So when a small Google-owned company announced its AI would take on the top human player in a board game called Go, many expected it to fail, just as every Go-playing AI before it had.

Go originated in China and has been played for centuries — mostly in China, Japan and South Korea. (AFP)

Mastering Go was the Holy Grail of AI, says Thore Graepel, a computer scientist who helped build AlphaGo at DeepMind's London headquarters.

"People often say there are more positions in the game of Go than there are atoms in the known universe," he says.

"But the truth is that if for every atom in the known universe, you had another universe and you counted all the atoms in that collection of universes, that comes closer to the number of positions in the game of Go."

Go is a 4,000-year-old board game enormously popular in China, Korea and Japan.

Players take turns placing black or white stones on the intersections of a 19-by-19 grid of lines. The goal is to surround more territory than your opponent, capturing their stones along the way.

It's a bit like chess, but with far more possible moves at each turn. And this makes it much harder for traditional AI to master.

AI defeated Garry Kasparov in a six-game series in 1997. (Getty: Bernie Nunez/Allsport)

In 1997, IBM's Deep Blue supercomputer beat chess grandmaster Garry Kasparov by calculating millions of potential moves per second, and then following the sequence of moves with the highest chance of victory.

This technique, known as brute-force search, doesn't work for Go: there are far too many possible moves at each turn to check them all.
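To make the contrast concrete, here is a minimal sketch of what brute-force game search looks like. It is a generic textbook minimax routine, not Deep Blue's actual code, and state.legal_moves(), state.apply() and evaluate() are hypothetical placeholders for a real game engine.

    # Generic brute-force (minimax) search, purely for illustration.
    # state.legal_moves(), state.apply() and evaluate() are hypothetical
    # placeholders, not Deep Blue's implementation.

    def minimax(state, depth, maximising):
        """Try every move sequence `depth` plies deep and return the best
        score the side to move can force."""
        if depth == 0 or state.is_over():
            return evaluate(state)              # heuristic score of the position
        scores = [minimax(state.apply(move), depth - 1, not maximising)
                  for move in state.legal_moves()]
        return max(scores) if maximising else min(scores)

    # The work grows roughly as (moves per turn) ** depth. Chess offers about
    # 35 legal moves per turn; Go on a 19x19 board offers about 250, and games
    # run far longer, so exhaustive search quickly becomes hopeless.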

To win at Go, DeepMind needed to design a new kind of AI.

Their machine would have to mimic the human quality of intuition.

It would have to know which potential moves to discard and which to examine more closely, without working through every possibility.
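One way to picture that kind of selective search: a learned "policy" scores the legal moves and the search only follows the most promising few. This is only a schematic of the idea; policy_net, evaluate() and the board methods below are illustrative placeholders, not AlphaGo's implementation, which DeepMind built around neural networks and Monte Carlo tree search.

    # Schematic of "intuition" as move pruning. policy_net, evaluate() and the
    # board methods are illustrative placeholders, not AlphaGo's code.

    def candidate_moves(state, policy_net, top_k=5):
        """Score every legal move with the policy network and keep the best few."""
        scores = policy_net(state.features())            # one score per legal move
        ranked = sorted(zip(state.legal_moves(), scores),
                        key=lambda pair: pair[1], reverse=True)
        return [move for move, _ in ranked[:top_k]]      # discard the rest

    def selective_search(state, policy_net, depth):
        if depth == 0 or state.is_over():
            return evaluate(state)
        # A handful of branches per node instead of roughly 250. A real engine
        # would alternate players or, like AlphaGo, use Monte Carlo tree search.
        return max(selective_search(state.apply(m), policy_net, depth - 1)
                   for m in candidate_moves(state, policy_net))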

An AI had never done this before.

But luckily for them, they had a trick up their sleeve. 

An AI that can teach itself to play

If you were able to peel back the outer layer of today's AI tools, you'd find the same basic underlying concept: neural networks.

They're in everything from ChatGPT and image generation to autopilot software in cars and voice recognition.

But just 10 years ago, neural networks were uncommon.

For decades, they'd been dismissed, even ridiculed. Most AI research focused on rules-based programming.

Say you want to make a machine that can tell the difference between cats and dogs.

How do you build it? 

Maybe the simplest option is to program the machine with a series of logical rules, including "if it has long ears and a long tongue, it's a dog".

But there's another, more roundabout way. You can also feed the machine labelled photos of cats and dogs, and ask it to teach itself how one species looks different from the other.

Instead of telling it the rules, you ask it to formulate its own set of rules through close observation.

This is called machine learning, and neural networks are one way of achieving this.


Each software "neuron" within the network is like a tiny detective, observing a small detail in the image of a cat or dog.

To be useful, a neural network needs many layers of these neurons, with millions or even billions of connections between them. This is called "deep learning".

For many years, computers could not process networks at this scale.
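To make the cats-and-dogs idea concrete, here is a toy version of that learn-from-examples approach in PyTorch, at miniature scale. It is purely illustrative, not anything DeepMind built: loading the labelled photos is assumed, and a real image model would need far more layers and far more data.

    # Toy "learn the rules from labelled examples" classifier in PyTorch.
    # Purely illustrative: loading the labelled photos is assumed, and a real
    # model would be much deeper and trained on far more images.
    import torch
    from torch import nn

    model = nn.Sequential(            # a small stack of artificial "neurons"
        nn.Flatten(),                 # 64x64 colour photo -> 12,288 numbers
        nn.Linear(64 * 64 * 3, 128),  # first layer of neurons
        nn.ReLU(),
        nn.Linear(128, 2),            # two outputs: a "cat" score and a "dog" score
    )

    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    def training_step(photos, labels):
        """One pass: guess, compare against the human-written labels, adjust.
        `photos` is a batch of image tensors; `labels` holds 0 for cat, 1 for dog."""
        optimizer.zero_grad()
        guesses = model(photos)           # the network's current guesses
        loss = loss_fn(guesses, labels)   # how wrong those guesses were
        loss.backward()                   # work out how to nudge each connection
        optimizer.step()                  # apply the nudges
        return loss.item()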

But by 2016, as the DeepMind team trained AlphaGo in a nondescript office building in King's Cross, London, that was changing.

"People had discovered that we can use neural networks and deep learning to learn very complex functions that we never thought possible before," Mr Graepel says.

Instead of photos of cats and dogs, DeepMind trained a neural network with completed games of Go.

With plenty of help, AlphaGo taught itself to play.
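DeepMind has described that training as starting from a large database of human games, with the network learning to predict the move a strong player chose in each position, before improving further through self-play. The sketch below only gestures at that first, supervised stage; game_records, encode_board() and policy_net are hypothetical placeholders, not DeepMind's pipeline.

    # Sketch of the supervised stage: show the network positions from human
    # games and train it to predict the move the human actually played.
    # game_records, encode_board() and policy_net are hypothetical placeholders.
    import torch
    from torch import nn

    loss_fn = nn.CrossEntropyLoss()

    def train_on_human_games(policy_net, game_records, optimizer):
        for game in game_records:                  # each game: a list of (position, move-index) pairs
            for position, human_move in game:
                features = encode_board(position)       # batch of one board tensor
                predicted = policy_net(features)        # one score per board point (19 x 19 = 361)
                target = torch.tensor([human_move])     # index of the point the human played
                loss = loss_fn(predicted, target)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()

    # DeepMind reported then improving the network further by having it play
    # against copies of itself (reinforcement learning).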

Mr Graepel, a competent amateur, was one of its first human challengers.

"I thought, 'What could go wrong? No computer has ever beaten strong Go players before.'"

Soon enough, he lost.

"I was a bit flabbergasted. I thought, 'Wow there's progress in the air.'"

'The Roger Federer of Go'

The five-game series between AlphaGo and Mr Lee was held over one week in March 2016 at a five-star hotel in Seoul, South Korea.

Huge video screens were erected around the capital.

Betting sites had Mr Lee as the favourite, although many pundits were cautious. This was an unknown competitor.

The live broadcast of the Lee Sedol versus AlphaGo game on March 9, 2016 in Seoul. (Getty: Kim Min-Hee-Pool)

Excitement spread. The idea of a machine that could teach itself to play a game, and improve over time, fired people's imaginations.

Photographers and camera crews assembled in the hotel conference room. 

"It was breathtaking, the amount of press," says Maddy Leach, who was also working on DeepMind's Go team.

"I'd never felt like Beyonce before and never again since."

Mr Lee arrived at the hotel for the first game looking focused. He wore a dark suit, plain shirt, and no tie. 

He had trained in the game since childhood, turned professional at 12, and had won everything there was to win.

He was one of the greatest ever, dubbed the "Roger Federer of Go".

Now, at 33, he was at the height of his powers, playing in his home country.

"I think he really believed that it just wasn't possible he'd lose," Ms Leach says.

Game one: Shock, awe, disbelief

The match itself was played in a small inner room of the hotel, free of noise and excessive distractions.

Mr Lee sat opposite Aja Huang, a DeepMind engineer who made the physical moves on behalf of AlphaGo.

Mr Huang read these moves off a laptop screen connected to the remote server that ran AlphaGo.

Lee Sedol (R) with DeepMind's Aja Huang (L) making the physical moves on behalf of AlphaGo. (AFP/Google DeepMind)

"The match began. And I literally felt my heart beat," says Ms Leach, who was in the playing room with Mr Lee.

"It really hit me that this was about a computer program that was about to demonstrate to the world that it could think intuitively, like a human."

Chris Garlock, an American Go insider, was also in the Seoul hotel. He'd been picked to commentate the series in English.

Chris Garlock (L) and Michael Redmond (R) commentated the series from the Seoul hotel. (Supplied: Chris Garlock)

Seven years on from the match, Mr Garlock vividly recalls the progress of game one.

At first, Mr Lee appeared relaxed, playing with a slight smile. AlphaGo wasn't doing anything special. 

Then his body language abruptly changed. He grew tense. He paused to think. 

AlphaGo was gaining the advantage. 

Mr Lee tried to save his position, but it was impossible.

"He was shocked. I don't think that in his wildest dreams he thought this computer had a chance," Mr Garlock says.

"It was shocking. It was probably the most shocking thing I've ever seen in my life."

Three and a half hours into the match, AlphaGo won.

Ms Leach and Mr Graepel were elated. Together, they had achieved the impossible, claiming the Holy Grail of AI.

Then they noticed Mr Lee hadn't moved. He sat very still, facing the board. 

Lee Sedol leaving the hotel after being defeated by AlphaGo. (Getty: Kim Min-Hee-Pool)

The expression on his face was one of disbelief, Ms Leach says.

"I think my heart broke.

"I thought, 'You just weren't expecting this.'"

Game on

But Mr Lee was not yet defeated.

Over the following six days, he and AlphaGo played four more games.

The contest pushed both to the limits of their respective abilities, forcing each to invent creative new styles of play.

On the 37th move of game two, AlphaGo played a move so unlikely that "no human in a million years would have thought of it," Mr Garlock says.

"When AlphaGo played that move, we thought it had lost its computer mind."

But as the match progressed, it proved to be a masterstroke.

It hadn't learned this move from watching humans play Go, but dreamed it up itself, in the labyrinth of its neural network.

AlphaGo's Move 37 has become a symbol of machine creativity, commemorated on mugs and t-shirts.

Then it was Mr Lee's turn to show brilliance.

In the fourth game, he deliberately played a low-probability move that caught AlphaGo by surprise. Its strategy unravelled. 

This game was the last time a human beat the top Go AI. 

AlphaGo won the series four to one.

Three years later, Mr Lee retired, saying that AI "cannot be defeated".

The moment 'everything changed' for AI

Hindsight has changed the meaning and significance of AlphaGo's victory in Seoul.

In 2016, many people saw AI such as AlphaGo as an exciting novelty.

After game one, Mr Lee saw it differently.

In 2023, with neural networks spawning increasingly sophisticated AI tools, the world has belatedly caught up.

There's growing awareness of the power and potential of AI, and urgent interest in questions around AI's impact.

How will writing tools like ChatGPT disrupt education?

Which jobs are most likely to be displaced by AI?

For Mr Graepel, who's now working at a company that uses machine learning to rejuvenate biological cells, AlphaGo's win seven years ago marked the birth of modern AI.

"AlphaGo represented this step-change into the modern era.

"It points to a future where we will have to rethink the relationship between AIs and humans, and what is special about human intelligence."

Ms Leach is now training as a psychologist.

She recalls AlphaGo's win as the moment "everything changed" for the trajectory of modern AI, although we're still learning where this technical progress is leading, and how it will change our lives.

She says it's misleading to call AlphaGo's win a defeat for "humanity".

DeepMind built AlphaGo as a tool. The contest was never humans versus AI, but humans versus other humans.

"AI is crafted by humans to do problems that humans can't cognitively achieve themselves," she says.

"It allows us to access more of what we don't know, sooner."

Mr Garlock now hosts a podcast advocating for labour rights.

He sees AlphaGo's win as the pivot point between successive eras of workforce automation.

In the first, robots automated manufacturing processes.

In the second, AI completes or assists with tasks we previously thought only humans were capable of doing.

First, production processes were automated; now it's the turn of the desk jockeys: white-collar jobs in areas like management, advertising, communications, finance and human resources.

"I think five, 10 years from now, we should check back and have another conversation," Mr Garlock says.

"Things are going to be really different.

"It's moving fast. It's moving really, really fast."

Listen to the full story of AlphaGo and Lee Sedol, and subscribe to RN Science Friction.
