Extract from ABC News
The world's two powerhouse nations have finally agreed to sit down and discuss their concerns around the expanding power and reach of artificial intelligence (AI) after years of lobbying from officials and experts.
Both Beijing and Washington have been wary of giving their adversary an advantage by limiting their own research and capabilities, but observers have long expressed concern that the existential risks of such an approach are far too high.
"The capacity of AI to induce risks that could potentially result in human extinction or irrevocable civilisational collapse cannot be overstated," AI policy and ethics experts warned last year.
While a date hasn't been set, it's expected that the US and China will meet in the next few months to work on a framework for the responsible development of AI.
As they eye the next wave of advanced tech with potentially conflicting motivations and goals, here's a look at what each side wants, what regulations are in place, and the risks they may contend with.
What are the main concerns?
The rise of AI has fed a host of concerns.
They include fears it could be used to disrupt the democratic process, turbocharge fraud, cause widespread job losses — and then there are the obvious worries around military applications.
The rapid growth of generative artificial intelligence, which can create text, images and video in seconds in response to prompts, has heightened fears that the new technology could be used to sway major elections this year, as more than half of the world's population head to the polls.
It's already being used to meddle in politics and even convince people not to vote.
In January, a robocall using fake audio of US President Joe Biden circulated to New Hampshire voters, urging them to stay home during the state's presidential primary election.
For Samantha Hoffman, a leading analyst on China's national security strategy and emerging technology, the potential to use AI to dupe the public and even subvert political processes is among the greatest risks.
"Things like the interest in generative AI and collection of things like language, data, images, sound — anything related to the generation of potentially fake images and text and so on," she told the ABC.
"If you can influence the way that people think and perceive information it helps the [government] stay ahead of a crisis or conflict.
"If you lose in the information domain — that's one of the most critical domains — and so you might have already lost the battle."
Meanwhile, in a recent Brookings Institution report — A roadmap for a US-China AI dialogue — authors Ryan Hass and Graham Webster argued that any discussion about AI frameworks needs to focus on three key areas: "Military uses of AI, enabling positive cooperation, and keeping focused on the realm of the possible."
For military applications, they said the challenge was not about promising not to use AI on the battlefield but to "begin building boundaries and common expectations around acceptable military uses of automation".
What's the current state of play?
Last year, a report from the Australian Strategic Policy Institute found China was beating the US in 37 of 44 technologies likely to propel innovation, growth and military power.
They include AI, robotics, biotechnology, advanced manufacturing, and quantum technology.
The US leads innovation in only seven technologies — including quantum computing and vaccines — and ranks second to China in most other categories.
The Biden administration has taken drastic steps to slow China's AI development.
It has passed laws to restrict China's access to critical technology, and is also spending more than $US200 billion ($306 billion) to regain its lead in manufacturing semiconductor chips.
Dr Hoffman said that would slow down some of China's development.
"But, it's not going to stop," Dr Hoffman told the ABC.
Hence the need for the talks, which will build on a channel for consultation on artificial intelligence announced in November after US President Joe Biden and Chinese President Xi Jinping met in California.
What regulations are in place?
Regulations and potential controls are currently being formed.
In November, the US and more than a dozen other countries, with the notable exception of China, unveiled a 20-page non-binding agreement carrying general recommendations on AI.
The agreement covered topics including monitoring AI systems for abuse, protecting data from tampering and vetting software suppliers.
But it didn't mention things like the appropriate uses of AI, or how the data that feeds these models is gathered.
At a global AI safety summit in the UK in November, Wu Zhaohui, China's vice minister of science and technology, said Beijing was ready to increase collaboration on AI safety to help build an "international mechanism, broadening participation, and a governance framework based on wide consensus delivering benefits to the people".
"Building a community with a shared future for mankind," Mr Wu said, according to an official event translation.
More than 25 countries present at the summit, including the US and China, signed the "Bletchley Declaration", under which they will work together and establish a common approach on oversight.
But despite the platitudes from both sides, many AI policy and ethics experts maintain that it's yet to be seen whether Beijing and Washington and their respective militaries can demonstrate a shared commitment to common interests or global safety.
The US is set to launch an AI safety institute, where developers of AI systems that pose risks to US national security, the economy, public health or safety will have to share the results of safety tests with the government.
Meanwhile, China has already blacklisted some information sources from being used to train AI.
The banned information covers things that are censored on the Chinese internet, including "advocating terrorism" or violence, as well as "overthrowing the socialist system", "damaging the country's image", and "undermining national unity and social stability", China's National Information Security Standardisation Committee said.
Beijing also has to clear any mass-market AI products before they are released.
What do experts hope talks will achieve?
As the two superpowers compete for AI dominance, experts have warned that it is increasingly important to establish common ground on AI safety, given how little either country knows about its counterpart's approach to AI.
In recent weeks, it's been revealed that Beijing and Washington are preparing for bilateral talks "this spring" (autumn in Australia).
While the final parameters for the talks are yet to be announced, given the wide applications of AI, they could cover "potentially everything", Dr Hoffman said.
Basically, AI can be adapted for use in so many applications that it's hard to think of areas that won't be affected, from high-tech future weapons and drones used on battlefields to everyday tasks.
"It covers everything from healthcare applications to autonomous weapons, things like facial recognition to things like ChatGPT," she said.
One thing that has become clear already is that China and the US are not pursuing the same goals with AI, Dr Hoffman told the ABC, as the development of AI currently plays into their respective national strategies.
"They're really talking about replacing the existing world order," Dr Hoffman explained.
But, because it will be almost impossible for either China or the US to continue their technological advancements independent of each other, there's one thing Dr Hoffman believes both sides will want to discuss.
"It's about finding the most responsible ways to manage risk," Dr Hoffman said.
AI needs lots of information, and so developing standards for "data sharing vetted by both governments could be immensely powerful", the authors of the Brookings Institution report wrote.
Using the talks to raise other concerns, even ones that seem related, such as the US blocks on China's access to critical technologies, "would push the dialogue into a cul-de-sac", the report added.
That said, even if the talks remain general in nature and don't lead to any concrete agreement, experts and policymakers agree that a pledged willingness to cooperate is a much better scenario than not talking and continuing to develop AI frameworks covertly in isolation.