Saturday 17 February 2024

Deepfakes and 'the liar's dividend' — are we ready for the future of AI journalism?

Extract from ABC News



In the gloomy discussions around the future of artificial intelligence (AI), predictions range from chatbots soon replacing all of our jobs to AI bringing about the end of humanity as we know it.

But beneath some of these alarmist headlines, there are deep concerns about how a lack of regulation over these powerful technologies could erode some of the key pillars of democracy.

Ahead of the first US presidential primaries of 2024, a deepfake robocall impersonating President Joe Biden stoked fears that this year's election will be even more plagued by disinformation than 2020's.

The World Economic Forum identified AI-driven misinformation and disinformation as among this year's biggest risks facing the global population — second only to extreme weather.

In a year where billions will vote in elections around the world, the 2024 annual risk report warned that misinformation and disinformation "could seriously destabilise the real and perceived legitimacy of newly elected governments, risking political unrest, violence and terrorism, and a longer-term erosion of democratic processes".

It's not just billionaires who are worried about how machine influence could affect their bottom line or unseat powerful friends — public advocacy groups have also been sounding the alarm.

Imagine, for example, what might happen if anonymous opponents used generative AI to create an audio clip that sounded like a leading political candidate chatting days before the election about buying votes to manipulate the result.

That's exactly what happened in the Slovakian election last year. The clip was debunked, but media blackout rules made it difficult for fact-checkers to spread the word before thousands of voters reached the ballot box.

This example was just one laid out by the US-based Brennan Center for Justice in a recent paper on how AI-generated content could influence voters' decisions, sow doubt around authentic information, exacerbate divisions and undermine trust in electoral processes.

While established news organisations might once have been turned to for fact-checking in times of uncertainty, trust in mainstream media has shifted rapidly in recent years.

Beyond that, the expansion of large language models into our daily lives has made it almost impossible for many news outlets to compete with the convenience and immediacy of asking ChatGPT for updates on newsworthy topics.

Journalists and editors in newsrooms around the world are looking for ways to adapt, and part of the answer has been to understand — and in many cases, adopt — these tools.

"If 2023 was a year for coming to terms with generative AI, this will be the year when newsrooms fully embrace the technologies and incorporate them into workflows," the Reuters Institute declared in its predictions of 2024 media trends.

But there are lingering concerns about how these tools could reshape the future of journalism, and how ill-prepared news consumers are for this new reality.

How are journalists using AI right now?

Last year saw chatbots powered by generative AI take off around the world.

OpenAI's ChatGPT became the fastest-growing software application in history, sparking a craze that saw rival products entering the market from tech heavyweights and start-ups alike.

Google launched its chatbot, Bard (which has since been renamed Gemini), and reportedly began pitching another tool in development called Genesis, an AI assistant described as capable of producing news stories.

Newsrooms started experimenting with some of the various tools on offer.

According to a Reuters Institute survey of 314 media leaders from around the world, most newsrooms for now anticipate using AI for less editorially sensitive tasks such as tagging and transcription, or for serving up recommendations for similar content.

The ABC has trialled using AI-powered virtual voices to read text-based news articles out loud and is currently developing an in-house transcription service to transform its podcast offering.

Several news publishers have already dipped their toes into the world of autonomously created content, and despite numerous high-profile mishaps, the trend is only set to gain steam.

News Corp announced in July last year it was producing around 3,000 articles a week using generative AI, predominantly on topics such as weather, traffic and fuel prices for its "hyperlocal" mastheads. 

Sports Illustrated waded into hot water in November after it was accused of publishing articles written by AI under fake author bios, without specifying that the authors were, in fact, not real.

The bio for a Sports Illustrated author named Drew Ortiz included a computer-generated photo that was found listed on a website that sells AI-generated headshots. (Supplied: Sports Illustrated/Wayback Machine)

Its publisher laid the blame on an advertising partner and subsequently severed ties with the company. But the controversy sparked a tumultuous turn for the 70-year-old magazine, with its CEO fired two weeks later, and the majority of its workforce gutted in January.

Closer to home, Channel Nine issued an apology last month after it published an altered image of Victorian MP Georgie Purcell that had changed her bust size and turned her dress into a top and skirt that showed her midriff.

The news outlet blamed the manipulation on a generative fill tool available through Adobe Photoshop, which uses AI to "imagine" and expand content outside the frame of an original image.

While Nine conceded the error did not meet its high editorial standards, it insisted that none of its staff had been involved. An Adobe spokesperson pointed out "any changes to this image would have required human intervention and approval".

Elsewhere, some of the world's oldest mastheads are trialling "AI-assisted reporters" while AI-generated presenters and newsreaders are popping up in television and radio studios around the globe.

Indonesia's TVOne launched its first AI news presenter last year, Nadira, who can read the news in several languages. (Supplied: TVOne)

In the case of AI-generated channel NewsGPT, there are no humans at all.

"A small disclaimer says content may contain inaccuracies or unexpected outputs. NewsGPT, which can be watched via YouTube, bills itself as delivering news 'without human biases'," the Reuters Institute noted in its 2024 trends report.

The expansion of AI will also shake up how consumers access news content. 

Microsoft has already started incorporating more AI-generated content into its Bing search results, and Google is ramping up its Search Generative Experience, touted as a faster, easier way to find answers than scrolling through a list of links to helpful websites.

At the crux of these examples — and many more that have made headlines in the past year — is the question of how this level of automation could affect trust in media.

How can people be sure that what they're reading, watching or listening to is "real"?

Deepfakes and 'the liar's dividend'

Dang Nguyen, a research fellow in automated decision-making systems at RMIT University, says this question is front of mind for those monitoring how generative AI is changing the news.

"At the moment, there is this chief concern around the authenticity and provenance of what we encounter in an information environment that's going to be increasingly saturated with synthetic media," she told ABC News.

While newsrooms using AI to create content predominantly involve humans in the editorial process — most of whom abide by a journalistic code of ethics — they are of course not the only ones with access to these tools.

As generative AI becomes more advanced and more widely available, the quality of intentionally false or misleading content is becoming more sophisticated.

It makes for extremely convincing deepfakes.

Taylor Swift for example — one of the most recognisable faces in the world — has been copied and manipulated into deepfaked pornography, scam advertisements for Le Creuset cookware and an unlikely Trump endorsement on the red carpet.

Professor Nguyen says the rise of deepfakes has "[eroded] trust in what we have come to rely on as evidence, such as video footage" and presented another troubling phenomenon known as "the liar's dividend".

It suggests the mere fact that video can be plausibly fabricated allows anyone with an ulterior motive to point at an authentic piece of footage and call it fake news.

Basically, it makes it easier for liars to deny that something true happened.

"That means that then the pressure is placed on others to prove that it is real. It's a very twisted logic," Professor Nguyen said.

"My worry is that it fuels this culture of excessive scrutiny of visual content … so we're all sent down the rabbit hole of trying to spot a supposed deepfake.

"This could induce undue scepticism about true media without actually providing much help on spotting the falsehood."

This phenomenon played out during the Argentinian election last year when a recording allegedly featuring an incumbent minister offering government positions in exchange for sexual favours was leaked a few days before the vote. 

The party dismissed the recordings as potentially fabricated using AI, but colleagues claimed they were real, and in later interviews, the minister, Carlos Melconian, did not explicitly say either way. 

"It has yet to be established whether the audio clips were indeed an AI-generated deepfake," the Brennan Institute noted in its paper. 

"However, the incident highlights the unexpected ways that even the potential for something to be AI-generated can shape the contours of an electoral contest."

This type of "collective amateur forensic analysis" poses obvious risks when it comes to free and fair reporting on elections, but researchers have also noted its devastating consequences for access to health information during the COVID pandemic.

The white male Silicon Valley lens is amplifying discrimination

Professor Nguyen says there are "more insidious ways" that generative AI is setting back efforts to close gaps across languages and cultures.

"Already the majority of content on the internet is in the English language, which means that it's essentially inaccessible to the majority of the world," she said.

"There was a lot of hope and optimism in the beginning about how these large language models were going to turbocharge machine translation capabilities. Much of that is gone now … so that's kind of bleak."

She noted a study published by Amazon Web Services' AI Lab in January, which found more than half of the sentences on the web had been translated into two or more languages, with quality becoming increasingly poor for lower-resourced languages other than English.

Professor Nguyen says this presents a real challenge in addressing the disparity of accessible information across languages.

"By flooding the internet with low-quality content in these [lower resourced] languages, that's really problematic, because this then makes improving machine translation capabilities in these languages really difficult."

Aside from challenges around authenticity, research has shown generative AI presents significant problems when it comes to reflecting and amplifying human biases.

University of Tasmania researchers found generative AI writing tools perpetuate gender bias in leadership, describing male leaders as strong and charismatic and women as emotional, ineffective, and people-pleasing.

Similarly, AI image generators are notorious for perpetuating harmful stereotypes — for example, over-representing light-skinned men, under-representing Indigenous people and sexualising Latin American women.

"The humans who build these [tools] themselves have biases that get translated into the parameters around how these operate. They tend to have a very clear profile of being white, male and from pockets of Silicon Valley," Professor Nguyen said.

AI also presents a messy new frontier for intellectual property.

The New York Times is taking Microsoft and OpenAI to court over what it describes as "unlawful use of The Times's work to create artificial intelligence products that compete with it".

If all of that sounds insurmountable, Professor Nguyen points out these aren't new problems. They're just going to require new solutions. 

How to protect users in 'the Wild West'

Regulating a space as vast and exponentially expansive as the internet is a mammoth task.

The International Organization for Standardization (ISO) provides best-practice guidelines for the responsible, safe and trustworthy development of AI around the world.

These standards are there to guide the industry and also help to inform governments in updating existing legislation around topics like copyright, intellectual property, privacy and discrimination. 

But everyday users aren't likely to be consulting those standards while they're playing around with Midjourney-inspired memes.

Take it from Pablo Xavier, who learned just how easily fake images can be confused for the real thing when he asked an AI image generator to put together "the Pope in Balenciaga puffy coat".

His chic tongue-in-cheek creation went viral. He told Buzzfeed at the time it was "scary" that so many people "thought it was real without questioning it".

"It's definitely going to get serious if they don't start implementing laws to regulate it," he said.

Governments around the world have been scrambling to do just that.

Last year the European Union reached agreement on the AI Act, set to become the world's first comprehensive laws governing AI, and the US Congress has been debating how to model its own legislation — with significant input from tech billionaires developing these tools.

In Australia, the federal government has just introduced its plan for the sector, which will include regulations around "high-risk" AI while allowing "low-risk" AI to continue to grow. 

The government is also working with industry to introduce AI watermarking, which embeds a marker in content created using generative AI so that users are aware of how it was made. 

This could work in conjunction with another technique called fingerprinting, which essentially builds databases of known AI-generated content so that copies can later be detected.
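
As a rough illustration of how fingerprinting works (a minimal sketch only, not a description of any company's actual system), the generating tool can record a hash of each output in a shared database, which platforms or newsrooms can later query when checking an uploaded file:

    import hashlib

    # Minimal fingerprinting sketch: the generator registers a hash of every
    # output, and anyone with access to the database can check uploads against it.
    known_fingerprints = set()  # stand-in for a shared, persistent database

    def register_generated_content(data: bytes) -> str:
        """Record a fingerprint when an AI tool produces a piece of content."""
        fingerprint = hashlib.sha256(data).hexdigest()
        known_fingerprints.add(fingerprint)
        return fingerprint

    def is_known_ai_content(data: bytes) -> bool:
        """Check whether uploaded content matches a registered fingerprint."""
        return hashlib.sha256(data).hexdigest() in known_fingerprints

    # Example usage
    synthetic_image = b"...bytes of a generated image..."
    register_generated_content(synthetic_image)
    print(is_known_ai_content(synthetic_image))            # True
    print(is_known_ai_content(b"an ordinary photograph"))  # False

In practice, an exact hash like this breaks as soon as a file is cropped or re-encoded, which is why real fingerprinting and watermarking systems rely on more robust techniques, and why researchers have been able to find flaws in some of them.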

Tech companies including OpenAI, Google's Alphabet and Meta have various plans in the pipeline to develop watermarking, but researchers have detected serious flaws in some of these models.

Google's chatbot, initially released as Bard, has since been renamed Gemini, after the AI model that powers the tool. (Reuters: Caitlin Ochs)

Professor Nguyen believes these types of safeguards will go some way towards reining in what is right now "the Wild West of synthetic media on the internet".

But ultimately, she says it'll take a combination of technical measures and mass re-education to equip internet users for the future of AI.

She believes both news organisations and the tech companies that are developing generative AI have a role to play in improving critical media literacy and helping users understand what they're consuming.

"It's the latest iteration of an old problem — do readers have all the information and the knowledge required to discern what is in front of them?" 

"It shouldn't be on the people, the users, to try to constantly discern whether they can trust a piece of information.

"So there are very real ways in which we need to hold the creators of these AI infrastructures accountable and responsible for the kind of products that they put out.

"News organisations have a really big role to play in helping spread critical awareness of … synthetic media in this brave new world of AI-generated content. And then developing critical literacy through [their] reporting."

At the centre of that is transparency and openness around how generative AI is being used. 

"There is a risk in being scared of AI and not talking about it, which gives it power. So we as a society need to come to terms with the fact that the technology is here." 
