Thursday, 8 January 2026

AI hallucinations and the dilemma of false or misleading information.

Extract from ABC News

Analysis

A colourful illustration shows the neon paths of computer circuits arranged to resemble an illuminated human brain.

AI can provide insights on just about anything now, and the consequences of hallucinations can be dire. (AP: Michael Dwyer/File)


Australians want to know when AI is used

When it comes to trusting AI systems, Australia is sceptical, sitting near the bottom of a list of 17 countries that took part in a global 2025 study.

Professor Davis said this did not reflect whether Australians thought the technology was useful, but instead showed they did not believe that "it's being used in ways that benefit them".

"What Australians don't want to be is at the receiving end of decisions that they don't understand, that they don't see, that they don't control," he said.

For a new technology that is so invasive and so powerful, it's only fair that the public wants to be looped in, particularly when the public discourse involves companies pointing the finger elsewhere when a system stuffs up.

When Air Canada's chatbot provided incorrect information about a flight discount, the airline tried to argue that the chatbot was its own "legal entity" and was responsible for its own actions, refusing to compensate the affected customer.

That argument was rejected by British Columbia's Civil Resolution Tribunal, and the traveller who received that information was compensated.

But this example raises an important question: if an AI bot provides false information without disclosing who or what is behind it, how can anyone be held to account?

What would have happened with Air Canada if we didn't have the paper trail to lead us back to a technological error inside the company?

A journalist is held accountable through their byline, a company through its logo, a driver through their number plate, and so on.

But if someone is given information by a fictional persona like Alex Rivera, who do we hold accountable if something goes wrong?

When a journalist emails a company with questions looking for answers, the least we expect is a real person to feed us the spin, half-truths or outright lies. Not a machine.
