Wednesday, 12 April 2023

Experts say AI scams are on the rise as criminals use voice cloning, phishing and technologies like ChatGPT to trick people.

Extract from ABC News 

Earlier this year Microsoft revealed a new artificial intelligence (AI) system which could recreate a person's voice after listening to them speak for only three seconds.

It was a sign of just how quickly AI could be used to convincingly replicate a key piece of someone's identity.

In a demonstration, the system, known as VALL-E, was given a three-second sample of a person's voice, then asked to recreate that voice saying a new phrase: "The others resented postponement, but it was just his scruples that charmed me."

Vice reporter Joseph Cox later reported using similar AI technology to gain access to a bank account with an AI-replicated version of his own voice.

In March, Guardian Australia journalist Nick Evershed said he was able to use an AI version of his own voice to gain access to his Centrelink self-service account, which raised concerns for some security experts.

While voice cloning is already being exploited by scammers, it's not the only way experts are seeing them take advantage of AI.

Let's take a look at how the technology is being used, and how best to protect yourself.

AI can replicate anyone's voice

The Guardian's investigation suggested the "voiceprint" security systems used by Centrelink and the Australian Tax Office (ATO) — which have used the phrase "In Australia, my voice identifies me" — could be fooled.

It felt like the scene in the 1992 movie Sneakers, when Robert Redford's character used a recording of someone's voice to get through a security checkpoint.

Video: Tom Bishop (played by Robert Redford) uses a voice recording to get through a security checkpoint.

In its 2021-22 annual report, Services Australia said voice biometrics had been used to authenticate over 56,000 calls per day, and 39 per cent of calls to Centrelink's main business numbers. It also said a voiceprint was "as secure as a fingerprint".

The ATO said it was "very difficult for someone else to mimic your voiceprint and access your personal information".
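
To see how researchers were able to test such claims, it helps to picture the basic shape of a voiceprint check: the system stores a numeric "fingerprint" (an embedding) of your enrolled voice and compares each new call against it with a similarity threshold. The Python sketch below is a simplified illustration only, not any agency's actual system; the embeddings and threshold are invented.

```python
# Simplified illustration of a voiceprint check, not any agency's actual
# system. A speaker-encoder model (hypothetical, not shown) would turn call
# audio into an embedding vector; verification compares the caller's
# embedding with the enrolled one using a similarity threshold.
import numpy as np

THRESHOLD = 0.85  # invented value; real systems tune this against error rates

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(enrolled: np.ndarray, caller: np.ndarray) -> bool:
    # The weakness the Guardian investigation probed: a sufficiently good
    # AI clone of the enrolled voice can land above the threshold too.
    return cosine_similarity(enrolled, caller) >= THRESHOLD

# Toy demo with made-up embeddings: a clone that closely mimics the
# enrolled voice passes just as the genuine speaker would.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=256)
clone = enrolled + rng.normal(scale=0.1, size=256)  # slightly perturbed copy
print(verify(enrolled, clone))  # True
```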

Dr Lisa Given, a professor of information sciences at RMIT University, says AI-generated voices can also lead people to believe they are talking to someone they know.

"When a system can reasonably copy my voice and also add in empathy, you could imagine that a scammer could move from sending a text that says, 'Hey mum, I've lost my phone,' to making a phone call or sending a voicemail that was actually attempting to create that person's voice," she says.

Last month the US Federal Trade Commission warned consumers about fake family emergency calls using AI-generated voice clones. The FBI has also issued warnings about virtual kidnapping scams.

These concerns have led experts to suggest a few basic tactics people can use to protect themselves from voice cloning:

  • Call friends or family directly to verify their identity, or come up with a safe word to say over the phone to confirm a real emergency
  • Be wary of unexpected phone calls, even from people you know, as caller ID numbers can be faked
  • Be careful if you are asked to share personal identifying information such as your address, birth date or middle name

Mark Gorrie, the Asia Pacific Managing Director at cyber security software company Gen Digital, says AI voice generators are going to keep getting better at tricking both people and security systems.

"For many years it has been easy to detect 'robo-scams' just by the way they sound," he says. "But the voice-based AI is going to get better, and obviously the text that it uses will be better."

Artificial intelligence systems are also increasingly being used to identify AI-based scams.

Scammers are fooling people with AI-generated text and fake product reviews

As AI systems improve, large language models (LLMs) such as OpenAI's popular chatbot ChatGPT are getting better at emulating human-like responses, something scammers exploit in emails, text messages and chatbots they build themselves.

"Those notions of empathy and social queues that we use as humans in building relationships with people are exactly the kinds of tricks that scammers could use and build into the system," Dr Given says.

Scammers are using AI in phishing scams, which typically involve an email or text message that purports to be from a legitimate source but uses social engineering to obtain personal information. Some messages also include links that lead to malicious websites.
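
One classic tell, even in otherwise polished phishing email, is a link whose visible text names one site while its destination is another. The Python sketch below illustrates that check; the sample message and domains are invented, and this is an illustration, not a real mail filter.

```python
# Flag links in an email whose visible text looks like one domain but whose
# href points somewhere else -- a common phishing trick. Illustrative only.
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self._href = None
        self.suspicious = []  # (visible text, real destination) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")

    def handle_data(self, data):
        text = data.strip()
        # Only compare when the visible link text itself looks like a domain.
        if self._href and "." in text and " " not in text:
            shown = urlparse(text if "//" in text else "//" + text).hostname
            actual = urlparse(self._href).hostname
            if shown and actual and shown != actual:
                self.suspicious.append((text, self._href))

    def handle_endtag(self, tag):
        if tag == "a":
            self._href = None


# Invented sample message: the text claims a bank, the link goes elsewhere.
email_html = '<p>Verify now: <a href="http://evil.example.net/login">mybank.com.au</a></p>'
checker = LinkChecker()
checker.feed(email_html)
for shown, actual in checker.suspicious:
    print(f"Displayed '{shown}' but the link goes to {actual}")
```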

Dr Given says chatbots and LLMs can be used to make phishing campaigns more convincing by "perfecting the language" and making messages appear more personal.

"In the past, phishing emails have been filled with typing errors and details that don't ring true — enough that people will say, 'Who is this email coming from? What is this?' Those have really ramped up in recent years across texting scams, and scammers using many more platforms," she says.

Cyber security company Darktrace said it had seen a 135 per cent increase in sophisticated and novel social engineering attacks in the first months of 2023, which it said corresponded with the widespread adoption of ChatGPT.

"At the same time there has been a decline in malicious emails containing links or attachments. The trend suggests that generative AI, such as ChatGPT, is providing an avenue for threat actors to craft sophisticated and targeted attacks at speed and scale," the company said.

Mr Gorrie says Gen Digital predicts such scams will continue to increase because they are easy for people with little technical skill to generate.

"Don't assume you can identify purely from looking at a message whether it's real or fake anymore," he says. "You have to be suspicious and think critically about what you're seeing."

Darktrace's Chief Product Officer, Max Heinemeyer, said the company was also using AI to help it identify AI-based scams.

"In a world of increasing AI-powered attacks, we can no longer put the onus on humans to determine the veracity of communications they receive. This is now a job for artificial intelligence," he said.

AI is also being used to post fake product reviews online, but some tools designed to find AI-generated content have struggled to identify it consistently, Mr Gorrie says.

"It just shows with the quality of what is coming out that it is obviously much harder to detect."

AI can generate fake product reviews, which scammers use when trying to sell shoddy products.

Scammers can use AI to create malicious computer code and crack passwords

Software engineers and enthusiasts have used AI to quickly build things like apps and websites, but the technology has also been used to generate code which can be used to hack into other computers.

"We've already seen on some of the hacker forums that a non-technical person who's not really familiar with writing malicious code can have the ability to write some basic code for malicious purposes," Mr Gorrie says. "So the barrier to entry to be a real hacker is definitely shifting."

AI programs have also been used in attempts to crack passwords, leading experts to urge people to strengthen their passwords and use two-factor authentication where possible.
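
As a small, concrete version of that advice, a passphrase drawn randomly from a word list has strength that is easy to reason about. The sketch below uses Python's standard-library secrets module; the ten-word list is a toy stand-in for a real dictionary such as a diceware list.

```python
# Generate a random passphrase with the standard-library secrets module.
# The word list here is a toy example; a real generator would draw from a
# large dictionary (e.g. the 7,776-word EFF diceware list).
import math
import secrets

WORDS = ["river", "lantern", "quartz", "meadow", "falcon",
         "ember", "orbit", "thistle", "canyon", "velvet"]

def passphrase(n_words: int = 5, wordlist=WORDS) -> str:
    return "-".join(secrets.choice(wordlist) for _ in range(n_words))

phrase = passphrase()
# Entropy in bits is n_words * log2(len(wordlist)): about 64.6 bits for
# 5 words from a 7,776-word list, but only ~16.6 bits from this toy list,
# which is why the size of the word list matters.
bits = 5 * math.log2(len(WORDS))
print(phrase, f"(~{bits:.1f} bits of entropy with this toy list)")
```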

Some experts are also concerned about AI features which may soon be added to productivity applications like Google Docs and Microsoft Excel: if scammers or hackers get their hands on large amounts of stolen data, AI tools could be used to quickly extract valuable information from it.

AI can generate malicious computer code, or attempt to crack people's passwords.

AI makes scams 'harder to identify', Australian regulator says

The Australian Competition and Consumer Commission (ACCC) said while it had not received any reports of scams which specifically pointed to the use of AI, it was aware the technology "makes scams harder for the community to identify".

"With the emergence of new technologies, [the ACCC's] Scamwatch continues to see growing sophistication in scam approaches and is alert to the risks AI presents," a spokesperson said.

"We continue to work with telecommunications and digital platform industry partners to identify methods to detect and disrupt scams.

"The community should continue to approach any requests for personal information or money with caution and exercise caution when clicking on hyperlinks."

As Australian companies try to prevent scams, reports suggest trust in regulation is low

A report released in March by consultancy firm KPMG and the Australian Information Industry Association found two-thirds of Australians felt there were not enough laws or regulations to protect them from unsafe uses of AI.

Some Australian banks and telecommunications companies say they are already using AI to detect potential scams and cyber threats in their systems.

The Australian Financial Complaints Authority (AFCA) says it is receiving around 400 scam-related complaints each month, up from around 340 in 2021-22.

AFCA's Chief Ombudsman and Chief Executive, David Locke, says while some companies are working together to detect and prevent scams, more needs to be done.

"The widespread and sophisticated nature of scams means the industry needs to be willing to invest in new technology and have the ability to respond quickly," he says.

Dr Given says while AI has many positive applications, no one is immune from being targeted by AI-based scams.

"I think people have to realise that this is affecting everyone. It doesn't matter how expert or novice you are with the technology," she says.

"It's actually very healthy and positive to play with the technologies and understand how they're working and doing some reading around that — and talking to people and talking to your kids about that.

"People just need to be as critical as they can, while at the same time understanding that AI is not entirely risky or problematic."
