AI helps lead the fightback against rampant cybercrime

Artificial intelligence has boosted cyber-criminals’ ability to identify network weaknesses and launch sophisticated cyber-attacks, experts have warned. The personal data of millions of Australians has been stolen in multiple large data breaches that have hit some of the nation’s big businesses, including Optus, Qantas, Medibank and MediSecure, and the cyber-criminal networks continue to circle, searching for openings.

At an individual level, scammers take advantage of innocence and goodwill as they steal and deceive. Australians reported more than 108,000 scams to the National Anti-Scam Centre’s Scamwatch website in the first half of 2025, with combined financial losses of about $174 million. Cyber-criminals used fake websites, deceptive online advertisements and social media contact to dupe their victims.

“The cyber-criminal ecosystem has essentially evolved into a really sophisticated business model,” says Australian Cyber Security Centre (ACSC) head Stephanie Crowe.

The criminals operate in big, organised groups, she adds, and AI allows these groups to accelerate their cyber-criminal activity without recruiting large numbers of people. Armed with deepfakes and sophisticated tradecraft, they deploy a variety of tactics to prey on their targets.

Housed within the Australian Signals Directorate, the ACSC issues an annual Cyber Threat Report, which last year found that about one-third of notified breaches involved stolen usernames and passwords.

AI can help cyber-criminals extract usernames and passwords from large data sets far more quickly, and the data can then be sold on to other cyber-crime figures, Crowe says. “There’s a huge market for this, and they make a lot of money,” she adds, noting those usernames and passwords can then be used to compromise accounts and steal sensitive information.

Criminals can also use generative AI to craft more believable spear-phishing emails, and potentially to sweep across huge volumes of invoice data to con a victim into paying a fraudulent invoice, she adds.

On the other hand, AI can be used to prevent cyber-crime by shoring up digital defences and by turning the tables and duping scammers with sophisticated voice and text bots.

AI can also accelerate the detection of potential breaches, limiting the damage as quickly as possible, Crowe says.

“We are clear that there are both risks and benefits to AI, and our view is that we need to be super-cautious of AI,” Crowe says. “We need to make sure that we’re thinking about security when we deploy AI throughout networks and systems.”

As well as speeding up cyber-criminal attacks, AI can help criminals identify organisations’ vulnerabilities, says Ivano Bongiovanni, a University of Queensland cyber-security expert and manager of the non-profit cyber-security organisation AUSCERT.

Cyber-criminals once required a level of coding expertise, but AI provides useful short-cuts for the less expert, he adds. These criminals are not coders or developers, but they may have good AI prompt skills.

Barriers are built in to try to ensure the ethical use of AI, so asking a large language model to create a malicious script to perpetrate an attack would be refused, Bongiovanni says. But breaking the task down into smaller parts can work, and there are ways to disguise criminal intent. “The LLM can then give you a response that, with a little bit of work, you can actually turn into a malicious tool,” he adds.

These days, an army of tens of thousands of AI-driven counter-intelligence bots is hard at work defending Australian business against scammers.

Developed by Apate.ai, an Australian company spun out of Macquarie University research, the army includes conversational bots that speak different languages, with different accents.

Now numbering more than 85,000, the bots have different personalities and different moods, and they con scammers into wasting time while the scammers’ origins and methods are probed. The conversational bots are deployed on standard phone numbers to reel in the scammers.

Dali Kaafar, founder and CEO of Apate.ai, says the company is building a counter-intelligence platform to infiltrate and disrupt scam networks, and provide clients with real-time fraud intelligence. In Australia, these clients include the Commonwealth Bank and TPG Telecom, and conversations are underway with digital platforms and Australian law enforcement agencies.

The bots’ conversational style sounds disarmingly realistic. A bot might cheerily respond to an offer of a free upgrade with “free is always good”, and then ask how long the upgrade would take “because I might need to grab a cuppa”.

The bots are programmed to respond appropriately to every possible verbal approach, and to answer nearly every possible question – with realistic pauses and sighs and chuckles.

“The mission is to break the business model of scammers by getting on essentially every single platform they might be using,” Kaafar says.

Apate.ai text bots are deployed on messaging and social media platforms including WhatsApp, Telegram and Signal, and potentially even dating sites, he adds. “We know there’s a lot of scam activity going on there, and our bots will meet scammers where they are.”

Financial Review