Social media monitoring tool helps schools track malicious content

Lawrence Kusz was inspired to investigate ways to expose malice on social media when his niece, who has Down syndrome, was savagely attacked by cyber-bullies. “It was 2021, and I was lecturing at the University of Queensland at the time, and I saw first-hand how limited the existing tools were,” he says. “I wanted to create something that was more proactive, that could help families, schools and organisations see the risks before the spiral to harm really began.”

He thought AI could be used to create tools that might help find and counter malicious actors on social media sites such as Instagram and Snapchat, and he began reaching out to leading researchers in the field.

The winner of the Government, Education and Not-For-Profit category of the Australian Financial Review’s Most Innovative Companies list, Chatstat is a Queensland-based start-up that scans for disturbing patterns of at-risk behaviour on the internet. These patterns might include eating disorders, radicalisation, suicidal ideation, hate speech, toxic culture development, bullying and harassment, Kusz says.

Chatstat has now broadened its range, offering the tool to workplaces as well as schools. “We noticed that a lot of the issues that we deal with aren’t just isolated to youth, they’re also issues that are quite relevant to government and to organisations,” he adds, explaining that the AI tool can also monitor internal communication channels such as Microsoft Teams, Slack and email.

For children and adolescents, the focus is social media. If a school signs up for Chatstat, parents are given the option of taking part. If they agree, they give the Chatstat app all their child’s social media account handles. Social media accounts hold a rich history of a user’s activity, which enables Chatstat staff to spot historical patterns.

The company also provides advice on how parents can find hidden social media accounts, usually via the child’s mobile phone number.

“It’s up to the parents how much information they want to share,” Kusz says. “We don’t want to be invasive.” A key focus of the company is to ensure the privacy of all student data, especially for adolescents who are struggling in any way. “It is ensuring that safety, but also making sure that everybody’s able to get the help that they need,” he adds.

Parents can choose whether or not to sign on to Chatstat in the first place and then choose whether data about their children’s social media activities is shared with the school, he adds.

The parents may prefer to deal directly with their own child, leaving the school out of the loop, or they may want the Chatstat feedback provided to the school in order to receive assistance with the issues that have arisen. He adds that usually “the school will speak with the individuals involved, and then if they detect that it’s a broader problem, they might speak with the group as a whole”.

A social enterprise, Chatstat only scans public content, Kusz says. “We’re not looking at any private content, which is another way that we differentiate ourselves from a lot of the other tools that are out there.”

With most of these tools, he adds, a parent installs an app on their child’s phone that monitors the child’s internet exposure, from emails to posts to sites visited. Adolescents, Kusz points out, can get around this intervention simply by using another phone or device.

The company is now working with more than 100,000 school students in Australia. As well as alerting parents to potential problems, Kusz says Chatstat provides engagement advice on how parents can deal with difficult issues.

“Chatstat’s able to play a crucial role,” he adds. “We’re able to understand these risks that parents often don’t know about. What might seem like something completely safe or irrelevant to a parent actually is something quite critical.” For instance, he says, the Chatstat AI tool has found patterns indicating a significant increase in hate speech over the past two years.

Driven by algorithms, disturbing content of all kinds is spreading on social media. Australia’s eSafety Commissioner in October issued an urgent advisory warning about the proliferation of extremely violent material online, including footage of recent assassinations, brutal murders, mass-casualty events and conflicts.

“So-called ‘gore’ content is surfacing with disturbing frequency on young people’s devices via autoplay, recommendations, direct messages and reposts,” the advisory said. “Once uploaded, the same clips can circulate across mainstream platforms such as X, Facebook, Instagram, Snapchat, TikTok, and YouTube. They can also be shared directly in messages, DMs and chats.” Research by eSafety found 22 per cent of children between the ages of 10 and 17 have seen extreme real-life violence online.

Most social media networks have policies that require the application of sensitive content labels, but eSafety Commissioner Julie Inman Grant said the major platforms have failed to deploy these filters quickly or consistently.

The federal government’s proposed social media ban for children and adolescents under the age of 16 is scheduled to come into effect in December, but Kusz does not foresee any difficulties for his company.

“From an anecdotal point of view, the parents that I speak with, all the kids that are under 16 are just setting up multiple social media accounts right now in preparation for this,” he says, adding that if the ban is technologically effective it will affect adults as well as adolescents and children. He expects an outcry if adults have to prove their age when they use social media.

When Kusz and his colleagues first began developing Chatstat, they found the social media monitoring tools and methods then in use were outdated. Ten or fifteen years ago, he says, children and adolescents might take a school-issued laptop home and bully each other on Facebook.

An app installed on the laptop might flag disturbing usage. These days, he points out, most teens and pre-teens have their own personal devices and schools have no way of tracking their social media activity.

Chatstat offers separate pricing for families, schools, and business and government customers. Family plans cost up to $20 a month. When a school signs on, every parent is given access to a Complete Plan. School pricing ranges from $2,500, or $6 per student, for a small school to $4 per student for schools with more than 1,000 students.

A Business Basic plan costs $1,000 per month, a Business Pro plan $3,000 per month, and the Chatstat pricing for government and enterprise starts at $10,000 per month, depending on scale.

Originally from the US, Kusz immigrated to Australia in 1995. He worked in the technology field for some years, so he understood the potential of AI early on, he says.

“Although I might not be a programmer, I could understand technology,” he adds. “I could see where things were going. I knew that AI was going to be a huge thing in the near future. We started working on everything about a year and a half before ChatGPT came out, and then ChatGPT was released, and all of a sudden, everybody was an AI guru.”

He studied for both his MBA and his PhD in strategy and innovation at the University of Queensland, and Chatstat still has a strong connection to the university, he says.

The move into workplaces has also brought Chatstat into contact with government agencies, he says.

“We’re beginning to have conversations with Home Affairs,” he adds. “There’s a number of different things that we’re able to detect for at-risk behaviour: suicidal ideation, hate speech-related issues, toxic culture development, bullying, harassment.”

Chatstat’s headquarters is in Brisbane, but about a quarter of its 30 or so staff members work in the Gold Coast office. The company has fielded enquiries from across Australia and abroad, and recently analysed social media connected to the Harvard University art school in the US, where Kusz says Chatstat staff detected multiple instances of anti-Semitic content shared by people associated with the school.

The Trump administration cancelled US$2.2 billion in federal funding for Harvard in April (a move struck down by a US court in September) following allegations of anti-Semitism. The administration had earlier cut US$400 million in grants and contracts to Columbia University, again following allegations of anti-Semitism.

Failing to deal with hate-speech posts has cost US universities dearly, Kusz says. “We’re able to detect these issues at an early stage,” he adds, “before they actually escalate.”
