Social media is alive with criminal activity

Social media can be an effective way of staying on top of trends, keeping in touch with friends and family and staying abreast of professional developments. Yet any number of criminals lurk on the various platforms, ready to snap up the unwary. Users’ data is constantly sought and monetised, and social media has been used to further the worst of human impulses: sexual exploitation, bullying, theft, coercion and extortion.

Nigel Phair, director (enterprise) of the University of NSW Institute for Cyber Security, says the big tech companies do everything they can to keep the lucrative torrent of users’ data gushing in. Meta, which owns Facebook, Instagram and WhatsApp; Twitter; ByteDance, which owns TikTok; Snap, which owns Snapchat; and Alphabet, which owns Google and YouTube, have all made billions by monetising user data.

“If something is for free, you are the product, and I don’t think we’ve got that into the heads of the average Australian,” Phair says. “That’s why all those companies are unicorn companies. It’s all about the data. All they want is for you to be on it all the time.”

Phair believes Australia should launch a far-reaching education campaign to ensure social media users know what they’re giving away and help them to understand the possible ramifications of failing to lock down their social media account settings. He often hears the line: “I’ve got nothing to hide so I don’t care if Google knows all about me”, but he says these people don’t really understand how much their personal data is worth.

Say a user has Facebook and Instagram accounts in her own name, and uses Gmail, Google Maps and Google search to look for good restaurants in her area. She will probably see restaurant ads in her news feed, and her Google search will not direct her to the best or closest Italian restaurant, Phair says, but to the Italian restaurant that pays Google the most money.

On the darker side, social media is alive with criminal activity, scams and the worst of human behaviour, including cyber-bullying, child sexual exploitation and revenge porn. Posting photos of children in school uniforms on open social media accounts can allow hackers to glean more information from school websites, and posting holiday photos can signal to criminals that no-one is home. There is social media crime in the commercial world, too, Phair says, adding that criminals can hack into the Twitter account of a small or medium-sized business, take it over and hold the organisation to ransom.

The big social media companies make redress extremely difficult, he adds, because most have no point of contact in Australia. He believes such a local contact presence should be a legal requirement.

The Australian Cyber Security Centre has a raft of advice for social media users on its website, warning that even innocent information, photos, messages or videos can be harvested to build up detailed profiles of social media users. These profiles can then be used in “extortion or social engineering campaigns” aimed at getting more information or ultimately compromising an organisation. Users’ data can sometimes be stored outside Australia, which then makes it subject to both lawful access and covert collection by other nations.

Social media use, the Centre says, should be governed by both common sense and a “healthy level of scepticism”. Its advice includes using aliases rather than full names on social media, maintaining a separate email address for social media, limiting the personal information posted on social media sites, accepting requests only from known people and remaining wary of any unrequested contact.

Australia’s eSafety Commissioner, Julie Inman Grant, has seen deep into the darkest side of social media and human interaction online, a field with so many “different risks, threats and harms”.

With 22 years in the tech sector, she sees interactions on social media platforms open to a range of wrongdoing. “We’re dealing with personal harms, like cyber-bullying, online hate and harassment, non-consensual sharing of intimate images and videos – image-based abuse – and child sexual exploitation material, and there are many different manifestations of that.”

eSafety has investigated many thousands of instances of illegal and restricted online content since 2015, mostly involving child sexual exploitation. Social media platforms can give predators a place to find victims, cosy up to them and lure them into actions they would never otherwise consider. To counter this, various companies offer controls that let families and schools restrict children’s access to social media and block adult content.

“Young kids, some as young as seven or eight, going on TikTok, are met by a predator who will usually use a fake or imposter account, pretending to be another young person, who will encourage them into a relationship,” Inman Grant says. “They might groom them over time, ask them to take off a piece of clothing, start to extort them, through coercion, demand they perform more risqué sex acts.”

After the initial meeting, the predator will usually try to move the child to a more secure site for the abuse, or to further engagement on a platform like Snap that doesn’t leave an evidence trail. The resulting abuse video, often filmed in a bathroom or the child’s bedroom, is then screen-captured – a procedure called “capping” – by the abuser.

If a complaint is received (and more than 21,000 were received last year), eSafety classifies the content, tries to work out the location of the child (if a child is involved), then works to get the content taken down within three days.

On another front, eSafety launched a scheme to counter cyber-bullying in 2015, to help children who are being threatened, harassed, intimidated or humiliated where social media platforms are slow to intervene. A scheme to counter image-based abuse began operating in 2017, and Inman Grant says eSafety has had a 90 per cent success rate in getting intimate images and videos shared without consent taken off platforms all over the world.

The organisation has recently issued legal notices to big tech and social media companies including Snap, Microsoft, Skype, Meta and Omegle (“a website that’s like a hunting site for paedophiles,” Inman Grant says), asking them all to provide information about what they are doing to prevent sexual exploitation on their platforms, including whether they are using grooming detection technologies.

Inman Grant believes the true scale of online child exploitation has yet to be revealed and a huge effort is required on the part of tech companies – which have so far not proved particularly enthusiastic – to eliminate all potential for abuse on their platforms.

“We’ve been asking them questions and frankly not getting adequate answers,” she says, “what I would describe as selective transparency.”

Photo: Julie Inman Grant, New York Times

The Australian