Ballooning suspicion has pushed TikTok onto the back foot across much of the western world, with increasing concerns the popular video-sharing app could double as a platform for disinformation as well as a data-gathering tool for China. Social media disinformation campaigns have been favoured by certain nations for years, deployed to push a range of agendas: to undermine democratic elections in the US, to sow suspicion of Hong Kong’s independence activists and to derail support for Taiwan. These grey-zone disinformation campaigns are opaque, difficult to attribute and forcefully amplified by bots.
“Authoritarian states have realised that they’ve been up against a collectivity offered by the security architectures offered by NATO and other bloc security relationships and have therefore kept their projection of political power beneath the kinetic threshold,” says Dr Jake Wallis, chief of disinformation analysis at the Australian Strategic Policy Institute.
China is of particular concern, he says, “because it owns the pipelines through which some of this content flows – platforms like TikTok, Wechat, Weibo and others.”
In March, TikTok CEO Shou Zi Chew tried to reassure sceptical US politicians in a Congressional hearing that TikTok was not “owned or controlled by the Chinese government” and “does not promote or remove content at the request of the Chinese government.” TikTok is banned from government phones and devices in Canada and Britain, and Australia is expected to soon follow suit.
Owned by Chinese company ByteDance, which has headquarters in Beijing, TikTok has at least 150 million active users in the US and seven million in Australia, and concerns have grown that the Chinese government has recently expanded its influence in the company.
Chinese disinformation campaigns have been tracked across various platforms, including Twitter. ASPI has established a website (https://infoops.aspi.org.au) to present the extent of disinformation data in various datasets released by Twitter. The 936 accounts in one dataset, found to be linked to the Chinese state, were “deliberately and specifically attempting to sow political discord in Hong Kong, including undermining the legitimacy and political positions of the protest movement”, ASPI says.
Social media platforms such as TikTok, Twitter, Facebook, Instagram, WeChat and Weibo are the new battlefields where nations and other parties manoeuvre to gain advantage, and where disinformation campaigns are the potent new weapons.
Russia used internet trolls to try to sway the 2016 US presidential election in favour of Donald Trump, posing as Americans on various social media platforms in disinformation campaigns designed to inflame political tensions.
Disturbingly aggressive Russian tweets from Moscow-linked accounts, with commonly-used hashtags including “IslamIsTheProblem”, “StopImportingIslam”, “BanIslam”, “QAnon”, “MAGA”, and “FAKENEWS”, have been captured in Twitter datasets and used by the ASPI website to assess Moscow’s disinformation war in the US. The dataset includes more than six million tweets spread by more than 5,000 accounts, illustrating Russia’s intent to wage an ongoing and multi-faceted social media disinformation campaign.
Moscow has also focused its disinformation efforts on Ukraine: suggesting the Ukrainian government was waging genocide and a US-backed lab in the beleaguered nation had been manufacturing bioweapons.
“The invasion itself came after a decade of this hybrid grey zone activity that laid the groundwork, creating the strategic environment that Russia hopes will lead to a positive outcome through kinetic activity,” Wallis says. The Russian disinformation campaign promoted interference and subversion in Ukraine, he adds, as well as supporting Russian-backed insurgents in eastern Ukraine and leveraging diaspora communities including the Russian-language communities in the nation’s east.
Closer to home for Australia, the downing of Malaysian Airlines flight MH17 over eastern Ukraine, which killed 298 passengers and crew including 38 Australians, has been “a persistent issue” in Russian state-backed disinformation campaigns, Wallis says.
These purveyors of disinformation can obfuscate the technical infrastructure they use and outsource their disinformation campaigns to third-party proxies, Wallis says, adding there’s a “shadow economy of influence for hire that is quite concentrated in the Indo-Pacific region”.
To counter this grey-zone skirmishing, Australia is now committing more resources to combat online disinformation and other forms of cyber warfare: $15 billion has been allocated over the next decade to strengthen the defence department’s information and cyber domain capabilities.
Disinformation campaigns have also been waged on the domestic front. In Australia, the anti-vaxxer disinformation war continues: in June 2021, the Australian Communications and Media Authority (ACMA) reported 82 per cent of adult Australians had read, heard or seen Covid-19 disinformation over the previous 18 months.
Platforms including Apple, Meta, Adobe, Microsoft, Google, Twitter and TikTok have signed on to a voluntary code of practice to slow or halt the spread of disinformation in Australia. Meta, which owns Facebook and Instagram, says it removed more than 11 million instances of “harmful health information” from Facebook and Instagram globally in 2021, including more than 180,000 from Australian pages or accounts.
The Australian Electoral Commission and the NSW Electoral Commission have both set up disinformation registers to address falsehoods. With reference to the 2022 federal election, the AEC register notes (among many other instances) that contrary to certain elements of disinformation, the Commission did not outsource vote-counting to vote-counting software, the AEC has not “ignored” various candidates to rig the election, and the AEC did not give “incorrect” or “illegal” instructions to voters regarding senate ballots.
“Incorrect claims about electoral processes are increasingly causing damage to the reputation of different electoral systems,” an AEC spokesperson says. “The AEC is taking the steps that we can – appropriate steps – to limit the harm of such occurrences in Australia.”
It can be awkward for democratic governments to monitor and deal with online disinformation, both domestic and international, without censoring valid discourse. When does a personally held belief about electoral structures, or a narrative that might have originated with the Chinese Communist Party or the Kremlin, morph from legitimate political expression into disinformation?
Wallis believes that over the past two or three years Australia has begun to commit substantial new resources to this field in order to effectively counter disinformation campaigns. “We should have a threshold of response,” he says, “we should have indicators that allow us to make these calls.”