The Role of Twitter Bots in the Arson Narrative of Australia’s Bushfire Crisis

On the back of Greta Thunberg’s ascension to Time’s ‘Person of the Year’, the ever-present divisiveness surrounding climate change found a new battleground: Australia’s bushfire crisis. Those who accept the science of climate change viewed the crisis as further evidence that it is real and should be a global concern. Soon enough, however, posts began popping up on social media feeds spreading the word that such people were overreacting. The crisis, it was said, was in fact caused by the concerted efforts of a string of arsonists across the country, evidenced by about 180 arrests.[1] What was the motivation of these supposed arsonists? Perhaps it was to exploit Scott Morrison’s failings of character; maybe they were undercover agents sent by Greta to spread more terror in the continued effort to bring down oil and gas giants; or maybe it was some underground society of pyromaniacs. Either way, any shift in public opinion connecting climate change to the crisis was being undermined.

#ArsonEmergency, not #ClimateEmergency

Timothy Graham, a lecturer at the Queensland University of Technology, conducted a study of Twitter activity surrounding the Australian bushfire crisis. He found that the earliest instances of information pushing the arson narrative could mostly be traced back to a Twitter hashtag, ‘#ArsonEmergency’.[2]

Unfortunately, due to Twitter’s privacy policies and other factors, it would be extremely difficult to identify the specific individual or account that first tweeted ‘#ArsonEmergency’ (although The New York Times has reported its suspicion that Rupert Murdoch is the concept’s originator).[3]

Graham provides the following example of how ‘#ArsonEmergency’ began to influence the bushfire crisis narrative early on. At the beginning of the crisis, ‘#ClimateEmergency’ was the commonly used hashtag of reference. Then, sometime in November, a tweet appeared suggesting that ‘[w]ith the vast majority of fires being deliberately lit, a better hashtag for the bushfires instead of #ClimateEmergency would be #ArsonEmergency’.

It is prudent to note the risks of taking any statement made on social media, such as this one, at face value. Hopefully this tweet caused you to ask (as it did me) where the poster obtained their statistics on how many fires were deliberately lit, and whether they had any basis for making that statement in the first place.

That being said, it appeared that several individuals did not question the validity of this claim and began to retweet the hashtag ‘#ArsonEmergency’ faster than fire can spread.

Around the same time, a rumour circulated online, and was published by some news outlets, that 180 people had been arrested for arson since the start of the bushfire season. In the months that followed, however, state police provided information and data showing this claim to be completely false.[4]

Twitter bots/trolls and their role in influencing narrative

In the past decade, there have been several occasions on which online political discussion has been effectively influenced by trolls and bot accounts. These accounts spread misinformation with the purpose of steering discussion toward a specific narrative. Bots can influence public opinion so effectively because they mimic the genuine opinions of some members of society and spread those opinions more widely, where they reach and persuade others. Whether or not those opinions have any basis in truth, we as members of the public should be wary of the fact that this is often occurring.

Bots are often programmed by an individual to spread certain information and repeat similar tasks automatically (without human oversight), such as retweeting ‘#ArsonEmergency’. Their creation is relatively simple for anyone versed in programming, and a single individual can create several fake accounts all programmed to perform similar tasks. This means that one person can make what is actually a marginally held opinion appear to be one held by a majority, ultimately lending the opinion the appearance of greater merit.
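
To give a sense of how little effort this takes, below is a minimal sketch of such a bot written with Tweepy, a popular Python client for the Twitter API. The credentials, hashtag and timing are placeholders for illustration only; this is a sketch of the general technique, not the code behind any actual account.

```python
# A minimal sketch of hashtag-retweeting automation using Tweepy.
# All credentials are placeholders; the point is how little code
# the behaviour described above actually requires.
import time

import tweepy

# Placeholder API credentials (Twitter issues these to registered apps).
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

while True:
    # Find recent tweets containing the target hashtag
    # (the method is 'search_tweets' in Tweepy 4.x; 'search' in older releases).
    for tweet in api.search_tweets(q="#ArsonEmergency", count=20):
        try:
            api.retweet(tweet.id)  # amplify each one
        except Exception:
            pass  # e.g. already retweeted; skip and continue
    time.sleep(600)  # wait ten minutes, then repeat indefinitely
```

Even this crude loop, replicated across a handful of fake accounts, would be enough to make a hashtag appear far more popular than it organically is.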

Identifying bots and fake accounts

In 2017 it was reported that bots made up 52% of all web traffic.[5] Suspicious behaviours that may indicate a bot include a high posting frequency, a lack of account information (such as a generic landscape profile picture, or none at all), and a lack of original content (mostly retweets and quoted links).[6]
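
As a toy illustration of these heuristics (not any real tool’s logic), consider the following sketch. The account fields and thresholds are hypothetical stand-ins for data one would fetch from the Twitter API; genuine detectors weigh many more signals than these three.

```python
# A toy scoring function for the three red flags listed above.
# The 'account' fields and thresholds are hypothetical illustrations.
def bot_suspicion_score(account: dict) -> int:
    score = 0
    # 1. High posting frequency (far more tweets per day than most humans).
    if account["tweets_per_day"] > 100:
        score += 1
    # 2. Lack of account information: default or generic profile picture.
    if account["has_default_profile_image"]:
        score += 1
    # 3. Lack of original content: almost everything is a retweet or link.
    if account["retweet_ratio"] > 0.9:
        score += 1
    return score  # 0 = no red flags, 3 = every red flag

suspect = {
    "tweets_per_day": 240,
    "has_default_profile_image": True,
    "retweet_ratio": 0.95,
}
print(bot_suspicion_score(suspect))  # -> 3
```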

More sophisticated methods use software specifically designed to identify bots. In his study, Graham used Tweetbotornot, Botometer and Bot Sentinel to test multiple suspicious accounts and tweets. The results showed that the vast majority of the tweets and accounts from which ‘#ArsonEmergency’ appeared to originate and spread were flagged as bots by these programs.
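
For illustration, Botometer, one of the tools Graham used, offers an official Python client (botometer-python). The sketch below follows that package’s documented usage pattern at the time of writing; the keys and account name are placeholders, and the exact fields in the result may vary between API versions.

```python
# A hedged sketch of scoring an account with Botometer via its official
# Python client (https://github.com/IUNetSci/botometer-python).
# All keys are placeholders obtained from Twitter and RapidAPI.
import botometer

twitter_app_auth = {
    "consumer_key": "CONSUMER_KEY",
    "consumer_secret": "CONSUMER_SECRET",
    "access_token": "ACCESS_TOKEN",
    "access_token_secret": "ACCESS_TOKEN_SECRET",
}
bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key="RAPIDAPI_KEY",
                          **twitter_app_auth)

# Score a single (hypothetical) account; the result includes
# bot-likelihood scores across several behavioural categories.
result = bom.check_account("@some_suspect_account")
print(result)
```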

Graham ultimately found that once ‘#ArsonEmergency’ gained traction, it was sustained by a concerted effort of about 300 accounts. He also noted that, on several occasions, suspicious accounts targeted and sought to counter expert opinions (from scientists credible in their fields).

Ramifications, legal and otherwise, for bots and the spreading of false information

Twitter does not ban users outright from employing automation for certain tasks (such as replying to follower enquiries); however, users are certainly banned from ‘spamming’. On the subject, Twitter states the following:

‘[t]rending topics: You may not automatically post about trending topics on Twitter, or use automation to attempt to influence or manipulate trending topics’

and

‘[m]ultiple posts/accounts: You may not post duplicative or substantially similar Tweets on one account or over multiple accounts you operate’.

If a user or account is found to have violated these rules, Twitter reserves the right to suspend or delete the account.[7]

In Australia, internet trolls could theoretically be found criminally responsible in certain instances under s 474.17(1) of the Criminal Code Act 1995 (Cth).[8] However, as the section is limited to the use of a carriage service to ‘menace, harass or cause offence’, it is more applicable to online bullying and harassment, and less likely to be enlivened by troll bots influencing narrative in instances such as those surrounding the bushfire crisis.

To date, there are no legal ramifications specific to the creation and use of online bots and fake accounts; however, Australia is beginning to take broader action against the dissemination of false information.

In 2019, the ACCC released the final report of its inquiry into digital platforms.[9] Given the great power and influence wielded by online digital platforms, the ACCC recommended greater government intervention in, and oversight of, their conduct and regulation. In late 2019, the Australian Government agreed to adopt the key recommendations.[10] One such recommendation is the establishment of a permanent ACCC Digital Platforms Branch which, among other tasks, will develop a ‘new code that will address the inherent power imbalance between platforms and media companies in Australia’ and an additional code of conduct ‘to address disinformation’.

Any such reforms would, of course, need to balance the need for protection against other considerations, including protecting freedom of speech and avoiding injury to online commerce domestically and abroad. Reforms would also need to navigate the difficult logistics of developing an effective code for digital platforms that is broad enough to apply to all of their different operations and purposes. Whether a suitable policy can be developed, only time will tell.

A need for more platform responsibility

Ultimately, online platforms need to acknowledge their role as today’s most prominent influencers of public opinion, and the potential for harm or good attached to that role. Since the Cambridge Analytica scandal, platforms have taken more steps to increase transparency and to root out the spread of false information; however, more proactive steps can be taken on their part. As Graham’s study showed, it was possible to identify a collaborative effort by Twitter bots to influence the Australian bushfire narrative, the effects of which were felt worldwide (I personally received several messages from concerned friends and family members abroad). If someone such as Graham, with far fewer resources than Twitter or Facebook, can investigate such an occurrence, surely the platforms could do a better job of stamping out these purveyors of false information before they induce the public into a frenzy of misplaced opinion. Lastly, there is a duty on us all, as users of social platforms, to ensure that any information or opinions we spread have a basis in fact. Hopefully spreading awareness, which is the true goal of this article, will assist in just that.


[1] As reported by The Australian <https://www.theaustralian.com.au/nation/bushfires-firebugs-fuelling-crisis-asarson-arresttollhits183/news-story/52536dc9ca9bb87b7c76d36ed1acf53f>. However, see Christopher Knaus, ‘Police contradict claims spread online exaggerating arson’s role in Australian bushfires’, The Guardian (online), 8 January 2020 <https://www.theguardian.com/australia-news/2020/jan/08/police-contradict-claims-spread-online-exaggerating-arsons-role-in-australian-bushfires>.

[2] Timothy Graham and Tobias R Keller, ‘Bushfires, bots and arson claims: Australia flung in the global disinformation spotlight’, The Conversation (online), 10 January 2020 <https://theconversation.com/bushfires-bots-and-arson-claims-australia-flung-in-the-global-disinformation-spotlight-129556>.

[3] Damien Cave, ‘How Rupert Murdoch Is Influencing Australia’s Bushfire Debate’, The New York Times (online), 8 January 2020 <https://www.nytimes.com/2020/01/08/world/australia/fires-murdoch-disinformation.html>.

[4] ‘Police figures show far fewer people in Australia have been charged with bushfire arson’, AFP Fact Check (online), 14 January 2020 <https://factcheck.afp.com/police-figures-show-far-fewer-people-australia-have-been-charged-bushfire-arson>; Knaus (n 1).

[5] Adrienne Lafrance, ‘The Internet is Mostly Bots’, The Atlantic (online), 31 January 2017 <https://www.theatlantic.com/technology/archive/2017/01/bots-bots-bots/515043/>.

[6] Straith Schreder, ‘3 Ways to Spot Fake Twitter Accounts’, Internet Citizen (online blog), 8 January 2018 <https://blog.mozilla.org/internetcitizen/2018/01/08/irl-how-to-spot-a-bot/>.

[7] See ‘Automation Rules’, Twitter (Web Page), 3 November 2017 <https://help.twitter.com/en/rules-and-policies/twitter-automation>.

[8] See Elle Hunt, ‘What law am I breaking? How a Facebook troll came undone’, The Guardian (online), 30 July 2016 <https://www.theguardian.com/media/2016/jul/30/how-facebook-troll-came-undone>.

[9] Australian Competition and Consumer Commission, Digital Platforms Inquiry (Final Report, June 2019) <https://www.accc.gov.au/system/files/Digital%20platforms%20inquiry%20-%20final%20report.pdf>.

[10] ACCC, ‘ACCC welcomes comprehensive response to Digital Platforms Inquiry’ (online media release), 12 December 2019 <https://www.accc.gov.au/media-release/accc-welcomes-comprehensive-response-to-digital-platforms-inquiry>.