“I don’t see evidence that suggests BLM is driven by bot activity,” says Jensen. “By and large this appears to be driven by authentic activity.”
Nonetheless, it is likely bots are active on the topic. In fact, more than 70 accounts created since May 25, 2020, were active around the hashtags and appear to be bots.
“There are bots tweeting about BLM but they are not a significant influence in the overall volume of the data.”
Elsewhere, there has been anecdotal evidence of bot activity.
On June 1, as protests gripped American cities, several inauthentic accounts shared posts with the hashtag #dcblackout, making false claims that Washington DC had suffered an internet and mobile phone network blackout. A second wave of apparently bot-like activity asserted the first wave was “misinformation”.
A spokeswoman for Twitter said: “We’re proactively taking action on any coordinated attempts to disrupt the public conversation.
“We are also actively investigating hashtags and have already suspended hundreds of spam accounts,” she added.
Jensen cautioned that foreign interference was still possible around the BLM topic, especially efforts to shift terms of debate to identity politics and away from interest politics, as interests are negotiable in a way identities are not.
While not commenting on Jensen’s findings, QUT Digital Media Research Centre professor Axel Bruns says it is quite likely there would be bot activity around racial tension in the US, whether driven by political or commercial motives.
“With almost any major event on social media sites like Twitter and Facebook, it will attract some level of bot activity, whether by people looking to directly influence the event itself or by others looking to spam and push other information into the info space,” Bruns says.
However, he adds, the mood in the US is so tense that little effort would be needed to stoke conflict.
There are a few different ways to automate messages on social media, including writing computer code that navigates Twitter’s website to create accounts automatically, Bruns says.
There are also services that sell bot activity, and click-farms, in which companies set up walls of mobile phones that can be remotely controlled to generate a mass of online activity.
The tactics “depend on how much criminal energy you have and how far you want to go down that track”, Bruns says.
Platforms have made life harder for bot operators. To give a sense of scale, in the last available data from a year ago, Twitter issued 15.3 million challenges to accounts for spam-like behaviour.
But bot users continue to innovate.
Twitter now requires a linked phone number and email address for new accounts, so bot operators buy up single-use SIM cards to try to get around the requirement.
“It’s an arms race,” says Bruns.
While the term “bot” conjures images of empty “egg” accounts with no picture and a computer-generated handle, the real issue for Twitter is “platform manipulation”, which includes “the malicious use of automation”.
This week, Twitter announced the removal of three state-sponsored networks from the People’s Republic of China, Turkey and Russia.
Reports of the militant anti-fascist group Antifa descending on small American towns have flared across social media, including on Facebook, and through text messages. Antifa, which in reality has very few adherents, serves as a catch-all term used by the US right to describe opposition to President Donald Trump.
Controlling the framing of a debate on social media gives an upper hand in shaping evolving events, Bruns and Jensen say.
The Atlantic Council’s Digital Forensic Research Lab found a “surge” of antifa-related content flowing from May 25 to June 7 on social media, receiving 27 million shares, with three-quarters of those from right-leaning media outlets.
“Many of these stories are alarmist in nature, misrepresenting or fabricating violent incidents in order to maximise their digital traction,” the group said.
Since the George Floyd protests began, an effort has also been underway to promote the remedy of “defunding the police”. As the #defundthepolice hashtag suggests, the action could have dramatic consequences for cities that dismantle their police departments.
In Minneapolis, where the protests began, a majority of city council members have signalled their intention to disband the police force as it currently exists, a move opposed by the mayor that would create considerable political uncertainty.
The #defundthepolice hashtag is not the product of automation; it dates back to at least 2014.
One of the biggest risks from bots, cadres of trolls and other forms of coordinated activity is their power to shape the agenda of legitimate news gathering.
In a time of fast-moving events, it is too easy for reporters and editors to read signals from “trending terms” on social media, which can be manipulated. Even when they are not manipulated, nothing prevents a topic with no basis in reality from trending.
As of Thursday afternoon, #Floydhoax was trending on Twitter.
Bruns says there is another form of coordinated behaviour, driven by real users motivated by conspiracy theories or a shared fringe worldview. This is much harder to deal with, because the users are not doing anything technically untoward yet are still promoting divisive untruths.
The groups have “weaponised media literacy”, in which they apply a critical reading to mainstream news in order to reaffirm their trust in fringe news sources, he adds.
They twist media literacy into something that can inoculate them against generally accepted views.
In this worldview, all the genuine reporting on the death of George Floyd is proof of the mainstream media’s “agenda” around race. By the same token, posts claiming that Floyd didn’t actually die in police custody are judged worthy of at least entertaining and sharing.
Chris is Digital Foreign Editor.