At 4:30pm on 21st December, a car drove into a crowd outside Flinders Street Station in Melbourne, Australia. 12 people were injured in what police initially said was a “deliberate act”.

Pictures and video of the car’s driver, a Middle Eastern-looking man, appeared on Twitter almost immediately. The police did not link the attack to extremist terrorism; they already knew the man as a mentally ill former drug addict of Afghan descent. Another man was also arrested after police found him filming the event while carrying a bag of knives.

Based on this limited knowledge, the Twitter-sphere immediately seemed to split into two major camps. On one side were those urging fellow citizens not to jump to prejudiced conclusions about the nature of the incident:

Others dismissed the initial statements from officials, assuming the liberal left and the ‘politically correct’ were covering up the fact that Islamist terrorists were behind the attack.

Of course, this is a rough division: a user could assume a terrorist attack had happened without spreading xenophobic sentiment. Nonetheless, the tragic event raises interesting questions about the nature of opinion formation on social media and, ultimately, about users’ perception of reality.

The concept of social media bubbles has been a much debated topic over the past year. The concern is how democratic cohesion and deliberation can be preserved when our ‘friends’ and ‘follows’ on Facebook and Twitter are our primary source of news and information. Is it possible to have a meaningful societal debate if citizens fundamentally disagree about the premise of reality?

By scraping real Twitter data from the critical hours following the incident, I used statistical tools to qualify this discussion further.

The Twitter-Sphere

I retrospectively extracted 10,000 tweets at 10.30pm (local time). The oldest tweets I obtained were from 7pm, 2.5 hours after the incident occurred. The 10,000 tweets span a period of 3 hours and 8 minutes.

After inspecting my Twitter feed, I decided to extract tweets using the hashtag #Melbourne, as this was the most prominent hashtag at the time, sometimes used in conjunction with #BreakingNews, #Australia and #FlindersStreet. I tried an analysis based on #FlindersStreet, but it did not give me enough results. Unfortunately, the #Melbourne search term has the downside of being more generic. But given the relatively short extraction period, I considered #Melbourne likely to be dominated by tweets related to the Flinders Street incident. With data going further back in time, it would be possible to see changes in total tweet volume before and after the incident, further qualifying the search term’s relevance.
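The exact scraping code is not shown here, but a minimal sketch of the extraction step could look like the following, assuming Tweepy 3.x and a set of hypothetical API credentials:

```python
import tweepy

# Hypothetical placeholder credentials; real keys come from a Twitter developer account
CONSUMER_KEY, CONSUMER_SECRET = "xxx", "xxx"
ACCESS_TOKEN, ACCESS_SECRET = "xxx", "xxx"

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)
api = tweepy.API(auth, wait_on_rate_limit=True)

# Page backwards through the most recent tweets mentioning #Melbourne,
# stopping after 10,000 results
tweets = [
    status._json
    for status in tweepy.Cursor(
        api.search, q="#Melbourne", count=100,
        result_type="recent", tweet_mode="extended"
    ).items(10000)
]
```

The standard search endpoint only reaches back roughly a week, so a retrospective extraction like this has to happen soon after the event.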

To gain an initial understanding of the tragedy at Flinders Street as it unfolded on Twitter, I inspected different properties returned by the API, such as likes, retweets, username and location. A surprising amount of information could be found through such basic inquiries into the metadata.

Likes

Of the total tweet volume, only 1,300 tweets (13%) had any likes, which is perhaps not surprising given how little time had passed since the incident. 5.5% of the tweets had one like, 2% had two likes and 1% had three likes.
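As a sketch of how this tally can be made, assuming the scraped tweets have been saved to a hypothetical file and loaded into a pandas DataFrame with the API’s favorite_count field:

```python
import pandas as pd

# Hypothetical file holding the 10,000 scraped tweets, one JSON object per line
df = pd.read_json("melbourne_tweets.jsonl", lines=True)

total = len(df)
liked = (df["favorite_count"] > 0).sum()
print(f"Tweets with at least one like: {liked} ({liked / total:.0%})")

# Share of tweets with exactly one, two and three likes
for n in (1, 2, 3):
    share = (df["favorite_count"] == n).mean()
    print(f"{n} like(s): {share:.1%}")
```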

When you ‘like’ something on Twitter, it is a way of favouring the message in that particular tweet and showing allegiance to the intention behind it. The fact that the vast majority of likes were concentrated on a few tweets could be a consequence of a social (media) mechanism in times of crisis: in the immediate aftermath, users mainly rely on information from very trusted sources.

The most liked tweet was by Katie Hopkins (@KTHopkins) with 1,378 likes at the time of scraping. The tweet has since been deleted, but I managed to capture a screenshot hours after the scrape:

It turns out Katie Hopkins was a frequent tweeter during the period. In a time-span of 1 hour and 15 minutes she posted 8 unique tweets from her iPhone. They are all similar in tone and aggression: attacking liberal narratives of shared human common cause and condemning the notion of cultures standing “shoulder to shoulder”. Instead, she demands an immediate response from political leaders, acknowledging Muslims as a threat to society at large.

Of the 13 most liked tweets in the data, a staggering 6 came from the Katie Hopkins profile. This gives a sense of her function as a major gatekeeper of information on Twitter in relation to the incident. She could potentially be a ‘key node’, connecting and spreading information to otherwise isolated networks.

This prompts the question: what kind(s) of network responded to and consolidated her messages so quickly and in such numbers? The three most liked tweets are all from Hopkins, followed by Amy Mek (@AmyMek) with a tweet of 451 likes. Amy Mek has one other tweet in the top 13:

The metadata from the @AmyMek profile tells us she is from the US and has 205,000 followers. The tweets from her profile are clearly aimed at an alt-right audience: one tweet goes as far as claiming Sweden has areas ruled by Islamic law. To back up the claim, she links to a distorted news piece from the Russian state-financed outlet RT.

All in all, of the 13 most liked tweets, 8 seemed to support a far-right narrative, 4 had a liberal-humanistic narrative and 1 was news-related.

Tweets

The 10,000 tweets in the dataset came from 6,640 unique usernames. The most active tweeter during the roughly three-hour period was @hitech_guru (Sandeep Shenoy) with 42 tweets, or roughly one tweet every five minutes.
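A small sketch of this count, reusing the hypothetical DataFrame from the earlier sketch and assuming the tweets’ user objects have been flattened into a screen_name column:

```python
# Count tweets per username and find the most active account
user_counts = df["screen_name"].value_counts()

print("Unique usernames:", df["screen_name"].nunique())
print("Most active user:", user_counts.index[0], "with", user_counts.iloc[0], "tweets")
```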

Unlike @KTHopkins or @AmyMek, @hitech_guru doesn’t get many likes or retweets on his tweets. The profile is based in Toronto, Canada, has been active since 2008 and has a couple of thousand followers.

All of that is normal. But the profile has also tweeted more than 138,000 times, and looking through the feed it is clear that it tweets at any hour of the day. It is automated to some degree. However, the profile doesn’t seem to have any function other than running through news sites and tweeting news from around the world. It is neither left- nor right-leaning, and it has no particular focus on Australia or terror attacks as such. It might be a pure bot, or a kind of cyborg partly controlled by a real person.

Sentiment Analysis

From the top 13 most liked tweets, we saw that the far-right messages outnumbered the liberal ones two to one. But how do we determine this ratio at the aggregate level? Naturally, it would be cumbersome to count all the tweets by hand. Instead, it is possible to perform a sentiment analysis using Python’s TextBlob library.

The library is based on a machine learning model used in natural language processing. I used a pre-made classifier, even though it would be possible to build my own using the tweets as training and testing data. It should be noted that this classifier is not built to analyse political sentiment in tweets but to distinguish between more general positive and negative sentiment.
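A minimal sketch of this kind of scoring, assuming the tweet texts are available as a list of strings (the cut-offs for ‘positive’ and ‘negative’ are my own illustrative choice):

```python
from textblob import TextBlob

def label_sentiment(text):
    """Map TextBlob's polarity score (between -1 and 1) to a coarse label."""
    polarity = TextBlob(text).sentiment.polarity
    if polarity > 0:
        return "positive"
    if polarity < 0:
        return "negative"
    return "neutral"

# `tweet_texts` is assumed to hold the text of the 10,000 scraped tweets
labels = [label_sentiment(text) for text in tweet_texts]
for label in ("positive", "neutral", "negative"):
    print(f"{label}: {labels.count(label) / len(labels):.1%}")
```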

By this measure, the tweets divide more evenly between positive (21.8%) and negative (29.7%). Almost half of the tweets had a neutral sentiment.

An argument could be made that the more aggressive, xenophobic tweets from profiles like Katie Hopkins’ would receive a more negative sentiment score than the more liberal tweets. However, it is hard for the TextBlob algorithm to differentiate between comments on the incident itself (e.g. “what a horrible accident…”) and xenophobic comments (e.g. “the horrible Muslims”). This would probably require an independently trained classifier, as noted above; a hypothetical sketch is shown below.
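For illustration only, TextBlob ships with a trainable Naive Bayes classifier that could be fed hand-labelled tweets. The labels and training examples here are made up; a real classifier would need a much larger labelled sample:

```python
from textblob.classifiers import NaiveBayesClassifier

# Hand-labelled training examples (purely illustrative)
train = [
    ("Thoughts are with everyone at Flinders Street tonight", "liberal-humanistic"),
    ("Please don't jump to conclusions before the facts are in", "liberal-humanistic"),
    ("Stop pretending this wasn't terror", "far-right"),
    ("The politically correct are covering it up again", "far-right"),
]

classifier = NaiveBayesClassifier(train)
print(classifier.classify("Praying for Melbourne, please wait for the facts"))
```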

Opinion Formation

The Twitter response to the Flinders Street incident is an interesting case study of how the structure of social media can affect users’ perception of reality. A user like @AmyMek represents the foreign ally of the alt-right movement, drawing on a network of alternative media such as RT.com (but it could also be Breitbart, Infowars etc.). They have no specific interest in Melbourne or Australia other than promoting a worldwide resentment against multiculturalism and racial diversity.

@hitech_guru showed how bots are an integral part of the Twitter ecosystem (some claim that 15% of accounts could be bots).

Finally, the tweets of Katie Hopkins were a perfect example of how reality gets distorted on social media. In those critical hours after the attack, she took advantage of her position as an information gatekeeper and boosted an unverified message: that Muslim extremists were behind the attack. This only reinforced the confirmation bias of her peers, and when she eventually deleted the tweets, the ‘damage’ had already been done.

The Flinders Street incident shows different facets of how political factions use Twitter to compete for control over users’ opinion formation.