Facebook, Google – which owns YouTube – and Twitter struggled over the weekend to deal with the repeated reposting of the footage and the terrorist’s manifesto.

“You really need to do more @YouTube @Google @facebook @Twitter to stop violent extremism being promoted on your platforms,” the UK Home Secretary, Sajid Javid, tweeted on Friday. “Take some ownership. Enough is enough.”

Mr Morrison said that “there [are] very real discussions that have to be had about how these facilities and capabilities, as they exist on social media, can continue to be offered where there can’t be the assurances given at a technology level. Once these images get out there, it is very difficult to prevent them.”

Difficult task

Facebook, Google and Twitter have had staff across time zones working around the clock, alongside their automated detection systems, to track down and remove the video.

Completely removing the content is not easy, and the difficulty goes to the core of what these platforms were set up to do – give people an unrestrained voice to share their views. The gunman’s various social media accounts were removed, and the technology giants were also proactively searching for accounts set up in his name, to prevent impersonation and further spreading.

The technology being used to track down the video takes a visual fingerprint of the footage – in effect a hash – meaning that specific version can be banked and blocked across a platform. The problem is that the video is being downloaded and edited, then uploaded and shared again, creating a new fingerprint that the likes of Facebook, YouTube and Twitter need to track down and block.

Each edit can modify the colour or add watermarks or captions, changing the fingerprint and making the task challenging. Facebook, YouTube and Twitter have removed and blocked hundreds of different versions of the video.
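The sketch below illustrates the problem in miniature. It assumes frames are reduced to hypothetical 8x8 greyscale thumbnails and fingerprinted with a simple “average hash”; the platforms’ real systems use far more robust perceptual hashing, but the failure mode is the same – a small edit such as a watermark shifts the fingerprint, so an exact-match blocklist misses the re-upload.

```python
# Minimal sketch of the "visual fingerprint" idea. Frame data, thumbnail
# size and the hash scheme are illustrative assumptions, not any
# platform's actual method.

def average_hash(pixels):
    """pixels: flat list of 64 greyscale values (one 8x8 frame thumbnail)."""
    mean = sum(pixels) / len(pixels)
    # Each bit records whether a pixel is brighter than the frame's average.
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming_distance(a, b):
    """Number of fingerprint bits that differ between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical frame: a dark half and a bright half.
original = [30] * 32 + [200] * 32
banked = average_hash(original)   # fingerprint stored in the blocklist

# The same frame after a re-upload stamps a bright watermark in one corner.
edited = original[:]
for i in range(8):
    edited[i] = 255

reuploaded = average_hash(edited)

print(banked == average_hash(original))      # True  - exact copy is caught
print(banked == reuploaded)                  # False - edited copy slips past
print(hamming_distance(banked, reuploaded))  # 8 bits of the fingerprint differ
```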

Essentially, it’s a game of whack-a-mole.

“We are deeply saddened by the shootings in Christchurch on Friday,” a Twitter spokesman said.

“Twitter has rigorous processes and a dedicated team in place for managing exigent and emergency situations such as this. We also co-operate with law enforcement to facilitate their investigations as required. We have dedicated government and law enforcement reporting channels for illegal content. We have a specially trained team that reviews each report against the Twitter Rules and our Terms of Service, and determines whether or not it is in violation. We remain committed to working with governments around the world, including in Australia and New Zealand, to encourage healthy behaviour on the platform.”

Facebook director of policy in Australia and New Zealand Mia Garlick said: “We continue to work around the clock to remove violating content from our site using a combination of technology and people. In the first 24 hours, we removed 1.5 million videos of the attack globally, of which over 1.2 million were blocked at upload. Out of respect for the people affected by this tragedy and the concerns of local authorities, we’re also removing all edited versions of the video that do not show graphic content.”
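Facebook has not detailed how its upload blocking works, but a common approach to catching lightly edited copies is to compare each new upload’s fingerprint against the banked ones within a small bit-distance, rather than requiring an exact match. The sketch below assumes illustrative 64-bit fingerprints and an arbitrary threshold of 10 bits; it is a generic technique, not Facebook’s disclosed method.

```python
# Hedged sketch of upload-time screening via near-duplicate matching.
# BANKED_HASHES and the threshold are illustrative assumptions.

BANKED_HASHES = {0xF0F0F0F0F0F0F0F0}  # fingerprints of known versions

def is_blocked(upload_hash, threshold=10):
    # Block the upload if it is within `threshold` bits of any banked hash.
    return any(
        bin(upload_hash ^ banked).count("1") <= threshold
        for banked in BANKED_HASHES
    )

print(is_blocked(0xF0F0F0F0F0F0F0F0))  # True  - exact re-upload
print(is_blocked(0xF0F0F0F0F0F0F0F1))  # True  - near-duplicate, one bit off
print(is_blocked(0x0F0F0F0F0F0F0F0F))  # False - unrelated footage
```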

As Peter Kafka noted in Recode, quoting a 2017 post in which Facebook boss Mark Zuckerberg addressed the spread of misinformation by Russians on the platform, these platforms put up no roadblocks to hate speech – or, now, footage of a terrorist attack – before it is actually published.

“We don’t check what people say before they say it, and frankly, I don’t think society should want us to. Freedom means you don’t have to ask for permission first, and by default, you can say what you want,” Kafka quoted from a Zuckerberg response in 2017.
