The Christchurch example shows the ever-evolving nature of internet harm and how profoundly the risks have changed during eSafety’s first five years. In 2015, we simply wouldn’t have imagined a terror attack could be posted live on Facebook, or – as occurred just months later – on a gaming platform such as Twitch.
Moreover, five years ago, iPhones with advanced video-streaming capability weren’t routinely being placed in the hands of pre-schoolers, leaving them vulnerable to confronting content and contact from strangers on seemingly benign, fun platforms such as Roblox, TikTok and Snapchat. Now add to this roiling threat landscape a pandemic that has forced the world to transfer so much of its social, educational and economic activity online. There is no question that the internet and smartphones have become “essential utilities”.
To be fair, the major platforms have evolved from their position in 2015, when user safety was a footnote. Facebook, Google and other companies are finally investing in both human and AI systems to block harmful content and altering other policies and processes to better meet the threats.
Is it enough? Not nearly. As the meteoric rise of Zoom during the pandemic has shown, we are still in what I call the “wash-rinse-repeat” cycle, in which a new product is massively scaled up without any of the safety systems needed to protect its millions of new users. Enter “zoombombing” and the post-hoc scramble to deal with it.
Until the tech companies adopt what we call safety-by-design – understanding the risks and mitigating them by building in safety protections at the front end – there will continue to be online trainwrecks, with the safety fixes retrofitted after the damage has been done.
That is why, as I now look down the road at the coming five years of online safety regulation, I want to make sure we get ahead of these issues – or our children will be the detritus left behind.
I cannot sugarcoat this. I fear that the connected devices we provide our children – not just phones and tablets, but dolls and toys and other products – will leave them increasingly vulnerable to a range of nasties, from scammers to bullies, and even to paedophiles.
I fear that toxic online behaviours, such as bullying, abuse, and casual sexting will become far more ingrained – especially given how well we adults model these poor behaviours for our kids. We see every day how these more normalised behaviours can go wrong, with devastating effect. I also fear that the tsunami of hard-core pornography that floods the internet will create a generation who think that violence and domination are normal ways to express sexual intimacy.
And as the parent of three young children, I fear for them, too, and sometimes feel paralysed about how to protect them, knowing all that I know. Sure, there are promising technical assists, such as age-verification for pornography sites, potentially on the horizon. At eSafety, we’ll continue to do our level best to counter existing threats, as well as using our technical capabilities to throw forward and anticipate emerging ones.
But as far as children are concerned, no technology and no agency can function as the principal barrier between them and the harms I’ve described. That barrier is us: their parents. We need to become just as involved in our kids’ online lives as we are in their everyday lives and give them the support and critical reasoning skills they need to navigate this complex online world.
You wouldn’t drop your six-year-old off at the Bourke Street Mall at night and tell them to wander freely. So why would you let them roam – unattended and unprotected – across the chaotic badlands of the internet?
Julie Inman Grant is eSafety Commissioner for Australia.