
Molly’s death showed social media firms can’t be trusted to protect children

The Online Safety Act is a vital piece of legislation but Ofcom must move quickly to catch up with tech firms who have a 20-year head start

In the last six months of her life, my youngest daughter Molly was shown more than 2,000 suicide and self-harm posts on Instagram.

It was only in the weeks after she ended her life that we started to discover the torture she had suffered at the hands of social media algorithms that deluged her with life-threatening content.

Our legal team, who retrieved the posts from Instagram’s parent company Meta, as well as others from Pinterest, estimated that this number is just a fraction of what Molly saw – perhaps only five or ten per cent. During her inquest we learnt there were only 12 days in the last six months of her life that Molly didn’t engage with self-harm or suicide posts.

To this day, the posts she saw are hard for adults to view. The police officers who examined Molly’s devices said the material brought tears to their eyes; the child psychiatrist who gave evidence at Molly’s inquest said it affected his sleep for weeks; and the lawyers who sifted through tens of thousands of posts had to seek professional help to cope with what they had seen.

Even now, when we show some of what Molly, who was 14 when she died, saw on social media to MPs and regulators, grown adults have had to leave the room in tears.

That is why the Online Safety Act is such a vital piece of legislation. Four long years after it was first proposed, and six after Molly’s death, it is a crucial first step towards addressing harmful material online. It gives Ofcom new powers to regulate and fine social media companies who fall below a basic duty of care towards their users.


This has been a long time in the making, and the bill has rightly been subject to intense scrutiny and debate. I would like to thank the Telegraph for its unrelenting Duty of Care campaign, which was crucial in bringing us to this point. There have also been many diligent ministers, such as Michelle Donelan, the current Secretary of State, and courageous campaigners right across civil society such as Baroness Beeban Kidron who have made this landmark legislation a reality.

As the bill finally passes, it feels in some respects like we are crossing the finish line after a long race. But the reality is we are only just at the starting line. This process is only beginning and Ofcom needs to be quick out of the blocks to catch up with tech companies who have a 20-year head start on them.

We also must not underestimate the task in front of Ofcom, which will be coming up against the most powerful and influential companies on the planet. There is a risk the tech companies will try to skew the evidence base and tangle the regulator in appeals and litigation to protect their business models.

When faced with such powerful companies, Ofcom may be tempted to act in an initially risk-averse or largely supervisory way. This would not be good enough. We need an effective and proactive regulator that can swiftly deliver change to make social media fundamentally safe-by-design. Ofcom needs to move fast to mend things.

The regulator needs to be a robust watchdog guided by the public interest and not pressure from Silicon Valley. To be effective, the new regime needs to look at the basic design of social networks and the fundamental decisions about how they function.

For instance, at Molly’s inquest, Meta’s Head of Well-being Policy, Liz Lagone, said that many of the posts my daughter saw on Instagram were considered “safe” by the company. It is clear tech giants like Meta cannot be trusted to be the sole judge and jury on what is harmful, which is why we need an impartial yet rigorous regulator to intervene.

The problem is of course not just the content on social media, but the role played by tech companies’ powerful algorithms, which decide what every user sees and can direct them towards harm.

It was not just that there was self-harm and suicide content on Instagram and other apps, but that their algorithms scooped up thousands of posts and funnelled them towards Molly. It was not just what Molly saw but the sheer volume that we believe helped overwhelm her.

If algorithms are going to be allowed to target children as young as 13 with content, companies must be able to demonstrate they aren’t serving up posts that are harmful or even life-threatening.

Throughout this process, Ofcom must never lose sight of the most important aspect of its work: protecting children from entirely preventable harm. Young people like Molly are today sitting in their bedrooms, scrolling through dozens of posts selected for them by these tech companies. This regulation needs to make sure they are not being sucked into distorted realities where suicide and self-harm are dangerously normalised.

If this act doesn’t stop children and teenagers being pushed down lethal rabbit holes like Molly was, it will have failed.

Ian Russell founded the Molly Rose Foundation in Molly’s memory to campaign on suicide prevention