
Children will have to show ID to use social media: Under-13s will be kicked off sites under tough new guidelines as firms are told to use facial recognition technology to prevent children accessing harmful content


By Meike Leonard

Published: 22:13 BST, 7 May 2024 | Updated: 08:12 BST, 8 May 2024

Social media giants could use facial recognition to prevent children accessing violent content, new Ofcom guidance suggests.

The regulator said that tech firms must rein in 'aggressive algorithms' that can leave children exposed to suicide, self-harm, eating disorders, violence or pornography.

Under the new code, companies must assess the risks posed to children by their platform's content and implement safety measures to mitigate these risks.

The latest draft of the Online Safety Rules outlines 40 practical measures to be carried out by tech companies with child users – with hefty fines for those found to be in breach.

This includes guidance on how to institute more robust age-verification measures to ensure children can't access harmful content on the platform. 


Methods deemed sufficient in the guidance include facial identification technologies, such as matching a photograph to photo ID, or apps that estimate a user's age from a photograph.

The regulator warns that existing methods – such as relying on users to self-declare that they are over 18 – will no longer be sufficient.

Platforms have also been told that they need to redesign their algorithms to filter out the most harmful content for child users. 

Most social media platforms rely on algorithms to recommend content to users that they believe will interest them or keep them scrolling.

However, as the Ofcom proposal says: 'Evidence shows that recommender systems are a key pathway for children to encounter harmful content. 

'They also play a part in narrowing down the type of content presented to a user, which can lead to increasingly harmful content recommendations as well as exposing users to cumulative harm.'

Children quoted in the new safety code, published today, cited worries over being contacted by strangers or added to chats online without their consent. 

Others mentioned wanting more restrictions on the type of images or information being recommended to them. One 15-year-old said: 'If you watch [violent content], you get more of it'.


It comes just months after violent online content was deemed 'unavoidable' for children in the UK. 

Every British child interviewed for an Ofcom study released in March had watched violent material on the internet, with some viewing it while in primary school.

Ofcom Chief Executive Dame Melanie Dawes said: 'Our measures will deliver a step-change in online safety for children in the UK. We won't hesitate to use our full range of enforcement powers to hold platforms to account.'

Child online safety campaigner Ian Russell – the father of 14-year-old Molly Russell, who took her own life in 2017 after viewing harmful material on social media – said the code, while welcome, is not enough.

'Its overall set of proposals needs to be more ambitious to prevent children encountering harmful content of the kind that cost Molly her life,' he explained.

The final safety codes are expected to be published by Ofcom at the end of 2025, with Parliamentary approval expected in spring 2026.
