Ofcom adds pressure on tech industry to protect children online



The UK media regulator has set out its expectations for technology companies to protect underage users from harmful content online as stringent new legislation comes into effect.

On Wednesday, Ofcom outlined more than 40 measures, such as age verification and better moderation, that online services must implement to keep under-18s safe and meet incoming legal requirements.

“Our proposed codes firmly place the responsibility for keeping children safer on tech firms,” said Ofcom chief executive Melanie Dawes. “They will need to tame aggressive algorithms that push harmful content to children in their personalised feeds and introduce age-checks so children get an experience that’s right for their age.”

These codes form part of the UK’s Online Safety Act, which was passed in October 2023 and is considered one of the broadest and strictest legislative frameworks for the internet.

The measures “go way beyond current industry standards”, Dawes said, adding that, once the laws are fully in force, Ofcom “won’t hesitate to use [its] powers to hold platforms to account”.

The Act gives Ofcom substantial powers to impose hefty fines and criminal liability on tech companies that fail to protect under-18s. Ofcom’s document comes days after the regulator cracked down on OnlyFans, a streaming platform used by sex workers, over concerns that the site’s age-verification process is failing to block children from accessing pornography.

Ofcom’s codes will apply to all internet services that children can access, including social media networks and search engines. In particular, online services must prevent children from encountering suicide and self-harm material, pornography and content related to eating disorders. Companies must also minimise children’s exposure to other harmful content, such as violence, bullying and dangerous viral challenges.

“There is an urgency to this,” said Gill Whitehead, who leads Ofcom’s online safety implementation. “Therefore, we are working with the largest and riskiest firms already on a one-to-one basis to ensure that they understand what it is going to take for them to comply with this new law and that they’re putting in some of those changes as soon as possible.”

Tech platforms are also obliged to name an individual who is accountable for compliance with these safeguarding rules.

Whitehead said that the biggest tech groups were already engaging with their new responsibilities and making changes ahead of time. 

For example, she highlighted that Meta, which owns Facebook and Instagram, changed its messaging features in January to prevent teenagers from being contacted by people they do not follow or are not connected to.

She said these changes were made in order to tackle grooming, as predators often adopt a “scattergun” approach to messaging children.

Instagram and TikTok have recently introduced age-verification tools that estimate users’ ages based on selfies to prevent under-13s from using their products.

The codes announced on Wednesday are now open to consultation and will be finalised within a year. After that, services will have three months to conduct risk assessments before complying with the rules.
