Meta End-to-End Encryption to Impact Child Safety
Meta, the parent company of Facebook and Instagram, has been applauded for cracking down on child sexual abuse material on its platforms and services. But that may change now that the company has begun to roll out end-to-end encryption.
Encryption will protect users from abusive legal requests by non-democratic governments. However, it also means Meta itself cannot access the content of messages, even if it receives a lawful order to do so. This has raised concerns.
Child safety groups warn that encryption will impact online safety for children.
Encryption Hits Child Safety
The National Center for Missing and Exploited Children (NCMEC) said end-to-end encryption would make communications on the platform go dark, a devastating blow to child protection. A senior official at the United Kingdom’s National Crime Agency, James Babbage, said the move was hugely disappointing and undermined the agency’s role in protecting children from sexual abuse and exploitation. He stressed that the social media company has an important responsibility to keep children safe on its platforms, and that, sadly, this will no longer be possible.
Meta notifies the NCMEC when it detects child predators on its platforms attempting to contact children, and files millions of reports each year when users upload media containing child sexual exploitation. The NCMEC warned that 70 percent of these reports, and possibly as many as 85 to 92 percent, could be lost once end-to-end encryption is implemented, which it argues will blind Meta’s monitoring teams to content that reveals abusive behavior.
Security officials say the bar is much lower for Meta to ban users based on suspect signals than it is for law enforcement to prosecute offenders and safeguard children.
Meta Uses Machine Learning
Meta said its systems are designed to identify suspicious behavior, then restrict account features to make it harder for those users to find and contact people they don’t know, including children, thereby disrupting potential harm before it happens.
The company uses machine learning to detect patterns of behavior and stop predator accounts before they can contact children or share content. “Bad actors often reveal their intentions with obvious public signals, including child-sexualized content, code language in bios or joining questionable groups. Much like email spam filters, analyzing behavioral signals in a private space with privacy-preserving techniques provides opportunities to detect bad actors connecting with one another, and most importantly, to detect when they may be targeting victims.”
Meta said it also limits who can message children, makes teen profiles private by default, and restricts the ability to find teen profiles through search outside Facebook.