How to Protect Minors from Malicious Deepfakes  

One in five young people in Spain reports that, while underage, someone shared nude images of them generated using artificial intelligence. This is one of the findings in the latest report by Save the Children, which warns of the growing use of AI tools for the sexual exploitation of minors in digital environments. Moreover, 97% of those surveyed say they experienced some form of online sexual violence during childhood or adolescence. 

These results come from a survey of over one thousand young people aged 18 to 21. The report distinguishes between several forms of digital violence: from the production and distribution of child sexual abuse material to blackmail, sextortion, and the generation of manipulated content using AI tools. These technologies can produce fake images later used to threaten, extort, or humiliate. 

These data highlight the urgent need to develop tools that prevent the malicious use of this technology. That’s why we’re creating solutions to help detect and stop this kind of abuse. 

AI as a Protective Tool: This is How fAIr Works 

fAIr is our artificial intelligence-based platform for detecting AI-generated content. Its goal is to serve as an effective defense against malicious uses of generative AI, from manipulated images and videos to cloned voices, by providing companies, governments, and users with reliable ways to verify the authenticity of digital content. We are also working to prevent image manipulation carried out with some of the most accessible and widely used AI editing tools.

With a comprehensive approach, fAIr cross-analyzes audio, image, and video to detect manipulations. This makes it possible to identify whether a seemingly real video has been artificially generated, whether a voice has been cloned, or whether an image has been maliciously altered. Detection is further enhanced by traceability tools that help track how and where manipulated content has spread. 
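
To make the cross-modal approach concrete, the sketch below shows one way per-modality detector scores could be fused into a single verdict. This is a minimal, hypothetical illustration, not fAIr's actual API: the ModalityScore structure, the weights, and the 0.9 single-modality threshold are all assumptions made for the example.

```python
# Hypothetical sketch of multimodal score fusion (not the fAIr API).
# Each modality detector returns a score in [0, 1], where higher means
# more likely synthetic; the fusion step combines them into one verdict.

from dataclasses import dataclass

@dataclass
class ModalityScore:
    modality: str  # "audio", "image", or "video"
    score: float   # 0.0 = likely authentic, 1.0 = likely synthetic
    weight: float  # relative trust placed in this detector

def fuse_scores(scores: list[ModalityScore], threshold: float = 0.5) -> tuple[float, bool]:
    """Weighted average of per-modality scores, with an override that
    flags content when any single modality is highly suspicious."""
    total_weight = sum(s.weight for s in scores)
    combined = sum(s.score * s.weight for s in scores) / total_weight
    # A confident hit in one modality (e.g. a cloned voice laid over
    # genuine video) should flag the content even if the average is low.
    strong_hit = any(s.score > 0.9 for s in scores)
    return combined, combined > threshold or strong_hit

if __name__ == "__main__":
    # Example: real-looking video whose audio track is a cloned voice.
    combined, flagged = fuse_scores([
        ModalityScore("video", score=0.20, weight=1.0),
        ModalityScore("audio", score=0.95, weight=1.0),
        ModalityScore("image", score=0.30, weight=0.5),
    ])
    print(f"combined score: {combined:.2f}, flagged: {flagged}")
```

The override is the point of the example: a plain average can mask a confident detection in one modality, which is exactly the case a cross-analysis of audio, image, and video is meant to catch.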

In the context of minors, tools like fAIr could be essential to prevent the circulation of sexual deepfakes, rapidly identify manipulated content, and provide technical evidence to support legal complaints or content takedown requests. 

Towards Legislation That Protects Our Digital Identity 

As technology advances, some countries are beginning to take steps to protect identity in digital spaces. Denmark, for example, is on track to become the first European country to legally recognize copyright over a person’s face, voice, and body. This legislative reform would allow any citizen to request the removal of AI-generated content created without their consent, such as fake videos, manipulated audio, or sexualized images, and to seek compensation for damages. The measure aims to curb the rise of malicious deepfakes, from non-consensual pornography to voice scams, and could serve as a model for the rest of Europe. Proposals like this show that protecting against AI misuse requires not only technological solutions but also updated legal frameworks.

From Digital Threat to Multi-Million Dollar Fraud 

Minors are not the only victims of malicious AI use. In recent years, other types of fraud exploiting generative AI have surged: from identity theft using voice cloning (voice hacking), such as fake calls claiming a family member has been kidnapped or a CEO urgently requesting a money transfer, to low-cost scams that use manipulated images to file false claims with food delivery platforms or insurers.

According to the Deepfake Trends 2024 report, fraud based on synthetic content costs companies over $450,000 on average, a figure that rises to $637,000 in the case of fintech firms. And while 56% of companies trust their ability to detect such fraud, only 6% have actually managed to prevent it. 

What Can We Do? 

Faced with this new wave of cybercrime, human judgment alone is no longer enough. Our eyes and ears can no longer be fully trusted. We need technology that helps us verify what we see and hear, and we need that technology to be accessible to those who need it most: minors, educators, families, businesses, and public institutions.

Initiatives like Denmark’s proposed law granting copyright over one’s face and voice, or campaigns like Save the Children’s #DerechosSinConexión (#RightsOffline), are important first steps to protect citizens from malicious uses of AI. But for such measures to be effective, they must be supported by robust technical solutions. At Gradiant, with fAIr, we aim to play an active role in that protection. 

This publication is part of fAIr (Fight Fire with fAIr), funded by the European Union’s NextGenerationEU funds and the Spanish Recovery, Transformation and Resilience Plan (PRTR), through INCIBE.

The views and opinions expressed are solely those of the author(s) and do not necessarily reflect those of the European Union or the European Commission. Neither the EU nor the EC is responsible for them.