fAIr

Fight Fire with fAIr develops a prototype suite of multimodal AI-based tools to combat threats arising from the malicious use of AI.

In the era of artificial intelligence (AI), and especially with the rise of generative models, we face a landscape of complex threats. The ability to create synthetic content—voice, images, video, and text—has opened new possibilities but has also introduced significant challenges for security, privacy, and trust. The tools used to create or modify content, particularly in this new era of generative AI, are becoming increasingly easy to use and widely accessible. This is clearly exemplified by deepfakes and their negative impact in terms of fraud, misinformation, and defamation across various scenarios, including online identity verification processes, video conferences (e.g., the $25M fraud case in Hong Kong), phone calls, the creation of fake sexual content (e.g., the Almendralejo case), and misinformation in electoral processes.

Beyond the challenges posed by deepfakes, the online world faces multiple threats, including the creation of synthetic identity documents and the distribution of falsified audio, video, and image content for criminal purposes.

For this reason, it is crucial to develop a suite of multimodal AI-based tools to combat threats arising from the malicious use of AI.

Funded by: CPI. Fourth call | ED2026 | INCIBE

