fAIr
Start / End
Sep 2024 / Sep 2026
Code
PR-01568

Fight Fire with fAIr develops a suite of multimodal AI-based tools to combat threats arising from the malicious use of AI.

In the era of artificial intelligence (AI), and especially with the rise of generative models, we face a landscape of complex threats. The ability to create synthetic content—such as voice, images, videos, and text—has opened new possibilities but has also introduced significant challenges to security, privacy, and trust. The tools used to create or modify content, particularly in this new era of generative AI, are becoming increasingly easy to use and widely accessible. This is clearly exemplified by deepfakes and their negative impact in terms of fraud, misinformation, and defamation across various scenarios, including online identity verification processes, video conferences (e.g., the $25M fraud case in Hong Kong), phone calls, the creation of fake sexual content (e.g., the Almendralejo case), and misinformation in electoral processes.

Beyond the challenges posed by deepfakes, the online world faces multiple threats. These include the creation of synthetic identity documents and the distribution of falsified audio, video, and image content for criminal purposes.

For this reason, it is crucial to develop a suite of multimodal AI-based tools to combat threats arising from the malicious use of AI.

Financing
INCIBE
CPP4 - CPP001/24
Consortium
VICOMTECH (Collaborator), FUAM - Fundación Autónoma de Madrid (Collaborator), GTM - Grupo de Tecnologías Multimedia (Universidade de Vigo) (Collaborator), GPSC - Universidade de Vigo (Collaborator), S.M.E. Instituto Nacional de Ciberseguridad de España M.P., S.A. (Client)