Sovereign data spaces (I): privacy, risks and solutions for a safer Europe

Data spaces are technical and governance ecosystems where multiple actors, from public institutions to companies and startups, share and use data under common rules. Their structure relies on three essential pillars: interoperability, access control, and traceability, ensuring the transparent and secure use of information.

Their main value lies in their ability to break information silos, foster collaboration among entities, and increase the economic value of data without breaching regulations such as the GDPR or the eIDAS2 Regulation.

Indeed, data spaces are becoming the cornerstone of Europe’s digital sovereignty strategy, as they allow participants to retain control over their information, decide how it is shared, and under what conditions.

The European approach seeks to move from a simple data exchange model to one of responsible data governance. This means setting clear rules for access, use, and reuse of information; promoting interoperability; and removing technical and legal barriers.

In this way, Europe is moving towards a collaborative and sovereign data economy, where innovation is built on trust, transparency, and respect for citizens’ digital rights.

Risks in the Use and Processing of Personal Data

The use of data, particularly personal or sensitive data, involves significant technical and ethical risks, both in authentication and access control processes and in the training of artificial intelligence models.

Access Control Risks

In sectors such as healthcare, finance, or public administration, authentication systems must ensure two essential principles:

  • Data minimization: data sharing must be strictly limited to the information necessary to identify the user. Moreover, users should be able to verify that they are not being asked for more information than required.
  • Avoiding correlation and tracking: two different services (for example, a public administration and a bank) should not be able to identify that they are interacting with the same user when requesting a similar verification, such as proof of legal age.
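
The unlinkability requirement above is often met with pairwise pseudonymous identifiers: the wallet derives a different identifier for each relying party from a single user secret, so two services cannot match their records. The sketch below is purely illustrative (an HMAC-based derivation with made-up service names), not the EUDI Wallet's actual mechanism:

```python
import hashlib
import hmac

def pairwise_pseudonym(user_secret: bytes, service_id: str) -> str:
    """Derive a service-specific pseudonym: the same user looks
    different to each relying party, preventing cross-service linking."""
    return hmac.new(user_secret, service_id.encode(), hashlib.sha256).hexdigest()

secret = b"user-master-secret"  # held only in the user's wallet

id_for_bank = pairwise_pseudonym(secret, "bank.example")
id_for_gov = pairwise_pseudonym(secret, "admin.example")

print(id_for_bank != id_for_gov)  # True: the two services cannot correlate the user
print(id_for_bank == pairwise_pseudonym(secret, "bank.example"))  # True: stable per service
```

Each service still gets a stable identifier for its own sessions, but comparing identifiers across services reveals nothing.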

Risks in AI Model Training

Training models on personal data introduces additional risks:

  • Membership inference: determining whether a specific data point was part of the training set, for example, inferring whether a person was a patient in a hospital or a client of a bank.
  • Model inversion: some models can “remember” representations of their training data, which could allow personal information to be reconstructed from the model itself.
  • Data poisoning: small malicious alterations to training data can lead a model to learn incorrect patterns, with severe consequences in critical applications such as medical diagnosis or public safety.
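
A toy example makes the poisoning risk concrete. The nearest-centroid "model" and data below are entirely made up for illustration; the point is that a handful of mislabeled points injected into the training set is enough to flip a prediction:

```python
def centroid(points):
    """Per-coordinate mean of a list of points."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def classify(x, c_pos, c_neg):
    """Assign x to the class with the nearest centroid (squared distance)."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return "pos" if dist(x, c_pos) < dist(x, c_neg) else "neg"

# Clean training data: two well-separated clusters.
pos = [[1.0, 1.0], [1.2, 0.9], [0.9, 1.1]]
neg = [[-1.0, -1.0], [-1.1, -0.9], [-0.9, -1.2]]

x = [0.3, 0.3]  # a borderline sample, truly "pos"
clean = classify(x, centroid(pos), centroid(neg))

# Poisoning: the attacker injects mislabeled points into the "neg" class,
# dragging its centroid into the positive region.
poisoned_neg = neg + [[1.5, 1.5], [1.4, 1.6], [1.6, 1.4]]
poisoned = classify(x, centroid(pos), centroid(poisoned_neg))

print(clean, poisoned)  # the prediction for x flips from "pos" to "neg"
```

Real models are far more complex, but the failure mode is the same: training blindly trusts the data, so corrupted inputs silently corrupt the learned decision boundary.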

PETs: Technologies for Sharing Without Exposing

To address these risks, Privacy-Enhancing Technologies (PETs) have emerged: a set of technologies designed to preserve privacy during data use, sharing, or analysis.

Their main goal is to enable computation on data without revealing it in its raw form, or revealing only what is strictly necessary, thereby reducing the risk of leakage or misuse.

In general, PETs address four key dimensions of privacy:

  • Input privacy: adds a confidentiality layer to data before processing.
  • Output privacy: protects data after processing, ensuring results do not expose sensitive information.
  • Input verification: ensures the authenticity and integrity of the data being used.
  • Output verification: guarantees that operations performed on data are verifiable, enabling proof that calculations or inferences are correct.

PETs may rely on cryptographic mechanisms (such as homomorphic encryption or zero-knowledge proofs) or on statistical methods (such as differential privacy). Each technology entails a different balance between privacy, accuracy, and performance: some slightly reduce analytical quality in exchange for greater protection; others increase processing time.
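Differential privacy illustrates the statistical side of this trade-off well. Below is a minimal, illustrative sketch of the Laplace mechanism for a counting query (sensitivity 1, so the noise scale is 1/ε); the dataset and query are made up:

```python
import math
import random

def dp_count(records, predicate, epsilon: float) -> float:
    """Differentially private count via the Laplace mechanism.
    A counting query has sensitivity 1, so the noise scale is 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    # Inverse-CDF sampling of Laplace noise with scale 1/epsilon.
    u = random.random() - 0.5
    noise = -(1 / epsilon) * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

ages = [34, 71, 45, 68, 29, 80, 52]
noisy = dp_count(ages, lambda a: a >= 65, epsilon=1.0)
print(round(noisy, 2))  # close to the true count (3) but randomized
```

A smaller ε injects more noise, strengthening privacy at the cost of accuracy, which is exactly the privacy/accuracy balance described above.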

Technical and Regulatory Challenges

The main technical challenge lies in evolving PETs to balance privacy with usability. It is unlikely that a single technology can cover all use cases, but combining several techniques can offer strong and adaptable protection.

From a legal perspective, the challenge is to translate regulatory requirements into verifiable technical guarantees, ensuring that the protections offered by a PET directly correspond to the obligations of the European legal framework. Only then will widespread adoption be possible, achieving a true balance between innovation and regulation.

 

TRUSTED: Making Data Sovereignty a Reality in Europe

In this context, TRUSTED, a project led by Gradiant, aims to make data sovereignty tangible by combining data spaces, digital identity technologies, and PETs within a coherent and secure architecture.

The alliance integrates three interrelated technological blocks:

  • Data spaces, which provide the infrastructure to discover entities, share information, and establish secure exchange channels.
  • The European Digital Identity Wallet (EUDI Wallet), which offers a trust framework for verifiable identification of entities and users, giving them control over the information they share.
  • Federated learning + PETs, enabling different organizations to collaboratively train AI models without exchanging original data, thereby minimizing exposure and reidentification risks.
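
The federated learning idea in the last bullet can be sketched in a few lines: each client takes a gradient step on its own private data, and only the resulting model weights are averaged by the server (federated averaging). This is a deliberately minimal illustration with made-up data, not the TRUSTED architecture:

```python
def local_step(w, data, lr=0.1):
    """One local gradient step of least-squares y ~ w*x on a client's private data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(w_global, clients):
    """Each client trains locally; only model weights leave the client."""
    local_weights = [local_step(w_global, d) for d in clients]
    return sum(local_weights) / len(local_weights)  # FedAvg: simple mean

# Each client's raw (x, y) pairs stay on-premise; the true relation is y = 2x.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0), (4.0, 8.0)],
]

w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
print(round(w, 3))  # converges toward 2.0
```

The server never sees the raw pairs, only weights; in practice the exchanged updates are additionally protected with PETs such as secure aggregation or differential privacy, since weights alone can still leak information.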

With this combination, TRUSTED empowers citizens and organizations to maintain control over their data, ensures privacy and security in data exchanges, and strengthens trust in European digital services.

 

Why TRUSTED

TRUSTED not only accelerates secure innovation in sectors such as healthcare, finance, and public administration, but also reduces regulatory friction by offering solutions aligned with European privacy and digital sovereignty policies.

By providing verifiable privacy guarantees, it increases organizations’ confidence to participate in data spaces, thereby expanding the amount of data available to train more accurate AI models.

Ultimately, TRUSTED strengthens Europe’s technological autonomy and demonstrates that privacy and innovation are not opposing forces, but the twin pillars upon which to build a safer, more ethical, and more competitive data-driven economy.

This project has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement No. 101168467