Embedding EU Legal Obligations in AI Cybersecurity
Within the cPAID project, AI-enabled cybersecurity is developed alongside a legal and ethical perspective. In particular, the data fabric and related modules are designed in a context where privacy and data protection by design, security by design, and transparency obligations are not abstract principles, but requirements that shape system architecture, governance, testing, and documentation choices.
Ethical and Legal Management
cPAID includes an Ethical and Legal Management task led by VUB. Its role is to identify ethical and legal issues that arise during the project and to support the consortium in selecting and applying appropriate mitigation measures under the applicable framework. A central emphasis is the protection of privacy and data protection rights under the GDPR and the AI Act, together with the requirements that govern the processing of personal data in an experimental setting. This includes principles such as fairness, lawfulness, and compatible use, as well as relevant sectoral rules.
From Legal Principles to Design Constraints
Under the GDPR, data protection by design and by default (art. 25) requires embedding privacy-preserving measures into the architecture of systems from inception to deployment, including technical and organisational measures such as pseudonymisation and data minimisation. In practice, this maps to role allocation, consent policies for pilots where relevant, facilitation of data subject rights, and logging and record-keeping.
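As an illustration only (not part of the cPAID codebase), the art. 25 measures mentioned above can be sketched in a few lines: a record is first minimised to the fields actually needed for the processing purpose, and direct identifiers are then replaced with keyed pseudonyms. All field names and the key-handling detail are hypothetical assumptions for the sketch.

```python
import hmac
import hashlib

# Illustrative only: in practice the key lives in a key-management service
# and is rotated; holding it is what makes the pseudonym reversible (art. 4(5)).
SECRET_KEY = b"rotate-me-and-store-in-a-kms"

# Data minimisation: only these (hypothetical) fields serve the stated purpose.
ALLOWED_FIELDS = {"event_type", "timestamp", "user_id"}

def pseudonymise(value: str, key: bytes = SECRET_KEY) -> str:
    """Keyed pseudonymisation: stable for linkage, reversible only via the key holder."""
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()[:16]

def minimise_and_pseudonymise(record: dict) -> dict:
    """Drop fields not needed for the purpose, then pseudonymise the identifier."""
    out = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    if "user_id" in out:
        out["user_id"] = pseudonymise(out["user_id"])
    return out

raw = {"event_type": "login", "timestamp": "2025-01-01T10:00:00Z",
       "user_id": "alice@example.com", "home_address": "1 Main St"}
safe = minimise_and_pseudonymise(raw)
# home_address is dropped entirely; user_id survives only as a keyed pseudonym
```

The point of the sketch is architectural: minimisation and pseudonymisation happen before the record enters downstream analytics, which is what "by design and by default" asks for.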
While the GDPR does not explicitly refer to “security by design,” art. 32 (security of processing) is commonly operationalised as such: implementing risk-appropriate technical and organisational measures (e.g., encryption, access control, resilience and recovery) and maintaining processes for the regular testing and evaluation of safeguards. Transparency obligations (arts. 13 and 14) also remain relevant in security testing contexts, including exercises where temporary deception is used; any such deception must be followed by appropriate debriefing and a restoration of transparency.
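Again purely as a sketch (the roles, permissions, and storage choice are assumptions, not cPAID design decisions), art. 32-style access control can be paired with the record-keeping that makes safeguards testable: every access decision, allowed or denied, is appended to an audit trail that a later review can evaluate.

```python
from datetime import datetime, timezone
from functools import wraps

# Hypothetical role-to-permission mapping, for illustration only.
ROLE_PERMISSIONS = {
    "analyst": {"read_alerts"},
    "admin": {"read_alerts", "export_data"},
}

audit_log: list[dict] = []  # in production: an append-only, tamper-evident store

def require_permission(permission: str):
    """Risk-appropriate access control with record-keeping of every decision."""
    def decorator(func):
        @wraps(func)
        def wrapper(role: str, *args, **kwargs):
            allowed = permission in ROLE_PERMISSIONS.get(role, set())
            audit_log.append({
                "ts": datetime.now(timezone.utc).isoformat(),
                "role": role, "action": permission, "allowed": allowed,
            })
            if not allowed:
                raise PermissionError(f"{role!r} may not {permission}")
            return func(role, *args, **kwargs)
        return wrapper
    return decorator

@require_permission("export_data")
def export_data(role: str) -> str:
    """A privileged operation gated by the decorator above."""
    return "export-ok"
```

Logging denials as well as grants is deliberate: the audit trail is the evidence base for the "regular testing and evaluation" that art. 32 expects.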
AI Act Obligations, Risk Categorisation, and Transparency
The AI Act adds a risk-based layer whose applicability depends on how components are implemented and deployed. The project therefore assesses whether the data fabric qualifies as an AI system under the Act, determines its potential risk categorisation, and identifies the roles of the actors involved.
Where high-risk rules apply, obligations include: lifecycle risk management (art. 9), technical documentation (art. 11), quality management (art. 17), and robustness, accuracy, and cybersecurity (art. 15), complemented by ongoing monitoring and incident-related obligations. For limited-risk systems, transparency obligations become the principal mandatory requirement, while privacy- and security-by-design approaches remain relevant as governance principles.
A Methodology for a Techno-Legal Conversation
A key contribution of cPAID is methodological: demonstrating how “by design” legal obligations can be translated into concrete technical requirements. The ongoing work includes privacy- and security-preserving data flows, logging and record-keeping, and legally aware testing approaches that connect engineering decisions to legal requirements. Just as importantly, the project’s iterative techno-legal dialogue moves between evolving legal interpretation and concrete implementation choices, providing a model for future projects facing the same legal-to-tech translation problem.
Horizon Scanning in an Evolving EU Rulebook
Finally, the EU legal framework is itself dynamic. The Digital Omnibus and AI Omnibus proposals put forward by the Commission would revise parts of the EU digital rulebook across several frameworks, including AI Act implementation timelines and transparency obligations, as well as certain principles and obligations under the GDPR.
In that context, VUB’s horizon scanning function is a practical necessity: it helps ensure that the project’s compliance assumptions and by design methodology remain aligned with the most current legislative trajectory.
