Data Protection Officer (DPO) with long‑standing AI assessment experience and formal AI compliance training (CAICO), and an AI‑augmented knowledge builder exploring how generative AI can responsibly amplify expertise, creativity and governance. This perspective is informed both by long‑standing governance practice and by direct exposure to how AI initiatives are developed, translated and accelerated within the business.
Keywords
Generative AI; AI assessments; Prompt engineering; Human‑AI collaboration; AI governance; AI‑unlocked expertise; Responsible AI; AI‑assisted prototyping
Alongside my primary role as Data Protection Officer, I actively design, test and refine human‑AI systems for knowledge work, creativity and governance.
This AI work is a deliberate extension of my DPO practice. It builds on my long‑standing involvement in assessing, advising on and governing the responsible use of AI and data‑driven systems within regulated environments.
I have been engaged with emerging AI‑related risks and frameworks since 2019, when early European and sector‑specific guidance on trustworthy and ethical AI began to shape expectations for organizations. Since then, my work has consistently focused on translating evolving ethical, legal and supervisory principles into practical assessment criteria, advice and oversight.
In this AI work I focus on turning generative AI from an ad‑hoc chat tool into structured, controllable workflows that can augment human expertise without eroding responsibility, judgment or authorship.
My background in IT, law, compliance and governance fundamentally shapes this AI work: I am less interested in raw automation than in how AI can responsibly support human thinking and work.
Across multiple side projects and experiments, I design reusable prompt architectures and workflow patterns rather than one‑off prompts, informed by my experience reviewing and assessing AI use cases in practice.
Design principles: human‑in‑the‑loop, predictability over cleverness, explicit role separation between human and AI.
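As a purely illustrative sketch of what such a reusable workflow pattern can look like (the `PromptStep` and `build_workflow` names are hypothetical, not taken from my actual tooling), the principles above might translate into code roughly as follows:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptStep:
    """One step in a reusable prompt workflow: the AI drafts, a human decides."""
    role: str          # "human" or "ai": explicit role separation
    instruction: str   # fixed instruction template, filled in per task

def build_workflow(task: str) -> list[PromptStep]:
    """Assemble a predictable, reusable sequence rather than a one-off prompt."""
    return [
        PromptStep("human", f"Define intent and constraints for: {task}"),
        PromptStep("ai",    f"Draft structured output for: {task}"),
        PromptStep("human", f"Review, correct and approve the draft for: {task}"),
    ]

workflow = build_workflow("DPIA summary")
# Human-in-the-loop check: the workflow must start and end with a human step.
assert workflow[0].role == "human" and workflow[-1].role == "human"
```

The point of the sketch is that the structure, not the individual prompt, carries the control: roles are explicit, the sequence is fixed, and human review is a required step rather than an afterthought.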
This work reflects a core belief shaped by my DPO role: AI becomes valuable when it is embedded in a system with clear boundaries, one that supports rather than replaces thinking.
I run hands‑on experiments to understand how AI behaves over time, which I consider an essential concern for governance, risk assessment and oversight.
Goal: understand where AI supports human judgment and, especially, where reliance on AI output becomes risky.
These experiments directly inform my professional perspective on explainability, oversight and meaningful human control in AI assessments.
As part of my DPO role, I have been actively involved in assessing, advising on and reviewing AI and data‑driven applications over multiple years.
This long‑term assessment work forms the normative backbone of my experimental AI practice: I build and test AI systems precisely to understand what good governance requires in reality.
I use creative side projects as controlled environments to test human‑AI co‑creation under clear constraints.
Rather than “letting AI write”, I structure creativity so that generation follows intention.
Insight: creativity scales when structure precedes generation.
In direct connection with my DPO responsibilities, I apply generative AI to explore and structure complex domains where nuance and responsibility matter.
Lens: responsibility, explainability, proportionality, role clarity.
This work sits at the intersection of AI literacy, assessment and governance: understanding not just what AI can do, but what it should not do.
In addition to my assessment and governance work, I have actively invested, through dedicated programmes, in understanding how AI initiatives take shape within organizations, and how legal, ethical and supervisory considerations connect to business and data‑science practice.
These programmes strengthened my ability to translate between domains: business objectives, data and AI capabilities, legal and ethical constraints.
This perspective directly informs my work as a DPO: I assess AI systems not only from the outside, but with a clear understanding of how and why they are built, promoted and scaled inside organizations.
In addition to hands‑on assessment and advisory work, I completed the Certified AI Compliance Officer (CAICO) programme at ICTRecht Academy.
This practice‑oriented programme focuses on AI compliance and governance at the intersection of technology, ethics and law, with a strong emphasis on real‑world AI applications and assessments.
The training deepened my ability to assess and advise on AI compliance and governance in practice.
The CAICO certification provides a formal and structured foundation for my ongoing work in AI assessments, advice and governance as a DPO, and complements my practical experimentation with generative AI systems.