Trustworthy AI
Embedding multi-objective fairness and explainability into AI models that help Dutch inspectorates prioritize and plan their inspections.
Dutch government inspectorates are responsible for safeguarding public interests, from food safety and fair working conditions to educational quality and environmental protection. As the volume and complexity of data grow, these agencies need data-driven methods to make their limited inspection capacity as effective as possible. Our research sits within the AI4Oversight lab, where we work hand in hand with multiple inspectorates to co-develop AI tools that are not only powerful but also trustworthy.
Key Challenges in this domain
Inspectorates face a dual challenge: they must cover an ever-increasing number of potential risks with only limited staff, and they must do so in a way that upholds fairness, transparency and accountability. Traditional machine-learning models can inadvertently amplify biases, risking unfair targeting of certain businesses or communities, and often act as “black boxes”, making it hard for inspectors and the public to understand why a particular site was flagged. Balancing operational efficiency with ethical, legal and societal requirements calls for methods that integrate these values into the heart of the model, rather than treating them as afterthoughts.
Research Questions
- How can we formally integrate fairness, explainability and other ethical requirements into the training of inspection-prioritization models?
- What are the trade-offs between different trustworthiness objectives, and how can inspectors navigate them?
- How can we present model outputs and fairness assessments in a clear, actionable way to support transparent decision-making?
Solutions
We propose to build AI-driven tools that seamlessly integrate fairness and transparency into inspection planning. Rather than delivering a single, black-box recommendation, our system offers a range of balanced options that reflect different trade-offs between effectiveness, equity, and clarity. Users can explore these options through intuitive visual interfaces that highlight fairness metrics and provide clear explanations for why certain cases are prioritized, empowering inspectorates to make informed and trustworthy decisions.
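As a concrete illustration of this menu-of-options idea, the sketch below trains a handful of prioritization models that interpolate between standard training and Kamiran & Calders-style reweighing, reports an effectiveness metric and a fairness metric for each, and attaches a simple per-case explanation. It is a minimal, hypothetical example: the synthetic data, feature names, model choice and metrics are assumptions made for illustration, not the system actually developed in the lab.

```python
# Minimal sketch (not the lab's actual system): train a small menu of
# inspection-prioritization models, report an accuracy and a fairness metric
# for each option, and explain why the top case is flagged. All data and
# feature names below are synthetic placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic inspection records: features, a binary "violation found" label,
# and a binary sensitive attribute (e.g., small vs. large business).
n = 2000
feature_names = ["prior_findings", "complaints", "sector_risk", "size", "age"]
X = rng.normal(size=(n, len(feature_names)))
group = rng.integers(0, 2, size=n)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

def reweighing_weights(y, group):
    """Weight each (group, label) cell so that label and group become
    independent in the weighted data (Kamiran & Calders reweighing)."""
    w = np.ones(len(y))
    for g in np.unique(group):
        for lbl in np.unique(y):
            cell = (group == g) & (y == lbl)
            w[cell] = (group == g).mean() * (y == lbl).mean() / cell.mean()
    return w

def parity_gap(pred, group):
    """Absolute difference in flag rates between the two groups."""
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

# Offer a range of balanced options rather than one black-box recommendation:
# lam = 0 is the unconstrained model, lam = 1 is fully reweighed.
options = []
for lam in [0.0, 0.25, 0.5, 0.75, 1.0]:
    sw = (1 - lam) + lam * reweighing_weights(y, group)
    model = LogisticRegression().fit(X, y, sample_weight=sw)
    pred = model.predict(X)
    options.append((lam, (pred == y).mean(), parity_gap(pred, group), model))

for lam, acc, gap, _ in options:
    print(f"fairness weight {lam:.2f}: accuracy {acc:.3f}, parity gap {gap:.3f}")

# Simple per-case explanation for the highest-risk case under one option:
# feature contributions of a linear model (coefficient times feature value).
_, _, _, chosen = options[2]
scores = chosen.predict_proba(X)[:, 1]
top = int(np.argmax(scores))
contributions = chosen.coef_[0] * X[top]
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.2f}")
```

In practice the same structure extends to richer models, additional trustworthiness metrics and more faithful explanation methods; the essential design choice is that the system exposes several vetted trade-off points, each with its fairness assessment and explanation, rather than a single opaque score.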
Meet the researcher
Sofoklis Kitharidis
Leiden University
My name is Sofoklis Kitharidis, a PhD candidate passionate about blending advanced AI methods with real-world impact. Beyond developing fair and transparent machine-learning models, I love exploring new cuisines in my kitchen, planning my next travel adventure, and watching a lot of football. When I am not coding or diving into research papers, you will often find me curled up with a good movie.
“I believe that AI can greatly enhance the impact of government oversight, but only if we build models that people can understand and trust. By weaving fairness and transparency into every stage of development, we empower inspectorates to make decisions that are both effective and fair.”
Results
Inspectorate Use Cases
Description of the use cases that have been executed within this work package.
Publications
Check out the publications related to Trustworthy AI