Publication
Actionable Trustworthy AI with a Knowledge-based Debugger
Priyabanta Sandulu; Andrea Sipka; Sergey Redyuk; Sebastian Vollmer

Abstract
The rapidly evolving regulatory landscape in AI presents significant challenges to establishing and maintaining
trust. AI practitioners face a substantial burden in understanding and operationalizing abstract requirements.
Existing solutions often lack concrete strategies for effective risk mitigation. We address these gaps by proposing
an AI debugger, powered by an expandable knowledge base, that identifies risks and suggests actionable mitigations
with little overhead for the end-user. A Human-in-the-Loop component supports adaptive decision-making, and
the unique Requirement & Knowledge Engineering pipeline suggests the mapping between abstract guidelines
and actionable specifications, pending validation by the end-user. Our framework aims to reduce the compliance
overhead and streamline the development of trustworthy AI systems.
