
Research Group Responsible AI and Machine Ethics (RAIME)

The Responsible AI and Machine Ethics (RAIME) research group is dedicated to the complex ethical and broader normative challenges that arise in the development and deployment of AI systems. The focus is on the many necessary trade-offs between conflicting objectives, such as fairness versus accuracy, transparency versus efficiency, or individual versus collective benefit. The central research question is how these challenges can be addressed under normative or moral uncertainty, that is, in the absence of universally accepted criteria of correctness. To this end, the group develops philosophically informed yet application-oriented, structured approaches and processes that, drawing on practical reason and argumentation, allow such decisions to be made in a manner that is as well justified as possible.

RAIME focuses in particular on the role and interplay of requirements on AI decisions concerning transparency, explainability, and justifiability. The group's research centers on developing formally sound yet practically applicable frameworks that enable both AI systems and humans to make well-justified decisions that can adapt flexibly and appropriately to new information and changing circumstances. The reliance on explicit justifications ensures that such decisions not only remain comprehensible to humans but can also be questioned and thereby improved by them, significantly enhancing the trustworthiness of AI systems. Beyond its benefits for the responsible development and use of AI systems, RAIME also explores applying this approach to decisions made by AI systems themselves, addressing both philosophical and technical aspects of machine ethics. The team investigates in particular neuro-explicit concepts for ensuring morally acceptable behavior in machines, combining principle-oriented top-down approaches with bottom-up approaches based on machine learning.

RAIME's approach prioritizes value-based and human-centered moral reasoning and aims at gradually improving systems' alignment with ethical norms on the basis of human feedback. In this way, humans remain the ultimate decision-makers, with AI complementing their judgment rather than replacing it. This framework also strengthens responsible decision-making throughout the process (“humans in and on the loop”) and helps close responsibility gaps, addressing issues of accountability, liability, and effective human oversight when AI systems are deployed in ethically sensitive contexts.


"In the complex landscape of AI ethics, the 'right' decision is often elusive. But by carefully weighing reasons for and against, we can find solutions that better align with our values than others do, even when faced with deep moral uncertainty. RAIME has set itself the task of finding reasonable, practical paths towards responsible AI development and deployment."

Kevin Baum, Head of RAIME Research Group

Contact

Head:
Kevin Baum M.A. M.Sc.

Phone: +49 681 85775 5251

Office:
Tatjana Bungert

Phone: +49 681 85775 5357

Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI)
Building D3 2
Stuhlsatzenhausweg 3
66123 Saarbrücken
Germany