
The research group Neuro-Explicit Sequential Decision-Making (NEXD) develops novel methods to improve sequential decision-making by explicitly integrating structured information. At the core of our work is the combination of powerful deep learning approaches with symbolic or structural knowledge to create algorithms that are more reliable, safer, and more explainable.
The group’s research focuses on two central topics:
First, established symbolic AI methods, particularly from the field of AI planning, are integrated into modern deep learning techniques. The goal is to develop explicitly hybrid decision models that combine the robustness, structural fidelity, and interpretability of classical algorithmic approaches with the efficiency, generalization capability, and scalability of neural methods. The result is solution approaches that are not only high-performing but also structurally grounded, controllable, and reliable in deployment.
Second, explicitly available environmental knowledge (such as safety constraints, domain-specific rules, or structured world models) is systematically incorporated into the learning process. This accelerates learning while improving the safety, efficiency, and reliability of trained agents, particularly with respect to exploration and decision quality.
The mission of NEXD is twofold:
First, the group conducts fundamental research with the aim of producing high-quality scientific publications at leading conferences such as ICAPS, AAAI, IJCAI, and CAV. Second, the developed methods are transferred to industrial applications, particularly in domains that require optimization and robust sequential decision-making. Key application areas include (automotive) manufacturing, steel production, retail, and robotics-based production systems. NEXD therefore aims both to advance fundamental research and to ensure effective transfer into real-world settings, so that scientific insight translates into practical impact.

"We do not develop isolated neural or symbolic methods, but rather deliberately hybrid decision models that combine structured algorithmic principles with learning-based methods. In this way, we unite robustness and interpretability with scalability and efficiency, a prerequisite for reliable AI in real-world decision-making processes."

Head of NEXD:
Timo P. Gros
timo_philipp.gros@dfki.de
Tel.: +49 681 857755375
Team assistant, NEXD:
Sophie van Rossum
sophie_paulina.van_rossum@dfki.de
Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI)
Gebäude D3 2
Stuhlsatzenhausweg 3
66123 Saarbrücken
Germany

Team members:
Magnus Cunow
Behkam Fallah
Julius Gabelman
Pascal Held
Felix Kuntz
Nicola Müller
Joshua Meyer
Moritz Oster
Naya Rudolph
Nils Waltner