
Responsible AI in the Automotive Industry – Accenture and DFKI Present Joint White Paper


Deep learning is the AI technology that has shaped the past decade, whether in recognizing medical conditions, generating text and images, or driving autonomously. Yet despite considerable progress, machine learning’s successes, particularly in autonomous driving, have fallen short of expectations. The joint white paper by Accenture and DFKI, “Responsible AI in the Automotive Industry – Techniques and Use Cases,” examines the reasons and proposes new technological approaches.


According to the team of authors, current AI approaches such as deep learning are not yet trustworthy and responsible enough to be relied on in highly critical application areas such as autonomous driving. They often suffer from problems with explainability, robustness, and generalizability, and they require large amounts of training data and considerable energy. Deep learning models are powerful, but because they cannot explain their decisions, their results are difficult to trust in safety-critical applications.

As a solution, the authors propose the concept of neuro-explicit AI, a hybrid approach that combines the strengths of neural networks with symbolic reasoning and explicit knowledge representation. By integrating domain-specific knowledge and physical laws into the decision-making process, neuro-explicit AI aims to create models that are more transparent, interpretable, and robust. Because the symbolic component can articulate the reasoning behind a decision, the approach promises AI systems whose behavior is both more understandable and more reliable.
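To make the pattern concrete, here is a minimal Python sketch of a neuro-explicit decision step. It is our illustration, not code from the white paper: the neural component is a hypothetical stand-in scoring function, and the rule base is an invented example of explicit knowledge that can override the network while recording a human-readable justification.

```python
# Minimal neuro-explicit sketch (illustrative; all names and rules are
# hypothetical, not taken from the white paper).

def neural_scores(sensor_input):
    """Stand-in for a trained perception network: maps input to class scores."""
    return {"pedestrian": 0.55, "shadow": 0.40, "vehicle": 0.05}

# Explicit knowledge: symbolic rules that can confirm or override the network.
RULES = [
    ("a moving object on a crosswalk is treated as a pedestrian",
     lambda ctx: ctx["on_crosswalk"] and ctx["is_moving"],
     "pedestrian"),
]

def decide(sensor_input, context):
    scores = neural_scores(sensor_input)
    best = max(scores, key=scores.get)
    trace = [f"neural hypothesis: {best} ({scores[best]:.2f})"]
    for name, condition, conclusion in RULES:
        if condition(context):
            best = conclusion
            trace.append(f"rule fired: {name}")
    return best, trace

label, why = decide(sensor_input=None,
                    context={"on_crosswalk": True, "is_moving": True})
print(label)           # pedestrian
print("; ".join(why))  # the symbolic trace doubles as an explanation
```

The point of the design is that the explanation is not reconstructed after the fact: the fired rules are the decision path, which is what makes the symbolic layer auditable.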

The white paper discusses several use cases that demonstrate the potential of neuro-explicit AI for autonomous driving. The authors conclude that deep reinforcement learning, combined with online planning methods, can improve the safety and performance of autonomous vehicles in uncertain real-time environments. This approach pairs neural networks with symbolic models to enable safer decision-making in dynamic situations, such as avoiding pedestrians or navigating dense traffic.
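A toy Python sketch of this combination, under assumptions of ours rather than the paper’s: a hand-written symbolic dynamics model simulates each candidate action, an explicit safety rule filters out unsafe successors, and a stand-in for a value function learned with deep reinforcement learning ranks the remaining options online.

```python
# One-step online planning guided by a learned value function (illustrative;
# the toy dynamics, gap rule, and value function are our assumptions).

ACTIONS = {"keep_lane": 0, "brake": -1, "accelerate": 1}

def simulate(state, action_delta):
    """Hand-written symbolic dynamics: advance the ego vehicle one step."""
    ego, obstacle = state
    return (ego + 1 + action_delta, obstacle)  # obstacle assumed static

def is_safe(state):
    """Explicit safety rule: keep a minimum gap of 2 to the obstacle."""
    ego, obstacle = state
    return obstacle - ego >= 2

def learned_value(state):
    """Stand-in for a deep-RL value network: rewards progress, penalizes small gaps."""
    ego, obstacle = state
    return ego - 0.1 * max(0, 5 - (obstacle - ego))

def plan(state):
    """Simulate every action, discard unsafe successors, rank the rest."""
    candidates = {}
    for name, delta in ACTIONS.items():
        successor = simulate(state, delta)
        if is_safe(successor):
            candidates[name] = learned_value(successor)
    return max(candidates, key=candidates.get) if candidates else "brake"

# Ego at position 0, obstacle at 3: accelerating would violate the gap rule,
# so the planner picks "keep_lane" as the best remaining safe action.
print(plan(state=(0, 3)))
```

The symbolic model and safety rule act as a hard filter before the learned score is consulted, so an unsafe action can never win merely because the network values it highly.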

Another application area focuses on improving the perception systems of autonomous vehicles by incorporating knowledge about visual features. Here, high-level symbolic knowledge of objects’ physical properties, such as light reflections, is used to increase the accuracy of object recognition. Incorporating such symbolic information makes perception models more resilient to disturbances and better able to interpret complex visual data, resulting in greater reliability and safety.
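One plausible way to wire such knowledge into a detector, sketched in Python (the region labels, penalty factor, and pipeline are our assumptions, not the paper’s implementation): detections that fall inside a region flagged as reflective, such as a wet road surface, are downweighted because they may be mirror images of real objects rather than physical objects.

```python
# Symbolic plausibility check on neural detector output (illustrative only).

detections = [  # stand-ins for raw neural detector output
    {"label": "pedestrian", "score": 0.90, "region": "sidewalk"},
    {"label": "pedestrian", "score": 0.80, "region": "wet_road"},
]

REFLECTIVE_REGIONS = {"wet_road"}  # explicit, symbolic scene knowledge

def apply_reflection_prior(dets, penalty=0.5):
    """Downweight detections in reflective regions: they may be mirror images."""
    adjusted = []
    for det in dets:
        score = det["score"]
        if det["region"] in REFLECTIVE_REGIONS:
            score *= penalty
        adjusted.append({**det, "score": score})
    return adjusted

for det in apply_reflection_prior(detections):
    print(det["label"], det["score"], det["region"])
# pedestrian 0.9 sidewalk
# pedestrian 0.4 wet_road   <- downweighted by the reflection prior
```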

Accenture and DFKI emphasize the importance of responsible AI practices for achieving AI maturity, i.e., developing AI systems that not only perform well technically but also operate in an ethical, fair, and transparent manner. Their framework for responsible AI highlights several key principles, including fairness, transparency, explainability, accountability, and sustainability. These principles are designed to ensure that AI technologies benefit society while minimizing risks such as bias, discrimination, and privacy violations. Fairness, for example, ensures that algorithms do not produce biased or discriminatory results, while explainability allows stakeholders to understand how AI systems reach their decisions. Accountability ensures that responsibilities for AI-driven outcomes are clearly assigned, and sustainability focuses on minimizing the environmental impact of AI technologies.

The paper also discusses the challenges of AI governance and the need for organizations to adopt cross-functional governance structures that promote transparency and accountability in AI development. By establishing clear roles, policies, and expectations, companies can better manage the risks associated with AI while increasing the trust of consumers and other stakeholders.

White paper

Contact:

Dr.-Ing. Christian Müller

Head of DFKI Competence Center Autonomous Driving, DFKI

Press contact:

Heike Leonhard, M.A.

Communications & Media, DFKI