Artificial intelligence (AI) is at a turning point. The AI Action Summit in Paris (6-11 February), held against the backdrop of Stargate, the major US AI initiative, underlined how intense the global debates on technological excellence, regulation and economic sovereignty have become. In this context, EU Commission President Ursula von der Leyen sent a strong signal by announcing that the EU would mobilise 200 billion euros for trustworthy AI: Europe recognises the strategic relevance of trustworthy AI and wants to position itself as a sovereign global player in the field.
While the US is pursuing a largely unregulated innovation dynamic and China is establishing a centralised AI model, the EU is taking a third path: excellence through collaboration, openness and strong trust guarantees. This path is supported not only by political decisions, but also by scientific and industrial actors whose initiatives and investments are laying the foundations for a resilient European AI infrastructure.
At the summit, the debate on the future path of AI development was shaped by central scientific arguments. While in the USA the scaling of generative models is widely seen as the path to artificial general intelligence (AGI), European research shows that this approach has fundamental limitations. Generative models, currently the focus of public attention, remain probabilistic systems that do not achieve deep symbolic generalisation. They are highly efficient at pattern recognition and content generation, but show weaknesses in transparency, explainability and security.
The European alternative lies in neuro-explicit AI systems that combine generative methods (neuro) with a formally provable approach (explicit). The latter focuses on explicit knowledge representations and logical inferences, thus enabling self-explanatory and verifiable decision-making processes. These properties are particularly important in the context of European AI regulation, since the AI Act explicitly demands transparency and explainability as cornerstones of trustworthy AI.
A neuro-explicit approach merges the strengths of both methods: an AI based on explicit representations and inference mechanisms complements the performance of generative models, which gain knowledge from processing large amounts of unstructured data; one could say that this interplay makes the AI system more capable of reasoning. This symbiosis offers Europe the opportunity to develop robust and trustworthy AI solutions that meet the high standards of the AI Act while remaining competitive with US and Chinese developments.
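The principle described above can be illustrated with a minimal, purely hypothetical sketch: a simulated "neural" component proposes candidate claims with confidence scores, while an explicit symbolic layer checks each proposal against a small knowledge base of facts and rules, yielding a verifiable justification. All names, rules and data here are illustrative assumptions, not an existing system.

```python
# Hypothetical sketch of a neuro-explicit pipeline: statistical proposals
# are filtered by an explicit, verifiable symbolic layer.

from dataclasses import dataclass

@dataclass
class Candidate:
    claim: tuple        # e.g. ("bird", "tweety") meaning "tweety is a bird"
    confidence: float   # score from the (simulated) generative model

# Explicit knowledge base: facts and simple one-step rules.
FACTS = {("penguin", "tweety")}
RULES = [("penguin", "bird")]            # every penguin is a bird
EXCEPTIONS = {("fly", "tweety")}         # explicit counterexample: penguins cannot fly

def entailed(claim):
    """Check whether the explicit KB entails a claim via at most one rule step."""
    pred, subj = claim
    if claim in FACTS:
        return True
    return any(concl == pred and (prem, subj) in FACTS for prem, concl in RULES)

def verify(candidates):
    """Accept neural proposals only if the symbolic layer can justify them."""
    accepted = []
    for c in candidates:
        if c.claim in EXCEPTIONS:
            continue  # explicit knowledge overrides the statistical guess
        if entailed(c.claim):
            accepted.append((c.claim, "entailed by knowledge base"))
    return accepted

# Simulated neural output: plausible but partly wrong proposals.
proposals = [Candidate(("bird", "tweety"), 0.92),
             Candidate(("fly", "tweety"), 0.81)]
print(verify(proposals))  # only the verifiable claim survives
```

The design point is the division of labour: the generative side supplies broad, probabilistic coverage, while the explicit side contributes exactly the properties the AI Act demands, a decision that can be traced back to named facts and rules.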
To advance this vision, not only technological innovations are needed, but also the establishment of a uniform standard for trustworthy AI. The European approach goes beyond mere technical specifications and relies on the integration of ethical and legal principles into the development of AI systems. This includes methodological concepts that ensure transparency, traceability and fairness.
One crucial element is the development of robust mechanisms that enable the trustworthiness of AI systems to be verified. Standardised testing and certification processes play a central role here. Particularly safety-critical applications, such as the use of AI in autonomous systems or in medical diagnostics, require evidence-based guarantees for their functionality. Only a transparent and objective evaluation framework can create long-term social trust.
European stakeholders are therefore working on comprehensive concepts that enable rigorous evaluation of AI systems under realistic conditions. In addition to technical verification, social, cultural and ethical aspects are also included. A crucial goal is not only to define the principles of trustworthy AI, but also to establish them as a binding standard at the European and global level.
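What such a standardised, evidence-based check could look like in miniature is sketched below: a certification-style test that requires a model to keep a minimum accuracy both on clean inputs and under small random input perturbations. The model, thresholds and perturbation scheme are illustrative assumptions, not part of any existing certification standard.

```python
# Hypothetical sketch of a certification-style robustness check.

import random

def toy_model(x):
    """Stand-in for an AI system: classifies a number as 'high' or 'low'."""
    return "high" if x >= 0.5 else "low"

def evaluate(model, samples, labels, noise=0.0, seed=0):
    """Return accuracy, optionally under random input perturbation."""
    rng = random.Random(seed)
    correct = 0
    for x, y in zip(samples, labels):
        x_perturbed = x + rng.uniform(-noise, noise)
        if model(x_perturbed) == y:
            correct += 1
    return correct / len(samples)

samples = [i / 100 for i in range(100)]
labels = [toy_model(x) for x in samples]        # ground truth from clean inputs

clean_acc = evaluate(toy_model, samples, labels)
robust_acc = evaluate(toy_model, samples, labels, noise=0.05)

# An assumed certification rule: both a clean and a robustness threshold.
passed = clean_acc >= 0.99 and robust_acc >= 0.90
print(f"clean={clean_acc:.2f} robust={robust_acc:.2f} passed={passed}")
```

The point of the sketch is reproducibility: because thresholds, perturbations and the random seed are fixed in advance, any auditor can rerun the same test and obtain the same verdict, which is the basis of an objective evaluation framework.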
The EU's investment offensive, in particular the planned AI gigafactories, shows that Europe is aware of the need for technological independence. The initiative is designed not only to ensure access to high-performance models, but also to provide answers to non-European innovation projects. Cooperation between European industry, science and political actors is crucial here. Without close integration of these areas, Europe will not be able to take a leading role in the global AI competition.
One crucial aspect is the political framework. The EU's AI Act was once again the subject of heated debate in Paris. While official US representatives criticised AI regulation as a brake on innovation, Europe embraces the balance between technological development and responsibility. The establishment of trust mechanisms and clear regulatory guidelines is seen as a prerequisite for broad social acceptance of AI. The need for a framework that ensures data protection, autonomy, and social and ethical principles was clearly emphasised in Paris.
Developments in recent weeks show that Europe has heard the wake-up call. The combination of political will, financial resources and scientific excellence will put the continent in a leading position in trustworthy AI. European stakeholders have laid the crucial foundations to realise this vision.
The European AI strategy must not remain a mere regulatory project. It must be an innovation project that combines excellence, competitiveness and social acceptance. With the right balance of scientific precision, political determination and economic willingness to invest, Europe can not only help shape the global AI competition, but lead it.
company spokesperson, DFKI
Editor & Public Relations Officer, DFKI
Further information:
Live broadcast with DFKI CEO Antonio Krüger from the ARD studio in Paris, in conversation with Helge Fuhst about the opportunities and challenges of artificial intelligence in Germany and Europe. ARD Tagesthemen, 10.02.2025, 22:30