Based on the recommendations of the High-Level Expert Group on Artificial Intelligence (AI HLEG), the EU Commission published core requirements for trustworthy AI yesterday. Prof. Dr. Philipp Slusallek, head of the DFKI site in Saarbrücken and co-founder of the European AI initiative CLAIRE, has been intensively involved since June 2018: “The guidelines are a very important first step towards defining the core of a European AI strategy. However, much remains to be done to develop the document further and to make it practical. DFKI is committed to these guidelines and is initiating a new testing laboratory in which AI systems will be certified. The guidelines and their further development will play an important role.”
“Certification is a prerequisite for the credibility and trustworthiness of AI systems. It makes a fundamental and sustainable contribution to digital sovereignty in the spirit of human-centric AI. CERTLAB strives to create a Center of Excellence for the certification of AI systems, aligned with technical and social standards across different contexts and framework conditions,” says CERTLAB Director Roland Vogt.
The Laboratory for Certification and Digital Sovereignty (CERTLAB) will actively promote the development, standardization and application of certification criteria for AI systems. Such criteria are expected to address the protection of the security, privacy and autonomy of everyone affected by the use of AI systems. Particular attention will be paid to controllability, explainability, stability, safety and fairness. CERTLAB will be headed by Roland Vogt, who, as head of DFKI's recognised IT Security Testing Centre (PITS), brings almost 20 years of experience in this field. Among other things, PITS carries out independent evaluations of IT products in accordance with the requirements of the Common Criteria (CC), as well as the qualified development of protection profiles and security targets.
With the CLAIRE initiative, Philipp Slusallek advocates an “AI made in Europe” that takes European core values into account. CLAIRE (Confederation of Laboratories for Artificial Intelligence Research in Europe) is the European answer to the global AI competition. The initiative brings together stakeholders from research, industry, politics and society throughout Europe and involves them in the discussion of new research topics, technologies and solutions. CLAIRE's concrete goals include the further development of competence centers strategically distributed across Europe, as well as a central hub with data and compute infrastructure, a “CERN for AI”.
DFKI's overarching strategies for its research programme and activities follow ethical principles and values based on fundamental human rights. An important part of these strategies is to support the objective evaluation of AI systems in order to confirm their compliance with ethical principles and values. The DFKI AI Assessment Program will be concentrated in the new Laboratory for Certification and Digital Sovereignty.
Further Information
CERTLAB
http://europa.eu/rapid/press-release_IP-19-1893_de.htm
https://claire-ai.org
Contact
Prof. Dr. Philipp Slusallek
Head of Research Department Agents and Simulated Reality
Site Director DFKI Saarbrücken
German Research Center for Artificial Intelligence (DFKI)
E-Mail: Philipp.Slusallek@dfki.de
Tel.: +49 681 85775 5390
Roland Vogt
Head of CERTLAB – Laboratory for Certification and Digital Sovereignty
German Research Center for Artificial Intelligence (DFKI)
E-Mail: Roland.Vogt@dfki.de
Tel.: +49 681 85775 4131
Press contact
Reinhard Karger
Corporate Spokesperson
German Research Center for Artificial Intelligence (DFKI)
E-Mail: Reinhard.Karger@dfki.de
Tel.: +49 681 85775 5253