Generative AI models have made great progress in recent years and achieved impressive results. So far, however, these models have been of limited use to SMEs: they are not sufficiently adapted to companies' specialised domains and therefore produce erroneous content more frequently than in general fields of knowledge. In addition, the underlying generation process is often opaque and hard for (lay) users to follow. All of these factors undermine trust in the models and their output, reducing their acceptance and thus also hampering the development of optimised or new business processes. In the media, cultural and creative sectors in particular, day-to-day editorial work is still characterised by time-consuming processes that require manual research, the integration of multimodal materials, and laborious checks of quality and legal requirements.

The aim of the GenKI4Media project is to tap the innovative potential of generative AI with three new generative AI assistants for (1) ‘Generating multimodal media formats for culture, politics and education’, (2) ‘Standards and regulations in the media sector’ and (3) ‘Demonstrators for the creative/cultural sector’ in order to support editorial work effectively. The AI assistants can be used dynamically for a wide range of tasks and do not have to be programmed individually for each task and target group, as was previously the case. The basis for the assistants is an innovative, continuous further development of AI technologies through plug-ins for knowledge organisation and transparency of LLMs.
The aim of the DFKI sub-project is to research and develop methods and generative AI models that improve the transparency, traceability and trustworthiness of AI-generated content. To achieve these goals, the DFKI sub-project focuses on three complementary R&D areas. The first area deals with the development of conversational methods for explainable AI that enable end users to explore explanations in an interactive dialogue. The second area covers the design and development of methods and algorithms that automatically relate generated content to external sources, validate it against those sources and correct it if necessary. The third area comprises the design and creation of task- and domain-specific test data sets, so-called challenge test sets, which rigorously probe critical cases in generation tasks, for example the avoidance of incorrect causal or temporal conclusions, factual misgenerations (hallucinations), and the correct handling of long-tail or highly domain-specific information.
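The project description does not specify an implementation of the second R&D area. The following is a minimal Python sketch, under assumptions, of how generated statements might be linked to external sources and flagged for correction; the names (SourcePassage, Claim, validate_claim) are hypothetical, and the crude word-overlap score merely stands in for a real entailment or fact-checking model.

```python
from dataclasses import dataclass, field

@dataclass
class SourcePassage:
    """A retrieved external source passage (hypothetical structure)."""
    doc_id: str
    text: str

@dataclass
class Claim:
    """A single statement extracted from AI-generated content."""
    text: str
    evidence: list = field(default_factory=list)   # supporting SourcePassages
    verdict: str = "unverified"                    # "supported" | "unsupported" | "unverified"

def lexical_overlap(a: str, b: str) -> float:
    """Crude word-overlap score; a real system would use an entailment model."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa), 1)

def validate_claim(claim: Claim, sources: list[SourcePassage],
                   threshold: float = 0.5) -> Claim:
    """Attach the best-matching source to the claim and set a verdict."""
    best = max(sources, key=lambda s: lexical_overlap(claim.text, s.text), default=None)
    if best and lexical_overlap(claim.text, best.text) >= threshold:
        claim.evidence.append(best)
        claim.verdict = "supported"
    else:
        claim.verdict = "unsupported"   # candidate for correction or removal
    return claim

if __name__ == "__main__":
    sources = [SourcePassage("dpa-001", "The exhibition opens on 12 May in Berlin.")]
    claim = validate_claim(Claim("The exhibition opens on 12 May in Berlin."), sources)
    print(claim.verdict, [e.doc_id for e in claim.evidence])
```

In the same spirit, an entry of a challenge test set could pair an input, the critical phenomenon being probed (e.g. temporal reasoning or a long-tail fact) and the expected behaviour, so that generation models can be tested systematically on exactly these cases.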
Partners
- Condat AG
- 3pc GmbH Neue Kommunikation
- Art+Com GmbH
- FhG FOKUS - Fraunhofer-Institut für offene Kommunikationssysteme
- rbb - Rundfunk Berlin-Brandenburg
- dpa Deutsche Presse-Agentur GmbH