Publication
Large Language Models as Knowledge Engineers
Florian Brand; Lukas Malburg; Ralph Bergmann
In: Lukas Malburg (Ed.). Proceedings of the Workshops at the 32nd International Conference on Case-Based Reasoning (ICCBR-WS 2024) co-located with the 32nd International Conference on Case-Based Reasoning (ICCBR 2024), Mérida, Mexico, July 1, 2024. International Conference on Case-Based Reasoning (ICCBR-2024), Pages 3-18, CEUR Workshop Proceedings, Vol. 3708, CEUR-WS.org, 2024.
Abstract
Many Artificial Intelligence (AI) systems require human-engineered knowledge at their core to reason about new problems, and Case-Based Reasoning (CBR) is no exception. However, acquiring this knowledge is a time-consuming and laborious task for the domain experts who provide it. We propose an approach that supports the creation of this knowledge by leveraging Large Language Models (LLMs) in conjunction with existing knowledge to create the vocabulary and case base for a complex real-world domain. We find that LLMs are capable of generating knowledge, with results improving when natural language and instructions are used. Furthermore, permissively licensed models such as CodeLlama and Mixtral perform similarly to or better than closed state-of-the-art models such as GPT-3.5 Turbo and GPT-4 Turbo.