Publication

BloomLLM: Large Language Models Based Question Generation Combining Supervised Fine-Tuning and Bloom’s Taxonomy

Nghia Duong-Trung; Xia Wang; Milos Kravcik
In: Rafael Ferreira Mello; Nikol Rummel; Ioana Jivet; Gerti Pishtari; José A. Ruipérez Valiente (Eds.). Technology Enhanced Learning for Inclusive and Equitable Quality Education. European Conference on Technology Enhanced Learning (EC-TEL-2024), 19th European Conference on Technology Enhanced Learning, September 16-20, Krems, Austria, pp. 93-98, Lecture Notes in Computer Science (LNCS), Vol. 15160, ISBN 978-3-031-72311-7, Springer, Cham, 9/2024.

Abstract

Adaptive assessment is challenging, and considering various competence levels and their relations makes it even more complex. Nevertheless, recent developments in artificial intelligence (AI) provide new means of addressing these issues. In this paper, we introduce BloomLLM, a novel adaptation of Large Language Models (LLMs) specifically designed to generate educational content aligned with Bloom's Revised Taxonomy. BloomLLM performs well across all competence levels, producing meaningful, semantically connected questions. It achieves this by addressing shortcomings of foundational LLMs, such as the lack of semantic interdependence between taxonomy levels and increased hallucination, which often result in unrealistic and impractical questions. BloomLLM was developed by fine-tuning ChatGPT-3.5-turbo on 1,026 questions spanning 29 topics from two master's courses taught in the winter semester of 2023. The model outperforms ChatGPT-4, even under varied prompting strategies, marking a significant advancement in applying generative AI in education. We have made the BloomLLM code and training datasets publicly available to promote transparency and reproducibility.
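To illustrate the supervised fine-tuning setup the abstract describes, the sketch below shows how question examples tagged with Bloom's Revised Taxonomy levels might be prepared in the chat-format JSONL commonly used for fine-tuning chat models. The topics, prompts, and questions here are hypothetical placeholders, not drawn from the paper's actual dataset; see the released BloomLLM code for the authors' exact format.

```python
import json

# The six cognitive-process levels of Bloom's Revised Taxonomy.
BLOOM_LEVELS = ["Remember", "Understand", "Apply",
                "Analyze", "Evaluate", "Create"]

def make_example(topic, level, question):
    """Build one chat-format fine-tuning record: given a topic and a
    Bloom level, the model should learn to produce a suitable question."""
    assert level in BLOOM_LEVELS
    return {
        "messages": [
            {"role": "system",
             "content": "You generate exam questions aligned with "
                        "Bloom's Revised Taxonomy."},
            {"role": "user",
             "content": f"Topic: {topic}. Write one question at the "
                        f"'{level}' level."},
            {"role": "assistant", "content": question},
        ]
    }

# Hypothetical sample records (placeholder content for illustration).
records = [
    make_example("Gradient descent", "Remember",
                 "State the parameter update rule of gradient descent."),
    make_example("Gradient descent", "Create",
                 "Design an experiment comparing fixed and adaptive "
                 "learning rates on a regression task."),
]

# Serialize to JSONL: one JSON object per line, as fine-tuning APIs expect.
jsonl = "\n".join(json.dumps(r) for r in records)
print(len(jsonl.splitlines()))
```

One record per (topic, level, question) triple lets a single dataset cover all six levels per topic, which is what allows the fine-tuned model to keep questions at different levels semantically connected.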
