Publication
Unreflected Acceptance – Investigating the Negative Consequences of ChatGPT-Assisted Problem Solving in Physics Education
Lars Krupp; Steffen Steinert; Maximilian Kiefer-Emmanouilidis; Karina E. Avila; Paul Lukowicz; Jochen Kuhn; Stefan Küchemann; Jakob Karolus
In: HHAI 2024: Hybrid Human AI Systems for the Social Good. International Conference on Hybrid Human-Artificial Intelligence (HHAI-2024), June 10-14, Malmö, Sweden. IOS Press, 2024.
Abstract
The general availability of large language models, and thus their unrestricted use in sensitive areas of everyday life such as education, remains a subject of major debate. We argue that employing generative artificial intelligence (AI) tools warrants informed usage, and we examined their impact on problem-solving strategies in higher education. In a study, students with a background in physics were asked to solve physics exercises, with one group having access to an internet search engine (N=12) and the other being allowed unrestricted use of ChatGPT (N=27). We evaluated their performance, strategies, and interaction with the provided tools. Our results showed that nearly half of the solutions produced with the support of ChatGPT were mistakenly assumed to be correct by the students, indicating that they overly trusted ChatGPT even in their own field of expertise. Likewise, in 42% of cases, students used copy & paste to query ChatGPT - an approach used in only 4% of search-engine queries - highlighting the stark difference in interaction behavior between the groups and indicating limited task reflection when using ChatGPT. Our work demonstrates a need to (1) guide students on how to interact with LLMs and (2) create awareness of potential shortcomings among users.