
Publication

ChatGPT is fun, but it is not funny! Humor is still challenging Large Language Models

Sophie F. Jentzsch; Kristian Kersting
In: Jeremy Barnes; Orphée De Clercq; Roman Klinger (Hrsg.). Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis. Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis (WASSA), July 14, Toronto, Canada, Pages 325-340, Association for Computational Linguistics, 2023.

Abstract

Humor is a central aspect of human communication that artificial agents have not mastered so far. Large language models (LLMs) are increasingly able to capture implicit and contextual information. In particular, OpenAI's ChatGPT has recently gained immense public attention. The GPT3-based model almost seems to communicate on a human level and can even tell jokes. But is ChatGPT really funny? We put ChatGPT's sense of humor to the test. In a series of exploratory experiments around jokes, i.e., generation, explanation, and detection, we seek to understand ChatGPT's capability to grasp and reproduce human humor. Since the model itself is not accessible, we applied prompt-based experiments. Our empirical evidence indicates that jokes are not hard-coded but are mostly also not newly generated by the model: over 90% of 1008 generated jokes were the same 25 jokes. The system accurately explains valid jokes but also comes up with fictional explanations for invalid jokes. Joke-typical characteristics can mislead ChatGPT in the classification of jokes. ChatGPT has not solved computational humor yet, but it can be a big leap toward "funny" machines.
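The joke-generation part of the study boils down to repeatedly prompting the model and measuring how often the same joke recurs. The following is a minimal sketch of such a prompt-based duplicate count, not the authors' code: the model name, prompt wording, and sample size are illustrative assumptions, and it uses the OpenAI Python client as a stand-in for the ChatGPT interface used in the paper.

```python
# Sketch (assumptions throughout): prompt an OpenAI chat model for jokes
# and tally exact-string duplicates, mirroring the paper's finding that
# most of 1008 generated jokes collapse onto a small set of ~25 jokes.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_joke(prompt: str = "Tell me a joke, please.") -> str:
    """Request a single joke with a simple user prompt (assumed wording)."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed stand-in for ChatGPT
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    return response.choices[0].message.content.strip()


def count_duplicates(n_samples: int = 100) -> Counter:
    """Collect n_samples jokes and count how often each exact string recurs."""
    jokes = [generate_joke() for _ in range(n_samples)]
    return Counter(jokes)


if __name__ == "__main__":
    counts = count_duplicates(25)  # small demo run; the paper sampled 1008 jokes
    for joke, freq in counts.most_common(5):
        print(f"{freq:3d}x  {joke[:80]}")
```

Note that counting exact string matches is a simplification; near-duplicate jokes with minor rewordings would need fuzzy matching or manual grouping, as the paper's analysis is based on manually identified joke variants.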

Further Links