Publication
Effective Human Oversight of AI-Based Systems: A Signal Detection Perspective on the Detection of Inaccurate and Unfair Outputs
Markus Langer; Kevin Baum; Nadine Schlicker
In: Minds and Machines - Journal for Artificial Intelligence, Philosophy and Cognitive Science, Vol. 35, No. 1, Pages 1-30, Springer Nature, 11/2024.
Abstract
Legislation and ethical guidelines around the globe call for effective human oversight of AI-based systems in high-risk contexts – that is, oversight that reliably reduces the risks otherwise associated with the use of AI-based systems. Such risks may relate to the imperfect accuracy of systems (e.g., inaccurate classifications) or to ethical concerns (e.g., unfairness of outputs). Given the significant role that human oversight is expected to play in the operation of AI-based systems, it is crucial to better understand the conditions for effective human oversight. We argue that the reliable detection of errors (as an umbrella term for inaccuracies and unfairness) is crucial for effective human oversight. We then propose that Signal Detection Theory (SDT) offers a promising framework for better understanding what affects people’s sensitivity (i.e., how well they are able to detect errors) and response bias (i.e., their tendency to report errors given perceived evidence of an error) in detecting errors. Whereas an SDT perspective on the detection of inaccuracies is straightforward, we demonstrate its broader applicability by detailing the specifics of an SDT perspective on unfairness detection, including the need to choose a standard for (un)fairness. Additionally, we illustrate that an SDT perspective helps to better understand the conditions for effective error detection by showing examples of task-, system-, and person-related factors that may affect the sensitivity and response bias of humans tasked with detecting unfairness associated with the use of AI-based systems. Finally, we discuss future research directions for an SDT perspective on error detection.
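For context, sensitivity and response bias are the two classical SDT measures. The sketch below illustrates the standard textbook way they are computed (d′ and criterion c) from the outcomes of an error-detection task; it is a minimal, hypothetical illustration, not code or notation taken from the paper. The function name, the log-linear correction, and the example counts are assumptions for demonstration purposes.

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute the standard SDT sensitivity (d') and response bias
    (criterion c) from raw counts in an error-detection task.

    A log-linear correction (add 0.5 to each cell, 1 to each total)
    avoids infinite z-scores when a rate is exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)

    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa           # sensitivity: how well erroneous and correct outputs are discriminated
    criterion = -(z_hit + z_fa) / 2  # response bias: c > 0 means a conservative tendency to report errors
    return d_prime, criterion

# Hypothetical example: an overseer reviews 100 flawed and 100 correct
# outputs, correctly flagging 80 flawed ones and wrongly flagging 20
# correct ones.
print(sdt_measures(hits=80, misses=20, false_alarms=20, correct_rejections=80))
```

On this reading, interventions that raise d′ improve an overseer's ability to discriminate erroneous from correct outputs, whereas interventions that shift c change only how willing the overseer is to report an error at a given level of perceived evidence.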