Publication
How do we assess the trustworthiness of AI? Introducing the trustworthiness assessment model (TrAM)
Nadine Schlicker; Kevin Baum; Alarith Ude; Sarah Sterz; Martin C. Hirsch; Markus Langer
In: Matthieu J. Guitton (Ed.). Computers in Human Behavior, Vol. 170, Pages 1-19, Elsevier, 2025.
Abstract
Designing trustworthy AI-based systems and enabling external parties to accurately assess the trustworthiness of these systems are crucial objectives. Only if trustors assess system trustworthiness accurately can they base their trust on adequate expectations about the system and reasonably rely on or reject its outputs. However, the process by which trustors assess a system's actual trustworthiness to arrive at their perceived trustworthiness remains underexplored. In this paper, we conceptually distinguish between actual and perceived trustworthiness, trust propensity, trust, and trusting behavior. Drawing on psychological models of how humans assess other people's characteristics, we present the two-level Trustworthiness Assessment Model (TrAM). At the micro level, we propose that trustors assess system trustworthiness based on cues associated with the system. The accuracy of this assessment depends on cue relevance and availability on the system's side, and on cue detection and utilization on the human's side. At the macro level, we propose that individual micro-level trustworthiness assessments propagate across different trustors – one stakeholder's trustworthiness assessment of a system affects other stakeholders' trustworthiness assessments of the same system. The TrAM advances existing models of trust and sheds light on factors influencing the (accuracy of) trustworthiness assessments. It contributes to theoretical clarity in trust research, has implications for the measurement of trust-related variables, and has practical implications for system design, stakeholder training, AI alignment, and AI regulation related to trustworthiness assessments.