Publication
SCAR: Sparse Conditioned Autoencoders for Concept Detection and Steering in LLMs
Ruben Härle; Felix Friedrich; Manuel Brack; Björn Deiseroth; Patrick Schramowski; Kristian Kersting
In: Computing Research Repository (CoRR), Vol. abs/2411.07122, Pages 1-11, arXiv, 2024.
Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities in
generating human-like text, but their output may not be aligned with user intent
and can even be harmful. This paper presents a novel approach to detecting and
steering concepts such as toxicity before generation. We introduce the Sparse
Conditioned Autoencoder (SCAR), a single trained module that extends the otherwise
untouched LLM. SCAR provides full steerability, toward and away from concepts
(e.g., toxic content), without compromising the quality of the model’s text
generation on standard evaluation benchmarks. We demonstrate the effective
application of our approach across a variety of concepts, including toxicity,
safety, and writing-style alignment. This work thus establishes a robust framework
for controlling LLM generations, ensuring their ethical and safe deployment in
real-world applications.
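To make the idea concrete, the following is a minimal toy sketch of the general pattern the abstract describes: a sparse autoencoder attached to a frozen hidden state, with one latent conditioned on a concept so it can be read out for detection and clamped for steering. All names, shapes, and weights here are illustrative assumptions, not the authors' implementation.

```python
# Toy sketch of conditioned-SAE detection and steering (hypothetical
# names and shapes; NOT the authors' code). Latent 0 plays the role of
# the concept-conditioned feature.
import random

random.seed(0)

DIM = 8      # hidden-state size of a (toy) frozen LLM layer
LATENTS = 4  # SAE dictionary size; latent 0 is conditioned on the concept

# Random weights standing in for a trained module.
W_enc = [[random.uniform(-0.5, 0.5) for _ in range(DIM)] for _ in range(LATENTS)]
W_dec = [[random.uniform(-0.5, 0.5) for _ in range(LATENTS)] for _ in range(DIM)]

def encode(h):
    """ReLU-sparse latent code of a hidden state h."""
    return [max(0.0, sum(w * x for w, x in zip(row, h))) for row in W_enc]

def decode(z):
    """Linear reconstruction of the hidden state from the latent code."""
    return [sum(w * a for w, a in zip(row, z)) for row in W_dec]

def detect(h, threshold=0.1):
    """Concept detection before generation: read the conditioned latent."""
    return encode(h)[0] > threshold

def steer(h, alpha):
    """Steering: clamp the conditioned latent to alpha, re-decode, and add
    the resulting difference back onto the hidden state."""
    z = encode(h)
    z_steered = [alpha] + z[1:]
    delta = [d - o for d, o in zip(decode(z_steered), decode(z))]
    return [x + d for x, d in zip(h, delta)]

h = [random.uniform(-1, 1) for _ in range(DIM)]
print("concept active:", detect(h))
h_away = steer(h, alpha=0.0)  # steer away from the concept
```

In a real system, `steer` would run as a hook on one transformer layer during generation, while the rest of the LLM stays untouched; `alpha = 0` suppresses the concept and a large positive `alpha` amplifies it.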
