
Publication

Introducing Language Guidance in Prompt-based Continual Learning

Muhammad Gulzain Ali Khan; Muhammad Ferjad Naeem; Luc Van Gool; Federico Tombari; Didier Stricker; Muhammad Zeshan Afzal
In: International Conference on Computer Vision (ICCV 2023), October 2-6, Paris, France. IEEE, 10/2023.

Abstract

Continual Learning aims to learn a single model on a sequence of tasks without having access to data from previous tasks. The biggest challenge in the domain remains catastrophic forgetting: a loss in performance on seen classes of earlier tasks. Some existing methods rely on an expensive replay buffer to store a chunk of data from previous tasks. This, while promising, becomes expensive when the number of tasks becomes large or when data cannot be stored for privacy reasons. As an alternative, prompt-based methods have been proposed that store the task information in a learnable prompt pool. This prompt pool instructs a frozen image encoder on how to solve each task. While the model faces a disjoint set of classes in each task in this setting, we argue that these classes can be encoded to the same embedding space of a pre-trained language encoder. In this work, we propose Language Guidance for Prompt-based Continual Learning (LGCL) as a plug-in for prompt-based methods. LGCL is model agnostic and introduces language guidance at the task level in the prompt pool and at the class level on the output feature of the vision encoder. We show with extensive experimentation that LGCL consistently improves the performance of prompt-based continual learning methods to set a new state-of-the-art. LGCL achieves these performance improvements without needing any additional learnable parameters.
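
The sketch below illustrates one plausible way the two guidance levels described in the abstract could be realized on top of a prompt-based learner such as L2P or DualPrompt: a task-level term that pulls the selected prompt keys toward the text embedding of the current task, and a class-level term that aligns the vision encoder's output feature with the text embedding of the ground-truth class name. This is not the authors' implementation; every name here (`text_encoder`, `prompt_pool_keys`, `training_step`, the loss weights) is an illustrative assumption.

```python
# Minimal sketch of LGCL-style language guidance (assumed interfaces, not the paper's code).
import torch
import torch.nn.functional as F


def task_level_language_loss(prompt_pool_keys, task_text_embedding):
    """Pull the prompt keys selected for the current task toward the frozen
    text embedding of that task's description (task-level guidance)."""
    # prompt_pool_keys: (num_selected_prompts, dim), task_text_embedding: (dim,)
    keys = F.normalize(prompt_pool_keys, dim=-1)
    text = F.normalize(task_text_embedding, dim=-1)
    # Maximize cosine similarity between each selected key and the task text.
    return (1.0 - keys @ text).mean()


def class_level_language_loss(vision_features, class_text_embeddings, labels,
                              temperature=0.07):
    """Align the vision encoder's output feature with the text embedding of
    the ground-truth class name, contrastively against other seen classes."""
    # vision_features: (batch, dim), class_text_embeddings: (num_classes, dim)
    img = F.normalize(vision_features, dim=-1)
    txt = F.normalize(class_text_embeddings, dim=-1)
    logits = img @ txt.t() / temperature          # (batch, num_classes)
    return F.cross_entropy(logits, labels)


def training_step(model, text_encoder, images, labels, task_name, class_names,
                  lambda_task=1.0, lambda_class=1.0):
    """One hypothetical training step: the base prompt-based loss plus the two
    language-guidance terms. `model` is assumed to return its own loss, the
    keys of the prompts it selected, and the pooled vision feature."""
    base_loss, selected_keys, vision_feat = model(images, labels)

    with torch.no_grad():  # the pre-trained language encoder stays frozen
        task_emb = text_encoder([f"a photo of a {task_name}"])[0]
        class_embs = text_encoder([f"a photo of a {c}" for c in class_names])

    return (base_loss
            + lambda_task * task_level_language_loss(selected_keys, task_emb)
            + lambda_class * class_level_language_loss(vision_feat, class_embs, labels))
```

Because both terms only reuse the frozen language encoder's embeddings as targets, a plug-in of this shape would add no learnable parameters, consistent with the claim in the abstract.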