
Publication

DeiSAM: Segment Anything with Deictic Prompting

AAAI Workshop on Neuro-Symbolic Learning and Reasoning in the Era of Large Language Models, located at AAAI, 2024.

Abstract

Large-scale, pre-trained neural networks have demonstrated strong capabilities in various tasks, including zero-shot image segmentation. To identify concrete objects in complex scenes, humans instinctively rely on deictic descriptions in natural language, i.e., referring to something depending on the context, e.g. "The object that is on the desk and behind the cup." However, deep learning approaches cannot reliably interpret these deictic representations due to their lack of reasoning capabilities in complex scenarios. To remedy this issue, we propose DeiSAM, which integrates large pre-trained neural networks with differentiable logic reasoners. Given a complex, textual segmentation description, DeiSAM leverages Large Language Models (LLMs) to generate first-order logic rules and performs differentiable forward reasoning on generated scene graphs. Subsequently, DeiSAM segments objects by matching them to the logically inferred image regions. As part of our evaluation, we propose the Deictic Visual Genome (DeiVG) dataset, containing paired visual input and complex, deictic textual prompts. Our empirical results demonstrate that DeiSAM is a substantial improvement over data-driven neural baselines on deictic segmentation tasks.
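The pipeline described in the abstract can be illustrated with a minimal toy sketch. This is not the authors' implementation: the scene-graph representation, the function name `match_rule`, and the hard-coded example rule are all assumptions made purely for illustration. It assumes a deictic prompt like "the object that is on the desk and behind the cup" has already been translated (e.g., by an LLM) into a conjunctive rule over scene-graph relations, which is then evaluated against the graph to pick out the target entity.

```python
# Toy sketch of deictic reasoning over a scene graph (illustrative only;
# all names here are assumptions, not the DeiSAM API).
SceneGraph = list[tuple[str, str, str]]  # (subject, relation, object) triples

def match_rule(graph: SceneGraph, conditions: list[tuple[str, str]]) -> set[str]:
    """Return every entity X that satisfies all (relation, object) conditions,
    i.e., evaluate a rule like target(X) :- on(X, desk), behind(X, cup)."""
    candidates = None
    for relation, obj in conditions:
        matches = {s for (s, r, o) in graph if r == relation and o == obj}
        candidates = matches if candidates is None else candidates & matches
    return candidates or set()

# A tiny hand-made scene graph standing in for one generated from an image.
scene = [
    ("laptop", "on", "desk"),
    ("laptop", "behind", "cup"),
    ("book", "on", "desk"),
    ("cup", "on", "desk"),
]
# Rule from the prompt: target(X) :- on(X, desk), behind(X, cup).
print(match_rule(scene, [("on", "desk"), ("behind", "cup")]))  # {'laptop'}
```

In the actual system, this matching is differentiable forward reasoning rather than set intersection, and the inferred entity is mapped back to its image region for segmentation.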
