“Robot, bring me my cordless screwdriver!” This is a plausible command for a service robot, in a production environment as much as in a private home. Robots handing a tool over to a human have been demonstrated many times; fetching objects is state of the art. More precisely: perceiving a screwdriver (in some arbitrary, previously unknown pose), picking it up, bringing it over and handing it to a person is state of the art. But what about “my” screwdriver? Not just any screwdriver, nor any one of some particular type, but “mine”?

Distinguishing between individual objects in everyday environments is trivial for us humans: we do it in daily life without any conscious effort. Doing it technically requires solving the conceptual problem of reliably maintaining the correspondence between physical objects and their representations, even if the objects do not differ sensorially. For robots to solve this, we have to enable them to exploit knowledge and context: “I left my screwdriver on the workbench” or “my mug is the one on my desk”. The general, currently unsolved problem behind this is called “anchoring”: the process of “creating and maintaining the correspondence between symbols and sensor data that refer to the same physical objects” (Coradeschi & Saffiotti 2003).

The goal of the CoPDA project is to provide software that solves the anchoring problem under particular given conditions; we call it the Dynamic Anchoring Agent (DAA). The DAA, based on a particular knowledge representation and a particular (multi-)sensor configuration, will be conceived, implemented, tested and demonstrated in varying environments, including an indoor and an outdoor use case (a lab production environment and a real marina). The DAA will be made publicly available to the robotics community in the ROS (Robot Operating System) ecosystem.
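To make the anchoring notion concrete, the core idea of linking a symbol (like “my_screwdriver”) to matching sensor percepts can be sketched as follows. This is a minimal illustration of the concept from Coradeschi & Saffiotti, not the DAA's actual design: all class names, the attribute-based matching rule, and the nearest-to-last-position heuristic for re-acquiring a lost object are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Percept:
    """Sensor-level observation of a candidate object (fields illustrative)."""
    position: tuple   # estimated (x, y) in some world frame
    attributes: dict  # e.g. {"type": "screwdriver", "owner": "me"}

@dataclass
class Anchor:
    """Links a symbol to the latest percept believed to be the same object."""
    symbol: str
    properties: dict                 # symbolic description used for matching
    percept: Optional[Percept] = None

def matches(properties: dict, percept: Percept) -> bool:
    """A percept matches if it agrees with every symbolic property."""
    return all(percept.attributes.get(k) == v for k, v in properties.items())

def acquire(symbol: str, properties: dict, percepts: list) -> Anchor:
    """Create an anchor the first time a matching percept is observed."""
    anchor = Anchor(symbol, properties)
    for p in percepts:
        if matches(properties, p):
            anchor.percept = p
            break
    return anchor

def reacquire(anchor: Anchor, percepts: list) -> Anchor:
    """Re-establish the link after the object was out of view, preferring
    the matching candidate closest to the last known position."""
    candidates = [p for p in percepts if matches(anchor.properties, p)]
    if candidates and anchor.percept is not None:
        lx, ly = anchor.percept.position
        anchor.percept = min(
            candidates,
            key=lambda p: (p.position[0] - lx) ** 2 + (p.position[1] - ly) ** 2,
        )
    elif candidates:
        anchor.percept = candidates[0]
    return anchor
```

In this sketch, two sensorially identical screwdrivers can only be told apart by the anchor's remembered context (here, merely the last known position); the “my screwdriver” problem arises precisely when attribute matching alone is ambiguous.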