
Project

TeachTAM: Machine Teaching with Hybrid Neurosymbolic Reinforcement Learning; The Apprenticeship Model


Recent advances in machine learning (specifically computer vision and reinforcement learning) have allowed robots to understand objects and the surrounding environment on a perceptual, non-symbolic level (e.g., object detection, sensor fusion, and language understanding). However, a growing area of research aims to understand objects on a conceptual, symbolic level, so that robots can learn the way humans do. Imagine, for instance, a system in which a professional human worker acts as a teacher to an industrial apprentice robot, passing on conceptual knowledge.

Researchers have therefore recently attempted to implicitly combine symbolic and non-symbolic learning paradigms through Deep Reinforcement Learning (RL) to ultimately achieve a human-like apprentice robot. This approach, however, has several drawbacks: (1) it requires very long training times compared to traditional deep learning approaches; (2) convergence to the optimal policy is not guaranteed, and the agent can get stuck in a sub-optimal policy; (3) an RL agent is trained in a simulated environment, so it cannot foresee actions that exist only in the physical environment.

Thus, this project's goal is to build a real-time, practical machine teaching system in a physical environment based on the apprenticeship paradigm (e.g., Imitation Learning, Behavioral Cloning, and Inverse Reinforcement Learning). The artificial agent (i.e., the robot) would explicitly learn on both the perceptual and conceptual levels through direct feedback from a human teacher, while learning and understanding implicitly using its existing view (i.e., sensors) of the world in a natural, multimodal manner. Furthermore, the system would be a dynamic, universal system that can accommodate multiple domains (e.g., industrial robots, medical robots, and robotic tradesmen) with little to no change to the model architecture (through Transfer Learning), while remaining specific enough for domain experts to insert their knowledge into the learning process.
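To give a feel for the apprenticeship paradigm mentioned above, the sketch below shows the core idea of Behavioral Cloning in its simplest form: the teacher's demonstrations are recorded as (state, action) pairs, and the apprentice imitates the action taken in the most similar demonstrated state. This is an illustrative toy example only; the class and variable names are hypothetical and do not reflect the project's actual implementation.

```python
import numpy as np

class NearestNeighborPolicy:
    """Toy behavioral-cloning policy: imitate the teacher's action
    recorded at the most similar demonstrated state (hypothetical example)."""

    def fit(self, states, actions):
        # Store the teacher's demonstrations as (state, action) pairs.
        self.states = np.asarray(states, dtype=float)   # shape (N, D)
        self.actions = np.asarray(actions)              # shape (N,)
        return self

    def act(self, state):
        # Euclidean distance from the query state to every demonstrated state.
        dists = np.linalg.norm(self.states - np.asarray(state, dtype=float), axis=1)
        # Copy the teacher's action at the nearest demonstration.
        return self.actions[int(np.argmin(dists))]

# Hypothetical demonstrations: 2-D states, two discrete actions (0 and 1).
demo_states = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]]
demo_actions = [0, 0, 1]

policy = NearestNeighborPolicy().fit(demo_states, demo_actions)
print(policy.act([0.2, 0.1]))  # near the first demonstration → 0
print(policy.act([4.8, 5.2]))  # near the third demonstration → 1
```

In practice the lookup would be replaced by a trained function approximator (e.g., a neural network regressing actions from states), but the supervised imitate-the-teacher structure is the same.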

Partners

ZEISS Group

Sponsors

BMBF - Federal Ministry of Education and Research


Publications about the project

Amr Gomaa; Robin Zitt; Guillermo Reyes; Antonio Krüger

In: Adjunct Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology (UIST-2023), New York, NY, USA, UIST '23 Adjunct, ISBN 9798400700965, Association for Computing Machinery (ACM), 10/2023.


Amr Gomaa; Michael Feld

In: Proceedings of the 25th International Conference on Multimodal Interaction. International Conference on Multimodal Interfaces (ICMI-2023), Paris, France, ICMI '23, ISBN 9798400700552, Association for Computing Machinery, 10/2023.


Amr Gomaa; Bilal Mahdy

In: Proceedings of the 1st International Workshop on Human-in-the-Loop Applied Machine Learning (HITLAML 2023). International Workshop on Human-in-the-Loop Applied Machine Learning (HITLAML-2023), Luxembourg, CEUR Workshop Proceedings (CEUR-WS.org), 9/2023.
