Publication
PresSim: An End-to-end Framework for Dynamic Ground Pressure Profile Generation from Monocular Videos Using Physics-based 3D Simulation
Lala Ray; Bo Zhou; Sungho Suh; Paul Lukowicz
In: IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops 2023), March 13-17, 2023, Atlanta, GA, USA. IEEE, March 2023.
Abstract
Ground pressure exerted by the human body is a valuable source of information for human activity recognition (HAR) in unobtrusive pervasive sensing. Since collecting data from pressure sensors to develop HAR solutions requires significant resources and effort, we present a novel end-to-end framework, PresSim, that synthesizes sensor data from videos of human activities to reduce this effort significantly. PresSim adopts a three-stage process: first, it extracts 3D activity information from videos with computer vision architectures; then it simulates floor mesh deformation profiles based on the 3D activity information and a gravity-included physics simulation; finally, it generates the simulated pressure sensor data with deep learning models. We explored two approaches to obtaining the 3D activity information: inverse kinematics with mesh re-targeting, and volumetric pose and shape estimation. We validated PresSim with an experimental setup in which a monocular camera provided the input and a pressure-sensing fitness mat (80×28 spatial resolution) provided the sensor ground truth, with nine participants performing a set of predefined yoga sequences. The synthesized pressure maps match the sensor ground truth with an R-squared value of 0.948 for pressure shape on the binarized pressure maps, and the pressure values of activated sensing nodes reach a corrected R-squared value of 0.811 within areas with ground contact. We publish our nine-hour dataset and the source code to contribute to the broader research community.
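For illustration, here is a minimal sketch of how the two reported metrics could be computed for a single synthesized frame against the sensor ground truth. The `THRESHOLD` value and the `evaluate` helper are hypothetical and not taken from the paper, and the paper's "corrected R-squared" may apply an adjustment not reproduced in this plain coefficient of determination.

```python
import numpy as np

def r_squared(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of determination between two flattened arrays."""
    y_true = y_true.ravel().astype(float)
    y_pred = y_pred.ravel().astype(float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical binarization cutoff; the paper does not state the value here.
THRESHOLD = 0.1

def evaluate(pred_map: np.ndarray, gt_map: np.ndarray) -> dict:
    """Compare a synthesized 80x28 pressure frame with the mat's ground truth."""
    # Shape agreement: binarize both maps so only the contact footprint counts.
    pred_bin = (pred_map > THRESHOLD).astype(float)
    gt_bin = (gt_map > THRESHOLD).astype(float)
    shape_r2 = r_squared(gt_bin, pred_bin)

    # Value agreement: restrict to nodes where the ground truth shows contact.
    contact = gt_bin.astype(bool)
    value_r2 = r_squared(gt_map[contact], pred_map[contact])
    return {"shape_r2": shape_r2, "contact_value_r2": value_r2}
```

In this reading, the 0.948 figure corresponds to `shape_r2` over binarized maps and the 0.811 figure to a corrected variant of `value_r2` over contact regions only.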
Projects
Specter - Context- and Affect-Sensitive Personal Assistance in Instrumented Environments,
VidGenSense - Methods for Generating Synthetic Wearable And Ubiquitous Sensor Training Data