Continual Egocentric Activity Recognition with Foreseeable-Generalized Visual-IMU Representations

DOI: 10.36227/techrxiv.24041583.v3 Publication Date: 2024-03-29T19:02:29Z
ABSTRACT
The rapid development of wearable sensors enables convenient data collection in everyday human life. Human Activity Recognition (HAR), a prominent research direction for wearable applications, has made remarkable progress in recent years. However, existing efforts mostly focus on improving recognition accuracy, paying limited attention to a model's functional scalability, specifically its ability to learn continually. This limitation greatly restricts its application in open-world scenarios. Moreover, due to storage and privacy concerns, it is often impractical to retain the activity data of different users for subsequent tasks, especially egocentric visual information. Furthermore, the imbalance between the visual and inertial-measurement-unit (IMU) sensing modalities introduces challenges, such as a lack of generalization, when employing conventional continual learning techniques. In this paper, we propose a motivational scheme to address the problems caused by modal imbalance, enabling a foreseeable-generalized visual-IMU multimodal network. To overcome forgetting, we introduce a robust representation estimation technique and a pseudo-representation generation strategy. Experimental results on the UESTC-MMEA-CL dataset demonstrate the effectiveness of the proposed method: it effectively leverages the capabilities of IMU-based representations and outperforms general state-of-the-art methods under various task settings.
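The abstract mentions a pseudo-representation generation strategy for overcoming forgetting without retaining users' raw activity data. The paper's exact formulation is not given here; as a minimal sketch, assuming a common feature-replay design, one can store per-class feature statistics (mean and covariance) instead of raw samples and draw pseudo-representations for old classes while training on a new task. All class, method, and variable names below are illustrative, not taken from the paper.

```python
import numpy as np

class PseudoRepresentationMemory:
    """Illustrative sketch: keeps per-class Gaussian statistics of feature
    embeddings (no raw samples stored), then generates pseudo-representations
    of old classes for replay during later tasks."""

    def __init__(self):
        self.stats = {}  # class_id -> (mean, covariance)

    def register_class(self, class_id, features):
        # features: (n_samples, dim) embeddings of one class from the
        # current task's encoder; only summary statistics are retained.
        mean = features.mean(axis=0)
        cov = np.cov(features, rowvar=False)
        self.stats[class_id] = (mean, cov)

    def sample(self, class_id, n):
        # Draw n pseudo-representations from the stored Gaussian estimate.
        mean, cov = self.stats[class_id]
        return np.random.multivariate_normal(mean, cov, size=n)

# Usage: after task 1, register its classes; during task 2, mix real
# features of new classes with pseudo-features replayed for old ones.
rng = np.random.default_rng(0)
old_feats = rng.normal(size=(100, 16))   # stand-in for task-1 embeddings
memory = PseudoRepresentationMemory()
memory.register_class(0, old_feats)
replay = memory.sample(0, 32)            # pseudo-representations, shape (32, 16)
```

Storing only statistics addresses the storage and privacy concern the abstract raises: the egocentric visual recordings themselves never need to be retained.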
SUPPLEMENTAL MATERIAL
Coming soon.