ARGUS: Visualization of AI-Assisted Task Guidance in AR
KEYWORDS
Troubleshooting, Headset
DOI:
10.48550/arxiv.2308.06246
Publication Date:
2023-01-01
AUTHORS (18)
ABSTRACT
The concept of augmented reality (AR) assistants has captured the human imagination for decades, becoming a staple of modern science fiction. To pursue this goal, it is necessary to develop artificial intelligence (AI)-based methods that simultaneously perceive the 3D environment, reason about physical tasks, and model the performer, all in real-time. Within this framework, a wide variety of sensors are needed to generate data across different modalities, such as audio, video, depth, speech, and time-of-flight. The required sensors are typically part of the AR headset, providing the performer with sensing and interaction through visual and haptic feedback. AI assistants must not only record the performer as they perform activities, but also require machine learning (ML) models to understand and assist the performer as they interact with the physical world. Developing such an assistant is therefore a challenging task. We propose ARGUS, a visual analytics system to support the development of intelligent AR assistants. Our system was designed as part of a multi-year-long collaboration between visualization researchers and ML and AR experts. This co-design process led to advances in the visualization of ML workflows for AR. ARGUS allows online visualization of object, action, and step detection, as well as offline analysis of previously recorded sessions. It visualizes both the multimodal sensor streams and the output of the ML models, helping developers gain insights into performer activities and model behavior so they can troubleshoot, improve, and fine-tune the components of the assistant.
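The abstract describes visualizing multimodal sensor streams (audio, video, depth, etc.) alongside ML model outputs, which requires synchronizing streams that arrive at different rates. The sketch below is a minimal, hypothetical illustration of that idea, not the ARGUS API: each stream is a list of timestamped samples, and a snapshot at time t picks the nearest sample from each stream.

```python
from bisect import bisect_left
from dataclasses import dataclass


@dataclass
class Stream:
    """One timestamped sensor or model-output stream (e.g. video, depth)."""
    name: str
    timestamps: list  # monotonically increasing, in seconds
    samples: list     # one sample per timestamp

    def nearest(self, t):
        """Return the sample whose timestamp is closest to t."""
        i = bisect_left(self.timestamps, t)
        if i == 0:
            return self.samples[0]
        if i == len(self.timestamps):
            return self.samples[-1]
        before, after = self.timestamps[i - 1], self.timestamps[i]
        # Pick whichever neighbor is temporally closer to t.
        return self.samples[i] if after - t < t - before else self.samples[i - 1]


def snapshot(streams, t):
    """Align all streams at time t for synchronized inspection."""
    return {s.name: s.nearest(t) for s in streams}


# Example: a 30 fps video stream and a slower depth stream.
video = Stream("video", [0.0, 0.033, 0.066], ["frame0", "frame1", "frame2"])
depth = Stream("depth", [0.0, 0.05], ["depth0", "depth1"])
print(snapshot([video, depth], 0.04))  # → {'video': 'frame1', 'depth': 'depth1'}
```

A real system would additionally handle clock offsets between devices and buffer live data for online visualization; the nearest-neighbor lookup above is only the core alignment step.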
SUPPLEMENTAL MATERIAL
Coming soon ....