Ishan Chatterjee

ORCID: 0000-0002-2123-6392
Research Areas
  • Interactive and Immersive Displays
  • Augmented Reality Applications
  • Tactile and Sensory Interactions
  • Virtual Reality Applications and Impacts
  • Hand Gesture Recognition Systems
  • Advanced Optical Imaging Technologies
  • Gaze Tracking and Assistive Technology
  • Advanced Sensor and Energy Harvesting Materials
  • Speech and Audio Processing
  • Indoor and Outdoor Localization Technologies
  • Music and Audio Processing
  • Bluetooth and Wireless Communication Technologies
  • Optical Wireless Communication Technologies
  • Speech Recognition and Synthesis
  • Advanced Adaptive Filtering Techniques
  • Speech and dialogue systems
  • Parallel Computing and Optimization Techniques
  • Ergonomics and Musculoskeletal Disorders
  • Robot Manipulation and Learning
  • Modular Robots and Swarm Intelligence
  • Energy Harvesting in Wireless Networks
  • Electrowetting and Microfluidic Technologies
  • Video Analysis and Summarization
  • Energy Efficient Wireless Sensor Networks

Google (United States)
2024

University of Washington
2021-2024

University of Cambridge
2023

Georgia Institute of Technology
2023

Microsoft (United States)
2020-2023

Park Plaza Hospital
2023

Association for Computing Machinery
2023

Seoul National University
2023

Harvard University Press
2015

Humans rely extensively on eye gaze and hand manipulations in their everyday activities. Most often, users gaze at an object to perceive it and then use their hands to manipulate it. We propose a multimodal approach combining gaze with free-space gestures to enable rapid, precise, and expressive touch-free interactions. We show that the two input methods are highly complementary, mitigating the imprecision and limited expressivity of gaze-alone systems and the slow targeting speed of gesture-alone systems. We extend an existing interaction taxonomy that...

10.1145/2818346.2820752 article EN 2015-11-06
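The division of labor described above — gaze for coarse targeting, a gesture to commit — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the `Target` type, the pinch handler, and the 0.15 selection radius are all hypothetical.

```python
from dataclasses import dataclass
import math

@dataclass
class Target:
    name: str
    x: float  # normalized screen coordinates
    y: float

def select_with_gaze(gaze_x, gaze_y, targets, max_dist=0.15):
    """Return the target nearest the gaze point, if within a coarse radius.

    Gaze alone is imprecise, so the radius is generous; a confirming
    free-space gesture commits the selection.
    """
    best, best_d = None, max_dist
    for t in targets:
        d = math.hypot(t.x - gaze_x, t.y - gaze_y)
        if d < best_d:
            best, best_d = t, d
    return best

def on_pinch(gaze_x, gaze_y, targets):
    """Gesture event handler: gaze picks the candidate, the pinch confirms it."""
    return select_with_gaze(gaze_x, gaze_y, targets)
```

The complementarity argument lives in the split: gaze supplies fast but noisy pointing, while the gesture supplies the deliberate, expressive trigger that gaze-alone dwell selection lacks.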

Abstract This paper is a review and analysis of the various implementation architectures of diffractive waveguide combiners for augmented reality (AR) and mixed reality (MR) headsets and smart glasses. Extended reality (XR) is another acronym frequently used to refer to all variants across the AR/MR spectrum. Such devices have the potential to revolutionize how we work, communicate, travel, learn, teach, shop, and are entertained. Already, market analysts show very optimistic expectations on return on investment in MR, both in enterprise...

10.1515/nanoph-2020-0410 article EN cc-by Nanophotonics 2020-10-07
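The principle behind the diffractive waveguide combiners surveyed above can be illustrated with the standard grating equation, n·sin(θ_m) = sin(θ_in) + m·λ/Λ: the in-coupling grating diffracts incident light to an angle steep enough for total internal reflection inside the glass. A minimal sketch (illustrative parameter values, not taken from the review):

```python
import math

def diffraction_angle_deg(theta_in_deg, wavelength_nm, pitch_nm, order=1, n=1.0):
    """Diffracted angle from the grating equation:
        n * sin(theta_m) = sin(theta_in) + m * lambda / pitch
    Returns None when the requested order is evanescent (|sin| > 1).
    """
    s = (math.sin(math.radians(theta_in_deg)) + order * wavelength_nm / pitch_nm) / n
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# Example: green light (532 nm) at normal incidence on a 400 nm pitch
# grating, coupled into glass of index 1.8.
angle_in_glass = diffraction_angle_deg(0.0, 532, 400, order=1, n=1.8)
critical_angle = math.degrees(math.asin(1.0 / 1.8))  # TIR threshold, ~33.7 deg
```

Because the diffracted angle (about 47.6°) exceeds the critical angle, the light is trapped and guided toward the out-coupling grating — the core mechanism of a waveguide combiner.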

We present ClearBuds, the first hardware and software system that utilizes a neural network to enhance speech streamed from two wireless earbuds. Real-time speech enhancement for earbuds requires high-quality sound separation and background cancellation, operating in real-time on a mobile phone. ClearBuds bridges state-of-the-art deep learning for blind audio source separation and in-ear mobile systems by making key technical contributions: 1) a new wireless earbud design capable of operating as a synchronized, binaural microphone array, 2)...

10.1145/3498361.3538933 preprint EN 2022-06-16
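To see why a synchronized binaural microphone array matters, consider the classical delay-and-sum baseline (not ClearBuds' neural separator): aligning the two earbud channels reinforces a source ahead of the wearer while partially canceling uncorrelated background noise. A toy sketch on plain sample lists:

```python
def delay_and_sum(left, right, delay_samples):
    """Align and average two synchronized microphone channels.

    A positive delay means the right channel lags the left by that many
    samples; shifting compensates before averaging. This is a classical
    two-mic baseline, shown only to motivate array synchronization.
    """
    if delay_samples > 0:
        right = right[delay_samples:] + [0.0] * delay_samples
    elif delay_samples < 0:
        left = left[-delay_samples:] + [0.0] * (-delay_samples)
    n = min(len(left), len(right))
    return [(left[i] + right[i]) / 2.0 for i in range(n)]
```

Without tight synchronization between the earbuds, even this simple alignment fails, which is why the custom hardware contribution precedes the neural one.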

English as a Second Language (ESL) learners often encounter unknown words that hinder their text comprehension. Automatically detecting these words as users read can enable computing systems to provide just-in-time definitions, synonyms, or contextual explanations, thereby helping users learn vocabulary in a natural and seamless manner. This paper presents EyeLingo, a transformer-based machine learning method that predicts the probability of a word being unknown based on text content and eye gaze trajectory in real time with high accuracy. A...

10.48550/arxiv.2502.10378 preprint EN arXiv (Cornell University) 2025-02-14
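The intuition — rarer words and longer gaze dwell both raise the chance a word is unknown — can be caricatured with a two-feature logistic model. This is a toy stand-in, not EyeLingo's trained transformer; all weights and feature choices here are invented for illustration.

```python
import math

def unknown_word_prob(word_log_freq, dwell_ms,
                      w_freq=-0.8, w_dwell=0.01, bias=-2.0):
    """Toy logistic model of word difficulty.

    word_log_freq: log corpus frequency (lower = rarer).
    dwell_ms: total gaze dwell time on the word.
    Lower frequency and longer dwell both push the probability up.
    Weights are illustrative assumptions.
    """
    z = bias + w_freq * word_log_freq + w_dwell * dwell_ms
    return 1.0 / (1.0 + math.exp(-z))
```

A real system would replace the hand-set weights with a model trained jointly on text content and gaze trajectories, as the paper describes.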

Face orientation can often indicate users' intended interaction target. In this paper, we propose FaceOri, a novel face tracking technique based on acoustic ranging using earphones. FaceOri leverages the speaker of a commodity device to emit an ultrasonic chirp, which is picked up by the set of microphones on the user's earphones and then processed to calculate the distance from each microphone to the device. These measurements are used to derive the user's face orientation and distance with respect to the device. We conduct a ground truth comparison and user study to evaluate FaceOri's...

10.1145/3491102.3517698 article EN CHI Conference on Human Factors in Computing Systems 2022-04-28
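The ranging step above reduces to time-of-flight arithmetic: distance = speed of sound × propagation time, and the left/right path-length difference constrains head yaw. A simplified far-field sketch follows; the geometry and the 18 cm ear separation are assumptions for illustration, not FaceOri's actual model.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def one_way_distance(time_of_flight_s):
    """Distance from the chirp-emitting device to one earphone microphone."""
    return SPEED_OF_SOUND * time_of_flight_s

def head_yaw_deg(d_left, d_right, ear_separation_m=0.18):
    """Rough yaw estimate from the left/right path-length difference,
    using a far-field approximation: sin(yaw) ~ delta_d / ear_separation."""
    s = max(-1.0, min(1.0, (d_left - d_right) / ear_separation_m))
    return math.degrees(math.asin(s))
```

With a 1 ms time of flight the device is about 34 cm away; equal left/right distances mean the face points straight at the device.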

We present Z-Ring, a wearable ring that enables gesture input, object detection, user identification, and interaction with passive user interface (UI) elements using a single sensing modality and a single point of instrumentation on the finger. Z-Ring uses active electrical field sensing to detect changes in the hand's impedance caused by finger motions or contact with external surfaces. We develop a diverse set of interactions and evaluate them with 21 users. We demonstrate: (1) single- and two-handed gesture recognition with up to 93% accuracy, (2) tangible input...

10.1145/3544548.3581422 article EN 2023-04-19

Interactions with Extended Reality Head-Mounted Displays (XR HMDs) require precise, intuitive, and efficient input methods. Current approaches either rely on power-intensive sensors, such as cameras for hand tracking, or on specialized hardware controllers. Previous work has explored the use of familiar, readily available devices such as smartphones and smartwatches as a more practical alternative. However, this approach risks interaction overload – how can one determine whether a user's gestures on the watch or phone are...

10.1145/3613905.3650758 article EN 2024-05-02

We present ClearBuds, the first end-to-end hardware and software system that utilizes a neural network to enhance speech streamed from two wireless earbuds. Real-time speech enhancement for earbuds requires high-quality sound separation and background cancellation, operating in real-time on a mobile phone. ClearBuds bridges state-of-the-art deep learning for blind audio source separation and in-ear mobile systems by making key technical contributions: 1) a new wireless earbud design capable of operating as a synchronized, binaural microphone array, 2)...

10.1145/3498361.3538654 article EN 2022-06-16

QWERTY is the primary smartphone text input keyboard configuration. However, insertion and substitution errors caused by hand tremors, often experienced by users with Parkinson's disease, can severely affect typing efficiency and user experience. In this paper, we investigated tremor-affected users' typing behavior on smartphones. In particular, we identified and compared the typing characteristics generated by users with and without tremor symptoms. We then proposed an elastic probabilistic model for input prediction. By incorporating both spatial and temporal features,...

10.1145/3411764.3445352 article EN 2021-05-06
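A common starting point for probabilistic keyboard decoding is a per-key 2D Gaussian touch model combined with a language-model prior; tremor-induced spread can then be absorbed by widening the Gaussians. The sketch below illustrates that idea only — the parameter values and key layout are hypothetical, and the paper's elastic model additionally uses temporal features.

```python
import math

def gaussian2d(dx, dy, sx, sy):
    """Unnormalized-axis 2D Gaussian density at offset (dx, dy)."""
    return math.exp(-0.5 * ((dx / sx) ** 2 + (dy / sy) ** 2)) / (2 * math.pi * sx * sy)

def key_posterior(touch, keys, prior, sx=0.04, sy=0.05):
    """P(key | touch point) from per-key Gaussians and a prior.

    touch: (x, y) in normalized keyboard coordinates.
    keys:  {key_label: (center_x, center_y)}.
    prior: {key_label: language-model weight}; missing keys default to 1.
    Widening sx/sy models the larger spatial spread of tremor-affected taps.
    """
    scores = {k: prior.get(k, 1.0) * gaussian2d(touch[0] - x, touch[1] - y, sx, sy)
              for k, (x, y) in keys.items()}
    z = sum(scores.values())
    return {k: v / z for k, v in scores.items()}
```

Decoding then picks the key (or word hypothesis) with the highest posterior rather than the key whose bounding box was physically hit, which is what makes the model "elastic" to noisy taps.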

Subject-aware vocal activity sensing on wearables, which specifically recognizes and monitors the wearer's own distinct vocal activities, is essential to advancing personal health monitoring and enabling context-aware applications. While recent advancements in earables present new opportunities, the absence of relevant datasets and effective methods remains a significant challenge. In this paper, we introduce EarSAVAS, the first publicly available dataset constructed for subject-aware human vocal activity sensing on earables. EarSAVAS...

10.1145/3659616 article EN cc-by Proceedings of the ACM on Interactive Mobile Wearable and Ubiquitous Technologies 2024-05-13

10.1109/ismar-adjunct64951.2024.00179 article EN 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) 2024-10-21

This paper introduces Z-Pose, a method for continuous 3D hand pose tracking that uses swept-frequency RF sensing via a worn ring device. By modeling the hand as an antenna and analyzing its geometry-dependent impedance, Z-Pose performs real-time tracking based on the unique impedance signature of each pose. Unlike camera-based systems, this technique is robust to occlusion and illumination, performing well in our evaluation studies even with obstructions such as sleeves and gloves. The results indicate the system achieves...

10.1145/3615592.3616851 article EN cc-by-nc-sa 2023-09-05
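The core idea — each hand pose produces a distinctive swept-frequency impedance signature — can be illustrated with a nearest-neighbor lookup against per-pose templates. This is a deliberately simplified stand-in for Z-Pose's continuous tracking pipeline; the template vectors and pose labels are invented.

```python
def cosine_sim(a, b):
    """Cosine similarity between two impedance-magnitude sweeps."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def match_pose(signature, templates):
    """Return the pose label whose template sweep best matches the
    measured signature (one sample per swept frequency)."""
    return max(templates, key=lambda pose: cosine_sim(signature, templates[pose]))
```

A real system would regress continuous joint angles rather than snap to discrete templates, but the matching step conveys why the signatures must be geometry-dependent and repeatable.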

Two technologies were combined to demonstrate a compact, foveated, occlusive Mixed Reality (MR) headset. Waveguide displays were used to create the central, high-resolution Field of View (FOV), and a Heterogeneous Multi-Lens Array (HMLA) based display formed the periphery. A HoloLens 2, employing transparent waveguide displays, was used as the central display, covering a horizontal FOV of 43° with a resolution of 47 Pixels Per Degree (ppd). Each peripheral display used a custom-made HMLA over an off-the-shelf OLED microdisplay, with each lens of the array acting as...

10.1117/12.2679099 article EN 2023-08-07
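The numbers above imply a simple pixel budget: covering a field of view at a given angular resolution costs FOV × ppd pixels along that axis, which is why foveation (high ppd only in the center) saves so much.

```python
def pixels_for_fov(fov_deg, ppd):
    """Pixel count needed along one axis to cover fov_deg at ppd pixels/degree."""
    return round(fov_deg * ppd)

# The central waveguide region: 43 degrees at 47 ppd needs ~2021 pixels
# across, while a low-ppd periphery covers far more angle per pixel.
central_px = pixels_for_fov(43, 47)
```

Extending 47 ppd across, say, a 100° FOV would require roughly 4700 horizontal pixels, which motivates the hybrid waveguide-plus-HMLA architecture.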

Debugging printed circuit boards (PCBs) requires frequent context switching and spatial pattern matching between software design files and physical boards. To reduce this overhead, we conduct a series of interviews with electrical engineers to understand their workflows, around which we design a set of AR interaction techniques, which we call Augmented Silkscreen, to streamline component identification, localization, annotation, and measurement tasks. We then run remote user studies using illustrative video sketches of simulated PCB tasks...

10.1145/3461778.3462091 article EN Designing Interactive Systems Conference 2021-06-28

Debugging printed circuit boards (PCBs) can be a time-consuming process, requiring frequent context switching between PCB design files (schematic and layout) and the physical PCB. To assist electrical engineers in debugging PCBs, we present ARDW, an augmented reality workbench consisting of a monitor interface featuring the design files, a projector-augmented workspace with tracked test probes for selection and measurement, and a connected test instrument. The system supports common debugging workflows via visualization on the board as well as interaction...

10.1145/3526113.3545684 article EN 2022-10-28

This paper presents Z-Ring, a novel wearable device that uses radio frequency (RF) based sensing to offer unique capabilities for human-computer interaction, including subtle input, object recognition, user identification, and passive surface interaction. With only a single sensing modality, Z-Ring achieves diverse concurrent interactions that can enhance the user experience. We illustrate its potential to enable seamless, context-aware interaction via a custom music player application. In the future, we plan to expand Z-Ring's...

10.1145/3586182.3615809 article EN 2023-10-27

Augmented, mixed, and virtual reality (AR/MR/VR) headsets as well as smart glasses have the potential to revolutionize how we work, communicate, travel, learn, teach, shop, and get entertained [1], [2]. An MR headset places content into the user's view of the real world, either via an optical see-through mode (AR/MR) or a video pass-through mode (VR/MR). Today, return on investment for their use has been demonstrated widely in the enterprise and defense sectors, but only partially in the consumer sector. In order to meet the high market expectations, especially...

10.1117/12.2571473 article EN 2020-08-20

Augmented, mixed, and virtual reality (AR/MR/VR) headsets as well as smart glasses have the potential to revolutionize how we work, communicate, travel, learn, teach, shop, and get entertained. An MR headset places content into the user's view of the real world, either via an optical see-through mode (AR/MR) or a video pass-through mode (VR/MR). Today, return on investment for their use has been demonstrated widely in the enterprise and defense sectors, but only partially in the consumer sector. In order to meet the high market expectations, especially...

10.1002/sdtp.13800 article EN SID Symposium Digest of Technical Papers 2020-08-01