Ruei-Che Chang

ORCID: 0000-0001-7545-4136
Research Areas
  • Tactile and Sensory Interactions
  • Interactive and Immersive Displays
  • Virtual Reality Applications and Impacts
  • Advanced Image and Video Retrieval Techniques
  • Video Analysis and Summarization
  • Digital Accessibility for Disabilities
  • Music and Audio Processing
  • Augmented Reality Applications
  • Human Motion and Animation
  • Subtitles and Audiovisual Media
  • Teleoperation and Haptic Systems
  • Multimodal Machine Learning Applications
  • User Authentication and Security Systems
  • Multisensory perception and integration
  • Noise Effects and Management
  • Natural Language Processing Techniques
  • Geographic Information Systems Studies
  • Social Robot Interaction and HRI
  • Human Pose and Action Recognition
  • Hearing Loss and Rehabilitation
  • AI in Service Interactions
  • Digital Communication and Language
  • Mobile Crowdsensing and Crowdsourcing
  • Hand Gesture Recognition Systems
  • Pediatric health and respiratory diseases

University of Michigan
2022-2024

Michigan United
2022-2024

National Taiwan University
2021-2024

IT University of Copenhagen
2024

Dartmouth College
2020-2021

National Yang Ming Chiao Tung University
2019-2020

National Tsing Hua University
2020

Automated live visual descriptions can aid blind people in understanding their surroundings with autonomy and independence. However, providing descriptions that are rich, contextual, and just-in-time has been a long-standing challenge in accessibility. In this work, we develop WorldScribe, a system that generates automated live real-world visual descriptions that are customizable and adaptive to users' contexts: (i) WorldScribe's descriptions are tailored to users' intents and prioritized based on semantic relevance. (ii) WorldScribe is adaptive to visual contexts, e.g., providing consecutively succinct descriptions for...

10.1145/3654777.3676375 preprint EN 2024-10-11
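
The abstract above describes prioritizing descriptions by semantic relevance to user intent. The following is a minimal sketch of that idea, not WorldScribe's actual code: it ranks candidate scene descriptions against a stated intent, with a toy word-overlap score standing in for a real semantic-similarity model.

```python
# Hypothetical sketch: rank candidate scene descriptions by relevance to the
# user's stated intent so the most relevant ones are spoken first.

def relevance(intent: str, description: str) -> float:
    """Jaccard word overlap; a stand-in for a real semantic similarity model."""
    a, b = set(intent.lower().split()), set(description.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def prioritize(intent: str, candidates: list[str]) -> list[str]:
    """Return candidate descriptions sorted by relevance to the intent."""
    return sorted(candidates, key=lambda d: relevance(intent, d), reverse=True)

if __name__ == "__main__":
    intent = "find an empty seat"
    candidates = [
        "a row of chairs with one empty seat near the window",
        "a poster advertising a concert",
        "two people talking by the door",
    ]
    for desc in prioritize(intent, candidates):
        print(f"{relevance(intent, desc):.2f}  {desc}")
```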

Blind and low-vision (BLV) people use audio descriptions (ADs) to access videos. However, current ADs are unalterable by end users and thus incapable of supporting BLV individuals' potentially diverse needs and preferences. This research investigates whether customizing ADs could improve how BLV individuals consume videos. We conducted an interview study (Study 1) with fifteen participants, which revealed desires for customizing properties such as length, emphasis, speed, voice, format, tone, and language. At the same time, concerns...

10.1145/3663548.3675617 preprint EN 2024-10-20
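
As a rough illustration of the customization properties the interviews surfaced (length, emphasis, speed, voice, format, tone, language), here is a hypothetical preference structure; the field names and defaults are illustrative, not from the paper.

```python
# Hypothetical sketch of user-customizable audio-description preferences.
from dataclasses import dataclass

@dataclass
class ADPreferences:
    length: str = "succinct"    # e.g., "succinct" | "standard" | "verbose"
    emphasis: str = "actions"   # which content to foreground
    speed: float = 1.0          # playback-rate multiplier
    voice: str = "voice_1"      # TTS voice identifier
    format: str = "inline"      # "inline" | "extended" (pauses the video)
    tone: str = "neutral"
    language: str = "en"

def render_request(ad_text: str, prefs: ADPreferences) -> dict:
    """Package an AD segment with rendering parameters for a TTS engine."""
    return {"text": ad_text, "voice": prefs.voice,
            "rate": prefs.speed, "lang": prefs.language}

print(render_request("A dog runs across the field.", ADPreferences(speed=1.5)))
```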

Figure 1: We present SoundShift, a concept for manipulating sounds to improve mixed-reality awareness. (a) SoundShift is situated in the auditory Reality-Virtuality Continuum, with full transparency and noise cancellation as its two ends, and comprises (b) six sound manipulators: Transparency Shift, Envelope Shift, Position Shift, Style Shift, Sound Append, and Time Shift. (c) In an example scenario, a BVI user navigates a busy street with a white cane and audio directions. They may receive ringtones or pass by construction sites with drilling...

10.1145/3643834.3661556 article EN Designing Interactive Systems Conference 2024-06-29
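
The caption above frames sound manipulators as transforms over where a sound sits on the transparency/cancellation continuum, where it is placed, and when it plays. Below is a minimal sketch of a few of them, assuming illustrative parameter names (gain, position, delay) that are not from the paper.

```python
# Hypothetical sketch of SoundShift-style manipulators as small transforms
# over a sound's rendering parameters.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Sound:
    source: str
    gain: float = 1.0                   # 0.0 = cancelled, 1.0 = fully transparent
    position: tuple = (0.0, 0.0, 0.0)   # spatial location (x, y, z)
    delay_s: float = 0.0                # time shift before playback

def transparency_shift(s: Sound, gain: float) -> Sound:
    """Move a sound along the transparency/cancellation continuum."""
    return replace(s, gain=max(0.0, min(1.0, gain)))

def position_shift(s: Sound, pos: tuple) -> Sound:
    """Relocate a sound in space, e.g., to separate overlapping sources."""
    return replace(s, position=pos)

def time_shift(s: Sound, delay_s: float) -> Sound:
    """Defer a lower-priority sound until a busy moment has passed."""
    return replace(s, delay_s=delay_s)

# Example: duck a ringtone and defer it while navigating a noisy street.
ringtone = time_shift(transparency_shift(Sound("ringtone"), 0.3), 2.0)
print(ringtone)
```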

We propose integrating an array of skin stretch modules with a head-mounted display (HMD) to provide two-dimensional haptic feedback on the user's face. Skin stretch has been found effective in inducing the perception of force (e.g., weight or inertia) and enabling directional haptic cues. However, its potential as an HMD output channel for virtual reality (VR) remains to be exploited. Our explorative study first investigated the design of shear tactors. Based on our results, we implemented Masque, a prototype actuating six tactors positioned around the HMD's...

10.1145/3332165.3347898 article EN 2019-10-17
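
One way to picture two-dimensional directional cues from six tactors is cosine-weighted actuation: tactors aligned with the target direction fire most strongly. This is a speculative sketch, not the Masque firmware, and the evenly spaced tactor angles are an assumption.

```python
# Hypothetical sketch: map a desired 2D shear direction on the face to drive
# levels for six tactors arranged around the HMD edge.
import math

TACTOR_ANGLES = [i * 60.0 for i in range(6)]  # degrees; assumed even spacing

def tactor_intensities(direction_deg: float) -> list[float]:
    """Per-tactor drive levels in [0, 1] for a given stretch direction."""
    out = []
    for angle in TACTOR_ANGLES:
        alignment = math.cos(math.radians(direction_deg - angle))
        out.append(max(0.0, alignment))  # only tactors facing the cue fire
    return out

print(tactor_intensities(90.0))  # a stretch cue pointing "up"
```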

Blind people typically access videos via audio descriptions (AD) crafted by sighted describers who comprehend, select, and describe crucial visual content in the videos. 360° video is an emerging storytelling medium that enables immersive experiences that everyday life may not possibly reach. However, the omnidirectional nature of 360° video makes it challenging for describers to perceive the holistic visual content and interpret the spatial information essential to creating ADs for blind people. Through a formative study with a professional describer, we...

10.1145/3526113.3545613 article EN 2022-10-28

AI-enabled smart-home agents that automate household routines are increasingly viable, but the design space of how and what such systems should communicate with their users remains underexplored. Through a user-enactment study, we identified various interpretations of and feelings toward a system's confidence in its automated acts. Participants' own mental models influenced what they wanted the system to communicate, as well as how they would assess, diagnose, and subsequently improve it. Automated acts resulted from...

10.1145/3313831.3376501 article EN 2020-04-21

In this paper, we propose designs for low-cost, 3D-printable add-on components that adapt existing breadboards, circuits, and electronics tools for blind or low-vision (BLV) users. Through an initial user study, we identified several barriers to entry for BLV beginners in prototyping. These guided the design and development of our components. We focused on developing adaptations that provide additional information about specific component pins and breadboard holes, modify tools to make them easier for BLV users to use, and expand...

10.1145/3411764.3445690 article EN 2021-05-06

Laser-cutting is a promising fabrication method that empowers makers, including blind or visually impaired (BVI) creators, to create technologies that fit their needs. Existing work on laser-cut accessibility has facilitated easier assembly as a workaround for existing models. However, models themselves are still not designed to accommodate the needs of BVI users. Integrating accessibility into model design can enrich the greater maker community by enabling cross-group discourse around making. To investigate how model design can be made more accessible overall,...

10.1145/3544548.3580684 article EN 2023-04-19

Figure 1: EditScribe supports non-visual image editing using natural language verification loops. The user first comprehends the image content through initial general and object descriptions, then specifies edit actions in natural language. EditScribe performs the edit and provides four types of feedback for the user to verify the performed edit, including a summary of visual changes, AI judgement, and updated general and object descriptions. The user can ask follow-up questions to clarify or probe into the edits or feedback before performing another edit.

10.1145/3663548.3675599 preprint EN 2024-10-20
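
The caption above describes a loop: describe, edit, verify via four feedback channels, then iterate. Here is that control flow as a minimal sketch; the describe/edit functions are stubs standing in for the system's AI components, not EditScribe's actual implementation.

```python
# Hypothetical sketch of a natural-language verification loop for
# non-visual image editing.

def describe(image) -> str:
    return "general and per-object descriptions of the image"  # placeholder

def perform_edit(image, instruction: str):
    return image  # placeholder for the actual image-editing backend

def feedback(before, after) -> dict:
    """The four feedback channels named in the caption above."""
    return {
        "visual_change_summary": "what changed between before and after",
        "ai_judgement": "whether the edit likely matched the instruction",
        "updated_general_description": describe(after),
        "updated_object_descriptions": "descriptions of affected objects",
    }

def editing_session(image, instructions):
    print(describe(image))            # initial comprehension
    for instruction in instructions:  # one verification loop per edit
        edited = perform_edit(image, instruction)
        for channel, text in feedback(image, edited).items():
            print(f"[{channel}] {text}")
        image = edited                # the next loop starts from the result
    return image

editing_session(object(), ["brighten the sky", "remove the lamppost"])
```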

Body-controlled avatars provide a more intuitive method for real-time control of virtual avatars but require larger environment space and more user effort. In contrast, hand-controlled avatars afford dexterous, less fatiguing manipulations within a close-range space, but provide fewer sensory cues for the avatar than the body-based method. This paper investigates the differences between the two methods and explores the possibility of their combination. We first performed a formative study to understand when and how users prefer manipulating hands or bodies to represent avatars' actions in...

10.1145/3562939.3565609 article EN 2022-11-22

Design tools and research regarding laser-cut architectures have been widely explored in the past decade. However, such discussion has mostly revolved around technical structural design questions instead of another essential element of models: assembly, a process that relies heavily on components' visual affordances and is therefore less accessible to blind or low-vision (BLV) people. To narrow the gap in this area, we co-designed with 7 BLV people to examine their experience with different architectures. From...

10.1145/3472749.3474754 article EN 2021-10-10

This paper introduces Glissade, a digital pen that generates balance-shifting feedback by changing the weight distribution of the pen. A pulley system shifts a brass mass inside the pen to change the pen's center of mass and moment of inertia. When the mass is stationary, Glissade delivers a constant yet natural sensation of weight, which can be used to convey status. The pen can also generate a variety of haptic cues by actuating the mass according to the tilt or rotation of the pen, two commonly used auxiliary input channels. Glissade demonstrates new possibilities to bring...

10.1145/3313831.3376505 article EN 2020-04-21
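
The mechanics behind the abstract above are standard: sliding an internal mass moves the combined center of mass and changes the moment of inertia about the grip point. A back-of-the-envelope sketch follows; all masses and positions are illustrative numbers, not the pen's actual specifications.

```python
# Point-mass approximation of how a sliding brass mass changes a pen's
# felt weight distribution.

def center_of_mass(m_body, x_body, m_brass, x_brass):
    """Combined center of mass along the pen axis (meters from the tip)."""
    return (m_body * x_body + m_brass * x_brass) / (m_body + m_brass)

def moment_of_inertia(m_body, x_body, m_brass, x_brass, x_grip):
    """Inertia about the grip point, treating both parts as point masses."""
    return (m_body * (x_body - x_grip) ** 2
            + m_brass * (x_brass - x_grip) ** 2)

m_body, x_body, m_brass, x_grip = 0.015, 0.07, 0.010, 0.03  # kg, m (illustrative)
for x_brass in (0.02, 0.07, 0.12):  # brass mass near the tip, middle, tail
    com = center_of_mass(m_body, x_body, m_brass, x_brass)
    moi = moment_of_inertia(m_body, x_body, m_brass, x_brass, x_grip)
    print(f"brass at {x_brass:.2f} m -> COM {com:.3f} m, I {moi:.2e} kg*m^2")
```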

Sounds are everywhere, from real-world content to virtual audio presented by hearing devices, creating a mixed-reality soundscape that carries rich but intricate information. However, sounds often overlap and conflict in priority, which makes them hard to perceive and differentiate. This is exacerbated in mixed-reality settings, where real and virtual sounds can interfere with each other, and may hinder awareness of mixed reality for blind people, who rely heavily on auditory information in their everyday lives. To address this, we present sound rendering...

10.1145/3586182.3615787 article EN 2023-10-27
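
One simple way to resolve the priority conflicts the abstract mentions is to duck lower-priority sources whenever sounds overlap, keeping safety-critical real-world sounds perceivable. This is a hypothetical sketch of that policy; the priorities and gain values are illustrative, not from the paper.

```python
# Hypothetical priority-aware mixing for a mixed-reality soundscape.
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    priority: int    # higher = more important (e.g., traffic > music)
    gain: float = 1.0

def mix(active: list[Source], duck_gain: float = 0.25) -> list[Source]:
    """Keep the highest-priority sources at full gain; duck the rest."""
    if not active:
        return []
    top = max(s.priority for s in active)
    return [Source(s.name, s.priority, 1.0 if s.priority == top else duck_gain)
            for s in active]

overlapping = [Source("approaching car", 3), Source("navigation cue", 2),
               Source("music", 1)]
for s in mix(overlapping):
    print(f"{s.name}: gain {s.gain}")
```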

Mixed-reality (MR) soundscapes blend real-world sounds with virtual audio from hearing devices, presenting intricate auditory information that is hard to discern and differentiate. This is particularly challenging for blind or visually impaired individuals, who rely on sounds and descriptions in their everyday lives. To understand how complex soundscapes are consumed, we analyzed online forum posts within the community, identifying prevailing challenges, needs, and desired solutions. We synthesized the results and proposed...

10.48550/arxiv.2401.11095 preprint EN cc-by-nc-sa arXiv (Cornell University) 2024-01-01
Ashu Adhikari, Ana Paula Afonso, Roland Aigner, Aditya Satya, Georgia Albuquerque, and 95 more (including Ruei-Che Chang)

10.1109/vr58804.2024.00011 article AF 2024-03-16

We present TanGo, an always-available input modality on a VR headset that can be complementary to current accessories. TanGo is an active mechanical structure symmetrically mounted on a head-mounted display, enabling 3-dimensional bimanual sliding input, with each degree of freedom furnished with a brake system driven by a micro servo, generating in total 6 passive resistive force profiles. TanGo is an all-in-one structure that possesses rich input and output capability while keeping compact, balancing the trade-offs between size, weight, and usability. Users...

10.1145/3385959.3418457 article EN Symposium on Spatial User Interaction 2020-10-26

We present Puppeteer, an input prototype system that allows players to directly control their avatars through intuitive hand gestures and upper-body postures. We selected 17 avatar actions discovered in a pilot study and conducted a gesture elicitation study inviting 12 participants to design the gestures and postures best representing each action. We then implemented the system using the MediaPipe framework to detect keypoints and a self-trained model to recognize gestures and postures. Finally, three applications demonstrate the interactions enabled by Puppeteer.

10.1145/3526114.3558689 article EN 2022-10-28
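
The abstract names MediaPipe for keypoint detection feeding a self-trained recognizer. Below is a minimal sketch of that detection stage using MediaPipe's hand-landmark solution (requires the mediapipe and opencv-python packages); the classifier is a stub standing in for Puppeteer's self-trained model.

```python
# Minimal keypoint-detection sketch with MediaPipe; the gesture classifier
# is a placeholder, not Puppeteer's trained model.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=2)

def classify(landmarks) -> str:
    """Stand-in for a trained gesture recognizer over 21 hand keypoints."""
    return "unknown gesture"  # a real model would map keypoints to an action

cap = cv2.VideoCapture(0)  # webcam feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    for hand in results.multi_hand_landmarks or []:
        # each detected hand yields 21 (x, y, z) keypoints as features
        print(classify(hand.landmark))
cap.release()
```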