Kenneth DeHaan

ORCID: 0000-0003-2903-6894
Research Areas
  • Hand Gesture Recognition Systems
  • Hearing Impairment and Communication
  • Human Pose and Action Recognition
  • Indoor and Outdoor Localization Technologies
  • Speech and Dialogue Systems
  • Tactile and Sensory Interactions
  • Health and Medical Research Impacts
  • Career Development and Diversity
  • Cerebral Palsy and Movement Disorders
  • Advances in Oncology and Radiotherapy

Gallaudet University
2021-2025

University of Pittsburgh
2025

One of the factors that has hindered progress in the areas of sign language recognition, translation, and production is the absence of large annotated datasets. Towards this end, we introduce How2Sign, a multimodal and multiview continuous American Sign Language (ASL) dataset, consisting of a parallel corpus of more than 80 hours of sign language videos and a set of corresponding modalities including speech, English transcripts, and depth. A three-hour subset was further recorded in the Panoptic studio, enabling detailed 3D pose estimation. To evaluate...

10.1109/cvpr46437.2021.00276 article EN 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021-06-01

Purpose: The present study assessed the test–retest reliability of the American Sign Language (ASL) version of the Computerized Revised Token Test (CRTT-ASL) and compared the differences and similarities between ASL and English reading by Deaf and hearing users of ASL. Method: Creation of the CRTT-ASL involved filming, editing, and validating the CRTT instructions, sentence commands, and scoring. Deaf proficient (DP), hearing nonproficient (HNP), and other sign language users completed the self-paced, word-by-word reading version (CRTT-Reading-Word Fade [CRTT-R-wf]). Both tests were...

10.1044/2024_jslhr-24-00207 article EN Journal of Speech Language and Hearing Research 2025-01-24

Search engines such as Google, Baidu, and Bing have revolutionized the way we interact with the cyber world through a number of applications in recommendations, learning, advertisements, healthcare, entertainment, etc. In this paper, we design search engines for sign languages such as American Sign Language (ASL). Sign languages use hand and body motion for communication and have a rich grammar, complexity, and vocabulary comparable to spoken languages. ASL is the primary language of a Deaf community with a global population of ≈ 500 million. However, support...

10.1145/3570361.3613286 article EN Proceedings of the 28th Annual International Conference on Mobile Computing And Networking 2023-09-30

Abstract Over the past decade, there have been great advancements in radio frequency sensor technology for human–computer interaction applications, such as gesture recognition and, more broadly, human activity recognition. While there is a significant amount of study on these topics, in most cases experimental data are acquired in controlled settings by directing participants in what motion to articulate. However, especially for communicative motions, such as sign language, directed data sets do not accurately capture...

10.1049/rsn2.12565 article EN cc-by-nc-nd IET Radar Sonar & Navigation 2024-05-10

In recent years, there have been significant developments in radio frequency (RF) sensor technology used in human-computer interaction (HCI) applications, specifically in areas like gesture recognition and, more broadly, human activity recognition. Although extensive research has been conducted on these subjects, most experiments involve controlled settings where participants are instructed how to perform specific movements. However, for communicative motions such as sign language, such directed data fall short of capturing dialectal...

10.1109/radarconf2458775.2024.10548148 article EN 2024 IEEE Radar Conference (RadarConf24) 2024-05-06

Sign language is widely used by over 500 million Deaf and hard of hearing (DHH) individuals in their daily lives. While prior works have made notable efforts to show the feasibility of recognizing signs with various sensing modalities from both the wireless and wearable domains, they recruited sign language learners for validation. Based on our interactions with native users, we found that signal diversity hinders generalization across users (e.g., users from different backgrounds interpret signs differently and have complex, articulated...

10.1109/iotdi61053.2024.00022 article EN 2024-05-13

Coreference resolution is key to many natural language processing tasks and yet has been relatively unexplored in Sign Language Processing. In signed languages, space is primarily used to establish reference. Solving coreference resolution for signed languages would not only enable higher-level Sign Language Processing systems, but also enhance our understanding of how different modalities situate references, which are central problems in studying grounded language. In this paper, we: (1) introduce Signed Coreference Resolution (SCR), a new challenge...

10.18653/v1/2021.emnlp-main.405 article EN cc-by Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing 2021-01-01

One of the factors that has hindered progress in the areas of sign language recognition, translation, and production is the absence of large annotated datasets. Towards this end, we introduce How2Sign, a multimodal and multiview continuous American Sign Language (ASL) dataset, consisting of a parallel corpus of more than 80 hours of sign language videos and a set of corresponding modalities including speech, English transcripts, and depth. A three-hour subset was further recorded in the Panoptic studio, enabling detailed 3D pose estimation. To evaluate...

10.48550/arxiv.2008.08143 preprint EN other-oa arXiv (Cornell University) 2020-01-01

The University of Pittsburgh Medical Center Hillman Cancer Academy (Hillman Academy) has the primary goal of reaching high school students from underrepresented and disadvantaged backgrounds and guiding them through a cutting-edge research and professional development experience that positions them for success in STEM. With this focus, it has provided nearly 300 authentic mentored internship opportunities to 239 diverse students over the past 13 years, most of whom have matriculated into STEM majors in higher education. These efforts have...

10.15695/jstem/v5i2.02 article EN The Journal of STEM Outreach 2022-08-03