Japanese Sign Language Recognition by Combining Joint Skeleton-Based Handcrafted and Pixel-Based Deep Learning Features with Machine Learning Classification

DOI: 10.32604/cmes.2023.046334 Publication Date: 2024-01-23T03:36:35Z
ABSTRACT
Sign language recognition is vital for enhancing communication accessibility among the Deaf and hard-of-hearing communities. In Japan, approximately 360,000 individuals with hearing and speech disabilities rely on Japanese Sign Language (JSL) for communication. However, existing JSL recognition systems have faced significant performance limitations due to the inherent complexities of sign gestures. In response to these challenges, we present a novel system that employs a strategic fusion approach, combining joint skeleton-based handcrafted features with pixel-based deep learning features. Our system incorporates two distinct streams: the first stream extracts crucial handcrafted features, emphasizing the capture of hand and body movements within gestures. Simultaneously, the second stream uses transfer learning to capture hierarchical feature representations of the gestures. We then concatenated the critical handcrafted information with the hierarchical deep features at multiple levels, aiming to create a comprehensive representation of the gestures. After reducing the dimensionality of the fused feature vector, a feature selection approach and a kernel-based support vector machine (SVM) were used for classification. To assess the effectiveness of our system, we conducted extensive experiments on our Lab dataset and a publicly available Arabic Sign Language (ArSL) dataset. Our results unequivocally demonstrate that the proposed fusion significantly enhances accuracy and robustness compared to individual feature sets or traditional methods.
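The fusion pipeline described above can be sketched in code. This is a minimal illustration only: the feature dimensions, the synthetic inputs, and the choice of PCA for dimensionality reduction and ANOVA-based selection for feature selection are assumptions, since the abstract does not name the exact handcrafted features, CNN backbone, or selection method.

```python
# Hedged sketch of the two-stream fusion pipeline (assumed components).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_classes = 200, 5

# Stream 1: joint skeleton-based handcrafted features
# (e.g., joint angles/distances; synthetic stand-in here).
skeleton_feats = rng.normal(size=(n_samples, 64))
# Stream 2: pixel-based deep features from a pretrained CNN
# (transfer learning; synthetic stand-in here).
deep_feats = rng.normal(size=(n_samples, 256))
labels = rng.integers(0, n_classes, size=n_samples)

# Fusion: concatenate the two streams into one representation.
fused = np.concatenate([skeleton_feats, deep_feats], axis=1)

# Dimensionality reduction -> feature selection -> kernel SVM.
clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=50),          # assumed reducer
    SelectKBest(f_classif, k=30),  # assumed selector
    SVC(kernel="rbf"),             # kernel-based SVM classifier
)
clf.fit(fused, labels)
preds = clf.predict(fused)
```

In practice the skeleton stream would come from a pose estimator and the deep stream from a pretrained vision model; the scikit-learn pipeline then mirrors the reduce-select-classify stages the abstract describes.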