Video-Based Sign Language Recognition via ResNet and LSTM Network

Residual neural network
DOI: 10.20944/preprints202405.1851.v1
Publication Date: 2024-05-29T04:29:38Z
ABSTRACT
Sign language recognition technology can help people with hearing impairments communicate with those who are not hearing impaired. With the rapid development of society, deep learning has also provided technical support for sign language recognition work. In such tasks, using traditional convolutional neural networks to extract spatio-temporal features from videos suffers from insufficient feature extraction, resulting in low recognition rates. Moreover, video-based sign language datasets are very large and require substantial computational resources for training, while generalisation must still be ensured, which poses a challenge for recognition. In this paper, we present a method based on ResNet and LSTM. As the number of network layers increases, ResNet effectively alleviates the gradient degradation problem and obtains better time-series features. We use ResNet as the backbone model. In the initialisation stage, spatial features are extracted using ResNet; then, the learned feature space is used as input to an LSTM to model long sequences. The experimental results show that the accuracy of the above model is higher than that of mainstream models, and that it improves the recognition rate of sign language actions.
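The abstract describes a two-stage pipeline: a ResNet backbone extracts a spatial feature vector per video frame, and an LSTM then models the resulting sequence for classification. The paper provides no code, so the following is only a minimal NumPy sketch of that flow; the random vectors standing in for ResNet frame embeddings, the hidden size, and the class count are all illustrative assumptions, not values from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step over a single frame feature x with state (h, c).
    The four gates (input, forget, cell, output) are computed jointly."""
    z = W @ x + U @ h + b
    H = h.shape[0]
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2 * H])      # forget gate
    g = np.tanh(z[2 * H:3 * H])  # candidate cell state
    o = sigmoid(z[3 * H:4 * H])  # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def classify_video(frame_features, W, U, b, W_out):
    """Run the LSTM over per-frame features (stand-ins for ResNet
    embeddings) and classify from the final hidden state."""
    H = U.shape[1]
    h = np.zeros(H)
    c = np.zeros(H)
    for x in frame_features:
        h, c = lstm_step(x, h, c, W, U, b)
    logits = W_out @ h
    return int(np.argmax(logits))

# Illustrative sizes only: 16 frames, 512-dim features (ResNet-18-like),
# 64 hidden units, 10 sign classes.
rng = np.random.default_rng(0)
T, D, H, num_classes = 16, 512, 64, 10
frames = rng.standard_normal((T, D)) * 0.1   # stand-in for ResNet outputs
W = rng.standard_normal((4 * H, D)) * 0.05
U = rng.standard_normal((4 * H, H)) * 0.05
b = np.zeros(4 * H)
W_out = rng.standard_normal((num_classes, H)) * 0.05
pred = classify_video(frames, W, U, b, W_out)
print(pred)
```

In a real implementation the frame features would come from a (possibly pretrained) ResNet applied to each frame, and all weights would be learned end-to-end; this sketch only demonstrates the sequential hand-off from per-frame spatial features to the recurrent classifier.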