
Exploring the Potential of Mediapipe Hand Landmarks for Word-level Sign Language Recognition through Masked-GRU Deep Learning


Abstract
This paper addresses the critical need for sign language recognition to improve communication for individuals with hearing impairments. Our approach leverages computer vision and deep learning to develop an efficient sign language recognition system. Using the Mediapipe library, we extract detailed hand landmarks from video input, effectively capturing the nuanced movements and hand configurations inherent in sign language gestures. At the core of our model is the GRU network, a type of recurrent neural network (RNN) designed for analyzing sequential data. The GRU architecture excels at capturing temporal dependencies, making it well suited to the dynamic nature of sign language expressions. We train our GRU network on the LSA64 dataset, which comprises videos of expert sign language users. Hand landmarks extracted from these videos with the Mediapipe module enable our model to learn and recognize the intricate patterns of sign language gestures.
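The abstract outlines a two-stage pipeline: Mediapipe Hands produces 21 landmarks per detected hand (each with x, y, z coordinates), which can be flattened into a 63-dimensional per-frame feature vector and fed to a recurrent network, while masking lets the GRU ignore padded or missing frames in variable-length videos. The paper's exact Masked-GRU formulation is not given in the abstract, so the following is a minimal pure-Python sketch of the idea with scalar weights; the names `flatten_landmarks`, `gru_step`, and `masked_gru` are chosen here for illustration and are not from the paper.

```python
import math

def flatten_landmarks(landmarks):
    """Flatten one hand's 21 Mediapipe landmarks, each (x, y, z),
    into a 63-dimensional per-frame feature vector."""
    assert len(landmarks) == 21
    return [coord for point in landmarks for coord in point]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, params):
    """One GRU update with scalar input and hidden state
    (the real model would use weight matrices and vectors)."""
    wz, uz, bz, wr, ur, br, wh, uh, bh = params
    z = sigmoid(wz * x + uz * h + bz)                 # update gate
    r = sigmoid(wr * x + ur * h + br)                 # reset gate
    h_tilde = math.tanh(wh * x + uh * (r * h) + bh)   # candidate state
    return (1.0 - z) * h + z * h_tilde

def masked_gru(sequence, mask, params, h0=0.0):
    """Run a GRU over a padded sequence; where mask is 0
    (a padded or undetected frame) the hidden state carries over unchanged."""
    h = h0
    for x, m in zip(sequence, mask):
        h_new = gru_step(x, h, params)
        h = m * h_new + (1.0 - m) * h
    return h
```

In a full implementation each frame's input would be the 63-dimensional landmark vector rather than a scalar, and a frame's mask could also be set to 0 when Mediapipe fails to detect a hand, so unreliable frames do not corrupt the recurrent state.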

Table of Contents

Abstract
1. Introduction
2. Related works
3. Methods
3.1. Dataset
3.2. Experiment setup
4. Experiment results
5. Conclusions
Acknowledgment
References

Authors

  • Zubia Naz [ EECS, Gwangju Institute of Science and Technology, Gwangju, South Korea ]
  • Muhammad Ishfaq Hussain [ EECS, Gwangju Institute of Science and Technology, Gwangju, South Korea ]
  • Suyeon Oh [ EECS, Gwangju Institute of Science and Technology, Gwangju, South Korea ]
  • Moongu Jeon [ EECS, Gwangju Institute of Science and Technology, Gwangju, South Korea ] Corresponding Author



    Publication Information

    • Publication
      한국차세대컴퓨팅학회 학술대회
    • Frequency
      Semiannual
    • Coverage
      2021~2025
    • Decimal Classification
      KDC 566, DDC 004