Sign Language Recognition With Self-Learning Fusion Model

IEEE Sensors Journal (2023)

Abstract
Sign language recognition (SLR) is the task of recognizing the human actions that express a sign language; it not only assists deaf-mute people but also serves as a means of human-computer interaction. Although data from wearable sensors have proven useful for this task, such data remain difficult to collect at the scale needed to train deep fusion models. In this study, our contributions are twofold: 1) we collect and release an SLR dataset consisting of both video data and sensor data obtained from wearable devices, and 2) we propose the first self-learning fusion model for SLR, termed STSLR, which uses a portion of the annotated data to simulate sensor embedding vectors. By virtue of the simulated sensor features, the video features extracted from video-only data are enhanced, allowing the fusion model to recognize the annotated actions more effectively. We empirically demonstrate the superiority of STSLR over competitive benchmarks on our newly released dataset and on well-known publicly available ones.
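The abstract only sketches the fusion idea at a high level. Below is a minimal PyTorch sketch of one way such a pipeline could look: a simulator network maps video features to pseudo-sensor embeddings, which are then fused with the video features for classification. All module names (SensorSimulator, FusionHead), the feature dimensions, and the simple concatenation-based fusion are illustrative assumptions, not the paper's actual STSLR architecture.

```python
import torch
import torch.nn as nn

class SensorSimulator(nn.Module):
    """Maps video features to simulated sensor embeddings.

    In the abstract's setting, such a module would be trained on the
    annotated subset where paired video + wearable-sensor data exist,
    then applied to video-only samples (hypothetical design).
    """
    def __init__(self, video_dim=512, sensor_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(video_dim, 256),
            nn.ReLU(),
            nn.Linear(256, sensor_dim),
        )

    def forward(self, video_feat):
        return self.net(video_feat)

class FusionHead(nn.Module):
    """Classifies a sign from concatenated video and (simulated) sensor features."""
    def __init__(self, video_dim=512, sensor_dim=128, num_classes=64):
        super().__init__()
        self.classifier = nn.Linear(video_dim + sensor_dim, num_classes)

    def forward(self, video_feat, sensor_feat):
        fused = torch.cat([video_feat, sensor_feat], dim=-1)
        return self.classifier(fused)

# Usage on a video-only sample: simulate the missing sensor modality,
# then fuse it with the video features for recognition.
simulator = SensorSimulator()
head = FusionHead()
video_feat = torch.randn(1, 512)          # placeholder video embedding
pseudo_sensor = simulator(video_feat)     # simulated sensor embedding
logits = head(video_feat, pseudo_sensor)  # class scores over the sign vocabulary
```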
Keywords
Deep learning, fusion, human activity recognition (HAR), sign language recognition (SLR)