Published 1 January 2022
| Version v1
Journal article
Open
Using Motion History Images With 3D Convolutional Networks in Isolated Sign Language Recognition
Creators
- 1. Ankara Univ, Comp Engn Dept, TR-06830 Ankara, Turkey
- 2. Hacettepe Univ, Comp Engn Dept, TR-06800 Ankara, Turkey
Description
Sign language recognition using computational models is a challenging problem that requires simultaneous spatio-temporal modeling of multiple sources, e.g., the face, hands, and body. In this paper, we propose an isolated sign language recognition model based on a model trained using Motion History Images (MHI) that are generated from RGB video frames. RGB-MHI images effectively represent the spatio-temporal summary of each sign video in a single RGB image. We propose two different approaches using this RGB-MHI model. In the first approach, we use the RGB-MHI model as a motion-based spatial attention module integrated into a 3D-CNN architecture. In the second approach, we combine RGB-MHI model features directly with the features of a 3D-CNN model using a late fusion technique. We perform extensive experiments on two recently released large-scale isolated sign language datasets, namely AUTSL and BosphorusSign22k. Our experiments show that our models, which use only RGB data, can compete with the state-of-the-art models in the literature that use multi-modal data.
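To illustrate the core idea behind motion history images, the sketch below implements the classic grayscale MHI update rule with NumPy: pixels where the frame difference exceeds a threshold are set to a maximum value, while all other pixels decay toward zero, so recent motion appears brightest and older motion fades. This is a minimal, generic sketch of the MHI concept only; the `tau`, `decay`, and `thresh` values are illustrative assumptions, and the paper's RGB-MHI construction (which summarizes a whole sign video in a single RGB image) is not reproduced here.

```python
import numpy as np

def update_mhi(mhi, prev_gray, curr_gray, tau=1.0, decay=0.05, thresh=30):
    """Update a motion history image with one new frame pair.

    Pixels with motion (absolute frame difference above `thresh`) are set
    to `tau`; all other pixels decay by `decay` toward zero. The parameter
    values are illustrative defaults, not values from the paper.
    """
    motion = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16)) > thresh
    mhi = np.maximum(mhi - decay, 0.0)
    mhi[motion] = tau
    return mhi

# Toy example: a bright square moving right across a dark background.
frames = []
for t in range(5):
    f = np.zeros((32, 32), dtype=np.uint8)
    f[10:20, 2 + 4 * t : 12 + 4 * t] = 255
    frames.append(f)

mhi = np.zeros((32, 32), dtype=np.float64)
for prev, curr in zip(frames, frames[1:]):
    mhi = update_mhi(mhi, prev, curr)

# The most recent motion (right side) is at 1.0; the oldest motion
# (left side) has decayed over the intervening updates.
```

Feeding such a single-image summary to a 2D network, or using it to weight the spatial locations of a 3D-CNN's feature maps, corresponds to the late-fusion and attention approaches described in the abstract.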
Files
| Name | Size |
|---|---|
| bib-8d9300b8-7fdc-4727-b25e-b21787c5e3a0.txt (md5:9dd620f37ea9ac19bc868108e9c3d6d8) | 175 Bytes |