MLMSign: Multi-lingual multi-modal illumination-invariant sign language recognition
Journal article · Open access · Peer reviewed


Arezoo Sadeghzadeh, A.F.M. Shahen Shah and Md Baharul Islam
Intelligent Systems with Applications, Vol. 22, p. 200384
06-01-2024

Abstract

Keywords: Ensemble learning; Hand-crafted features; Illumination-invariant system; Multi-lingual system; Sign language recognition
Sign language (SL) serves as a visual communication tool of great significance for deaf people, enabling them to interact with others and facilitating their daily lives. The wide variety of SLs and the general lack of interpretation skills necessitate developing automated sign language recognition (SLR) systems to narrow the communication gap between the deaf and hearing communities. Although numerous advanced static SLR systems exist, they are not practical enough for real-life scenarios when assessed simultaneously on several critical aspects: accuracy in dealing with high intra-class and slight inter-class variations, robustness, computational complexity, and generalization ability. To this end, we propose a novel multi-lingual multi-modal SLR system, namely MLMSign, which exploits the combined strengths of hand-crafted features and deep learning models to enhance performance and robustness against illumination changes while minimizing computational cost. The RGB sign images and 2D visualizations of their hand-crafted features, i.e., Histogram of Oriented Gradients (HOG) features and the a∗ channel of the L∗a∗b∗ color space, are employed as three input modalities to train a novel Convolutional Neural Network (CNN). The number of layers, filters, kernel size, learning rate, and optimization technique are carefully selected through an extensive parametric study to minimize the computational cost without compromising accuracy. The system's performance and robustness are significantly enhanced by jointly deploying the models of these three modalities through ensemble learning, where the contribution of each modality is weighted by an impact coefficient determined via grid search. In addition to a comprehensive quantitative assessment, the capabilities of our proposed model and the effectiveness of ensembling over three modalities are evaluated qualitatively using Grad-CAM visualization. Experimental results on test data with additional illumination changes verify the high robustness of our system under overexposed and underexposed lighting conditions. Achieving high accuracy (>99.33%) on six benchmark datasets (i.e., Massey, Static ASL, NUS II, TSL Fingerspelling, BdSL36v1, and PSL) demonstrates that our system notably outperforms recent state-of-the-art approaches with a minimum number of parameters and high generalization ability on complex datasets. Its promising performance on four different sign languages makes it a feasible system for multi-lingual applications.

Highlights
•Propose multi-lingual sign language recognition using handcrafted and deep features.
•Extract HOG and L∗a∗b∗ features to generate robust and representative modalities.
•Offer a parametric study to optimize a CNN for high performance with minimized cost.
•Apply weighted ensemble on CNNs of 3 modalities to improve accuracy and robustness.
•Evaluate performance and lighting-invariance on 6 datasets for multi-lingual apps.
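To illustrate the pipeline outlined in the abstract, the following Python sketch shows how the two hand-crafted modalities (a HOG visualization and the a∗ channel of L∗a∗b∗) could be derived from an RGB sign image, and how per-modality class probabilities could be fused using impact coefficients found by grid search. This is a minimal reconstruction from the abstract, not the authors' released code; the HOG parameters, the weight grid, and all function names are illustrative assumptions.

```python
# Minimal sketch (assumptions noted above) of the hand-crafted modalities
# and the weighted ensemble fusion described in the abstract.
import numpy as np
from itertools import product
from skimage.color import rgb2lab
from skimage.feature import hog


def handcrafted_modalities(rgb_image):
    """Return the HOG visualization and the a* channel as 2D modalities."""
    gray = rgb_image.mean(axis=2)
    # hog(..., visualize=True) returns the feature vector and a 2D rendering;
    # the rendering serves as one of the three CNN input modalities.
    _, hog_image = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2), visualize=True)
    lab = rgb2lab(rgb_image)      # convert RGB to L*a*b*
    a_channel = lab[:, :, 1]      # a* channel (green-red opponent axis)
    return hog_image, a_channel


def weighted_ensemble(probs_rgb, probs_hog, probs_a, weights):
    """Fuse per-modality class probabilities with impact coefficients."""
    w_rgb, w_hog, w_a = weights
    fused = w_rgb * probs_rgb + w_hog * probs_hog + w_a * probs_a
    return fused.argmax(axis=1)


def grid_search_weights(probs_list, labels, step=0.1):
    """Pick the impact coefficients that maximize validation accuracy."""
    best_acc, best_w = -1.0, (1.0, 1.0, 1.0)
    grid = np.arange(0.0, 1.0 + 1e-9, step)
    for w in product(grid, repeat=3):
        if sum(w) == 0:
            continue
        preds = weighted_ensemble(*probs_list, w)
        acc = float((preds == labels).mean())
        if acc > best_acc:
            best_acc, best_w = acc, w
    return best_w, best_acc
```

At inference time, the selected coefficients would weight the softmax outputs of the three per-modality CNNs before the final argmax, corresponding to the ensemble step described in the abstract.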
URL
https://doi.org/10.1016/j.iswa.2024.200384
Published (Version of record), Open access

Metrics

14 Record Views
16 Times Cited - Scopus

Details

UN Sustainable Development Goals (SDGs)

This output has contributed to the advancement of the following goals:

#11 Sustainable Cities and Communities