IEEE Transactions on Neural Systems and... 2021
Limb motion decoding is an important part of brain-computer interface (BCI) research. Among limb motions, sign language not only contains rich semantic information and abundant maneuverable actions but also provides many different executable commands. However, many researchers focus on decoding gross motor skills, such as ordinary motor imagery or simple upper limb movements. Here we explored the neural features and decoding of Chinese sign language from electroencephalograph (EEG) signals during motor imagery and motor execution. Twenty subjects were instructed to perform movement execution and movement imagery based on Chinese sign language. L1 regularization was used to learn and select the most informative features from the mean, power spectral density, sample entropy, and brain network connectivity, and seven classifiers were employed to classify the selected features of the sign language EEG. The best average classification accuracy was 89.90% for executed sign language (83.40% for imagined sign language). These results show the feasibility of decoding between different signs. Source localization reveals that the neural circuits involved in sign language are related to visual contact areas and pre-movement areas. The experimental evaluation shows that the proposed sign-language-based decoding strategy obtains outstanding classification results, which provides a reference for subsequent research on limb decoding based on sign language.
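The L1-based feature selection described above can be sketched in a few lines. This is an illustrative toy on synthetic data (the feature matrix, regularization strength, and learning rate are assumptions, not the paper's values), using proximal gradient descent (ISTA) to drive uninformative feature weights exactly to zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the paper's feature matrix: rows = trials,
# columns = candidate features (mean, PSD, sample entropy, connectivity).
n_trials, n_feats = 200, 40
X = rng.standard_normal((n_trials, n_feats))
true_w = np.zeros(n_feats)
true_w[:5] = [2.0, -1.5, 1.0, 1.2, -2.0]   # only 5 informative features
y = (X @ true_w > 0).astype(float)          # binary class labels

def l1_logistic(X, y, lam=0.05, lr=0.1, iters=2000):
    """L1-regularized logistic regression via proximal gradient (ISTA)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        grad = X.T @ (p - y) / len(y)
        w -= lr * grad
        # soft-thresholding step: small weights collapse to exactly zero
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

w = l1_logistic(X, y)
selected = np.flatnonzero(np.abs(w) > 1e-6)
print("features kept:", selected)
```

Features whose learned weight survives the soft-threshold are the ones passed on to the downstream classifiers.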
Topics: Brain-Computer Interfaces; China; Electroencephalography; Humans; Imagination; Machine Learning; Movement; Sign Language
PubMed: 34932480
DOI: 10.1109/TNSRE.2021.3137340 -
Novel Spatio-Temporal Continuous Sign Language Recognition Using an Attentive Multi-Feature Network. Sensors (Basel, Switzerland) Aug 2022
Given video streams, we aim to correctly detect unsegmented signs for continuous sign language recognition (CSLR). Despite the growing number of deep learning methods proposed in this area, most focus on a single RGB feature, either the full-frame image or details of the hands and face. The scarcity of information in the CSLR training process heavily constrains the ability to learn multiple features from the video input frames. Moreover, exploiting all frames in a video for the CSLR task can lead to suboptimal performance, since each frame carries a different level of information, ranging from main features to noise. Therefore, we propose a novel spatio-temporal continuous sign language recognition approach using an attentive multi-feature network that enhances CSLR with extra keypoint features. In addition, we exploit attention layers in the spatial and temporal modules to simultaneously emphasize multiple important features. Experimental results on both CSLR datasets demonstrate that the proposed method achieves superior performance compared with current state-of-the-art methods, with WER scores of 0.76 and 20.56 on the CSL and PHOENIX datasets, respectively.
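A temporal attention layer of the kind the abstract describes can be sketched as a learned weighted average over frame features. This toy numpy version (the dimensions and scoring vector are illustrative assumptions, not the paper's architecture) shows how attention weights let informative frames dominate the clip-level representation:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max()          # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def temporal_attention(frame_feats, w):
    """Weight each frame by a learned score so informative frames
    dominate the pooled representation and noisy frames are damped."""
    scores = frame_feats @ w            # one scalar score per frame
    alpha = softmax(scores)             # attention weights, sum to 1
    return alpha, alpha @ frame_feats   # weighted average over time

T, d = 16, 8                 # 16 frames, 8-dim features per frame
feats = rng.standard_normal((T, d))
w = rng.standard_normal(d)   # stand-in for a learned scoring vector

alpha, clip_vec = temporal_attention(feats, w)
print(alpha.round(3), clip_vec.shape)
```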
Topics: Attention; Humans; Recognition, Psychology; Sign Language
PubMed: 36080911
DOI: 10.3390/s22176452 -
American Annals of the Deaf 2021
Research rarely focuses on how deaf and hard of hearing (DHH) students address mathematical ideas. Complexities involved in using sign language (SL) in mathematics classrooms include not just challenges, but opportunities that accompany mathematics learning in this gestural-somatic medium. The authors consider DHH students primarily as learners of mathematics, and their SL use as a special case of language in the mathematics classroom. More specifically, using SL in teaching and learning mathematics is explored within semiotic and embodiment perspectives to gain a better understanding of how using SL affects the development, conceptualization, and representation of mathematical meaning. The theoretical discussion employs examples from the authors' work and research on geometry, arithmetic, and fraction concepts with Deaf German and Austrian learners and experts. The examples inform the context of mathematics teaching and learning more generally by illuminating SL features that distinguish mathematics learning for DHH learners.
Topics: Deafness; Education of Hearing Disabled; Humans; Mathematics; Persons With Hearing Impairments; Sign Language
PubMed: 34719521
DOI: 10.1353/aad.2021.0025 -
Sensors (Basel, Switzerland) Aug 2021
Sign language is designed to assist the deaf and hard of hearing community to convey messages and connect with society. Sign language recognition has long been an important domain of research. Previously, sensor-based approaches obtained higher accuracy than vision-based approaches, but because of the cost-effectiveness of vision-based approaches, research has continued in that direction despite the accuracy drop. The purpose of this research is to recognize American Sign Language characters using hand images obtained from a web camera. In this work, the MediaPipe Hands algorithm was used to estimate hand joints from RGB images of hands obtained from a web camera, and two types of features were generated from the estimated joint coordinates for classification: the distances between joint points, and the angles between joint-pair vectors and the 3D axes. The classifiers used were a support vector machine (SVM) and a light gradient boosting machine (LightGBM). Three character datasets were used for recognition: the ASL Alphabet dataset, the Massey dataset, and the Finger Spelling A dataset. The accuracies obtained were 99.39% for the Massey dataset, 87.60% for the ASL Alphabet dataset, and 98.45% for the Finger Spelling A dataset. The proposed design for automatic American Sign Language recognition is cost-effective, computationally inexpensive, requires no special sensors or devices, and outperforms previous studies.
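The two hand-crafted feature types (joint-to-joint distances and vector-to-axis angles) are straightforward to compute from 21 estimated hand keypoints; a minimal sketch, assuming a MediaPipe-style (21, 3) keypoint array (the exact pairing scheme is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(2)

def hand_features(joints):
    """joints: (21, 3) array of 3-D hand keypoints.
    Returns the pairwise distances between joints plus, for every
    joint-pair vector, its angle with each of the three coordinate axes."""
    n = len(joints)
    dists, angles = [], []
    for i in range(n):
        for j in range(i + 1, n):
            v = joints[j] - joints[i]
            norm = np.linalg.norm(v) + 1e-9
            dists.append(norm)
            # cos(angle with axis e_k) is the k-th normalized component
            angles.extend(np.arccos(np.clip(v / norm, -1.0, 1.0)))
    return np.concatenate([dists, angles])

feats = hand_features(rng.standard_normal((21, 3)))
print(feats.shape)   # 210 distances + 630 angles = 840 features
```

The resulting fixed-length vector is what a classifier such as an SVM or gradient boosting machine would consume.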
Topics: Algorithms; Fingers; Hand; Humans; Recognition, Psychology; Sign Language; United States
PubMed: 34502747
DOI: 10.3390/s21175856 -
Computational Intelligence and... 2022
Sign language plays a pivotal role in the lives of people with speaking and hearing disabilities, who can convey messages using hand gesture movements. American Sign Language (ASL) recognition is challenging due to high intra-class similarity and high complexity. To overcome these challenges, this paper presents an ASL alphabet recognition approach using a deep convolutional neural network (DeepCNN). The performance of the DeepCNN model improves with the amount of available data; for this purpose, we applied data augmentation to artificially expand the training set from the existing data. According to the experiments, the proposed DeepCNN model provides consistent results on the ASL dataset, with accuracy gains of 19.84%, 8.37%, 16.31%, 17.17%, 5.86%, and 3.26% over various state-of-the-art approaches.
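Data augmentation of the kind the paper relies on can be sketched with simple pixel-level transforms. This toy numpy version (the shift range and noise level are illustrative assumptions, not the paper's augmentation policy) produces several new training samples from a single image:

```python
import numpy as np

rng = np.random.default_rng(3)

def augment(img, max_shift=2, noise_std=0.02):
    """Shift the image a few pixels and add mild pixel noise,
    yielding a new training sample from an existing one."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    out = np.roll(img, (dy, dx), axis=(0, 1))
    out = out + rng.normal(0.0, noise_std, img.shape)
    return np.clip(out, 0.0, 1.0)

img = rng.random((64, 64))        # stand-in for one grayscale hand image
batch = np.stack([augment(img) for _ in range(8)])
print(batch.shape)                # 8 augmented variants of one image
```

Each augmented variant keeps the sign's identity while varying position and pixel intensity, which is what lets the CNN see an "expanded" dataset.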
Topics: Gestures; Humans; Movement; Neural Networks, Computer; Recognition, Psychology; Sign Language
PubMed: 35535197
DOI: 10.1155/2022/1450822 -
Journal of Deaf Studies and Deaf... Jan 2021
Past work investigating spatial cognition suggests better mental rotation abilities for those who are fluent in a signed language. However, no prior work has assessed whether fluency is needed to achieve this performance benefit or what it may look like on the neurobiological level. We conducted an electroencephalography experiment and assessed accuracy on a classic mental rotation task given to deaf fluent signers, hearing fluent signers, hearing non-fluent signers, and hearing non-signers. Two of the main findings of the study are as follows: (1) Sign language comprehension and mental rotation abilities are positively correlated and (2) Behavioral performance differences between signers and non-signers are not clearly reflected in brain activity typically associated with mental rotation. In addition, we propose that the robust impact sign language appears to have on mental rotation abilities strongly suggests that "sign language use" should be added to future measures of spatial experiences.
Topics: Comprehension; Deafness; Hearing; Hearing Tests; Humans; Sign Language
PubMed: 32978623
DOI: 10.1093/deafed/enaa030 -
Sensors (Basel, Switzerland) Jul 2023
Finding ways to enable seamless communication between deaf and able-bodied individuals has been a challenging and pressing issue. This paper proposes a solution by designing a low-cost data glove that uses multiple inertial sensors to achieve efficient and accurate sign language recognition. In this study, four machine learning models, namely decision tree (DT), support vector machine (SVM), K-nearest neighbors (KNN), and random forest (RF), were employed to recognize 20 types of dynamic sign language used by deaf individuals, and an attention-based bidirectional long short-term memory network (Attention-BiLSTM) was proposed for the same task. The study also verifies the impact of the number and position of data glove nodes on the accuracy of recognizing complex dynamic sign language. Finally, the proposed method is compared with existing state-of-the-art algorithms on nine public datasets. The results indicate that the Attention-BiLSTM and RF algorithms perform best on the twenty dynamic sign language gestures, with accuracies of 98.85% and 97.58%, respectively, which demonstrates the feasibility of the proposed data glove and recognition methods. This study may serve as a valuable reference for the development of wearable sign language recognition devices and promote easier communication between deaf and able-bodied individuals.
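Before either the classical classifiers or the Attention-BiLSTM can be trained, the continuous glove streams must be segmented into fixed-length samples. A minimal sliding-window sketch, assuming 5 IMU nodes with 6 channels each (an illustrative configuration, not necessarily the paper's glove layout):

```python
import numpy as np

def sliding_windows(stream, win=50, hop=25):
    """Segment a continuous multi-sensor recording (samples x channels)
    into overlapping fixed-length windows, the usual input format for
    both classical classifiers and recurrent networks."""
    n = (len(stream) - win) // hop + 1
    return np.stack([stream[i * hop : i * hop + win] for i in range(n)])

# e.g. 5 IMU nodes x 6 channels (3-axis accel + 3-axis gyro) = 30 channels
stream = np.random.default_rng(4).standard_normal((500, 30))
wins = sliding_windows(stream)
print(wins.shape)   # (19, 50, 30): 19 windows of 50 samples x 30 channels
```

Varying the number of glove nodes simply changes the channel dimension, which is how an ablation over node count and position would be set up.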
Topics: Humans; Sign Language; Speech; Algorithms; Hearing; Wearable Electronic Devices
PubMed: 37571476
DOI: 10.3390/s23156693 -
Journal of Experimental Psychology.... Jun 2021
Nonsigners viewing sign language are sometimes able to guess the meaning of signs by relying on the overt connection between form and meaning, or iconicity (cf. Ortega, Özyürek, & Peeters, 2020; Strickland et al., 2015). One word class in sign languages that appears to be highly iconic is classifiers: verb-like signs that can refer to location change or handling. Classifier use and meaning are governed by linguistic rules, yet in comparison with lexical verb signs, classifiers are highly variable in their morpho-phonology (variety of potential handshapes and motion direction within the sign). These open-class linguistic items in sign languages prompt a question about the mechanisms of their processing: Are they part of a gestural-semiotic system (processed like the gestures of nonsigners), or are they processed as linguistic verbs? To examine the psychological mechanisms of classifier comprehension, we recorded the electroencephalogram (EEG) activity of signers who watched videos of signed sentences with classifiers. We manipulated the sentence word order of the stimuli (subject-object-verb [SOV] vs. object-subject-verb [OSV]), contrasting the two conditions, which, according to different processing hypotheses, should incur increased processing costs for OSV orders. As previously reported for lexical signs, we observed an N400 effect for OSV compared with SOV, reflecting increased cognitive load for linguistic processing. These findings support the hypothesis that classifiers are a linguistic part of speech in sign language, extending the current understanding of processing mechanisms at the interface of linguistic form and meaning.
Topics: Adult; Electroencephalography; Evoked Potentials; Female; Humans; Male; Middle Aged; Psycholinguistics; Sign Language
PubMed: 33211523
DOI: 10.1037/xlm0000958 -
Sensors (Basel, Switzerland) Oct 2020
Sign languages have developed around the world for hearing-impaired people to communicate with others who understand them. Different grammars and alphabets limit communication between users of different sign languages, and training is required for hearing-intact people to communicate with sign language users. Therefore, this paper proposes a real-time motion recognition system based on electromyography (EMG) signals that recognizes actual American Sign Language (ASL) hand motions, both to help hearing-impaired people communicate with others and to train hearing-intact people to understand sign language. A bilinear model is applied to the EMG data to decrease individual differences among users, and a long short-term memory neural network is used as the classifier. Twenty sign language motions from the ASL library were selected for recognition to increase the practicability of the system. The results indicate that the system can recognize these twenty motions with high accuracy across twenty participants, so it has the potential to be widely applied to help hearing-impaired people in daily communication and hearing-intact people understand sign language.
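The role of the bilinear model, separating user-specific "style" from motion-specific "content", can be illustrated with a rank-1 toy example (a deliberately simplified stand-in for the paper's actual bilinear model, on synthetic data):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy EMG features: each user's data is a (motions x features) matrix.
# A bilinear separation assumes an observation factors into a
# user-specific "style" factor and a motion-specific "content" pattern.
n_users, n_motions, n_feats = 4, 20, 12
content = rng.standard_normal((n_motions, n_feats))  # shared across users
styles = rng.standard_normal((n_users, 1, 1)) * 0.5 + 1.0
data = styles * content          # (users, motions, feats)

# SVD of the user-stacked matrix recovers the shared motion structure:
# with scalar styles the stacked matrix is exactly rank 1.
stacked = data.reshape(n_users, -1)          # one row per user
U, S, Vt = np.linalg.svd(stacked, full_matrices=False)
shared = S[0] * np.outer(U[:, 0], Vt[0])     # rank-1 bilinear fit
err = np.linalg.norm(stacked - shared) / np.linalg.norm(stacked)
print(f"relative reconstruction error: {err:.2e}")
```

Once style is factored out, the content component is what a user-independent classifier would be trained on.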
Topics: Deafness; Electromyography; Hand; Humans; Movement; Pattern Recognition, Automated; Sign Language
PubMed: 33066452
DOI: 10.3390/s20205807 -
Sensors (Basel, Switzerland) Jul 2020
We propose a sign language recognition system based on wearable electronics and two different classification algorithms. The wearable electronics consist of a sensory glove and inertial measurement units that capture finger, wrist, and arm/forearm movements. The classifiers were k-Nearest Neighbors with Dynamic Time Warping (a non-parametric method) and Convolutional Neural Networks (a parametric method). Ten sign-words from Italian Sign Language were considered: cose, grazie, and maestra, together with words with international meaning such as google, internet, jogging, pizza, television, twitter, and ciao. Each sign was repeated one hundred times by seven people (five males and two females, aged 29-54 y, ±10.34 SD). The classifiers achieved accuracies of 96.6% ± 3.4 (SD) for k-Nearest Neighbors with Dynamic Time Warping and 98.0% ± 2.0 (SD) for the Convolutional Neural Networks. Our wearable setup is among the most complete reported, and the classifiers performed at the top of comparable works in the literature.
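The k-Nearest Neighbors plus Dynamic Time Warping pipeline can be sketched compactly. This toy version (the sign names and waveforms are illustrative, not actual glove signals) shows how DTW makes 1-NN robust to signing speed:

```python
import numpy as np

def dtw(a, b):
    """Dynamic Time Warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping paths
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_dtw(train, labels, query, k=1):
    """Classify a query sequence by majority vote of its k DTW-nearest
    training sequences."""
    order = sorted(range(len(train)), key=lambda i: dtw(train[i], query))
    votes = [labels[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

# Toy single-channel glove traces: two "signs" with distinct shapes
sign_a = np.sin(np.linspace(0, 2 * np.pi, 30))
sign_b = np.cos(np.linspace(0, 2 * np.pi, 30))
train = [sign_a, sign_b]
query = np.sin(np.linspace(0, 2 * np.pi, 45))  # same sign, signed slower

print(knn_dtw(train, ["ciao", "grazie"], query))  # → ciao
```

Because DTW aligns the two sequences before comparing them, the slower repetition of the same waveform still matches its own class, which is exactly why the method suits variable-speed signing.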
Topics: Adult; Algorithms; Female; Humans; Male; Middle Aged; Neural Networks, Computer; Sign Language; Wearable Electronic Devices
PubMed: 32664586
DOI: 10.3390/s20143879