Neuropsychologia, May 2023
Prior research has found that iconicity facilitates sign production in picture-naming paradigms and has effects on ERP components. These findings may be explained by two separate hypotheses: (1) a task-specific hypothesis that suggests these effects occur because visual features of the iconic sign form can map onto the visual features of the pictures, and (2) a semantic feature hypothesis that suggests that the retrieval of iconic signs results in greater semantic activation due to the robust representation of sensory-motor semantic features compared to non-iconic signs. To test these two hypotheses, iconic and non-iconic American Sign Language (ASL) signs were elicited from deaf native/early signers using a picture-naming task and an English-to-ASL translation task, while electrophysiological recordings were made. Behavioral facilitation (faster response times) and reduced negativities were observed for iconic signs (both prior to and within the N400 time window), but only in the picture-naming task. No ERP or behavioral differences were found between iconic and non-iconic signs in the translation task. This pattern of results supports the task-specific hypothesis and provides evidence that iconicity only facilitates sign production when the eliciting stimulus and the form of the sign can visually overlap (a picture-sign alignment effect).
Topics: Sign Language; Electrophysiology; United States; Evoked Potentials; Translations; Reaction Time; Photic Stimulation; Semantics; Humans; Deafness; Male; Female; Adult; Analysis of Variance; Models, Neurological
PubMed: 36796720
DOI: 10.1016/j.neuropsychologia.2023.108516

Scandinavian Journal of Psychology, Oct 2009
Review
Working memory (WM) for sign language has an architecture similar to that for speech-based languages at both functional and neural levels. However, there are some processing differences between language modalities that are not yet fully explained, although a number of hypotheses have been mooted. This article reviews some of the literature on differences in sensory, perceptual and cognitive processing systems induced by auditory deprivation and sign language use and discusses how these differences may contribute to differences in WM architecture for signed and speech-based languages. In conclusion, it is suggested that left-hemisphere reorganization of the motion-processing system as a result of native sign-language use may interfere with the development of the order processing system in WM.
Topics: Brain; Brain Mapping; Deafness; Magnetic Resonance Imaging; Memory, Short-Term; Sign Language; Space Perception
PubMed: 19778397
DOI: 10.1111/j.1467-9450.2009.00744.x

Nature Communications, Sep 2021
Sign language recognition, especially sentence recognition, is of great significance for lowering the communication barrier between the hearing/speech impaired and non-signers. Typical glove-based solutions, which detect the motions of our dexterous hands, recognize only discrete single gestures (i.e., numbers, letters, or words) rather than sentences, falling far short of the needs of signers' daily communication. Here, we propose an artificial intelligence enabled sign language recognition and communication system comprising sensing gloves, a deep learning block, and a virtual reality interface. Deep learning models, with and without segmentation assistance, achieve recognition of 50 words and 20 sentences. Significantly, the segmentation approach splits entire sentence signals into word units; the deep learning model then recognizes each word element and reconstructs the sentence from them. Furthermore, new/never-seen sentences created by recombining word elements in new orders can be recognized with an average correct rate of 86.67%. Finally, the recognition results are projected into virtual space and translated into text and audio, enabling remote, bidirectional communication between signers and non-signers.
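The segmentation idea lends itself to a short sketch. The Python below is illustrative only: the energy threshold, the `segment_words` function, and the `word_model`/`vocab` placeholders are invented here, not the authors' code, and the paper's actual recognizer is a deep network rather than the stand-in classifier shown.

```python
import numpy as np

def segment_words(signal: np.ndarray, threshold: float = 0.1,
                  min_len: int = 10) -> list[np.ndarray]:
    """Split a sentence-level glove signal into word-sized segments.

    Frames whose mean channel energy falls below `threshold` are treated
    as inter-word pauses; each contiguous active run becomes one word.
    """
    energy = np.abs(signal).mean(axis=1)   # per-frame energy across channels
    active = energy > threshold
    segments, start = [], None
    for i, is_active in enumerate(active):
        if is_active and start is None:
            start = i
        elif not is_active and start is not None:
            if i - start >= min_len:
                segments.append(signal[start:i])
            start = None
    if start is not None and len(signal) - start >= min_len:
        segments.append(signal[start:])
    return segments

def recognize_sentence(signal, word_model, vocab):
    """Classify each word segment, then reassemble the sentence."""
    return " ".join(vocab[word_model(seg)] for seg in segment_words(signal))
```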
Topics: Communication Aids for Disabled; Deafness; Deep Learning; Gestures; Humans; Sign Language; Virtual Reality; Wearable Electronic Devices
PubMed: 34508076
DOI: 10.1038/s41467-021-25637-w

Sensors (Basel, Switzerland), Feb 2023
With the global spread of the novel coronavirus, avoiding human-to-human contact has become an effective way to cut off the spread of the virus. Contactless gesture recognition has therefore become an effective means of reducing the risk of contact infection during outbreak prevention and control. However, recognizing the everyday behavioral sign language of deaf people presents a challenge to sensing technology. Ubiquitous acoustics offers new ideas for perceiving everyday behavior: a low sampling rate, slow propagation speed, and readily available equipment have led to the widespread use of acoustic-signal-based gesture recognition. This paper therefore proposes UltrasonicGS, a contactless gesture and sign language behavior sensing method based on ultrasonic signals. The method uses Generative Adversarial Network (GAN)-based data augmentation to expand the dataset without human intervention and improve the performance of the behavior recognition model. In addition, to address the inconsistent lengths and difficult alignment of the input and output sequences of continuous gestures and sign language gestures, we add the Connectionist Temporal Classification (CTC) algorithm after the CRNN network. The architecture also achieves better recognition of the sign language behaviors of this population, filling a gap in acoustic-based perception of Chinese Sign Language. We conducted extensive experiments and evaluations of UltrasonicGS in a variety of real scenarios. The results show that UltrasonicGS achieves a combined recognition rate of 98.8% for 15 single gestures and average correct recognition rates of 92.4% and 86.3% for six sets of continuous gestures and sign language gestures, respectively. Our proposed method thus provides a low-cost and highly robust solution for avoiding human-to-human contact.
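The CRNN-plus-CTC arrangement can be sketched in PyTorch. All layer sizes, the class count, and the spectrogram shape below are illustrative assumptions, not the authors' architecture; the sketch only shows how a CTC loss sits after a convolutional-recurrent front end to handle unaligned, variable-length gesture label sequences.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Minimal CRNN: conv features -> BiGRU -> per-frame class logits."""
    def __init__(self, n_mels: int = 64, n_classes: int = 16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),                 # pool frequency, keep time
        )
        self.rnn = nn.GRU(32 * (n_mels // 2), 128,
                          bidirectional=True, batch_first=True)
        self.fc = nn.Linear(256, n_classes + 1)  # +1 for the CTC blank

    def forward(self, x):                         # x: (batch, 1, mels, time)
        f = self.conv(x)                          # (batch, 32, mels/2, time)
        b, c, m, t = f.shape
        f = f.permute(0, 3, 1, 2).reshape(b, t, c * m)
        out, _ = self.rnn(f)
        return self.fc(out).log_softmax(-1)       # (batch, time, classes+1)

model = CRNN()
ctc = nn.CTCLoss(blank=16)                        # blank index = n_classes
x = torch.randn(4, 1, 64, 100)                    # 4 spectrograms, 100 frames
log_probs = model(x).permute(1, 0, 2)             # CTC expects (time, batch, C)
targets = torch.randint(0, 16, (4, 6))            # 6 gesture labels per sample
loss = ctc(log_probs, targets,
           input_lengths=torch.full((4,), 100),
           target_lengths=torch.full((4,), 6))
loss.backward()
```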
Topics: Humans; Ultrasonics; Gestures; Sign Language; COVID-19; Acoustics
PubMed: 36850389
DOI: 10.3390/s23041790

Preventing Chronic Disease, Jun 2022
INTRODUCTION
The COVID-19 pandemic has caused a dramatic shift in work conditions, bringing increased attention to the occupational health of remote workers. We aimed to investigate the physical and mental health of sign language interpreters working remotely from home because of the pandemic.
METHODS
We measured the physical and mental health of certified interpreters who worked remotely 10 or more hours per week. We evaluated associations within the overall sample and compared separate generalized linear models across primary interpreting settings and platforms. We hypothesized that physical health would be correlated with mental health and that differences across settings would exist.
RESULTS
We recruited 120 interpreters to participate. We calculated scores for disability (mean score, 13.93 [standard error of the mean (SEM), 1.43] of 100), work disability (mean score, 10.86 [SEM, 1.59] of 100), and pain (mean score, 3.53 [SEM, 0.29] of 10). Shoulder pain was the most prevalent complaint (27.5%). The proportion of respondents with scores outside normal limits was 22.5% for depression, 16.7% for anxiety, and 24.2% for stress. Although disability was not associated with depression, all other physical health outcomes were correlated with mental health (r ≥ 0.223, P ≤ .02). Educational and community/freelance interpreters trended toward worse physical health, whereas educational and video remote interpreters trended toward more mental health concerns.
CONCLUSION
Maintaining the occupational health of sign language interpreters is critical for addressing the language barriers that have resulted in health inequities for deaf communities. Associations of disability, work disability, and pain with mental health warrant a holistic approach in the clinical treatment and research of these essential workers.
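As a hedged illustration of the kind of physical-mental health correlation reported above (the data and variable names below are synthetic; the study itself used validated instruments and generalized linear models, not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pain = rng.uniform(0, 10, size=120)               # synthetic 0-10 pain ratings
stress = 0.5 * pain + rng.normal(0, 2, size=120)  # synthetic correlated stress score

r, p = stats.pearsonr(pain, stress)
print(f"r = {r:.3f}, p = {p:.3g}")                # cf. the reported r >= 0.223, P <= .02
```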
Topics: COVID-19; Deafness; Humans; Occupational Health; Pain; Pandemics; Sign Language
PubMed: 35679479
DOI: 10.5888/pcd19.210462

The South African Journal of Communication Disorders, Aug 2022
Review
A proposed artificial intelligence-based real-time speech-to-text to sign language translator for South African official languages for the COVID-19 era and beyond: In pursuit of solutions for the hearing impaired.
BACKGROUND
The emergence of the coronavirus disease 2019 (COVID-19) pandemic has heightened the role of communication as one of the critical aspects of implementing interventions. Delays in the relaying of vital information by policymakers can be detrimental, especially for the hearing impaired.
OBJECTIVES
This study aims to conduct a scoping review of the application of artificial intelligence (AI) to real-time speech-to-text to sign language translation and, on that basis, to propose such an AI-based real-time translation solution for South African languages.
METHODS
Electronic bibliographic databases including ScienceDirect, PubMed, Scopus, MEDLINE and ProQuest were searched to identify peer-reviewed publications published in English between 2019 and 2021 that provided evidence on AI-based real-time speech-to-text to sign language translation as a solution for the hearing impaired. This review was done as a precursor to the proposed real-time South African translator.
RESULTS
The review revealed a dearth of evidence on the adoption and/or maximisation of AI and machine learning (ML) as possible solutions for the hearing impaired. There is a clear lag in the clinical utilisation and investigation of these technological advances, particularly on the African continent.
CONCLUSION
Assistive technology that caters specifically for the South African community is essential to ensuring two-way communication between individuals who can hear clearly and individuals with hearing impairments; hence the solution proposed in this article.
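A hedged sketch of the pipeline shape such a translator implies: speech is transcribed to text, the text is mapped to a sign-gloss sequence, and the glosses drive a rendering component. Every stage below is a named placeholder; the article does not specify concrete components.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TranslationPipeline:
    """Speech -> text -> sign glosses -> rendered signing, as pluggable stages.

    All three components are hypothetical placeholders (an ASR model, a
    text-to-gloss translator, and a sign-rendering engine); none are
    named by the source article.
    """
    asr: Callable[[bytes], str]            # audio -> transcript
    text_to_gloss: Callable[[str], list]   # transcript -> sign gloss sequence
    render: Callable[[list], object]       # glosses -> signing avatar output

    def translate(self, audio: bytes):
        text = self.asr(audio)
        glosses = self.text_to_gloss(text)  # e.g., a SASL gloss sequence
        return self.render(glosses)
```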
Topics: Artificial Intelligence; COVID-19; Hearing; Hearing Loss; Humans; Sign Language; South Africa; Speech
PubMed: 36073078
DOI: 10.4102/sajcd.v69i2.915

Journal of Deaf Studies and Deaf Education, Oct 2018
Comparative Study
This study investigated the impact of language modality and age of acquisition on semantic fluency in American Sign Language (ASL) and English. Experiment 1 compared semantic fluency performance (e.g., name as many animals as possible in 1 min) for deaf native and early ASL signers and hearing monolingual English speakers. The results showed similar fluency scores in both modalities when fingerspelled responses were included for ASL. Experiment 2 compared ASL and English fluency scores in hearing native and late ASL-English bilinguals. Semantic fluency scores were higher in English (the dominant language) than ASL (the non-dominant language), regardless of age of ASL acquisition. Fingerspelling was relatively common in all groups of signers and was used primarily for low-frequency items. We conclude that semantic fluency is sensitive to language dominance and that performance can be compared across the spoken and signed modality, but fingerspelled responses should be included in ASL fluency scores.
Topics: Adult; Aptitude; Female; Humans; Language; Male; Multilingualism; Persons With Hearing Impairments; Semantics; Sign Language
PubMed: 29733368
DOI: 10.1093/deafed/eny013

Acta Psychologica, Jul 2020
Motor simulation has emerged as a mechanism for both predictive action perception and language comprehension. By deriving a motor command, individuals can predictively represent the outcome of an unfolding action as a forward model. Evidence of simulation can be seen in improved participant performance for stimuli that conform to the participant's individual characteristics (an egocentric bias). There is little evidence, however, from individuals for whom action and language occur in the same modality: sign language users. The present study asked signers and nonsigners to shadow (perform actions in tandem with various models); the delay between the model and participant ("lag time") served as an indicator of the strength of the predictive model (shorter lag time = more robust model). This design allowed us to examine the role of (a) motor simulation during action prediction, (b) linguistic status in predictive representations (i.e., pseudosigns vs. grooming gestures), and (c) language experience in generating predictions (i.e., signers vs. nonsigners). An egocentric bias was observed only under limited circumstances: when nonsigners began shadowing grooming gestures. The data do not support strong motor simulation proposals and instead highlight the role of (a) production fluency and (b) manual rhythm in signer productions. Signers showed significantly faster lag times for the highly skilled pseudosign model and greater temporal regularity (i.e., lower standard deviations) than nonsigners. We conclude that sign language experience may (a) reduce reliance on motor simulation during action observation, (b) attune users to prosodic cues, and (c) induce temporal regularities during action production.
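Lag time lends itself to a short illustration. The sketch below estimates a shadowing lag from two motion traces via cross-correlation; the trace construction, sampling rate, and function name are assumptions for illustration, not the study's actual measurement procedure.

```python
import numpy as np

def estimate_lag(model_trace: np.ndarray, participant_trace: np.ndarray,
                 fs: float = 30.0) -> float:
    """Estimate participant lag (seconds) behind the model via cross-correlation.

    Positive lag means the participant trails the model; a shorter lag is
    read as evidence of a more robust predictive (forward) model.
    """
    m = model_trace - model_trace.mean()
    p = participant_trace - participant_trace.mean()
    xcorr = np.correlate(p, m, mode="full")
    # Peak index, re-centered so 0 means perfectly in sync.
    shift = np.argmax(xcorr) - (len(m) - 1)
    return shift / fs

# Illustrative traces: the participant copies the model 5 frames (~167 ms) late.
t = np.linspace(0, 4, 120)
model = np.sin(2 * np.pi * t)
participant = np.roll(model, 5)
print(f"estimated lag: {estimate_lag(model, participant):.3f} s")
```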
Topics: Cues; Gestures; Humans; Language; Linguistics; Sign Language
PubMed: 32531500
DOI: 10.1016/j.actpsy.2020.103092

Journal of Deaf Studies and Deaf Education, Oct 2017
Strengthening the connections between sign language and written language may improve reading skills in deaf and hard-of-hearing (DHH) signing children. The main aim of the present study was to investigate whether computerized sign language-based literacy training improves reading skills in DHH signing children who are learning to read. Further, longitudinal associations between sign language skills and developing reading skills were investigated. Participants were recruited from Swedish state special schools for DHH children, where pupils are taught in both sign language and spoken language. Reading skills were assessed on five occasions, and the intervention was implemented in a cross-over design. Results indicated that reading skills improved over time and that development of word reading was predicted by the ability to imitate unfamiliar lexical signs, but there was only weak evidence that it was supported by the intervention. These results demonstrate for the first time a longitudinal link between sign-based abilities and word reading in DHH signing children who are learning to read. We suggest that the active construction of novel lexical forms may be a supramodal mechanism underlying word reading development.
Topics: Child; Computer-Assisted Instruction; Education of Hearing Disabled; Female; Humans; Literacy; Male; Reading; Sign Language
PubMed: 28961874
DOI: 10.1093/deafed/enx023

Sensors (Basel, Switzerland), Jan 2022
A real-time Bangla Sign Language interpreter could bring more than 200,000 hearing- and speech-impaired people into the mainstream workforce in Bangladesh. Bangla Sign Language (BdSL) recognition and detection is a challenging topic in computer vision and deep learning research because recognition accuracy may vary with skin tone, hand orientation, and background. This research used deep learning models for accurate and reliable recognition of BdSL alphabets and numerals using two well-suited and robust datasets. The dataset prepared in this study comprises the largest image database for BdSL alphabets and numerals, constructed to reduce inter-class similarity while covering diverse backgrounds and skin tones. The paper compared classification with and without background images to determine the best-performing model for BdSL alphabet and numeral interpretation; the CNN model trained on images with backgrounds proved more effective than the one trained without. The hand detection step in the segmentation approach must become more accurate to boost overall sign recognition accuracy. ResNet18 performed best, with 99.99% accuracy, precision, F1 score, and sensitivity, and 100% specificity, outperforming prior work on BdSL alphabet and numeral recognition. The dataset is made publicly available to support and encourage further research on Bangla Sign Language interpretation so that hearing- and speech-impaired individuals can benefit.
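The reported best performer can be sketched as a standard ResNet18 fine-tuning setup in PyTorch; the class count, optimizer, and hyperparameters below are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

N_CLASSES = 49  # illustrative count for BdSL alphabets + numerals

# Start from ImageNet weights and replace the classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, N_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 224x224 RGB images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, N_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```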
Topics: Deep Learning; Hand; Humans; Machine Learning; Neural Networks, Computer; Sign Language
PubMed: 35062533
DOI: 10.3390/s22020574