Journal of Deaf Studies and Deaf... Jan 2021
Past work investigating spatial cognition suggests better mental rotation abilities for those who are fluent in a signed language. However, no prior work has assessed whether fluency is needed to achieve this performance benefit or what it may look like on the neurobiological level. We conducted an electroencephalography experiment and assessed accuracy on a classic mental rotation task given to deaf fluent signers, hearing fluent signers, hearing non-fluent signers, and hearing non-signers. Two of the main findings of the study are as follows: (1) Sign language comprehension and mental rotation abilities are positively correlated and (2) Behavioral performance differences between signers and non-signers are not clearly reflected in brain activity typically associated with mental rotation. In addition, we propose that the robust impact sign language appears to have on mental rotation abilities strongly suggests that "sign language use" should be added to future measures of spatial experiences.
Topics: Comprehension; Deafness; Hearing; Hearing Tests; Humans; Sign Language
PubMed: 32978623
DOI: 10.1093/deafed/enaa030
Nature Communications Sep 2021
Sign language recognition, especially sentence recognition, is of great significance for lowering the communication barrier between hearing/speech-impaired people and non-signers. Typical glove solutions, which detect the motions of our dexterous hands, recognize only discrete single gestures (i.e., numbers, letters, or words) rather than sentences, falling far short of signers' daily communication needs. Here, we propose an artificial-intelligence-enabled sign language recognition and communication system comprising sensing gloves, a deep learning block, and a virtual reality interface. Non-segmentation and segmentation-assisted deep learning models achieve the recognition of 50 words and 20 sentences. Significantly, the segmentation approach splits entire sentence signals into word units; the deep learning model then recognizes all word elements and reconstructs and recognizes the sentences from them. Furthermore, new/never-seen sentences created by recombining word elements in new orders can be recognized with an average correct rate of 86.67%. Finally, the sign language recognition results are projected into virtual space and translated into text and audio, allowing remote and bidirectional communication between signers and non-signers.
Topics: Communication Aids for Disabled; Deafness; Deep Learning; Gestures; Humans; Sign Language; Virtual Reality; Wearable Electronic Devices
PubMed: 34508076
DOI: 10.1038/s41467-021-25637-w
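The segmentation-assisted recognition idea in the abstract above can be sketched in a few lines: a continuous sentence signal is split into word-unit segments at low-energy gaps, each segment is classified on its own, and the sentence is reconstructed from the predicted words. The energy threshold, the toy signal, and the stand-in classifier below are illustrative assumptions, not the paper's actual deep learning model.

```python
# Sketch: split a continuous "sentence" signal into word units at
# low-energy gaps, classify each unit, and rebuild the sentence.

def segment_by_energy(signal, threshold=0.1):
    """Split a 1-D signal into contiguous runs whose amplitude exceeds threshold."""
    segments, current = [], []
    for sample in signal:
        if abs(sample) > threshold:
            current.append(sample)
        elif current:          # a low-energy gap closes the current word unit
            segments.append(current)
            current = []
    if current:
        segments.append(current)
    return segments

def classify_word(segment):
    """Dummy word classifier; in the real system this is a deep learning model."""
    mean_amp = sum(abs(s) for s in segment) / len(segment)
    return "HELLO" if mean_amp > 0.5 else "YOU"

def recognize_sentence(signal):
    """Recognize each word unit, then reconstruct the sentence."""
    return " ".join(classify_word(seg) for seg in segment_by_energy(signal))

# Two "words" separated by a silent gap:
sentence_signal = [0.9, 0.8, 0.9] + [0.0, 0.0] + [0.3, 0.2, 0.3]
print(recognize_sentence(sentence_signal))  # HELLO YOU
```

Because recognition happens per word unit, recombining known units into a never-seen order still yields a recognizable sentence, which is the mechanism behind the 86.67% figure reported for new sentences.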
Clinical Linguistics & Phonetics 2019
Intelligibility of spoken languages is a widely discussed construct; however, intelligibility, as it pertains to signed languages, has rarely been considered. This study provides an initial investigation of the construct of intelligibility in American Sign Language (ASL) and evaluates potential measures for self-report and expert ratings of sign intelligibility that examined the frequency of understanding, amount of understanding, and ease of understanding. Participants were 66 college students (42 Deaf, 24 hearing) who had self-rated ASL skills ranging from poor to excellent. Participants rated their own intelligibility in ASL and then provided a signed language sample through a picture description task. Language samples were reviewed by an expert rater and measures of intelligibility were completed. Results indicated that expert ratings of sign intelligibility across all measures were significantly and positively correlated. Understanding of the signer was predicted by the amount of understanding, frequency of understanding, and ASL production skills, while understanding the picture being described was predicted by ease of understanding and ASL grammar skills. Self- and expert ratings of sign intelligibility using the ASL version of the Intelligibility in Context Scale were not significantly different. Self-report of sign intelligibility for viewers of different familiarity using the ICS-ASL was found not to be feasible due to many participants not being in contact with ASL users in the relationships defined by the measure. In conclusion, this preliminary investigation suggests that sign intelligibility is a construct worthy of further investigation.
Topics: Adult; Case-Control Studies; Comprehension; Female; Humans; Male; Persons With Hearing Impairments; Self Report; Sign Language; Speech Intelligibility; Young Adult
PubMed: 31017006
DOI: 10.1080/02699206.2019.1600169
Sensors (Basel, Switzerland) Feb 2023
With the global spread of the novel coronavirus, avoiding human-to-human contact has become an effective way to cut off the spread of the virus, and contactless gesture recognition has therefore become an effective means of reducing the risk of contact infection in outbreak prevention and control. However, recognizing the everyday behavioral sign language of deaf users presents a challenge to sensing technology. Ubiquitous acoustics offer new ideas for perceiving everyday behavior: a low sampling rate, slow propagation speed, and easy access to equipment have led to the widespread use of acoustic-signal-based gesture recognition. This paper therefore proposes UltrasonicGS, a contactless gesture and sign language behavior sensing method based on ultrasonic signals. The method uses Generative Adversarial Network (GAN)-based data augmentation to expand the dataset without human intervention and improve the performance of the behavior recognition model. In addition, to solve the problem of inconsistent lengths and difficult alignment between the input and output sequences of continuous gestures and sign language gestures, we add the Connectionist Temporal Classification (CTC) algorithm after the CRNN network. The architecture thereby achieves better recognition of sign language behaviors, filling a gap in the acoustic-based perception of Chinese Sign Language. We conducted extensive experiments and evaluations of UltrasonicGS in a variety of real scenarios. UltrasonicGS achieved a combined recognition rate of 98.8% for 15 single gestures and average correct recognition rates of 92.4% and 86.3% for six sets of continuous gestures and sign language gestures, respectively. Our proposed method thus provides a low-cost and highly robust solution for avoiding human-to-human contact.
Topics: Humans; Ultrasonics; Gestures; Sign Language; COVID-19; Acoustics
PubMed: 36850389
DOI: 10.3390/s23041790
Prehospital Emergency Care 2022
Objective: We sought to identify current Emergency Medical Services (EMS) practitioner comfort levels and communication strategies when caring for Deaf American Sign Language (ASL) users. Additionally, we created and evaluated the effect of an educational intervention and visual communication tool on EMS practitioner comfort levels and communication. Methods: This was a descriptive study assessing communication barriers at baseline and after the implementation of a novel educational intervention, with cross-sectional surveys conducted at three time points (pre-, immediate post-, and three months post-intervention). Descriptive statistics characterized the study sample, and we quantified responses from the baseline survey and both post-intervention surveys. Results: There were 148 EMS practitioners who responded to the baseline survey. The majority of participants (74%; 109/148) had previously responded to a 9-1-1 call for a Deaf patient, and 24% (35/148) reported previous training regarding the Deaf community. The majority felt that important details were lost during communication (83%; 90/109), reported that the Deaf patient appeared frustrated during an encounter (72%; 78/109), and felt that communication limited patient care (67%; 73/109). When interacting with a Deaf person, the most common communication strategies included written text (90%; 98/109), a friend/family member (90%; 98/109), lip reading (55%; 60/109), and spoken English (50%; 55/109). Immediately after the training, most participants reported that the educational training expanded their knowledge of Deaf culture (93%; 126/135), communication strategies to use (93%; 125/135), and common pitfalls to avoid (96%; 129/135) when caring for Deaf patients. At three months, all participants (100%; 79/79) reported that the educational module was helpful. Some participants (19%; 15/79) also reported using the communication tool with other non-English-speaking patients.
Conclusions: The majority of EMS practitioners reported difficulty communicating with Deaf ASL users and acknowledged a sense of patient frustration. Nearly all participants felt the educational training was beneficial and clinically relevant; three months later, all participants still found it helpful. Additionally, the communication tool may be applicable to other populations that use English as a second language.
Topics: Communication; Communication Barriers; Cross-Sectional Studies; Emergency Medical Services; Humans; Sign Language
PubMed: 34060987
DOI: 10.1080/10903127.2021.1936314
American Journal of Pharmaceutical... Oct 2019
To evaluate undergraduate pharmacy curricula at Federal Institutions of Higher Education in Brazil in order to identify sign language courses and other content related to the provision of care to deaf patients. A cross-sectional, descriptive study was conducted between March and June 2017. Data were collected from the websites of undergraduate pharmacy education programs in Brazil. Sign language courses were classified according to type (mandatory or elective), nature (theoretical or theoretical-practical), course period, and workload. Course contents were extracted and examined by content analysis. Of the 35 schools of pharmacy included in the study, 18 (51.4%) included a sign language course in their curriculum. All 18 (100%) of the sign language courses were elective, one (5.6%) was theoretical-practical, 16 (89.0%) had no predetermined point in the curriculum at which students were to complete the course, and 11 (61.1%) had a workload of 60 hours or more. The main pedagogical content identified related to the teaching and learning of sign language. Learning sign language during undergraduate pharmacy education is important so that these professionals can provide humanistic and comprehensive care to deaf patients. Accordingly, there is considerable room for improvement in the teaching of sign language to undergraduate pharmacy students in Brazil.
Topics: Brazil; Cross-Sectional Studies; Curriculum; Education, Pharmacy; Humans; Learning; Schools, Pharmacy; Sign Language; Students, Pharmacy
PubMed: 31831902
DOI: 10.5688/ajpe7239
Pediatrics Nov 2017
Topics: Cochlear Implantation; Deafness; Educational Measurement; Humans; Sign Language
PubMed: 29089399
DOI: 10.1542/peds.2017-2655B
American Annals of the Deaf 2021
Using grounded theory, the researcher posed this question in this qualitative study: What childhood literacy-learning and current literacy-teaching experiences have influenced Chinese Deaf teachers' views on literacy learning? Responses were obtained from Deaf teachers by means of videotaped interviews about their literacy-learning and literacy-teaching experiences. When the interviews, which were conducted in Chinese Sign Language (CSL) glossed to written Chinese and English, were analyzed, six themes emerged. Extracted core categories provide the unique context for a "boomerang effect" related to language and literacy through a bilingual path to literacy. Recommendations for future research using bilingual theory and practice are discussed.
Topics: Child; China; Humans; Language; Learning; Literacy; Sign Language
PubMed: 35185038
DOI: 10.1353/aad.2021.0042
Sensors (Basel, Switzerland) Oct 2023 (Review)
The analysis and recognition of sign languages are currently active fields of research focused on sign recognition. Approaches differ in their analysis methods and in the devices used for sign acquisition. Traditional methods rely on video analysis or on spatial positioning data calculated with motion capture tools. In contrast to these conventional recognition and classification approaches, electromyogram (EMG) signals, which measure muscle electrical activity, offer a promising technology for detecting gestures, and EMG-based approaches have recently gained attention due to their advantages. This prompted us to conduct a comprehensive study of the methods, approaches, and projects utilizing EMG sensors for sign language handshape recognition. In this paper, we provide an overview of the sign language recognition field through a literature review, offering an in-depth review of the most significant techniques, categorized by their respective methodologies. The survey discusses the progress and challenges of sign language recognition systems based on surface electromyography (sEMG) signals. These systems have shown promise but face issues such as sEMG data variability and sensor placement; using multiple sensors enhances reliability and accuracy. Machine learning, including deep learning, is used to address these challenges. Common classifiers in sEMG-based sign language recognition include SVM, ANN, CNN, KNN, HMM, and LSTM. While SVM and ANN are the most widely used, random forest and KNN have shown better performance in some cases, and a multilayer perceptron neural network achieved perfect accuracy in one study. CNN, often paired with LSTM, ranks as the third most popular classifier and can achieve exceptional accuracy, reaching up to 99.6% when utilizing both EMG and IMU data.
LSTM is highly regarded for handling sequential dependencies in EMG signals, making it a critical component of sign language recognition systems. In summary, the survey highlights the prevalence of SVM and ANN classifiers but also the effectiveness of alternatives such as random forests and KNN. LSTM emerges as the most suitable algorithm for capturing sequential dependencies and improving gesture recognition in EMG-based sign language recognition systems.
Topics: Humans; Sign Language; Reproducibility of Results; Pattern Recognition, Automated; Neural Networks, Computer; Algorithms; Electromyography; Gestures
PubMed: 37837173
DOI: 10.3390/s23198343
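As a toy illustration of the classical pipeline the review above surveys, the sketch below extracts a simple per-channel RMS feature from windows of multi-channel sEMG and classifies them with k-nearest neighbours, one of the classifiers the review lists. The signals, labels, and handshape names are synthetic stand-ins, not real EMG data or any surveyed system.

```python
import math

def rms_features(window):
    """Root-mean-square amplitude of each channel in a multi-channel window."""
    return [math.sqrt(sum(s * s for s in ch) / len(ch)) for ch in window]

def knn_predict(train, query, k=3):
    """k-NN by squared Euclidean distance over feature vectors; majority vote."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(feat, query)), label)
        for feat, label in train
    )
    top = [label for _, label in dists[:k]]
    return max(set(top), key=top.count)

# Two synthetic channels; handshape "A" activates channel 0, "B" channel 1:
train_windows = [
    ([[0.9, 1.0, 0.8], [0.1, 0.0, 0.1]], "A"),
    ([[0.8, 0.9, 1.0], [0.0, 0.1, 0.0]], "A"),
    ([[0.1, 0.0, 0.1], [0.9, 1.0, 0.8]], "B"),
    ([[0.0, 0.1, 0.0], [0.8, 0.9, 1.0]], "B"),
]
train = [(rms_features(w), lbl) for w, lbl in train_windows]

query = rms_features([[0.85, 0.95, 0.9], [0.05, 0.1, 0.0]])
print(knn_predict(train, query))  # A
```

Real systems replace the RMS feature with richer time- and frequency-domain descriptors (or learned CNN/LSTM features), but the window-feature-classifier structure is the same, which is why sensor placement and signal variability affect every classifier in the survey.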
Journal of Deaf Studies and Deaf... Dec 2022
Deaf professionals, whom we term Deaf Language Specialists (DLS), are frequently employed to work with children and young people who have difficulties learning sign language, but there are few accounts of this work in the literature. Through questionnaires and focus groups, 23 DLSs described their work in this area. Deductive thematic analysis was used to identify how this compared to the work of professionals (typically Speech and Language Therapists/Pathologists, SLPs) working with hearing children with difficulties learning spoken language. Inductive thematic analysis resulted in the identification of two additional themes: while many practices by DLSs are similar to those of SLPs working with hearing children, a lack of training, information, and resources hampers their work; additionally, the cultural context of language and deafness makes this a complex and demanding area of work. These findings add to the limited literature on providing language interventions in the signed modality with clinical implications for meeting the needs of deaf and hard-of-hearing children who do not achieve expectations of learning a first language in their early years. The use of these initial results in two further study phases to co-deliver interventions and co-produce training for DLSs is briefly described.
Topics: Adolescent; Child; Humans; Deafness; Language; Language Therapy; Learning; Sign Language
PubMed: 36504375
DOI: 10.1093/deafed/enac029