Handbook of Clinical Neurology 2022
Review
Signed languages are naturally occurring, fully formed linguistic systems that rely on the movement of the hands, arms, torso, and face within a sign space for production, and are perceived predominantly through visual perception. Despite stark differences in modality and linguistic structure, their functional neural organization is strikingly similar to that of spoken language. Generally speaking, left frontal areas support sign production, and regions in the auditory cortex underlie sign comprehension, despite signers not relying on audition to process language. Given this, should a deaf or hearing signer suffer damage to the left cerebral hemisphere, language is vulnerable to impairment. Multiple cases of sign language aphasia have been documented following left hemisphere injury, and the general pattern of linguistic deficits mirrors that observed in spoken language. The right hemisphere likely plays a role in non-linguistic but critical visuospatial functions of sign language; therefore, individuals who are spared damage to the left hemisphere but suffer injury to the right are at risk for a different set of communication deficits. In this chapter, we review the neurobiology of sign language and the patterns of language deficits that follow brain injury in the deaf signing population.
Topics: Aphasia; Deafness; Humans; Language; Sign Language; Vision, Ocular; Visual Perception
PubMed: 35078607
DOI: 10.1016/B978-0-12-823384-9.00019-0
Sensors (Basel, Switzerland) Aug 2021
Review
AI technologies can play an important role in breaking down the communication barriers between deaf or hearing-impaired people and other communities, contributing significantly to their social inclusion. Recent advances in both sensing technologies and AI algorithms have paved the way for the development of various applications aiming to fulfill the needs of deaf and hearing-impaired communities. To this end, this survey aims to provide a comprehensive review of state-of-the-art methods in sign language capturing, recognition, translation, and representation, pinpointing their advantages and limitations. In addition, the survey presents a number of applications and discusses the main challenges in the field of sign language technologies. Future research directions are also proposed to assist prospective researchers in further advancing the field.
Topics: Algorithms; Artificial Intelligence; Humans; Prospective Studies; Sign Language
PubMed: 34502733
DOI: 10.3390/s21175843
IEEE Reviews in Biomedical Engineering 2021
Review
Sign language is used as a primary form of communication by many people who are Deaf, deafened, hard of hearing, and non-verbal. Communication barriers exist for members of these populations during daily interactions with those who are unable to understand or use sign language. Advancements in technology and machine learning techniques have led to the development of innovative approaches for gesture recognition. This literature review focuses on analyzing studies that use wearable sensor-based systems to classify sign language gestures. A review of 72 studies from 1991 to 2019 was performed to identify trends, best practices, and common challenges. Attributes including sign language variation, sensor configuration, classification method, study design, and performance metrics were analyzed and compared. Results from this literature review could aid in the development of user-centred and robust wearable sensor-based systems for sign language recognition.
Topics: Biomedical Engineering; Electromyography; Gestures; Humans; Machine Learning; Sign Language; Wearable Electronic Devices
PubMed: 32845843
DOI: 10.1109/RBME.2020.3019769
CoDAS 2021
Topics: Deafness; Humans; Language Therapy; Multilingualism; Persons With Hearing Impairments; Sign Language
PubMed: 33909844
DOI: 10.1590/2317-1782/20202020248
Special Care in Dentistry : Official... Nov 2022
Review
AIMS
This manuscript aimed to produce an illustrated Brazilian sign language (LIBRAS) booklet to facilitate communication between dentists (and academics) and deaf patients during dental treatment and other healthcare promotion activities.
METHODS AND RESULTS
A literature review was conducted to select signs, symptoms, and diseases related to dentistry expressed in LIBRAS; in addition, photographs were taken to illustrate and produce the booklet. The booklet (in PDF format) was made available on an open-access website and printed copies were freely distributed at the dental clinics of the Federal University of Pará.
CONCLUSION
Learning specific LIBRAS signs is extremely important to guarantee social inclusion and improve the dental treatment of deaf patients.
Topics: Humans; Sign Language; Brazil; Pamphlets; Communication; Dentistry
PubMed: 35397186
DOI: 10.1111/scd.12722
Sensors (Basel, Switzerland) Nov 2023
This paper proposes, analyzes, and evaluates a deep learning architecture based on transformers for generating sign language motion from sign phonemes (represented using HamNoSys: a notation system developed at the University of Hamburg). The sign phonemes provide information about sign characteristics like hand configuration, localization, or movements. The use of sign phonemes is crucial for generating sign motion with a high level of detail (including finger extensions and flexions). The transformer-based approach also includes a stop detection module for predicting the end of the generation process. Both aspects, motion generation and stop detection, are evaluated in detail. For motion generation, the dynamic time warping (DTW) distance is used to compute the similarity between two landmark sequences (ground truth and generated). The stop detection module is evaluated considering detection accuracy and ROC (receiver operating characteristic) curves. The paper proposes and evaluates several strategies to obtain the system configuration with the best performance, including different padding strategies, interpolation approaches, and data augmentation techniques. The best configuration of a fully automatic system obtains an average DTW distance per frame of 0.1057 and an area under the ROC curve (AUC) higher than 0.94.
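The DTW comparison the abstract describes can be sketched as follows. This is a minimal illustration only: the landmark dimensionality and the Euclidean frame-to-frame cost are assumptions, since the paper's exact setup is not reproduced here.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two landmark sequences.

    Each sequence is an array of shape (frames, coords); the cost of
    aligning two frames is the Euclidean distance between them.
    """
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Toy ground-truth vs. generated sequences (2-D landmarks per frame);
# the paper reports this kind of distance averaged per frame.
ground_truth = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
generated    = np.array([[0.0, 0.1], [1.0, 0.1], [2.0, 0.1]])
per_frame = dtw_distance(ground_truth, generated) / len(generated)
```

Normalizing by sequence length, as in the reported "average DTW distance per frame", keeps the score comparable across signs of different durations.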
Topics: Humans; Algorithms; Sign Language; Motion; Movement; Hand
PubMed: 38067738
DOI: 10.3390/s23239365
Journal of Deaf Studies and Deaf... Oct 2019
Review
The United Nations Convention on the Rights of Persons with Disabilities requests "Nothing about us without us." User-centered methodological research is the way to comply with this convention. Interaction with the deaf community must be in their language; hence, sign language questionnaires are one of the tools for gathering data. While in the past interacting with an online video questionnaire was out of the question, today it is a reality. This article focuses on the design of an interactive video questionnaire for sign language users. Starting from a historical review of the existing literature on research methods and previous sign language questionnaires, the article examines the design features involved in making questionnaires accessible with sign language videos: format and layout. The article concludes with the solution developed to mainstream sign language questionnaires and thereby contribute to a diverse and inclusive society for all citizens.
Topics: Biomedical Research; Humans; Persons With Hearing Impairments; Sign Language; Surveys and Questionnaires; Video Recording
PubMed: 31271435
DOI: 10.1093/deafed/enz021
Computational Intelligence and... 2022
Sign language is essential for deaf and mute people to communicate with hearing people and with each other. Hearing people, however, tend to overlook the importance of sign language, even though it is the sole means of communication for the deaf and mute communities. These impairments can lead to significant hardships, including unemployment and severe depression. One service available for communication is the sign language interpreter, but hiring interpreters is very costly, so a cheap solution is required to resolve this issue. Therefore, a system has been developed that uses a visual hand dataset of Arabic Sign Language and interprets this visual data as textual information. The dataset consists of 54,049 images of Arabic sign language alphabets, with 1,500 images per class; each class represents a different meaning through its hand gesture or sign. Various preprocessing and data augmentation techniques were applied to the images. Experiments were performed using various pretrained models on the given dataset. Most of them performed reasonably well, and in the final stage the EfficientNetB4 model was considered the best fit for this case. Given the complexity of the dataset, models other than EfficientNetB4 did not perform well due to their lightweight architectures; EfficientNetB4 is a heavier architecture with comparatively greater capacity. The best model achieved a training accuracy of 98 percent and a testing accuracy of 95 percent.
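Image augmentations of the kind the abstract alludes to can be sketched in plain NumPy. The specific transforms below (random horizontal flip and brightness jitter) are illustrative assumptions; the paper does not specify its pipeline, and flips may be unsuitable for handedness-sensitive signs.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Apply simple illustrative augmentations to a grayscale image:
    a random horizontal flip and a random brightness scaling."""
    out = image.astype(np.float32)
    if rng.random() < 0.5:
        out = out[:, ::-1]                 # horizontal flip
    out = out * rng.uniform(0.8, 1.2)      # brightness jitter
    return np.clip(out, 0.0, 255.0)        # keep valid pixel range

# A dummy 64x64 image standing in for one dataset sample
img = np.full((64, 64), 128.0)
aug = augment(img)
```

Augmentation like this effectively enlarges the 1,500-image-per-class training set and helps pretrained models such as EfficientNetB4 generalize to unseen signers.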
Topics: Deafness; Gestures; Humans; Language; Machine Learning; Sign Language
PubMed: 35498192
DOI: 10.1155/2022/4567989
American Journal of Pharmaceutical... Oct 2019
To evaluate undergraduate pharmacy curricula at Federal Institutions of Higher Education in Brazil in order to identify sign language courses and other content related to the provision of care to deaf patients. A cross-sectional, descriptive study was conducted between March and June 2017. Data were collected from the websites of undergraduate pharmacy education programs in Brazil. Sign language courses were classified according to type (mandatory or elective), nature (theoretical or theoretical-practical), course period, and workload. The course contents were extracted and analyzed by content analysis. Of the 35 schools of pharmacy included in the study, 18 (51.4%) included a sign language course in their curriculum. All 18 (100%) of the sign language courses were elective, one (5.6%) was theoretical-practical, 16 (89.0%) did not have a predetermined point in the curriculum for students to complete the course, and 11 (61.1%) had a workload equal to or greater than 60 hours. The main pedagogical content identified related to the teaching and learning of sign language. Learning sign language during undergraduate pharmacy education is important so that these professionals can provide humanistic and integral care to deaf patients. Therefore, there is considerable room for improvement in teaching sign language to undergraduate pharmacy students in Brazil.
Topics: Brazil; Cross-Sectional Studies; Curriculum; Education, Pharmacy; Humans; Learning; Schools, Pharmacy; Sign Language; Students, Pharmacy
PubMed: 31831902
DOI: 10.5688/ajpe7239
Sensors (Basel, Switzerland) Nov 2022
It is an objective reality that deaf-mute people have difficulty seeking medical treatment. Due to the lack of sign language interpreters, most hospitals in China currently do not have the ability to interpret sign language, and normal medical treatment is a luxury for deaf people. In this paper, we propose a sign language recognition system, Heart-Speaker, applied to deaf-mute consultation scenarios. The system provides a low-cost solution to the difficult problem of treating deaf-mute patients. The doctor only needs to point the Heart-Speaker at the deaf patient, and the system automatically captures the sign language movements and translates their semantics. When a doctor issues a diagnosis or asks the patient a question, the system displays the corresponding sign language video and subtitles, meeting the need for two-way communication between doctors and patients. The system uses the MobileNet-YOLOv3 model to recognize sign language; it meets the needs of running on embedded terminals and provides favorable recognition accuracy. We performed experiments to verify the system's accuracy. The experimental results show that Heart-Speaker's accuracy in recognizing sign language reaches 90.77%.
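Object detectors in the YOLO family, such as the MobileNet-YOLOv3 model the abstract names, are typically scored by matching predicted boxes (e.g. detected hands) against ground truth using intersection-over-union. A minimal sketch follows; the corner-coordinate box format is an assumed convention, not one stated in the abstract.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes.

    Boxes are given as (x1, y1, x2, y2) corner coordinates, an assumed
    convention used here only for illustration.
    """
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)  # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two partially overlapping detections: intersection 1x1, union 7
overlap = iou((0, 0, 2, 2), (1, 1, 3, 3))
```

A prediction is usually counted as correct when its IoU with the ground-truth box exceeds a threshold (0.5 is common), which is how headline accuracy figures like 90.77% are typically computed for detection-based recognizers.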
Topics: Humans; Sign Language; Communication; Referral and Consultation; Recognition, Psychology; China
PubMed: 36501809
DOI: 10.3390/s22239107