Scientific Reports, Jan 2024
The impact of adverse listening conditions on spoken language perception is well established, but the role of suboptimal viewing conditions in signed language processing is less clear. Viewing angle, i.e. the physical orientation of a perceiver relative to a signer, varies in many everyday deaf community settings for L1 signers and may impact comprehension. Further, processing signs from various viewing angles may be more difficult for late L2 learners of a signed language, who encounter less variation in sign input while learning. Using a semantic decision task in a distance priming paradigm, we show that British Sign Language signers are slower and less accurate at comprehending signs shown from side viewing angles, with L2 learners in particular making disproportionately more errors when viewing signs from side angles. We also investigated how individual differences in mental rotation ability modulate the processing of signs from different angles. Speed and accuracy on the BSL task correlated with mental rotation ability, suggesting that signers may mentally represent signs from a frontal view and use mental rotation to process signs from other viewing angles. Our results extend the literature on viewpoint specificity in visual recognition to linguistic stimuli. The data suggest that L2 signed language learners should maximise their exposure to diverse signed language input, in terms of both viewing angle and other challenging viewing conditions, to maximise comprehension.
Topics: Humans; Sign Language; Learning; Individuality; Linguistics; Physical Examination
PubMed: 38200108
DOI: 10.1038/s41598-024-51330-1
Health Expectations: An International..., Dec 2023
BACKGROUND
Deaf and hard-of-hearing (DHH) patients are a priority population for emergency medicine health services research. DHH patients are at higher risk than non-DHH patients of using the emergency department (ED), have longer lengths of stay in the ED and report poor patient-provider communication. This qualitative study aimed to describe ED care-seeking and patient-centred care perspectives among DHH patients.
METHODS
This qualitative study is the second phase of a mixed-methods study. The goal of this study was to further explain quantitative findings related to ED outcomes among DHH and non-DHH patients. We conducted semistructured interviews with 4 DHH American Sign Language (ASL)-users and 6 DHH English speakers from North Central Florida. Interviews were transcribed and analysed using a descriptive qualitative approach.
RESULTS
Two themes were developed: (1) DHH patients engage in a complex decision-making process to determine ED utilization and (2) patient-centred ED care differs between DHH ASL-users and DHH English speakers. The first theme describes the social-behavioural processes through which DHH patients assess their need to use the ED. The second theme focuses on the social environment within the ED: patients feeling stereotyped, involvement in the care process, pain communication, receipt of accommodations and discharge processes.
CONCLUSIONS
This study underscores the importance of better understanding, and intervening in, DHH patient ED care-seeking and care delivery to improve patient outcomes. Like other studies, this study also finds that DHH patients are not a monolithic group and language status is an equity-relevant indicator. We also discuss recommendations for emergency medicine.
PATIENT OR PUBLIC CONTRIBUTION
This study convened a community advisory group made up of four DHH people to assist in developing research questions, data collection tools and validation of the analysis and interpretation of data. Community advisory group members who were interested in co-authorship are listed in the byline, with others in the acknowledgements. In addition, several academic-based co-authors are also deaf or hard of hearing.
Topics: Humans; Deafness; Persons With Hearing Impairments; Language; Sign Language; Emergency Service, Hospital
PubMed: 37555478
DOI: 10.1111/hex.13842
Proceedings of the Conference on..., Dec 2023
Large language models (LLMs) can generate natural language texts for various domains and tasks, but their potential for clinical text mining, a domain with scarce, sensitive, and imbalanced medical data, is under-explored. We investigate whether LLMs can augment clinical data for detecting Alzheimer's Disease (AD)-related signs and symptoms from electronic health records (EHRs), a challenging task that requires high expertise. We create a novel pragmatic taxonomy for AD sign and symptom progression based on expert knowledge and generate three datasets: (1) a gold dataset annotated by human experts on longitudinal EHRs of AD patients; (2) a silver dataset created by the data-to-label method, which labels sentences from a public EHR collection with AD-related signs and symptoms; and (3) a bronze dataset created by the label-to-data method, which generates sentences with AD-related signs and symptoms from the label definition alone. We train a system to detect AD-related signs and symptoms from EHRs. We find that the silver and bronze datasets improve system performance, outperforming a system trained on the gold dataset alone. This shows that LLMs can generate synthetic clinical data for a complex task by incorporating expert knowledge, and that our label-to-data method can produce datasets that are free of sensitive information while maintaining acceptable quality.
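As a rough illustration of the label-to-data idea described above, the sketch below builds a generation prompt from a label definition and pairs synthetic sentences with their labels. The taxonomy entries, prompt wording, and function names are all hypothetical; the paper's actual prompts and label set are not reproduced here.

```python
# Sketch of "label-to-data": synthesize training sentences from a label
# definition alone, so no real patient text is required. All labels,
# definitions, and prompt wording below are invented for illustration.

# Toy taxonomy: label -> expert-style definition
TAXONOMY = {
    "memory_impairment": "difficulty recalling recent events or conversations",
    "disorientation": "confusion about time, place, or familiar people",
}

def label_to_data_prompt(label: str, definition: str, n: int = 3) -> str:
    """Build an LLM prompt asking for n synthetic EHR-style sentences."""
    return (
        f"Write {n} short clinical-note sentences describing a patient "
        f"showing '{label}', defined as: {definition}. "
        "Do not include any real names, dates, or identifiers."
    )

def build_bronze_dataset(completions):
    """Pair each generated sentence with its label as (sentence, label)."""
    return [(s, lbl) for lbl, sents in completions.items() for s in sents]

prompt = label_to_data_prompt("memory_impairment", TAXONOMY["memory_impairment"])
# Pretend the LLM returned one sentence for the label:
dataset = build_bronze_dataset({"memory_impairment": ["Pt forgets recent conversations."]})
```

Because every sentence is generated from a definition rather than copied from a record, the resulting dataset carries no protected health information by construction.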
PubMed: 38213944
DOI: 10.18653/v1/2023.findings-emnlp.474
Sensors (Basel, Switzerland), Jun 2024
Sign language is an essential means of communication for individuals with hearing disabilities. However, there is a significant shortage of sign language interpreters in some languages, especially in Saudi Arabia. This shortage results in a large proportion of the hearing-impaired population being deprived of services, especially in public places. This paper aims to address this gap in accessibility by leveraging technology to develop systems capable of recognizing Arabic Sign Language (ArSL) using deep learning techniques. In this paper, we propose a hybrid model to capture the spatio-temporal aspects of sign language (i.e., letters and words). The hybrid model consists of a Convolutional Neural Network (CNN) classifier to extract spatial features from sign language data and a Long Short-Term Memory (LSTM) classifier to extract spatial and temporal characteristics from sequential data (i.e., hand movements). To demonstrate the feasibility of our proposed hybrid model, we created an ArSL dataset of 20 different words: 4000 images covering 10 static gesture words and 500 videos covering 10 dynamic gesture words. Our proposed hybrid model demonstrates promising performance, with the CNN and LSTM classifiers achieving accuracy rates of 94.40% and 82.70%, respectively. These results indicate that our approach can significantly enhance communication accessibility for the hearing-impaired community in Saudi Arabia. Thus, this paper represents a major step toward promoting inclusivity and improving the quality of life for the hearing impaired.
Topics: Sign Language; Humans; Deep Learning; Neural Networks, Computer; Saudi Arabia; Language; Gestures
PubMed: 38894473
DOI: 10.3390/s24113683
PeerJ Computer Science, 2024
This article presents an innovative approach to the task of isolated sign language recognition (SLR), centred on the integration of pose data with motion history images (MHIs) derived from those data. Our research combines spatial information obtained from body, hand, and face poses with the comprehensive details provided by three-channel MHI data concerning the temporal dynamics of the sign. In particular, our finger pose-based MHI (FP-MHI) feature significantly enhances recognition success, capturing the nuances of finger movements and gestures that existing SLR approaches miss. This feature improves the accuracy and reliability of SLR systems by more accurately capturing the fine details and richness of sign language. Additionally, we enhance overall model accuracy by predicting missing pose data through linear interpolation. Our approach, based on the randomized leaky rectified linear unit (RReLU) enhanced ResNet-18 model, successfully handles the interaction between manual and non-manual features through the fusion of extracted features and classification with a support vector machine (SVM). In our experiments, this integration demonstrates competitive and superior results compared to current methodologies in the field of SLR across various datasets, including BosphorusSign22k-general, BosphorusSign22k, LSA64, and GSL.
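The motion history image at the core of this approach has a standard formulation: pixels with motion in the current frame are set to a maximum timestamp value, and older motion decays frame by frame, so brighter pixels mark more recent movement. A minimal NumPy sketch follows; the paper's exact thresholds, decay schedule, and three-channel pose-based construction will differ.

```python
import numpy as np

def motion_history_image(frames, tau=10, thresh=0.1):
    """Compute a motion history image (MHI) over grayscale frames in [0, 1].

    Pixels where the frame-to-frame difference exceeds `thresh` are set to
    `tau`; everywhere else the previous MHI value decays by 1 (floored at 0).
    """
    frames = [np.asarray(f, dtype=float) for f in frames]
    mhi = np.zeros_like(frames[0])
    for prev, curr in zip(frames, frames[1:]):
        motion = np.abs(curr - prev) > thresh
        mhi = np.where(motion, float(tau), np.maximum(mhi - 1.0, 0.0))
    return mhi
```

Note that the paper renders its MHIs from body, hand, and finger pose trajectories rather than raw pixels, but the decay principle is the same.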
PubMed: 38855212
DOI: 10.7717/peerj-cs.2054
Frontiers in Clinical Diabetes and..., 2023
INTRODUCTION
Language barriers can pose a significant hurdle to successfully educating children and young people with type 1 diabetes (CYPD) and their families, potentially influencing their glycaemic control.
METHODS
Retrospective case-control study assessing HbA1c values at 0, 3, 6, 9, 12 and 18 months post-diagnosis in 41 CYPD requiring interpreter support (INT) and 100 age-, sex- and mode-of-therapy-matched CYPD not requiring interpreter support (CTR) in our multi-diverse tertiary diabetes centre. Data were captured between 2009 and 2016. English indices of deprivation for each cohort are reported based on UK 2015 census data.
RESULTS
The main languages spoken were Somali (27%), Urdu (19.5%), Romanian (17%) and Arabic (12%), but also Polish, Hindi, Tigrinya, Portuguese, Bengali and sign language. Overall deprivation was worse in the INT group according to the Index of Multiple Deprivation (IMD [median]: INT 1.642; CTR 3.741; p=0.001). The median HbA1c was higher at diagnosis in the CTR group (9.95% [85.2 mmol/mol] versus 9.0% [74.9 mmol/mol], p=0.046) but was higher in the INT group subsequently: the median HbA1c at 18 months post diagnosis was 8.3% (67.2 mmol/mol; INT) versus 7.9% (62.8 mmol/mol; CTR) (p=0.014). There was no hospitalisation secondary to diabetes-related complications in either cohort.
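The paired HbA1c values quoted in the results follow the standard NGSP-to-IFCC master equation, mmol/mol = (% − 2.15) × 10.929, which can be checked directly:

```python
def hba1c_percent_to_mmol_per_mol(pct: float) -> float:
    """Convert NGSP HbA1c (%) to IFCC units (mmol/mol).

    Standard master equation: mmol/mol = (% - 2.15) * 10.929.
    """
    return (pct - 2.15) * 10.929

# The paired values reported above are consistent with this conversion:
for pct, expected in [(9.95, 85.2), (9.0, 74.9), (8.3, 67.2), (7.9, 62.8)]:
    assert abs(hba1c_percent_to_mmol_per_mol(pct) - expected) < 0.1
```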
SUMMARY AND CONCLUSIONS
Glycaemic control is worse in CYPD with language barriers. This subset of patients also comes from the most deprived areas, which adds to the disadvantage. Health care providers should offer tailored support for CYPD and their families with language barriers, including provision of diabetes-specific training for interpreters, and should explore additional factors contributing to poor glycaemic control. The findings of this study suggest that poor health outcomes in CYPD with language barriers are multifactorial and warrant a multi-dimensional management approach.
PubMed: 38090274
DOI: 10.3389/fcdhc.2023.1228820
Health Promotion Practice, Jan 2024
School-based programs are an important tobacco prevention tool. Yet, existing programs are not suitable for Deaf and Hard-of-Hearing (DHH) youth. Moreover, little research has examined the use of the full range of tobacco products and related knowledge in this group. To address this gap and inform the development of a school-based tobacco prevention program for this population, we conducted a pilot study among DHH middle school (MS) and high school (HS) students attending Schools for the Deaf and mainstream schools in California (n = 114). Surveys administered in American Sign Language (ASL), before and after receipt of a draft curriculum delivered by health or physical education teachers, assessed product use and tobacco knowledge. Thirty-five percent of students reported exposure to tobacco products at home, including cigarettes (19%) and e-cigarettes (15%). Tobacco knowledge at baseline was limited; 35% of students knew e-cigarettes contain nicotine, and 56% were aware vaping is prohibited on school grounds. Current product use was reported by 16% of students, most commonly e-cigarettes (12%) and cigarettes (10%); overall, 7% of students reported dual use. Use was greater among HS than MS students. Changes in student knowledge following program delivery included increased understanding of harmful chemicals in tobacco products, including nicotine in e-cigarettes. Post-program debriefings with teachers yielded specific recommendations for modifications to better meet the educational needs of DHH students. Findings based on student and teacher feedback will guide curriculum development and inform next steps in our program of research aimed at preventing tobacco use in this vulnerable and heretofore understudied population group.
Topics: Humans; Adolescent; Smoking; Electronic Nicotine Delivery Systems; Nicotine; Persons With Hearing Impairments; Pilot Projects; Tobacco Products
PubMed: 36760068
DOI: 10.1177/15248399221151180
Heliyon, Oct 2023
Massive Open Online Courses (MOOCs) have become important resources in educational environments worldwide because they have a positive impact on teaching and learning processes. Nevertheless, the way they are designed is crucial to properly address the requirements of special needs people in educational processes. Thus, this paper proposes a methodology for designing and developing MOOCs for Deaf or hard-of-hearing individuals. This exploratory and descriptive study adopted an inclusive education approach based on a literature review and expert consultation. The results highlight the importance of four aspects in MOOC development for these special needs individuals: (i) designing and incorporating elements that meet the needs of Deaf or hard-of-hearing people so that they can use MOOCs effectively; (ii) combining different methodologies and resources; (iii) properly planning and sequencing the design stages; and (iv) using appropriate tools, contents, and times for the process. The findings show that MOOCs should be adequately designed to address the demands of the Deaf community by considering their characteristics and requirements and incorporating current tools, practices, and resources.
PubMed: 37842617
DOI: 10.1016/j.heliyon.2023.e20456
Data in Brief, Dec 2023
Sign language is a form of communication medium for people with speech and hearing disabilities. It has various forms with patterns that are difficult for the general public to comprehend. Bengali sign language (BdSL) is one of the more difficult sign languages due to its immense number of letters, words, and expression techniques. Machine translation can ease communication between disabled people and the general population. Within the machine learning (ML) domain, computer vision can provide the solution, and every ML solution requires an optimized model and a proper dataset. Therefore, in this research work, we have created a BdSL dataset named 'KU-BdSL', which consists of 30 classes describing 38 consonants ('banjonborno') of the Bengali alphabet. The dataset includes 1500 images of hand signs in total, each representing Bengali consonant(s). Thirty-nine participants (30 males and 9 females) of different ages (21-38 years) took part in the creation of this dataset. We used smartphones to capture the images because of the availability of their high-definition cameras. We believe that this dataset can be beneficial to the deaf community. Identification of Bengali consonants of BdSL from images or videos is feasible using the dataset, and it can also be employed in human-machine interfaces for disabled people. In the future, we will work on the vowel and word levels of BdSL.
PubMed: 38075609
DOI: 10.1016/j.dib.2023.109797
Sensors (Basel, Switzerland), Dec 2023
Human-to-human communication via the computer is mainly carried out using a keyboard or microphone. In the field of virtual reality (VR), where the most immersive experience possible is desired, the use of a keyboard contradicts this goal, while the use of a microphone is not always desirable (e.g., silent commands during task-force training) or simply not possible (e.g., if the user has hearing loss). Data gloves help to increase immersion within VR, as they correspond to our natural interaction. At the same time, they offer the possibility of accurately capturing hand shapes, such as those used in non-verbal communication (e.g., thumbs up, okay gesture) and in sign language. In this paper, we present a hand-shape recognition system using data gloves, including data acquisition, data preprocessing, and data classification, to enable nonverbal communication within VR. We investigate the impact of different data preprocessing approaches on accuracy and classification time. To obtain a more generalized approach, we also studied the impact of artificial data augmentation, i.e., we created new artificial data from the recorded and filtered data to augment the training data set. With our approach, 56 different hand shapes could be distinguished with an accuracy of up to 93.28%. With a reduced set of 27 hand shapes, an accuracy of up to 95.55% could be achieved. The voting meta-classifier (VL2) proved to be the most accurate, albeit slowest, classifier. A good alternative is random forest (RF), which was even able to achieve better accuracy values in a few cases and was generally somewhat faster. Artificial data augmentation proved to be an effective approach, especially in improving the classification time. Overall, we have shown that our hand-shape recognition system using data gloves is suitable for communication within VR.
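One common way to create the artificial training data mentioned above is to jitter each recorded sensor vector with small Gaussian noise. The sketch below illustrates this; the paper's exact augmentation method, noise model, and glove sensor layout are assumptions here.

```python
import numpy as np

def augment_glove_samples(samples, n_copies=2, noise_std=0.01, seed=0):
    """Create artificial samples by adding small Gaussian noise to each
    recorded sensor vector, and stack them with the originals."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples, dtype=float)
    copies = [samples]
    for _ in range(n_copies):
        copies.append(samples + rng.normal(0.0, noise_std, samples.shape))
    return np.concatenate(copies, axis=0)

# E.g. 5 recorded flex-sensor vectors of 3 values each -> 15 training samples
augmented = augment_glove_samples(np.ones((5, 3)))
```

Noise-based augmentation like this can let a smaller recorded dataset train a classifier that generalizes across slight variations in how a hand shape is held.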
Topics: Humans; Hand; Recognition, Psychology; Gestures; Virtual Reality; Sign Language
PubMed: 38139692
DOI: 10.3390/s23249847