Data in Brief, Apr 2024
Nepali Sign Language (NSL) is used by the Nepali-speaking community in Nepal and in Indian states such as Sikkim, the hilly region of North Bengal, some parts of Uttarakhand, Meghalaya, and Assam. It consists of the International Manual Alphabet (A-Z), Nepali consonants, vowels, conjunct letters, and numbers, represented as one-handed fingerspelling, the Nepali manual alphabet. The standard gestures for NSL have been published by the Nepal National Federation of the Deaf & Hard of Hearing (NFDH). The first step in learning Nepali Sign Language is to understand its alphabet set, and technology can ease that learning process. One application area of computer vision is translating sign language gestures to text or audio to facilitate communication. This is an open research area; however, NSL translation remains one of its less explored parts because no dataset has been available for NSL. This paper introduces the Nepali Sign Language Dataset (NSL23), the first of its kind, covering the vowels and consonants of the Nepali Sign Language alphabet. The dataset consists of .mov videos of 14 volunteers demonstrating 36 consonant signs and 13 vowel signs, either in one full video or character by character. The dataset was recorded under various conditions: normal and dark lighting, and prepared, unprepared, and real-world environments. The volunteers who performed the NSL gestures are classified as nine beginners using NSL for the first time and five experts who have used NSL for 5 to 25 years. NSL23 contains 630 videos representing 1205 gestures in total. The dataset can be used to train machine learning models to classify the alphabet set of NSL and, further, to develop a sign language translator.
PubMed: 38328296
DOI: 10.1016/j.dib.2024.110080
Journal of Deaf Studies and Deaf Education, Sep 2023
Children of Deaf Adults (CODAs) are uniquely positioned at the intersection between Deaf and hearing communities and often act as interpreters for their parents and hearing individuals. Informed by previous research which has highlighted language brokering as a core element of CODAs' experiences, along with the research which identifies the risk for parentification among CODAs, the aim of this study is to explore CODAs' experiences of their roles within deaf-parented households and beyond the household, at the intersection between the Deaf and hearing worlds. Semi-structured interviews were conducted with 12 CODAs (Mean age 36.33 years, Range 22-54 years) in Ireland. Three themes were generated from the analysis of the interviews: "It was really normal", Facing the Stigma associated with Deafness, and Being a Language Broker. The findings suggest that healthcare and education providers need a better understanding of the unique situations faced by CODAs in their roles as mediators between their parents and the hearing community, so that children and Deaf parents can be appropriately supported in their interactions with professionals.
Topics: Adult; Child; Humans; Young Adult; Middle Aged; Deafness; Ireland; Hearing; Parents; Sign Language
PubMed: 37384375
DOI: 10.1093/deafed/enad018
Scientific Reports, May 2024
Sign language is an important way for people with hearing and speaking disabilities to convey information, so sign language recognition has long been an important research topic. However, many current sign language recognition systems require complex deep models and rely on expensive sensors, which limits their application scenarios. To address this issue, this study proposed a lightweight, computer-vision-based, dual-path background erasure deep convolutional neural network (DPCNN) model for sign language recognition. The DPCNN consists of two paths: one learns the overall features, while the other learns the background features. The background features are gradually subtracted from the overall features to obtain an effective representation of the hand features. These features are then flattened into a one-dimensional vector and passed through a fully connected layer with 128 output units; a final fully connected layer with 24 output units serves as the output layer. On the ASL Finger Spelling dataset, the total accuracy and macro-F1 score of the proposed method are 99.52% and 0.997, respectively. More importantly, the proposed method can run on small terminals, broadening the application scenarios of sign language recognition. Experimental comparison shows that the proposed dual-path background erasure network model also has better generalization ability.
PubMed: 38762676
DOI: 10.1038/s41598-024-62008-z
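The dual-path structure in the abstract above can be sketched numerically: two feature extractors see the same frame, the background features are subtracted from the overall features, and the result passes through a 128-unit and then a 24-unit fully connected layer. A minimal NumPy sketch, with toy single-layer stand-ins for the convolutional paths and hypothetical input/feature sizes (the abstract does not give these):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy stand-ins for the two convolutional paths: each maps an input
# frame to a feature vector. Real paths would be stacks of conv layers.
d_in, d_feat = 32 * 32, 64          # hypothetical sizes
w_overall = rng.normal(0, 0.01, (d_in, d_feat))
w_background = rng.normal(0, 0.01, (d_in, d_feat))
w_fc1 = rng.normal(0, 0.01, (d_feat, 128))   # FC layer with 128 units
w_out = rng.normal(0, 0.01, (128, 24))       # 24-way output layer

def dpcnn_forward(frame):
    f_all = relu(frame.reshape(-1) @ w_overall)      # overall features
    f_bg = relu(frame.reshape(-1) @ w_background)    # background features
    f_hand = f_all - f_bg    # subtract background to isolate hand features
    h = relu(f_hand @ w_fc1)     # flatten -> FC(128)
    return softmax(h @ w_out)    # FC(24) output layer

probs = dpcnn_forward(rng.normal(size=(32, 32)))
print(probs.shape)  # (24,)
```

Only the subtract-then-classify structure mirrors the abstract; the weights, activation choices, and input size are illustrative.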
Nature, Aug 2023
Patients from historically under-represented racial and ethnic groups are enrolled in cancer clinical trials at disproportionately low rates in the USA. As these patients often have limited English proficiency, we hypothesized that one barrier to their inclusion is the cost to investigators of translating consent documents. To test this hypothesis, we evaluated more than 12,000 consent events at a large cancer centre and assessed whether patients requiring translated consent documents would sign consent documents less frequently in studies lacking industry sponsorship (for which the principal investigator pays the translation costs) than for industry-sponsored studies (for which the translation costs are covered by the sponsor). Here we show that the proportion of consent events for patients with limited English proficiency in studies not sponsored by industry was approximately half of that seen in industry-sponsored studies. We also show that among those signing consent documents, the proportion of consent documents translated into the patient's primary language in studies without industry sponsorship was approximately half of that seen in industry-sponsored studies. The results suggest that the cost of consent document translation in trials not sponsored by industry could be a potentially modifiable barrier to the inclusion of patients with limited English proficiency.
Topics: Humans; Consent Forms; Translating; Translations; Clinical Trials as Topic; Drug Industry; Communication Barriers; Research Personnel
PubMed: 37532930
DOI: 10.1038/s41586-023-06382-0
Journal of Imaging, Jun 2024
Sign language recognition technology can help people with hearing impairments communicate with hearing people, and with the rapid development of deep learning there is now substantial technical support for this work. In sign language recognition tasks, traditional convolutional neural networks used to extract spatio-temporal features from sign language videos suffer from insufficient feature extraction, resulting in low recognition rates. Moreover, large video-based sign language datasets require significant computing resources to train while still ensuring that the network generalizes, which poses a further challenge. In this paper, we present a video-based sign language recognition method based on a Residual Network (ResNet) and Long Short-Term Memory (LSTM). As the number of network layers increases, the ResNet architecture effectively mitigates the degradation problem and yields better time-series features. We use the ResNet convolutional network as the backbone model to extract sign language features; the LSTM, which uses gates to control the cell state and update the sequence outputs, then takes the learned feature space as input to obtain long-sequence features. This combination effectively extracts the spatio-temporal features in sign language videos and improves the recognition rate of sign language actions. An extensive experimental evaluation demonstrates the effectiveness and superior performance of the proposed method, with an accuracy of 85.26%, an F1-score of 84.98%, and a precision of 87.77% on Argentine Sign Language (LSA64).
PubMed: 38921626
DOI: 10.3390/jimaging10060149
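The ResNet-plus-LSTM pipeline described above, per-frame feature extraction followed by a gated recurrent update over the frame sequence, can be sketched as follows. This is a hedged illustration: a random linear map stands in for the ResNet backbone, the LSTM cell is a minimal NumPy implementation, and all sizes are assumed, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

d_feat, d_hidden = 512, 128   # hypothetical feature and hidden sizes

# Stand-in for the ResNet backbone: maps one video frame to a feature
# vector. A real pipeline would run a (pretrained) ResNet per frame.
w_cnn = rng.normal(0, 0.01, (64 * 64, d_feat))
def backbone_features(frame):
    return np.maximum(frame.reshape(-1) @ w_cnn, 0.0)

# Minimal LSTM cell: gates control how the cell state is updated,
# as the abstract describes.
w = rng.normal(0, 0.01, (4, d_hidden, d_feat + d_hidden))
b = np.zeros((4, d_hidden))

def lstm_step(x, h, c):
    z = np.concatenate([x, h])
    i = sigmoid(w[0] @ z + b[0])   # input gate
    f = sigmoid(w[1] @ z + b[1])   # forget gate
    o = sigmoid(w[2] @ z + b[2])   # output gate
    g = np.tanh(w[3] @ z + b[3])   # candidate cell state
    c = f * c + i * g              # gated cell-state update
    h = o * np.tanh(c)
    return h, c

def encode_video(frames):
    h = np.zeros(d_hidden)
    c = np.zeros(d_hidden)
    for frame in frames:
        h, c = lstm_step(backbone_features(frame), h, c)
    return h   # final hidden state summarises the sign video

video = rng.normal(size=(16, 64, 64))   # 16 toy frames
h = encode_video(video)
print(h.shape)  # (128,)
```

The final hidden state would feed a classifier over sign classes; the point of the sketch is only the spatial-then-temporal factoring of the feature extraction.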
PLOS ONE, 2024
OBJECTIVES
(i) To identify peer reviewed publications reporting the mental and/or physical health outcomes of Deaf adults who are sign language users and to synthesise evidence; (ii) If data available, to analyse how the health of the adult Deaf population compares to that of the general population; (iii) to evaluate the quality of evidence in the identified publications; (iv) to identify limitations of the current evidence base and suggest directions for future research.
DESIGN
Systematic review.
DATA SOURCES
Medline, Embase, PsycINFO, and Web of Science.
ELIGIBILITY CRITERIA FOR SELECTING STUDIES
The inclusion criteria were Deaf adult populations who used a signed language and all study types, including methods-focused papers that also report results on the health outcomes of Deaf signing populations. Full-text articles published in peer-reviewed journals up to 13 June 2023, in English or a signed language such as ASL (American Sign Language), were searched.
DATA EXTRACTION
Supported by the Rayyan systematic review software, two authors independently reviewed identified publications at each screening stage (primary and secondary). A third reviewer was consulted to settle any disagreements. Comprehensive data extraction included research design, study sample, methodology, findings, and a quality assessment.
RESULTS
Of the 35 included studies, the majority (25 out of 35) concerned mental health outcomes. The findings from this review highlighted the inequalities in health and mental health outcomes for Deaf signing populations in comparison with the general population, gaps in the range of conditions studied in relation to Deaf people, and the poor quality of available data.
CONCLUSIONS
Population sample definition and consistency of standards for reporting health outcomes of Deaf people who use sign language should be improved. Further research on health outcomes not previously reported is needed to gain a better understanding of Deaf people's state of health.
Topics: Adult; Humans; Sign Language; Outcome Assessment, Health Care
PubMed: 38625906
DOI: 10.1371/journal.pone.0298479
Comprehensive Psychiatry, Nov 2023
AIMS
To determine whether dissociative experiences moderate online problem gambling treatment effectiveness, and to characterize the temporal persistence of the relationship between dissociation and problem gambling.
DESIGN
Repeatedly measured self-report data on a guided online cognitive behavioral therapy for problem gambling collected on four occasions: before treatment, after treatment, and at 6- and 12-month follow-ups.
SETTING AND PARTICIPANTS
The data (N = 1243, 59.2% males) were collected in Finland between 2019 and 2021.
MEASUREMENTS
The primary outcome variable was the self-reported level of problem gambling. The predictors were the treatment phase and dissociative experiences, their interaction, and the demographic covariates of age, education, income, and gender.
FINDINGS
Problem gambling scores and dissociative experiences declined significantly following treatment and remained low through the follow-ups (retention rates: 52.6% [post-treatment], 26.3% [at the 6-month follow-up], and 16.1% [at the 12-month follow-up]). However, the treatment was significantly less effective in reducing problem gambling for individuals who kept experiencing dissociation after the treatment.
CONCLUSIONS
Dissociation is an integral sign of problem gambling severity and sustained dissociative experiences may significantly reduce the long-term effectiveness of online problem gambling treatments. Treatment efforts should be customized to account for individual differences in dissociative tendencies, and future research should broaden the study of dissociative experiences to other behavioral addictions.
Topics: Male; Humans; Female; Gambling; Cognitive Behavioral Therapy; Self Report; Treatment Outcome; Dissociative Disorders
PubMed: 37688936
DOI: 10.1016/j.comppsych.2023.152414
Scientific Reports, Jan 2024
The impact of adverse listening conditions on spoken language perception is well established, but the role of suboptimal viewing conditions in signed language processing is less clear. Viewing angle, i.e. the physical orientation of a perceiver relative to a signer, varies in many everyday deaf community settings for L1 signers and may impact comprehension. Processing from various viewing angles may be even more difficult for late L2 learners of a signed language, who encounter less varied sign input while learning. Using a semantic decision task in a distance priming paradigm, we show that British Sign Language signers are slower and less accurate in comprehending signs shown from side viewing angles, with L2 learners in particular making disproportionately more errors when viewing signs from side angles. We also investigated how individual differences in mental rotation ability modulate the processing of signs from different angles. Speed and accuracy on the BSL task correlated with mental rotation ability, suggesting that signers may mentally represent signs from a frontal view and use mental rotation to process signs from other viewing angles. Our results extend the literature on viewpoint specificity in visual recognition to linguistic stimuli. The data suggest that L2 signed language learners should be exposed to diverse signed language input, in terms of both viewing angle and other challenging viewing conditions, to maximise comprehension.
Topics: Humans; Sign Language; Learning; Individuality; Linguistics; Physical Examination
PubMed: 38200108
DOI: 10.1038/s41598-024-51330-1
Health Expectations: An International..., Dec 2023
BACKGROUND
Deaf and hard-of-hearing (DHH) patients are a priority population for emergency medicine health services research. DHH patients are at higher risk than non-DHH patients of using the emergency department (ED), have longer lengths of stay in the ED and report poor patient-provider communication. This qualitative study aimed to describe ED care-seeking and patient-centred care perspectives among DHH patients.
METHODS
This qualitative study is the second phase of a mixed-methods study. The goal of this study was to further explain quantitative findings related to ED outcomes among DHH and non-DHH patients. We conducted semistructured interviews with 4 DHH American Sign Language (ASL)-users and 6 DHH English speakers from North Central Florida. Interviews were transcribed and analysed using a descriptive qualitative approach.
RESULTS
Two themes were developed: (1) DHH patients engage in a complex decision-making process to determine ED utilization and (2) patient-centred ED care differs between DHH ASL-users and DHH English speakers. The first theme describes the social-behavioural processes through which DHH patients assess their need to use the ED. The second theme focuses on the social environment within the ED: patients feeling stereotyped, involvement in the care process, pain communication, receipt of accommodations and discharge processes.
CONCLUSIONS
This study underscores the importance of better understanding, and intervening in, DHH patient ED care-seeking and care delivery to improve patient outcomes. Like other studies, this study also finds that DHH patients are not a monolithic group and language status is an equity-relevant indicator. We also discuss recommendations for emergency medicine.
PATIENT OR PUBLIC CONTRIBUTION
This study convened a community advisory group made up of four DHH people to assist in developing research questions, data collection tools and validation of the analysis and interpretation of data. Community advisory group members who were interested in co-authorship are listed in the byline, with others in the acknowledgements. In addition, several academic-based co-authors are also deaf or hard of hearing.
Topics: Humans; Deafness; Persons With Hearing Impairments; Language; Sign Language; Emergency Service, Hospital
PubMed: 37555478
DOI: 10.1111/hex.13842
Proceedings of the Conference on..., Dec 2023
Large language models (LLMs) can generate natural language texts for various domains and tasks, but their potential for clinical text mining, a domain with scarce, sensitive, and imbalanced medical data, is under-explored. We investigate whether LLMs can augment clinical data for detecting Alzheimer's Disease (AD)-related signs and symptoms from electronic health records (EHRs), a challenging task that requires high expertise. We created a novel pragmatic taxonomy for AD sign and symptom progression based on expert knowledge and generated three datasets: (1) a gold dataset annotated by human experts on longitudinal EHRs of AD patients; (2) a silver dataset created by the data-to-label method, which labels sentences from a public EHR collection with AD-related signs and symptoms; and (3) a bronze dataset created by the label-to-data method, which generates sentences with AD-related signs and symptoms from the label definitions. We train a system to detect AD-related signs and symptoms from EHRs. We find that the silver and bronze datasets improve system performance, with the resulting system outperforming one that uses only the gold dataset. This shows that LLMs can generate synthetic clinical data for a complex task by incorporating expert knowledge, and that our label-to-data method can produce datasets free of sensitive information while maintaining acceptable quality.
PubMed: 38213944
DOI: 10.18653/v1/2023.findings-emnlp.474
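The label-to-data method described above generates training sentences from label definitions alone, so no real patient text is involved. A hedged Python sketch of that idea, using an illustrative two-label taxonomy and a fixed template in place of the LLM call (all label names and definitions here are hypothetical, not the paper's taxonomy):

```python
# Illustrative sign/symptom taxonomy: label -> short definition.
taxonomy = {
    "memory_loss": "forgets recent events or conversations",
    "disorientation": "is confused about time or place",
}

def label_to_data(label, definition, n=2):
    # Real systems would prompt an LLM with the label definition;
    # a fixed template keeps this sketch self-contained.
    return [
        {"text": f"Patient {definition} (example {i + 1}).", "label": label}
        for i in range(n)
    ]

# "Bronze" dataset: purely synthetic sentences paired with labels.
bronze = [ex for lab, d in taxonomy.items() for ex in label_to_data(lab, d)]
print(len(bronze))  # 4
```

The synthetic examples would then be mixed with gold (expert-annotated) and silver (data-to-label) data when training the detection system.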