IEEE Transactions on Image Processing, Jun 2024
Recently, there have been efforts to improve performance in sign language recognition by designing self-supervised learning methods. However, these methods capture limited information from sign pose data in a frame-wise learning manner, leading to sub-optimal solutions. To this end, we propose a simple yet effective self-supervised contrastive learning framework that mines rich context via spatial-temporal consistency from two distinct perspectives and learns instance-discriminative representations for sign language recognition. On one hand, since the semantics of sign language are expressed by the cooperation of fine-grained hands and coarse-grained trunks, we utilize information at both granularities and encode it into latent spaces. The consistency between hand and trunk features is constrained to encourage consistent representations of instance samples. On the other hand, inspired by the complementary properties of the motion and joint modalities, we introduce first-order motion information into sign language modeling. Additionally, we bridge the interaction between the embedding spaces of the two modalities, facilitating bidirectional knowledge transfer to enhance the sign language representation. Our method is evaluated with extensive experiments on four public benchmarks and achieves new state-of-the-art performance by a notable margin. The source code is publicly available at https://github.com/sakura/Code.
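A minimal sketch of the cross-granularity consistency and first-order motion ideas described above, assuming a symmetric InfoNCE-style objective between hand and trunk embeddings of the same clip; the function names and loss form are illustrative assumptions, not the code released at the repository.

```python
import torch
import torch.nn.functional as F

def hand_trunk_consistency_loss(hand_emb, trunk_emb, temperature=0.07):
    """hand_emb, trunk_emb: (batch, dim) embeddings of the same sign clips,
    encoded from fine-grained hand poses and coarse-grained trunk poses."""
    h = F.normalize(hand_emb, dim=1)
    t = F.normalize(trunk_emb, dim=1)
    logits = h @ t.T / temperature                     # pairwise cosine similarities
    targets = torch.arange(h.size(0), device=h.device)
    # symmetric InfoNCE: each hand embedding should match the trunk embedding
    # of its own instance and be pushed away from the other instances in the batch
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

def first_order_motion(joints):
    """joints: (batch, time, joints, coords); frame-to-frame differences
    give a first-order motion stream of the kind mentioned in the abstract."""
    return joints[:, 1:] - joints[:, :-1]
```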
PubMed: 38917290
DOI: 10.1109/TIP.2024.3416881 -
European Archives of Psychiatry and Clinical Neuroscience, Jun 2024
A large body of research has shown that schizophrenia patients demonstrate increased brain structural aging. Although this process may be coupled with aberrant changes in the intrinsic functional architecture of the brain, such changes remain understudied. We hypothesized that there are brain regions whose whole-brain functional connectivity at rest is differentially associated with brain structural aging in schizophrenia patients compared to healthy controls. Eighty-four male schizophrenia patients and eighty-six male healthy controls underwent structural MRI and resting-state fMRI. The brain-predicted age difference (b-PAD) was used as a measure of brain structural aging. Resting-state fMRI was used to obtain global correlation (GCOR) maps comprising voxelwise values of the strength and sign of functional connectivity of a given voxel with the rest of the brain. Schizophrenia patients had higher b-PAD than controls (mean between-group difference: +2.9 years). Greater b-PAD in schizophrenia patients, compared to controls, was associated with lower whole-brain functional connectivity of a region spanning the frontal orbital cortex, inferior frontal gyrus, Heschl's gyrus, planum temporale and planum polare, insula, and opercular cortices of the right hemisphere (rFTI). According to post hoc seed-based correlation analysis, the results were mainly driven by decreased functional connectivity with the posterior cingulate gyrus, left superior temporal cortices, and right angular gyrus/superior lateral occipital cortex. Lower functional connectivity of the rFTI was related to worse verbal working memory and language production. Our findings demonstrate that well-established frontotemporal functional abnormalities in schizophrenia are related to increased brain structural aging.
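For readers unfamiliar with the two measures, a small illustrative sketch follows. It assumes the standard definitions (b-PAD as predicted minus chronological age; GCOR as each voxel's mean Pearson correlation with all other voxels) and is not the authors' analysis pipeline; array shapes and names are assumptions.

```python
import numpy as np

def gcor_map(ts, eps=1e-12):
    """Global correlation (GCOR): for each voxel, the mean Pearson correlation
    of its time series with every other voxel.
    ts: array of shape (n_voxels, n_timepoints)."""
    z = ts - ts.mean(axis=1, keepdims=True)
    z /= (np.linalg.norm(z, axis=1, keepdims=True) + eps)  # unit-norm rows
    s = z.sum(axis=0)                                       # sum of standardized series
    # row_i . s equals sum_j corr(i, j), which includes corr(i, i) = 1
    n = ts.shape[0]
    return (z @ s - 1.0) / (n - 1)

def brain_pad(predicted_age, chronological_age):
    """Brain-predicted age difference (b-PAD): predicted minus chronological age."""
    return np.asarray(predicted_age) - np.asarray(chronological_age)
```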
PubMed: 38914851
DOI: 10.1007/s00406-024-01837-5 -
Journal of Deaf Studies and Deaf Education, Jun 2024
Topics: Humans; Child; Deafness; Sign Language; Persons With Hearing Impairments; Language; Education of Hearing Disabled; Comprehension
PubMed: 38913495
DOI: 10.1093/deafed/enae016 -
Journal of Deaf Studies and Deaf Education, Jun 2024
For some deaf people, sign language is the preferred language, the one in which they feel most comfortable. However, there are very few assessment tools developed or adapted for sign languages. The aim of this study was to translate and adapt into Italian Sign Language (LIS) the Italian version of the Youth Quality of Life Instrument-Deaf and Hard of Hearing Module (YQOL-DHH). The YQOL-DHH is a questionnaire assessing health-related quality of life in young deaf people. The guidelines provided by the authors of the original version were followed. Further checks and changes were made to account for variability in signers' linguistic skills. This work, and the availability of the YQOL-DHH questionnaire in LIS alongside the Italian version, will ensure accessibility for Italian deaf adolescents.
PubMed: 38899805
DOI: 10.1093/jdsade/enae025 -
Hospital Pediatrics, Jul 2024
BACKGROUND AND OBJECTIVES
Food insecurity (FI) has increasingly become a focus for hospitalized patients. The best methods for screening practices, particularly in hospitalized children, are unknown. The purpose of this study was to evaluate the results of a brief FI screening tool embedded in the electronic medical record (EMR) among inpatients.
METHODS
This was a cross-sectional study from August 2020 to September 2022 of all children admitted to a quaternary children's hospital. Primary outcomes were the proportion of patients screened for FI and the proportion with a positive screen. FI was evaluated with the Hunger Vital Sign, a validated 2-question screen administered verbally as part of the nursing intake form in the EMR (a minimal scoring sketch follows this abstract). Covariates included the demographic variables of age, sex, race, ethnicity, primary language, and insurance. Statistical analyses, including all univariate outcome and bivariate comparisons, were performed with SAS 9.4.
RESULTS
There were 31 553 patient encounters, of which 81.7% were screened for FI. Patients had a median age of 6.3 years and were mostly male (54.2%), White (60.6%), non-Hispanic (92.7%), English-speaking (94.3%), and government-insured (79.8%). Younger (0-2 years), non-White, and noninsured patients were all screened significantly less often for FI (all P < .001). A total of 3.4% were identified as having FI. Rates of FI were higher among patients who were older, non-White, Hispanic, non-English speaking, and nonprivately insured (all P < .001).
CONCLUSIONS
Despite the use of an EMR screening tool intended to be universal, we found variation in how we screen for FI. At times we missed those who would benefit most from intervention, and thus screening may be subject to implementation bias.
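As referenced in the methods above, here is a minimal, hypothetical sketch of how the two-item Hunger Vital Sign is conventionally scored (positive if either item is endorsed as "often true" or "sometimes true"); the item wording is paraphrased and the function is illustrative, not the hospital's EMR logic.

```python
def hunger_vital_sign_positive(item1_response: str, item2_response: str) -> bool:
    """Each Hunger Vital Sign item is answered 'often true', 'sometimes true',
    or 'never true'. The screen is conventionally positive if either item is
    answered 'often true' or 'sometimes true'."""
    at_risk = {"often true", "sometimes true"}
    return item1_response.lower() in at_risk or item2_response.lower() in at_risk
```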
Topics: Humans; Food Insecurity; Cross-Sectional Studies; Female; Male; Child; Child, Preschool; Infant; Mass Screening; Electronic Health Records; Hospitals, Pediatric; Adolescent; Bias; Hospitalization; Child, Hospitalized; Infant, Newborn
PubMed: 38899389
DOI: 10.1542/hpeds.2023-007602 -
Sensors (Basel, Switzerland), Jun 2024
Sign language is an essential means of communication for individuals with hearing disabilities. However, there is a significant shortage of sign language interpreters for some languages, especially in Saudi Arabia. This shortage means that a large proportion of the hearing-impaired population is deprived of services, especially in public places. This paper aims to address this gap in accessibility by leveraging technology to develop systems capable of recognizing Arabic Sign Language (ArSL) using deep learning techniques. In this paper, we propose a hybrid model to capture the spatio-temporal aspects of sign language (i.e., letters and words). The hybrid model consists of a Convolutional Neural Network (CNN) classifier that extracts spatial features from sign language data and a Long Short-Term Memory (LSTM) classifier that extracts spatio-temporal characteristics from sequential data (i.e., hand movements). To demonstrate the feasibility of our proposed hybrid model, we created a dataset of 20 different ArSL words: 4000 images for 10 static-gesture words and 500 videos for 10 dynamic-gesture words. Our proposed hybrid model demonstrates promising performance, with the CNN and LSTM classifiers achieving accuracy rates of 94.40% and 82.70%, respectively. These results indicate that our approach can significantly enhance communication accessibility for the hearing-impaired community in Saudi Arabia. Thus, this paper represents a major step toward promoting inclusivity and improving the quality of life for the hearing impaired.
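A rough sketch of the hybrid idea described above, assuming a small CNN that classifies static gesture images and an LSTM over per-frame CNN features that classifies dynamic gesture videos; layer sizes, feature dimensions, and class counts are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class FrameCNN(nn.Module):
    """Spatial branch: classifies a single gesture image and also exposes
    a frame-level feature vector for the sequence model."""
    def __init__(self, n_static_classes=10, feat_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.to_feat = nn.Linear(64, feat_dim)
        self.static_head = nn.Linear(feat_dim, n_static_classes)

    def forward(self, x):                       # x: (batch, 3, H, W)
        f = self.to_feat(self.features(x).flatten(1))
        return f, self.static_head(f)

class SequenceLSTM(nn.Module):
    """Temporal branch: classifies a dynamic gesture from its sequence of
    per-frame CNN features."""
    def __init__(self, feat_dim=128, n_dynamic_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, 64, batch_first=True)
        self.head = nn.Linear(64, n_dynamic_classes)

    def forward(self, frame_feats):             # (batch, time, feat_dim)
        _, (h, _) = self.lstm(frame_feats)
        return self.head(h[-1])
```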
Topics: Sign Language; Humans; Deep Learning; Neural Networks, Computer; Saudi Arabia; Language; Gestures
PubMed: 38894473
DOI: 10.3390/s24113683 -
Joint Commission Journal on Quality and Patient Safety, May 2024
BACKGROUND
Prior studies have documented that, despite federal mandates, clinicians infrequently provide accommodations that enable equitable health care engagement for patients with communication disabilities. To date, there has been a paucity of empirical research describing the organizational approach to implementing these accommodations. The authors asked US health care organizations how they were delivering these accommodations in the context of clinical care, what communication accommodations they provided, and what disability populations they addressed.
METHODS
In this study, 19 qualitative interviews were conducted with disability coordinators representing 15 US health care organizations actively implementing communication accommodations. A conventional qualitative content analysis approach was used to code the data and derive themes.
RESULTS
The authors identified three major themes related to how US health care organizations are implementing the provision of this service: (1) Operationalizing the delivery of communication accommodations in health care required executive leadership support and preparatory work at clinic and organization levels; (2) The primary focus of communication accommodations was sign language interpreter services for Deaf patients and, secondarily, other hearing- and visual-related accommodations; and (3) Providing communication accommodations for patients with speech and language and cognitive disabilities was less frequent, but when done involved more than providing a single aid or service.
CONCLUSION
These findings suggest that, in addition to individual clinician efforts, there are organization-level factors that affect consistent provision of communication accommodations across the full range of communication disabilities. Future research should investigate these factors and test targeted implementation strategies to promote equitable access to health care for all patients with communication disabilities.
PubMed: 38879438
DOI: 10.1016/j.jcjq.2024.05.003 -
Journal of the American Psychoanalytic Association, Jun 2024
In this essay the author describes some of the transformations that occur as one moves from preverbal functioning to verbally symbolic language. In preverbal experience, there is a direct connection between the sign and what is signified. An infant or child signifies displeasure by throwing his food or other objects to the floor. Much of the emotional tie between mother and infant, and between patient and analyst, is communicated in this way. When a transformation occurs from preverbal to verbally symbolic language, as occurs in early development and as one interprets a dream, meaning is not merely translated; meaning is created. On acquiring verbally symbolic language, a "space" mediated by an interpreting subject opens between the symbol (for instance, the word "guilt") and the symbolized (the experience of guilt), and a new subjectivity is created. On entry into verbally symbolic language, one becomes able to experience oneself in a qualitatively different way; one becomes both subject and object, I and me; one becomes able to experience a far broader range of feelings and types of thinking. Helen Keller's account of her experience of acquiring verbally symbolic language is drawn upon.
PubMed: 38877745
DOI: 10.1177/00030651241257263 -
Scientific Reports, Jun 2024
As a form of body language, gestures play an important role in smart homes, game interaction, sign language communication, and other settings. Gesture recognition methods have been studied extensively, but existing methods have inherent limitations regarding user experience, visual environment, and recognition granularity. Millimeter-wave radar provides an effective way to address these problems in gesture recognition because of its considerable bandwidth and high-precision sensing. However, interfering factors and model complexity pose an enormous challenge to the practical application of gesture recognition when millimeter-wave radar is used in complex scenes. Based on multi-feature fusion, a gesture recognition method for complex scenes is proposed in this work. We collected data in a variety of places to improve sample reliability, filtered clutter to improve the signal-to-noise ratio (SNR), and then obtained multiple features, namely the range-time map (RTM), Doppler-time map (DTM), and angle-time map (ATM), and fused them to enhance the richness and expressive power of the features. A lightweight neural network model, multi-CNN-LSTM, is designed for gesture recognition. This model consists of three convolutional neural networks (CNNs), one for each of the obtained features, and one long short-term memory (LSTM) network for temporal features. We analyzed the performance and complexity of the model and verified the effectiveness of the feature extraction. Numerous experiments have shown that this method has generalization ability, adaptability, and high robustness in complex scenarios. The recognition accuracy of 14 experimental gestures reached 97.28%.
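A hypothetical sketch of the multi-CNN-LSTM fusion described above: one small CNN per feature map (RTM, DTM, ATM), with the per-frame branch features concatenated and fed to an LSTM. All layer sizes, and the frame-wise treatment of the three maps, are assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class MapCNN(nn.Module):
    """Per-map branch: embeds one radar feature map frame into a vector."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, out_dim),
        )
    def forward(self, x):                       # x: (batch, 1, H, W)
        return self.net(x)

class MultiCNNLSTM(nn.Module):
    """Fuses RTM, DTM, and ATM branch features per frame, then models the
    temporal dynamics with an LSTM before classification."""
    def __init__(self, n_gestures=14, branch_dim=64):
        super().__init__()
        self.rtm_cnn = MapCNN(branch_dim)
        self.dtm_cnn = MapCNN(branch_dim)
        self.atm_cnn = MapCNN(branch_dim)
        self.lstm = nn.LSTM(3 * branch_dim, 128, batch_first=True)
        self.head = nn.Linear(128, n_gestures)

    def forward(self, rtm, dtm, atm):           # each: (batch, time, 1, H, W)
        b, t = rtm.shape[:2]
        feats = [cnn(x.flatten(0, 1)).view(b, t, -1)
                 for cnn, x in ((self.rtm_cnn, rtm),
                                (self.dtm_cnn, dtm),
                                (self.atm_cnn, atm))]
        _, (h, _) = self.lstm(torch.cat(feats, dim=-1))
        return self.head(h[-1])
```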
PubMed: 38877076
DOI: 10.1038/s41598-024-64576-6 -
IEEE Transactions on Neural Systems and Rehabilitation Engineering, Jun 2024
Gesture recognition is crucial for enhancing human-computer interaction and is particularly pivotal in rehabilitation contexts, aiding individuals recovering from physical impairments and significantly improving their mobility and interactive capabilities. However, current wearable hand gesture recognition approaches are often limited in detection performance, wearability, and generalization. We thus introduce EchoGest, a novel hand gesture recognition system based on soft, stretchable, transparent artificial skin with integrated ultrasonic waveguides. Our presented system is the first to use soft ultrasonic waveguides for hand gesture recognition. Ecoflex™ 00-31 and Ecoflex™ 00-45 Near Clear™ silicone elastomers were employed to fabricate the artificial skin and ultrasonic waveguides, while 0.1 mm diameter silver-plated copper wires connected the transducers in the waveguides to the electrical system. The wires are enclosed within an additional elastomer layer, achieving a sensing skin with a total thickness of around 500 μm. Ten participants wore the EchoGest system and performed static hand gestures from two gesture sets: 8 daily life gestures and 10 American Sign Language (ASL) digits 0-9. Leave-One-Subject-Out Cross-Validation analysis demonstrated accuracies of 91.13% for daily life gestures and 88.5% for ASL gestures. The EchoGest system has significant potential in rehabilitation, particularly for tracking and evaluating hand mobility, which could substantially reduce the workload of therapists in both clinical and home-based settings. Integrating this technology could revolutionize hand gesture recognition applications, from real-time sign language translation to innovative rehabilitation techniques.
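A minimal sketch of the leave-one-subject-out evaluation reported above, using scikit-learn's LeaveOneGroupOut; the feature matrix, labels, subject IDs, and classifier choice are placeholders rather than the EchoGest pipeline.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC

def loso_accuracy(X, y, subjects):
    """X: (n_samples, n_features) echo features, y: gesture labels,
    subjects: participant ID per sample (10 participants in the study).
    Each fold trains on 9 participants and tests on the held-out one."""
    accs = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
        clf = SVC(kernel="rbf").fit(X[train_idx], y[train_idx])
        accs.append(clf.score(X[test_idx], y[test_idx]))
    return float(np.mean(accs))
```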
PubMed: 38869995
DOI: 10.1109/TNSRE.2024.3414136