PLoS One 2024
This study investigates head nods in natural dyadic German Sign Language (DGS) interaction, with the aim of determining whether head nods serving different functions vary in their phonetic characteristics. Earlier research on spoken and sign language interaction has revealed that head nods vary in the form of the movement. However, most claims about the phonetic properties of head nods have been based on manual annotation without reference to naturalistic text types, and the head nods produced by the addressee have been largely ignored. There is a lack of detailed information about the phonetic properties of the addressee's head nods and their interaction with manual cues in DGS as well as in other sign languages, and the existence of a form-function relationship for head nods remains uncertain. We hypothesize that head nods functioning in the context of affirmation differ from those signaling feedback in their form and their co-occurrence with manual items. To test this hypothesis, we apply OpenPose, a computer vision toolkit, to extract head nod measurements from video recordings and examine head nods in terms of their duration, amplitude and velocity. We describe the basic phonetic properties of head nods in DGS and their interaction with manual items in naturalistic corpus data. Our results show that the phonetic properties of affirmative nods differ from those of feedback nods: feedback nods are on average slower in production and smaller in amplitude than affirmation nods, and they are commonly produced without a co-occurring manual element. We attribute these variations in phonetic properties to the distinct roles the two cues fulfill in the turn-taking system. This research underlines the importance of non-manual cues in shaping the turn-taking system of sign languages, establishing links between research fields such as sign language linguistics, conversation analysis, quantitative linguistics and computer vision.
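The pipeline described above extracts head positions with OpenPose and then measures each nod's duration, amplitude and velocity. As a minimal illustration (not the authors' actual code), those three measurements might be derived from a per-frame vertical keypoint trace like this; the trace values and frame rate below are hypothetical:

```python
import numpy as np

def nod_metrics(nose_y, fps=25.0):
    """Duration (s), amplitude (px) and peak velocity (px/s) of one head nod,
    given the per-frame vertical position of a head keypoint."""
    y = np.asarray(nose_y, dtype=float)
    duration = len(y) / fps                 # frames -> seconds
    amplitude = y.max() - y.min()           # total vertical excursion
    velocity = np.abs(np.diff(y)).max() * fps  # peak frame-to-frame speed
    return duration, amplitude, velocity

# Hypothetical 10-frame down-up nod trace at 25 fps
trace = [0, 2, 5, 9, 12, 11, 8, 4, 1, 0]
d, a, v = nod_metrics(trace)
```

On real OpenPose output one would first smooth the keypoint trace and segment individual nods; this sketch only shows the per-nod arithmetic.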
Topics: Humans; Sign Language; Phonetics; Germany; Male; Head; Female; Language; Head Movements
PubMed: 38814896
DOI: 10.1371/journal.pone.0304040
IEEE Transactions on Image Processing 2024
Continuous sign language recognition (CSLR) aims to recognize the glosses in a sign language video. Enhancing the generalization ability of CSLR's visual feature extractor is a worthy area of investigation. In this paper, we model glosses as priors that help to learn more generalizable visual features. Specifically, the signer-invariant gloss feature is extracted by a pre-trained gloss BERT model. Then we design a gloss prior guidance network (GPGN). It contains a novel parallel densely-connected temporal feature extraction (PDC-TFE) module for multi-resolution visual feature extraction. The PDC-TFE captures the complex temporal patterns of the glosses. The pre-trained gloss feature guides the visual feature learning through a cross-modality matching loss. We propose to formulate the cross-modality feature matching as a regularized optimal transport problem, which can be efficiently solved by a variant of the Sinkhorn algorithm. The GPGN parameters are learned by optimizing a weighted sum of the cross-modality matching loss and the CTC loss. Experimental results on German and Chinese sign language benchmarks demonstrate that the proposed GPGN achieves competitive performance. The ablation study verifies the effectiveness of several critical components of the GPGN. Furthermore, the proposed pre-trained gloss BERT model and cross-modality matching can be seamlessly integrated into other RGB-cue-based CSLR methods as plug-and-play formulations to enhance the generalization ability of the visual feature extractor.
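The Sinkhorn step mentioned above is the standard solver for entropy-regularized optimal transport. A minimal sketch of the plain algorithm (not the paper's specific regularized variant), with toy uniform marginals and a hypothetical cost matrix standing in for the visual-gloss matching costs:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iters=200):
    """Entropy-regularized optimal transport via Sinkhorn iterations.
    a, b: source/target marginals; C: cost matrix; eps: regularization strength."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                # scale so column sums match b
        u = a / (K @ v)                  # scale so row sums match a
    return u[:, None] * K * v[None, :]   # transport plan P

# Toy problem: three visual features matched to three gloss features,
# with zero cost on the "correct" pairing (hypothetical cost matrix)
a = np.full(3, 1.0 / 3)
b = np.full(3, 1.0 / 3)
C = 1.0 - np.eye(3)
P = sinkhorn(a, b, C)
```

The resulting plan P concentrates mass on the low-cost diagonal while keeping both marginals; a matching loss can then be taken as the inner product of P with C.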
PubMed: 38814773
DOI: 10.1109/TIP.2024.3404869
American Journal of Health-system... Jun 2024
Topics: Humans; Hearing Loss; Communication; Health Personnel; Professional-Patient Relations
PubMed: 38813670
DOI: 10.1093/ajhp/zxae073
Disability and Health Journal May 2024
BACKGROUND
Deaf and hard-of-hearing (DHH) people are at higher risk than their non-DHH counterparts of experiencing adverse birth outcomes. There is a lack of research focusing on social, linguistic, and medical factors related to being DHH which may identify groups of DHH people who experience more inequity.
OBJECTIVE
Examine differences in the prevalence of cesarean and adverse birth outcomes among diverse sub-groups of DHH people.
METHODS
We conducted a cross-sectional survey of DHH birthing people in the U.S. who gave birth within the past 10 years. The sample was predominantly white, college educated, and married. We assessed cesarean birth and three adverse birth outcomes: preterm birth, low birthweight, and NICU admission post-delivery. DHH-specific variables were genetic etiology of hearing loss, preferred language (i.e., American Sign Language, English, or bilingual), severity of hearing loss, age of onset of hearing loss, and self-reported quality of perinatal care communication. We estimated prevalence, 95% confidence intervals, and unadjusted prevalence ratios.
RESULTS
Thirty-one percent of our sample reported a cesarean birth. Overall, there were no significant differences in prevalence across the outcome variables with respect to preferred language, genetic etiology, severity, and age of onset. Poorer perinatal care communication quality was associated with higher prevalence of preterm birth (PR = 2.37) and NICU admission (PR = 1.91).
CONCLUSIONS
Our study found no evidence supporting differences in obstetric outcomes among DHH birthing people across medical factors related to deafness. Findings support the important role of communication access for DHH people in healthcare environments.
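The unadjusted prevalence ratios reported above come from simple 2x2 counts. A sketch with hypothetical counts (not the study's data), using the usual log-normal approximation for the 95% CI:

```python
import math

def prevalence_ratio(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Unadjusted prevalence ratio with a 95% CI (log-normal approximation)."""
    p1 = exposed_cases / exposed_total
    p0 = unexposed_cases / unexposed_total
    pr = p1 / p0
    # Standard error of log(PR) for binomial proportions
    se = math.sqrt(1 / exposed_cases - 1 / exposed_total
                   + 1 / unexposed_cases - 1 / unexposed_total)
    lower = math.exp(math.log(pr) - 1.96 * se)
    upper = math.exp(math.log(pr) + 1.96 * se)
    return pr, lower, upper

# Hypothetical counts (not the study's data): 24/60 preterm births in the
# poorer-communication group vs 10/60 in the better-communication group
pr, lower, upper = prevalence_ratio(24, 60, 10, 60)
```

With these illustrative counts the PR is 2.4, comparable in magnitude to the PR = 2.37 the abstract reports for preterm birth.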
PubMed: 38811248
DOI: 10.1016/j.dhjo.2024.101639
Orthopedics May 2024
Limited Health Literacy Among Patients With Orthopedic Injuries: A Cross-sectional Survey of Patients Who Underwent Orthopedic Trauma Surgery in a County Hospital Setting.
BACKGROUND
Patients with limited health literacy have difficulty understanding their injuries and postoperative treatment, which can negatively affect their outcomes.
MATERIALS AND METHODS
This cross-sectional questionnaire-based study of 103 adult patients sought to quantify patients' health literacy at a single county hospital's orthopedic trauma clinic and to examine their ability to understand injuries and treatment plans. Demographics, Newest Vital Sign (NVS) health literacy assessment, and knowledge scores were used to assess patients' comprehension of their injuries and treatment plan. Patients were grouped by NVS score (NVS <4: limited health literacy). Fisher's exact tests and t tests were used to compare demographic and comprehension scores. Multivariate logistic regression analysis was used to examine the association among low health literacy, sociodemographic variables, and knowledge scores.
RESULTS
Of the 103 patients, 75% were determined to have limited health literacy. Patients younger than 30 years were more likely to have adequate literacy (50% vs 23%, P = .01). With respect to sociodemographic factors, patients who spoke Spanish as their primary language were 8.77 times more likely to have limited health literacy (odds ratio, 8.77; 95% CI, 1.03-76.92; P = .04). Low health literacy was 3.52 and 4.14 times more likely to predict discordance in answers about the specific bone fractured and the narcotics prescribed (P = .04 and P = .02, respectively).
CONCLUSION
Spanish-speaking patients demonstrated limited health literacy and difficulty understanding their injuries and postoperative treatment plans compared with English-speaking patients. Patients with low health literacy are more likely to be unsure which bone they fractured or which opiates they were prescribed. [Orthopedics. 202x;4x(x):xx-xx.].
PubMed: 38810131
DOI: 10.3928/01477447-20240520-01
Biomaterials Oct 2024
The proliferation of medical wearables necessitates the development of novel electrodes for cutaneous electrophysiology. In this work, poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS) is combined with a deep eutectic solvent (DES) and polyethylene glycol diacrylate (PEGDA) to develop printable and biocompatible electrodes for long-term cutaneous electrophysiology recordings. The impact of printing parameters on the conducting properties, morphological characteristics, mechanical stability and biocompatibility of the material was investigated. The optimised eutectogel formulations were fabricated in four different patterns (flat, pyramidal, striped and wavy) to explore the influence of electrode geometry on skin conformability and mechanical contact. These electrodes were employed for impedance and forearm EMG measurements. Furthermore, arrays of twenty electrodes were embedded into a textile and used to generate body surface potential maps (BSPMs) of the forearm, where different finger movements were recorded and analysed. Finally, BSPMs for three different letters (B, I, O) in sign language were recorded and used to train a logistic regression classifier able to reliably identify each letter. This novel cutaneous electrode fabrication approach offers new opportunities for long-term electrophysiological recordings, online sign-language translation and brain-machine interfaces.
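A logistic-regression letter classifier over multi-electrode features, as described above, is a small model. A sketch using multinomial logistic regression trained by gradient descent; the 20-channel data here are synthetic stand-ins for BSPM features, not the paper's recordings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for 20-electrode BSPM features of three letters (B, I, O):
# each class scatters around its own mean activation pattern.
n_per_class, n_channels, n_classes = 30, 20, 3
means = rng.normal(size=(n_classes, n_channels))
X = np.vstack([means[k] + 0.3 * rng.normal(size=(n_per_class, n_channels))
               for k in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# Multinomial logistic regression trained by plain gradient descent
W = np.zeros((n_channels, n_classes))
b = np.zeros(n_classes)
Y = np.eye(n_classes)[y]                       # one-hot labels
for _ in range(300):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)          # softmax probabilities
    grad = (p - Y) / len(X)                    # cross-entropy gradient
    W -= 0.1 * (X.T @ grad)
    b -= 0.1 * grad.sum(axis=0)

acc = float((np.argmax(X @ W + b, axis=1) == y).mean())
```

In practice one would hold out a test set and extract features (e.g. per-channel RMS) from the raw BSPM windows before classification; this sketch only shows the classifier itself.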
Topics: Printing, Three-Dimensional; Humans; Electrodes; Polystyrenes; Textiles; Machine Learning; Electric Conductivity; Wearable Electronic Devices; Bridged Bicyclo Compounds, Heterocyclic; Gels; Polymers; Polyethylene Glycols; Electromyography; Biocompatible Materials
PubMed: 38805956
DOI: 10.1016/j.biomaterials.2024.122624
Health Communication May 2024
While dissemination of information is a key function of health communication, signage at medical facilities has other functions: signs can be a type of marketing (e.g., services offered), can promote credibility and inspire trust, can exacerbate or ameliorate social inequalities and can provide educational opportunities. All of these functions are influenced by cultural, contextual and social factors as evidenced by a linguistic landscape (LL) perspective. Traditional Chinese medicine (TCM) is a particularly instructive case for considering the functions of signage in healthcare LL as it has a strong cultural component from its historical Chinese roots, but its practice has been popularized around the globe in recent years. Given the role of TCM as a main or complementary medical treatment and healthcare option, this study investigates TCM LLs as sites of healthcare communication. Specifically, we analyze a set of 1,659 signs from two TCM hospitals in a multilingual, ethnic minority region of China as a case study which can be useful for healthcare providers when considering their own use of LL. We describe the way language and other sign features are used for informational, symbolic and other functions, showing how explicit communication channels as well as implicit ideological channels can impact healthcare communication. We discuss these findings in light of the need for healthcare communication which is sensitive to stakeholder needs.
PubMed: 38797965
DOI: 10.1080/10410236.2024.2346676
Sensors (Basel, Switzerland) May 2024
Deaf and hard-of-hearing people mainly communicate using sign language, which is a set of signs made using hand gestures combined with facial expressions to make meaningful and complete sentences. The problem that faces deaf and hard-of-hearing people is the lack of automatic tools that translate sign languages into written or spoken text, which has led to a communication gap between them and their communities. Most state-of-the-art vision-based sign language recognition approaches focus on translating non-Arabic sign languages, with few targeting the Arabic Sign Language (ArSL) and even fewer targeting the Saudi Sign Language (SSL). This paper proposes a mobile application that helps deaf and hard-of-hearing people in Saudi Arabia to communicate efficiently with their communities. The prototype is an Android-based mobile application that applies deep learning techniques to translate isolated SSL to text and audio and includes unique features that are not available in other related applications targeting ArSL. The proposed approach, when evaluated on a comprehensive dataset, has demonstrated its effectiveness by outperforming several state-of-the-art approaches and producing results that are comparable to these approaches. Moreover, testing the prototype on several deaf and hard-of-hearing users, in addition to hearing users, proved its usefulness. In the future, we aim to improve the accuracy of the model and enrich the application with more features.
Topics: Sign Language; Humans; Deep Learning; Saudi Arabia; Mobile Applications; Deafness; Persons With Hearing Impairments
PubMed: 38793964
DOI: 10.3390/s24103112
Biosensors May 2024
At the heart of the non-implantable electronic revolution lie ionogels, which are remarkably conductive, thermally stable, and even antimicrobial materials. Yet their potential has been hindered by poor mechanical properties. Herein, a double network (DN) ionogel crafted from 1-ethyl-3-methylimidazolium chloride ([Emim]Cl), acrylamide (AM), and polyvinyl alcohol (PVA) was constructed. Its tensile strength (0.06-5.30 MPa), fracture elongation (363-1373%), and conductivity can be adjusted across a wide range, enabling researchers to tailor the material to specific needs; the ionogel thus combines robustness with flexibility. The ionogel exhibits a bi-modal response to temperature and strain, making it an ideal candidate for strain sensor applications. It also functions as a flexible strain sensor that can detect physiological signals in real time, opening doors to personalized health monitoring and disease management. Moreover, these gels' ability to decode the intricate movements of sign language paves the way for improved communication accessibility for the deaf and hard-of-hearing community. This DN ionogel lays the foundation for a future in which e-skins and wearable sensors seamlessly integrate into our lives, revolutionizing healthcare, human-machine interaction, and beyond.
Topics: Humans; Sign Language; Polyvinyl Alcohol; Monitoring, Physiologic; Wearable Electronic Devices; Gels; Imidazoles; Biosensing Techniques; Acrylamide; Tensile Strength
PubMed: 38785701
DOI: 10.3390/bios14050227
Cognition Aug 2024
Adults with no knowledge of sign languages can perceive distinctive markers that signal event boundedness (telicity), suggesting that telicity is a cognitively natural semantic feature that can be marked iconically (Strickland et al., 2015). This study asks if non-signing children (5-year-olds) can also link telicity to iconic markers in sign. Experiment 1 attempted three close replications of Strickland et al. (2015) and found only limited success. However, Experiment 2 showed that children can both perceive the relevant visual feature and can succeed at linking the visual property to telicity semantics when allowed to filter their answer through their own linguistic choices. Children's performance demonstrates the cognitive naturalness and early availability of the semantics of telicity, supporting the idea that telicity helps guide the language acquisition process.
Topics: Humans; Sign Language; Male; Female; Child, Preschool; Semantics; Language Development
PubMed: 38776621
DOI: 10.1016/j.cognition.2024.105811