-
IEEE International Conference on... Jun 2023
Fast and flexible communication options are limited for speech-impaired people. Hand gestures coupled with fast, generated speech can enable a more natural social dynamic for those individuals, particularly individuals without the fine motor skills to type reliably on a keyboard or tablet. We created a mobile phone application prototype that generates audible responses associated with trained hand movements and that collects and organizes accelerometer data for rapid training, allowing tailored models for individuals who may not be able to perform standard movements such as sign language. Six participants performed 11 distinct gestures to produce the dataset. A mobile application was developed that integrated a bidirectional LSTM network architecture trained on these data. After evaluation using nested subject-wise cross-validation, our integrated bidirectional LSTM model demonstrates an overall recall of 91.8% in recognizing the 11 pre-selected hand gestures, rising to 95.8% when two commonly confused gestures were excluded. This prototype is a step toward a mobile phone system capable of capturing new gestures and developing tailored gesture recognition models for individuals in speech-impaired populations. Further refinement of this prototype can enable fast and efficient communication, with the goal of further improving social interaction for individuals unable to speak.
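The abstract names the architecture but not its implementation; a minimal sketch of a bidirectional LSTM classifier for fixed-length tri-axial accelerometer windows might look like the following (Python/PyTorch). The window length, hidden size, and batch size are illustrative assumptions; only the 11-gesture output follows the abstract.

import torch
import torch.nn as nn

class BiLSTMGestureClassifier(nn.Module):
    """Bidirectional LSTM over accelerometer windows -> gesture logits."""
    def __init__(self, n_features=3, hidden=64, n_classes=11):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):             # x: (batch, time, 3) accelerometer axes
        out, _ = self.lstm(x)         # (batch, time, 2 * hidden)
        return self.head(out[:, -1])  # classify from the final time step

model = BiLSTMGestureClassifier()
window = torch.randn(8, 100, 3)       # 8 hypothetical windows of 100 samples each
logits = model(window)                # (8, 11) class scores

Subject-wise cross-validation, as described in the abstract, would hold out all windows from one participant per fold rather than splitting windows at random, so the reported recall reflects generalization to unseen users.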
PubMed: 38405383
DOI: 10.1109/ichi57859.2023.00062 -
Micromachines Jan 2024
Gesture Recognition Based on a Convolutional Neural Network-Bidirectional Long Short-Term Memory Network for a Wearable Wrist Sensor with Multi-Walled Carbon Nanotube/Cotton Fabric Material.
Flexible pressure sensors play a crucial role in detecting human motion and facilitating human-computer interaction. In this paper, a flexible pressure sensor unit with high sensitivity (2.242 kPa⁻¹), fast response time (80 ms), and remarkable stability (1000 cycles) is proposed and fabricated from a multi-walled carbon nanotube (MWCNT)/cotton fabric (CF) material using a dip-coating method. Six flexible pressure sensor units are integrated into a flexible wristband to form a wearable and portable wrist sensor with favorable stability. Then, seven wrist gestures (Gesture Group #1), five letter gestures (Gesture Group #2), and eight sign language gestures (Gesture Group #3) are performed while wearing the wrist sensor, and the corresponding time-sequence signals of the three gesture groups (#1, #2, and #3) are collected from the wrist sensor. To efficiently recognize the different gestures in the three groups detected by the wrist sensor, a fusion network model combining a convolutional neural network (CNN) and a bidirectional long short-term memory (BiLSTM) network, named CNN-BiLSTM, which has strong robustness and generalization ability, is constructed. The three gesture groups were recognized by the CNN-BiLSTM model with accuracies of 99.40%, 95.00%, and 98.44%, respectively. Twenty gestures (merged from Groups #1, #2, and #3) were recognized with an accuracy of 96.88% to validate the applicability of the wrist sensor and this model for gesture recognition. The experimental results indicate that the CNN-BiLSTM model performs very efficiently in recognizing the different gestures collected from the flexible wrist sensor.
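As a rough illustration of the CNN-BiLSTM fusion described above, the sketch below (Python/PyTorch) stacks a 1-D convolution over the six sensor channels ahead of a bidirectional LSTM. The kernel size, hidden width, and 20-class head are assumptions for demonstration, not the paper's reported configuration.

import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """1-D CNN feature extractor followed by a bidirectional LSTM classifier."""
    def __init__(self, n_channels=6, n_classes=20):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),  # local pressure patterns
            nn.ReLU(),
            nn.MaxPool1d(2),                                      # halve the time axis
        )
        self.bilstm = nn.LSTM(32, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):                        # x: (batch, time, channels)
        z = self.cnn(x.transpose(1, 2))          # (batch, 32, time // 2)
        out, _ = self.bilstm(z.transpose(1, 2))  # (batch, time // 2, 128)
        return self.head(out.mean(dim=1))        # average over time -> logits

Combining convolutional and recurrent layers this way is a common pattern for multi-channel wearable time series: the CNN captures short-range signal shapes while the BiLSTM models the longer gesture dynamics in both directions.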
PubMed: 38398915
DOI: 10.3390/mi15020185 -
Revista Cientifica Odontologica... 2023
Review
Sign language is the main means of communication for deaf people; it combines manual, body and facial movements with specific meanings. Among the main barriers faced by this community are the lack of communication and the shortage of health services, compounded by health professionals' inexperience with and lack of knowledge of sign language. This article aims to raise awareness of the importance of and need for sign language in dental clinical care and professional training, and to present strategies, tools and recommendations for better care of deaf people. Databases such as SciELO, Google Scholar, PubMed and ScienceDirect were searched for publications from 2007 to 2021. It was concluded that the main barriers to communication are dentists' limited knowledge of sign language, limited access to interpreters and difficulties in obtaining appointments, all of which increase susceptibility to oral disease. Including sign language courses in the university curriculum offers academic and professional-ethics benefits; however, follow-up is necessary to confirm compliance with the required parameters. Among the main tools and strategies, the use of educational videos, the accompaniment of interpreters and, given current technological development, the use of mobile applications can facilitate communication. A set of recommendations is also proposed for the care of adult and paediatric deaf patients.
PubMed: 38390608
DOI: 10.21142/2523-2754-1004-2022-135 -
BMJ Open Feb 2024
INTRODUCTION
Research using animal models suggests that intensive motor skill training in infants under 2 years old with cerebral palsy (CP) may significantly reduce, or even prevent, maladaptive neuroplastic changes following brain injury. However, the ability of such interventions to prevent secondary neurological damage has never been assessed in infants with CP. This study aims to determine the effect of baby Hand and Arm Bimanual Intensive Therapy Including Lower Extremities (baby HABIT-ILE) in infants with unilateral CP, compared with a control intervention.
METHODS AND ANALYSIS
This randomised controlled trial will include 48 infants with unilateral CP aged 6-18 months (corrected age if preterm) at the first assessment. They will be paired by age and by aetiology of the CP, and randomised into two groups (immediate and delayed). Assessments will be performed at baseline and at 1 month, 3 months and 6 months after baseline. The immediate group will receive 50 hours of baby HABIT-ILE intervention over 2 weeks, between the first and second assessments, while the delayed group will continue their usual activities. The delayed group will receive the baby HABIT-ILE intervention after the 3-month assessment. The primary outcome will be the Mini-Assisting Hand Assessment. Secondary outcomes will include behavioural assessments of gross and fine motor function and visual, cognitive and language abilities, as well as MRI and kinematic measures. In addition, parents will define and score child-relevant goals and complete questionnaires on participation, daily activities and mobility.
ETHICS AND DISSEMINATION
Full ethical approval has been obtained from the ethics committee, Brussels (2013/01MAR/069 B403201316810g). The recommendations of the ethics board and the Belgian law of 7 May 2004 concerning human experiments will be followed. Parents will sign a written informed consent form ahead of participation. Findings will be published in peer-reviewed journals and presented at conferences.
TRIAL REGISTRATION NUMBER
NCT04698395. Registered on the International Clinical Trials Registry Platform (ICTRP) on 2 December 2020 and NIH Clinical Trials Registry on 6 January 2021. URL of trial registry record: https://clinicaltrials.gov/ct2/show/NCT04698395?term=bleyenheuft&draw=1&rank=7.
Topics: Infant, Newborn; Infant; Humans; Child, Preschool; Cerebral Palsy; Upper Extremity; Hand; Parents; Brain Injuries; Randomized Controlled Trials as Topic
PubMed: 38367973
DOI: 10.1136/bmjopen-2023-078383 -
Sensors (Basel, Switzerland) Jan 2024
Japanese Sign Language (JSL) is vital for communication in Japan's deaf and hard-of-hearing community. However, probably because of the large number of patterns (46 types), which mix static and dynamic gestures, the dynamic ones have been excluded in most studies. Few researchers have worked on recognizing the dynamic JSL alphabet, and the reported accuracy has been unsatisfactory. We propose a dynamic JSL recognition system that uses effective feature extraction and feature selection to overcome these challenges. The procedure combines hand pose estimation, effective feature extraction, and machine learning techniques. We collected a video dataset capturing JSL gestures with standard RGB cameras and employed MediaPipe for hand pose estimation. Four types of features are proposed; their significance is that the same feature generation method can be used regardless of the number of frames or whether the gestures are dynamic or static. We employed a random forest (RF)-based feature selection approach to select the most informative features. Finally, we fed the reduced feature set into a kernel-based support vector machine (SVM) classifier. Evaluations conducted on our newly created dynamic Japanese Sign Language alphabet dataset and the LSA64 dynamic dataset yielded recognition accuracies of 97.20% and 98.40%, respectively. This approach not only addresses the complexities of JSL but also holds the potential to bridge communication gaps, offering effective communication for deaf and hard-of-hearing people, with broader implications for sign language recognition systems globally.
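The pipeline described in this abstract (MediaPipe hand pose estimation, random-forest-based feature selection, kernel SVM classification) could be sketched as follows in Python. The flattened-landmark features here are a simplification and do not reproduce the paper's four proposed feature types.

import cv2
import mediapipe as mp
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

def landmark_features(image_bgr):
    """Return a 63-dim vector (21 landmarks x 3 coords), or None if no hand is found."""
    result = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm]).ravel()

# X: stacked feature vectors (one per gesture sample), y: gesture labels
clf = Pipeline([
    ("select", SelectFromModel(RandomForestClassifier(n_estimators=200))),  # RF-based feature selection
    ("svm", SVC(kernel="rbf")),                                             # kernel SVM classifier
])
# clf.fit(X_train, y_train); clf.predict(X_test)

For dynamic gestures, the per-frame landmark vectors would presumably be aggregated over each clip (for example, statistics across frames) so that the feature length stays fixed regardless of frame count, in the spirit of the abstract's frame-independent feature generation.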
Topics: Humans; Japan; Sign Language; Pattern Recognition, Automated; Hand; Algorithms; Gestures
PubMed: 38339542
DOI: 10.3390/s24030826 -
Digital Health 2024
BACKGROUND
Ineffective communication with Deaf individuals in healthcare settings has led to poor outcomes including miscommunication, waste, and errors. To help address these challenges, we developed a mobile app, Deaf in Touch Everywhere (DITE), which aims to connect the Deaf community in Malaysia with a pool of off-site interpreters through secure video conferencing.
OBJECTIVES
The aims of this study were to (a) assess the feasibility and acceptability of measuring Unified Theory of Acceptance and Use of Technology (UTAUT) constructs for DITE with the Deaf community and Malaysian Sign Language (BIM) interpreters and (b) seek input from Deaf people and BIM interpreters on DITE to improve its design.
METHODS
Two versions of the UTAUT questionnaire were adapted for BIM interpreters and the Deaf community. Participants were recruited from both groups and asked to test the DITE app features over a 2-week period. They then completed the questionnaire and participated in focus group discussions to share their feedback on the app.
RESULTS
A total of 18 participants completed the questionnaire and participated in the focus group discussions. Ratings of the measured constructs were high across both groups, and suggestions were provided to improve the app. High levels of engagement suggest that measurement of UTAUT constructs with these groups (through a modified questionnaire) is feasible and acceptable.
CONCLUSIONS
The process of engaging end users in the design process provided valuable insights and will help to ensure that the DITE app continues to address the needs of both the Deaf community and BIM interpreters in Malaysia.
PubMed: 38333634
DOI: 10.1177/20552076241228432 -
Data in Brief Apr 2024
Nepali Sign Language (NSL) is used by the Nepali-speaking community in Nepal and in Indian states such as Sikkim, the hilly region of North Bengal, some parts of Uttarakhand, Meghalaya, and Assam. It consists of the International Manual Alphabet (A-Z), Nepali consonants, vowels, conjunct letters, and numbers represented in the form of one-handed fingerspelling or the Nepali manual alphabet. The standard gestures for NSL have been published by the Nepal National Federation of the Deaf & Hard of Hearing (NFDH). To learn Nepali Sign Language, the first step is to understand its alphabet set, and technology can help ease the learning process. One application area of computer vision is translating sign language gestures into text or audio to facilitate communication. This remains an open research area, and NSL translation in particular is under-explored because no dataset has been available for NSL. This paper introduces the Nepali Sign Language Dataset (NSL23), which is the first of its kind and includes the vowels and consonants of the Nepali Sign Language alphabet. The dataset consists of .mov videos by 14 volunteers who demonstrated 36 consonant signs and 13 vowel signs either in one full video or character by character. The dataset was prepared under various conditions, including normal lighting, dark lighting, prepared environments, unprepared environments, and real-world environments. The volunteers who performed the NSL gestures are classified as 9 beginners using NSL for the first time and 5 experts who have used NSL for 5 to 25 years. NSL23 contains 630 videos representing 1205 gestures in total. The dataset can be used to train machine learning models to classify the NSL alphabet and to further develop a sign language translator.
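As a starting point for working with a video dataset such as NSL23, a hypothetical loader in Python/OpenCV is sketched below. It assumes one folder per sign label containing .mov clips, which is an assumption for illustration rather than the dataset's documented layout.

from pathlib import Path
import cv2

def load_clips(root):
    """Yield (label, frames) pairs for every .mov clip under the dataset root."""
    for clip in Path(root).rglob("*.mov"):
        label = clip.parent.name                          # folder name taken as the sign label (assumed layout)
        cap = cv2.VideoCapture(str(clip))
        frames = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frames.append(cv2.resize(frame, (224, 224)))  # normalize frame size
        cap.release()
        yield label, frames

# for label, frames in load_clips("NSL23/"):
#     ...  # feed the frame sequence into a sequence model for alphabet classification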
PubMed: 38328296
DOI: 10.1016/j.dib.2024.110080 -
Frontiers in Neuroscience 2024
[This corrects the article DOI: 10.3389/fnins.2022.850245.].
PubMed: 38318466
DOI: 10.3389/fnins.2024.1354571 -
Journal of Medical Genetics May 2024
OBJECTIVES
Speech and language impairments are core features of the neurodevelopmental genetic condition Kleefstra syndrome. Communication has not been systematically examined to guide intervention recommendations. We define the speech, language and cognitive phenotypic spectrum in a large cohort of individuals with Kleefstra syndrome.
METHOD
103 individuals with Kleefstra syndrome (40 males, median age 9.5 years, range 1-43 years) with pathogenic variants (52 9q34.3 deletions, 50 intragenic variants, 1 balanced translocation) were included. Speech, language and non-verbal communication were assessed. Cognitive, health and neurodevelopmental data were obtained.
RESULTS
The cognitive spectrum ranged from average intelligence (12/79, 15%) to severe intellectual disability (12/79, 15%). Language ability also ranged from average (10/90, 11%) to severely impaired (53/90, 59%). Speech disorders occurred in 48/49 (98%) verbal individuals and even occurred alongside average language and cognition. Developmental regression occurred in 11/80 (14%) individuals across motor, language and psychosocial domains. Communication aids, such as sign and speech-generating devices, were crucial for 61/103 (59%) individuals, including those who were minimally verbal, had a speech disorder or had experienced regression.
CONCLUSIONS
The speech, language and cognitive profile of Kleefstra syndrome is broad, ranging from severe impairment to average ability. Genotype and age do not explain the phenotypic variability. Early access to communication aids may improve communication and quality of life.
Topics: Humans; Male; Intellectual Disability; Child; Chromosome Deletion; Phenotype; Adolescent; Female; Adult; Child, Preschool; Chromosomes, Human, Pair 9; Young Adult; Cognition; Infant; Craniofacial Abnormalities; Speech; Speech Disorders; Language; Intelligence; Language Disorders; Heart Defects, Congenital
PubMed: 38290825
DOI: 10.1136/jmg-2023-109702 -
Journal of Deaf Studies and Deaf... Jun 2024
The deaf population of Martha's Vineyard has fascinated scholars for more than a century since Alexander Graham Bell's research on the frequent occurrence of deafness there and since Groce's book on the island's signing community (Groce, N. E. (1985). Everyone here spoke sign language: Hereditary deafness on Martha's Vineyard. Cambridge, MA: Harvard University Press.). In Groce's work, and in that of subsequent scholars, the Vineyard signing community has often been portrayed as remote and outlying, having developed independently of mainland signing communities for roughly 133 years until 1825. We re-examine that interpretation in light of historical, demographic, and genealogical evidence covering the period 1692-2008. We argue that the Vineyard signing community began in Chilmark in 1785, 93 years later than previously thought, and that it had had a brief period of independent development, roughly 40 years, before becoming well connected, through deaf education, to the nascent New England signing community. We consider the implications of the Vineyard community's history for our understanding of how village signing communities develop.
Topics: Humans; History, 19th Century; History, 18th Century; Sign Language; History, 17th Century; Deafness; History, 21st Century; Demography; History, 20th Century; Persons With Hearing Impairments
PubMed: 38287681
DOI: 10.1093/deafed/enad058