- Journal of Deaf Studies and Deaf Education, Aug 2022
Since its publication in 2013, the Visual Communication and Sign Language (VCSL) Checklist has been widely utilized to assess the development of early American Sign Language skills of deaf children from birth to age 5. However, little research has been published using the results of VCSL assessments. Notably, no psychometric analyses have been conducted to verify the validity of the VCSL in a population whose characteristics are different from those of the small sample of native signing children from whom the published norms were created. The current paper, using data from the online version of the VCSL (VCSL:O), addresses this shortcoming. Ratings of the 114 VCSL items from 562 evaluations were analyzed using a partial-credit Rasch model. Results indicate that the underlying skill across the age range comprises an adequate single dimension. Within the items' age groupings, however, the dimensionality is not so clear. Item ordering, as well as item fit, is explored in detail. In addition, the paper reports the benefits of using the resulting Rasch scale scores, which, unlike the published scoring strategy that focuses on basal and ceiling performance, makes use of the ratings of partial credit, or emerging, skills. Strategies for revising the VCSL are recommended.
Topics: Checklist; Child; Child, Preschool; Humans; Psychometrics; Reproducibility of Results; Sign Language; Surveys and Questionnaires
PubMed: 35589092
DOI: 10.1093/deafed/enac011
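As a rough illustration of the partial-credit Rasch model referenced in this abstract, the sketch below computes category probabilities for a single three-category VCSL-style item (not yet / emerging / mastered); the threshold values and ability levels are made-up placeholders, not parameters estimated from the VCSL:O data.

```python
import numpy as np

def pcm_category_probs(theta, thresholds):
    """Partial-credit model: probability of scoring in each category
    (0..m) of one item, given person ability `theta` and item step
    thresholds `thresholds` (length m)."""
    # Cumulative sums of (theta - threshold_j); category 0 contributes 0.
    steps = np.concatenate(([0.0], np.cumsum(theta - np.asarray(thresholds))))
    expd = np.exp(steps - steps.max())   # subtract max for numerical stability
    return expd / expd.sum()

# Illustrative item with categories 0 = not yet, 1 = emerging, 2 = mastered.
# Threshold values here are invented for demonstration only.
thresholds = [-0.5, 1.2]
for ability in (-1.0, 0.0, 2.0):
    probs = pcm_category_probs(ability, thresholds)
    print(f"theta={ability:+.1f}  P(not yet, emerging, mastered) = {np.round(probs, 3)}")
```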
- Journal of Deaf Studies and Deaf Education, Mar 2024
Research has demonstrated that deaf children of deaf signing parents (DOD) are afforded developmental advantages. This can be misconstrued as indicating that no DOD children exhibit early language delays (ELDs) because of their early access to a visual language. Little research has studied this presumption. In this study, we examine 174 ratings of DOD 3- to 5-year-old children, for whom signing in the home was indicated, using archival data from the online database of the Visual Communication and Sign Language Checklist. Our goals were to (1) examine the incidence of ELDs in a cohort of DOD children; (2) compare alternative scaling strategies for identifying ELD children; (3) explore patterns among behavioral ratings with a view toward developing a greater understanding of the types of language behaviors that may lie at the root of language delays; and (4) suggest recommendations for parents and professionals working with language-delayed DOD children. The results indicated that a significant number of ratings suggested ELDs, with a subset significantly delayed. These children likely require further evaluation. Among the less delayed group, ASL skills, rather than communication or cognition, were seen as the major concern, suggesting that even DOD children may require support developing linguistically accurate ASL. Overall, these findings support the need for early and ongoing evaluation of visual language skills in young DOD children.
Topics: Humans; Child, Preschool; Sign Language; Deafness; Language; Parents; Cognition
PubMed: 38079616
DOI: 10.1093/deafed/enad059
- Sensors (Basel, Switzerland), Sep 2023
Historically, individuals with hearing impairments have faced neglect, lacking the necessary tools to facilitate effective communication. However, advancements in modern technology have paved the way for the development of various tools and software aimed at improving the quality of life for hearing-disabled individuals. This research paper presents a comprehensive study employing five distinct deep learning models to recognize hand gestures for the American Sign Language (ASL) alphabet. The primary objective of this study was to leverage contemporary technology to bridge the communication gap between hearing-impaired individuals and individuals with no hearing impairment. The models utilized in this research (AlexNet, ConvNeXt, EfficientNet, ResNet-50, and VisionTransformer) were trained and tested using an extensive dataset comprising over 87,000 images of ASL alphabet hand gestures. Numerous experiments were conducted, involving modifications to the architectural design parameters of the models to obtain maximum recognition accuracy. The experimental results of our study revealed that ResNet-50 achieved an exceptional accuracy rate of 99.98%, the highest among all models. EfficientNet attained an accuracy rate of 99.95%, ConvNeXt achieved 99.51% accuracy, AlexNet attained 99.50% accuracy, while VisionTransformer yielded the lowest accuracy of 88.59%.
Topics: Humans; United States; Sign Language; Deep Learning; Quality of Life; Gestures; Technology
PubMed: 37766026
DOI: 10.3390/s23187970
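For readers curious what a transfer-learning setup of this kind typically looks like, here is a minimal sketch that fine-tunes a torchvision ResNet-50 on a folder of ASL alphabet images; the directory layout, hyperparameters, and single training epoch shown are assumptions for illustration and do not reproduce the paper's configuration.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed layout: asl_alphabet/train/<class_name>/*.jpg, one folder per sign.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("asl_alphabet/train", transform=transform)
loader = DataLoader(train_set, batch_size=64, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # e.g. 29 ASL classes
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:          # one epoch shown; repeat as needed
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```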
- Sensors (Basel, Switzerland), Dec 2022
This paper presents the development and implementation of an application that recognizes American Sign Language signs with the use of deep learning algorithms based on convolutional neural network architectures. The project implementation includes the development of a training set, the preparation of a module that converts photos to a form readable by the artificial neural network, the selection of the appropriate neural network architecture and the development of the model. The neural network undergoes a learning process, and its results are verified accordingly. An internet application that allows recognition of sign language based on a sign from any photo taken by the user is implemented, and its results are analyzed. The network effectiveness ratio reaches 99% for the training set. Nevertheless, conclusions and recommendations are formulated to improve the operation of the application.
Topics: Humans; Machine Learning; Gestures; Sign Language; Neural Networks, Computer; Algorithms
PubMed: 36560231
DOI: 10.3390/s22249864
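A minimal sketch of the photo-to-network pipeline such an internet application implies: convert an uploaded photo into a normalized tensor and return the predicted sign from a web endpoint. The Flask framework, the exported model file asl_cnn.pt, the 64x64 input size, and the A-Z label set are all assumptions, not details taken from the paper.

```python
import io
import numpy as np
import torch
from PIL import Image
from flask import Flask, request, jsonify

app = Flask(__name__)
model = torch.jit.load("asl_cnn.pt").eval()              # hypothetical exported CNN
CLASSES = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # assumed label set

def photo_to_tensor(raw_bytes, size=64):
    """Convert an uploaded photo into the normalized tensor the CNN expects."""
    img = Image.open(io.BytesIO(raw_bytes)).convert("RGB").resize((size, size))
    arr = np.asarray(img, dtype=np.float32) / 255.0            # scale to [0, 1]
    return torch.from_numpy(arr).permute(2, 0, 1).unsqueeze(0)  # NCHW batch of 1

@app.route("/predict", methods=["POST"])
def predict():
    tensor = photo_to_tensor(request.files["photo"].read())
    with torch.no_grad():
        probs = torch.softmax(model(tensor), dim=1)[0]
    return jsonify({"sign": CLASSES[int(probs.argmax())],
                    "confidence": float(probs.max())})
```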
- Computational Intelligence and Neuroscience, 2021
The deaf-mute population often feels helpless when they are not understood by others, and vice versa. This is a significant humanitarian problem that needs a localised solution. To solve this problem, this study implements a convolutional neural network (CNN) with a convolutional block attention module (CBAM) to recognise Malaysian Sign Language (MSL) from images. Two different experiments were conducted on MSL signs using CBAM-2DResNet (2-Dimensional Residual Network), implementing the "Within Blocks" and "Before Classifier" methods. Various metrics such as accuracy, loss, precision, recall, F1-score, confusion matrix, and training time were recorded to evaluate the models' efficiency. The experimental results showed that CBAM-ResNet models achieved good performance on MSL sign recognition tasks, with accuracy rates of over 90% and little variation. The CBAM-ResNet "Before Classifier" models are more efficient than the "Within Blocks" CBAM-ResNet models. Thus, the best-trained CBAM-2DResNet model was chosen to develop a real-time sign recognition system for translating from sign language to text and from text to sign language, easing communication between deaf-mute and hearing people. All experimental results indicated that the "Before Classifier" CBAM-ResNet models are more efficient at recognising MSL and are worth pursuing in future research.
Topics: Attention; Computer Systems; Humans; Neural Networks, Computer; Sign Language; Translations
PubMed: 34925497
DOI: 10.1155/2021/9023010
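Since the paper builds on CBAM, here is a compact sketch of a generic Convolutional Block Attention Module in PyTorch (channel attention followed by spatial attention); the reduction ratio, kernel size, and the closing comment about "Before Classifier" placement are illustrative, and the exact CBAM-2DResNet integration used in the paper is not reproduced here.

```python
import torch
from torch import nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention followed by
    spatial attention, applied to a feature map of shape (N, C, H, W)."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(                 # shared MLP for channel attention
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        n, c, _, _ = x.shape
        # Channel attention: MLP over global average- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(n, c, 1, 1)
        # Spatial attention: conv over channel-wise average and max maps.
        attn = torch.cat([x.mean(dim=1, keepdim=True),
                          x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(attn))

# A "Before Classifier" style placement would apply CBAM once to the backbone's
# final feature map, just before pooling and the classification head.
```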
- PLoS One, 2021
Sign Language (SL) is a continuous and complex stream of multiple body movement features. That raises the challenging issue of providing efficient computational models for the description and analysis of these movements. In the present paper, we used Principal Component Analysis (PCA) to decompose SL motion into elementary movements called principal movements (PMs). PCA was applied to the upper-body motion capture data of six different signers freely producing discourses in French Sign Language. Common PMs were extracted from the whole dataset containing all signers, while individual PMs were extracted separately from the data of individual signers. This study provides three main findings: (1) although the data were not synchronized in time across signers and discourses, the first eight common PMs contained 94.6% of the variance of the movements; (2) the number of PMs that represented 94.6% of the variance was nearly the same for individual as for common PMs; (3) the PM subspaces were highly similar across signers. These results suggest that upper-body motion in unconstrained continuous SL discourses can be described through the dynamic combination of a reduced number of elementary movements. This opens up promising perspectives toward providing efficient automatic SL processing tools based on heavy mocap datasets, in particular for automatic recognition and generation.
Topics: Adult; Biomechanical Phenomena; Female; Humans; Male; Movement; Principal Component Analysis; Sign Language
PubMed: 34714862
DOI: 10.1371/journal.pone.0259464
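To make the principal-movement idea concrete, the sketch below runs PCA over a frames-by-coordinates motion matrix and reports how many components are needed to reach roughly 94.6% of the variance; the marker count and the random placeholder data are assumptions standing in for real mocap recordings.

```python
import numpy as np
from sklearn.decomposition import PCA

# Assumed mocap layout: frames x (3 * n_markers) matrix of upper-body marker
# coordinates, concatenated across discourses and signers.
rng = np.random.default_rng(0)
motion = rng.normal(size=(5000, 3 * 20))   # placeholder data, 20 markers

pca = PCA()
scores = pca.fit_transform(motion)         # PCA centers each coordinate internally

cum_var = np.cumsum(pca.explained_variance_ratio_)
n_pms = int(np.searchsorted(cum_var, 0.946) + 1)   # PMs needed for ~94.6% variance
print(f"{n_pms} principal movements explain {cum_var[n_pms - 1]:.1%} of the variance")

# Each principal movement (PM) is a whole-body displacement pattern:
# pca.components_[k] gives its marker weights, scores[:, k] its activation over time.
```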
- Community Dental Health, Feb 2024
Meta-Analysis
Sign language based educational interventions vs. other educational interventions to improve the oral health of hearing-impaired individuals: A systematic review and meta-analysis.
OBJECTIVE
Individuals with special needs requiring special care are more vulnerable to oral health problems. Sign language is a communication medium and language of instruction for individuals with hearing impairments. The purpose of this systematic review and meta-analysis was to assess the effectiveness of sign language-based educational interventions compared to other educational interventions in improving the oral health of hearing-impaired individuals.
METHODS
PubMed, Scopus, Embase, and Cochrane Central Register of Controlled Trials databases were searched without any restriction on the publication date. Analytical and experimental studies that evaluated and compared the effectiveness of sign language with other educational intervention groups, such as videos, posters, etc., were included.
RESULTS
Initially, 5568 records were identified. Three relevant publications from India were eligible and included in the systematic review and meta-analysis. Differences were reported in favour of sign language over other interventions concerning plaque status, gingival health, and oral hygiene status.
CONCLUSION
Sign language-based interventions were found to be effective. However, further studies in different locations and populations are required to support their effectiveness.
Topics: Humans; Dental Plaque; Hearing; Oral Health; Oral Hygiene; Sign Language; Deafness
PubMed: 37988657
DOI: 10.1922/CDH_00109Bhadauria06
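As a sketch of the pooling step behind a meta-analysis like this one, the code below implements standard DerSimonian-Laird random-effects pooling; the three effect sizes and variances are invented for illustration and are not the values reported in the review.

```python
import numpy as np

def random_effects_meta(effects, variances):
    """DerSimonian-Laird random-effects pooling of study effect sizes."""
    effects, variances = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / variances                                    # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)                 # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                          # between-study variance
    w_star = 1.0 / (variances + tau2)
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Illustrative standardized mean differences from three hypothetical trials.
pooled, ci = random_effects_meta([-0.6, -0.4, -0.8], [0.05, 0.08, 0.06])
print(f"Pooled SMD = {pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```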
- Neuropsychologia, Oct 2021
It is currently unclear to what degree language control, which minimizes non-target language interference and increases the probability of selecting target-language words, is similar for sign-speech (bimodal) bilinguals and spoken language (unimodal) bilinguals. To further investigate the nature of language control processes in bimodal bilinguals, we conducted the first event-related potential (ERP) language switching study with hearing American Sign Language (ASL)-English bilinguals. The results showed a pattern that has not been observed in any unimodal language switching study: a switch-related positivity over anterior sites and a switch-related negativity over posterior sites during ASL production in both early and late time windows. No such pattern was found during English production. We interpret these results as evidence that bimodal bilinguals uniquely engage language control at the level of output modalities.
Topics: Evoked Potentials; Humans; Language; Multilingualism; Sign Language; Speech
PubMed: 34487737
DOI: 10.1016/j.neuropsychologia.2021.108019
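A minimal sketch of the kind of ERP contrast the abstract describes: averaging switch and repeat trials and comparing anterior versus posterior electrode clusters in early and late time windows. The epoch dimensions, sampling rate, channel groupings, and time windows below are placeholder assumptions, not the study's actual montage or analysis parameters.

```python
import numpy as np

# Assumed shapes: epochs (n_trials, n_channels, n_times), with labels marking
# ASL switch vs. repeat trials. All values are placeholders.
rng = np.random.default_rng(1)
fs, t0 = 500, -0.2                        # sampling rate (Hz), epoch start (s)
epochs = rng.normal(size=(200, 32, 450))  # 200 trials, 32 channels, 0.9 s epochs
is_switch = rng.integers(0, 2, size=200).astype(bool)
anterior, posterior = np.arange(0, 8), np.arange(24, 32)   # assumed channel groups

def mean_amplitude(data, chans, start_s, end_s):
    """Mean ERP amplitude over a channel cluster and time window."""
    times = t0 + np.arange(data.shape[-1]) / fs
    window = (times >= start_s) & (times < end_s)
    return data[:, chans][:, :, window].mean()

for name, chans in [("anterior", anterior), ("posterior", posterior)]:
    for win in [(0.2, 0.35), (0.45, 0.65)]:                # early and late windows
        diff = (mean_amplitude(epochs[is_switch], chans, *win)
                - mean_amplitude(epochs[~is_switch], chans, *win))
        print(f"{name} {win}: switch minus repeat = {diff:+.3f} (a.u.)")
```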
- American Annals of the Deaf, Feb 1980
Topics: Child; Communication Methods, Total; Education; Humans; Manual Communication; Sign Language
PubMed: 7377051
DOI: No ID Found
- ASHA, 1991
Topics: Humans; Sign Language; United States
PubMed: 1878001
DOI: No ID Found