Public Health Reports (Washington, D.C.... 2023
OBJECTIVE
Interpreting during the COVID-19 pandemic caused stress and adverse mental health among sign language interpreters. The objective of this study was to summarize the pandemic-related work experiences of sign language interpreters and interpreting administrators upon transitioning from on-site to remote work.
METHODS
From March through August 2021, we conducted focus groups with 22 sign language interpreters in 5 settings, 1 focus group for each setting: staff, educational, community/freelance, video remote interpreting, and video relay services. We also conducted 5 individual interviews with interpreting administrators or individuals in positions of administrative leadership in each represented setting. The 22 interpreters had a mean (SD) age of 43.4 (9.8) years, 18 were female, 17 were White, all identified as hearing, and all worked a mean (SD) of 30.6 (11.6) hours per week in remote interpreting. We asked participants about the positive and negative consequences of transitioning from on-site to remote at-home interpreting. We established a thematic framework by way of qualitative description for data analysis.
RESULTS
We found considerable overlap across positive and negative consequences identified by interpreters and interpreting administrators. Positive consequences of transitioning from on-site to remote at-home interpreting were realized across 5 overarching topic areas: organizational support, new opportunities, well-being, connections/relationships, and scheduling. Negative consequences emerged across 4 overarching topic areas: technology, financial aspects, availability of the interpreter workforce, and concerns about the occupational health of interpreters.
CONCLUSIONS
The positive and negative consequences shared by interpreters and interpreting administrators provide foundational knowledge upon which to create recommendations for the anticipated sustainment of some remote interpreting practice in a manner that protects and promotes occupational health.
Topics: Humans; Female; Adult; Male; Communication Barriers; Pandemics; Sign Language; COVID-19; Allied Health Personnel
PubMed: 37243519
DOI: 10.1177/00333549231173941 -
The Journal of Neuroscience : the... Oct 2012 (Review)
Review
Theoretical advances in language research and the availability of increasingly high-resolution experimental techniques in the cognitive neurosciences are profoundly changing how we investigate and conceive of the neural basis of speech and language processing. Recent work closely aligns language research with issues at the core of systems neuroscience, ranging from neurophysiological and neuroanatomic characterizations to questions about neural coding. Here we highlight, across different aspects of language processing (perception, production, sign language, meaning construction), new insights and approaches to the neurobiology of language, aiming to describe promising new areas of investigation in which the neurosciences intersect with linguistic research more closely than before. This paper summarizes in brief some of the issues that constitute the background for talks presented in a symposium at the Annual Meeting of the Society for Neuroscience. It is not a comprehensive review of any of the issues that are discussed in the symposium.
Topics: Animals; Brain; Humans; Language; Neural Pathways; Sign Language; Speech; Speech Perception
PubMed: 23055482
DOI: 10.1523/JNEUROSCI.3244-12.2012 -
Sensors (Basel, Switzerland) Jul 2023
Finding ways to enable seamless communication between deaf and able-bodied individuals has been a challenging and pressing issue. This paper proposes a solution to this problem by designing a low-cost data glove that utilizes multiple inertial sensors with the purpose of achieving efficient and accurate sign language recognition. In this study, four machine learning models (decision tree (DT), support vector machine (SVM), K-nearest neighbor (KNN), and random forest (RF)) were employed to recognize 20 different types of dynamic sign language data used by deaf individuals. Additionally, a proposed attention-based bidirectional long short-term memory neural network (Attention-BiLSTM) was utilized in the process. Furthermore, this study verifies the impact of the number and position of data glove nodes on the accuracy of recognizing complex dynamic sign language. Finally, the proposed method is compared with existing state-of-the-art algorithms using nine public datasets. The results indicate that the Attention-BiLSTM and RF algorithms achieve the highest performance in recognizing the twenty dynamic sign language gestures, with accuracies of 98.85% and 97.58%, respectively. This provides evidence for the feasibility of our proposed data glove and recognition methods. This study may serve as a valuable reference for the development of wearable sign language recognition devices and promote easier communication between deaf and able-bodied individuals.
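As a rough illustration of the kind of model the Sensors study describes, the sketch below implements an attention-based bidirectional LSTM classifier over glove sensor sequences in PyTorch. The framework, layer sizes, sampling rate, and feature count are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of an attention-based BiLSTM classifier for
# glove IMU sequences (framework and layer sizes are illustrative).
import torch
import torch.nn as nn


class AttentionBiLSTM(nn.Module):
    def __init__(self, n_features=36, hidden=64, n_classes=20):
        super().__init__()
        # Bidirectional LSTM over the time dimension of the sensor stream.
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        # One attention score per time step, computed from the LSTM outputs.
        self.attn = nn.Linear(2 * hidden, 1)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                            # x: (batch, time, features)
        h, _ = self.lstm(x)                          # h: (batch, time, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1) # (batch, time, 1)
        context = (weights * h).sum(dim=1)           # attention-weighted pooling
        return self.classifier(context)              # (batch, n_classes)


# Example: 8 sequences of 100 frames, each frame carrying 36 assumed
# channels (e.g., 6 glove nodes x 6 inertial channels).
model = AttentionBiLSTM()
logits = model(torch.randn(8, 100, 36))
print(logits.shape)                                  # torch.Size([8, 20])
```

The attention layer here simply learns one weight per time step and pools the BiLSTM outputs with those weights before classification; the paper's exact attention formulation may differ.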
Topics: Humans; Sign Language; Speech; Algorithms; Hearing; Wearable Electronic Devices
PubMed: 37571476
DOI: 10.3390/s23156693 -
Scientific Reports Dec 2022
To perform their daily activities, a person is required to communicate with others. This can be a major obstacle for the deaf population of the world, who communicate using sign languages (SL). Pakistani Sign Language (PSL) is used by more than 250,000 deaf Pakistanis. Developing an SL recognition system would greatly facilitate communication for these people. This study aimed to collect data on static and dynamic PSL alphabets and to develop a vision-based system for their recognition using Bag-of-Words (BoW) and Support Vector Machine (SVM) techniques. A total of 5120 images for 36 static PSL alphabet signs and 353 videos with 45,224 frames for 3 dynamic PSL alphabet signs were collected from 10 native signers of PSL. The developed system used the collected data as input, resized the data to various scales, and converted the RGB images into grayscale. The resized grayscale images were segmented using a thresholding technique, and features were extracted using Speeded Up Robust Features (SURF). The obtained SURF descriptors were clustered using K-means clustering. A BoW was obtained by computing the Euclidean distance between the SURF descriptors and the clustered data. The codebooks were divided into training and testing sets using fivefold cross-validation. The highest overall classification accuracy for static PSL signs was 97.80% at 750 × 750 image dimensions and 500 Bags. For dynamic PSL signs, an accuracy of 96.53% was obtained at 480 × 270 video resolution and 200 Bags.
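A minimal sketch of the bag-of-visual-words pipeline described above (local descriptors, a K-means codebook, histogram features, and an SVM) might look as follows. ORB descriptors stand in for SURF, since SURF requires a nonfree opencv-contrib build, and the image data, bag count, and classifier settings are placeholders rather than the study's configuration.

```python
# Rough sketch of a bag-of-visual-words pipeline: grayscale images ->
# local descriptors -> K-means codebook -> normalized histograms -> SVM.
# ORB is used here as a stand-in for the SURF descriptors in the paper.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC


def extract_descriptors(gray_images):
    orb = cv2.ORB_create()
    per_image = []
    for img in gray_images:
        _, desc = orb.detectAndCompute(img, None)
        per_image.append(desc if desc is not None else np.empty((0, 32)))
    return per_image


def bow_histograms(per_image_desc, n_bags=500):
    # Build the codebook from all descriptors pooled across images;
    # KMeans.predict assigns each descriptor to its nearest (Euclidean) word.
    all_desc = np.vstack([d for d in per_image_desc if len(d)]).astype(np.float32)
    codebook = KMeans(n_clusters=n_bags, n_init=10).fit(all_desc)
    hists = np.zeros((len(per_image_desc), n_bags))
    for i, desc in enumerate(per_image_desc):
        if len(desc):
            words = codebook.predict(desc.astype(np.float32))
            hists[i] = np.bincount(words, minlength=n_bags)
    return hists / np.maximum(hists.sum(axis=1, keepdims=True), 1)


# train_images / train_labels would be the resized grayscale sign images;
# the paper evaluated with fivefold cross-validation, so this line is only
# the core of a single fold.
# clf = SVC(kernel="linear").fit(
#     bow_histograms(extract_descriptors(train_images)), train_labels)
```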
Topics: Humans; Support Vector Machine; Sign Language; Cluster Analysis
PubMed: 36494382
DOI: 10.1038/s41598-022-15864-6 -
PloS One 2023
Sign language (SL) has strong structural features. Various gestures and the complex trajectories of hand movements bring challenges to sign language recognition (SLR). Based on the inherent correlation between the gesture and the trajectory of an SL action, SLR is divided into gesture-based recognition and gesture-related movement trajectory recognition. One hundred and twenty commonly used Chinese SL words, involving 9 gestures and 8 movement trajectories, are selected as research and test objects. A method based on the amplitude of the surface electromyography (sEMG) and acceleration signals is used for vocabulary segmentation. A multi-sensor decision fusion method based on a coupled hidden Markov model is used to recognize the SL vocabulary, and the average recognition rate is 90.41%. Experiments show that fusing the sEMG signal with motion information is practical for SLR.
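The amplitude-based vocabulary segmentation step could be sketched roughly as below; the smoothing window, sampling rate, and threshold are assumed values, and the coupled-hidden-Markov-model fusion stage that performs the actual recognition is not shown.

```python
# Illustrative amplitude-based segmentation: rectify a channel, smooth the
# envelope with a moving average, and keep intervals where the envelope
# stays above a threshold. All parameters are assumptions for illustration.
import numpy as np


def segment_active_regions(signal, fs=1000, win_ms=200, threshold=0.1):
    """Return (start, end) sample indices of above-threshold activity."""
    w = int(fs * win_ms / 1000)
    envelope = np.convolve(np.abs(signal), np.ones(w) / w, mode="same")
    active = envelope > threshold * envelope.max()
    edges = np.diff(active.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if active[0]:
        starts = np.r_[0, starts]
    if active[-1]:
        ends = np.r_[ends, signal.size]
    return list(zip(starts, ends))


# Example with a synthetic burst embedded in low-level noise.
t = np.arange(0, 3, 1 / 1000)
emg = 0.02 * np.random.randn(t.size)
emg[1000:2000] += 0.5 * np.sin(2 * np.pi * 80 * t[1000:2000])
print(segment_active_regions(emg))   # roughly one segment near samples 1000-2000
```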
Topics: Humans; Electromyography; Sign Language; Pattern Recognition, Automated; Gestures; Hand; China; Algorithms
PubMed: 38060609
DOI: 10.1371/journal.pone.0295398 -
Scientific Reports Oct 2023
Sign language recognition is a breakthrough for communication in the deaf-mute community and has been a critical research topic for years. Although some previous studies have successfully recognized sign language, they require many costly instruments, including sensors, devices, and high-end processing power. However, such drawbacks can be easily overcome by employing artificial intelligence-based techniques. Since, in this modern era of advanced mobile technology, using a camera to take video or images is much easier, this study demonstrates a cost-effective technique to detect American Sign Language (ASL) using an image dataset. Here, the "Finger Spelling, A" dataset has been used, covering 24 letters (j and z are excluded because they involve motion). The main reason for using this dataset is that its images have complex backgrounds with different environments and scene colors. Two layers of image processing have been used: in the first layer, images are processed as a whole for training, and in the second layer, the hand landmarks are extracted. A multi-headed convolutional neural network (CNN) model has been proposed and tested with 30% of the dataset to train these two layers. To avoid the overfitting problem, data augmentation and dynamic learning rate reduction have been used. With the proposed model, a test accuracy of 98.981% has been achieved. It is expected that this study may help to develop an efficient human-machine communication system for the deaf-mute community.
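A hypothetical sketch of a two-branch ("multi-headed") network of the kind described, with one head for the whole image and one for extracted hand landmarks, is shown below in PyTorch. The layer sizes, the 21-landmark input, and the fusion scheme are assumptions for illustration, not the paper's architecture.

```python
# Hypothetical two-branch ASL letter classifier: one head convolves the raw
# image, the other embeds 21 extracted hand landmarks (x, y pairs), and the
# fused features predict the 24 static letters. Sizes are illustrative.
import torch
import torch.nn as nn


class TwoBranchASL(nn.Module):
    def __init__(self, n_classes=24):
        super().__init__()
        self.image_head = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),               # -> (batch, 32)
        )
        self.landmark_head = nn.Sequential(
            nn.Linear(21 * 2, 64), nn.ReLU(), nn.Linear(64, 32), # -> (batch, 32)
        )
        self.classifier = nn.Linear(32 + 32, n_classes)

    def forward(self, image, landmarks):
        fused = torch.cat([self.image_head(image),
                           self.landmark_head(landmarks)], dim=1)
        return self.classifier(fused)


# Example: 4 RGB images of 64x64 pixels plus 4 flattened landmark vectors.
model = TwoBranchASL()
out = model(torch.randn(4, 3, 64, 64), torch.randn(4, 42))
print(out.shape)        # torch.Size([4, 24])
```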
Topics: Humans; Artificial Intelligence; Sign Language; Neural Networks, Computer; Hand; Image Processing, Computer-Assisted
PubMed: 37813932
DOI: 10.1038/s41598-023-43852-x -
Cognition Jul 2022
If language has evolved for communication, languages should be structured such that they maximize the efficiency of processing. What is efficient for communication in the visual-gestural modality is different from the auditory-oral modality, and we ask here whether sign languages have adapted to the affordances and constraints of the signed modality. During sign perception, perceivers look almost exclusively at the lower face, rarely looking down at the hands. This means that signs articulated far from the lower face must be perceived through peripheral vision, which has less acuity than central vision. We tested the hypothesis that signs that are more predictable (high frequency signs, signs with common handshapes) can be produced further from the face because precise visual resolution is not necessary for recognition. Using pose estimation algorithms, we examined the structure of over 2000 American Sign Language lexical signs to identify whether lexical frequency and handshape probability affect the position of the wrist in 2D space. We found that frequent signs with rare handshapes tended to occur closer to the signer's face than frequent signs with common handshapes, and that frequent signs are generally more likely to be articulated further from the face than infrequent signs. Together these results provide empirical support for anecdotal assertions that the phonological structure of sign language is shaped by the properties of the human visual and motor systems.
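The core measurement in this study, relating a sign's articulation distance from the lower face to its lexical frequency, could be approximated as in the toy sketch below. The keypoints, the normalization by shoulder span, and the Pearson correlation are assumptions about how such an analysis might be set up; the data are synthetic and merely constructed to echo the reported direction of the effect.

```python
# Toy sketch: correlate a normalized wrist-to-face distance (as would be
# obtained from pose-estimation keypoints) with log lexical frequency.
# All values are synthetic; nothing here reproduces the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_signs = 2000

# Hypothetical per-sign 2D keypoints in image coordinates (y grows downward).
chin_y = rng.normal(300, 5, n_signs)
shoulder_span = rng.normal(200, 10, n_signs)      # used to normalize body scale
wrist_y = chin_y + rng.normal(120, 40, n_signs)   # wrist sits below the chin

# Normalized distance of the wrist below the lower face.
wrist_drop = (wrist_y - chin_y) / shoulder_span

# Synthetic log frequencies, constructed so that more frequent signs are
# articulated farther from the face (the direction the abstract reports).
log_freq = 2.0 * wrist_drop + rng.normal(0, 0.5, n_signs)

r, p = stats.pearsonr(log_freq, wrist_drop)
print(f"r = {r:.2f}, p = {p:.3g}")
```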
Topics: Gestures; Humans; Language; Recognition, Psychology; Sign Language; Visual Perception
PubMed: 35192994
DOI: 10.1016/j.cognition.2022.105040 -
Psychonomic Bulletin & Review Feb 2017 (Review)
Review
Why, in all cultures in which hearing is possible, has language become the province of speech and the oral modality? I address this question by widening the lens with which we look at language to include the manual modality. I suggest that human communication is most effective when it makes use of two types of formats: a discrete and segmented code, produced simultaneously along with an analog and mimetic code. The segmented code is supported by both the oral and the manual modalities. However, the mimetic code is more easily handled by the manual modality. We might then expect mimetic encoding to be done preferentially in the manual modality (gesture), leaving segmented encoding to the oral modality (speech). This argument rests on two assumptions: (1) The manual modality is as good at segmented encoding as the oral modality; sign languages, established and idiosyncratic, provide evidence for this assumption. (2) Mimetic encoding is important to human communication and best handled by the manual modality; co-speech gesture provides evidence for this assumption. By including the manual modality in two contexts (when it takes on the primary function of communication, as in sign language, and when it takes on a complementary communicative function, as in gesture) in our analysis of language, we gain new perspectives on the origins and continuing development of language.
Topics: Gestures; Humans; Language; Sign Language
PubMed: 27368641
DOI: 10.3758/s13423-016-1074-x -
Journal of Speech, Language, and... Apr 2023
PURPOSE
The purpose of this study is to determine whether and how learning American Sign Language (ASL) is associated with spoken English skills in a sample of ASL-English bilingual deaf and hard of hearing (DHH) children.
METHOD
This cross-sectional study of vocabulary size included 56 DHH children between 8 and 60 months of age who were learning both ASL and spoken English and had hearing parents. English and ASL vocabulary were independently assessed via parent report checklists.
RESULTS
ASL vocabulary size positively correlated with spoken English vocabulary size. Spoken English vocabulary sizes in the ASL-English bilingual DHH children in the present sample were comparable to those in previous reports of monolingual DHH children who were learning only English. ASL-English bilingual DHH children had total vocabularies (combining ASL and English) that were equivalent to same-age hearing monolingual children. Children with large ASL vocabularies were more likely to have spoken English vocabularies in the average range based on norms for hearing monolingual children.
CONCLUSIONS
Contrary to predictions often cited in the literature, acquisition of sign language does not harm spoken vocabulary acquisition. This retrospective, correlational study cannot determine whether there is a causal relationship between sign language and spoken language vocabulary acquisition, but if a causal relationship exists, the evidence here suggests that the effect would be positive. Bilingual DHH children have age-expected vocabularies when considering the entirety of their language skills. We found no evidence to support recommendations that families with DHH children avoid learning sign language. Rather, our findings show that children with early ASL exposure can develop age-appropriate vocabulary skills in both ASL and spoken English.
Topics: Child; Humans; Sign Language; Retrospective Studies; Cross-Sectional Studies; Deafness; Language; Vocabulary; Language Development
PubMed: 36972338
DOI: 10.1044/2022_JSLHR-22-00505 -
Acta Psychologica Sep 2022
In bilingual word recognition, cross-language activation has been found in unimodal bilinguals (e.g., Chinese-English bilinguals) and bimodal bilinguals (e.g., American Sign Language-English bilinguals). However, it remains unclear how signs' phonological parameters, spoken words' orthographic and phonological representations, and language proficiency affect cross-language activation in bimodal bilinguals. To address these issues, we recruited deaf Chinese Sign Language (CSL)-Chinese bimodal bilinguals as participants. We conducted two experiments with the implicit priming paradigm and the semantic relatedness decision task. Experiment 1 first showed cross-language activation from Chinese to CSL, and the CSL words' phonological parameter affected the cross-language activation. Experiment 2 further revealed inverse cross-language activation from CSL to Chinese. The Chinese words' orthographic and phonological representations played a similar role in the cross-language activation. Moreover, a comparison between Experiments 1 and 2 indicated that language proficiency influenced cross-language activation. The findings are discussed in relation to the Bilingual Interactive Activation Plus (BIA+) model, the deaf BIA+ model, and the Bilingual Language Interaction Network for Comprehension of Speech (BLINCS) model.
Topics: China; Humans; Language; Multilingualism; Semantics; Sign Language
PubMed: 35933798
DOI: 10.1016/j.actpsy.2022.103693