Journal of Deaf Studies and Deaf... Apr 2016
Deaf individuals have been found to score lower than hearing individuals across a variety of memory tasks involving both verbal and nonverbal stimuli, particularly those requiring retention of serial order. Deaf individuals who are native signers, meanwhile, have been found to score higher on visual-spatial memory tasks than on verbal-sequential tasks and higher on some visual-spatial tasks than hearing nonsigners. However, hearing status and preferred language modality (signed or spoken) frequently are confounded in such studies. That situation is resolved in the present study by including deaf students who use spoken language and sign language interpreting students (hearing signers) as well as deaf signers and hearing nonsigners. Three complex memory span tasks revealed overall advantages for hearing signers and nonsigners over both deaf signers and deaf nonsigners on 2 tasks involving memory for verbal stimuli (letters). There were no differences among the groups on the task involving visual-spatial stimuli. The results are consistent with and extend recent findings concerning the effects of hearing status and language on memory and are discussed in terms of language modality, hearing status, and cognitive abilities among deaf and hearing individuals.
Topics: Hearing Loss; Humans; Memory, Short-Term; Persons With Hearing Impairments; Sign Language
PubMed: 26755684
DOI: 10.1093/deafed/env070
Cognition Oct 2021
The link between language and cognition is unique to our species and emerges early in infancy. Here, we provide the first evidence that this precocious language-cognition link is not limited to spoken language, but is instead sufficiently broad to include sign language, a language presented in the visual modality. Four- to six-month-old hearing infants, never before exposed to sign language, were familiarized to a series of category exemplars, each presented by a woman who either signed in American Sign Language (ASL) while pointing and gazing toward the objects, or pointed and gazed without language (control). At test, infants viewed two images: one, a new member of the now-familiar category; and the other, a member of an entirely new category. Four-month-old infants who observed ASL distinguished between the two test objects, indicating that they had successfully formed the object category; they were as successful as age-mates who listened to their native (spoken) language. Moreover, it was specifically the linguistic elements of sign language that drove this facilitative effect: infants in the control condition, who observed the woman only pointing and gazing failed to form object categories. Finally, the cognitive advantages of observing ASL quickly narrow in hearing infants: by 5- to 6-months, watching ASL no longer supports categorization, although listening to their native spoken language continues to do so. Together, these findings illuminate the breadth of infants' early link between language and cognition and offer insight into how it unfolds.
Topics: Auditory Perception; Female; Hearing; Humans; Infant; Language; Language Development; Sign Language
PubMed: 34273677
DOI: 10.1016/j.cognition.2021.104845
Kulak Burun Bogaz Ihtisas Dergisi : KBB... 2012
Sign language is the natural language of prelingually deaf people, particularly those without hearing-speech rehabilitation. Otorhinolaryngologists, who regard health as complete physical, mental and psychosocial well-being, aim at hearing by diagnosing deafness as a deviance from normality. However, an approach that fails to meet the mental and social well-being of the individual contradicts that very definition of health. This article investigates the effects of a hearing-speech target that ignores sign language in the Turkish population, examines its consistency with history through statistical data, scientific publications and historical documents, and supports a critical perspective on this issue. The results showed that at most 50% of deaf people benefited from hearing-speech programs during the 60 years before hearing screening programs, yet education systems that include sign language were not created. In light of these data, it is clear that an approach that ignores sign language, particularly before the development of screening programs, is not reasonable. In addition, considering that sign language has been part of Anatolian history from the Hittites to the Ottomans, it remains an open question why evaluation, habilitation and education systems that exclude sign language are still the only choice for deaf individuals in Turkey. Despite legislative amendments in the last 6-7 years, the primary cause of their failure to come into force is probably an inadequate understanding of the issue's content and importance, as well as limited effort by academics and responsible politicians to offer solutions. Within this context, this paper aims to have a positive effect on this issue by offering a review for medical staff, particularly otorhinolaryngologists and audiologists.
Topics: Correction of Hearing Impairment; Education of Hearing Disabled; History, 15th Century; History, 16th Century; History, 17th Century; History, 18th Century; History, 19th Century; History, 20th Century; History, 21st Century; History, Ancient; History, Medieval; Humans; Persons With Hearing Impairments; Sign Language; Turkey
PubMed: 22548262
DOI: 10.5606/kbbihtisas.2012.013
Sensors (Basel, Switzerland) May 2022
Facial motion analysis is a research field with many practical applications, and has been strongly developed in the last years. However, most effort has been focused on the recognition of basic facial expressions of emotion and neglects the analysis of facial motions related to non-verbal communication signals. This paper focuses on the classification of facial expressions that are of the utmost importance in sign languages (Grammatical Facial Expressions) but also present in expressive spoken language. We have collected a dataset of Spanish Sign Language sentences and extracted the intervals for three types of Grammatical Facial Expressions: negation, closed queries and open queries. A study of several deep learning models using different input features on the collected dataset (LSE_GFE) and an external dataset (BUHMAP) shows that GFEs can be learned reliably with Graph Convolutional Networks simply fed with face landmarks.
Topics: Emotions; Face; Facial Expression; Humans; Recognition, Psychology; Sign Language
PubMed: 35632248
DOI: 10.3390/s22103839
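The abstract above reports that Grammatical Facial Expressions can be learned reliably with Graph Convolutional Networks fed with face landmarks. As a minimal illustration of the underlying operation (not the paper's actual model), the sketch below implements a single graph-convolution layer in numpy: landmarks are graph nodes, their (x, y) coordinates are node features, and the adjacency structure is a hypothetical landmark chain. The weights are random stand-ins for learned parameters.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^-1/2
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)    # ReLU

# Toy example: 5 face landmarks with (x, y) coordinates as node features.
rng = np.random.default_rng(0)
A = np.zeros((5, 5))
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]      # hypothetical landmark chain
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
H = rng.standard_normal((5, 2))               # landmark coordinates
W = rng.standard_normal((2, 8))               # weights (random stand-in)
out = gcn_layer(A, H, W)
print(out.shape)                              # (5, 8)
```

Stacking such layers and pooling the node features would yield a graph-level representation for classifying expression types such as negation or queries.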
American Journal of Pharmaceutical... Oct 2019
To evaluate undergraduate pharmacy curricula at Federal Institutions of Higher Education in Brazil in order to identify sign language courses and other content related to the provision of care to deaf patients. A cross-sectional, descriptive study was conducted between March and June 2017. Data were collected from the websites of undergraduate pharmacy education programs in Brazil. Sign language courses were classified according to type (mandatory or elective), nature (theoretical or theoretical-practical), course period and workload. The course contents were extracted and analyzed by content analysis. Of the 35 schools of pharmacy included in the study, 18 (51.4%) included a sign language course in their curriculum. All 18 (100%) of the sign language courses were elective, one (5.6%) was theoretical-practical, 16 (89.0%) did not have a predetermined point in the curriculum for students to complete the course, and 11 (61.1%) had a workload equal to or greater than 60 hours. The main pedagogical content identified related to the teaching and learning of sign language. Learning sign language during undergraduate pharmacy education is important so that these professionals can provide humanistic and integral care to deaf patients. Therefore, there is considerable room for improvement in teaching sign language to undergraduate pharmacy students in Brazil.
Topics: Brazil; Cross-Sectional Studies; Curriculum; Education, Pharmacy; Humans; Learning; Schools, Pharmacy; Sign Language; Students, Pharmacy
PubMed: 31831902
DOI: 10.5688/ajpe7239
Neuropsychologia May 2023
Prior research has found that iconicity facilitates sign production in picture-naming paradigms and has effects on ERP components. These findings may be explained by two separate hypotheses: (1) a task-specific hypothesis that suggests these effects occur because visual features of the iconic sign form can map onto the visual features of the pictures, and (2) a semantic feature hypothesis that suggests that the retrieval of iconic signs results in greater semantic activation due to the robust representation of sensory-motor semantic features compared to non-iconic signs. To test these two hypotheses, iconic and non-iconic American Sign Language (ASL) signs were elicited from deaf native/early signers using a picture-naming task and an English-to-ASL translation task, while electrophysiological recordings were made. Behavioral facilitation (faster response times) and reduced negativities were observed for iconic signs (both prior to and within the N400 time window), but only in the picture-naming task. No ERP or behavioral differences were found between iconic and non-iconic signs in the translation task. This pattern of results supports the task-specific hypothesis and provides evidence that iconicity only facilitates sign production when the eliciting stimulus and the form of the sign can visually overlap (a picture-sign alignment effect).
Topics: Sign Language; Electrophysiology; United States; Evoked Potentials; Translations; Reaction Time; Photic Stimulation; Semantics; Humans; Deafness; Male; Female; Adult; Analysis of Variance; Models, Neurological
PubMed: 36796720
DOI: 10.1016/j.neuropsychologia.2023.108516
Sensors (Basel, Switzerland) Oct 2023
Review
The analysis and recognition of sign languages are currently active fields of research focused on sign recognition. Various approaches differ in terms of analysis methods and the devices used for sign acquisition. Traditional methods rely on video analysis or spatial positioning data calculated using motion capture tools. In contrast to these conventional recognition and classification approaches, electromyogram (EMG) signals, which measure muscle electrical activity, offer a potential technology for detecting gestures. These EMG-based approaches have recently gained attention due to their advantages. This prompted us to conduct a comprehensive study of the methods, approaches, and projects utilizing EMG sensors for sign language handshape recognition. In this paper, we provide an overview of the sign language recognition field through a literature review, with the objective of offering an in-depth review of the most significant techniques, categorized by their respective methodologies. The survey discusses the progress and challenges of sign language recognition systems based on surface electromyography (sEMG) signals. These systems have shown promise but face issues such as sEMG data variability and sensor placement; using multiple sensors enhances reliability and accuracy. Machine learning, including deep learning, is used to address these challenges. Common classifiers in sEMG-based sign language recognition include SVM, ANN, CNN, KNN, HMM, and LSTM. While SVM and ANN are widely used, random forests and KNNs have shown better performance in some cases, and a multilayer perceptron neural network achieved perfect accuracy in one study. CNN, often paired with LSTM, ranks as the third most popular classifier and can achieve exceptional accuracy, reaching up to 99.6% when utilizing both EMG and IMU data.
LSTM is highly regarded for handling sequential dependencies in EMG signals, making it a critical component of sign language recognition systems. In summary, the survey highlights the prevalence of SVM and ANN classifiers while also suggesting the effectiveness of alternatives such as random forests and KNNs. LSTM emerges as the most suitable algorithm for capturing sequential dependencies and improving gesture recognition in EMG-based sign language recognition systems.
Topics: Humans; Sign Language; Reproducibility of Results; Pattern Recognition, Automated; Neural Networks, Computer; Algorithms; Electromyography; Gestures
PubMed: 37837173
DOI: 10.3390/s23198343
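The review above singles out LSTM for capturing sequential dependencies in sEMG signals. As a minimal illustration of how such a classifier processes a multichannel EMG window (not any surveyed system's implementation), the sketch below runs a hand-written LSTM forward pass in numpy over a random stand-in signal; the channel count, hidden size, number of sign classes, and all weights are hypothetical.

```python
import numpy as np

def lstm_forward(x, Wx, Wh, b):
    """Single-layer LSTM forward pass over x of shape (T, d_in).

    Gate order in the stacked weights is [input, forget, cell, output].
    Returns the final hidden state of shape (H,)."""
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for t in range(x.shape[0]):
        z = x[t] @ Wx + h @ Wh + b                          # (4H,)
        i, f = sigmoid(z[:H]), sigmoid(z[H:2 * H])          # input, forget
        g, o = np.tanh(z[2 * H:3 * H]), sigmoid(z[3 * H:])  # cell, output
        c = f * c + i * g                                   # update cell state
        h = o * np.tanh(c)                                  # update hidden state
    return h

# Toy sEMG window: 50 time steps from 8 EMG channels (random stand-in data).
rng = np.random.default_rng(0)
T, CH, H, N_SIGNS = 50, 8, 16, 5
x = rng.standard_normal((T, CH))
Wx = rng.standard_normal((CH, 4 * H)) * 0.1   # input-to-gate weights
Wh = rng.standard_normal((H, 4 * H)) * 0.1    # hidden-to-gate weights
b = np.zeros(4 * H)
Wo = rng.standard_normal((H, N_SIGNS)) * 0.1  # classification head

h_last = lstm_forward(x, Wx, Wh, b)
logits = h_last @ Wo
probs = np.exp(logits) / np.exp(logits).sum() # softmax over sign classes
print(int(np.argmax(probs)))                  # predicted handshape class
```

In a real system the weights would be trained on labeled sEMG recordings, and the final hidden state is often combined with IMU features before classification, as the review notes.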
Journal of Deaf Studies and Deaf... Dec 2022
Deaf professionals, whom we term Deaf Language Specialists (DLS), are frequently employed to work with children and young people who have difficulties learning sign language, but there are few accounts of this work in the literature. Through questionnaires and focus groups, 23 DLSs described their work in this area. Deductive thematic analysis was used to identify how this compared to the work of professionals (typically Speech and Language Therapists/Pathologists, SLPs) working with hearing children with difficulties learning spoken language. Inductive thematic analysis resulted in the identification of two additional themes: while many practices by DLSs are similar to those of SLPs working with hearing children, a lack of training, information, and resources hampers their work; additionally, the cultural context of language and deafness makes this a complex and demanding area of work. These findings add to the limited literature on providing language interventions in the signed modality with clinical implications for meeting the needs of deaf and hard-of-hearing children who do not achieve expectations of learning a first language in their early years. The use of these initial results in two further study phases to co-deliver interventions and co-produce training for DLSs is briefly described.
Topics: Adolescent; Child; Humans; Deafness; Language; Language Therapy; Learning; Sign Language
PubMed: 36504375
DOI: 10.1093/deafed/enac029
Journal of Experimental Psychology.... Jan 2020
Lexical iconicity (signs or words that resemble their meaning) is overrepresented in children's early vocabularies. Embodied theories of language acquisition predict that symbols are more learnable when they are grounded in a child's firsthand experiences. As such, pantomimic iconic signs, which use the signer's body to represent a body, might be more readily learned than other types of iconic signs. Alternatively, the structure mapping theory of iconicity predicts that learners are sensitive to the amount of overlap between form and meaning. In this exploratory study of early vocabulary development in American Sign Language (ASL), we asked whether type of iconicity predicts sign acquisition above and beyond degree of iconicity. We also controlled for concreteness and relevance to babies, two possible confounding factors: highly concrete referents and concepts that are germane to babies may be amenable to iconic mappings. We reanalyzed a previously published set of ASL Communicative Development Inventory (CDI) reports from 58 deaf children learning ASL from their deaf parents (Anderson & Reilly, 2002). Pantomimic signs were more iconic than other types of iconic signs (perceptual, both pantomimic and perceptual, or arbitrary), but type of iconicity had no effect on acquisition. Children may not make use of the special status of pantomimic elements of signs. Their vocabularies are, however, shaped by degree of iconicity, which aligns with a structure mapping theory of iconicity, though other explanations are also compatible (e.g., iconicity in child-directed signing). Previously demonstrated effects of type of iconicity may be an artifact of the increased degree of iconicity among pantomimic signs.
Topics: Child, Preschool; Concept Formation; Deafness; Female; Humans; Infant; Language Development; Learning; Male; Psycholinguistics; Sign Language; Vocabulary
PubMed: 31094562
DOI: 10.1037/xlm0000713
Quality of Life Research : An... Jul 2016
PURPOSE
To translate the health questionnaire EuroQol EQ-5D-5L into British Sign Language (BSL), to test its reliability with the signing Deaf population of BSL users in the UK and to validate its psychometric properties.
METHODS
The EQ-5D-5L BSL was developed following the international standard for translation required by EuroQol, with additional agreed features appropriate to a visual language. Data collection used an online platform to view the signed (BSL) version of the tests. The psychometric testing included content validity, assessed by interviewing a small sample of Deaf people. Reliability was tested by internal consistency of the items and test-retest, and convergent validity was assessed by determining how well EQ-5D-5L BSL correlates with CORE-10 BSL and CORE-6D BSL.
RESULTS
The psychometric properties of the EQ-5D-5L BSL are good, indicating that it can be used to measure health status in the Deaf signing population in the UK. Convergent validity between EQ-5D-5L BSL and CORE-10 BSL and CORE-6D BSL is consistent, demonstrating that the BSL version of EQ-5D-5L is a good measure of the health status of an individual. The test-retest reliability of EQ-5D-5L BSL, for each dimension of health, was shown to have Cohen's kappa values of 0.47-0.61; these were in the range of moderate to good and were therefore acceptable.
CONCLUSIONS
This is the first time the EQ-5D-5L has been translated into a signed language for use with Deaf people, and it is a significant step toward conducting studies of health status and cost-effectiveness in this population.
Topics: Adolescent; Adult; Aged; Female; Health Status; Humans; Language; Male; Middle Aged; Persons With Hearing Impairments; Physical Examination; Psychometrics; Quality of Life; Reproducibility of Results; Sign Language; Surveys and Questionnaires; Translating; Translations; Young Adult
PubMed: 26887955
DOI: 10.1007/s11136-016-1235-4
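The EQ-5D-5L BSL study above reports test-retest reliability as Cohen's kappa values of 0.47-0.61 per health dimension. As a brief illustration of how that statistic is computed (the ratings below are made-up toy data, not the study's), kappa corrects the observed agreement p_o between two administrations for the agreement p_e expected by chance: kappa = (p_o - p_e) / (1 - p_e).

```python
from collections import Counter

def cohens_kappa(rating1, rating2):
    """Cohen's kappa for two ratings of the same items (test vs. retest)."""
    n = len(rating1)
    observed = sum(a == b for a, b in zip(rating1, rating2)) / n  # p_o
    c1, c2 = Counter(rating1), Counter(rating2)
    labels = set(rating1) | set(rating2)
    expected = sum((c1[l] / n) * (c2[l] / n) for l in labels)     # p_e
    return (observed - expected) / (1.0 - expected)

# Hypothetical test-retest responses on one EQ-5D-5L dimension.
t1 = [1, 1, 0, 0]
t2 = [1, 0, 0, 0]
print(round(cohens_kappa(t1, t2), 2))  # 0.5
```

Values in the 0.41-0.60 band are conventionally read as moderate agreement and 0.61-0.80 as good, which is why the study's range of 0.47-0.61 is described as moderate to good.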