CoDAS 2021
Topics: Deafness; Humans; Language Therapy; Multilingualism; Persons With Hearing Impairments; Sign Language
PubMed: 33909844
DOI: 10.1590/2317-1782/20202020248 -
Frontiers in Psychology 2019
PubMed: 31428020
DOI: 10.3389/fpsyg.2019.01765 -
Topics in Cognitive Science Jan 2015
Zinacantec Family Homesign (Z) is a new sign language emerging spontaneously over the past three decades in a single family in a remote Mayan Indian village. Three deaf siblings, their Tzotzil-speaking age-mates, and now their children, who have had contact with no other deaf people, represent the first generation of Z signers. I postulate an augmented grammaticalization path, beginning with the adoption of a Tzotzil cospeech holophrastic gesture (meaning "come!") into Z, and then its apparent stylization as an attention-getting sign, followed by grammatical regimentation and pragmatic generalization as an utterance-initial change-of-speaker or turn marker.
Topics: Adult; Aged; Child; Family Characteristics; Female; Gestures; Humans; Language Development; Linguistics; Male; Mexico; Middle Aged; Pedigree; Persons With Hearing Impairments; Sign Language; Young Adult
PubMed: 25627101
DOI: 10.1111/tops.12126 -
Annual Review of Linguistics Jan 2021
Natural sign languages of deaf communities are acquired on the same time scale as that of spoken languages if children have access to fluent signers providing input from birth. Infants are sensitive to linguistic information provided visually, and early milestones show many parallels. The modality may affect various areas of language acquisition; such effects include the form of signs (sign phonology), the potential advantage presented by visual iconicity, and the use of spatial locations to represent referents, locations, and movement events. Unfortunately, the vast majority of deaf children do not receive accessible linguistic input in infancy, and these children experience language deprivation. Negative effects on language are observed when first-language acquisition is delayed. For those who eventually begin to learn a sign language, earlier input is associated with better language and academic outcomes. Further research is especially needed with a broader diversity of participants.
PubMed: 34746335
DOI: 10.1146/annurev-linguistics-043020-092357 -
Sensors (Basel, Switzerland) Aug 2021
Review
AI technologies can play an important role in breaking down the communication barriers between deaf or hearing-impaired people and other communities, contributing significantly to their social inclusion. Recent advances in both sensing technologies and AI algorithms have paved the way for the development of various applications aimed at fulfilling the needs of deaf and hearing-impaired communities. To this end, this survey aims to provide a comprehensive review of state-of-the-art methods in sign language capturing, recognition, translation, and representation, pinpointing their advantages and limitations. In addition, the survey presents a number of applications and discusses the main challenges in the field of sign language technologies. Future research directions are also proposed to assist prospective researchers in further advancing the field.
Topics: Algorithms; Artificial Intelligence; Humans; Prospective Studies; Sign Language
PubMed: 34502733
DOI: 10.3390/s21175843 -
eNeuro 2021
How does the brain anticipate information in language? When people perceive speech, low-frequency (<10 Hz) activity in the brain synchronizes with bursts of sound and visual motion. This phenomenon, called cortical stimulus-tracking, is thought to be one way that the brain predicts the timing of upcoming words, phrases, and syllables. In this study, we test whether stimulus-tracking depends on domain-general expertise or on language-specific prediction mechanisms. We go on to examine how the effects of expertise differ between frontal and sensory cortex. We recorded electroencephalography (EEG) from human participants who were experts in either sign language or ballet, and we compared stimulus-tracking between groups while participants watched videos of sign language or ballet. We measured stimulus-tracking by computing coherence between EEG recordings and visual motion in the videos. Results showed that stimulus-tracking depends on domain-general expertise, and not on language-specific prediction mechanisms. At frontal channels, fluent signers showed stronger coherence to sign language than to dance, whereas expert dancers showed stronger coherence to dance than to sign language. At occipital channels, however, the two groups of participants did not show different patterns of coherence. These results are difficult to explain by entrainment of endogenous oscillations, because neither sign language nor dance show any periodicity at the frequencies of significant expertise-dependent stimulus-tracking. These results suggest that the brain may rely on domain-general predictive mechanisms to optimize perception of temporally-predictable stimuli such as speech, sign language, and dance.
Topics: Attention; Brain; Electroencephalography; Humans; Periodicity; Speech
PubMed: 34341067
DOI: 10.1523/ENEURO.0065-21.2021 -
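A minimal sketch of the stimulus-tracking measure described in the eNeuro abstract above: magnitude-squared coherence between one EEG channel and a video's visual-motion signal, inspected below 10 Hz. The signal names, sampling rate, and synthetic data are illustrative assumptions, not the authors' pipeline.

```python
# Hedged sketch: EEG-to-stimulus coherence with scipy (assumed signals and rate).
import numpy as np
from scipy.signal import coherence

fs = 100.0  # common sampling rate in Hz after alignment/resampling (assumed)
rng = np.random.default_rng(0)
eeg_channel = rng.standard_normal(6000)    # stand-in for one EEG channel
visual_motion = rng.standard_normal(6000)  # stand-in for frame-to-frame motion energy

# Magnitude-squared coherence as a function of frequency; the study focuses on
# low-frequency (<10 Hz) tracking, so we average over that band.
freqs, coh = coherence(eeg_channel, visual_motion, fs=fs, nperseg=512)
print("mean coherence below 10 Hz:", coh[freqs < 10].mean())
```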
Sensors (Basel, Switzerland) Nov 2023
This paper proposes, analyzes, and evaluates a deep learning architecture based on transformers for generating sign language motion from sign phonemes (represented using HamNoSys, a notation system developed at the University of Hamburg). The sign phonemes provide information about sign characteristics like hand configuration, localization, or movements. The use of sign phonemes is crucial for generating sign motion with a high level of detail (including finger extensions and flexions). The transformer-based approach also includes a stop detection module for predicting the end of the generation process. Both aspects, motion generation and stop detection, are evaluated in detail. For motion generation, the dynamic time warping distance is used to compute the similarity between two landmark sequences (ground truth and generated). The stop detection module is evaluated in terms of detection accuracy and ROC (receiver operating characteristic) curves. The paper proposes and evaluates several strategies to obtain the system configuration with the best performance, including different padding schemes, interpolation approaches, and data augmentation techniques. The best configuration of a fully automatic system achieves an average DTW distance per frame of 0.1057 and an area under the ROC curve (AUC) higher than 0.94.
Topics: Humans; Algorithms; Sign Language; Motion; Movement; Hand
PubMed: 38067738
DOI: 10.3390/s23239365 -
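As a rough illustration of the motion-generation evaluation described in the abstract above, the sketch below computes a dynamic time warping (DTW) distance between a ground-truth and a generated landmark sequence and divides by sequence length to report a per-frame value. The array shapes and the normalization choice are assumptions made for illustration, not the paper's exact procedure.

```python
# Hedged sketch: per-frame DTW distance between two landmark sequences.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(n*m) DTW with Euclidean distance between frames."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

rng = np.random.default_rng(0)
ground_truth = rng.random((40, 42))  # 40 frames x 42 landmark coordinates (assumed layout)
generated = rng.random((37, 42))     # generated sequence may differ in length

per_frame = dtw_distance(ground_truth, generated) / max(len(ground_truth), len(generated))
print("average DTW distance per frame:", per_frame)
```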
Frontiers in Psychology 2022
Review
In contrast to scholars and signers in the nineteenth century, William Stokoe conceived of American Sign Language (ASL) as a unique linguistic tradition with roots in nineteenth-century French Sign Language, a conception that is apparent in his earliest scholarship on ASL. Stokoe thus contributed to the theoretical foundations upon which the field of sign language historical linguistics would later develop. This review focuses on the development of sign language historical linguistics since Stokoe, including the field's significant progress and the theoretical and methodological problems that it still faces. The review examines the field's development through the lens of two related problems pertaining to how we understand sign language relationships and cognacy, as the term applies to signs. It is suggested that the theoretical notions underlying these terms do not straightforwardly map onto the historical development of many sign languages. Recent approaches in sign language historical linguistics are highlighted and future directions for research are suggested to address the problems discussed in this review.
PubMed: 35356353
DOI: 10.3389/fpsyg.2022.818753 -
Cognition Jul 2022
If language has evolved for communication, languages should be structured such that they maximize the efficiency of processing. What is efficient for communication in the visual-gestural modality is different from the auditory-oral modality, and we ask here whether sign languages have adapted to the affordances and constraints of the signed modality. During sign perception, perceivers look almost exclusively at the lower face, rarely looking down at the hands. This means that signs articulated far from the lower face must be perceived through peripheral vision, which has less acuity than central vision. We tested the hypothesis that signs that are more predictable (high frequency signs, signs with common handshapes) can be produced further from the face because precise visual resolution is not necessary for recognition. Using pose estimation algorithms, we examined the structure of over 2000 American Sign Language lexical signs to identify whether lexical frequency and handshape probability affect the position of the wrist in 2D space. We found that frequent signs with rare handshapes tended to occur closer to the signer's face than frequent signs with common handshapes, and that frequent signs are generally more likely to be articulated further from the face than infrequent signs. Together these results provide empirical support for anecdotal assertions that the phonological structure of sign language is shaped by the properties of the human visual and motor systems.
Topics: Gestures; Humans; Language; Recognition, Psychology; Sign Language; Visual Perception
PubMed: 35192994
DOI: 10.1016/j.cognition.2022.105040 -
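The Cognition study's analysis can be sketched as a regression of wrist position on lexical frequency and handshape probability. The synthetic data, column names, and the ordinary-least-squares model with an interaction term are illustrative assumptions, not the authors' released analysis.

```python
# Hedged sketch: does wrist position vary with frequency and handshape probability?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_signs = 200
signs = pd.DataFrame({
    "log_frequency": rng.normal(0.0, 1.0, n_signs),    # standardized lexical frequency (assumed)
    "handshape_prob": rng.uniform(0.0, 1.0, n_signs),  # probability of the sign's handshape (assumed)
    "wrist_dist": rng.normal(0.0, 1.0, n_signs),       # wrist distance from the lower face (assumed)
})

# OLS with an interaction term, mirroring the frequency-by-handshape question in the abstract.
model = smf.ols("wrist_dist ~ log_frequency * handshape_prob", data=signs).fit()
print(model.summary())
```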
Frontiers in Psychology 2021
PubMed: 34127922
DOI: 10.3389/fpsyg.2021.691614