Data in Brief, Aug 2024
Sign language is a complete language with its own grammatical rules, akin to any spoken language used worldwide. It comprises two main components: static words and ideograms. Ideograms involve hand movements and contact with various parts of the body to convey meaning. Sign languages vary across countries, necessitating comprehensive documentation of each country's sign language. In Mexico, formal datasets for Mexican Sign Language (MSL) are lacking. To address this, we structured a dataset of 249 MSL words divided into 17 sub-sets. Black backgrounds and clothing were used to enhance the areas of interest (hands and face). Each word was recorded by an average of 11 individuals, and from each individual's video sequence an average of 15 frames was extracted, yielding 31,442 JPG images.
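The sampling step described above (an average of 15 frames drawn from each video sequence) can be sketched as uniform index selection. The function below is a hypothetical illustration, not the dataset's published extraction code:

```python
def sample_frame_indices(total_frames, n_samples=15):
    """Pick n_samples evenly spaced frame indices from a video
    with total_frames frames (a sketch of uniform sampling)."""
    if total_frames <= n_samples:
        return list(range(total_frames))
    step = total_frames / n_samples
    return [int(i * step) for i in range(n_samples)]

# e.g. a 150-frame clip yields indices 0, 10, 20, ..., 140
indices = sample_frame_indices(150, 15)
```

In practice the selected indices would be passed to a video reader to decode and save the corresponding frames as JPG files.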
PubMed: 38948409
DOI: 10.1016/j.dib.2024.110566
Neurobiology of Language (Cambridge,...), 2024
We examined the impact of exposure to a signed language (American Sign Language, or ASL) at different ages on the neural systems that support spoken language phonemic discrimination in deaf individuals with cochlear implants (CIs). Deaf CI users (n = 18, age = 18-24 yrs) who were exposed to a signed language at different ages and hearing individuals (n = 18, age = 18-21 yrs) completed a phonemic discrimination task in a spoken native (English) and non-native (Hindi) language while undergoing functional near-infrared spectroscopy neuroimaging. Behaviorally, deaf CI users who received a CI early versus later in life showed better English phonemic discrimination, although their discrimination was poor relative to hearing individuals. Importantly, the age of exposure to ASL was not related to phonemic discrimination. Neurally, early-life language exposure, irrespective of modality, was associated with greater activation of left-hemisphere language areas critically involved in phonological processing during the phonemic discrimination task in deaf CI users. In particular, early exposure to ASL was associated with increased activation in the left hemisphere's classic language regions for native versus non-native phonemic contrasts in deaf CI users who received a CI later in life. For deaf CI users who received a CI early in life, the age of exposure to ASL was not related to neural activation during phonemic discrimination. Together, the findings suggest that early signed language exposure does not negatively impact spoken language processing in deaf CI users, but may instead offset the negative effects of the language deprivation that deaf children without any signed language exposure experience prior to implantation. This empirical evidence aligns with and lends support to recent perspectives on the impact of ASL exposure in the context of CI usage.
PubMed: 38939730
DOI: 10.1162/nol_a_00143
Health Care Science, Feb 2024
BACKGROUND
Given the strikingly high diagnostic error rate in hospitals, and the recent development of Large Language Models (LLMs), we set out to measure the diagnostic sensitivity of two popular LLMs: GPT-4 and PaLM2. Small-scale studies to evaluate the diagnostic ability of LLMs have shown promising results, with GPT-4 demonstrating high accuracy in diagnosing test cases. However, larger evaluations on real electronic patient data are needed to provide more reliable estimates.
METHODS
To fill this gap in the literature, we used a deidentified Electronic Health Record (EHR) data set of about 300,000 patients admitted to the Beth Israel Deaconess Medical Center in Boston. This data set contained blood, imaging, microbiology and vital sign information as well as the patients' medical diagnostic codes. Based on the available EHR data, doctors curated a set of diagnoses for each patient, which we will refer to as ground truth diagnoses. We then designed carefully-written prompts to get patient diagnostic predictions from the LLMs and compared this to the ground truth diagnoses in a random sample of 1000 patients.
RESULTS
Based on the proportion of correctly predicted ground truth diagnoses, we estimated the diagnostic hit rate of GPT-4 to be 93.9%. PaLM2 achieved 84.7% on the same data set. On these 1000 randomly selected EHRs, GPT-4 correctly identified 1116 unique diagnoses.
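The diagnostic hit rate reported above is the proportion of ground-truth diagnoses that appear among the model's predicted diagnoses. This is a hypothetical sketch of that metric, not the authors' evaluation code:

```python
def diagnostic_hit_rate(ground_truth, predictions):
    """Fraction of ground-truth diagnoses (across patients) that
    the model's predicted diagnosis list contains."""
    hits = total = 0
    for gt, pred in zip(ground_truth, predictions):
        pred_set = {p.lower() for p in pred}
        hits += sum(1 for d in gt if d.lower() in pred_set)
        total += len(gt)
    return hits / total if total else 0.0

# toy example: 3 of 4 ground-truth diagnoses recovered
gt = [["sepsis", "pneumonia"], ["anemia", "ckd"]]
pred = [["pneumonia", "sepsis"], ["anemia", "hypertension"]]
rate = diagnostic_hit_rate(gt, pred)  # 0.75
```

A real evaluation would additionally need to map free-text model output onto coded diagnoses before string comparison.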
CONCLUSION
The results suggest that artificial intelligence (AI), working alongside clinicians, has the potential to reduce the cognitive errors that lead to hundreds of thousands of misdiagnoses every year. However, human oversight of AI remains essential: LLMs cannot replace clinicians, especially when it comes to human understanding and empathy. Furthermore, significant challenges to incorporating AI into health care remain, including ethical, liability, and regulatory barriers.
PubMed: 38939167
DOI: 10.1002/hcs2.79
Journal of Public Health (Oxford,...), Jun 2024
BACKGROUND
Deaf and hard of hearing people persistently experience barriers accessing health services, largely due to ineffective communication systems, a lack of flexible booking arrangements, and a lack of Deaf awareness training for health professional staff.
METHODS
Face-to-face focus groups were conducted with 66 Deaf and hard of hearing people in Deaf clubs across Wales, UK. Thematic analysis was undertaken.
RESULTS
Responses identified from focus groups are reported as barriers faced using health services, improvements that would make a difference, impact of accessibility of health services, and a potential Sign language badge for healthcare staff.
CONCLUSIONS
Deaf people report that health professionals lack training in Deaf awareness and do not know how to communicate effectively with Deaf and hard of hearing people. Further research into Deaf awareness and training resources for health professionals is needed to establish what improves Deaf cultural competencies and ultimately makes healthcare experiences more positive for people who are Deaf.
PubMed: 38936826
DOI: 10.1093/pubmed/fdae112
Molecular Genetics & Genomic Medicine, Jun 2024
OBJECTIVE
To further characterize the phenotype of multiple mitochondrial dysfunction syndrome type 3 (MMDS3; OMIM #615330) caused by IBA57 mutations, we present a case involving a patient who experienced acute neurological regression and review the literature.
METHODS
Clinical data and laboratory test results were collected; early language and developmental progress were assessed; and genetic testing was performed. Bioinformatics analysis was performed using MutationTaster and PolyPhen-2, and the literature in databases such as PubMed and CNKI was searched using "MMDS3" and "IBA57" as keywords.
RESULTS
The child, aged 1 year and 2 months, showed motor decline and was unable to sit alone, with limited right arm movement, hypotonia, hyperreflexia of both knees, and a positive Babinski sign on the right side, accompanied by nystagmus. Blood lactate was elevated at 2.50 mmol/L. Brain MRI indicated slight swelling in the bilateral frontoparietal and occipital white matter and the corpus callosum, with extensive abnormal signals on T1 and T2 images, also involving the centrum semiovale and occipital lobes bilaterally. The multiple abnormal signals in the brain suggested metabolic leukoencephalopathy. Whole-exome sequencing revealed two heterozygous mutations in the IBA57 gene: c.286T>C (p.Y96H) (likely pathogenic, LP) and c.992T>A (p.L331Q) (variant of uncertain significance, VUS). As of March 2023, a literature search showed that 56 cases of MMDS3 caused by IBA57 mutations had been reported worldwide, 35 of them in China. Among the 35 IBA57 mutations listed in the HGMD database, there were 28 missense or nonsense mutations, 2 splicing mutations, 2 small deletions, and 3 small insertions.
CONCLUSION
MMDS3 predominantly manifests in infancy, with primary symptoms including feeding difficulties, neurological regression, and muscle weakness; severe cases can be fatal. Diagnosis is supported by elevated lactate levels, multisystem impairment (including the auditory and visual systems), and distinctive MRI findings. Whole-exome sequencing is crucial for diagnosis. Currently, cocktail therapy offers symptomatic relief.
Topics: Humans; Infant; Male; Phenotype; Mutation; Female; Microfilament Proteins; Carrier Proteins; Mitochondrial Diseases
PubMed: 38923322
DOI: 10.1002/mgg3.2485
Journal of Imaging, Jun 2024
Sign language recognition technology can help people with hearing impairments communicate with hearing people. With the rapid development of society, deep learning now provides technical support for sign language recognition. In sign language recognition tasks, the traditional convolutional neural networks used to extract spatio-temporal features from sign language videos suffer from insufficient feature extraction, resulting in low recognition rates. Moreover, large video-based sign language datasets require significant computing resources for training while ensuring the generalization of the network, which poses a challenge for recognition. In this paper, we present a video-based sign language recognition method based on a Residual Network (ResNet) and Long Short-Term Memory (LSTM). As the number of network layers increases, ResNet can effectively address the granularity explosion problem and obtain better time-series features. We use the ResNet convolutional network as the backbone model; LSTM uses the concept of gates to control unit states and update the output feature values of sequences. ResNet extracts the sign language features, and the learned feature space is then used as the input of the LSTM network to obtain long-sequence features. This effectively extracts the spatio-temporal features in sign language videos and improves the recognition rate of sign language actions. An extensive experimental evaluation demonstrates the effectiveness and superior performance of the proposed method, with an accuracy of 85.26%, an F1-score of 84.98%, and a precision of 87.77% on Argentine Sign Language (LSA64).
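The gating mechanism referred to above (LSTM gates controlling the cell state and the output) can be illustrated with a minimal scalar LSTM step in plain Python. The weights below are illustrative constants, not the paper's trained model:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One step of a scalar LSTM cell. w maps each gate name to
    (input weight, recurrent weight, bias)."""
    def gate(name, act):
        wi, wh, b = w[name]
        return act(wi * x + wh * h_prev + b)
    f = gate("forget", sigmoid)   # how much old cell state to keep
    i = gate("input", sigmoid)    # how much new candidate to add
    g = gate("cand", math.tanh)   # candidate cell update
    o = gate("output", sigmoid)   # how much state to expose
    c = f * c_prev + i * g        # new cell state
    h = o * math.tanh(c)          # new hidden output
    return h, c

w = {k: (0.5, 0.5, 0.0) for k in ("forget", "input", "cand", "output")}
h, c = 0.0, 0.0
for x in (1.0, -1.0, 0.5):        # a toy 3-step input sequence
    h, c = lstm_step(x, h, c, w)
```

In the paper's pipeline the per-frame inputs would be ResNet feature vectors rather than scalars, and the weights would be learned end to end.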
PubMed: 38921626
DOI: 10.3390/jimaging10060149
Open Research Europe, 2024
Computer-assisted approaches to historical language comparison have made great progress during the past two decades. Scholars can now routinely use computational tools to annotate cognate sets, align words, and search for regularly recurring sound correspondences. However, computational approaches still suffer from a very rigid sequence model of the form part of the linguistic sign, in which words and morphemes are segmented into fixed sound units which cannot be modified. In order to bring the representation of sound sequences in computational historical linguistics closer to the research practice of scholars who apply the traditional comparative method, we introduce improved sound sequence representations in which individual sound segments can be grouped into evolving sound units in order to capture language-specific sound laws more efficiently. We illustrate the usefulness of this enhanced representation of sound sequences in concrete examples and complement it by providing a small software library that allows scholars to convert their data from forms segmented into sound units to forms segmented into evolving sound units and vice versa.
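The conversion described above, from forms segmented into individual sounds to forms segmented into evolving sound units and back, can be sketched as a simple grouping operation. The function names and the space-delimited unit convention are assumptions for illustration, not the library's actual API:

```python
def group_sounds(segments, groups):
    """Merge a flat list of sound segments into evolving sound
    units, given the index spans that belong together."""
    units = []
    for start, end in groups:
        units.append(" ".join(segments[start:end]))
    return units

def ungroup_units(units):
    """Inverse operation: split units back into sound segments."""
    return [s for u in units for s in u.split()]

word = ["t", "o", "x", "t", "a"]              # segmented form
units = group_sounds(word, [(0, 1), (1, 3), (3, 4), (4, 5)])
# here "o x" is treated as one evolving sound unit
assert ungroup_units(units) == word            # round trip
```

Treating "o x" as a single unit lets a language-specific sound law that affects the whole group be stated once, rather than per segment.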
PubMed: 38919583
DOI: 10.12688/openreseurope.16839.1
IEEE Transactions on Image Processing:..., Jun 2024
Recently, there have been efforts to improve performance in sign language recognition by designing self-supervised learning methods. However, these methods capture limited information from sign pose data in a frame-wise learning manner, leading to sub-optimal solutions. To this end, we propose a simple yet effective self-supervised contrastive learning framework that excavates rich context via spatial-temporal consistency from two distinct perspectives and learns instance-discriminative representations for sign language recognition. On one hand, since the semantics of sign language are expressed by the cooperation of fine-grained hands and coarse-grained trunks, we utilize information at both granularities and encode it into latent spaces. The consistency between hand and trunk features is constrained to encourage learning consistent representations of instance samples. On the other hand, inspired by the complementary property of the motion and joint modalities, we introduce first-order motion information into sign language modeling. Additionally, we bridge the interaction between the embedding spaces of both modalities, facilitating bidirectional knowledge transfer to enhance sign language representation. Our method is evaluated with extensive experiments on four public benchmarks and achieves new state-of-the-art performance by a notable margin. The source code is publicly available at https://github.com/sakura/Code.
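The consistency constraint between hand and trunk features can be illustrated as an agreement term over paired embeddings. This is a minimal sketch of one plausible formulation (1 minus mean cosine similarity), not the paper's actual loss:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def consistency_loss(hand_feats, trunk_feats):
    """Encourage hand and trunk embeddings of the same sample to
    agree: 1 - mean cosine similarity over the batch."""
    sims = [cosine(h, t) for h, t in zip(hand_feats, trunk_feats)]
    return 1.0 - sum(sims) / len(sims)

# identical embeddings give zero loss
loss = consistency_loss([[1.0, 0.0]], [[1.0, 0.0]])
```

A full contrastive objective would also push apart embeddings of different instances; only the agreement term is shown here.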
PubMed: 38917290
DOI: 10.1109/TIP.2024.3416881
European Archives of Psychiatry and..., Jun 2024
A large body of research has shown that schizophrenia patients demonstrate increased brain structural aging. Although this process may be coupled with aberrant changes in the intrinsic functional architecture of the brain, such changes remain understudied. We hypothesized that there are brain regions whose whole-brain functional connectivity at rest is differently associated with brain structural aging in schizophrenia patients compared to healthy controls. Eighty-four male schizophrenia patients and eighty-six male healthy controls underwent structural MRI and resting-state fMRI. The brain-predicted age difference (b-PAD) was used as a measure of brain structural aging. Resting-state fMRI was used to obtain global correlation (GCOR) maps comprising voxelwise values of the strength and sign of functional connectivity of a given voxel with the rest of the brain. Schizophrenia patients had higher b-PAD compared to controls (mean between-group difference +2.9 years). Greater b-PAD in schizophrenia patients, compared to controls, was associated with lower whole-brain functional connectivity of a region spanning the frontal orbital cortex, inferior frontal gyrus, Heschl's gyrus, planum temporale and planum polare, insula, and opercular cortices of the right hemisphere (rFTI). According to post hoc seed-based correlation analysis, decreased functional connectivity with the posterior cingulate gyrus, left superior temporal cortices, and right angular gyrus/superior lateral occipital cortex mainly drove the results. Lower functional connectivity of the rFTI was related to worse verbal working memory and language production. Our findings demonstrate that well-established frontotemporal functional abnormalities in schizophrenia are related to increased brain structural aging.
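The b-PAD measure used above is simply predicted brain age minus chronological age, compared between groups. A minimal sketch with made-up numbers (not the study's data):

```python
def b_pad(predicted_ages, chronological_ages):
    """Brain-predicted age difference per subject: a positive value
    means the brain looks older than the subject's actual age."""
    return [p - c for p, c in zip(predicted_ages, chronological_ages)]

def mean(xs):
    return sum(xs) / len(xs)

# illustrative values only
patients = b_pad([38.0, 45.0, 33.0], [34.0, 42.0, 31.0])
controls = b_pad([30.0, 41.0, 29.0], [30.0, 41.0, 30.0])
group_diff = mean(patients) - mean(controls)  # positive => older-looking brains in patients
```

The study reports a between-group difference of this kind of +2.9 years, with patients showing the higher b-PAD.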
PubMed: 38914851
DOI: 10.1007/s00406-024-01837-5
Journal of Deaf Studies and Deaf..., Jun 2024
Topics: Humans; Child; Deafness; Sign Language; Persons With Hearing Impairments; Language; Education of Hearing Disabled; Comprehension
PubMed: 38913495
DOI: 10.1093/deafed/enae016