Molecular Genetics & Genomic Medicine Jun 2024
OBJECTIVE
To further characterize the phenotype of multiple mitochondrial dysfunction syndrome type 3 (MMDS3; OMIM #615330) caused by IBA57 mutations, we present a case involving a patient who experienced acute neurological regression, together with a review of the literature.
METHODS
Clinical data and laboratory test results were collected, early language and developmental progress were assessed, and genetic testing was performed. Bioinformatics analysis was carried out using Mutation Taster and PolyPhen-2, and the literature in databases such as PubMed and CNKI was searched using MMDS3 and IBA57 as keywords.
RESULTS
The child, aged 1 year and 2 months, presented with motor decline: inability to sit unsupported, limited right-arm movement, hypotonia, hyperreflexia of both knees, and a positive Babinski sign on the right side, accompanied by nystagmus. The blood lactate level was elevated at 2.50 mmol/L. Brain MRI showed slight swelling of the bilateral frontoparietal and occipital white matter and the corpus callosum, with extensive abnormal signals on T1- and T2-weighted images, also involving the bilateral centrum semiovale and occipital lobes. The multiple abnormal signals suggested a metabolic leukoencephalopathy. Whole-exome sequencing revealed two heterozygous mutations in the IBA57 gene: c.286T>C (p.Y96H), classified as likely pathogenic (LP), and c.992T>A (p.L331Q), a variant of uncertain significance (VUS). As of March 2023, a literature search showed that 56 cases of MMDS3 caused by IBA57 mutations had been reported worldwide, 35 of them in China. Among the 35 IBA57 mutations listed in the HGMD database, 28 were missense or nonsense mutations, 2 were splicing mutations, 2 were small deletions, and 3 were small insertions.
CONCLUSION
MMDS3 predominantly manifests in infancy, with primary symptoms including feeding difficulties, neurological regression, and muscle weakness; severe cases may be fatal. Diagnosis is supported by elevated lactate levels, multisystem impairment (including of the auditory and visual systems), and distinctive MRI findings. Whole-exome sequencing is crucial for diagnosis. Currently, cocktail therapy offers symptomatic relief.
Topics: Humans; Infant; Male; Phenotype; Mutation; Female; Microfilament Proteins; Carrier Proteins; Mitochondrial Diseases
PubMed: 38923322
DOI: 10.1002/mgg3.2485
Journal of Imaging Jun 2024
Sign language recognition technology can help people with hearing impairments communicate with hearing people. With the rapid development of deep learning, technical support is now available for sign language recognition work. In sign language recognition tasks, the traditional convolutional neural networks used to extract spatio-temporal features from sign language videos suffer from insufficient feature extraction, resulting in low recognition rates. Moreover, large video-based sign language datasets require significant computing resources for training while the generalization of the network must be preserved, which poses a challenge for recognition. In this paper, we present a video-based sign language recognition method based on a Residual Network (ResNet) and Long Short-Term Memory (LSTM). As the number of network layers increases, the ResNet architecture effectively mitigates the gradient explosion problem and yields better time-series features. We use the ResNet convolutional network as the backbone model. The LSTM uses gates to control the cell state and update the output feature values of sequences. ResNet extracts the sign language features; the learned feature space is then used as the input of the LSTM network to obtain long-sequence features. The method effectively extracts the spatio-temporal features in sign language videos and improves the recognition rate of sign language actions. An extensive experimental evaluation demonstrates the effectiveness and superior performance of the proposed method, with an accuracy of 85.26%, an F1-score of 84.98%, and a precision of 87.77% on Argentine Sign Language (LSA64).
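As a sketch of the gating mechanism the abstract describes, a single LSTM step can be written in a few lines of numpy; the dimensions, random weights, and input vectors (standing in for per-frame ResNet features) are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W: (4H, D), U: (4H, H), b: (4H,).
    Gate order in the stacked weights: input, forget, cell, output."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b            # all four gates computed at once
    i = sigmoid(z[0:H])                   # input gate
    f = sigmoid(z[H:2*H])                 # forget gate
    g = np.tanh(z[2*H:3*H])               # candidate cell state
    o = sigmoid(z[3*H:4*H])               # output gate
    c = f * c_prev + i * g                # gated cell-state update
    h = o * np.tanh(c)                    # gated output
    return h, c

# run a short feature sequence (stand-in for per-frame ResNet features)
rng = np.random.default_rng(0)
D, H, T = 8, 4, 5                         # feature dim, hidden dim, length
W = rng.normal(size=(4*H, D))
U = rng.normal(size=(4*H, H))
b = np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
for t in range(T):
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
print(h.shape)  # (4,)
```

Because the output is `o * tanh(c)` with `o` in (0, 1), every component of `h` stays strictly inside (-1, 1), which keeps the sequence features bounded regardless of sequence length.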
PubMed: 38921626
DOI: 10.3390/jimaging10060149
Open Research Europe 2024
Computer-assisted approaches to historical language comparison have made great progress during the past two decades. Scholars can now routinely use computational tools to annotate cognate sets, align words, and search for regularly recurring sound correspondences. However, computational approaches still suffer from a very rigid sequence model of the form part of the linguistic sign, in which words and morphemes are segmented into fixed sound units which cannot be modified. In order to bring the representation of sound sequences in computational historical linguistics closer to the research practice of scholars who apply the traditional comparative method, we introduce improved sound sequence representations in which individual sound segments can be grouped into evolving sound units in order to capture language-specific sound laws more efficiently. We illustrate the usefulness of this enhanced representation of sound sequences in concrete examples and complement it by providing a small software library that allows scholars to convert their data from forms segmented into sound units to forms segmented into evolving sound units and vice versa.
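The conversion between forms segmented into sound units and forms segmented into evolving sound units can be illustrated with a toy Python sketch; the function names, the dot notation for grouped units, and the example form are hypothetical and do not reflect the API of the library the authors provide:

```python
def group_segments(form, units):
    """Greedily merge adjacent sound segments into multi-segment
    'evolving sound units', longest match first.
    form:  list of sound segments, e.g. ['t', 'o', 'x', 't', 'e', 'r']
    units: set of tuples to treat as single units, e.g. {('x', 't')}"""
    out, i = [], 0
    max_len = max((len(u) for u in units), default=1)
    while i < len(form):
        for size in range(max_len, 1, -1):            # longest match first
            if tuple(form[i:i + size]) in units:
                out.append(".".join(form[i:i + size]))  # mark grouped unit
                i += size
                break
        else:
            out.append(form[i])
            i += 1
    return out

def ungroup_segments(form):
    """Inverse operation: split grouped units back into plain segments."""
    return [s for seg in form for s in seg.split(".")]

grouped = group_segments(["t", "o", "x", "t", "e", "r"], {("x", "t")})
print(grouped)                      # ['t', 'o', 'x.t', 'e', 'r']
print(ungroup_segments(grouped))    # ['t', 'o', 'x', 't', 'e', 'r']
```

The round trip is lossless, which is the property that lets scholars move between the two representations depending on whether they are annotating cognates or stating language-specific sound laws.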
PubMed: 38919583
DOI: 10.12688/openreseurope.16839.1
Sensors (Basel, Switzerland) Jun 2024
Sign language is an essential means of communication for individuals with hearing disabilities. However, there is a significant shortage of sign language interpreters for some languages, especially in Saudi Arabia, which leaves a large proportion of the hearing-impaired population deprived of services, especially in public places. This paper aims to address this gap in accessibility by leveraging technology to develop systems capable of recognizing Arabic Sign Language (ArSL) using deep learning techniques. We propose a hybrid model to capture the spatio-temporal aspects of sign language (i.e., letters and words). The hybrid model consists of a Convolutional Neural Network (CNN) classifier to extract spatial features from sign language data and a Long Short-Term Memory (LSTM) classifier to capture temporal characteristics and handle sequential data (i.e., hand movements). To demonstrate the feasibility of the proposed hybrid model, we created an ArSL dataset of 20 different words: 4,000 images covering 10 static gesture words and 500 videos covering 10 dynamic gesture words. The hybrid model demonstrates promising performance, with the CNN and LSTM classifiers achieving accuracy rates of 94.40% and 82.70%, respectively. These results indicate that our approach can significantly enhance communication accessibility for the hearing-impaired community in Saudi Arabia, and this paper thus represents a major step toward promoting inclusivity and improving the quality of life of the hearing impaired.
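The spatial feature extraction performed by the CNN branch boils down to sliding learned kernels over each frame. A naive numpy implementation of a single "valid" convolution makes the operation concrete; the image and edge kernel below are toy examples, not the model's learned filters:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2D convolution (really cross-correlation, as in most
    deep-learning frameworks): at each position, take the elementwise
    product of the kernel with the image patch and sum it."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# a vertical-edge kernel responds strongly at the dark/bright boundary
# of a half-dark test image
img = np.zeros((6, 6))
img[:, 3:] = 1.0
edge_kernel = np.array([[-1.0, 1.0],
                        [-1.0, 1.0]])
response = conv2d_valid(img, edge_kernel)
print(response.shape)   # (5, 5)
print(response.max())   # 2.0, at the boundary columns
```

Stacking many such kernels, with nonlinearities and pooling between layers, is what lets the CNN classifier turn raw gesture frames into the spatial feature maps the LSTM then consumes.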
Topics: Sign Language; Humans; Deep Learning; Neural Networks, Computer; Saudi Arabia; Language; Gestures
PubMed: 38894473
DOI: 10.3390/s24113683
Scientific Reports Jun 2024
As a form of body language, gestures play an important role in smart homes, game interaction, sign language communication, and similar settings, and gesture recognition methods have been studied extensively. Existing methods have inherent limitations regarding user experience, visual environment, and recognition granularity. Millimeter-wave radar offers an effective approach to these problems thanks to its considerable bandwidth and high-precision perception. However, when millimeter-wave radar is applied to complex scenes, interfering factors and model complexity pose an enormous challenge to the practical application of gesture recognition methods. This work proposes a gesture recognition method for complex scenes based on multi-feature fusion. We collected data in a variety of places to improve sample reliability, filtered clutter to improve the signal-to-noise ratio (SNR), and then obtained multiple features, namely the range-time map (RTM), Doppler-time map (DTM), and angle-time map (ATM), and fused them to enhance the richness and expressive power of the features. A lightweight neural network model, multi-CNN-LSTM, is designed for gesture recognition; it consists of three convolutional neural networks (CNNs), one per feature map, and one long short-term memory (LSTM) network for temporal features. We analyzed the performance and complexity of the model and verified the effectiveness of the feature extraction. Extensive experiments show that the method has generalization ability, adaptability, and high robustness in complex scenarios, with a recognition accuracy of 97.28% on 14 experimental gestures.
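A minimal numpy sketch of the feature-level fusion step, assuming (hypothetically, sizes are illustrative) that each of the three CNN branches emits a 16-dimensional feature vector per time step:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10                                   # time steps in the gesture
# stand-in per-step features from the three CNN branches
rtm_feats = rng.normal(size=(T, 16))     # range-time map branch
dtm_feats = rng.normal(size=(T, 16))     # Doppler-time map branch
atm_feats = rng.normal(size=(T, 16))     # angle-time map branch

# feature-level fusion by concatenation along the feature axis; the
# fused sequence would then feed the LSTM that models temporal structure
fused_seq = np.concatenate([rtm_feats, dtm_feats, atm_feats], axis=1)
print(fused_seq.shape)  # (10, 48)
```

Concatenation is the simplest common fusion scheme; it preserves each branch's features intact and lets the downstream network learn how to weight range, Doppler, and angle information against each other.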
PubMed: 38877076
DOI: 10.1038/s41598-024-64576-6
Cancer Management and Research 2024
PURPOSE
In situations where pathological specimens are difficult to obtain, there is no consensus on how to distinguish adenocarcinoma from squamous cell carcinoma on imaging, and each doctor can only judge from personal experience. This study aims to extract imaging features from chest CT, select sensitive factors through univariate and multivariate logistic analysis, and build a model to distinguish lung squamous cell carcinoma from lung adenocarcinoma.
METHODS
We downloaded chest CT scans with a clear diagnosis of adenocarcinoma or squamous cell carcinoma from The Cancer Imaging Archive (TCIA). A radiologist and a thoracic surgeon extracted 19 imaging features: location, spicule, lobulation, cavity, vacuolar sign, necrosis, pleural traction sign, vascular bundle sign, air bronchogram sign, calcification, enhancement degree, distance from the pulmonary hilum, atelectasis, pulmonary hilum and bronchial lymph nodes, mediastinal lymph nodes, interlobular septal thickening, pulmonary metastasis, adjacent structure invasion, and pleural effusion. First, we applied the glm function in R to perform univariate logistic analysis on all variables and selected those with P < 0.1. We then performed multivariate logistic analysis on the selected variables to obtain a predictive model. Next, we used the roc function in R to calculate the AUC value and draw the ROC curve, the val.prob function to draw the calibration curve, and the rmda package to draw the decision curve analysis (DCA) curve and the clinical impact curve. In addition, 45 patients diagnosed with lung squamous cell carcinoma or lung adenocarcinoma by surgery or biopsy in the Radiotherapy Department and Thoracic Surgery Department of our hospital from 2023 to 2024 were included as a validation group; their chest CT features were jointly determined and recorded by the two doctors mentioned above. The imaging feature data were complete and required no preprocessing, so they entered the statistical analysis directly. ROC curves, calibration curves, DCA, and clinical impact curves were produced for the validation group to further validate the predictive model; if the model performed well in the validation group, a nomogram would be drawn to present it.
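The AUC that the authors compute with R's roc function can equivalently be obtained from predicted probabilities via the Mann-Whitney statistic: the probability that a randomly chosen positive case scores above a randomly chosen negative one. A Python sketch with made-up labels and scores (not the study's data):

```python
import numpy as np

def auc_mann_whitney(y_true, scores):
    """ROC AUC via the Mann-Whitney statistic: the fraction of
    positive/negative pairs where the positive scores higher
    (ties count half)."""
    y_true = np.asarray(y_true, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[y_true], scores[~y_true]
    # explicit pairwise comparison; fine for small clinical samples
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

y = [1, 1, 1, 0, 0, 0]
p = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]   # hypothetical model probabilities
print(auc_mann_whitney(y, p))         # 8 of 9 pairs ordered correctly
```

An AUC of 0.5 means the model ranks cases no better than chance, while 1.0 means every squamous/adenocarcinoma pair is ordered correctly, which is the sense in which the reported 0.887 and 0.865 measure discrimination.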
RESULTS
This study extracted 19 imaging features from the chest CT scans of 75 patients downloaded from TCIA and finally retained 18 complete data sets for analysis. Univariate and multivariate analyses yielded five variables: spicule, necrosis, air bronchogram sign, atelectasis, and pulmonary hilum and bronchial lymph nodes. The resulting model achieved an AUC of 0.887. A validation group was then established using clinical cases from our hospital, in which the ROC curve gave an AUC of 0.865. The calibration curve was used to evaluate the accuracy of the model, the DCA curve its reliability in clinical practice, and the clinical impact curve its practical utility.
CONCLUSION
It is possible to extract informative features from ordinary chest CT scans to distinguish lung adenocarcinoma from squamous cell carcinoma. The model we established performs well in terms of discrimination, accuracy, reliability, and practicality.
PubMed: 38855330
DOI: 10.2147/CMAR.S462951
PeerJ. Computer Science 2024
This article presents an innovative approach to isolated sign language recognition (SLR) that centers on integrating pose data with motion history images (MHIs) derived from those data. Our research combines spatial information obtained from body, hand, and face poses with the comprehensive account of the sign's temporal dynamics provided by three-channel MHI data. In particular, our finger pose-based MHI (FP-MHI) feature significantly enhances recognition success by capturing the nuances of finger movements and gestures, unlike existing approaches in SLR. This feature improves the accuracy and reliability of SLR systems by more precisely capturing the fine details and richness of sign language. Additionally, we enhance overall model accuracy by predicting missing pose data through linear interpolation. Our approach, based on a ResNet-18 model enhanced with the randomized leaky rectified linear unit (RReLU), handles the interaction between manual and non-manual features through the fusion of extracted features and classification with a support vector machine (SVM). In our experiments, this integration achieves competitive or superior results compared to current SLR methodologies across various datasets, including BosphorusSign22k-general, BosphorusSign22k, LSA64, and GSL.
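Motion history images follow a standard update rule: pixels where motion is detected are stamped with a maximal timestamp, and everywhere else the history decays, so recent movement stays brighter than old movement. A minimal single-channel numpy sketch (the threshold, decay, and toy frames are illustrative, not the paper's three-channel pose-based pipeline):

```python
import numpy as np

def update_mhi(mhi, prev_frame, frame, tau=10, thresh=0.1):
    """One motion-history-image update: pixels with frame-to-frame
    change above thresh are set to tau; all others decay by one."""
    motion = np.abs(frame - prev_frame) > thresh
    return np.where(motion, float(tau), np.maximum(mhi - 1.0, 0.0))

frames = np.zeros((4, 5, 5))
frames[1, 1, 1] = 1.0     # brief motion at (1, 1) in frame 1
frames[3, 3, 3] = 1.0     # later motion at (3, 3) in frame 3

mhi = np.zeros((5, 5))
for t in range(1, len(frames)):
    mhi = update_mhi(mhi, frames[t - 1], frames[t], tau=10)
print(mhi[1, 1], mhi[3, 3])   # 9.0 10.0: older motion has decayed
```

The resulting image encodes where and, via the intensity gradient, in what order movement happened, which is how a single 2D input can carry the temporal dynamics of a sign.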
PubMed: 38855212
DOI: 10.7717/peerj-cs.2054
Surgical Neurology International 2024
BACKGROUND
Although awake surgery is the gold standard for resecting brain tumors in eloquent regions, patients with hearing impairment require special consideration during intraoperative tasks.
CASE DESCRIPTION
We present a case of awake surgery using sign language in a 45-year-old right-handed native male patient with hearing impairment and a neoplastic lesion in the pars triangularis of the left frontal lobe (suspected to be a low-grade glioma). The patient communicated primarily through sign language and writing but, thanks to childhood training, was able to speak at a sufficiently audible level. Although the patient remained asymptomatic, the tumor gradually grew, and awake surgery was performed for tumor resection. After the craniotomy, the patient was awakened, and brain function mapping was performed using tasks such as counting, picture naming, and reading. A sign language-proficient nurse presented the tasks in sign language, and the patient responded vocally. The intraoperative tasks proceeded smoothly, without speech arrest or verbal comprehension difficulties during electrical stimulation of the areas adjacent to the tumor. Gross total resection was achieved, and the patient exhibited no apparent complications. Pathological examination revealed a World Health Organization grade II oligodendroglioma with an isocitrate dehydrogenase 1 (IDH1) mutation and 1p/19q codeletion.
CONCLUSION
Since the patient in this case had no dysphonia thanks to training from childhood, the tasks were presented in sign language and the patient responded vocally, which enabled a safe operation. For awake surgery in patients with hearing impairment, safe tumor resection can be achieved by adapting intraoperative tasks to the degree of hearing impairment and dysphonia.
PubMed: 38840599
DOI: 10.25259/SNI_52_2024
PloS One 2024
This study investigates head nods in natural dyadic German Sign Language (DGS) interaction, with the aim of finding whether head nods serving different functions vary in their phonetic characteristics. Earlier research on spoken and sign language interaction has revealed that head nods vary in the form of the movement. However, most claims about the phonetic properties of head nods have been based on manual annotation without reference to naturalistic text types, and the head nods produced by the addressee have been largely ignored. There is a lack of detailed information about the phonetic properties of the addressee's head nods and their interaction with manual cues in DGS as well as in other sign languages, and the existence of a form-function relationship of head nods remains uncertain. We hypothesize that head nods functioning in the context of affirmation differ from those signaling feedback in their form and their co-occurrence with manual items. To test the hypothesis, we apply OpenPose, a computer vision toolkit, to extract head nod measurements from video recordings and examine head nods in terms of their duration, amplitude, and velocity. We describe the basic phonetic properties of head nods in DGS and their interaction with manual items in naturalistic corpus data. Our results show that the phonetic properties of affirmative nods differ from those of feedback nods. Feedback nods appear to be on average slower in production and smaller in amplitude than affirmation nods, and they are commonly produced without a co-occurring manual element. We attribute the variation in phonetic properties to the distinct roles these cues fulfill in the turn-taking system. This research underlines the importance of non-manual cues in shaping the turn-taking system of sign languages, linking research fields such as sign language linguistics, conversation analysis, quantitative linguistics, and computer vision.
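The three measurements the study extracts from OpenPose keypoint traces can be sketched in a few lines; the function, the synthetic one-cycle trace, and the 25 fps frame rate below are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

def nod_kinematics(y, fps):
    """Duration, amplitude, and peak velocity of a head-nod trace.
    y:   vertical head position per video frame (e.g. an OpenPose keypoint)
    fps: video frame rate in frames per second"""
    duration = len(y) / fps                        # seconds
    amplitude = float(np.max(y) - np.min(y))       # peak-to-peak range
    velocity = np.abs(np.diff(y)) * fps            # units per second
    return duration, amplitude, float(velocity.max())

# a synthetic single-cycle nod sampled at 25 fps
t = np.linspace(0, 1, 26)
trace = 0.5 * np.sin(2 * np.pi * t)
dur, amp, peak_v = nod_kinematics(trace, fps=25)
print(round(dur, 2), round(amp, 2))
```

With measurements in this form, comparing affirmation and feedback nods reduces to comparing the distributions of these three scalars across annotated nod tokens.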
Topics: Humans; Sign Language; Phonetics; Germany; Male; Head; Female; Language; Head Movements
PubMed: 38814896
DOI: 10.1371/journal.pone.0304040
Biomaterials Oct 2024
The proliferation of medical wearables necessitates the development of novel electrodes for cutaneous electrophysiology. In this work, poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS) is combined with a deep eutectic solvent (DES) and polyethylene glycol diacrylate (PEGDA) to develop printable and biocompatible electrodes for long-term cutaneous electrophysiology recordings. The impact of printing parameters on the conducting properties, morphological characteristics, mechanical stability, and biocompatibility of the material was investigated. The optimised eutectogel formulations were fabricated in four different patterns (flat, pyramidal, striped, and wavy) to explore the influence of electrode geometry on skin conformability and mechanical contact. These electrodes were employed for impedance and forearm EMG measurements. Furthermore, arrays of twenty electrodes were embedded into a textile and used to generate body surface potential maps (BSPMs) of the forearm, with which different finger movements were recorded and analysed. Finally, BSPMs for three different letters (B, I, O) in sign language were recorded and used to train a logistic regression classifier able to reliably identify each letter. This novel cutaneous electrode fabrication approach offers new opportunities for long-term electrophysiological recordings, online sign-language translation, and brain-machine interfaces.
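The letter classifier described here is a logistic regressor; a minimal gradient-descent version on synthetic stand-in "BSPM" feature vectors gives the flavor. Everything below is a sketch under stated assumptions: the features, sizes, and two-class setup are invented for illustration (the study's three-letter task would use the multinomial extension):

```python
import numpy as np

def train_logistic(X, y, lr=0.5, steps=500):
    """Plain batch gradient descent on the logistic log-loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        grad_w = X.T @ (p - y) / len(y)          # log-loss gradient wrt w
        grad_b = np.mean(p - y)                  # log-loss gradient wrt b
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# toy stand-in electrode features: two well-separated letter patterns
rng = np.random.default_rng(0)
X0 = rng.normal(loc=-1.0, size=(30, 20))   # hypothetical letter pattern 1
X1 = rng.normal(loc=+1.0, size=(30, 20))   # hypothetical letter pattern 2
X = np.vstack([X0, X1])
y = np.array([0] * 30 + [1] * 30)

w, b = train_logistic(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print((pred == y).mean())   # training accuracy on this separable toy set
```

Because each BSPM frame is just a vector of electrode potentials, the same recipe scales directly to the twenty-electrode textile array: one weight per electrode feature, one classifier output per letter.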
Topics: Printing, Three-Dimensional; Humans; Electrodes; Polystyrenes; Textiles; Machine Learning; Electric Conductivity; Wearable Electronic Devices; Bridged Bicyclo Compounds, Heterocyclic; Gels; Polymers; Polyethylene Glycols; Electromyography; Biocompatible Materials
PubMed: 38805956
DOI: 10.1016/j.biomaterials.2024.122624