Brain: a Journal of Neurology, Jun 2024
The fate of deprived sensory cortices - visual regions in the blind and auditory regions in the deaf - exemplifies the extent to which experience can change brain regions. These regions are frequently seen to activate during tasks involving other sensory modalities, leading many accounts to infer that these regions have started processing sensory information of other modalities. However, such observations can also imply that these regions are now activating to any task event regardless of the sensory modality. Activating to task events, irrespective of the sensory modality involved, is a feature of the multiple-demands (MD) network. This is a common set of regions within the frontal and parietal cortices that activate in response to any kind of control demand. Thus, demands as diverse as attention, perceptual difficulty, rule-switching, updating working memory, inhibiting responses, decision-making, and difficult arithmetic all activate this same set of regions, which are thought to instantiate domain-general cognitive control and underpin fluid intelligence. We investigated whether deprived sensory cortices, or foci within them, become part of the MD network. We tested whether the same foci within the visual regions of the blind and the auditory regions of the deaf activated to different control demands. We found that control demands related to updating auditory working memory, difficult tactile decisions, time-duration judgments, and sensorimotor speed all activated the entire bilateral occipital regions in the blind but not in the sighted. These occipital regions in the blind were the only regions outside the canonical fronto-parietal MD regions to show such activation to multiple control demands. Further, compared to the sighted, these occipital regions in the blind had higher functional connectivity with fronto-parietal MD regions.
Early-deaf participants, in contrast, did not activate their auditory regions to different control demands, showing that auditory regions do not become MD regions in the deaf. We suggest that visual regions in the blind do not take on a new sensory role but become part of the MD network, and that this is not a response of all deprived sensory cortices but a feature unique to the visual regions.
PubMed: 38864500
DOI: 10.1093/brain/awae187 -
American Journal of Speech-Language..., Jun 2024
BACKGROUND
Few studies have explored the feasibility of online language interventions for young children with Down syndrome. Additionally, none have manipulated dose frequency or reported on the use of music as a medium through which language and sign can be learned.
PURPOSE
The purpose of this study was to (a) examine the feasibility and acceptability of an online language through music intervention for young children (1-3;6 years) with Down syndrome and (b) compare effectiveness at two intervention dose frequencies.
METHOD
The study was carried out in two phases using a mixed-methods design. Qualitative data were gathered from parents to examine feasibility when implementing a video-based language intervention. Seventy-six families participated in an online language intervention at home. Effectiveness was examined by comparing two groups randomly assigned to a high or low dose frequency. The Down Syndrome Education (DSE) checklists (combined) were the primary outcome measure. Process data were gathered to determine intervention acceptability in practice and to identify factors that would improve successful future implementation. Acceptability data were analyzed with reference to the theoretical framework of acceptability (Version 2).
RESULTS
Forty-three parents completed the Phase 1 scoping questionnaire, five of whom took part in focus groups. Once-weekly morning sessions were indicated as the preferred scheduling choice. Phase 2 quantitative data were analyzed using beta regression adjusted for baseline scores and indicated no additional benefit to receiving the higher dose. However, exploratory interaction models suggested that the high-dose intervention was more effective than the low-dose intervention in participants with higher baseline DSE performance. Parents perceived the intervention to be effective and positive for the family.
CONCLUSION
The results add to our knowledge of real-world effective online interventions and suggest that a critical minimum language level is required for children with Down syndrome to benefit optimally from a higher intervention dose frequency.
SUPPLEMENTAL MATERIAL
https://doi.org/10.23641/asha.25979704.
PubMed: 38861452
DOI: 10.1044/2024_AJSLP-23-00375 -
Acta Neurochirurgica, Jun 2024
The aim of this case study was to describe differences in English and British Sign Language (BSL) communication caused by a left temporal tumour, which produced discordant findings across symptom presentation, intraoperative stimulation mapping during awake craniotomy, and post-operative language abilities. We report the first case of a hearing child of deaf adults, who acquired BSL with English as a second language. The patient presented with English word-finding difficulty, phonemic paraphasias, and reading and writing challenges, with BSL preserved. Intraoperatively, object naming and semantic fluency tasks were performed in English and BSL, revealing differential language maps for each modality. Post-operative assessment confirmed mild dysphasia for English with BSL preserved. These findings suggest that in hearing people who acquire a signed language as a first language, its topographical organisation may differ from that of a second, spoken, language.
Topics: Humans; Glioblastoma; Sign Language; Craniotomy; Brain Neoplasms; Temporal Lobe; Brain Mapping; Male; Wakefulness; Speech; Multilingualism; Language; Adult
PubMed: 38858238
DOI: 10.1007/s00701-024-06130-x -
Cancer Management and Research, 2024
PURPOSE
In situations where pathological specimens are difficult to obtain, there is no consensus on how to distinguish adenocarcinoma from squamous cell carcinoma on imaging, and each physician can only judge from personal experience. This study aims to extract imaging features from chest CT, identify predictive factors through univariate and multivariate logistic analysis, and build a model to distinguish lung squamous cell carcinoma from lung adenocarcinoma.
METHODS
We downloaded chest CT scans with a clear diagnosis of adenocarcinoma or squamous cell carcinoma from The Cancer Imaging Archive (TCIA). A radiologist and a thoracic surgeon extracted 19 imaging features: location, spicule, lobulation, cavity, vacuolar sign, necrosis, pleural traction sign, vascular bundle sign, air bronchogram sign, calcification, enhancement degree, distance from the pulmonary hilum, atelectasis, pulmonary hilum and bronchial lymph nodes, mediastinal lymph nodes, interlobular septal thickening, pulmonary metastasis, adjacent structure invasion, and pleural effusion. First, we applied the glm function in R to perform univariate logistic analysis on all variables and selected those with P < 0.1. We then performed multivariate logistic analysis on the selected variables to obtain a predictive model. Next, we used the roc function in R to calculate the AUC value and draw the ROC curve, the val.prob function to draw the calibration curve, and the rmda package to draw the DCA curve and clinical impact curve. In parallel, 45 patients diagnosed with lung squamous cell carcinoma or lung adenocarcinoma by surgery or biopsy in the Radiotherapy and Thoracic Surgery Departments of our hospital from 2023 to 2024 were included as a validation group; their chest CT features were jointly determined and recorded by the two doctors mentioned above. The included imaging feature data were complete and required no preprocessing, so they entered statistical analysis directly. ROC curves, calibration curves, DCA, and clinical impact curves were generated in the validation group to further validate the predictive model; if the model performed well in the validation group, a nomogram would be drawn to present it.
RESULTS
This study extracted 19 imaging features from the chest CT scans of 75 patients downloaded from TCIA and finally selected 18 with complete data for analysis. Univariate and multivariate analyses yielded five variables: spicule, necrosis, air bronchogram sign, atelectasis, and pulmonary hilum and bronchial lymph nodes. The resulting model achieved an AUC of 0.887. A validation group was then established using clinical cases from our hospital, in which the ROC curve gave an AUC of 0.865; the calibration curve was used to evaluate the model's accuracy, the DCA curve its reliability in clinical practice, and the clinical impact curve its practical utility.
CONCLUSION
Influential features can be extracted from routine chest CT scans to distinguish lung adenocarcinoma from squamous cell carcinoma. The model we built performs well in terms of discrimination, accuracy, reliability, and practicality.
PubMed: 38855330
DOI: 10.2147/CMAR.S462951 -
PeerJ Computer Science, 2024
This article presents an innovative approach to isolated sign language recognition (SLR) that centers on integrating pose data with motion history images (MHIs) derived from those data. Our research combines spatial information obtained from body, hand, and face poses with the comprehensive details provided by three-channel MHI data concerning the temporal dynamics of the sign. In particular, our finger pose-based MHI (FP-MHI) feature significantly enhances recognition success, capturing the nuances of finger movements and gestures, unlike existing approaches in SLR. This feature improves the accuracy and reliability of SLR systems by more accurately capturing the fine details and richness of sign language. Additionally, we enhance overall model accuracy by predicting missing pose data through linear interpolation. Our study, based on a randomized leaky rectified linear unit (RReLU) enhanced ResNet-18 model, successfully handles the interaction between manual and non-manual features through the fusion of extracted features and classification with a support vector machine (SVM). In our experiments, this integration achieved results competitive with or superior to current SLR methodologies across various datasets, including BosphorusSign22k-general, BosphorusSign22k, LSA64, and GSL.
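The motion history image that the approach builds on can be sketched in a few lines: pixels that moved recently are set to a maximum value, and older motion decays toward zero. This is a minimal, generic MHI update (not the authors' FP-MHI implementation); the threshold and decay values are arbitrary.

```python
# Minimal sketch of a motion history image (MHI): recent motion is bright,
# older motion fades. Not the paper's FP-MHI; parameters are illustrative.
import numpy as np

def update_mhi(mhi, prev_frame, frame, tau=255, thresh=30, decay=32):
    """Set moving pixels to tau; decay all other pixels toward 0."""
    motion = np.abs(frame.astype(int) - prev_frame.astype(int)) > thresh
    return np.where(motion, tau, np.maximum(mhi.astype(int) - decay, 0))

# Toy sequence: a bright 2x2 block moves one pixel right per frame.
frames = []
for t in range(4):
    f = np.zeros((8, 8), dtype=np.uint8)
    f[3:5, t:t + 2] = 200
    frames.append(f)

mhi = np.zeros((8, 8), dtype=int)
for prev, cur in zip(frames, frames[1:]):
    mhi = update_mhi(mhi, prev, cur)

print(int(mhi.max()), int(mhi.min()))
```

After the loop, the most recent motion sits at the maximum value while earlier motion has partially decayed, encoding the gesture's trajectory in a single image.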
PubMed: 38855212
DOI: 10.7717/peerj-cs.2054 -
Language, Speech, and Hearing Services..., Jul 2024
PURPOSE
There are well-established guidelines for the recording, transcription, and analysis of spontaneous oral language samples by researchers, educators, and speech pathologists. In contrast, there is presently no consensus regarding methods for the written documentation of sign language samples. The Handshape Analysis Recording Tool (HART) is an innovative method for documenting and analyzing word-level samples of signed languages in real time. Fluent sign language users can document the expressive sign productions of children to gather data on sign use and accuracy.
METHOD
The HART was developed to document children's productions in Australian Sign Language (Auslan) in a bilingual-bicultural educational program for the Deaf in Australia. This written method was piloted with a group of fluent signing Deaf educational staff in 2014-2016, then used in 2022-2023 with a group of fluent signing professionals to examine inter- and intrarater reliability when coding parameters of sign accuracy.
RESULTS
Interrater reliability, measured by Gwet's Agreement Coefficient, was "good" to "very good" across the four phonological parameters that are components of every sign: location, movement, handshape, and orientation.
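Gwet's first-order Agreement Coefficient (AC1) corrects observed agreement for chance under a model that is more stable than Cohen's kappa when one category dominates. A self-contained sketch for two raters (not the HART study's code; the example ratings are invented):

```python
# Hedged sketch: Gwet's AC1 for two raters over categorical codes, e.g.
# "correct" vs "error" judgments on a parameter such as handshape.
from collections import Counter

def gwet_ac1(r1, r2):
    """Gwet's first-order agreement coefficient for two raters."""
    assert len(r1) == len(r2) and r1
    n = len(r1)
    pa = sum(a == b for a, b in zip(r1, r2)) / n  # observed agreement
    cats = sorted(set(r1) | set(r2))
    counts = Counter(r1) + Counter(r2)            # pooled category counts
    # Chance agreement under Gwet's model of random rating.
    pe = sum((counts[c] / (2 * n)) * (1 - counts[c] / (2 * n))
             for c in cats) / (len(cats) - 1)
    return (pa - pe) / (1 - pe)

r1 = ["correct", "correct", "error", "correct", "error", "correct"]
r2 = ["correct", "correct", "error", "error", "error", "correct"]
print(round(gwet_ac1(r1, r2), 3))  # prints 0.676
```

Perfect agreement yields AC1 = 1; values in roughly the 0.6-0.8 range are conventionally read as "good", matching the bands reported above.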
CONCLUSIONS
The findings of this study indicate that the HART can be a reliable tool for coding the accuracy of the location, orientation, movement, and handshape parameters of Auslan phonology when used by professionals fluent in Auslan. The HART can be utilized with any sign language to gather word-level sign language samples in a written form and to document the phonological accuracy of signed productions.
Topics: Sign Language; Humans; Child; Australia; Documentation; Reproducibility of Results; Schools; Male; Female; Education of Hearing Disabled; Deafness
PubMed: 38843410
DOI: 10.1044/2024_LSHSS-23-00189 -
Surgical Neurology International, 2024
BACKGROUND
Although awake surgery is the gold standard for resecting brain tumors in eloquent regions, patients with hearing impairment require special consideration during intraoperative tasks.
CASE DESCRIPTION
We present a case of awake surgery using sign language in a 45-year-old, right-handed, native male patient with hearing impairment and a neoplastic lesion in the left frontal lobe, pars triangularis (suspected to be a low-grade glioma). The patient primarily communicated through sign language and writing but was able to speak at a sufficiently audible level thanks to childhood training. Although the patient remained asymptomatic, the tumor gradually grew in size, and awake surgery was performed for tumor resection. After the craniotomy, the patient was awake, and brain function mapping was performed using tasks such as counting, picture naming, and reading. A sign language-proficient nurse facilitated communication using sign language, and the patient responded vocally. Intraoperative tasks proceeded smoothly without speech arrest or verbal comprehension difficulties during electrical stimulation of the tumor-adjacent areas. Gross total tumor resection was achieved, and the patient exhibited no apparent complications. Pathological examination revealed a World Health Organization grade II oligodendroglioma with an isocitrate dehydrogenase 1 (IDH1) mutation and 1p/19q codeletion.
CONCLUSION
Since childhood training had left the patient in this case without dysphonia, tasks were presented in sign language and the patient responded vocally, which enabled a safe operation. In awake surgery for patients with hearing impairment, safe tumor resection can be achieved by tailoring intraoperative tasks to the degree of hearing impairment and dysphonia.
PubMed: 38840599
DOI: 10.25259/SNI_52_2024 -
Journal of Deaf Studies and Deaf..., Jun 2024
Anecdotal evidence strongly suggests that members of the First Nations Deaf community experience more barriers when engaging with the criminal justice system than those who are not deaf. Therefore, our purpose for writing this article is to highlight legal and policy issues related to First Nations Deaf people living in Australia who have difficulty accessing supports within the criminal justice system, including the perspectives of professionals working with these communities. In this article, we present data from semi-structured qualitative interviews focused on four key themes: (a) indefinite detention and unfitness to plead, (b) a need for an intersectional approach to justice, (c) applying the maximum extent of the law while minimizing social services-related resources, and (d) the need for language access and qualified sign language interpreters. Through this article and the related larger ongoing project, we seek to center the experiences and needs of First Nations Deaf communities to render supports for fair, just, and equitable access to the Australian criminal justice system for this historically marginalized group.
PubMed: 38826120
DOI: 10.1093/jdsade/enae021 -
Biomedizinische Technik. Biomedical..., Jun 2024
OBJECTIVES
The objective of this study is to develop a system for automatic sign language recognition to improve the quality of life for the mute-deaf community in Egypt. The system aims to bridge the communication gap by identifying and converting right-hand gestures into audible sounds or displayed text.
METHODS
To achieve the objectives, a convolutional neural network (CNN) model is employed. The model is trained to recognize right-hand gestures captured by an affordable web camera. A dataset was created with the help of six volunteers for training, testing, and validation purposes.
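A CNN of the kind described can be sketched as below. This is an illustrative architecture only: the layer sizes, the 64x64 input resolution, and the 28 output classes are assumptions for the sketch, not the paper's model.

```python
# Illustrative sketch only: a small CNN classifying right-hand gesture
# images into alphabet classes. Layer sizes, input size, and the 28
# output classes are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    def __init__(self, n_classes=28):  # assumed number of alphabet signs
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # After two 2x stride pools, a 64x64 input becomes 16x16.
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):  # x: (batch, 3, 64, 64) webcam crops
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = GestureCNN()
logits = model(torch.randn(2, 3, 64, 64))  # two dummy frames
print(logits.shape)
```

In practice such a network would be trained with a cross-entropy loss on the volunteer-recorded gesture dataset and the argmax of the logits mapped to a letter for text or speech output.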
RESULTS
The proposed system achieved an impressive average accuracy of 99.65% in recognizing right-hand gestures, with a high precision of 95.11%. The system effectively addressed the issue of gesture similarity between certain alphabet letters by successfully distinguishing their respective gestures.
CONCLUSIONS
The proposed system offers a promising solution for automatic sign language recognition, benefiting the mute-deaf community in Egypt. By accurately identifying and converting right-hand gestures, the system facilitates communication and interaction with the wider world. This technology has the potential to greatly enhance the quality of life for individuals who are unable to speak or hear, promoting inclusivity and accessibility.
PubMed: 38826069
DOI: 10.1515/bmt-2023-0245 -
Journal of Speech, Language, and..., May 2024
PURPOSE
The current study aimed to examine morphosyntactic errors in sentences produced by deaf and hard-of-hearing (DHH) students who are signers of Israeli Sign Language and also users of Palestinian Colloquial Arabic (PCA) and written Modern Standard Arabic (MSA).
METHOD
Nineteen school-age DHH students participated in a sentence elicitation task in which they retold events portrayed in 24 videos in PCA and MSA. A control group of 19 hearing students was tested with the same task. Sentences in each language variety were coded for grammatical versus ungrammatical productions and for type of morphosyntactic errors for the latter. In addition, code-switched words were counted.
RESULTS
The hearing group showed no morphosyntactic errors, whereas the DHH students showed morphosyntactic errors in both PCA and MSA. In addition, both groups code-switched in both PCA and MSA, with more code-switching in the MSA task than in the PCA task. Furthermore, an interaction with age revealed that young students code-switched more in MSA and older students code-switched more in PCA.
CONCLUSIONS
It is suggested that the morphosyntactic abilities of DHH students are incomplete in both language varieties. Lack of spoken language input may negatively influence the acquisition of the spoken language, which in turn impacts the acquisition of the standard language in diglossic contexts. Code-switching is explained both as a response to lexical gaps, when occurring in MSA, and as an effort to raise the register in PCA.
PubMed: 38820238
DOI: 10.1044/2024_JSLHR-23-00542