NeuroImage, Jun 2024
Research indicates that hearing loss significantly contributes to tinnitus, but hearing loss alone does not fully explain its occurrence: many people with hearing loss never experience tinnitus. To identify a secondary factor in tinnitus generation, we examined a unique dataset of individuals with intermittent chronic tinnitus, who experience fluctuating periods of tinnitus. EEGs of healthy controls were compared to EEGs of participants who reported perceiving tinnitus on certain days but no tinnitus on other days. The EEG data revealed that tinnitus onset is associated with increased theta activity in the pregenual anterior cingulate cortex and decreased theta functional connectivity between the pregenual anterior cingulate cortex and the auditory cortex. Additionally, there is increased alpha effective connectivity from the dorsal anterior cingulate cortex to the pregenual anterior cingulate cortex. When tinnitus is not perceived, differences from healthy controls include increased alpha activity in the pregenual anterior cingulate cortex and heightened alpha connectivity between the pregenual anterior cingulate cortex and the auditory cortex. This suggests that tinnitus is triggered by a switch involving increased theta activity in the pregenual anterior cingulate cortex and decreased theta connectivity between the pregenual anterior cingulate cortex and the auditory cortex, leading to increased theta-gamma cross-frequency coupling, which correlates with tinnitus loudness. Increased alpha activity in the dorsal anterior cingulate cortex correlates with distress. Conversely, increased alpha activity in the pregenual anterior cingulate cortex can transiently suppress the phantom sound by enhancing theta connectivity to the auditory cortex.
This mechanism parallels chronic neuropathic pain and suggests potential treatments for tinnitus by promoting alpha activity in the pregenual anterior cingulate cortex and reducing alpha activity in the dorsal anterior cingulate cortex through pharmacological or neuromodulatory approaches.
PubMed: 38944171
DOI: 10.1016/j.neuroimage.2024.120713
Behavioral and Brain Functions (BBF), Jun 2024
BACKGROUND
Left-handedness is a condition that reverses the typical left cerebral dominance of motor control to an atypical right dominance. The impact of this distinct control, and of its associated neuroanatomical peculiarities, on other cognitive functions such as music processing or playing a musical instrument remains unexplored. Previous studies in right-handed populations have linked musicianship to larger volumes in the (right) auditory cortex and the (right) arcuate fasciculus.
RESULTS
In our study, we reveal that left-handed musicians (n = 55), in comparison to left-handed non-musicians (n = 75), exhibit a larger gray matter volume in both the left and right Heschl's gyrus, a region critical for auditory processing. They also present a higher number of streamlines across the anterior segment of the right arcuate fasciculus (AF). Importantly, atypical hemispheric lateralization of speech (notably prevalent among left-handers) was associated with a rightward asymmetry of the AF, in contrast to the leftward asymmetry exhibited by typically lateralized individuals.
CONCLUSIONS
These findings suggest that left-handed musicians share similar neuroanatomical characteristics with their right-handed counterparts. However, atypical lateralization of speech might potentiate the right audiomotor pathway, which has been associated with musicianship and better musical skills. This may help explain why musicians are more prevalent among left-handers and shed light on their cognitive advantages.
Topics: Humans; Music; Male; Functional Laterality; Female; Adult; Young Adult; Auditory Cortex; Magnetic Resonance Imaging; Gray Matter; Auditory Perception; Brain
PubMed: 38943215
DOI: 10.1186/s12993-024-00243-0
Brain Research Bulletin, Jun 2024
The ability to accurately encode the temporal information of sensory events, and hence to act promptly, is fundamental to human behavioral decision-making. Here we examined the ability of ensemble coding (averaging the multiple intervals between sounds in a sequence) and the subsequent immediate reproduction of a target duration at half, equal to, or double the perceived mean interval in a sensorimotor loop. With magnetoencephalography (MEG), we found that the contingent magnetic variation (CMV) over the central scalp varied as a function of the averaging task, with a faster buildup of amplitude and shorter peak latencies in the "half" condition than in the "double" condition. The ERD (event-related desynchronization)-to-ERS (event-related synchronization) latency was also shorter in the "half" condition. A robust beta-band (15-23 Hz) power suppression and recovery between the final tone and the key-press action was found for time reproduction. The beta modulation depth (i.e., the ERD-to-ERS power difference) was larger in motor areas than in primary auditory areas. Moreover, phase slope index (PSI) results indicated that beta oscillations in the left supplementary motor area (SMA) led those in the right superior temporal gyrus (STG), showing an SMA-to-STG directionality for the processing of sequential (temporal) auditory interval information. Our findings provide the first evidence that CMV and beta oscillations predict the coupling between perception and action in time averaging.
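The beta-band (15-23 Hz) power suppression and recovery described above, i.e., ERD followed by ERS, can be illustrated with a minimal sketch: band-pass a signal into the beta range, take the Hilbert power envelope, and express a post-event window as a percent power change from baseline. Everything here (the synthetic signal, function names, and window choices) is a hypothetical illustration, not the study's actual MEG pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power_envelope(x, fs, lo=15.0, hi=23.0):
    """Band-pass x into [lo, hi] Hz and return the Hilbert power envelope."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    xf = filtfilt(b, a, x)
    return np.abs(hilbert(xf)) ** 2

def erd_percent(power, fs, baseline=(0.0, 1.0), window=(1.0, 2.0)):
    """ERD/ERS as percent power change of a window relative to baseline."""
    b0, b1 = (int(t * fs) for t in baseline)
    w0, w1 = (int(t * fs) for t in window)
    p_base = power[b0:b1].mean()
    return 100.0 * (power[w0:w1].mean() - p_base) / p_base

fs = 250.0
t = np.arange(0, 3.0, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic trial: a 19 Hz beta rhythm during the baseline second,
# strongly attenuated afterwards (mimicking event-related desynchronization).
amp = np.where(t < 1.0, 2.0, 0.5)
x = amp * np.sin(2 * np.pi * 19 * t) + 0.1 * rng.standard_normal(t.size)

power = band_power_envelope(x, fs)
print(round(erd_percent(power, fs), 1))  # strongly negative -> desynchronization
```

A positive value over a later window would correspondingly indicate ERS (power rebound above baseline).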
PubMed: 38942396
DOI: 10.1016/j.brainresbull.2024.111021
Hearing Research, Jun 2024
Following adult-onset hearing impairment, crossmodal plasticity can occur within various sensory cortices, often characterized by increased neural responses to visual stimulation not only in the auditory cortex but also in the visual and audiovisual cortices. In the present study, we used an established model of loud noise exposure in rats to examine, for the first time, whether the crossmodal plasticity in the audiovisual cortex that occurs following a relatively mild degree of hearing loss emerges solely from altered intracortical processing or whether thalamocortical changes also contribute to the crossmodal effects. Using a combination of an established pharmacological 'cortical silencing' protocol and current source density analysis of the laminar activity recorded across the layers of the audiovisual cortex (i.e., the lateral extrastriate visual cortex, V2L), we observed layer-specific changes post-silencing in the strength of the residual visual, but not auditory, input in the noise-exposed rats with mild hearing loss compared to rats with normal hearing. Furthermore, based on a comparison of the laminar profiles pre- versus post-silencing in both groups, we can conclude that noise exposure caused a re-allocation of the strength of visual inputs across the layers of the V2L cortex, including enhanced visual-evoked activity in the granular layer; these findings are consistent with thalamocortical plasticity. Finally, we confirmed that audiovisual integration within the V2L cortex depends on intact processing within intracortical circuits, and that this form of multisensory processing is vulnerable to disruption by noise-induced hearing loss. Ultimately, the present study furthers our understanding of the contribution of intracortical and thalamocortical processing to crossmodal plasticity, as well as to audiovisual integration under both normal and mildly impaired hearing conditions.
PubMed: 38941694
DOI: 10.1016/j.heares.2024.109071
Frontiers in Neuroanatomy, 2024
A new analysis is presented of retrograde tracer measurements of connections between anatomical areas of the marmoset cortex. The original normalisation of the raw data yields the fractional link-weight measure, FLNe. That measure is re-examined here to consider other possible measures that reveal the underlying in-link weights. Predictions arising from both are used to examine network modules and hubs. With the in-weights included, the InfoMap algorithm identifies eight structural modules in the marmoset cortex. In- and out-hubs and major connector nodes are identified using module assignments and participation coefficients. Time-evolving network tracing around the major hubs reveals medium-sized clusters in prefrontal (pFC), temporal, auditory and visual areas; the most tightly coupled and significant of these is in the pFC. A complementary viewpoint is provided by examining the highest-traffic links in the cortical network, which reveals parallel sensory flows to the pFC and, via association areas, to frontal areas.
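The participation coefficient used above to flag connector nodes has a standard definition, P_i = 1 - sum_m (k_im / k_i)^2, where k_im is node i's link strength into module m and k_i its total strength. A minimal sketch, assuming a weighted directed adjacency matrix and out-strengths (the paper's exact weight convention may differ):

```python
import numpy as np

def participation_coefficient(W, modules):
    """Participation coefficient per node for a weighted directed adjacency
    matrix W (W[i, j] = weight of link i -> j), computed on out-strengths:
    P_i = 1 - sum_m (k_im / k_i)^2."""
    modules = np.asarray(modules)
    k = W.sum(axis=1)                      # total out-strength of each node
    P = np.ones(W.shape[0])
    for m in np.unique(modules):
        k_m = W[:, modules == m].sum(axis=1)
        P -= np.divide(k_m, k, out=np.zeros_like(k_m), where=k > 0) ** 2
    P[k == 0] = 0.0                        # isolated nodes by convention
    return P

# Toy network: nodes 0-1 in module A, nodes 2-3 in module B.
W = np.array([
    [0, 1, 1, 0],   # node 0 splits its links across both modules -> connector
    [0, 0, 0, 0],
    [0, 0, 0, 1],   # node 2 links only within module B -> provincial
    [0, 0, 0, 0],
], dtype=float)
P = participation_coefficient(W, ["A", "A", "B", "B"])
print(P)  # node 0 -> 0.5, node 2 -> 0.0
```

Nodes with high strength and high P are connector hubs; high strength with low P marks provincial (within-module) hubs.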
PubMed: 38933918
DOI: 10.3389/fnana.2024.1403170
Frontiers in Neuroscience, 2024
PURPOSE
Sensorineural hearing loss (SNHL) is the most common form of sensory deprivation and often goes unrecognized by patients, inducing not only auditory but also nonauditory symptoms. Data-driven classifier modeling that combines static and dynamic neural imaging features could be used to effectively distinguish SNHL individuals from healthy controls (HCs).
METHODS
We conducted hearing evaluations, neurological scale tests and resting-state MRI on 110 SNHL patients and 106 HCs. A total of 1,267 static and dynamic imaging features were extracted from the MRI data, and three feature-selection methods were computed: the Spearman rank correlation test, the least absolute shrinkage and selection operator (LASSO), and the t test combined with LASSO. Linear, polynomial, radial basis function (RBF) kernel and sigmoid support vector machine (SVM) models were chosen as classifiers, with fivefold cross-validation. The receiver operating characteristic curve, area under the curve (AUC), sensitivity, specificity and accuracy were calculated for each model.
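A pipeline of this general shape (LASSO-based feature selection feeding SVMs with four kernels, evaluated by fivefold cross-validated AUC) can be sketched with scikit-learn. The data below are synthetic stand-ins; the subject count and 1,267-feature dimensionality mirror the abstract, but nothing else is the study's actual data or code.

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 216 "subjects" (110 + 106 in the paper), 1,267 features,
# with only the first 10 features carrying class signal.
rng = np.random.default_rng(0)
n, p = 216, 1267
X = rng.standard_normal((n, p))
y = (X[:, :10].sum(axis=1) + 0.5 * rng.standard_normal(n) > 0).astype(int)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = {}
for kernel in ("linear", "poly", "rbf", "sigmoid"):
    clf = make_pipeline(
        StandardScaler(),
        SelectFromModel(Lasso(alpha=0.05)),   # LASSO-based feature selection
        SVC(kernel=kernel),
    )
    aucs[kernel] = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc").mean()
    print(f"{kernel}: AUC = {aucs[kernel]:.3f}")
```

Placing the selector inside the pipeline ensures feature selection is refit within each training fold, avoiding the optimistic bias of selecting features on the full dataset before cross-validation.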
RESULTS
SNHL subjects had higher hearing thresholds at every frequency, as well as worse performance on cognitive and emotional evaluations, than HCs. The brain regions selected by LASSO from the static and dynamic features were consistent with the between-group analysis and included both auditory and nonauditory areas. The AUCs of the four SVM models (linear, polynomial, RBF and sigmoid) were 0.8075, 0.7340, 0.8462 and 0.8562, respectively. The RBF and sigmoid SVMs had relatively higher accuracy, sensitivity and specificity.
CONCLUSION
Our research draws attention to the static and dynamic alterations underlying hearing deprivation. Machine learning-based models may provide useful biomarkers for the classification and diagnosis of SNHL.
PubMed: 38933814
DOI: 10.3389/fnins.2024.1402039
Sensors (Basel, Switzerland), Jun 2024
Urban environments are undergoing significant transformations, with pedestrian areas emerging as complex hubs of diverse mobility modes. This shift demands a more nuanced approach to urban planning and navigation technologies, highlighting the limitations of traditional, road-centric datasets in capturing the detailed dynamics of pedestrian spaces. In response, we introduce the DELTA dataset, designed to improve the analysis and mapping of pedestrian zones, thereby filling the critical need for sidewalk-centric multimodal datasets. The DELTA dataset was collected in a single urban setting using a custom-designed modular multi-sensing e-scooter platform encompassing high-resolution, synchronized audio, visual, LiDAR, and GNSS/IMU data. This assembly provides a detailed, contextually varied view of urban pedestrian environments. We developed three distinct pedestrian route segmentation models, one per sensor (the 4K camera, the stereocamera, and the LiDAR), each optimized to capitalize on the unique strengths and characteristics of the respective sensor. These models demonstrated strong performance, with mean Intersection over Union (IoU) values of 0.84 for the reflectivity channel, 0.96 for the 4K camera, and 0.92 for the stereocamera, underscoring their effectiveness in ensuring precise pedestrian route identification across different resolutions and sensor types. Further, we explored audio event-based classification to connect unique soundscapes with specific geolocations, enriching the spatial understanding of urban environments by associating distinctive auditory signatures with their precise geographical origins. We also discuss potential use cases for the DELTA dataset and the limitations and future possibilities of our research, aiming to expand our understanding of pedestrian environments.
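Mean IoU, the segmentation metric reported above, is the per-class intersection-over-union averaged across classes. A minimal sketch on toy label maps (the two-class setup and the arrays are illustrative only, not DELTA data):

```python
import numpy as np

def mean_iou(pred, target, n_classes):
    """Mean Intersection over Union across classes for integer label maps."""
    ious = []
    for c in range(n_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent in both maps -> excluded from the mean
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

# Toy 1x4 label maps with classes {0: background, 1: pedestrian route}.
pred   = np.array([0, 1, 1, 1])
target = np.array([0, 0, 1, 1])
print(mean_iou(pred, target, 2))  # (1/2 + 2/3) / 2 ~= 0.583
```

The same function applies unchanged to 2-D segmentation masks, since the comparisons and sums operate elementwise over arrays of any shape.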
PubMed: 38931648
DOI: 10.3390/s24123863
Brain Sciences, May 2024
Tinnitus is a common phantom auditory percept believed to be related to plastic changes in the brain due to hearing loss. However, tinnitus can also occur in the absence of any clinical hearing loss. In this case, since there is no hearing loss, the mechanisms that drive plastic changes remain largely enigmatic. Previous studies showed subtle differences in sound-evoked brain activity associated with tinnitus in subjects with tinnitus and otherwise normal hearing, but the results are not consistent across studies. Here, we aimed to investigate these differences using monaural rather than binaural stimuli. Sound-evoked responses were measured using functional magnetic resonance imaging (fMRI) in participants with and without tinnitus. All participants had clinically normal audiograms. The stimuli were pure tones with frequencies between 353 and 8000 Hz, presented monaurally. A principal component analysis (PCA) of the response in the auditory cortex revealed no difference in tonotopic organization, which confirmed earlier studies. A general linear model (GLM) analysis showed hyperactivity in the lateral areas of the bilateral auditory cortex. Consistent with the tonotopic map, this hyperactivity mainly occurred in response to low stimulus frequencies, which may be related to hyperacusis. Furthermore, there was an interaction between stimulation side and tinnitus in the parahippocampus, which may reflect an interference between tinnitus and spatial orientation.
PubMed: 38928544
DOI: 10.3390/brainsci14060544
Brain Sciences, May 2024
Review
Auditory spatial cues contribute to two distinct functions: one leads to explicit localization of sound sources, and the other provides a location-linked representation of sound objects. Behavioral and imaging studies have demonstrated right-hemispheric dominance for explicit sound localization. An early clinical case study documented a dissociation between explicit sound localization, which was heavily impaired, and the fully preserved use of spatial cues for sound object segregation; the latter involves location-linked encoding of sound objects. We review here the evidence pertaining to the brain regions involved in the location-linked representation of sound objects. Auditory evoked potential (AEP) and functional magnetic resonance imaging (fMRI) studies investigated this aspect by comparing the encoding of individual sound objects that changed their locations or remained stationary. A systematic search identified 1 AEP and 12 fMRI studies. Together with studies of the anatomical correlates of impaired spatial-cue-based sound object segregation after focal brain lesions, the present evidence indicates that the location-linked representation of sound objects involves the left hemisphere strongly and the right hemisphere to a lesser degree. Location-linked encoding of sound objects is present in several early-stage auditory areas and in the specialized temporal voice area. In these regions, emotional valence benefits from location-linked encoding as well.
PubMed: 38928534
DOI: 10.3390/brainsci14060535
Cell Reports, Jun 2024
During behavior, the motor cortex sends copies of motor-related signals to sensory cortices. Here, we combine closed-loop behavior with large-scale physiology, projection-pattern-specific recordings, and circuit perturbations to show that neurons in mouse secondary motor cortex (M2) encode sensation and are influenced by expectation. When a movement unexpectedly produces a sound, M2 becomes dominated by sound-evoked activity. Sound responses in M2 are inherited partially from the auditory cortex and are routed back to the auditory cortex, providing a path for the reciprocal exchange of sensory-motor information during behavior. When the acoustic consequences of a movement become predictable, M2 responses to self-generated sounds are selectively gated off. These changes in single-cell responses are reflected in population dynamics, which are influenced by both sensation and expectation. Together, these findings reveal the embedding of sensory and expectation signals in motor cortical activity.
PubMed: 38923464
DOI: 10.1016/j.celrep.2024.114396