Frontiers in Neuroanatomy 2024
A new analysis is presented of the retrograde tracer measurements of connections between anatomical areas of the marmoset cortex. The original normalisation of raw data yields the fractional link weight measure, FLNe. That is re-examined to consider other possible measures that reveal the underlying in-link weights. Predictions arising from both are used to examine network modules and hubs. With inclusion of the in-weights, the InfoMap algorithm identifies eight structural modules in marmoset cortex. In and out hubs and major connector nodes are identified using module assignment and participation coefficients. Time-evolving network tracing around the major hubs reveals medium-sized clusters in pFC, temporal, auditory and visual areas, the most tightly coupled and significant of which is in the pFC. A complementary viewpoint is provided by examining the highest-traffic links in the cortical network, which reveals parallel sensory flows to pFC and via association areas to frontal areas.
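For readers unfamiliar with the participation coefficient used here to pick out connector hubs, a minimal numpy sketch follows; the toy network, module labels, and use of out-strength are illustrative assumptions, not the paper's data:

```python
import numpy as np

def participation_coefficient(W, modules):
    """Participation coefficient P_i = 1 - sum_m (k_im / k_i)^2,
    where k_im is node i's total link weight into module m and k_i its
    total strength (out-strength is used here for the directed case).
    P near 1: links spread across modules (connector hub);
    P near 0: links confined to one module (provincial hub)."""
    k = W.sum(axis=1)                       # total out-strength per node
    labels = np.array(modules)
    P = np.ones(len(W))
    for m in set(modules):
        k_im = W[:, labels == m].sum(axis=1)
        P -= (k_im / np.where(k > 0, k, 1)) ** 2
    return np.where(k > 0, P, 0.0)

# Toy weighted directed network: nodes 0-1 in module A, nodes 2-3 in module B
W = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [1, 0, 1, 0]], dtype=float)
modules = ["A", "A", "B", "B"]
print(participation_coefficient(W, modules))  # → [0.5 0.  0.  0.5]
```

Nodes 0 and 3 split their links evenly between modules (P = 0.5), so they behave as connectors; nodes 1 and 2 project only within or to a single module (P = 0).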
PubMed: 38933918
DOI: 10.3389/fnana.2024.1403170

Frontiers in Neuroscience 2024
PURPOSE
Sensorineural hearing loss (SNHL) is the most common form of sensory deprivation and is often unrecognized by patients, inducing not only auditory but also nonauditory symptoms. Data-driven classifier modeling with the combination of neural static and dynamic imaging features could be effectively used to classify SNHL individuals and healthy controls (HCs).
METHODS
We conducted hearing evaluation, neurological scale tests and resting-state MRI on 110 SNHL patients and 106 HCs. A total of 1,267 static and dynamic imaging characteristics were extracted from the MRI data, and three feature-selection methods were computed: the Spearman rank correlation test, the least absolute shrinkage and selection operator (LASSO), and the t test combined with LASSO. Linear, polynomial, radial basis function (RBF) and sigmoid kernel support vector machine (SVM) models were chosen as the classifiers, with fivefold cross-validation. The receiver operating characteristic curve, area under the curve (AUC), sensitivity, specificity and accuracy were calculated for each model.
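As a rough illustration of the classifier pipeline described above, here is a scikit-learn sketch; the synthetic data stands in for the imaging features and subject counts, and this is not the authors' code:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 216 "subjects" (110 + 106) with 100 selected features
X, y = make_classification(n_samples=216, n_features=100,
                           n_informative=10, random_state=0)

# Fivefold cross-validation over the four SVM kernels compared in the study
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for kernel in ("linear", "poly", "rbf", "sigmoid"):
    clf = make_pipeline(StandardScaler(),
                        SVC(kernel=kernel, probability=True))
    # Out-of-fold class probabilities, then AUC over all subjects
    proba = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
    print(kernel, round(roc_auc_score(y, proba), 4))
```

Standardizing inside the pipeline (rather than before the split) keeps the scaling fit on training folds only, avoiding leakage into the held-out fold.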
RESULTS
SNHL subjects had higher hearing thresholds at each frequency, as well as worse performance in cognitive and emotional evaluations, than HCs. The brain regions selected by LASSO from the static and dynamic features were consistent with the between-group analysis, including auditory and nonauditory areas. The AUCs of the four SVM models (linear, polynomial, RBF and sigmoid) were 0.8075, 0.7340, 0.8462 and 0.8562, respectively. The RBF and sigmoid SVMs had relatively higher accuracy, sensitivity and specificity.
CONCLUSION
Our research raised attention to static and dynamic alterations underlying hearing deprivation. Machine learning-based models may provide several useful biomarkers for the classification and diagnosis of SNHL.
PubMed: 38933814
DOI: 10.3389/fnins.2024.1402039

Sensors (Basel, Switzerland) Jun 2024
Urban environments are undergoing significant transformations, with pedestrian areas emerging as complex hubs of diverse mobility modes. This shift demands a more nuanced approach to urban planning and navigation technologies, highlighting the limitations of traditional, road-centric datasets in capturing the detailed dynamics of pedestrian spaces. In response, we introduce the DELTA dataset, designed to improve the analysis and mapping of pedestrian zones, thereby filling the critical need for sidewalk-centric multimodal datasets. The DELTA dataset was collected in a single urban setting using a custom-designed modular multi-sensing e-scooter platform encompassing high-resolution and synchronized audio, visual, LiDAR, and GNSS/IMU data. This assembly provides a detailed, contextually varied view of urban pedestrian environments. We developed three distinct pedestrian route segmentation models for the various sensors (the 4K camera, stereocamera, and LiDAR), each optimized to capitalize on the unique strengths and characteristics of the respective sensor. These models demonstrated strong performance, with Mean Intersection over Union (IoU) values of 0.84 for the reflectivity channel, 0.96 for the 4K camera, and 0.92 for the stereocamera, underscoring their effectiveness in precise pedestrian route identification across different resolutions and sensor types. Further, we explored audio event-based classification to connect unique soundscapes with specific geolocations, enriching the spatial understanding of urban environments by associating distinctive auditory signatures with their precise geographical origins. We also discuss potential use cases for the DELTA dataset and the limitations and future possibilities of our research, aiming to expand our understanding of pedestrian environments.
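The Mean IoU figures quoted above follow from the metric's definition (per-class intersection over union, averaged across classes); a small numpy sketch with invented toy masks:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection-over-Union: for each class c,
    IoU_c = |pred==c AND target==c| / |pred==c OR target==c|,
    averaged over classes that appear in either mask."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union:  # skip classes absent from both masks
            ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

# Toy 2x4 masks: 0 = background, 1 = pedestrian route
pred   = np.array([[1, 1, 0, 0],
                   [1, 1, 0, 0]])
target = np.array([[1, 1, 1, 0],
                   [1, 1, 0, 0]])
# Class 0: 3/4 overlap; class 1: 4/5 overlap; mean = 0.775
print(mean_iou(pred, target, num_classes=2))
```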
PubMed: 38931648
DOI: 10.3390/s24123863

Brain Sciences May 2024
Tinnitus is a common phantom auditory percept believed to be related to plastic changes in the brain due to hearing loss. However, tinnitus can also occur in the absence of any clinical hearing loss. In this case, since there is no hearing loss, the mechanisms that drive plastic changes remain largely enigmatic. Previous studies showed subtle differences in sound-evoked brain activity associated with tinnitus in subjects with tinnitus and otherwise normal hearing, but the results are not consistent across studies. Here, we aimed to investigate these differences using monaural rather than binaural stimuli. Sound-evoked responses were measured using functional magnetic resonance imaging (fMRI) in participants with and without tinnitus. All participants had clinically normal audiograms. The stimuli were pure tones with frequencies between 353 and 8000 Hz, presented monaurally. A Principal Component Analysis (PCA) of the response in the auditory cortex revealed no difference in tonotopic organization, which confirmed earlier studies. A GLM analysis showed hyperactivity in the lateral areas of the bilateral auditory cortex. Consistent with the tonotopic map, this hyperactivity mainly occurred in response to low stimulus frequencies. This may be related to hyperacusis. Furthermore, there was an interaction between stimulation side and tinnitus in the parahippocampus. This may reflect an interference between tinnitus and spatial orientation.
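As a sketch of how a PCA over voxel-by-frequency responses can expose a tonotopy-like axis, here is a minimal numpy example; the response matrix is simulated and purely hypothetical, not the study's data or analysis code:

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_freq = 200, 8  # voxels along auditory cortex x pure-tone frequencies

# Simulated responses: a smooth low-to-high frequency preference gradient
# across voxels, plus measurement noise
gradient = np.linspace(-1, 1, n_vox)[:, None] * np.linspace(-1, 1, n_freq)
R = gradient + 0.1 * rng.normal(size=(n_vox, n_freq))

# PCA via SVD of the mean-centred matrix; the leading component captures
# the dominant frequency-preference gradient
Rc = R - R.mean(axis=0)
U, S, Vt = np.linalg.svd(Rc, full_matrices=False)
explained = S**2 / (S**2).sum()
print(explained[0])  # fraction of variance in the first component
```

With a clean simulated gradient the first component dominates; comparing such component loadings between groups is one way no-difference results in tonotopic organization can be framed.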
PubMed: 38928544
DOI: 10.3390/brainsci14060544

Brain Sciences May 2024 (Review)
Auditory spatial cues contribute to two distinct functions, of which one leads to explicit localization of sound sources and the other provides a location-linked representation of sound objects. Behavioral and imaging studies demonstrated right-hemispheric dominance for explicit sound localization. An early clinical case study documented the dissociation between explicit sound localization, which was heavily impaired, and fully preserved use of spatial cues for sound object segregation. The latter involves location-linked encoding of sound objects. We review here evidence pertaining to brain regions involved in location-linked representation of sound objects. Auditory evoked potential (AEP) and functional magnetic resonance imaging (fMRI) studies investigated this aspect by comparing the encoding of individual sound objects that changed their locations or remained stationary. A systematic search identified 1 AEP and 12 fMRI studies. Together with studies of the anatomical correlates of impaired spatial-cue-based sound object segregation after focal brain lesions, the present evidence indicates that the location-linked representation of sound objects strongly involves the left hemisphere and, to a lesser degree, the right hemisphere. Location-linked encoding of sound objects is present in several early-stage auditory areas and in the specialized temporal voice area. In these regions, emotional valence benefits from location-linked encoding as well.
PubMed: 38928534
DOI: 10.3390/brainsci14060535

Cell Reports Jun 2024
During behavior, the motor cortex sends copies of motor-related signals to sensory cortices. Here, we combine closed-loop behavior with large-scale physiology, projection-pattern-specific recordings, and circuit perturbations to show that neurons in mouse secondary motor cortex (M2) encode sensation and are influenced by expectation. When a movement unexpectedly produces a sound, M2 becomes dominated by sound-evoked activity. Sound responses in M2 are inherited partially from the auditory cortex and are routed back to the auditory cortex, providing a path for the reciprocal exchange of sensory-motor information during behavior. When the acoustic consequences of a movement become predictable, M2 responses to self-generated sounds are selectively gated off. These changes in single-cell responses are reflected in population dynamics, which are influenced by both sensation and expectation. Together, these findings reveal the embedding of sensory and expectation signals in motor cortical activity.
PubMed: 38923464
DOI: 10.1016/j.celrep.2024.114396

BioRxiv : the Preprint Server For... Jun 2024
Fragile X syndrome (FXS) is an X-linked disorder that often leads to intellectual disability, anxiety, and sensory hypersensitivity. While sound sensitivity (hyperacusis) is a distressing symptom in FXS, its neural basis is not well understood. It is postulated that hyperacusis may stem from temporal lobe hyperexcitability or dysregulation in top-down modulation. Studying the neural mechanisms underlying sound sensitivity in FXS using scalp electroencephalography (EEG) is challenging because the temporal and frontal regions have overlapping neural projections that are difficult to differentiate. To overcome this challenge, we conducted EEG source analysis on a group of 36 individuals with FXS and 39 matched healthy controls. Our goal was to characterize the spatial and temporal properties of the response to an auditory chirp stimulus. Our results showed that males with FXS exhibit excessive activation in the frontal cortex in response to the stimulus onset, which may reflect changes in top-down modulation of auditory processing. Additionally, during the chirp stimulus, individuals with FXS demonstrated a reduction in typical gamma phase synchrony, along with an increase in asynchronous gamma power, across multiple regions, most strongly in temporal cortex. Consistent with these findings, we observed a decrease in the signal-to-noise ratio, estimated by the ratio of synchronous to asynchronous gamma activity, in individuals with FXS. Furthermore, this ratio was highly correlated with performance in an auditory attention task. Compared to controls, males with FXS demonstrated elevated bidirectional frontotemporal information flow at chirp onset. The evidence indicates that both temporal lobe hyperexcitability and disruptions in top-down regulation play a role in auditory sensitivity disturbances in FXS. These findings have the potential to guide the development of therapeutic targets and back-translation strategies.
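The ratio of synchronous (phase-locked) to asynchronous (induced) gamma activity can be estimated in several ways; here is one minimal numpy sketch on simulated trials, in which the simulation parameters and the single-bin FFT estimator are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n_trials, n_samp = 1000, 60, 1000     # sampling rate, trials, samples
t = np.arange(n_samp) / fs
f_gamma = 40.0                            # gamma-band frequency of interest

# Each trial: a phase-locked ("synchronous") 40 Hz component, a larger
# random-phase ("asynchronous") 40 Hz component, and broadband noise
trials = np.array([
    0.5 * np.sin(2 * np.pi * f_gamma * t)                              # evoked
    + 1.0 * np.sin(2 * np.pi * f_gamma * t + rng.uniform(0, 2 * np.pi))  # induced
    + rng.normal(0, 1, n_samp)
    for _ in range(n_trials)])

spec = np.fft.rfft(trials, axis=1)
k = int(round(f_gamma * n_samp / fs))       # FFT bin at 40 Hz
# Phase-locked power: averaging complex spectra across trials cancels the
# random-phase component, leaving the synchronous part
sync_power = np.abs(spec[:, k].mean()) ** 2
total_power = (np.abs(spec[:, k]) ** 2).mean()
async_power = total_power - sync_power
print(sync_power / async_power)
```

With the induced component set twice as large as the evoked one, the estimated ratio comes out well below 1, mimicking the reduced synchronous-to-asynchronous ratio reported in FXS.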
PubMed: 38915683
DOI: 10.1101/2024.06.13.598957

BioRxiv : the Preprint Server For... Jun 2024
Auditory perception is established through experience-dependent stimuli exposure during sensitive developmental periods; however, little is known regarding the structural development of the central auditory pathway in humans. The present study characterized the regional developmental trajectories of the ascending auditory pathway from the brainstem to the auditory cortex from infancy through adolescence using a novel diffusion MRI-based tractography approach and along-tract analyses. We used diffusion tensor imaging (DTI) and neurite orientation dispersion and density imaging (NODDI) to quantify the magnitude and timing of auditory pathway microstructural maturation. We found spatially varying patterns of white matter maturation along the length of the tract, with inferior brainstem regions developing earlier than thalamocortical projections and left hemisphere tracts developing earlier than the right. These results help to characterize the processes that give rise to functional auditory processing and may provide a baseline for detecting abnormal development.
PubMed: 38915661
DOI: 10.1101/2024.06.10.597798

BioRxiv : the Preprint Server For... Jun 2024
Rapid learning confers significant advantages to animals in ecological environments. Despite the need for speed, animals appear to only slowly learn to associate rewarded actions with predictive cues. This slow learning is thought to be supported by a gradual expansion of predictive cue representation in the sensory cortex. However, evidence is growing that animals learn more rapidly than classical performance measures suggest, challenging the prevailing model of sensory cortical plasticity. Here, we investigated the relationship between learning and sensory cortical representations. We trained mice on an auditory go/no-go task that dissociated the rapid acquisition of task contingencies (learning) from its slower expression (performance). Optogenetic silencing demonstrated that the auditory cortex (AC) drives both rapid learning and slower performance gains but becomes dispensable at expert. Rather than enhancement or expansion of cue representations, two-photon calcium imaging of AC excitatory neurons throughout learning revealed two higher-order signals that were causal to learning and performance. First, a reward prediction (RP) signal emerged rapidly within tens of trials, was present after action-related errors only early in training, and faded at expert levels. Strikingly, silencing at the time of the RP signal impaired rapid learning, suggesting it serves an associative and teaching role. Second, a distinct cell ensemble encoded and controlled licking suppression that drove the slower performance improvements. These two ensembles were spatially clustered but uncoupled from underlying sensory representations, indicating a higher-order functional segregation within AC. Our results reveal that the sensory cortex manifests higher-order computations that separably drive rapid learning and slower performance improvements, reshaping our understanding of the fundamental role of the sensory cortex.
PubMed: 38915657
DOI: 10.1101/2024.06.10.597946

BioRxiv : the Preprint Server For... Jun 2024
Socially coordinated threat responses support the survival of animal groups. Given their distinct social roles, males and females must differ in such coordination. Here, we report such differences during the synchronization of auditory-conditioned freezing in mouse dyads. To study the interaction of emotional states with social cues underlying synchronization, we modulated emotional states with prior stress or modified the social cues by pairing unfamiliar or opposite-sex mice. In same-sex dyads, males exhibited more robust synchrony than females. Stress disrupted male synchrony in a prefrontal cortex-dependent manner but enhanced it in females. Unfamiliarity moderately reduced synchrony in males but not in females. In dyads with opposite-sex partners, fear synchrony was resilient to both stress and unfamiliarity. Decomposing the synchronization process in the same-sex dyads revealed sex-specific behavioral strategies correlated with synchrony magnitude: following partners' state transitions in males and retroacting synchrony-breaking actions in females. Those were altered by stress and unfamiliarity. The opposite-sex dyads exhibited no synchrony-correlated strategy. These findings reveal sex-specific adaptations of socio-emotional integration defining coordinated behavior and suggest that sex-recognition circuits confer resilience to stress and unfamiliarity in opposite-sex dyads.
PubMed: 38915653
DOI: 10.1101/2024.06.09.598132