-
Heliyon May 2024
Autosomal recessive primary microcephaly (MCPH, OMIM: 251200) is a neurodevelopmental disorder characterized by a marked reduction in brain size, particularly of the cerebral cortex, with otherwise normal brain architecture and non-progressive intellectual disability. MCPH1 (OMIM: 607117) has been identified as a gene that causes primary microcephaly. Here we report a case of autosomal recessive primary microcephaly caused by a novel variant in the MCPH1 gene. Head circumference was measured by magnetic resonance imaging (MRI), while the Wechsler Intelligence Scale was used to evaluate the intelligence of the individual being tested. B-ultrasound was used to assess gonadal development, and routine semen analysis was used to assess sperm status. Whole-exome sequencing (WES) was performed on the proband, and Sanger sequencing was conducted on the proband's parents to determine whether the novel variant in the MCPH1 gene was present. The effect of the mutation on the splicing of MCPH1 was verified by a minigene approach. The proband presented with autosomal recessive primary microcephaly and azoospermia. A novel homozygous splice-site mutation (c.233+2T > G) of the MCPH1 gene was identified, which was inherited from his parents. The minigene approach confirmed that c.233+2T > G affects the splicing of MCPH1. Our findings therefore expand the mutation spectrum of the MCPH1 gene and may be useful in the diagnosis and gene therapy of MCPH.
PubMed: 38818167
DOI: 10.1016/j.heliyon.2024.e30285 -
Schizophrenia Bulletin Open Jan 2023
BACKGROUND AND HYPOTHESIS
The auditory cortex (AC) may play a central role in the pathophysiology of schizophrenia and auditory hallucinations (AH). Previous schizophrenia studies report thinner AC and impaired AC function, as indicated by decreased N100 amplitude of the auditory evoked potential. However, whether these structural and functional alterations are linked to AH in schizophrenia remains poorly understood.
STUDY DESIGN
Patients with a schizophrenia spectrum disorder (SCZ), including patients with a lifetime experience of AH (AH+) and without (AH-), and healthy controls (HC) underwent magnetic resonance imaging (39 SCZ, 22 AH+, 17 AH-, and 146 HC) and electroencephalography (33 SCZ, 17 AH+, 16 AH-, and 144 HC). Cortical thickness of the primary (AC1, Heschl's gyrus) and secondary (AC2, Heschl's sulcus and the planum temporale) AC was compared between SCZ and controls and between AH+, AH-, and controls. To examine whether the association between AC thickness and N100 amplitude differed between groups, we used regression models with interaction terms.
STUDY RESULTS
N100 amplitude was nominally smaller in SCZ (P = .03, d = 0.42) and in AH- (P = .020, d = 0.61), while AC2 was nominally thinner in AH+ (P = .02, d = 0.53) compared with controls. AC1 thickness was positively associated with N100 amplitude in SCZ (t = 2.56, P = .016) and AH- (t = 3.18, P = .008), while AC2 thickness was positively associated with N100 amplitude in SCZ (t = 2.37, P = .024) and in AH+ (t = 2.68, P = .019).
CONCLUSIONS
The novel findings of positive associations between AC thickness and N100 amplitude in SCZ suggest that a common neural substrate may underlie AC thickness and N100 amplitude alterations.
PubMed: 38812720
DOI: 10.1093/schizbullopen/sgad015 -
Journal of Integrative Neuroscience Apr 2024
BACKGROUND
Magnetoencephalography (MEG) is a non-invasive imaging technique for directly measuring the external magnetic field generated by synchronously activated pyramidal neurons in the brain. The optically pumped magnetometer (OPM), with its less expensive, non-cryogenic, movable, and user-friendly custom design, has the potential to transform MEG-based functional neuroimaging.
METHODS
An array of OPMs covering opposite sides of a subject's head is placed inside a magnetically shielded room (MSR), and responses evoked from the auditory cortices are measured.
RESULTS
High signal-to-noise ratio auditory evoked response fields (AEFs) were detected by a wearable OPM-MEG system in a MSR, for which a flexible helmet was specially designed to minimize the sensor-to-head distance, along with a set of bi-planar coils developed for background field and gradient nulling. Neuronal current sources activated in AEF experiments were localized and the auditory cortices showed the highest activities. Performance of the hybrid optically pumped magnetometer-magnetoencephalography/electroencephalography (OPM-MEG/EEG) system was also assessed.
CONCLUSIONS
The multi-channel OPM-MEG system performs well in a custom-built MSR equipped with bi-planar coils and detects human AEFs with a flexible helmet. Moreover, the similarities and differences between auditory evoked potentials (AEPs) and AEFs are discussed, while the operation of OPM-MEG sensors in conjunction with EEG electrodes provides an encouraging combination for the exploration of hybrid OPM-MEG/EEG systems.
Topics: Humans; Magnetoencephalography; Evoked Potentials, Auditory; Auditory Cortex; Electroencephalography; Adult; Male
PubMed: 38812381
DOI: 10.31083/j.jin2305093 -
Journal of Neurophysiology Jul 2024
Psilocybin is a serotonergic psychedelic believed to have therapeutic potential for neuropsychiatric conditions. Despite the well-documented prevalence of perceptual alterations, hallucinations, and synesthesia associated with psychedelic experiences, little is known about how psilocybin affects sensory cortex or alters the activity of neurons in awake animals. To investigate, we conducted two-photon imaging experiments in auditory cortex of awake mice and collected video of free-roaming mouse behavior, both at baseline and during psilocybin treatment. In comparison with pre-dose neural activity, a 2 mg/kg ip dose of psilocybin initially increased the amplitude of neural responses to sound. Thirty minutes post-dose, behavioral activity and neural response amplitudes decreased, yet functional connectivity increased. In contrast, control mice given intraperitoneal saline injections showed no significant changes in either neural or behavioral activity across conditions. Notably, neuronal stimulus selectivity remained stable during psilocybin treatment, for both tonotopic cortical maps and single-cell pure-tone frequency tuning curves. Our results mirror similar findings regarding the effects of serotonergic psychedelics in visual cortex and suggest that psilocybin modulates the balance of intrinsic versus stimulus-driven influences on neural activity in auditory cortex.

Recent studies have shown promising therapeutic potential for psychedelics in treating neuropsychiatric conditions. Musical experience during psilocybin-assisted therapy is predictive of treatment outcome, yet little is known about how psilocybin affects auditory processing. Here, we conducted two-photon imaging experiments in auditory cortex of awake mice that received a dose of psilocybin. Our results suggest that psilocybin modulates the roles of intrinsic neural activity versus stimulus-driven influences on auditory perception.
Topics: Animals; Auditory Cortex; Mice; Psilocybin; Hallucinogens; Male; Mice, Inbred C57BL; Neurons; Auditory Perception; Acoustic Stimulation
PubMed: 38810366
DOI: 10.1152/jn.00124.2024 -
PloS One 2024
When comprehending speech, listeners can use information encoded in visual cues from a face to enhance auditory speech comprehension. For example, prior work has shown that mouth movements reflect articulatory features of speech segments and durational information, while pitch and speech amplitude are primarily cued by eyebrow and head movements. Little is known about how the visual perception of segmental and prosodic speech information is influenced by linguistic experience. Using eye-tracking, we studied how perceivers' visual scanning of different regions on a talking face predicts accuracy in a task targeting segmental versus prosodic information, and also asked how this was influenced by language familiarity. Twenty-four native English perceivers heard two audio sentences in either English or Mandarin (an unfamiliar, non-native language), which sometimes differed in segmental or prosodic information (or both). Perceivers then saw a silent video of a talking face, and judged whether that video matched either the first or second audio sentence (or whether both sentences were the same). First, increased looking to the mouth predicted correct responses only for non-native language trials. Second, the start of a successful search for speech information in the mouth area was significantly delayed in non-native versus native trials, but only when there were prosodic differences alone in the auditory sentences, and not when there were segmental differences. Third, in correct trials, the saccade amplitude in native language trials was significantly greater than in non-native trials, indicating more intensely focused fixations in the latter.
Taken together, these results suggest that mouth-looking was generally more evident when processing a non-native versus native language in all analyses, but fascinatingly, when measuring perceivers' latency to fixate the mouth, this language effect was largest in trials where only prosodic information was useful for the task.
Topics: Humans; Female; Male; Language; Adult; Speech Perception; Phonetics; Young Adult; Face; Visual Perception; Eye Movements; Speech; Eye-Tracking Technology
PubMed: 38805447
DOI: 10.1371/journal.pone.0304150 -
Current Research in Neurobiology 2024
Tonotopic organization of the auditory cortex has been extensively studied in many mammalian species using various methodologies and physiological preparations. Tonotopy mapping in primates, however, is more limited due to constraints such as cortical folding, use of anesthetized subjects, and mapping methodology. Here we applied a combination of through-skull and through-window intrinsic optical signal imaging, wide-field calcium imaging, and neural probe recording techniques in awake marmosets (Callithrix jacchus), a New World monkey with most of its auditory cortex located on a flat brain surface. Coarse tonotopic gradients, including a recently described rostral-temporal (RT) to parabelt gradient, were revealed by through-skull imaging of intrinsic optical signals and were subsequently validated by single-unit recording. Furthermore, these tonotopic gradients were observed in more detail through chronically implanted cranial windows, with additional verification of the experimental design. Moreover, the tonotopy mapped by the intrinsic-signal imaging methods was verified by wide-field calcium imaging in an AAV-GCaMP labeled subject. After these validations, and with further effort to expand the field of view more rostrally in both windowed and through-skull subjects, an additional putative tonotopic gradient was observed rostral to area RT that has not been previously described by the standard model of tonotopic organization of the primate auditory cortex. Together, these results provide the most comprehensive tonotopy mapping data in an awake primate species, with unprecedented coverage and detail in the rostral portion, and support a caudorostrally arranged mesoscale organization of at least three repeats of functional gradients in the primate auditory cortex, similar to the ventral stream of the primate visual cortex.
PubMed: 38799765
DOI: 10.1016/j.crneur.2024.100132 -
BioRxiv : the Preprint Server For... May 2024
When we speak, we not only make movements with our mouth, lips, and tongue, but we also hear the sound of our own voice. Thus, speech production in the brain involves not only controlling the movements we make, but also auditory and sensory feedback. Auditory responses are typically suppressed during speech production compared to perception, but how this manifests across space and time is unclear. Here we recorded intracranial EEG in seventeen pediatric, adolescent, and adult patients with medication-resistant epilepsy who performed a reading/listening task to investigate how auditory responses are modulated during speech production. We identified onset and sustained responses to speech in bilateral auditory cortex, with a selective suppression of onset responses during speech production. Onset responses provide a temporal landmark during speech perception that is redundant with forward prediction during speech production. Phonological feature tuning in these "onset suppression" electrodes remained stable between perception and production. Notably, the posterior insula responded at sentence onset for both perception and production, suggesting a role in multisensory integration during feedback control.
PubMed: 38798574
DOI: 10.1101/2024.05.14.593257 -
BioRxiv : the Preprint Server For... May 2024
Listeners readily extract multi-dimensional auditory objects such as a 'localized talker' from complex acoustic scenes with multiple talkers. Yet, the neural mechanisms underlying simultaneous encoding and linking of different sound features - for example, a talker's voice and location - are poorly understood. We analyzed invasive intracranial recordings in neurosurgical patients attending to a localized talker in real-life cocktail party scenarios. We found that sensitivity to an individual talker's voice and location features was distributed throughout auditory cortex and that neural sites exhibited a gradient from sensitivity to a single feature to joint sensitivity to both features. On a population level, cortical response patterns of both dual-feature sensitive and single-feature sensitive sites revealed simultaneous encoding of an attended talker's voice and location features. However, for single-feature sensitive sites, the representation of the primary feature was more precise. Further, sites that selectively tracked an attended speech stream concurrently encoded an attended talker's voice and location features, indicating that such sites combine selective tracking of an attended auditory object with encoding of the object's features. Finally, we found that attending a localized talker selectively enhanced temporal coherence between single-feature voice sensitive sites and single-feature location sensitive sites, providing an additional mechanism for linking voice and location in multi-talker scenes. These results demonstrate that a talker's voice and location features are linked during multi-dimensional object formation in naturalistic multi-talker scenes by joint population coding as well as by temporal coherence between neural sites.
SIGNIFICANCE STATEMENT
Listeners effortlessly extract auditory objects from complex, naturalistic spatial acoustic scenes consisting of multiple sound sources. Yet, how the brain links different sound features to form a multi-dimensional auditory object is poorly understood. We investigated how neural responses encode and integrate an attended talker's voice and location features in spatial multi-talker sound scenes to elucidate which neural mechanisms underlie simultaneous encoding and linking of different auditory features. Our results show that joint population coding as well as temporal coherence mechanisms contribute to distributed multi-dimensional auditory object encoding. These findings shed new light on cortical functional specialization and multi-dimensional auditory object formation in complex, naturalistic listening scenes.
HIGHLIGHTS
Cortical responses to an attended talker exhibit a distributed gradient, ranging from sites that are sensitive to both a talker's voice and location (dual-feature sensitive sites) to sites that are sensitive to either voice or location (single-feature sensitive sites).
Population response patterns of dual-feature sensitive sites encode voice and location features of the attended talker in multi-talker scenes jointly and with equal precision.
Despite their sensitivity to a single feature at the level of individual cortical sites, population response patterns of single-feature sensitive sites also encode location and voice features of a talker jointly, but with higher precision for the feature they are primarily sensitive to.
Neural sites that selectively track an attended speech stream concurrently encode the attended talker's voice and location features.
Attention selectively enhances temporal coherence between voice and location selective sites over time.
Joint population coding as well as temporal coherence mechanisms underlie distributed multi-dimensional auditory object encoding in auditory cortex.
PubMed: 38798551
DOI: 10.1101/2024.05.13.593814 -
Hearing Research Aug 2024
Neurons within a neuronal network can be grouped by bottom-up and top-down influences using synchrony in neuronal oscillations. This creates the representation of perceptual objects from sensory features. Oscillatory activity can be differentiated into stimulus-phase-locked (evoked) and non-phase-locked (induced) components. The former is mainly determined by sensory input, the latter by higher-level (cortical) processing. Effects of auditory deprivation on cortical oscillations have been studied in congenitally deaf cats (CDCs) using cochlear implant (CI) stimulation. CI-induced alpha, beta, and gamma activity were compromised in the auditory cortex of CDCs. Furthermore, top-down information flow between secondary and primary auditory areas in hearing cats, conveyed by induced alpha oscillations, was lost in CDCs. Here we used the matching pursuit algorithm to assess components of such oscillatory activity in local field potentials recorded in primary field A1. In addition to the loss of induced alpha oscillations, we also found a loss of evoked theta activity in CDCs. The loss of theta and alpha activity in CDCs can be directly related to reduced high-frequency (gamma-band) activity due to cross-frequency coupling. Here we quantified such cross-frequency coupling in adult (1) hearing-experienced, acoustically stimulated cats (aHCs); (2) hearing-experienced cats following acute pharmacological deafening and subsequent cochlear implantation, i.e., electrically stimulated hearing cats (eHCs); and (3) electrically stimulated CDCs. We found significant cross-frequency coupling in all animal groups in > 70% of auditory-responsive sites. The predominant coupling in aHCs and eHCs was between theta/alpha phase and gamma power. In CDCs such coupling was lost and replaced by alpha oscillations coupling to delta/theta phase. Thus, alpha/theta oscillations synchronize high-frequency gamma activity only in hearing-experienced cats.
The absence of induced alpha and theta oscillations contributes to the loss of induced gamma power in CDCs, thereby signifying impaired local network activity.
Topics: Animals; Cats; Auditory Cortex; Deafness; Acoustic Stimulation; Gamma Rhythm; Cochlear Implants; Alpha Rhythm; Evoked Potentials, Auditory; Algorithms; Auditory Pathways; Disease Models, Animal; Theta Rhythm
PubMed: 38797035
DOI: 10.1016/j.heares.2024.109032 -
Biomedicines Apr 2024
This study investigates audiogenic epilepsy in Krushinsky-Molodkina (KM) rats, questioning the efficacy of conventional EEG techniques in capturing seizures during animal restraint. Using a wireless EEG system that allows unrestricted movement, our aim was to gather ecologically valid data. Nine male KM rats, prone to audiogenic seizures, received implants of wireless EEG transmitters that target specific seizure-related brain regions. These regions included the inferior colliculus (IC), pontine reticular nucleus, oral part (PnO), ventrolateral periaqueductal gray (VLPAG), dorsal area of the secondary auditory cortex (AuD), and motor cortex (M1), facilitating seizure observation without movement constraints. Our findings indicate that targeted neural intervention via electrode implantation significantly reduced convulsive seizures in approximately half of the subjects, suggesting therapeutic potential. Furthermore, the amplitude of brain activity in the IC, PnO, and AuD upon audiogenic stimulus onset significantly influenced seizure severity and nature, highlighting these areas as pivotal for epileptic propagation. Severe cases exhibited dual waves of seizure generalization, indicative of intricate neural network interactions. Distinctive interplay between specific brain regions, disrupted during convulsive activity, suggests neural circuit reconfiguration in response to escalating seizure intensity. These discoveries challenge conventional methodologies, opening avenues for novel approaches in epilepsy research and therapeutic interventions.
PubMed: 38790907
DOI: 10.3390/biomedicines12050946