The Journal of Neuroscience : the... Jan 2023
Animal communication sounds exhibit complex temporal structure because of the amplitude fluctuations that comprise the sound envelope. In human speech, envelope modulations drive synchronized activity in auditory cortex (AC), which correlates strongly with comprehension (Giraud and Poeppel, 2012; Peelle and Davis, 2012; Haegens and Zion Golumbic, 2018). Studies of envelope coding in single neurons, performed in nonhuman animals, have focused on periodic amplitude modulation (AM) stimuli and used response metrics that are not easy to juxtapose with data from humans. In this study, we sought to bridge these fields. Specifically, we looked directly at the temporal relationship between stimulus envelope and spiking, and we assessed whether the apparent diversity across neurons' AM responses contributes to the population representation of speech-like sound envelopes. We gathered responses from single neurons to vocoded speech stimuli and compared them to sinusoidal AM responses in AC of alert, freely moving Mongolian gerbils of both sexes. While AC neurons displayed heterogeneous tuning to AM rate, their temporal dynamics were stereotyped. Preferred response phases accumulated near the onsets of sinusoidal AM periods for slower rates (<8 Hz), and an over-representation of amplitude edges was apparent in population responses to both sinusoidal AM and vocoded speech envelopes. Crucially, this encoding bias imparted a decoding benefit: a classifier could discriminate vocoded speech stimuli using summed population activity, while higher frequency modulations required a more sophisticated decoder that tracked spiking responses from individual cells. Together, our results imply that the envelope structure relevant to parsing an acoustic stream could be read out from a distributed, redundant population code.

Animal communication sounds have rich temporal structure and are often produced in extended sequences, including the syllabic structure of human speech.
Although the auditory cortex (AC) is known to play a crucial role in representing speech syllables, the contribution of individual neurons remains uncertain. Here, we characterized the representations of both simple, amplitude-modulated sounds and complex, speech-like stimuli within a broad population of cortical neurons, and we found an overrepresentation of amplitude edges. Thus, a phasic, redundant code in auditory cortex can provide a mechanistic explanation for segmenting acoustic streams like human speech.
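The amplitude-edge bias described in this abstract can be illustrated with a toy computation (a hedged sketch on a synthetic stimulus, not the authors' analysis): generate a slow sinusoidal AM envelope and half-wave rectify its derivative, so that only rising amplitude edges survive, one per modulation cycle.

```python
import numpy as np

def am_envelope(rate_hz, dur_s=1.0, fs=1000):
    """Sinusoidal amplitude-modulation envelope in [0, 1], starting at a trough."""
    t = np.arange(0, dur_s, 1.0 / fs)
    return t, 0.5 * (1.0 + np.sin(2 * np.pi * rate_hz * t - np.pi / 2))

def rising_edges(env, fs=1000):
    """Half-wave rectified envelope derivative: positive only while the
    amplitude is rising, a toy stand-in for an edge-biased response."""
    return np.maximum(np.gradient(env) * fs, 0.0)

t, env = am_envelope(4.0)        # 4 Hz AM, within the slow (<8 Hz) regime
edges = rising_edges(env)
# Strong "edge responses" form one contiguous burst per modulation cycle
strong = edges > 0.5 * edges.max()
n_bursts = int((np.diff(strong.astype(int)) == 1).sum())
```

Summing such rectified signals across a population yields an envelope-onset signal that a simple summed-activity decoder could use, in the spirit of the result above.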
Topics: Male; Animals; Female; Humans; Auditory Perception; Speech; Acoustic Stimulation; Sound; Speech Perception; Auditory Cortex
PubMed: 36379706
DOI: 10.1523/JNEUROSCI.1616-21.2022
Human Brain Mapping Oct 2023
Adults and children show remarkable differences in cortical auditory activation which, in children, have shown relevance for cognitive performance, specifically inhibitory control. However, it has not been tested whether these differences translate to functional differences in response inhibition between adults and children. We recorded auditory responses of adults and school-aged children (6-14 years) using combined magneto- and electroencephalography (M/EEG) during passive listening conditions and an auditory Go/No-go task. The associations between auditory cortical responses and inhibition performance measures diverge between adults and children; while in children the brain-behavior associations are not significant, or stronger responses are beneficial, adults show negative associations between auditory cortical responses and inhibitory performance. Furthermore, we found differences in brain responses between adults and children; the late (~200 ms post-stimulation) adult peak activation shifts from auditory to frontomedial areas. In contrast, children show prolonged obligatory responses in the auditory cortex. Together, this likely translates to a functional difference between adults and children in the cortical resources for performance consistency in auditory-based cognitive tasks.
Topics: Humans; Adult; Child; Acoustic Stimulation; Evoked Potentials, Auditory; Task Performance and Analysis; Electroencephalography; Auditory Cortex; Auditory Perception
PubMed: 37493309
DOI: 10.1002/hbm.26418
Journal of Neurophysiology May 2021
Selective attention is necessary to sift through, form a coherent percept of, and make behavioral decisions on the vast amount of information present in most sensory environments. How and where selective attention is employed in cortex, and how this perceptual information then informs the relevant behavioral decisions, are still not well understood. Studies probing selective attention and decision-making in visual cortex have been enlightening as to how sensory attention might work in that modality; whether or not similar mechanisms are employed in auditory attention is not yet clear. Therefore, we trained rhesus macaques on a feature-selective attention task, where they switched between reporting changes in temporal (amplitude modulation, AM) and spectral (carrier bandwidth) features of a broadband noise stimulus. We investigated how the encoding of these features by single neurons in primary (A1) and secondary (middle lateral belt, ML) auditory cortex was affected by the different attention conditions. We found that neurons in A1 and ML showed mixed selectivity to the sound and task features. We found no difference in AM encoding between the attention conditions. We found that choice-related activity in both A1 and ML neurons shifts between attentional conditions. This finding suggests that choice-related activity in auditory cortex does not simply reflect motor preparation or action and supports the relationship between reported choice-related activity and the decision and perceptual process.

We recorded from primary and secondary auditory cortex while monkeys performed a nonspatial feature attention task. Both areas exhibited rate-based choice-related activity. The manifestation of choice-related activity was attention dependent, suggesting that choice-related activity in auditory cortex does not simply reflect arousal or motor influences but relates to the specific perceptual choice.
Topics: Animals; Attention; Auditory Cortex; Auditory Perception; Behavior, Animal; Choice Behavior; Electrocorticography; Female; Macaca mulatta; Male; Psychomotor Performance
PubMed: 33788616
DOI: 10.1152/jn.00406.2020
Hearing Research Sep 2023 (Review)
Direct neural recordings from human auditory cortex have demonstrated encoding for acoustic-phonetic features of consonants and vowels. Neural responses also encode distinct acoustic amplitude cues related to timing, such as those that occur at the onset of a sentence after a silent period or the onset of the vowel in each syllable. Here, we used a group reduced rank regression model to show that distributed cortical responses support a low-dimensional latent state representation of temporal context in speech. The timing cues each capture more unique variance than all other phonetic features and exhibit rotational or cyclical dynamics in latent space from activity that is widespread over the superior temporal gyrus. We propose that these spatially distributed timing signals could serve to provide temporal context for, and possibly bind across time, the concurrent processing of individual phonetic features, to compose higher-order phonological (e.g. word-level) representations.
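The modeling approach can be illustrated with classical reduced-rank regression (a minimal sketch on synthetic data, not the group model used in the study): fit ordinary least squares, then project the coefficients onto the top singular vectors of the fitted values, forcing all responses through a low-dimensional latent state.

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Classical reduced-rank regression: OLS fit followed by projection of
    the coefficients onto the top-`rank` right singular vectors of the
    fitted values, yielding a low-dimensional latent bottleneck."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    V_r = Vt[:rank].T                # (q, rank) latent response axes
    return B_ols @ V_r @ V_r.T       # (p, q) coefficients of rank <= `rank`

# toy data: 200 samples, 10 predictors, 8 responses with rank-2 structure
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
B_true = rng.standard_normal((10, 2)) @ rng.standard_normal((2, 8))
Y = X @ B_true + 0.1 * rng.standard_normal((200, 8))
B_hat = reduced_rank_regression(X, Y, rank=2)
```

The `rank` bottleneck plays the role of the low-dimensional latent state: every response channel is explained through the same few shared dimensions.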
Topics: Humans; Speech; Speech Perception; Temporal Lobe; Auditory Cortex; Phonetics; Acoustic Stimulation
PubMed: 37441880
DOI: 10.1016/j.heares.2023.108838
Nature Communications Jun 2022
The mammalian frontal and auditory cortices are important for vocal behavior. Here, using local-field potential recordings, we demonstrate that the timing and spatial patterns of oscillations in the fronto-auditory network of vocalizing bats (Carollia perspicillata) predict the purpose of vocalization: echolocation or communication. Transfer entropy analyses revealed predominant top-down (frontal-to-auditory cortex) information flow during spontaneous activity and pre-vocal periods. The dynamics of information flow depend on the behavioral role of the vocalization and on the timing relative to vocal onset. We observed the emergence of predominant bottom-up (auditory-to-frontal) information transfer during the post-vocal period specific to echolocation pulse emission, leading to self-directed acoustic feedback. Electrical stimulation of frontal areas selectively enhanced responses to sounds in auditory cortex. These results reveal unique changes in information flow across sensory and frontal cortices, potentially driven by the purpose of the vocalization in a highly vocal mammalian model.
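Transfer entropy, the directed information-flow measure used here, can be sketched for discretized signals with one-step histories (a simplified toy, not the authors' LFP pipeline): it quantifies how much the past of X reduces uncertainty about the next value of Y beyond what Y's own past already explains.

```python
import math
from collections import Counter

import numpy as np

def transfer_entropy(x, y, base=2):
    """Transfer entropy TE(X -> Y) for discrete signals, one-step histories:
    the extra information past X carries about the next Y beyond past Y."""
    trip = list(zip(y[1:], y[:-1], x[:-1]))     # (y_next, y_past, x_past)
    n = len(trip)
    c_full = Counter(trip)
    c_pair = Counter((yn, yp) for yn, yp, _ in trip)
    c_cond = Counter((yp, xp) for _, yp, xp in trip)
    c_past = Counter(yp for _, yp, _ in trip)
    te = 0.0
    for (yn, yp, xp), c in c_full.items():
        p_full = c / c_cond[(yp, xp)]           # p(y_next | y_past, x_past)
        p_self = c_pair[(yn, yp)] / c_past[yp]  # p(y_next | y_past)
        te += (c / n) * math.log(p_full / p_self, base)
    return te

# toy directed coupling: y copies x with a one-step lag
rng = np.random.default_rng(1)
x = rng.integers(0, 2, 5000)
y = np.empty_like(x)
y[0] = 0
y[1:] = x[:-1]                  # y(t) = x(t-1): strong X -> Y flow
te_xy = transfer_entropy(x, y)  # near 1 bit: past X fully predicts next Y
te_yx = transfer_entropy(y, x)  # near 0 bits: no information in the reverse direction
```

The asymmetry between `te_xy` and `te_yx` is what makes the measure directional, which is how predominant top-down versus bottom-up flow can be distinguished.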
Topics: Acoustic Stimulation; Acoustics; Animals; Auditory Cortex; Chiroptera; Echolocation; Mammals; Vocalization, Animal
PubMed: 35752629
DOI: 10.1038/s41467-022-31230-6
Nature Neuroscience Dec 2023
The human auditory system extracts rich linguistic abstractions from speech signals. Traditional approaches to understanding this complex process have used linear feature-encoding models, with limited success. Artificial neural networks excel in speech recognition tasks and offer promising computational models of speech processing. We used speech representations in state-of-the-art deep neural network (DNN) models to investigate neural coding from the auditory nerve to the speech cortex. Representations in hierarchical layers of the DNN correlated well with the neural activity throughout the ascending auditory system. Unsupervised speech models performed at least as well as other purely supervised or fine-tuned models. Deeper DNN layers were better correlated with the neural activity in the higher-order auditory cortex, with computations aligned with phonemic and syllabic structures in speech. Accordingly, DNN models trained on either English or Mandarin predicted cortical responses in native speakers of each language. These results reveal convergence between DNN model representations and the biological auditory pathway, offering new approaches for modeling neural coding in the auditory cortex.
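The layer-to-brain comparison rests on a standard linearized encoding model. A minimal sketch (with synthetic features and responses, not the study's stimuli or recordings): fit ridge regression from model features to a neural response on a training split, and score the alignment as the held-out correlation between predicted and observed responses.

```python
import numpy as np

def encoding_score(features, response, alpha=1.0, train_frac=0.8):
    """Ridge-regression encoding model: fit response ~ features @ w on a
    train split, score by Pearson correlation on the held-out split."""
    n = len(response)
    k = int(train_frac * n)
    Xtr, Xte = features[:k], features[k:]
    ytr, yte = response[:k], response[k:]
    w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(Xtr.shape[1]), Xtr.T @ ytr)
    return np.corrcoef(Xte @ w, yte)[0, 1]

# toy comparison: one feature set actually drives the response, one is noise,
# standing in for a well-aligned vs. a poorly aligned model layer
rng = np.random.default_rng(2)
deep = rng.standard_normal((1000, 20))
shallow = rng.standard_normal((1000, 20))
resp = deep @ rng.standard_normal(20) + 0.5 * rng.standard_normal(1000)
score_deep = encoding_score(deep, resp)        # high held-out correlation
score_shallow = encoding_score(shallow, resp)  # near zero
```

Comparing such scores layer by layer and brain region by brain region is the standard way a correspondence like "deeper layers match higher-order cortex" is quantified.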
Topics: Humans; Speech; Auditory Pathways; Auditory Cortex; Neural Networks, Computer; Perception; Speech Perception
PubMed: 37904043
DOI: 10.1038/s41593-023-01468-4
Cerebral Cortex (New York, N.Y. : 1991) Jun 2023
Neurons in primary visual cortex (V1) may not only signal current visual input but also relevant contextual information such as reward expectancy and the subject's spatial position. Such contextual representations need not be restricted to V1 but could participate in a coherent mapping throughout sensory cortices. Here, we show that spiking activity coherently represents a location-specific mapping across auditory cortex (AC) and lateral, secondary visual cortex (V2L) of freely moving rats engaged in a sensory detection task on a figure-8 maze. Single-unit activity of both areas showed extensive similarities in terms of spatial distribution, reliability, and position coding. Importantly, reconstructions of subject position based on spiking activity displayed decoding errors that were correlated between areas. Additionally, we found that head direction, but not locomotor speed or head angular velocity, was an important determinant of activity in AC and V2L. By contrast, variables related to the sensory task cues or to trial correctness and reward were not markedly encoded in AC and V2L. We conclude that sensory cortices participate in coherent, multimodal representations of the subject's sensory-specific location. These may provide a common reference frame for distributed cortical sensory and motor processes and may support crossmodal predictive processing.
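Position reconstruction from population spiking can be caricatured with a nearest-template decoder (a toy sketch on synthetic tuning curves; the study's actual reconstruction method is not reproduced here): estimate each position bin's mean population rate vector from training data, then assign each test sample to the closest template.

```python
import numpy as np

def fit_templates(rates, positions, n_bins):
    """Mean population rate vector per position bin ('tuning templates')."""
    return np.array([rates[positions == b].mean(axis=0) for b in range(n_bins)])

def decode(rates, templates):
    """Nearest-template decoding of position bin from population activity."""
    d = ((rates[:, None, :] - templates[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# toy data: 5 cells with Gaussian spatial tuning over 8 position bins
rng = np.random.default_rng(3)
n_bins, n_cells = 8, 5
centers = np.linspace(0, n_bins - 1, n_cells)
pos = rng.integers(0, n_bins, 2000)
tuning = np.exp(-0.5 * ((pos[:, None] - centers[None, :]) / 1.0) ** 2)
rates = tuning + 0.2 * rng.standard_normal((2000, n_cells))
templates = fit_templates(rates[:1000], pos[:1000], n_bins)
decoded = decode(rates[1000:], templates)
errors = np.abs(decoded - pos[1000:])
```

Running such a decoder independently on two areas and correlating their `errors` vectors is one simple way to test for the kind of shared, coherent position representation reported above.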
Topics: Rats; Animals; Reproducibility of Results; Neurons; Auditory Cortex; Visual Cortex
PubMed: 36967108
DOI: 10.1093/cercor/bhad045
The Journal of Neuroscience : the... Feb 2023
A key question in auditory neuroscience is to what extent brain regions are functionally specialized for processing specific sound features, such as location and identity. In auditory cortex, correlations between neural activity and sounds support both the specialization of distinct cortical subfields and encoding of multiple sound features within individual cortical areas. However, few studies have tested the contribution of auditory cortex to hearing in multiple contexts. Here we determined the role of ferret primary auditory cortex in both spatial and nonspatial hearing by reversibly inactivating the middle ectosylvian gyrus during behavior using cooling (n = 2 females) or optogenetics (n = 1 female). Optogenetic experiments used the mDLx promoter to express Channelrhodopsin-2 in GABAergic interneurons, and we confirmed both viral expression (n = 2 females) and light-driven suppression of spiking activity in auditory cortex, recorded using Neuropixels under anesthesia (n = 465 units from 2 additional untrained female ferrets). Cortical inactivation via cooling or optogenetics impaired vowel discrimination in colocated noise. Ferrets implanted with cooling loops were tested in additional conditions that revealed no deficit when identifying vowels in clean conditions, or when the temporally coincident vowel and noise were spatially separated by 180 degrees. These animals did, however, show impaired sound localization when inactivating the same auditory cortical region implicated in vowel discrimination in noise. Our results demonstrate that, as a brain region showing mixed selectivity for spatial and nonspatial features of sound, primary auditory cortex contributes to multiple forms of hearing.

Neurons in primary auditory cortex are often sensitive to the location and identity of sounds. Here we inactivated auditory cortex during spatial and nonspatial listening tasks using cooling or optogenetics.
Auditory cortical inactivation impaired multiple behaviors, demonstrating a role in both the analysis of sound location and identity and confirming a functional contribution of mixed selectivity observed in neural activity. Parallel optogenetic experiments in two additional untrained ferrets linked behavior to physiology by demonstrating that expression of Channelrhodopsin-2 permitted rapid light-driven suppression of auditory cortical activity recorded under anesthesia.
Topics: Animals; Female; Auditory Cortex; Ferrets; Channelrhodopsins; Acoustic Stimulation; Sound Localization; Auditory Perception; Hearing
PubMed: 36604168
DOI: 10.1523/JNEUROSCI.1426-22.2022
Scientific Reports Mar 2020
Auditory cortex volume and shape differences have been observed in the context of phonetic learning, musicianship and dyslexia. Heschl's gyrus (HG), which includes primary auditory cortex, displays large anatomical variability across individuals and hemispheres. Given this variability, manual labelling is the gold standard for segmenting HG, but is time-consuming and error-prone. Our novel toolbox, called 'Toolbox for the Automated Segmentation of HG' or TASH, automatically segments HG in brain structural MRI data, and extracts measures including its volume, surface area and cortical thickness. TASH builds upon FreeSurfer, which provides an initial segmentation of auditory regions, and implements further steps to perform finer auditory cortex delineation. We validate TASH by showing significant relationships between HG volumes obtained using manual labelling and using TASH, in three independent datasets acquired on different scanners and field strengths, and by showing good qualitative segmentation. We also present two applications of TASH, demonstrating replication and extension of previously published findings of relationships between HG volumes and (a) phonetic learning, and (b) musicianship. In sum, TASH effectively segments HG in a fully automated and reproducible manner, opening up a wide range of applications in the domains of expertise, disease, genetics and brain plasticity.
Topics: Adult; Auditory Cortex; Automation; Female; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Middle Aged
PubMed: 32127593
DOI: 10.1038/s41598-020-60609-y
NeuroImage May 2022
Voicing is one of the most important characteristics of phonetic speech sounds. Despite its importance, voicing perception mechanisms remain largely unknown. To explore auditory-motor networks associated with voicing perception, we first examined the brain regions that showed common activities for voicing production and perception using functional magnetic resonance imaging. Results indicated that the auditory and speech motor areas were activated with the operculum parietale 4 (OP4) during both voicing production and perception. Second, we used magnetoencephalography and examined the dynamical functional connectivity of the auditory-motor networks during a perceptual categorization task of /da/-/ta/ continuum stimuli varying in voice onset time (VOT) from 0 to 40 ms in 10 ms steps. Significant functional connectivities from the auditory cortical regions to the larynx motor area via OP4 were observed only when perceiving the stimulus with VOT 30 ms. In addition, regional activity analysis showed that the neural representation of VOT in the auditory cortical regions was mostly correlated with categorical perception of voicing but did not reflect the perception of the stimulus with VOT 30 ms. We suggest that the larynx motor area, which is considered to play a crucial role in voicing production, contributes to categorical perception of voicing by complementing the temporal processing in the auditory cortical regions.
Topics: Acoustic Stimulation; Auditory Cortex; Auditory Perception; Humans; Larynx; Multimodal Imaging; Phonetics; Speech Perception; Voice
PubMed: 35150835
DOI: 10.1016/j.neuroimage.2022.118981