Journal of Speech, Language, and... Oct 2019
Randomized Controlled Trial
Purpose This study assessed the extent to which 6- to 8.5-month-old infants and 18- to 30-year-old adults detect and discriminate auditory syllables in noise better in the presence of visual speech than in auditory-only conditions. In addition, we examined whether visual cues to the onset and offset of the auditory signal account for this benefit. Method Sixty infants and 24 adults were randomly assigned to speech detection or discrimination tasks and were tested using a modified observer-based psychoacoustic procedure. Each participant completed 1-3 conditions: auditory-only, with visual speech, and with a visual signal that only cued the onset and offset of the auditory syllable. Results Mixed linear modeling indicated that infants and adults benefited from visual speech on both tasks. Adults relied on the onset-offset cue for detection, but the same cue did not improve their discrimination. The onset-offset cue benefited infants for both detection and discrimination. Whereas the onset-offset cue improved detection similarly for infants and adults, the full visual speech signal benefited infants to a lesser extent than adults on the discrimination task. Conclusions These results suggest that infants' use of visual onset-offset cues is mature, but their ability to use more complex visual speech cues is still developing. Additional research is needed to explore differences in audiovisual enhancement (a) of speech discrimination across speech targets and (b) with increasingly complex tasks and stimuli.
Topics: Acoustic Stimulation; Adolescent; Adult; Cues; Female; Healthy Volunteers; Humans; Infant; Male; Noise; Perceptual Masking; Photic Stimulation; Psychoacoustics; Signal-To-Noise Ratio; Speech Perception; Visual Perception; Young Adult
PubMed: 31618097
DOI: 10.1044/2019_JSLHR-H-19-0106
Hearing Research Nov 2019
We explore stream segregation with temporally modulated acoustic features using behavioral experiments and modelling. The auditory streaming paradigm, in which alternating high-frequency (A) and low-frequency (B) tones appear in a repeating ABA pattern, has been shown to be perceptually bistable for extended presentations (on the order of minutes). For a fixed, repeating stimulus, perception spontaneously changes (switches) at random times, every 2-15 s, between an integrated interpretation (a single galloping rhythm) and segregated streams. Streaming in a natural auditory environment requires segregation of auditory objects whose features evolve over time. With the relatively idealized ABA-triplet paradigm, we explore perceptual switching in a non-static environment by considering slowly and periodically varying stimulus features. Our previously published model captures the dynamics of auditory bistability and predicts here how perceptual switches are entrained, tightly locked to the rising and falling phases of modulation. In psychoacoustic experiments we find that entrainment depends on both the period of modulation and the intrinsic switch characteristics of individual listeners. The extended auditory streaming paradigm with slowly modulated stimulus features presented here will be of significant interest for future imaging and neurophysiology experiments because it reduces the need for subjective perceptual reports of ongoing perception.
Topics: Acoustic Stimulation; Auditory Pathways; Computer Simulation; Environment; Female; Humans; Male; Models, Neurological; Perceptual Masking; Pitch Perception; Psychoacoustics; Young Adult
PubMed: 31622836
DOI: 10.1016/j.heares.2019.107807
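The spontaneous switching described above can be illustrated with a toy alternating-renewal simulation. This is purely illustrative: the uniform dwell-time distribution and all parameters below are assumptions for the sketch, not the authors' published model.

```python
import random

def simulate_switches(duration_s=300.0, dwell_range=(2.0, 15.0), seed=0):
    """Simulate spontaneous perceptual switches as an alternating-renewal
    process: the percept alternates between 'integrated' and 'segregated',
    with dwell times drawn uniformly from 2-15 s (the range of switch
    intervals reported for the ABA streaming paradigm)."""
    rng = random.Random(seed)
    t, percept = 0.0, "integrated"
    switch_times = []
    while True:
        t += rng.uniform(*dwell_range)  # time spent in the current percept
        if t >= duration_s:
            break
        switch_times.append(round(t, 2))
        percept = "segregated" if percept == "integrated" else "integrated"
    return switch_times

switches = simulate_switches()
print(len(switches), "switches in 300 s")
```

Entrainment, as studied in the paper, would correspond to making the switch probability depend on the phase of a slow stimulus modulation rather than being time-homogeneous as here.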
The Journal of the Acoustical Society... Apr 2018
Natural sounds have substantial acoustic structure (predictability, nonrandomness) in their spectral and temporal compositions. Listeners are expected to exploit this structure to distinguish simultaneous sound sources; however, previous studies confounded acoustic structure and listening experience. Here, sensitivity to acoustic structure in novel sounds was measured in discrimination and identification tasks. Complementary signal-processing strategies independently varied relative acoustic entropy (the inverse of acoustic structure) across frequency or time. In one condition, instantaneous frequency of low-pass-filtered 300-ms random noise was rescaled to 5 kHz bandwidth and resynthesized. In another condition, the instantaneous frequency of a short gated 5-kHz noise was resampled up to 300 ms. In both cases, entropy relative to full bandwidth or full duration was a fraction of that in 300-ms noise sampled at 10 kHz. Discrimination of sounds improved with less relative entropy. Listeners identified a probe sound as a target sound (1%, 3.2%, or 10% relative entropy) that repeated amidst distractor sounds (1%, 10%, or 100% relative entropy) at 0 dB SNR. Performance depended on differences in relative entropy between targets and background. Lower-relative-entropy targets were better identified against higher-relative-entropy distractors than lower-relative-entropy distractors; higher-relative-entropy targets were better identified amidst lower-relative-entropy distractors. Results were consistent across signal-processing strategies.
Topics: Acoustic Stimulation; Auditory Perception; Case-Control Studies; Discrimination, Psychological; Humans; Psychoacoustics; Signal Processing, Computer-Assisted; Sound; Sound Localization
PubMed: 29716264
DOI: 10.1121/1.5031018
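The "relative entropy" manipulated above, spectral entropy expressed as a fraction of its maximum, can be sketched as follows. This is a minimal illustration of the concept; the toy spectra and normalization are assumptions, not the study's signal-processing pipeline.

```python
import math

def relative_spectral_entropy(power_spectrum):
    """Shannon entropy of a normalized power spectrum as a fraction of
    the maximum possible entropy (log N, reached by a flat spectrum).
    1.0 = maximally random (white-noise-like), 0.0 = a single spectral
    line (fully predictable, i.e., maximal acoustic structure)."""
    total = sum(power_spectrum)
    p = [x / total for x in power_spectrum if x > 0]
    h = -sum(pi * math.log(pi) for pi in p)
    h_max = math.log(len(power_spectrum))
    return h / h_max if h > 0 else 0.0

flat = [1.0] * 64          # white-noise-like spectrum
tone = [0.0] * 63 + [1.0]  # single spectral line
print(round(relative_spectral_entropy(flat), 6))  # -> 1.0
print(round(relative_spectral_entropy(tone), 6))  # -> 0.0
```

In the study's terms, lowering relative entropy (more structure) made sounds easier to discriminate and, when targets and distractors differed in entropy, easier to identify.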
Nihon Eiseigaku Zasshi. Japanese... 2013
Review
Wind power generation is a promising solution for ensuring a clean and sustainable energy supply, and in recent years many wind power generation facilities have been constructed in Japan. Despite this advantage, however, residents in some areas near wind power generation sites have complained that their well-being is disturbed by noise from wind turbines. Wind turbines generate low-frequency noise, which can lead to adverse psychological effects such as annoyance. In Japan, no method for appropriately assessing the adverse effects of low-frequency noise has been established. In this article, the characteristics and effects of low-frequency noise are outlined, and the current situation and remaining research tasks in assessing the psychological effects of low-frequency noise from wind turbines are presented.
Topics: Environmental Exposure; Humans; Japan; Noise; Power Plants; Psychoacoustics; Stress, Psychological; Wind
PubMed: 23718970
DOI: 10.1265/jjh.68.88
PloS One 2021
Randomized Controlled Trial
OBJECTIVE
To test the hypothesis that caffeine can influence tinnitus, we recruited 80 patients with chronic tinnitus and randomly allocated them to two groups (caffeine and placebo) to analyze the self-perception of tinnitus symptoms after caffeine consumption, assuming this sample to be adequate for generalization.
METHODS
The participants were randomized into two groups: one group was administered a 300-mg capsule of caffeine, and the other was given a placebo capsule (cornstarch). A diet restricting caffeine consumption for 24 hours was implemented. The participants answered questionnaires (the Tinnitus Handicap Inventory, THI; the Visual Analog Scale, VAS; the Profile of Mood States, POMS) and underwent examinations (tonal and high-frequency audiometry; acufenometry, comprising frequency and intensity measures and the minimum level of tinnitus masking; transient evoked otoacoustic emissions, TEOAE; and distortion product otoacoustic emissions, DPOAE) at two timepoints: baseline and after capsule ingestion.
RESULTS
There was a significant change in mood (measured by the POMS) after caffeine consumption. The THI and VAS scores were improved at the second timepoint in both groups. The audiometry assessment showed a significant difference in some frequencies between baseline and follow-up measurements in both groups, but these differences were not clinically relevant. Similar findings were observed for the amplitude and signal-to-noise ratio in the TEOAE and DPOAE measurements.
CONCLUSIONS
Caffeine (300 mg) did not significantly alter the psychoacoustic measures, electroacoustic measures or the tinnitus-related degree of discomfort.
Topics: Adult; Audiometry, Pure-Tone; Caffeine; Female; Humans; Male; Middle Aged; Otoacoustic Emissions, Spontaneous; Psychoacoustics; Surveys and Questionnaires; Tinnitus
PubMed: 34543285
DOI: 10.1371/journal.pone.0256275
PloS One 2021
Sounds like "running water" and "buzzing bees" are classes of sounds which are a collective result of many similar acoustic events and are known as "sound textures". A recent psychoacoustic study using sound textures reported that natural-sounding textures can be synthesized from white noise by imposing statistical features, such as marginals and correlations, computed from the outputs of cochlear models responding to the textures; these outputs are the envelopes of bandpass-filter responses, the "cochlear envelopes". This suggests that the perceptual qualities of many natural sounds derive directly from such statistical features, and raises the question of how these statistical features are distributed in the acoustic environment. To address this question, we collected a corpus of 200 sound textures from public online sources and analyzed the distributions of the textures' marginal statistics (mean, variance, skew, and kurtosis), cross-frequency correlations, and modulation power statistics. A principal component analysis of these parameters revealed a great deal of redundancy in the texture parameters. For example, just two marginal principal components, which can be thought of as measuring the sparseness or burstiness of a texture, capture as much as 64% of the variance of the 128-dimensional marginal parameter space, while the first two principal components of cochlear correlations capture as much as 88% of the variance in the 496 correlation parameters. Knowledge of the statistical distributions documented here may help guide the choice of acoustic stimuli with high ecological validity in future research.
Topics: Acoustic Stimulation; Acoustics; Auditory Perception; Cochlea; Databases, Factual; Humans; Models, Statistical; Noise; Principal Component Analysis; Psychoacoustics; Sound
PubMed: 34161323
DOI: 10.1371/journal.pone.0238960
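The four marginal statistics analyzed above (mean, variance, skew, kurtosis) are ordinary moments of the envelope distribution; a sparse, "bursty" envelope shows up as high skewness and kurtosis. A minimal sketch on toy envelopes (the moment definitions follow standard conventions and the envelopes are invented for illustration, not taken from the corpus):

```python
import math

def marginal_stats(envelope):
    """Mean, variance, skewness, and kurtosis of a sampled envelope,
    the four 'marginal' statistics used to characterize sound textures."""
    n = len(envelope)
    mean = sum(envelope) / n
    var = sum((x - mean) ** 2 for x in envelope) / n
    sd = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in envelope) / (n * sd ** 3)
    kurt = sum((x - mean) ** 4 for x in envelope) / (n * var ** 2)
    return mean, var, skew, kurt

# A sparse, bursty envelope (mostly silence, occasional events) has high
# skewness and kurtosis; a smooth, sustained envelope does not. This is
# the kind of contrast the first marginal principal components pick up.
bursty = [0.0] * 95 + [1.0] * 5
smooth = [0.5, 0.6, 0.4, 0.5, 0.6, 0.4] * 20
print(marginal_stats(bursty)[2] > marginal_stats(smooth)[2])  # -> True
```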
The Journal of the Acoustical Society... Oct 2021
Several psychoacoustic methods are thought to measure gain reduction, which may arise from the medial olivocochlear reflex (MOCR), a bilateral feedback loop that adjusts cochlear gain. Although studies have used ipsilateral and contralateral elicitors and have examined strength at different signal frequencies, these factors have not been examined within a single study. Therefore, basic questions about gain reduction, such as the relative strength of ipsilateral vs contralateral elicitation and the relative strength across signal frequency, remain unanswered. In the current study, gain reduction from ipsilateral, contralateral, and bilateral elicitors was measured at 1-, 2-, and 4-kHz signal frequencies using forward-masking paradigms at a range of elicitor levels in a repeated-measures design. Ipsilateral and bilateral strengths were similar and significantly larger than contralateral strength across signal frequencies. Growth of gain reduction with precursor level tended to differ with signal frequency, although not significantly. Data from previous studies are considered in light of the results of this study. Behavioral results are also considered relative to anatomical and physiological data on the MOCR. These results indicate that, in humans, cochlear gain reduction is broad across frequencies and is robust for ipsilateral and bilateral elicitation but small for contralateral elicitation.
Topics: Acoustic Stimulation; Cochlea; Functional Laterality; Humans; Olivary Nucleus; Psychoacoustics; Reflex
PubMed: 34717476
DOI: 10.1121/10.0006662
Journal of Neuroscience Methods Sep 2013
BACKGROUND
To examine psychoacoustics in mice, we have used 2,2,2-tribromoethanol anesthesia in multiple studies. We find this drug is fast-acting and yields consistent results, providing 25-30 min of anesthesia. Our recent studies of binaural hearing prompted development of a regimen to extend anesthesia time to 1 h. We tested a novel cocktail of 2,2,2-tribromoethanol coupled with low-dose chloral hydrate to extend the effective anesthesia time.
NEW METHOD
We have established an intraperitoneal dosing regimen for 2,2,2-tribromoethanol-chloral hydrate anesthesia. To assess the efficacy of the drug cocktail, we measured auditory brainstem responses (ABRs) at 10-min intervals to determine the effects on hearing thresholds and on wave amplitudes and latencies.
RESULTS
This novel drug combination extends effective anesthesia to 1 h. ABR Wave I amplitudes, but not latencies, are marginally suppressed. Additionally, amplitudes of the centrally derived Waves III and V show significant inter-animal variability that is independent of stimulus intensity. These data argue against systematic suppression of ABRs by the drug cocktail.
COMPARISON WITH EXISTING METHODS
Using the 2,2,2-tribromoethanol-chloral hydrate combination in psychoacoustic studies has several advantages over other drug cocktails, the most important being preservation of latencies of centrally and peripherally derived ABR waves. In addition, hearing thresholds are unchanged and wave amplitudes are not systematically suppressed, although they exhibit greater variability.
CONCLUSIONS
We demonstrate that 375 mg/kg 2,2,2-tribromoethanol followed after 5 min by 200 mg/kg chloral hydrate provides an anesthesia time of 60 min, has negligible effects on ABR wave latencies and thresholds, and has non-systematic effects on amplitudes.
Topics: Analysis of Variance; Anesthesia, Intravenous; Anesthetics; Anesthetics, Intravenous; Animals; Chloral Hydrate; Ethanol; Evoked Potentials, Auditory, Brain Stem; Mice; Mice, Inbred C57BL; Peritoneal Cavity; Psychoacoustics; Vasodilation
PubMed: 23856212
DOI: 10.1016/j.jneumeth.2013.07.004
International Journal of Environmental... Aug 2021
This paper presents the results of a study evaluating the human perception of the noise produced by four different small quadcopter unmanned aerial vehicles (UAVs). The study utilised measurements and recordings of the noise produced by the quadcopter UAVs in hover and in constant-speed flight at a fixed altitude. Measurements made using a ½″ microphone were used to calculate a range of noise metrics for each noise event. Noise recordings were also made using a spherical microphone array (an Eigenmike system). The recordings were reproduced using a 3D sound reproduction system installed in a large anechoic chamber at The University of Auckland. Thirty-seven participants listened to the recordings and were asked to rate their level of annoyance in response to the noise and to perform a simple cognitive task in order to assess the level of distraction caused by the noise. This study discusses the noise levels measured during the test and how the various noise metrics relate to the annoyance ratings. It was found that annoyance correlates strongly with the sound pressure level and loudness metrics, and that there is a very strong correlation between the annoyance caused by a UAV in hover and in flyby at the same height. While some significant differences in the distraction caused by UAV noise were observed between cases in the cognitive distraction test, the results were inconclusive. This was likely due to a ceiling effect observed in the participants' test scores.
Topics: Humans; Noise; Psychoacoustics; Sound
PubMed: 34501482
DOI: 10.3390/ijerph18178893
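The reported relationship between annoyance ratings and level/loudness metrics is an ordinary correlation analysis. A minimal sketch of a Pearson correlation; the level and annoyance numbers below are invented for illustration, not the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-event sound pressure levels (dBA) and mean annoyance
# ratings (0-10 scale), for illustration only.
spl = [55, 58, 61, 64, 67, 70]
annoyance = [2.1, 3.0, 3.8, 5.2, 6.1, 7.4]
print(round(pearson_r(spl, annoyance), 3))
```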
The Journal of the Acoustical Society... Aug 2018
Dynamic spectral shape features accurately classify /t/ and /k/ productions across speakers and contexts. This paper shows that word-initial /t/ and /k/ tokens produced by 21 adults can be differentiated using a single, static spectral feature when spectral energy concentration is considered relative to expectations within a given speaker and vowel context. Centroid and peak frequency, calculated from both acoustic and psychoacoustic spectra, were compared to determine whether one feature could reliably differentiate /t/ and /k/ and, if so, which feature best differentiated them. Centroid frequency from both acoustic and psychoacoustic spectra accurately classified productions of /t/ and /k/.
Topics: Adult; Female; Humans; Male; Phonetics; Psychoacoustics; Speech Acoustics; Speech Perception
PubMed: 30180689
DOI: 10.1121/1.5049702
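The static feature at the heart of this result, centroid frequency, is just an amplitude-weighted mean frequency of the burst spectrum. A minimal sketch; the toy spectra are assumptions (a real analysis would compute a DFT of the stop burst), exploiting only the general fact that /t/ bursts concentrate energy higher in frequency than /k/ bursts:

```python
def spectral_centroid(freqs_hz, magnitudes):
    """Amplitude-weighted mean frequency of a magnitude spectrum."""
    total = sum(magnitudes)
    return sum(f * m for f, m in zip(freqs_hz, magnitudes)) / total

# Toy burst spectra: the /t/-like spectrum has more energy at high
# frequencies, so its centroid lands higher than the /k/-like one.
freqs = [500, 1500, 2500, 3500, 4500]
t_like = [0.1, 0.2, 0.5, 1.0, 1.2]
k_like = [0.4, 1.2, 0.8, 0.3, 0.2]
print(spectral_centroid(freqs, t_like) > spectral_centroid(freqs, k_like))  # -> True
```

The paper's twist is that the centroid is interpreted relative to expectations for a given speaker and vowel context, rather than as an absolute threshold.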