Trends in Hearing 2018
Tinnitus masking and residual inhibition (RI) are two well-known psychoacoustic measures of tinnitus. While it has long been suggested that they may provide diagnostic and prognostic information, these measures are still rarely performed in clinics, as they are too time consuming. Given this issue, the main goal of the present study was to validate a new method for assessing these measures. An acoustic sequence made of pulsed stimuli, with a fixed stimulus duration and interstimulus interval, was applied to 68 tinnitus patients at two testing sites. First, the minimum masking level (MML) was measured by raising the stimulus intensity until the tinnitus was no longer heard during the stimulus presentation. Second, the level of the stimulus was further increased until the tinnitus was suppressed during the silent interval between the acoustic pulses. This level was called the minimum residual inhibition level (MRIL). The sequential measurement of MML and MRIL from the same stimulus condition offers several advantages, such as time efficiency and the ability to compare results between the MRIL and MML. Our study confirms that, with this new approach, MML and MRIL can be easily and quickly obtained from a wide variety of patients displaying either normal hearing or different hearing loss configurations. Indeed, MML was obtained in all patients except one (98.5%), and some level of MRIL was found in 59 patients (86.7%). Moreover, this approach allows the categorization of tinnitus patients into different subgroups based on the properties of their MRIL.
Topics: Adolescent; Adult; Aged; Audiometry; Female; Humans; Male; Middle Aged; Perceptual Masking; Psychoacoustics; Retrospective Studies; Tinnitus; Young Adult
PubMed: 29708062
DOI: 10.1177/2331216518769996
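The pulsed-sequence design described in this abstract (fixed pulse duration and interstimulus interval, ascending level) can be sketched as follows. The probe frequency, durations, and 5-dB step size here are hypothetical placeholders, not parameters reported by the study.

```python
import numpy as np

fs = 16000                          # sample rate (Hz)
pulse_dur, isi = 0.5, 0.5           # hypothetical: 500-ms pulses, 500-ms gaps
levels_db = np.arange(40, 81, 5)    # hypothetical ascending levels, 5-dB steps
f_probe = 4000.0                    # hypothetical probe frequency near the tinnitus pitch

t = np.arange(int(fs * pulse_dur)) / fs
tone = np.sin(2 * np.pi * f_probe * t)
gap = np.zeros(int(fs * isi))

# One pulse per level: the listener reports when the tinnitus is masked
# during the pulse (MML) and, at a higher level, when it stays suppressed
# during the silent gap between pulses (MRIL).
seq = np.concatenate([
    np.concatenate([10 ** ((L - 80) / 20) * tone, gap]) for L in levels_db
])
```

Each pulse is scaled so that the 80-dB step reaches full scale; in practice the levels would be calibrated to the playback system.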
Brain : a Journal of Neurology Jun 2016
The extent to which non-linguistic auditory processing deficits may contribute to the phenomenology of primary progressive aphasia is not established. Using non-linguistic stimuli devoid of meaning, we assessed three key domains of auditory processing (pitch, timing and timbre) in a consecutive series of 18 patients with primary progressive aphasia (eight with semantic variant, six with non-fluent/agrammatic variant, and four with logopenic variant), as well as 28 age-matched healthy controls. We further examined whether performance on the psychoacoustic tasks in the three domains related to the patients' speech and language and neuropsychological profiles. At the group level, patients were significantly impaired in all three domains. Patients had the most marked deficits within the rhythm domain for the processing of short sequences of up to seven tones. Patients with the non-fluent variant showed the most pronounced deficits at the group and the individual level. A subset of patients with the semantic variant were also impaired, though less severely. The patients with the logopenic variant did not show any significant impairments. Significant deficits in the non-fluent and the semantic variant remained after partialling out effects of executive dysfunction. Performance on a subset of the psychoacoustic tests correlated with conventional verbal repetition tests. In sum, a core central auditory impairment exists in primary progressive aphasia for non-linguistic stimuli. While the non-fluent variant is clinically characterized by a motor speech deficit (an output problem), perceptual processing of tone sequences is clearly deficient. This may indicate the co-occurrence in the non-fluent variant of a deficit in working memory for auditory objects. Parsimoniously, we propose that auditory timing pathways, which are used in common for processing acoustic sequence structure in both speech output and acoustic input, are altered.
Topics: Aged; Aphasia, Primary Progressive; Auditory Perception; Case-Control Studies; Cues; Female; Humans; Magnetic Resonance Imaging; Male; Middle Aged; Neuroimaging; Neuropsychological Tests; Psychoacoustics
PubMed: 27060523
DOI: 10.1093/brain/aww067
The Journal of the Acoustical Society... Jan 2013
Despite their remarkable clinical success, cochlear-implant listeners today still receive spectrally degraded information. Much research has examined normally hearing adult listeners' ability to interpret spectrally degraded signals, primarily using noise-vocoded speech to simulate cochlear implant processing. Far less research has explored infants' and toddlers' ability to interpret spectrally degraded signals, despite the fact that children in this age range are frequently implanted. This study examines 27-month-old typically developing toddlers' recognition of noise-vocoded speech in a language-guided looking study. Children saw two images on each trial and heard a voice instructing them to look at one item ("Find the cat!"). Full-spectrum sentences or their noise-vocoded versions were presented with varying numbers of spectral channels. Toddlers showed equivalent proportions of looking to the target object with full-speech and 24- or 8-channel noise-vocoded speech; they failed to look appropriately with 2-channel noise-vocoded speech and showed variable performance with 4-channel noise-vocoded speech. Despite accurate looking performance for speech with at least eight channels, children were slower to respond appropriately as the number of channels decreased. These results indicate that 2-yr-olds have developed the ability to interpret vocoded speech, even without practice, but that doing so requires additional processing. These findings have important implications for pediatric cochlear implantation.
Topics: Acoustic Stimulation; Age Factors; Audiometry, Speech; Child Development; Child, Preschool; Cues; Female; Humans; Male; Photic Stimulation; Psychoacoustics; Reaction Time; Recognition, Psychology; Signal Processing, Computer-Assisted; Sound Spectrography; Speech Perception; Task Performance and Analysis; Time Factors
PubMed: 23297920
DOI: 10.1121/1.4770241
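The noise-vocoded speech used in such simulations can be sketched as an FFT filter bank: divide the spectrum into log-spaced channels, extract each channel's temporal envelope, and re-impose it on band-limited noise. This minimal NumPy version is illustrative only; the channel spacing, band edges, and envelope smoothing are assumptions, not the study's processing parameters.

```python
import numpy as np

def noise_vocode(signal, fs, n_channels, fmin=100.0, fmax=7000.0, env_cut=50.0):
    """Crude channel vocoder: each band's envelope modulates band-limited noise."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    spec = np.fft.rfft(signal)
    noise_spec = np.fft.rfft(np.random.default_rng(0).standard_normal(n))
    edges = np.geomspace(fmin, fmax, n_channels + 1)   # log-spaced band edges
    k = max(1, int(fs / env_cut))                      # ~20-ms envelope smoother
    kernel = np.ones(k) / k
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spec * mask, n)            # band-limited speech
        env = np.convolve(np.abs(band), kernel, mode="same")
        carrier = np.fft.irfft(noise_spec * mask, n)   # band-limited noise
        out += env * carrier
    return out
```

Setting n_channels to 24, 8, 4, or 2 reproduces the kind of spectral-degradation continuum tested in the study.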
Journal of Speech, Language, and... Jan 2023
PURPOSE
Acoustic and perceptual quantification of vocal strain has been a vexing problem for years. To increase measurement rigor, a suitable single-variable matching stimulus for strain was developed and validated, based on the matching stimulus used previously for breathy and rough voice qualities.
METHOD
A set of 21 comparison stimuli for a single-variable matching task (SVMT) was synthesized based on a speech-shaped sawtooth waveform mixed with speech-shaped noise. Variable bandpass filter gain in mid-to-high frequencies achieved a wide range of computed sharpness (in constant sharpness steps) and served as the independent variable for the SVMT. Ten natural /ɑ/ stimuli with a wide range of the primary voice quality of strain and a minimum of breathiness or roughness were selected and assessed using the SVMT. Natural voice samples and synthetic comparison stimuli were also assessed using a perceptual magnitude estimation (ME) task.
RESULTS
ME data validated the correspondence of the set of comparison stimuli to varying perceived strain. Perceived strain magnitudes of the comparison stimuli increased significantly and linearly with computed sharpness (R² = .99). A linear regression revealed that strain matching values were significantly predicted by computed sharpness (R² = .96) and perceived strain magnitudes (R² = .95) of the natural voice stimuli.
CONCLUSION
The perception of vocal strain is strongly associated with computed sharpness and is captured accurately and precisely using an SVMT, in which the independent variable is the bandpass filter gain (in steps of equal sharpness) applied to the comparison stimuli.
Topics: Humans; Voice Quality; Psychoacoustics; Speech Acoustics; Acoustics; Speech Perception; Speech Production Measurement
PubMed: 36516473
DOI: 10.1044/2022_JSLHR-22-00280
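A sketch of comparison stimuli like those described (a sawtooth-like harmonic complex mixed with noise, stepped in mid-to-high-frequency gain) is below. Note that the study stepped its stimuli in equal units of computed sharpness, a loudness-model quantity; this sketch substitutes a plain dB shelf gain as a stand-in, and the f0, mix ratio, cutoff, and gain range are hypothetical.

```python
import numpy as np

fs, dur, f0 = 16000, 0.5, 130.0
t = np.arange(int(fs * dur)) / fs
# Sawtooth-like complex: harmonics up to Nyquist with 1/h amplitude rolloff
saw = sum(np.sin(2 * np.pi * f0 * h * t) / h for h in range(1, int(fs / 2 / f0)))
noise = np.random.default_rng(1).standard_normal(len(t))
mix = saw / np.max(np.abs(saw)) + 0.05 * noise

def hf_gain(x, gain_db, fs, f_lo=2000.0):
    """Apply a flat gain (dB) to all spectral components at or above f_lo."""
    spec = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    spec[f >= f_lo] *= 10 ** (gain_db / 20)
    return np.fft.irfft(spec, len(x))

# 21 comparison stimuli ordered from dull to sharp
stimuli = [hf_gain(mix, g, fs) for g in np.linspace(-10, 10, 21)]
```

In a real SVMT the listener adjusts along this 21-step continuum until the comparison matches the strain of the natural voice sample.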
Journal of Speech, Language, and... Feb 2018
PURPOSE
Psychoacoustic data indicate that infants and children are less likely than adults to focus on a spectral region containing an anticipated signal and are more susceptible to remote masking of a signal. These detection tasks suggest that infants and children, unlike adults, do not listen selectively. However, less is known about children's ability to listen selectively during speech recognition. Accordingly, the current study examines remote masking during speech recognition in children and adults.
METHOD
Adults and 7- and 5-year-old children performed sentence recognition in the presence of various spectrally remote maskers. Intelligibility was determined for each remote-masker condition, and performance was compared across age groups.
RESULTS
It was found that speech recognition for 5-year-olds was reduced in the presence of spectrally remote noise, whereas the maskers had no effect on the 7-year-olds or adults. Maskers of different bandwidth and remoteness had similar effects.
CONCLUSIONS
In accord with psychoacoustic data, young children do not appear to focus on a spectral region of interest and ignore other regions during speech recognition. This tendency may help account for their typically poorer speech perception in noise. This study also appears to capture an important developmental stage, during which a substantial refinement in spectral listening occurs.
Topics: Adult; Child; Child, Preschool; Female; Humans; Male; Pattern Recognition, Physiological; Perceptual Masking; Psychoacoustics; Recognition, Psychology; Speech Perception; Young Adult
PubMed: 29396579
DOI: 10.1044/2017_JSLHR-H-17-0118
PloS One 2021
Loudness judgments of sounds varying in level across time show a non-uniform temporal weighting, with increased weights assigned to the beginning of the sound (primacy effect). In addition, higher weights are observed for temporal components that are higher in level than the remaining components (loudness dominance). In three experiments, sounds consisting of 100- or 475-ms Gaussian wideband noise segments with random level variations were presented, and either none, the first, or a central temporal segment was amplified or attenuated. In Experiment 1, the sounds consisted of four 100-ms segments that were separated by 500-ms gaps. Previous experiments did not show a primacy effect in such a condition. In Experiment 2, sounds consisting of four or ten contiguous 100-ms segments were presented to examine the interaction between the primacy effect and loudness dominance. As expected, for the sounds with segments separated by gaps, no primacy effect was observed, but weights on amplified segments were increased and weights on attenuated segments were decreased. For the sounds with contiguous segments, a primacy effect as well as effects of relative level (similar to those in Experiment 1) were found. For attenuation, the data indicated no substantial interaction between the primacy effect and loudness dominance, whereas for amplification an interaction was present. In Experiment 3, sounds consisting of either four contiguous 100-ms or 475-ms segments, or four 100-ms segments separated by 500-ms gaps, were presented. Effects of relative level were more pronounced for the contiguous sounds. Across all three experiments, the effects of relative level were more pronounced for attenuation. In addition, the effects of relative level showed a dependence on the position of the change in level, with opposite direction for attenuation compared to amplification. Some of the results are in accordance with explanations based on masking effects on auditory intensity resolution.
Topics: Acoustic Stimulation; Adult; Discrimination, Psychological; Female; Humans; Judgment; Loudness Perception; Male; Noise; Psychoacoustics; Sound; Young Adult
PubMed: 34941913
DOI: 10.1371/journal.pone.0261001
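Temporal weights of this kind are conventionally estimated by regressing trial-by-trial loudness judgments on the random per-segment levels (logistic regression). The self-contained simulation below, with an assumed listener who weights the first segment most heavily, shows how a primacy pattern is recovered; all parameters are illustrative and not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_seg = 5000, 4
# Per-segment level differences (dB) between two intervals on each trial
d = rng.normal(0.0, 2.0, (n_trials, n_seg))
true_w = np.array([2.0, 1.0, 1.0, 0.5])         # primacy: segment 1 dominates
p_louder = 1 / (1 + np.exp(-(d @ true_w) / 3))  # simulated decision rule
resp = rng.random(n_trials) < p_louder          # "interval 2 louder" responses

# Fit logistic regression by gradient ascent on the log-likelihood
w = np.zeros(n_seg)
for _ in range(3000):
    pred = 1 / (1 + np.exp(-(d @ w)))
    w += 0.1 * d.T @ (resp - pred) / n_trials

weights = w / w.sum()   # normalized temporal weights
```

The fitted weights reproduce the simulated listener's primacy profile; with real response data the same fit yields the per-segment weights analyzed in studies like this one.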
Current Biology : CB Apr 2008
The source of conscious experience has fueled scientific and philosophical debates for centuries. In the auditory and motor domains, it is not yet known how consciously and unconsciously obtained information may combine to enable the production and perception of speaking and singing. Both forms of vocalization rely upon the interaction of brain networks responsible for perception and action. While perceptual experience and executed actions are usually well coupled, dissociations between perception and action can be informative of their underlying neural systems. Here we report a dissociation between production and perception: tone-deaf individuals, who cannot consciously perceive pitch differences, can paradoxically reproduce pitch intervals in the correct directions. Our results suggest that multiple neural pathways have evolved for sound perception and production, so that pitch information sufficient for intact speech can be obtained separately from the pathways necessary for conscious perception.
Topics: Auditory Perception; Case-Control Studies; Humans; Pitch Discrimination; Psychoacoustics
PubMed: 18430629
DOI: 10.1016/j.cub.2008.02.045
Scientific Reports Oct 2021
For many cochlear implant (CI) users, frequency discrimination is still challenging. We studied the effect of frequency differences relative to the electrode frequency bands on pure tone discrimination. A single-center, prospective, controlled, psychoacoustic exploratory study was conducted in a tertiary university referral center. Thirty-four patients with Cochlear Ltd. and MED-EL CIs and 19 age-matched normal-hearing control subjects were included. Two sinusoidal tones were presented with varying frequency differences. The reference tone frequency was chosen according to the center frequency of basal or apical electrodes. Discrimination abilities were psychophysically measured in a three-interval, two-alternative, forced-choice procedure (3I-2AFC) for various CI electrodes. Hit rates were measured, particularly with respect to discrimination abilities at the corner frequency of the electrode frequency bands. The mean rate of correct pitch-difference decisions was about 60% for CI users and about 90% for the normal-hearing control group. In CI users, the difference limen was two semitones, while normal-hearing participants detected a difference of one semitone. No influence of the corner frequency of the CI electrodes was found. In CI users, pure tone discrimination seems to be independent of tone position relative to the corner frequency of the electrode frequency band. Differences of two semitones can be distinguished within one electrode.
Topics: Adult; Aged; Cochlea; Cochlear Implantation; Electric Stimulation; Female; Hearing Tests; Humans; Male; Middle Aged; Pitch Discrimination; Prospective Studies; Psychoacoustics; Timbre Perception; Young Adult
PubMed: 34642437
DOI: 10.1038/s41598-021-99799-4
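The semitone results above translate into frequency ratios via the equal-tempered definition f(n) = f_ref · 2^(n/12), so the two-semitone limen of the CI users corresponds to roughly a 12% frequency difference. A minimal sketch (the 1-kHz reference is arbitrary, not an electrode center frequency from the study):

```python
def semitone_step(f_ref, n):
    """Frequency n equal-tempered semitones above f_ref (Hz)."""
    return f_ref * 2.0 ** (n / 12.0)

f_ref = 1000.0
one = semitone_step(f_ref, 1)   # ~1059.5 Hz: the normal-hearing limen
two = semitone_step(f_ref, 2)   # ~1122.5 Hz: the CI users' limen
```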
International Journal of Environmental... Jul 2022
With the continuous expansion of urban scale, with dense population and traffic, and the gradual rise of residents' requirements for environmental quality, the traditional evaluation method relying on acoustic energy is no longer sufficient to reflect how urban crowds perceive acoustic environment quality. Acoustic environment quality evaluation based on subjective human perception has therefore become one of the research focuses in the field of environmental noise control. In recent years, various subjective and objective acoustic characteristic parameters have been introduced into the study of acoustic environment assessment in the global literature. However, the extraction of "effective characteristics" from the large number of physical and psychoacoustic characteristics contained in acoustic signals, and the creation of a scientific and efficient subjective evaluation model, have remained key technical problems in the field of acoustic environment evaluation. Based on subjective human perceptions, this paper studies the overall acoustic environment quality evaluation of urban open spaces. Based on the "effective characteristic" parameters and the subjective characteristic proposed in previous research, including the equivalent continuous A-weighted sound pressure level (LAeq), the difference between the median noise level and the ambient background noise level (L50 - L90), sharpness (S), and satisfaction, a multivariable linear regression algorithm is used to further study the intrinsic correlation between the proposed "effective characteristics" and subjective perception. A satisfaction evaluation model of the acoustic environment based on these "effective characteristics" is then built.
Furthermore, a soundwalk evaluation experiment and a MATLAB numerical simulation experiment were carried out, verifying that the prediction accuracy of the proposed model exceeds 92%, that the consistency of the satisfaction level exceeds 88%, and that changes in the values of LAeq and L50 - L90 have a significant impact on the satisfaction prediction of the proposed model. These results show that the proposed "effective characteristics" describe the quality level of the regional acoustic environment in urban open space more comprehensively than a single index, and that the proposed acoustic environment satisfaction evaluation model based on "effective characteristics" has significant advantages in accuracy and regional applicability.
Topics: Acoustics; Humans; Noise; Personal Satisfaction; Psychoacoustics; Sound
PubMed: 35954584
DOI: 10.3390/ijerph19159231
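The multivariable linear model described above can be sketched with ordinary least squares. Everything below is synthetic: the generating coefficients, value ranges, and noise level are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
laeq = rng.uniform(45, 75, n)      # equivalent continuous A-weighted level (dB)
l50_l90 = rng.uniform(1, 15, n)    # median minus background noise level (dB)
sharp = rng.uniform(1.0, 3.0, n)   # psychoacoustic sharpness (acum)
# Assumed generative rule: satisfaction falls with level, fluctuation, sharpness
sat = 10 - 0.10 * laeq - 0.15 * l50_l90 - 0.8 * sharp + rng.normal(0, 0.3, n)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), laeq, l50_l90, sharp])
beta, *_ = np.linalg.lstsq(X, sat, rcond=None)
```

Here beta recovers the generating coefficients up to sampling noise; with field data, the same fit yields the weights of a satisfaction evaluation model.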
Journal of Voice : Official Journal of... Mar 2019
BACKGROUND
The perception of pediatric voice quality has been investigated using clinical protocols developed for adult voices and acoustic analyses designed to identify important physical parameters associated with normal and dysphonic pediatric voices. Laboratory investigations of adult dysphonia have included sophisticated methods, including a psychoacoustic approach that involves a single-variable matching task (SVMT), characterized by high inter- and intra-listener reliability, and analyses that include bio-inspired models of auditory perception that have provided valuable information regarding adult voice quality.
OBJECTIVES
To establish the utility of a psychoacoustic approach to the investigation of voice quality perception in the context of pediatric voices.
METHODS
Six listeners judged the breathiness of 20 synthetic vowel stimuli using an SVMT. To support comparisons with previous data, stimuli were modeled after four pediatric speakers and synthesized using a Klatt synthesizer with five parameter settings that influence the perception of breathiness. The population-average breathiness judgments were modeled with acoustic measures of loudness ratio, pitch strength, and cepstral peak.
RESULTS
Listeners reliably judged the perceived breathiness of pediatric voices, as with previous investigations of breathiness in adult dysphonic voices. Breathiness judgments were accurately modeled by loudness ratio (r = 0.93), pitch strength (r = 0.91), and cepstral peak (r = 0.82). Model accuracy was not affected significantly by including stimulus fundamental frequency and was slightly higher for pediatric than for adult voices.
CONCLUSIONS
The SVMT proved robust for pediatric voices spanning a wide range of breathiness. The data indicate that this is a promising approach for future investigation of pediatric voice quality.
Topics: Age Factors; Auditory Perception; Child, Preschool; Dysphonia; Female; Humans; Judgment; Loudness Perception; Male; Observer Variation; Pitch Perception; Psychoacoustics; Severity of Illness Index; Sound Spectrography; Speech Acoustics; Speech Perception; Voice Quality; Young Adult
PubMed: 29162356
DOI: 10.1016/j.jvoice.2017.09.024
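Of the acoustic measures listed above, the cepstral peak is the easiest to sketch: the real cepstrum of a voiced signal peaks at the quefrency of the fundamental period, and that peak flattens as breathiness (aperiodicity) rises. The toy signal below is a clean 200-Hz harmonic complex, not one of the study's Klatt stimuli.

```python
import numpy as np

fs, f0, dur = 16000, 200.0, 0.5
t = np.arange(int(fs * dur)) / fs
# Toy "voiced" signal: harmonic complex with 1/h amplitude rolloff
sig = sum(np.sin(2 * np.pi * f0 * h * t) / h for h in range(1, 20))

spec = np.abs(np.fft.rfft(sig * np.hanning(len(sig))))
ceps = np.fft.irfft(np.log(spec + 1e-12))      # real cepstrum
qlo, qhi = int(fs * 0.002), int(fs * 0.020)    # search 2-20 ms (50-500 Hz)
q_peak = qlo + np.argmax(ceps[qlo:qhi])
peak_height = ceps[q_peak]                     # the "cepstral peak" measure
est_f0 = fs / q_peak                           # period quefrency -> f0
```

For the clean complex, the peak falls at the 5-ms quefrency (200 Hz); adding aspiration noise to the signal lowers peak_height, which is why the measure tracks breathiness.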