Hearing Research, Apr 2024 (Review)
Noise sensitivity and hyperacusis are decreased sound tolerance conditions that are not well delineated or defined. This paper presents the correlations and distributions of the Noise Sensitivity Scale (NSS) and the Hyperacusis Questionnaire (HQ) scores in two distinct large samples. In Study 1, a community-based sample of young healthy adults (n = 103) exhibited a strong correlation (r = 0.74) between the two questionnaires. The mean NSS and HQ scores were 54.4 ± 16.9 and 12.5 ± 7.5, respectively. NSS scores displayed a normal distribution, whereas HQ scores showed a slight positive skew. In Study 2, a clinical sample of Veterans with or without clinical comorbidities (n = 95) showed a moderate correlation (r = 0.58) between the two questionnaires. The mean scores were 66.6 ± 15.6 and 15.3 ± 7.3 on the NSS and HQ, respectively. Both questionnaires' scores followed a normal distribution. In both samples, participants who self-identified as having decreased sound tolerance scored higher on both questionnaires. These findings provide reference data from two diverse sample groups. The moderate to strong correlations observed in both studies suggest a significant overlap between noise sensitivity and hyperacusis. The results underscore that the NSS and HQ should not be used interchangeably, as they aim to measure distinct constructs; however, to what extent they actually do remains to be determined. Further investigation should distinguish between these conditions through a comprehensive psychometric analysis of the questionnaires and a thorough exploration of the psychoacoustic, neurological, and physiological differences that set them apart.
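The statistics reported above (Pearson r between questionnaire totals, and skew of the HQ distribution) can be sketched as follows. All score values here are invented for illustration; the study's raw data are not reproduced.

```python
import math
import statistics

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def sample_skewness(x):
    """Adjusted Fisher-Pearson skewness; positive values indicate a right tail."""
    n = len(x)
    m = statistics.fmean(x)
    s = statistics.stdev(x)
    return (n / ((n - 1) * (n - 2))) * sum(((v - m) / s) ** 3 for v in x)

nss = [54, 38, 71, 60, 45, 80, 52, 49, 66, 57]   # hypothetical NSS totals
hq  = [12,  6, 20, 15,  9, 28, 11, 10, 17, 13]   # hypothetical HQ totals
print(round(pearson_r(nss, hq), 2))
print(round(sample_skewness(hq), 2))   # positive value = positive skew
```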
Topics: Adult; Humans; Hyperacusis; Tinnitus; Surveys and Questionnaires; Sound; Psychoacoustics
PubMed: 38492447
DOI: 10.1016/j.heares.2024.108992
International Journal of Environmental..., Aug 2018
Soundscape research needs to develop predictive tools for environmental design. A number of descriptor-indicator(s) models have been proposed so far, particularly for the "tranquility" dimension to manage "quiet areas" in urban contexts. However, there is a current lack of models addressing environments offering actively engaging soundscapes, i.e., the "vibrancy" dimension. The main aim of this study was to establish a predictive model for a vibrancy descriptor based on physical parameters, which could be used by designers and practitioners. A group interview was carried out to formulate a hypothesis on what elements would be influential for vibrancy perception. Afterwards, data on vibrancy perception were collected for different locations in the UK and China through a laboratory experiment and their physical parameters were used as indicators to establish a predictive model. Such indicators included both aural and visual parameters. The model, based on Roughness, Presence of People, Fluctuation Strength, Loudness and Presence of Music as predictors, explained 76% of the variance in the mean individual vibrancy scores. A statistically significant correlation was found between vibrancy scores and eventfulness scores, but not between vibrancy scores and pleasantness scores. Overall results showed that vibrancy is contextual and depends both on the soundscape and on the visual scenery.
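The vibrancy model above is a multiple linear regression whose "76% of the variance" figure is the R² of the fit. A minimal sketch of that kind of fit, with invented indicator values (the paper's dataset is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
# Hypothetical indicator values per location (arbitrary units), one column
# each for Roughness, Presence of People, Fluctuation Strength, Loudness,
# and Presence of Music.
X = rng.normal(size=(n, 5))
beta_true = np.array([0.8, 0.6, 0.4, 0.3, 0.5])
y = X @ beta_true + rng.normal(scale=0.5, size=n)   # simulated vibrancy scores

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot   # proportion of variance explained by the model
print(f"R^2 = {r2:.2f}")
```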
Topics: Acoustics; Auditory Perception; City Planning; Emotions; Environment; Humans; Models, Theoretical; Psychoacoustics; Sound; Vibration
PubMed: 30103394
DOI: 10.3390/ijerph15081712
The Journal of the Acoustical Society..., Mar 2014 (Comparative Study)
While many studies have assessed the efficacy of similarity-based cues for auditory stream segregation, much less is known about whether and how the larger-scale structure of sound sequences supports stream formation and the choice of sound organization. Two experiments investigated the effects of musical melody and rhythm on the segregation of two interleaved tone sequences. The two sets of tones fully overlapped in pitch range but differed from each other in interaural time and intensity. Unbeknownst to the listener, each of the interleaved sequences was separately created from the notes of a different song. In different experimental conditions, the notes and/or their timing could either follow those of the songs or be scrambled or, in the case of timing, set to be isochronous. Listeners were asked to continuously report whether they heard a single coherent sequence (integrated) or two concurrent streams (segregated). Although temporal overlap between tones from the two streams proved to be the strongest cue for stream segregation, significant effects of tonality and familiarity with the songs were also observed. These results suggest that regular temporal patterns are utilized as cues in auditory stream segregation and that long-term memory is involved in this process.
Topics: Acoustic Stimulation; Adolescent; Adult; Analysis of Variance; Audiometry; Cues; Female; Humans; Male; Music; Periodicity; Pitch Discrimination; Pitch Perception; Psychoacoustics; Time Factors; Time Perception; Young Adult
PubMed: 24606277
DOI: 10.1121/1.4865196
Journal of the Association For Research..., Jun 2011
Previous studies have found a significant correlation between spectral-ripple discrimination and speech and music perception in cochlear implant (CI) users. This relationship could be of use to clinicians and scientists who are interested in using spectral-ripple stimuli in the assessment and habilitation of CI users. However, previous psychoacoustic tasks used to assess spectral discrimination are not suitable for all populations, and it would be beneficial to develop methods that could be used to test all age ranges, including pediatric implant users. Additionally, it is important to understand how ripple stimuli are processed in the central auditory system and how their neural representation contributes to behavioral performance. For this reason, we developed a single-interval, yes/no paradigm that could potentially be used both behaviorally and electrophysiologically to estimate spectral-ripple threshold. In experiment 1, behavioral thresholds obtained using the single-interval method were compared to thresholds obtained using a previously established three-alternative forced-choice method. A significant correlation was found (r = 0.84, p = 0.0002) in 14 adult CI users. The spectral-ripple threshold obtained using the new method also correlated with speech perception in quiet and noise. In experiment 2, the effect of the number of vocoder-processing channels on the behavioral and physiological threshold in normal-hearing listeners was determined. Behavioral thresholds, using the new single-interval method, as well as cortical P1-N1-P2 responses changed as a function of the number of channels. Better behavioral and physiological performance (i.e., better discrimination ability at higher ripple densities) was observed as more channels were added. In experiment 3, the relationship between behavioral and physiological data was examined. Amplitudes of the P1-N1-P2 "change" responses were significantly correlated with d' values from the single-interval behavioral procedure. Results suggest that the single-interval procedure with spectral-ripple phase inversion in ongoing stimuli is a valid approach for measuring behavioral or physiological spectral resolution.
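The d' values used to score the single-interval yes/no task come from standard signal-detection theory: d' = z(hit rate) - z(false-alarm rate). A minimal sketch, with invented trial counts:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate), with a
    standard correction keeping the rates away from 0 and 1 so that the
    inverse normal CDF stays finite."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    hr = (hits + 0.5) / (n_signal + 1)
    fr = (false_alarms + 0.5) / (n_noise + 1)
    z = NormalDist().inv_cdf
    return z(hr) - z(fr)

# Hypothetical counts from 50 signal trials and 50 noise trials.
print(round(d_prime(hits=42, misses=8, false_alarms=10, correct_rejections=40), 2))
```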
Topics: Adult; Aged; Auditory Cortex; Auditory Threshold; Cochlear Implants; Deafness; Discrimination, Psychological; Female; Humans; Male; Middle Aged; Psychoacoustics; Psychometrics; Speech Perception
PubMed: 21271274
DOI: 10.1007/s10162-011-0257-4
Philosophical Transactions of the Royal..., Mar 2008
Although most research on the perception of speech has been conducted with speech presented without any competing sounds, we almost always listen to speech against a background of other sounds which we are adept at ignoring. Nevertheless, such additional irrelevant sounds can cause severe problems for speech recognition algorithms and for the hard of hearing, as well as posing a challenge to theories of speech perception. A variety of different problems are created by the presence of additional sound sources: detection of features that are partially masked, allocation of detected features to the appropriate sound sources, and recognition of sounds on the basis of partial information. The separation of sounds is attracting substantial attention in psychoacoustics and in computer science. An effective solution to the problem of separating sounds would have important practical applications.
Topics: Humans; Noise; Psychoacoustics; Sound; Speech Perception
PubMed: 17827106
DOI: 10.1098/rstb.2007.2156
Cognition, Mar 1987
This paper reviews what is currently known about the sensory and perceptual input that is made available to the word recognition system by processes typically assumed to be related to speech sound perception. In the first section, we discuss several of the major problems that speech researchers have tried to deal with over the last thirty years. In the second section, we consider one attempt to conceptualize the speech perception process within a theoretical framework that equates processing stages with levels of linguistic analysis. This framework assumes that speech is processed through a series of analytic stages ranging from peripheral auditory processing, acoustic-phonetic and phonological analysis, to word recognition and lexical access. Finally, in the last section, we consider several recent approaches to spoken word recognition and lexical access. We examine a number of claims surrounding the nature of the bottom-up input assumed by these models, postulated perceptual units, and the interaction of different knowledge sources in auditory word recognition. An additional goal of this paper was to establish the need to employ segmental representations in spoken word recognition.
Topics: Humans; Phonetics; Psychoacoustics; Semantics; Speech Perception
PubMed: 3581727
DOI: 10.1016/0010-0277(87)90003-5
Proceedings of the National Academy of..., Apr 2018 (Comparative Study)
Distance is important: From an ecological perspective, knowledge about the distance to either prey or predator is vital. However, the distance of an unknown sound source is particularly difficult to assess, especially in anechoic environments. In vision, changes in perspective resulting from observer motion produce a reliable, consistent, and unambiguous impression of depth known as motion parallax. Here we demonstrate with formal psychophysics that humans can exploit auditory motion parallax, i.e., the change in the dynamic binaural cues elicited by self-motion, to assess the relative depths of two sound sources. Our data show that sensitivity to relative depth is best when subjects move actively; performance deteriorates when subjects are moved by a motion platform or when the sound sources themselves move. This is true even though the dynamic binaural cues elicited by these three types of motion are identical. Our data demonstrate a perceptual strategy to segregate intermittent sound sources in depth and highlight the tight interaction between self-motion and binaural processing that allows assessment of the spatial layout of complex acoustic scenes.
Topics: Acoustic Stimulation; Adult; Cues; Depth Perception; Female; Head Movements; Humans; Motion; Proprioception; Psychoacoustics; Sound Localization; Vestibule, Labyrinth; Young Adult
PubMed: 29531082
DOI: 10.1073/pnas.1712058115
European Archives of..., Jan 2022
PURPOSE
In most cases, tinnitus co-exists with hearing loss, suggesting that poorer speech understanding is simply due to a lack of acoustic information reaching the central nervous system (CNS). However, some patients with tinnitus who have normal hearing also report problems with speech understanding, raising the possibility that tinnitus itself is to blame for difficulties in the perceptual processing of auditory information. The purpose of the study was to evaluate the auditory processing abilities of normally hearing subjects with and without tinnitus.
METHODS
The study group comprised 97 adults, 54 of whom had normal hearing and chronic tinnitus (the study group) and 43 who had normal hearing and no tinnitus (the control group). The audiological assessment comprised pure-tone audiometry and high-frequency pure-tone audiometry, impedance audiometry, and distortion product oto-acoustic emission assessment. To evaluate possible auditory processing deficits, the Frequency Pattern Test (FPT), Duration Pattern Test (DPT), Dichotic Listening Test (DLT), and Gap Detection Threshold (GDT) tests were performed.
RESULTS
The tinnitus subjects had significantly lower scores than the controls in the gap detection test (p < 0.01) and in the dichotic listening test (p < 0.001), but only for the right ear. The results for both groups were similar in the temporal ordering tests (FPT and DPT). Right-ear advantage (REA) was found for the controls, but not for the tinnitus subjects.
CONCLUSION
In normally hearing patients, the presence of tinnitus may be accompanied by auditory processing difficulties.
Topics: Adult; Audiometry, Pure-Tone; Auditory Perception; Auditory Threshold; Hearing; Humans; Psychoacoustics; Tinnitus
PubMed: 34363504
DOI: 10.1007/s00405-021-07023-w
Proceedings of the National Academy of..., Dec 2020 (Observational Study)
Perceptual systems have finite memory resources and must store incoming signals in compressed formats. To explore whether representations of a sound's pitch might derive from this need for compression, we compared discrimination of harmonic and inharmonic sounds across delays. In contrast to inharmonic spectra, harmonic spectra can be summarized, and thus compressed, using their fundamental frequency (f0). Participants heard two sounds and judged which was higher. Despite being comparable for sounds presented back-to-back, discrimination was better for harmonic than inharmonic stimuli when sounds were separated in time, implicating memory representations unique to harmonic sounds. Patterns of individual differences (correlations between thresholds in different conditions) indicated that listeners use different representations depending on the time delay between sounds, directly comparing the spectra of temporally adjacent sounds, but transitioning to comparing f0s across delays. The need to store sound in memory appears to determine reliance on f0-based pitch and may explain its importance in music, in which listeners must extract relationships between notes separated in time.
Topics: Acoustic Stimulation; Adolescent; Adult; Auditory Threshold; Female; Humans; Male; Memory; Middle Aged; Music; Psychoacoustics; Sound; Time Factors; Young Adult
PubMed: 33262275
DOI: 10.1073/pnas.2008956117
Otology & Neurotology : Official..., Sep 2014
OBJECTIVE
To determine if unaided, non-linguistic psychoacoustic measures can be effective in evaluating cochlear implant (CI) candidacy.
STUDY DESIGN
Prospective split-cohort study including predictor development subgroup and independent predictor validation subgroup.
SETTING
Tertiary referral center.
SUBJECTS
Fifteen subjects (28 ears) with hearing loss were recruited from patients visiting the University of Washington Medical Center for CI evaluation.
METHODS
Spectral-ripple discrimination (using a 13-dB modulation depth) and temporal modulation detection using 10- and 100-Hz modulation frequencies were assessed with stimuli presented through insert earphones. Correlations between performance for psychoacoustic tasks and speech perception tasks were assessed. Receiver operating characteristic curve analysis was performed to estimate the optimal psychoacoustic score for CI candidacy evaluation in the development subgroup and then tested in an independent sample.
RESULTS
Strong correlations were observed between spectral-ripple thresholds and both aided sentence recognition and unaided word recognition. Weaker relationships were found between temporal modulation detection and speech tests. Receiver operating characteristic curve analysis demonstrated that the unaided spectral-ripple discrimination shows a good sensitivity, specificity, positive predictive value, and negative predictive value compared to the current gold standard, aided sentence recognition.
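The diagnostic metrics reported above follow from tabulating a score cutoff against the reference standard. A minimal sketch, assuming a candidate is flagged when the spectral-ripple score falls at or below a cutoff; all scores, labels, and the cutoff itself are invented (the study's optimal cutoff is not reproduced):

```python
def diagnostic_metrics(scores, is_candidate, cutoff):
    """Classify 'CI candidate' when the score falls at or below the cutoff,
    then tabulate the predictions against the reference-standard labels."""
    tp = fp = fn = tn = 0
    for score, truth in zip(scores, is_candidate):
        predicted = score <= cutoff
        if predicted and truth:
            tp += 1          # true positive
        elif predicted and not truth:
            fp += 1          # false positive
        elif not predicted and truth:
            fn += 1          # false negative
        else:
            tn += 1          # true negative
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }

scores = [0.4, 0.6, 1.2, 1.8, 2.5, 0.9, 3.1, 0.7]   # hypothetical ripple thresholds
labels = [True, True, True, False, False, True, False, True]
print(diagnostic_metrics(scores, labels, cutoff=1.0))
```

Sweeping the cutoff over the observed score range and plotting sensitivity against (1 - specificity) yields the ROC curve used to pick the operating point.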
CONCLUSION
Results demonstrated that the unaided spectral-ripple discrimination test could be a promising tool for evaluating CI candidacy.
Topics: Adult; Aged; Aged, 80 and over; Cochlear Implantation; Cochlear Implants; Cohort Studies; Female; Hearing Loss; Humans; Linguistics; Male; Middle Aged; Patient Selection; Prospective Studies; Psychoacoustics; Speech Perception; Young Adult
PubMed: 24901669
DOI: 10.1097/MAO.0000000000000323