Attention, Perception & Psychophysics, Jul 2015
Review
How do we recognize what one person is saying when others are speaking at the same time? This review summarizes the wide range of research in psychoacoustics, auditory scene analysis, and attention that this question has stimulated, all dealing with early processing and selection of speech. Important effects occurring at the peripheral and brainstem levels are mutual masking of sounds and "unmasking" resulting from binaural listening. Psychoacoustic models have been developed that can predict these effects accurately, albeit using computational approaches rather than approximations of neural processing. Grouping—the segregation and streaming of sounds—represents a subsequent processing stage that interacts closely with attention. Sounds can be easily grouped—and subsequently selected—using primitive features such as spatial location and fundamental frequency. More complex processing is required when lexical, syntactic, or semantic information is used. Whereas it is now clear that such processing can take place preattentively, there also is evidence that the processing depth depends on the task-relevancy of the sound. This is consistent with the presence of a feedback loop in attentional control, triggering enhancement of to-be-selected input. Despite recent progress, there are still many unresolved issues: there is a need for integrative models that are neurophysiologically plausible, for research into grouping based on other than spatial or voice-related cues, for studies explicitly addressing endogenous and exogenous attention, for an explanation of the remarkable sluggishness of attention focused on dynamically changing sounds, and for research elucidating the distinction between binaural speech perception and sound localization.
Topics: Attention; Cues; Humans; Perceptual Masking; Psychoacoustics; Sound Localization; Sound Spectrography; Speech; Speech Acoustics; Speech Perception
PubMed: 25828463
DOI: 10.3758/s13414-015-0882-9
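The binaural "unmasking" this review refers to is classically explained by Durlach's equalization-cancellation (EC) principle. Below is a toy sketch of that idea with synthetic signals, not any published model; the NoSpi configuration (identical noise in both ears, tone phase-inverted in one) and the tone level are illustrative choices.

```python
# Toy illustration of the equalization-cancellation (EC) idea behind
# binaural unmasking (a sketch, not any published model).
import numpy as np

fs = 16000                      # sample rate (Hz)
t = np.arange(0, 0.5, 1 / fs)   # 0.5 s of signal

rng = np.random.default_rng(0)
noise = rng.normal(0, 1, t.size)          # masker, identical at both ears (No)
tone = 0.1 * np.sin(2 * np.pi * 500 * t)  # 500-Hz target, well below the noise

left = noise + tone    # tone in phase at the left ear
right = noise - tone   # tone phase-inverted at the right ear (Spi)

def snr_db(sig, noi):
    return 10 * np.log10(np.mean(sig**2) / np.mean(noi**2))

# Monaurally, the tone is buried in the noise.
print(f"monaural SNR: {snr_db(tone, noise):5.1f} dB")

# EC step: equalize (the ears already match here) and cancel by subtraction.
# The interaurally identical noise cancels; the antiphasic tone doubles.
ec_out = left - right   # = 2 * tone, noise removed
print(f"EC output equals 2*tone: {np.allclose(ec_out, 2 * tone)}")
```

Real EC models add internal noise and imperfect interaural equalization, which is why measured binaural masking level differences plateau around 10-15 dB rather than growing without bound.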
Hearing Research, Nov 2012
Review
In our daily lives we hear airborne sounds that travel primarily through the external and middle ear to the cochlear sensory epithelium. We also hear sounds that travel to the cochlea via a second sound-conduction route, bone conduction. This second pathway is excited by vibrations of the head and body that result from substrate vibrations, direct application of vibrational stimuli to the head or body, or vibrations induced by airborne sound. The sensation of bone-conducted sound is affected by the presence of the external and middle ear, but is not completely dependent upon their function. Measurements of the differential sensitivity of patients to airborne sound and direct vibration of the head are part of the routine battery of clinical tests used to separate conductive and sensorineural hearing losses. Georg von Békésy designed a careful set of experiments and pioneered many measurement techniques on human cadaver temporal bones, in physical models, and in human subjects to elucidate the basic mechanisms of air- and bone-conducted sound. Looking back, one marvels at the sheer number of experiments he performed on sound conduction, mostly by himself without the aid of students or research associates. Békésy's work had a profound impact on the field of middle-ear mechanics and bone conduction fifty years ago when he received his Nobel Prize. Today many of Békésy's ideas continue to be investigated and extended; some have been supported by new evidence, some have been refuted, while others remain to be tested.
Topics: Acoustic Stimulation; Animals; Audiology; Bone Conduction; Cochlea; Hearing; History, 20th Century; History, 21st Century; Humans; Mechanotransduction, Cellular; Models, Biological; Pressure; Psychoacoustics; Vibration
PubMed: 22617841
DOI: 10.1016/j.heares.2012.05.004
Journal of Neuroscience Methods, Mar 2009
An unsupervised correlation-based clustering method was developed to assess the trial-to-trial variability of auditory evoked potentials (AEPs). The method first decomposes single trials into three frequency bands, each containing activity primarily associated with one of the three major AEP components, i.e., P50, N100 and P200. Next, single-trial evoked potentials with similar post-stimulus characteristics are clustered and selectively averaged to determine the presence or absence of an AEP component. The method was evaluated on actual AEP and spontaneous EEG data collected from 25 healthy participants using a paradigm in which pairs of identical tones were presented, with the first stimulus (S1) presented 0.5 s before the second stimulus (S2). Homogeneous, well-separated clusters were obtained and substantial AEP variability was found. Also, there was a trend for S2 to produce fewer 'complete' (and significantly smaller) responses than S1. Tests conducted on spontaneous EEG produced clusters similar to those obtained from AEP data, but significantly fewer trials produced responses containing all three EP components than were seen in the AEP data. These findings suggest that the clustering method presented here is adequate for assessing trial-to-trial EP variability. Also, the results suggest that the sensory gating observed in normal controls may be caused by the fact that the second stimulus generates fewer 'responsive' trials than the first stimulus, thus resulting in smaller ensemble averages.
Topics: Acoustic Stimulation; Auditory Perception; Cluster Analysis; Electroencephalography; Evoked Potentials, Auditory; Humans; Psychoacoustics; Reaction Time
PubMed: 19103222
DOI: 10.1016/j.jneumeth.2008.11.021
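A minimal sketch of the kind of pipeline this abstract describes: band-limit each trial around one AEP component, group trials whose post-stimulus waveforms correlate, and selectively average the responsive cluster. The band edges, the grand-average template used as a cluster seed, and the correlation threshold are all illustrative assumptions; the published method clusters on trial-to-trial correlations without these simplifications.

```python
# Sketch of correlation-based clustering of single-trial auditory evoked
# potentials (AEPs). Band edges, the grand-average template, and the 0.3
# correlation threshold are illustrative assumptions, not published values.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500                          # sampling rate (Hz)
n_trials, n_samples = 100, 250    # 0.5-s post-stimulus epochs
rng = np.random.default_rng(1)
t = np.arange(n_samples) / fs

def bump(center, width, sign):
    """Gaussian bump standing in for one AEP component."""
    return sign * np.exp(-((t - center) ** 2) / (2 * width ** 2))

# Synthetic trials: P50/N100/P200-like deflections with variable gain + noise.
gains = rng.uniform(0.0, 2.0, n_trials)
clean = bump(0.05, 0.01, +1.0) + bump(0.10, 0.02, -1.5) + bump(0.20, 0.03, +1.2)
trials = gains[:, None] * clean + rng.normal(0.0, 0.5, (n_trials, n_samples))

def bandpass(x, lo, hi):
    b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

# Step 1: decompose trials into three bands, one per major AEP component.
bands = {"P50": (20, 45), "N100": (5, 20), "P200": (1, 8)}  # assumed (Hz)

for name, (lo, hi) in bands.items():
    band = bandpass(trials, lo, hi)
    template = band.mean(axis=0)                   # grand average as seed
    # Step 2: cluster by post-stimulus similarity (correlation to template).
    r = np.array([np.corrcoef(tr, template)[0, 1] for tr in band])
    responsive = r > 0.3
    # Step 3: selective averaging over the responsive cluster only.
    if responsive.any():
        sel_avg = band[responsive].mean(axis=0)
        print(f"{name}: {responsive.sum()}/{n_trials} responsive trials, "
              f"selective-average peak {np.abs(sel_avg).max():.2f}")
```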
CoDAS, 2019
PURPOSE
To compare clinical characteristics of tinnitus and interference in quality of life in individuals with and without associated hearing loss, as well as to discuss the association of quantitative measurements and qualitative instruments.
METHODS
A quantitative, cross-sectional and comparative study approved by the Research Ethics Committee (No. 973.314/CAEE: 41634815.3.0000.0106) was carried out. The responses of the psychoacoustic assessment of tinnitus (intensity, frequency, minimum masking level and loudness discomfort level for pure tone and speech), as well as the Tinnitus Handicap Inventory (THI) questionnaire, and the visual analogue scale (VAS) were compared between 15 patients with tinnitus and peripheral hearing loss (group I) and 16 adults with normal hearing (group II).
RESULTS
The mean VAS and THI scores obtained in GI were 5.1 (±1.5) and 42.3 (±18), and in GII, 5.7 (±2.6) and 32.7 (±25), respectively. These results suggest moderate annoyance in GI and moderate/mild annoyance in GII (p>0.005). There was a positive, moderate correlation between THI and VAS only in GII. In the psychoacoustic evaluation, significant differences were observed between the groups for the loudness measurement (p=0.013) and the minimum masking level (p=0.001).
CONCLUSION
The presence of hearing loss had no direct influence on the impact of tinnitus. The differences found between the groups in the psychoacoustic measures can be explained by the presence of cochlear damage. The objective measurement of tinnitus, regardless of the presence or absence of peripheral hearing loss, is an important instrument to be used alongside self-evaluation measures.
Topics: Adult; Age Factors; Audiometry; Cross-Sectional Studies; Female; Hearing Loss; Humans; Male; Middle Aged; Prospective Studies; Psychoacoustics; Quality of Life; Retrospective Studies; Severity of Illness Index; Surveys and Questionnaires; Tinnitus; Visual Analog Scale; Young Adult
PubMed: 31644709
DOI: 10.1590/2317-1782/20192018029
Psychonomic Bulletin & Review, Jun 2022
The perception of consonance and dissonance in intervals and chords is influenced by psychoacoustic and cultural factors. Past research has provided conflicting observations about the role of frequency in assessing musical consonance, which may stem from comparisons of limited frequency bands without much theorizing or modeling. Here we examine the effect of register on the perceptual consonance of chords. Based on two acoustic principles, we predict a decrease in consonance at low frequencies (roughness) and at high frequencies (sharpness). Because these two principles are separate, we hypothesize that frequency will have a curvilinear impact on consonance. A selection of tetrads varying in consonance was presented in seven registers spanning 30 to 2600 Hz. Fifty-five participants rated the stimuli in an online experiment. The effect of register on consonance ratings was clear and largely in line with the predictions: the low registers impacted consonance negatively, and the highest two registers also received significantly lower consonance ratings than the middle registers. The impact of register on consonance could be accurately described by a cubic relationship. Overall, the influence of roughness on consonance ratings was more pronounced than that of sharpness. Together, these findings clarify previous empirical efforts to model the effect of frequency on consonance through basic acoustic principles. They further suggest that a credible account of consonance and dissonance in music needs to incorporate register.
Topics: Acoustic Stimulation; Auditory Perception; Humans; Music; Psychoacoustics
PubMed: 34921342
DOI: 10.3758/s13423-021-02033-5
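The roughness side of this prediction can be illustrated with the Plomp-Levelt dissonance curve in Sethares' parameterization, a standard textbook formulation that is not necessarily the roughness model the authors used: for a fixed chord type, summed pairwise partial roughness grows as the chord is transposed downward, because equal frequency ratios compress into narrower critical bands.

```python
# Pairwise roughness of a tetrad across registers, using Sethares'
# (approximate) constants for the Plomp-Levelt dissonance curve.
# Illustrative only; not the model used in the cited study.
import numpy as np

def pl_dissonance(f1, f2, a1=1.0, a2=1.0):
    """Plomp-Levelt roughness contribution of one pair of partials."""
    fmin = min(f1, f2)
    s = 0.24 / (0.021 * fmin + 19.0)   # critical-band scaling
    d = abs(f2 - f1)
    return a1 * a2 * (np.exp(-3.5 * s * d) - np.exp(-5.75 * s * d))

def chord_roughness(f0s, n_partials=6):
    """Sum pairwise roughness over the partials of all chord tones."""
    partials = [(k * f0, 1.0 / k) for f0 in f0s for k in range(1, n_partials + 1)]
    total = 0.0
    for i in range(len(partials)):
        for j in range(i + 1, len(partials)):
            (fa, aa), (fb, ab) = partials[i], partials[j]
            total += pl_dissonance(fa, fb, aa, ab)
    return total

# A major-seventh tetrad (semitone steps 0, 4, 7, 11), transposed by octaves.
steps = np.array([0, 4, 7, 11])
for root in [32.7, 65.4, 130.8, 261.6, 523.3, 1046.5, 2093.0]:  # C1..C7
    f0s = root * 2 ** (steps / 12)
    print(f"root {root:7.1f} Hz  roughness {chord_roughness(f0s):7.3f}")
```

Sharpness would need a separate model (e.g., a Zwicker-style weighting of high-frequency specific loudness), which is what motivates the curvilinear, rather than monotonic, prediction.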
PloS One, 2023
There is debate over whether the foundations of consonance and dissonance are rooted in culture or in psychoacoustics. To disentangle the contributions of culture and psychoacoustics, we examined automatic responses to the perfect fifth and the major second (flattened by 25 cents) alongside conscious evaluations of the same intervals across two cultures and two levels of musical expertise. Four groups of participants completed the tasks: expert performers of Lithuanian Sutartinės, English-speaking musicians in Western diatonic genres, Lithuanian non-musicians, and English-speaking non-musicians. Sutartinės singers were chosen because this style of singing is an example of 'beat diaphony', in which the intervals between parts form predominantly rough sonorities with audible beats. There was no difference in automatic responses to the intervals, suggesting that an aversion to acoustically rough intervals is not governed by cultural familiarity but may have a physical basis in how the human auditory system works. However, conscious evaluations showed group differences, with Sutartinės singers rating the flattened major second as more positive than did the other groups. The results are discussed in the context of recent developments in consonance and dissonance research.
Topics: Humans; Music; Singing; Psychoacoustics; Recognition, Psychology; Consciousness; Acoustic Stimulation; Auditory Perception
PubMed: 38051728
DOI: 10.1371/journal.pone.0294645
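For concreteness, the stimulus intervals reduce to simple cents arithmetic: a tone c cents above a reference f0 has frequency f0 · 2^(c/1200). A quick sketch (the A3 reference pitch is an arbitrary assumption; the abstract does not specify one):

```python
# Frequencies of the two stimulus intervals above an assumed A3 reference.
f0 = 220.0  # Hz (assumed reference pitch)

def above(f, cents):
    return f * 2 ** (cents / 1200)

fifth = above(f0, 700)          # equal-tempered perfect fifth
flat_m2 = above(f0, 200 - 25)   # major second flattened by 25 cents

print(f"perfect fifth:       {fifth:6.1f} Hz")
print(f"flattened major 2nd: {flat_m2:6.1f} Hz")
# The second's upper tone (~243 Hz) lies well inside the critical band
# around f0, producing the rough, beating sonority of beat diaphony.
```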
International Journal of Environmental Research and Public Health, Jan 2021
In public spaces, the role of a fire alarm is to make people recognize potential danger so that they take appropriate evacuation action. Unfortunately, the sound of the fire alarm has not yet been internationally standardized, beyond a recommendation to use a signal with a regular temporal pattern (the T-3 pattern). To identify an effective alarm sound, the present study investigated the relationship between the acoustic characteristics of fire alarms and both subjective psychoacoustic recognition and objective electroencephalography (EEG) responses in 50 young and older listeners. Six types of alarm were used as stimuli: bell, slow whoop, T-3 520 Hz, T-3 3100 Hz, and two simulated T-3 sounds (520 and 3100 Hz) approximating what older adults with age-related hearing loss would hear. EEG was recorded from each participant while they listened to the sounds. Psychoacoustic recognition was evaluated with a questionnaire consisting of three subcategories: arousal, urgency, and immersion. The subjective responses differed significantly between the types of sound. In particular, alarms with high-frequency or gradually rising-frequency content, such as T-3 3100 Hz, the bell, and the slow whoop, were effective at inducing high arousal and urgency, although such sounds propagate less widely and are more vulnerable to background noise. Interestingly, there was a meaningful interaction between sound type and age group for urgency and immersion, with the bell rated notably highly by older adults. In general, the EEG data showed decreased alpha power and increased gamma power for all sounds, a pattern associated with negative emotional states such as high arousal and urgency. Based on the current findings, we suggest using fire alarm sounds with high-frequency acoustic features in indoor and/or public places.
Topics: Acoustic Stimulation; Aged; Auditory Perception; Brain; Humans; Psychoacoustics; Recognition, Psychology; Sound
PubMed: 33440710
DOI: 10.3390/ijerph18020541
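The T-3 (temporal-three) pattern referenced above follows ISO 8201 / ANSI S3.41: nominally three 0.5 s bursts separated by 0.5 s of silence, with the final off phase extended to 1.5 s before the cycle repeats. A sketch synthesizing the T-3 520 Hz stimulus; envelope details such as the onset/offset ramps are assumptions.

```python
# Synthesize one cycle of a T-3 alarm at 520 Hz, per the ISO 8201 /
# ANSI S3.41 pattern: 3 x (0.5 s on, 0.5 s off), last off phase 1.5 s.
# The 10-ms raised-cosine ramps are an illustrative assumption.
import numpy as np

fs = 44100
f = 520.0

def burst(dur, ramp=0.01):
    t = np.arange(int(dur * fs)) / fs
    x = np.sin(2 * np.pi * f * t)
    n = int(ramp * fs)                   # short raised-cosine on/off ramps
    env = np.ones_like(x)
    env[:n] = 0.5 * (1 - np.cos(np.pi * np.arange(n) / n))
    env[-n:] = env[:n][::-1]
    return x * env

on, off = burst(0.5), np.zeros(int(0.5 * fs))
cycle = np.concatenate([on, off, on, off, on, np.zeros(int(1.5 * fs))])
print(f"cycle length: {cycle.size / fs:.2f} s")  # 4.00 s
```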
PloS One, 2019
This study investigates the role of extrinsic and intrinsic predictors in the perception of affect in mostly unfamiliar musical chords from the Bohlen-Pierce microtonal tuning system. Extrinsic predictors are derived, in part, from long-term statistical regularities in music; for example, the prevalence of a chord in a corpus of music that is relevant to a participant. Conversely, intrinsic predictors make no use of long-term statistical regularities in music; for example, psychoacoustic features inherent in the music, such as roughness. Two types of affect were measured for each chord: pleasantness/unpleasantness and happiness/sadness. We modelled the data with a number of novel and well-established intrinsic predictors, namely roughness, harmonicity, spectral entropy and average pitch height; and a single extrinsic predictor, 12-TET Dissimilarity, which was estimated by the chord's smallest distance to any 12-tone equally tempered chord. Musical sophistication was modelled as a potential moderator of the above predictors. Two experiments were conducted, each using slightly different tunings of the Bohlen-Pierce musical system: a just intonation version and an equal-tempered version. It was found that, across both tunings and across both affective responses, all the tested intrinsic features and 12-TET Dissimilarity have consistent influences in the expected direction. These results contrast with much current music perception research, which tends to assume the dominance of extrinsic over intrinsic predictors. This study highlights the importance of both intrinsic characteristics of the acoustic signal itself, as well as extrinsic factors, such as 12-TET Dissimilarity, on perception of affect in music.
Topics: Acoustic Stimulation; Adolescent; Adult; Affect; Auditory Perception; Emotions; Evoked Potentials, Auditory; Female; Happiness; Humans; Male; Music; Pitch Perception; Psychoacoustics; Random Allocation; Young Adult
PubMed: 31226170
DOI: 10.1371/journal.pone.0218570
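The extrinsic predictor lends itself to a short sketch: express a chord in cents, then find the transposition offset that minimizes its distance to the 100-cent 12-TET grid. The paper defines the measure as the chord's smallest distance to any 12-TET chord; the mean-absolute aggregation and the 1-cent grid search below are my assumptions, and the example chords are hypothetical.

```python
# Sketch of a 12-TET Dissimilarity measure: smallest mean distance (cents)
# from a chord to the 100-cent 12-TET grid, minimized over transposition.
# Mean-absolute aggregation and the 1-cent grid search are assumptions.
import numpy as np

def tet12_dissimilarity(cents):
    cents = np.asarray(cents, dtype=float)
    best = np.inf
    for offset in np.arange(0.0, 100.0, 1.0):     # grid is 100-cent periodic
        dev = (cents - offset + 50.0) % 100.0 - 50.0  # signed distance to grid
        best = min(best, np.mean(np.abs(dev)))
    return best

# Bohlen-Pierce equal-tempered step: the 3:1 "tritave" divided into 13 parts.
bp_step = 1200 * np.log2(3) / 13               # ~146.3 cents

bp_chord = bp_step * np.array([0, 4, 7, 10])   # a hypothetical BP tetrad
tet_chord = np.array([0, 400, 700, 1100])      # 12-TET major seventh chord

print(f"BP chord:     {tet12_dissimilarity(bp_chord):5.1f} cents from 12-TET")
print(f"12-TET chord: {tet12_dissimilarity(tet_chord):5.1f} cents from 12-TET")
```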
Journal of the Association for Research in Otolaryngology, Feb 2015
Review
In ordinary listening environments, acoustic signals reaching the ears directly from real sound sources are followed after a few milliseconds by early reflections arriving from nearby surfaces. Early reflections are spectrotemporally similar to their source signals but commonly carry spatial acoustic cues unrelated to the source location. Humans and many other animals, including nonmammalian and even invertebrate animals, are nonetheless able to effectively localize sound sources in such environments, even in the absence of disambiguating visual cues. Robust source localization despite concurrent or nearly concurrent spurious spatial acoustic information is commonly attributed to an assortment of perceptual phenomena collectively termed "the precedence effect," characterizing the perceptual dominance of spatial information carried by the first-arriving signal. Here, we highlight recent progress and changes in the understanding of the precedence effect and related phenomena.
Topics: Animals; Humans; Psychoacoustics; Sound Localization
PubMed: 25479823
DOI: 10.1007/s10162-014-0496-2
The Journal of the Acoustical Society of America, Mar 2014
Comparative Study
Although many studies have examined the precedence effect (PE), few have tested whether it shows a buildup and breakdown in nonhuman animals comparable to that seen in humans. These processes are thought to reflect the ability of the auditory system to adjust to a listener's acoustic environment, and their mechanisms are still poorly understood. In this study, ferrets were trained on a two-alternative forced-choice task to discriminate the azimuthal direction of brief sounds. In one experiment, pairs of noise bursts were presented from two loudspeakers at different interstimulus delays (ISDs). Results showed that localization performance changed as a function of ISD in a manner consistent with the PE being operative. A second experiment investigated buildup and breakdown of the PE by measuring the ability of ferrets to discriminate the direction of a click pair following presentation of a conditioning train. Human listeners were also tested using this paradigm. In both species, performance was better when the test clicks and conditioning train had the same ISD but deteriorated following a switch in the direction of the leading and lagging sounds between the conditioning train and test clicks. These results suggest that ferrets, like humans, experience a buildup and breakdown of the PE.
Topics: Acoustic Stimulation; Adult; Animals; Audiometry; Auditory Pathways; Behavior, Animal; Conditioning, Psychological; Discrimination, Psychological; Female; Ferrets; Humans; Male; Models, Animal; Psychoacoustics; Reaction Time; Sound Localization; Species Specificity; Time Factors
PubMed: 24606278
DOI: 10.1121/1.4864486
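A sketch of the kind of lead-lag stimulus used in such precedence-effect paradigms: two clicks routed to opposite channels with a configurable interstimulus delay (ISD). The click shape, duration, and equal levels are illustrative assumptions, not the cited study's exact stimuli.

```python
# Generate a stereo lead-lag click pair for a precedence-effect trial:
# the lead click in one channel, the lag click delayed by the ISD in the
# other. Click duration and equal levels are illustrative assumptions.
import numpy as np

fs = 48000

def click_pair(isd_ms, lead_left=True, click_ms=0.1, total_ms=50.0):
    n = int(total_ms * fs / 1000)
    click = np.ones(int(click_ms * fs / 1000))   # rectangular click
    lead, lag = np.zeros(n), np.zeros(n)
    lead[:click.size] = click
    start = int(isd_ms * fs / 1000)              # lag onset after the ISD
    lag[start:start + click.size] = click
    return np.stack([lead, lag] if lead_left else [lag, lead], axis=1)

# At short ISDs (~1-10 ms) listeners report a single click at the lead
# location; at longer ISDs the lag becomes a separately localizable echo.
stim = click_pair(isd_ms=4.0)
print(stim.shape)  # (2400, 2): samples x channels
```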