Frontiers in Neuroscience 2022
Stroke-induced lesions at different locations in the brain can affect various aspects of binaural hearing, including spatial perception. Previous studies found impairments in binaural hearing, especially in patients with temporal lobe tumors or lesions, but also resulting from lesions all along the auditory pathway from brainstem nuclei up to the auditory cortex. Currently, structural magnetic resonance imaging (MRI) is used in the clinical treatment routine of stroke patients. In combination with structural imaging, an analysis of binaural hearing enables a better understanding of hearing-related signaling pathways and of clinical disorders of binaural processing after a stroke. However, little data are currently available on binaural hearing in stroke patients, particularly for the acute phase of stroke. Here, we sought to address this gap in an exploratory study of patients in the acute phase of ischemic stroke. We conducted psychoacoustic measurements using two tasks of binaural hearing: binaural tone-in-noise detection, and lateralization of stimuli with interaural time- or level differences. The location of the stroke lesion was established by previously acquired MRI data. An additional general assessment included three-frequency audiometry, cognitive assessments, and depression screening. Fifty-five patients participated in the experiments, on average 5 days after their stroke onset. Patients whose lesions were in different locations were tested, including lesions in brainstem areas, basal ganglia, thalamus, temporal lobe, and other cortical and subcortical areas. Lateralization impairments were found in most patients with lesions within the auditory pathway. 
Lesioned areas at brainstem levels led to distortions of lateralization in both hemifields, thalamus lesions were correlated with a shift of the whole auditory space, whereas some cortical lesions predominantly affected the lateralization of stimuli contralateral to the lesion and resulted in more variable responses. Lateralization performance was also found to be affected by lesions of the right, but not the left, basal ganglia, as well as by lesions in non-auditory cortical areas. In general, altered lateralization was common in the stroke group. In contrast, deficits in tone-in-noise detection were relatively scarce in our sample of lesion patients, although a significant number of patients with multiple lesion sites were not able to complete the task.
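The ITD/ILD lateralization stimuli used in tasks like the one above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the sampling rate, function name, and the convention of splitting the ILD symmetrically across the two ears are assumptions.

```python
import numpy as np

FS = 44100  # sampling rate in Hz (assumption)

def lateralized_tone(freq_hz, dur_s, itd_s=0.0, ild_db=0.0):
    """Stereo pure tone with an interaural time difference (ITD) and an
    interaural level difference (ILD); positive values lateralize the
    sound toward the right ear. The ILD is split symmetrically."""
    t = np.arange(int(dur_s * FS)) / FS
    # Positive ITD: the left-ear signal lags, so the image moves right.
    left = np.sin(2 * np.pi * freq_hz * (t - itd_s / 2))
    right = np.sin(2 * np.pi * freq_hz * (t + itd_s / 2))
    gain = 10 ** (ild_db / 40)  # half the ILD per ear, so right/left = ild_db
    return np.stack([left / gain, right * gain])

# A 500 Hz tone, 200 ms long, with a 300 microsecond right-leading ITD:
stim = lateralized_tone(500, 0.2, itd_s=300e-6)
```

In a real experiment the tones would additionally be gated with onset/offset ramps so that envelope cues do not conflict with the fine-structure ITD.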
PubMed: 36620448
DOI: 10.3389/fnins.2022.1022354
International Journal of Environmental... Dec 2022
Review
The sound environment and music intersect in several ways and the same holds true for the soundscape and our internal response to listening to music. Music may be part of a sound environment or take on some aspects of environmental sound, and therefore some of the soundscape response may be experienced alongside the response to the music. At a deeper level, coping with music, spoken language, and the sound environment may all have influenced our evolution, and the cognitive-emotional structures and responses evoked by all three sources of acoustic information may be, to some extent, the same. This paper distinguishes and defines the extent of our understanding about the interplay of external sound and our internal response to it in both musical and real-world environments. It takes a naturalistic approach to music/sound and music-listening/soundscapes to describe in objective terms some mechanisms of sense-making and interactions with the sounds. It starts from a definition of sound as vibrational and transferable energy that impinges on our body and our senses, with a dynamic tension between lower-level coping mechanisms and higher-level affective and cognitive functioning. In this way, we establish both commonalities and differences between musical responses and soundscapes. Future research will allow this understanding to grow and be refined further.
Topics: Music; Sound; Acoustics; Emotions; Auditory Perception
PubMed: 36612591
DOI: 10.3390/ijerph20010269
International Journal of Environmental... Dec 2022
In audiovisual contexts, different conventions determine the level at which background music is mixed into the final program, and sometimes, the mix renders the music to be practically or totally inaudible. From a perceptual point of view, the audibility of music is subject to auditory masking by other aural stimuli such as voice or additional sounds (e.g., applause, laughter, horns), and is also influenced by the visual content that accompanies the soundtrack, and by attentional and motivational factors. This situation is relevant to the music industry because, according to some copyright regulations, the non-audible background music must not generate any distribution rights, and the marginally audible background music must generate half of the standard value of audible music. In this study, we conduct two psychoacoustic experiments to identify several factors that influence background music perception, and their contribution to its variable audibility. Our experiments are based on auditory detection and chronometric tasks involving keyboard interactions with original TV content. From the collected data, we estimated a sound-to-music ratio range to define the audibility threshold limits of the class. In addition, results show that perception is affected by loudness level, listening condition, music sensitivity, and type of television content.
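The sound-to-music ratio mentioned above is, in essence, a level ratio expressed in dB. The abstract does not give the paper's exact definition; the following is one plausible formulation, shown only as an illustration.

```python
import numpy as np

def sound_to_music_ratio_db(foreground, music):
    """RMS level of the non-music program material (voice, effects)
    relative to the background music, in dB; large positive values
    mean the music is deeply buried under the rest of the mix."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20 * np.log10(rms(foreground) / rms(music))

# Toy check: music mixed 20 dB below a voice track of equal nominal level.
voice = np.random.default_rng(0).standard_normal(44100)
music = voice * 10 ** (-20 / 20)  # attenuate a copy by 20 dB (illustration)
smr = sound_to_music_ratio_db(voice, music)  # -> 20.0 dB
```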
Topics: Music; Acoustic Stimulation; Auditory Perception; Sound; Psychoacoustics
PubMed: 36612443
DOI: 10.3390/ijerph20010123
Journal of Speech, Language, and... Jan 2023
PURPOSE
Acoustic and perceptual quantification of vocal strain has been a vexing problem for years. To increase measurement rigor, a suitable single-variable matching stimulus for strain was developed and validated, based on the matching stimulus used previously for breathy and rough voice qualities.
METHOD
A set of 21 comparison stimuli for a single-variable matching task (SVMT) was synthesized based on a speech-shaped sawtooth waveform mixed with speech-shaped noise. Variable bandpass filter gain in mid-to-high frequencies achieved a wide range of computed sharpness (in constant sharpness steps) and served as the independent variable for the SVMT. Ten natural /ɑ/ stimuli with a wide range of the primary voice quality of strain and a minimum of breathiness or roughness were selected and assessed using the SVMT. Natural voice samples and synthetic comparison stimuli were also assessed using a perceptual magnitude estimation (ME) task.
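A rough sketch of how such a sawtooth-plus-noise comparison stimulus with a variable mid-to-high-frequency band gain might be synthesized. The band edges (2-8 kHz), the noise level, and the omission of speech shaping are all assumptions here, not the study's actual parameters.

```python
import numpy as np

FS = 22050  # sampling rate in Hz (assumption)

def comparison_stimulus(f0=120.0, dur=0.5, hf_gain_db=0.0, seed=0):
    """Sawtooth-plus-noise stimulus with an adjustable gain applied to
    a mid-to-high-frequency band (2-8 kHz here); varying the gain
    varies the computed sharpness of the stimulus."""
    n = int(dur * FS)
    t = np.arange(n) / FS
    # Band-limited sawtooth: harmonics up to Nyquist with 1/k amplitudes.
    saw = sum(np.sin(2 * np.pi * k * f0 * t) / k
              for k in range(1, int(FS / 2 / f0)))
    noise = np.random.default_rng(seed).standard_normal(n) * 0.05
    x = saw + noise
    # Scale the 2-8 kHz band in the frequency domain by hf_gain_db.
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(n, 1 / FS)
    spec[(freqs >= 2000) & (freqs <= 8000)] *= 10 ** (hf_gain_db / 20)
    return np.fft.irfft(spec, n)
```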
RESULTS
ME data validated the correspondence of the set of comparison stimuli to varying degrees of perceived strain. Perceived strain magnitudes of the comparison stimuli increased significantly and linearly with computed sharpness (R² = .99). A linear regression revealed that strain matching values were significantly predicted by the computed sharpness (R² = .96) and by the perceived strain magnitudes (R² = .95) of the natural voice stimuli.
CONCLUSION
The perception of vocal strain is strongly associated with computed sharpness and is captured accurately and precisely using an SVMT, in which the independent variable is the bandpass filter gain (in steps of equal sharpness) applied to the comparison stimuli.
Topics: Humans; Voice Quality; Psychoacoustics; Speech Acoustics; Acoustics; Speech Perception; Speech Production Measurement
PubMed: 36516473
DOI: 10.1044/2022_JSLHR-22-00280
Cognition Mar 2023
Information in speech and music is often conveyed through changes in fundamental frequency (f0), perceived by humans as "relative pitch". Relative pitch judgments are complicated by two facts. First, sounds can simultaneously vary in timbre due to filtering imposed by a vocal tract or instrument body. Second, relative pitch can be extracted in two ways: by measuring changes in constituent frequency components from one sound to another, or by estimating the f0 of each sound and comparing the estimates. We examined the effects of timbral differences on relative pitch judgments, and whether any invariance to timbre depends on whether judgments are based on constituent frequencies or their f0. Listeners performed up/down and interval discrimination tasks with pairs of spoken vowels, instrument notes, or synthetic tones, synthesized to be either harmonic or inharmonic. Inharmonic sounds lack a well-defined f0, such that relative pitch must be extracted from changes in individual frequencies. Pitch judgments were less accurate when vowels/instruments were different compared to when they were the same, and were biased by the associated timbre differences. However, this bias was similar for harmonic and inharmonic sounds, and was observed even in conditions where judgments of harmonic sounds were based on f0 representations. Relative pitch judgments are thus not invariant to timbre, even when timbral variation is naturalistic, and when such judgments are based on representations of f0.
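Harmonic and inharmonic complex tones of the kind described can be sketched as follows; the jitter scheme and all parameters are illustrative, not necessarily those used in the study.

```python
import numpy as np

FS = 44100  # sampling rate in Hz (assumption)

def complex_tone(f0, n_harm=10, dur=0.3, inharmonic=False, jitter=0.3, seed=1):
    """Equal-amplitude complex tone. With `inharmonic=True`, every
    component above the first is jittered away from its harmonic
    frequency (uniform within +/- jitter*f0), a common way to remove a
    well-defined f0 while keeping the spectrum otherwise similar."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * FS)) / FS
    freqs = f0 * np.arange(1, n_harm + 1, dtype=float)
    if inharmonic:
        freqs[1:] += rng.uniform(-jitter * f0, jitter * f0, n_harm - 1)
    return sum(np.sin(2 * np.pi * f * t) for f in freqs)

harmonic = complex_tone(200.0)
inharm = complex_tone(200.0, inharmonic=True)
```

With the inharmonic variant, listeners cannot rely on an f0 estimate and must track shifts of the individual frequency components, which is exactly the contrast the study exploits.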
Topics: Humans; Pitch Perception; Music; Pitch Discrimination; Acoustic Stimulation
PubMed: 36495710
DOI: 10.1016/j.cognition.2022.105327
Frontiers in Pain Research (Lausanne,... 2022
The experience of anxiety is central to the development of chronic pain. Music listening has been previously shown to exert analgesic effects. Here we tested whether active engagement in music making is more beneficial than music listening in terms of anxiety and pain levels during physical activity that is often avoided by patients with chronic pain. We applied a music feedback paradigm that combines music making and sports exercise, and which has previously been shown to enhance mood. We explored this method as an intervention to potentially reduce anxiety in a group of patients with chronic pain (n = 24; 20 women and 4 men; age range 34-64 years, M = 51.67, SD = 6.84) and with various anxiety levels. All participants performed two conditions: in the first, exercise equipment was modified with music feedback so that it could be played like musical instruments by groups of three; in the second, groups of three performed exercise on the same devices while listening to the same type of music passively. Participants' levels of anxiety, mood, pain, and self-efficacy were assessed with standardized psychological questionnaires before the experiment and after each condition. Results demonstrate that exercise with music feedback reduced anxiety in patients with chronic pain significantly compared to a conventional workout with passive music listening. There were no significant overall changes in pain, but patients with greater anxiety levels were observed to potentially benefit more from the music feedback intervention in terms of pain alleviation than those with moderate anxiety levels. Furthermore, patients in the music feedback condition more strongly perceived motivation through others. The observed anxiety-reducing effects have high clinical relevance, and in the longer term the therapeutic application could help to break the Anxiety Loop of Pain, reducing chronic pain. The intervention method also has immediate benefits for chronic pain rehabilitation, increasing the motivation to work out and facilitating social bonding.
PubMed: 36483944
DOI: 10.3389/fpain.2022.944181
Hearing Research Jan 2023
The ability of hearing-impaired listeners to detect spectro-temporal modulation (STM) has been shown to correlate with individual listeners' speech reception performance. However, the STM detection tests used in previous studies were overly challenging, especially for elderly listeners with moderate-to-severe hearing loss. Furthermore, the speech tests considered as a reference were not optimized to yield ecologically valid outcomes that represent real-life speech reception deficits. The present study investigated an STM detection measurement paradigm with individualized audibility compensation, focusing on its clinical viability and relevance as a real-life supra-threshold speech intelligibility predictor. STM thresholds were measured in 13 elderly hearing-impaired native Danish listeners using four previously established (noise-carrier-based) and two novel complex-tone-carrier-based STM stimulus variants. Speech reception thresholds (SRTs) were measured (i) in a realistic spatial speech-on-speech setup and (ii) using co-located stationary noise, both with individualized amplification. In contrast to previous related studies, the proposed measurement paradigm yielded robust STM thresholds for all listeners and conditions. The STM thresholds were positively correlated with the SRTs, whereby significant correlations were found for the realistic speech-test condition but not for the stationary-noise condition. Three STM stimulus variants (one noise-carrier-based and two complex-tone-based) yielded significant predictions of SRTs, accounting for up to 53% of the SRT variance. The results of the study could form the basis for a clinically viable STM test for quantifying supra-threshold speech reception deficits in aided hearing-impaired listeners.
Topics: Humans; Aged; Auditory Threshold; Speech Perception; Hearing Loss; Hearing; Speech Intelligibility
PubMed: 36463632
DOI: 10.1016/j.heares.2022.108650
Annals of the New York Academy of... Dec 2022
The successful design of musical interventions for dementia patients requires knowledge of how rhythmic abilities change with disease severity. In this study, we tested the impact of the severity of the neurocognitive disorders (NCD) on the socioemotional and motor responses to music in three groups of patients with Major NCD, Mild NCD, or No NCD. Patients were asked to tap to a metronomic or musical rhythm while facing a live musician or through a video. We recorded their emotional facial reactions and their sensorimotor synchronization (SMS) abilities. Patients with No NCD or Mild NCD expressed positive socioemotional reactions to music, but patients with Major NCD did not, indicating a decrease in the positive emotional impact of music at this stage of the disease. SMS to a metronome was less regular and less precise in patients with a Major NCD than in patients with No NCD or Mild NCD, which was not the case when tapping with music, particularly in the presence of a live musician, suggesting the relevance of live performance for patients with Major NCD. These findings suggest that the socioemotional and motor reactions to music are negatively affected by the progression of the NCD.
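Sensorimotor synchronization (SMS) regularity and precision are commonly summarized with circular statistics of the tap phases relative to the beat. A minimal sketch of one such analysis (the paper's exact measures may differ):

```python
import numpy as np

def sync_indices(tap_times, ioi):
    """Summarize tapping against an isochronous beat with inter-onset
    interval `ioi` (seconds): vector strength R (1 = perfectly regular
    phase) and mean asynchrony in ms (negative = anticipatory taps)."""
    phases = 2 * np.pi * (np.asarray(tap_times) % ioi) / ioi
    vec = np.mean(np.exp(1j * phases))
    r = np.abs(vec)
    mean_asynchrony_ms = np.angle(vec) / (2 * np.pi) * ioi * 1000
    return r, mean_asynchrony_ms

# Taps consistently 20 ms ahead of a 600 ms metronome:
taps = np.arange(10) * 0.6 - 0.02
r, asyn = sync_indices(taps, 0.6)
```

Here R captures the regularity of tapping, while the mean asynchrony captures its precision relative to the beat; both would be expected to degrade with Major NCD under the metronome condition described above.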
Topics: Humans; Music; Emotions; Dementia
PubMed: 36321882
DOI: 10.1111/nyas.14923
The Journal of Experimental Biology Oct 2022
As diving foragers, sea ducks are vulnerable to underwater anthropogenic activity, including ships, underwater construction, seismic surveys and gillnet fisheries. Bycatch in gillnets is a contributing source of mortality for sea ducks, killing hundreds of thousands of individuals annually. We researched underwater hearing in sea duck species to increase knowledge of underwater avian acoustic sensitivity and to assist with possible development of gillnet bycatch mitigation strategies that include auditory deterrent devices. We used both psychoacoustic and electrophysiological techniques to investigate underwater duck hearing in several species including the long-tailed duck (Clangula hyemalis), surf scoter (Melanitta perspicillata) and common eider (Somateria mollissima). Psychoacoustic results demonstrated that all species tested share a common range of maximum auditory sensitivity of 1.0-3.0 kHz, with the long-tailed ducks and common eiders at the high end of that range (2.96 kHz), and surf scoters at the low end (1.0 kHz). In addition, our electrophysiological results from 4 surf scoters and 2 long-tailed ducks, while only tested at 0.5, 1 and 2 kHz, generally agree with the audiogram shape from our psychoacoustic testing. The results from this study are applicable to the development of effective acoustic deterrent devices or pingers in the 2-3 kHz range to deter sea ducks from anthropogenic threats.
Topics: Humans; Animals; Ducks; Fisheries; Acoustics; Hearing
PubMed: 36305674
DOI: 10.1242/jeb.243953
Frontiers in Neuroscience 2022
The sonification of data to communicate information to a user is a relatively new approach that established itself around the 1990s. To date, many researchers have designed their individual sonifications from scratch: there are no standards in sonification design and evaluation, although researchers and practitioners have formulated several requirements and established several methods. There is wide consensus that psychoacoustics could play an important role in the sonification design and evaluation phases, but this requires (a) an adaptation of psychoacoustic methods to the signal types of sonification and (b) a preparation of the sonification for the psychoacoustic experiment procedure. In this method paper, we present a PsychoAcoustical Method for the Perceptual Analysis of multidimensional Sonification (PAMPAS), dedicated to sonification researchers. A well-defined, well-established, efficient, reliable, and replicable just-noticeable-difference (JND) experiment using the maximum likelihood procedure (MLP) serves as the basis to achieve perceptual linearity of parameter mapping during the sonification design stage, and to identify and quantify perceptual effects during the sonification evaluation stage, namely the perceptual resolution, hysteresis effects, and perceptual interferences. The experiment yields scores from a standardized data space and a standardized procedure; these scores can serve to compare multiple sonification designs of a single researcher, or even designs from different research groups. This method can supplement other sonification design and evaluation methods from a perceptual viewpoint.
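A toy version of a maximum-likelihood JND tracker in the spirit of the MLP (Green's maximum likelihood procedure): after each trial, the next stimulus is placed at the threshold of the currently most likely psychometric function. Real MLP implementations additionally track false-alarm-rate hypotheses and use sweetpoint placement; this sketch, with its fixed logistic slope and grid of candidate thresholds, is a deliberate simplification.

```python
import numpy as np

def mlp_jnd(respond, n_trials=60, lo=0.1, hi=10.0, n_cand=100, slope=2.0):
    """Toy maximum-likelihood threshold tracker: maintain the
    log-likelihood of each candidate logistic psychometric function
    and test the next trial at the most likely threshold.
    `respond(level) -> bool` queries (or simulates) the listener."""
    candidates = np.linspace(lo, hi, n_cand)
    loglik = np.zeros(n_cand)
    level = candidates[n_cand // 2]
    for _ in range(n_trials):
        yes = respond(level)
        p = 1.0 / (1.0 + np.exp(-slope * (level - candidates)))
        loglik += np.log(p if yes else 1.0 - p)
        level = candidates[np.argmax(loglik)]  # next trial at the ML threshold
    return level

# Simulated listener with a hard detection boundary at 4.0:
estimate = mlp_jnd(lambda lv: lv >= 4.0)
```

With a noisy (probabilistic) listener the same loop applies unchanged; only `respond` differs.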
PubMed: 36277997
DOI: 10.3389/fnins.2022.930944