Scientific Reports, May 2021
Misophonia is a condition in which a strong arousal response is triggered by specific human-generated sounds, like chewing, and/or repetitive tapping noises, like pen clicking. It is diagnosed with clinical interviews and questionnaires, since no psychoacoustic tools exist to assess its presence. The present study aimed to develop and test a new assessment tool for misophonia. The method was inspired by an approach we recently developed for hyperacusis. It consisted of presenting subjects (n = 253) with misophonic, pleasant, and unpleasant sounds in an online experiment. The task was to rate them on a pleasant-to-unpleasant visual analog scale. Subjects were labeled as misophonics (n = 78) or controls (n = 55) using self-report questions and a misophonia questionnaire, the MisoQuest. There was a significant difference between controls and misophonics in the median global rating of misophonic sounds, whereas the median global ratings of unpleasant and pleasant sounds did not differ significantly. We selected a subset of the misophonic sounds to form the core discriminant sounds of misophonia (CDS). A metric, the CDS score, was used to quantitatively measure misophonia, both with a global score and with subscores. The latter can specifically quantify aversion towards different sound sources/events, i.e., mouth, breathing/nose, throat, and repetitive sounds. A receiver operating characteristic analysis showed that the method accurately classified subjects with and without misophonia (accuracy = 91%). The present study suggests that the psychoacoustic test we have developed can be used to assess misophonia reliably and quickly.
Topics: Adult; Affective Symptoms; Arousal; Emotions; Female; Humans; Hyperacusis; Male; Psychoacoustics; Self Report; Surveys and Questionnaires
PubMed: 34040061
DOI: 10.1038/s41598-021-90355-8
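The CDS-score classification can be illustrated with a toy threshold sweep: a single score per subject plus a receiver-operating-characteristic-style search for the threshold that maximizes accuracy. The paper's actual scoring procedure is not reproduced here; the function name and all values below are hypothetical.

```python
# Hypothetical sketch of threshold-based classification in the spirit of the
# CDS-score ROC analysis. Scores and labels are made up for illustration.

def roc_accuracy(scores, labels):
    """Sweep every observed score as a candidate threshold and return the
    best classification accuracy with the threshold that achieves it.
    labels: 1 = misophonic, 0 = control; score >= threshold -> misophonic."""
    best_acc, best_thr = 0.0, None
    for thr in sorted(set(scores)):
        correct = sum((s >= thr) == bool(y) for s, y in zip(scores, labels))
        acc = correct / len(scores)
        if acc > best_acc:
            best_acc, best_thr = acc, thr
    return best_acc, best_thr

# Made-up CDS-like unpleasantness scores (0 = pleasant, 100 = unpleasant)
scores = [82, 75, 90, 68, 30, 25, 40, 35]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
acc, thr = roc_accuracy(scores, labels)
print(acc, thr)  # 1.0 68
```

On real data the classes overlap, so the best achievable accuracy drops below 1.0 (the paper reports 91%).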
Disability and Rehabilitation, Aug 2023
Review
Assessment and tailored physical rehabilitation approaches in persons with cerebellar impairments targeting mobility and walking according to the International Classification of Functioning: a systematic review of case-reports and case-series.
PURPOSE
Cerebellar impairment (CI) manifests from different etiologies, resulting in a heterogeneous clinical presentation that affects walking and mobility. Case-reports were reviewed to provide an analytical clinical picture of persons with CI (PwCI), to differentiate cerebellar from non-cerebellar impairments, and to identify the interventions and assessments used to quantify impact on walking and mobility according to the International Classification of Functioning, Disability and Health (ICF).
MATERIALS AND METHODS
Literature was searched in PubMed, Web of Science and Scopus. Case-reports conducting physical rehabilitation and reporting at least one outcome measure of ataxia, gait pattern, walking or mobility were included.
RESULTS
Twenty-eight articles with a total of 38 different patients were included. Etiologies were clustered into spinocerebellar degenerations, traumatic brain injuries, cerebellar tumors, stroke, and miscellaneous. The interventions applied were activity-based, including gait and balance training. Participation-based activities such as tai chi, climbing, and dance-based therapy had positive outcomes on mobility. Outcomes on body function, such as ataxia and gait pattern, were reported for only 22% of the patients.
CONCLUSIONS
A comprehensive test battery to encompass the key features of a PwCI on different levels of the ICF is needed to manage heterogeneity. Measures on body function level should be included in interventions.
PubMed: 37639546
DOI: 10.1080/09638288.2023.2248886
International Journal of Environmental..., Dec 2022
Review
The sound environment and music intersect in several ways and the same holds true for the soundscape and our internal response to listening to music. Music may be part of a sound environment or take on some aspects of environmental sound, and therefore some of the soundscape response may be experienced alongside the response to the music. At a deeper level, coping with music, spoken language, and the sound environment may all have influenced our evolution, and the cognitive-emotional structures and responses evoked by all three sources of acoustic information may be, to some extent, the same. This paper distinguishes and defines the extent of our understanding about the interplay of external sound and our internal response to it in both musical and real-world environments. It takes a naturalistic approach to music/sound and music-listening/soundscapes to describe in objective terms some mechanisms of sense-making and interactions with the sounds. It starts from a definition of sound as vibrational and transferable energy that impinges on our body and our senses, with a dynamic tension between lower-level coping mechanisms and higher-level affective and cognitive functioning. In this way, we establish both commonalities and differences between musical responses and soundscapes. Future research will allow this understanding to grow and be refined further.
Topics: Music; Sound; Acoustics; Emotions; Auditory Perception
PubMed: 36612591
DOI: 10.3390/ijerph20010269
Wiley Interdisciplinary Reviews..., Sep 2020
Review
The multifaceted ability to produce, transmit, receive, and respond to acoustic signals is widespread in animals and forms the basis of the interdisciplinary science of bioacoustics. Bioacoustics research methods, including sound recording and playback experiments, are applicable in cognitive research that centers around the processing of information from the acoustic environment. We provide an overview of bioacoustics techniques in the context of cognitive studies and make the case for the importance of bioacoustics in the study of cognition by outlining some of the major cognitive processes in which acoustic signals are involved. We also describe key considerations associated with the recording of sound and its use in cognitive applications. Based on these considerations, we provide a set of recommendations for best practices in the recording and use of acoustic signals in cognitive studies. Our aim is to demonstrate that acoustic recordings and stimuli are valuable tools for cognitive researchers when used appropriately. In doing so, we hope to stimulate opportunities for innovative cognitive research that incorporates robust recording protocols. This article is categorized under: Neuroscience > Cognition; Psychology > Theory and Methods; Neuroscience > Behavior.
Topics: Biomedical Research; Cognitive Neuroscience; Humans; Psychoacoustics
PubMed: 32548958
DOI: 10.1002/wcs.1538
Frontiers in Psychology, 2019
Review
Children who are typically developing often struggle to hear and understand speech in the presence of competing background sounds, particularly when the background sounds are also speech. For example, in many cases, young school-age children require an additional 5- to 10-dB signal-to-noise ratio relative to adults to achieve the same word or sentence recognition performance in the presence of two streams of competing speech. Moreover, adult-like performance is not observed until adolescence. Despite ample converging evidence that children are more susceptible to auditory masking than adults, the field lacks a comprehensive model that accounts for the development of masked speech recognition. This review provides a synthesis of the literature on the typical development of masked speech recognition. Age-related changes in the ability to recognize phonemes, words, or sentences in the presence of competing background sounds will be discussed by considering (1) how masking sounds influence the sensory encoding of target speech; (2) differences in the time course of development for speech-in-noise versus speech-in-speech recognition; and (3) the central auditory and cognitive processes required to separate and attend to target speech when multiple people are speaking at the same time.
PubMed: 31551862
DOI: 10.3389/fpsyg.2019.01981
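The extra 5- to 10-dB signal-to-noise ratio that children require can be made concrete: in a speech-in-noise test, the masker is scaled relative to the target so the mixture has a prescribed SNR. A minimal sketch (function name and toy sample lists are illustrative, not from the review):

```python
import math

def mix_at_snr(target, masker, snr_db):
    """Scale the masker so the target-to-masker level difference equals
    snr_db, then add the two signals sample by sample (toy lists standing
    in for audio; no real I/O or calibration)."""
    rms = lambda x: math.sqrt(sum(s * s for s in x) / len(x))
    gain = rms(target) / (rms(masker) * 10 ** (snr_db / 20))
    return [t + gain * m for t, m in zip(target, masker)]

target = [0.5, -0.5, 0.5, -0.5]   # RMS 0.5
masker = [1.0, -1.0, 1.0, -1.0]   # RMS 1.0
mix = mix_at_snr(target, masker, 0.0)  # at 0 dB SNR the masker is scaled to RMS 0.5
```

Lowering `snr_db` by 5 dB multiplies the masker gain by 10^(5/20) ≈ 1.78, which is the harder listening condition an adult can tolerate but a young child typically cannot.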
The Journal of the Acoustical Society..., Mar 2022
The aviation sector is rapidly evolving, with more electric propulsion systems and a variety of new technologies for manned and unmanned vertical take-off and landing aerial vehicles. Community noise impact is one of the main barriers to the wider adoption of these new vehicles. Within the framework of a perception-driven engineering approach, this paper investigates the relationship between sound quality and first-order physical parameters in rotor systems to aid design. Three case studies are considered: (i) contra-rotating versus single rotor systems, (ii) varying blade diameter and thrust in both contra-rotating and single rotor systems, and (iii) varying rotor-rotor axial spacing in contra-rotating systems. The outcomes of a listening experiment, where participants assessed a series of sound stimuli with varying design parameters, allow a better understanding of the annoyance induced by rotor noise. Further to this, a psychoacoustic annoyance model optimised for rotor noise has been formulated. The model includes a novel psychoacoustic function to account for the perceptual effect of impulsiveness. The significance of the proposed model lies in the quantification of the effects of psychoacoustic factors on rotor noise annoyance: loudness as the dominant factor, as well as tonality, high-frequency content, temporal fluctuations, and impulsiveness.
Topics: Acoustic Stimulation; Auditory Perception; Humans; Noise; Psychoacoustics; Sound
PubMed: 35364939
DOI: 10.1121/10.0009801
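The abstract does not give the fitted model itself; the sketch below only shows the general shape of a Zwicker-style psychoacoustic annoyance metric extended with an impulsiveness term, as described above. The function and all weights are invented for illustration, not the paper's coefficients.

```python
import math

def annoyance(loudness, sharpness, fluctuation, impulsiveness,
              w_s=0.25, w_f=0.30, w_i=0.20):
    """Toy Zwicker-style annoyance metric: loudness is the dominant factor,
    and the secondary psychoacoustic metrics act as a multiplicative
    penalty. Weights w_s, w_f, w_i are placeholders."""
    penalty = math.sqrt(1.0 + (w_s * sharpness) ** 2
                            + (w_f * fluctuation) ** 2
                            + (w_i * impulsiveness) ** 2)
    return loudness * penalty
```

With all secondary metrics at zero the metric reduces to loudness alone, consistent with the abstract's claim that loudness dominates; any nonzero sharpness, fluctuation, or impulsiveness only increases predicted annoyance.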
Scientific Reports, Sep 2021
Rhythmic joint coordination is ubiquitous in daily-life human activities. In order to coordinate their actions towards shared goals, individuals need to co-regulate their timing and move together at the collective level of behavior. Remarkably, basic forms of coordinated behavior tend to emerge spontaneously as long as two individuals are exposed to each other's rhythmic movements. The present study investigated the dynamics of spontaneous dyadic entrainment, and more specifically how they depend on the sensory modalities mediating informational coupling. By means of a novel interactive paradigm, we showed that dyadic entrainment systematically takes place during a minimalistic rhythmic task despite explicit instructions to ignore the partner. Crucially, the interaction was organized by clear dynamics in a modality-dependent fashion. Our results showed highly consistent coordination patterns in visually mediated entrainment, whereas we observed more chaotic and more variable profiles in the auditorily mediated counterpart. The proposed experimental paradigm yields empirical evidence for the overwhelming tendency of dyads to behave as coupled rhythmic units. In the context of our experimental design, it showed that coordination dynamics differ according to the availability and nature of perceptual information. Interventions aimed at rehabilitating, teaching or training sensorimotor functions can ultimately be informed and optimized by such fundamental knowledge.
PubMed: 34526522
DOI: 10.1038/s41598-021-96054-8
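Entrainment stability of this kind is commonly quantified with circular statistics on the relative phase of the two movers. The abstract does not state the authors' exact measure, so the resultant-length index below is a generic sketch, not their analysis pipeline.

```python
import math

def coordination_stability(phase_a, phase_b):
    """Mean resultant length of the relative phase between two rhythmic
    movement series (phases in radians): 1 = phase-locked entrainment,
    values near 0 = no systematic coordination."""
    rel = [a - b for a, b in zip(phase_a, phase_b)]
    c = sum(math.cos(r) for r in rel) / len(rel)
    s = sum(math.sin(r) for r in rel) / len(rel)
    return math.hypot(c, s)

locked = coordination_stability([0.0, 1.0, 2.0], [0.0, 1.0, 2.0])  # -> 1.0
```

In terms of the abstract, consistent visually mediated entrainment would show a high resultant length, while the more chaotic auditorily mediated profiles would show a lower, more variable one.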
Data in Brief, Apr 2022
This paper presents the Clarity Speech Corpus, a publicly available, forty speaker British English speech dataset. The corpus was created for the purpose of running listening tests to gauge speech intelligibility and quality in the Clarity Project, which has the goal of advancing speech signal processing by hearing aids through a series of challenges. The dataset is suitable for machine learning and other uses in speech and hearing technology, acoustics and psychoacoustics. The data comprises recordings of approximately 10,000 sentences drawn from the British National Corpus (BNC) with suitable length, words and grammatical construction for speech intelligibility testing. The collection process involved the selection of a subset of BNC sentences, the recording of these produced by 40 British English speakers, and the processing of these recordings to create individual sentence recordings with associated transcripts and metadata.
PubMed: 35242933
DOI: 10.1016/j.dib.2022.107951
Sensors (Basel, Switzerland), Dec 2020
The superdirective beamformer, while attractive for processing broadband acoustic signals, often suffers from white noise amplification. Its application therefore requires well-designed acoustic arrays with sensors of extremely low self-noise, which is difficult if not impossible to attain. In this paper, a new binaural superdirective beamformer is proposed, which is divided into two sub-beamformers. Based on studies and facts in psychoacoustics, these two filters are designed to be orthogonal to each other, making the white noise components in the binaural beamforming outputs incoherent while maximizing the output interaural coherence of the diffuse noise, which is important for the brain to localize the sound source of interest. As a result, the signal of interest in the binaural superdirective beamformer's outputs is in phase, while the white noise components have random phase, so the human auditory system can better separate the acoustic signal of interest from white noise when listening to the outputs of the proposed approach. Experimental results show that the derived binaural superdirective beamformer is superior to its conventional monaural counterpart.
PubMed: 33375543
DOI: 10.3390/s21010074
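The conventional superdirective beamformer the paper builds on can be sketched as an MVDR solution against a diffuse-noise coherence matrix, w = Γ⁻¹d / (dᴴΓ⁻¹d), with diagonal loading to limit the white-noise amplification mentioned above. The paper's binaural two-filter orthogonal design is not reproduced here; the array geometry, frequency, and loading value below are illustrative.

```python
import numpy as np

def superdirective_weights(freq, mic_pos, steering, c=343.0, loading=1e-4):
    """Textbook superdirective beamformer for a linear array:
    w = Gamma^-1 d / (d^H Gamma^-1 d). Diagonal loading trades some
    directivity for robustness against white noise amplification."""
    # Spherically isotropic (diffuse) noise coherence: sinc(2*pi*f*d/c);
    # np.sinc is the normalized sinc, sin(pi*x)/(pi*x), hence the argument.
    dist = np.abs(mic_pos[:, None] - mic_pos[None, :])
    gamma = np.sinc(2 * freq * dist / c) + loading * np.eye(len(mic_pos))
    # Plane-wave steering vector for the look direction (angle in radians)
    delays = mic_pos * np.cos(steering) / c
    d = np.exp(-2j * np.pi * freq * delays)
    g = np.linalg.solve(gamma, d)
    return g / (d.conj() @ g)

mics = np.array([0.0, 0.02, 0.04, 0.06])          # 4 mics, 2 cm spacing
w = superdirective_weights(500.0, mics, steering=0.0)
# Distortionless constraint: w^H d = 1 in the look direction
```

The proposed binaural method would derive two such filters with an additional orthogonality constraint between them; that step is specific to the paper and is not sketched here.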
Philosophical Transactions of the Royal..., Jan 2020
Review
The complex and melodic nature of many birds' songs has raised interest in potential parallels between avian vocal sequences and human speech. The similarities between birdsong and speech in production and learning are well established, but surprisingly little is known about how birds perceive song sequences. One popular laboratory songbird, the zebra finch (Taeniopygia guttata), has recently attracted attention as an avian model for human speech, in part because the male learns to produce the individual elements of its song motif in a fixed sequence. But psychoacoustic evidence shows that adult zebra finches are relatively insensitive to the sequential features of song syllables. Instead, zebra finches and other birds seem to be exquisitely sensitive to the acoustic details of individual syllables, to a degree that is beyond human hearing capacity. Based on these findings, we present a finite-state model of zebra finch perception of song syllable sequences and discuss the rich informational capacity of their vocal system. Furthermore, we highlight the abilities of budgerigars (Melopsittacus undulatus), a parrot species, to hear sequential features better than zebra finches, and suggest that neurophysiological investigations comparing these species could prove fruitful for uncovering neural mechanisms for auditory sequence perception in human speech. This article is part of the theme issue 'What can animal communication teach us about human language?'
Topics: Animals; Attention; Auditory Perception; Birds; Female; Finches; Learning; Male; Melopsittacus; Music; Songbirds; Sound; Species Specificity; Vocalization, Animal
PubMed: 31735149
DOI: 10.1098/rstb.2019.0044