Journal of Psycholinguistic Research, Aug 2023
This study sought to determine whether rap expertise is associated with enhanced knowledge of psychoacoustic similarity. Using a stimulus composed of pseudo-word assonantal half-rhyme triplets (e.g., freet/speet//yeek), expert improvisational rap lyricists were compared to laypersons (non-lyricists) in their judgments of half-rhyme acceptability. According to both a perception-based and a linguistic feature-based measure of psychoacoustic similarity, lyricists were distinct from non-lyricists in the rates at which they found half-rhymes acceptable, and in how group responses were correlated with the similarity measures. Data indicate that, compared to non-lyricists, lyricists' half-rhyme acceptance rates are more highly correlated with linguistic features that have more robust perceptual cues. Evidence suggests that lyricists and non-lyricists employ different strategies for determining the acceptability of half-rhymes, and that lyricists might be more sensitive or attuned to similar aspects of speech sounds.
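The group comparison described above hinges on correlating each group's half-rhyme acceptance rates with a psychoacoustic similarity measure. A minimal, self-contained Python sketch; every number below (similarity scores and acceptance rates) is invented for illustration and is not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented per-triplet values: a psychoacoustic similarity score for each
# half-rhyme triplet, and each group's acceptance rate for that triplet.
similarity    = [0.91, 0.74, 0.55, 0.42, 0.30]
lyricists     = [0.95, 0.80, 0.52, 0.35, 0.20]
non_lyricists = [0.62, 0.55, 0.66, 0.50, 0.58]

r_lyricists = pearson_r(similarity, lyricists)
r_non_lyricists = pearson_r(similarity, non_lyricists)
# The study's finding corresponds to the lyricists' correlation being stronger.
```

With these toy numbers, the lyricists' acceptance rates track the similarity measure far more tightly than the non-lyricists', mirroring the pattern the abstract reports.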
Topics: Humans; Psychoacoustics; Judgment; Phonetics; Cues; Speech Perception
PubMed: 36929042
DOI: 10.1007/s10936-023-09932-9
Brain: A Journal of Neurology, Jan 2018
Recognizing the identity of others by their voice is an important skill for social interactions. To date, it remains controversial which parts of the brain are critical structures for this skill. Based on neuroimaging findings, standard models of person-identity recognition suggest that the right temporal lobe is the hub for voice-identity recognition. Neuropsychological case studies, however, reported selective deficits of voice-identity recognition in patients predominantly with right inferior parietal lobe lesions. Here, our aim was to work towards resolving the discrepancy between neuroimaging studies and neuropsychological case studies to find out which brain structures are critical for voice-identity recognition in humans. We performed a voxel-based lesion-behaviour mapping study in a cohort of patients (n = 58) with unilateral focal brain lesions. The study included a comprehensive behavioural test battery on voice-identity recognition of newly learned (voice-name, voice-face association learning) and familiar voices (famous voice recognition) as well as visual (face-identity recognition) and acoustic control tests (vocal-pitch and vocal-timbre discrimination). The study also comprised clinically established tests (neuropsychological assessment, audiometry) and high-resolution structural brain images. The three key findings were: (i) a strong association between voice-identity recognition performance and right posterior/mid temporal and right inferior parietal lobe lesions; (ii) a selective association between right posterior/mid temporal lobe lesions and voice-identity recognition performance when face-identity recognition performance was factored out; and (iii) an association of right inferior parietal lobe lesions with tasks requiring the association between voices and faces but not voices and names. 
The results imply that the right posterior/mid temporal lobe is an obligatory structure for voice-identity recognition, while the inferior parietal lobe is only a facultative component of voice-identity recognition in situations where additional face-identity processing is required.
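Voxel-based lesion-behaviour mapping reduces, at its simplest, to a per-voxel comparison of behavioural scores between patients whose lesion covers the voxel and those whose lesion spares it. A toy Python sketch under that assumption (binary lesion masks, one behavioural score per patient; the minimum-lesion threshold and all data are illustrative, not the study's method or cohort):

```python
import statistics

def lesion_behaviour_map(lesion_masks, scores, min_lesioned=4):
    """Per-voxel lesion-behaviour comparison.
    lesion_masks: per-patient binary tuples (1 = voxel lesioned).
    scores: one behavioural score (e.g. voice-recognition accuracy) per patient.
    Returns, per voxel, the mean score difference (spared minus lesioned),
    or None when too few patients are lesioned there to compare."""
    n_vox = len(lesion_masks[0])
    diffs = []
    for v in range(n_vox):
        lesioned = [s for m, s in zip(lesion_masks, scores) if m[v]]
        spared = [s for m, s in zip(lesion_masks, scores) if not m[v]]
        if len(lesioned) < min_lesioned or not spared:
            diffs.append(None)
        else:
            diffs.append(statistics.mean(spared) - statistics.mean(lesioned))
    return diffs
```

A large positive difference at a voxel marks it as a candidate critical structure; a real analysis would use a proper statistic with permutation-based correction rather than raw mean differences.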
Topics: Association Learning; Audiometry; Brain; Brain Mapping; Female; Humans; Magnetic Resonance Imaging; Male; Middle Aged; Neuropsychological Tests; Psychoacoustics; Recognition, Psychology; Statistics, Nonparametric; Surveys and Questionnaires; Verbal Learning; Voice
PubMed: 29228111
DOI: 10.1093/brain/awx313
International Journal of Environmental..., Mar 2022
Novel electric air transportation is emerging as an industry that could help to improve the lives of people living in both metropolitan and rural areas through integration into infrastructure and services. However, as this new accessibility resource gains momentum, the need to investigate any potential adverse health impacts on the public becomes paramount. This paper details research investigating the effectiveness of available noise metrics and sound quality metrics (SQMs) for assessing perception of drone noise. A subjective experiment was undertaken to gather data on human response to a comprehensive set of drone sounds and to investigate the relationship between perceived annoyance, perceived loudness, and perceived pitch and key psychoacoustic factors. Based on statistical analyses, subjective models were obtained for perceived annoyance, loudness and pitch of drone noise. These models provide understanding of the key psychoacoustic features to consider in decision-making in order to mitigate the impact of drone noise. For the drone sounds tested in this paper, the main contributors to perceived annoyance are perceived noise level (PNL) and sharpness; for perceived loudness, PNL and fluctuation strength; and for perceived pitch, sharpness, roughness and Aures tonality. Responses for the drone sounds tested were found to be highly sensitive to the distance between drone and receiver, measured in terms of height above ground level (HAGL). All these findings could inform the optimisation of drone operating conditions in order to mitigate community noise.
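The reported regression result, with annoyance driven mainly by PNL and sharpness, corresponds in form to a simple linear model. A sketch of that form in Python; the coefficient values are invented placeholders, not the paper's fitted coefficients:

```python
def perceived_annoyance(pnl_db, sharpness_acum, b0=-5.0, b_pnl=0.12, b_sharp=1.8):
    """Toy linear annoyance model on the two main predictors the study
    identifies: perceived noise level (PNL, in dB) and sharpness (in acum).
    All coefficients here are hypothetical placeholders."""
    return b0 + b_pnl * pnl_db + b_sharp * sharpness_acum
```

By construction, predicted annoyance rises with either predictor; fitting real coefficients would require the listening-test responses.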
Topics: Benchmarking; Humans; Loudness Perception; Noise; Psychoacoustics; Unmanned Aerial Devices
PubMed: 35328839
DOI: 10.3390/ijerph19063152
The Journal of the Acoustical Society..., Apr 2021
Sound radiation of most natural sources, like human speakers or musical instruments, typically exhibits a spatial directivity pattern. This directivity contributes to the perception of sound sources in rooms, affecting the spatial energy distribution of early reflections and late diffuse reverberation. Thus, for convincing sound field reproduction and acoustics simulation, source directivity has to be considered. Whereas perceptual effects of directivity, such as source-orientation-dependent coloration, appear relevant for the direct sound and individual early reflections, it is unclear how spectral and spatial cues interact for later reflections. Better knowledge of the perceptual relevance of source orientation cues might help to simplify the acoustics simulation. Here, it is assessed to what extent directivity of a human speaker should be simulated for early reflections and diffuse reverberation. The computationally efficient hybrid approach to simulate and auralize binaural room impulse responses [Wendt et al., J. Audio Eng. Soc. 62, 11 (2014)] was extended to simulate source directivity. Two psychoacoustic experiments assessed the listeners' ability to distinguish between different virtual source orientations when the frequency-dependent spatial directivity pattern of the source was approximated by a direction-independent average filter for different higher reflection orders. The results indicate that it is sufficient to simulate effects of source directivity in the first-order reflections.
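The direction-independent average filter used for the higher reflection orders can be sketched as a per-frequency power average of the magnitude response over measured directions. The data layout below (a direction-to-magnitudes mapping) is an assumption for illustration, not the paper's implementation:

```python
import math

def average_directivity_filter(directivity):
    """Collapse a frequency-dependent directivity pattern into a single
    direction-independent filter: per frequency bin, power-average the
    linear magnitude response over all measured directions.
    directivity: dict mapping a direction label to magnitudes per bin."""
    responses = list(directivity.values())
    n_bins = len(responses[0])
    return [math.sqrt(sum(r[k] ** 2 for r in responses) / len(responses))
            for k in range(n_bins)]
```

Applying this single filter to all late reflections, while keeping the full pattern for the direct sound and first-order reflections, is the kind of simplification the experiments evaluate.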
Topics: Acoustics; Cues; Humans; Perception; Psychoacoustics; Sound; Sound Localization
PubMed: 33940902
DOI: 10.1121/10.0003823
American Journal of Otolaryngology, 2022
This study assessed the effectiveness of combined transcranial direct current stimulation (tDCS) and tailor-made notched music training (TMNMT) on psychoacoustic, psychometric, and cognitive indices in tinnitus patients.
PURPOSE
The tinnitus network consists of pathways in the auditory cortex, frontal cortex, and the limbic system. The cortical hyperactivity caused by tinnitus may be suppressed by neuromodulation techniques. Given the lack of a definitive treatment for tinnitus and the limited usefulness of individual methods, this study used a combination of transcranial direct current stimulation (tDCS) over the dorsolateral prefrontal cortex (DLPFC) and tailor-made notched music training (TMNMT).
MATERIAL AND METHODS
In this descriptive-analytic study, 26 patients with chronic unilateral tinnitus of the right ear were randomly divided into the clinical trial group (CTG) and the control group (CG). In both groups, six sessions of tDCS with 2 mA intensity for 20 min, with the anode on F4 and the cathode on F3, were conducted. Simultaneously with the tDCS sessions, and based on TMNMT, participants were asked to listen passively for 120 min/day to a CD containing their favorite music with a notch applied to its spectrum according to the individual's tinnitus frequency. Treatment outcome was measured by psychoacoustic (loudness matching), psychometric (awareness, loudness, and annoyance Visual Analogue Scale (VAS) scores, and the Tinnitus Handicap Inventory (THI)), and cognitive assessments (randomized dichotic digits test (RDDT) and dichotic auditory-verbal memory test (DAVMT)). A repeated-measures test was used for statistical analyses.
RESULTS
In the CTG, the tinnitus loudness and annoyance VAS scores, and THI were reduced significantly (p = 0.001). In addition, the DAVMT and RDDT scores were enhanced (p = 0.001). Such changes were not observed in the CG (p > 0.05).
CONCLUSION
The combination of tDCS and TMNMT reduced the loudness, awareness, annoyance, and disability induced by tinnitus in the CTG. Furthermore, the method improved cognitive functions (auditory divided attention, selective attention, and working memory) in the CTG.
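The TMNMT component in the methods amounts to a band-stop (notch) filter centred on the individual's tinnitus frequency. A naive, self-contained Python sketch, assuming a one-octave notch and using a plain DFT with hard band edges; a practical implementation would use an FFT library and smoothed band transitions:

```python
import cmath, math

def notch_filter(samples, fs, f_tinnitus, half_octave=0.5):
    """Zero out a one-octave band centred on f_tinnitus via a naive DFT.
    samples: real-valued signal; fs: sample rate in Hz."""
    n = len(samples)
    # Forward DFT (O(n^2), for illustration only)
    spec = [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]
    lo, hi = f_tinnitus / 2 ** half_octave, f_tinnitus * 2 ** half_octave
    for k in range(n):
        freq = min(k, n - k) * fs / n  # bin frequency (mirrors negative bins)
        if lo <= freq <= hi:
            spec[k] = 0j
    # Inverse DFT, keeping the real part
    return [sum(spec[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]
```

A pure tone at the notch centre is removed almost entirely, while content outside the band passes through unchanged.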
Topics: Adult; Auditory Cortex; Cognition; Female; Frontal Lobe; Humans; Limbic System; Male; Middle Aged; Music Therapy; Psychoacoustics; Psychometrics; Tinnitus; Transcranial Direct Current Stimulation; Treatment Outcome
PubMed: 34715486
DOI: 10.1016/j.amjoto.2021.103274
Quarterly Journal of Experimental..., Mar 2018
Most research on nonverbal emotional vocalizations is based on actor portrayals, but how similar are they to the vocalizations produced spontaneously in everyday life? Perceptual and acoustic differences have been discovered between spontaneous and volitional laughs, but little is known about other emotions. We compared 362 acted vocalizations from seven corpora with 427 authentic vocalizations using acoustic analysis, and 278 vocalizations (139 authentic and 139 acted) were also tested in a forced-choice authenticity detection task (N = 154 listeners). Target emotions were: achievement, amusement, anger, disgust, fear, pain, pleasure, and sadness. Listeners distinguished between authentic and acted vocalizations with accuracy levels above chance across all emotions (overall accuracy 65%). Accuracy was highest for vocalizations of achievement, anger, fear, and pleasure, which also displayed the largest differences in acoustic characteristics. In contrast, both perceptual and acoustic differences between authentic and acted vocalizations of amusement, disgust, and sadness were relatively small. Acoustic predictors of authenticity included higher and more variable pitch, lower harmonicity, and less regular temporal structure. The existence of perceptual and acoustic differences between authentic and acted vocalizations for all analysed emotions suggests that it may be useful to include spontaneous expressions in datasets for psychological research and affective computing.
Topics: Acoustic Stimulation; Acoustics; Auditory Perception; Emotions; Female; Humans; Language; Male; Nonverbal Communication; Online Systems; Psychoacoustics; Social Perception
PubMed: 27937389
DOI: 10.1080/17470218.2016.1270976
International Journal of Environmental..., Dec 2022
In audiovisual contexts, different conventions determine the level at which background music is mixed into the final program, and sometimes, the mix renders the music practically or totally inaudible. From a perceptual point of view, the audibility of music is subject to auditory masking by other aural stimuli such as voice or additional sounds (e.g., applause, laughter, horns), and is also influenced by the visual content that accompanies the soundtrack, and by attentional and motivational factors. This situation is relevant to the music industry because, according to some copyright regulations, non-audible background music must not generate any distribution rights, and marginally audible background music must generate half of the standard value of audible music. In this study, we conduct two psychoacoustic experiments to identify several factors that influence background music perception, and their contribution to its variable audibility. Our experiments are based on auditory detection and chronometric tasks involving keyboard interactions with original TV content. From the collected data, we estimated a sound-to-music ratio range that defines the audibility threshold limits of this marginally audible class. In addition, results show that perception is affected by loudness level, listening condition, music sensitivity, and type of television content.
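At its core, the sound-to-music ratio is a level difference between the foreground mix elements and the background music. A minimal Python sketch under that reading; `rms_db` is an illustrative helper and the signal values in the test are invented:

```python
import math

def rms_db(samples):
    """RMS level of a signal segment, in dB relative to full scale (1.0)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def sound_to_music_ratio(foreground, music):
    """Level difference in dB between the foreground elements (voice,
    effects) and the background music: the larger the ratio, the more the
    music is masked and the closer it sits to the audibility threshold."""
    return rms_db(foreground) - rms_db(music)
```

Comparing a measured ratio against the experimentally estimated threshold range would then classify the music as audible, marginally audible, or inaudible.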
Topics: Music; Acoustic Stimulation; Auditory Perception; Sound; Psychoacoustics
PubMed: 36612443
DOI: 10.3390/ijerph20010123
Hearing Research, Jul 2017
Transcutaneous, electrical stimulation with electrodes placed on the mastoid processes represents a specific way to elicit vestibular reflexes in humans without active or passive subject movements, for which the term galvanic vestibular stimulation was coined. It has been suggested that galvanic vestibular stimulation mainly affects the vestibular periphery, but whether vestibular hair cells, vestibular afferents, or a combination of both are excited, is still a matter of debate. Galvanic vestibular stimulation has been in use since the late 18th century, but despite the long-known and well-documented effects on the vestibular system, reports of the effect of electrical stimulation on the adjacent cochlea or the ascending auditory pathway are surprisingly sparse. The present study examines the effect of transcutaneous, electrical stimulation of the human auditory periphery employing evoked and spontaneous otoacoustic emissions and several psychoacoustic measures. In particular, level growth functions of distortion product otoacoustic emissions were recorded during electrical stimulation with alternating currents (2 Hz, 1-4 mA in 1 mA-steps). In addition, the level and frequency of spontaneous otoacoustic emissions were followed before, during, and after electrical stimulation (2 Hz, 1-4 mA). To explore the effect of electrical stimulation on the retrocochlear level (i.e. on the ascending auditory pathway beyond the cochlea), psychoacoustic experiments were carried out. Specifically, participants indicated whether electrical stimulation (4 Hz, 2 and 3 mA) induced amplitude modulations of the perception of a pure tone, and of auditory illusions after presentation of either an intense, low-frequency sound (Bounce tinnitus) or a faint band-stop noise (Zwicker tone). These three psychoacoustic measures revealed significant perceived amplitude modulations during electrical stimulation in the majority of participants. 
However, no significant changes of evoked and spontaneous otoacoustic emissions could be detected during electrical stimulation relative to recordings without electrical stimulation. The present findings show that cochlear function, as assessed with spontaneous and evoked otoacoustic emissions, is not affected by transcutaneous electrical stimulation, at the currents used in this study. Psychoacoustic measures like pure tone perception, but also auditory illusions, are affected by electrical stimulation. This indicates that activity of the retrocochlear ascending auditory pathway is modulated during transcutaneous electrical stimulation.
Topics: Acoustic Stimulation; Adolescent; Adult; Audiometry, Pure-Tone; Auditory Pathways; Auditory Perception; Auditory Threshold; Cochlea; Female; Hair Cells, Auditory, Outer; Humans; Male; Otoacoustic Emissions, Spontaneous; Psychoacoustics; Transcutaneous Electric Nerve Stimulation; Vestibule, Labyrinth; Young Adult
PubMed: 28323018
DOI: 10.1016/j.heares.2017.03.008
Neuropsychologia, May 2016
Although visual deficits due to unilateral spatial neglect (USN) have been frequently described in the literature, fewer studies have examined directional hearing impairment in USN. The aim of this study was to explore sound lateralisation deficits in USN. Using a paradigm inspired by Tanaka et al. (1999), interaural time differences (ITD) were presented over headphones to give the illusion of a leftward or a rightward movement of sound. Participants were asked to respond "right" or "left" as soon as possible to indicate whether they heard the sound moving to the right or to the left side of the auditory space. We additionally adopted a single-case method to analyse the performance of 15 patients with right-hemisphere (RH) stroke and added two additional measures to assess sound lateralisation on the left side and on the right side. We included 15 patients with RH stroke (5 with a severe USN, 5 with a mild USN and 5 without USN) and 11 healthy age-matched participants. We expected to replicate findings of abnormal sound lateralisation in USN. However, although a sound lateralisation deficit was observed in USN, two different deficit profiles were identified. Namely, patients with a severe USN seemed to have a left sound lateralisation impairment, whereas patients with a mild USN seemed to be more influenced by a systematic bias in auditory representation with respect to the body meridian axis (egocentric deviation). This latter profile was unexpected, as sounds were manipulated with ITD and, thus, would not be perceived as coming from a source external to the head. Future studies should use this paradigm in order to better understand these two distinct profiles.
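The ITD manipulation used over headphones can be sketched by delaying one channel of a pure tone relative to the other; the parameter values below are illustrative, not the study's stimuli:

```python
import math

def itd_stereo_tone(freq_hz, itd_s, fs=44100, dur_s=0.1):
    """Stereo tone whose right channel lags the left by itd_s seconds.
    Over headphones, the leading (left) ear pulls the perceived location
    leftward; sweeping itd_s over time creates the moving-sound illusion."""
    n = int(fs * dur_s)
    left = [math.sin(2 * math.pi * freq_hz * t / fs) for t in range(n)]
    right = [math.sin(2 * math.pi * freq_hz * (t / fs - itd_s))
             for t in range(n)]
    return left, right
```

Because the cue exists only as a timing offset between the earphones, the sound is lateralised inside the head rather than localised to an external source, which is why the egocentric-deviation profile in the mild-USN group was unexpected.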
Topics: Acoustic Stimulation; Adult; Aged; Female; Functional Laterality; Humans; Magnetic Resonance Imaging; Male; Middle Aged; Perceptual Disorders; Psychoacoustics; Sound Localization; Space Perception
PubMed: 27018451
DOI: 10.1016/j.neuropsychologia.2016.03.024
The Journal of the Acoustical Society..., Dec 2015
Roughness is a sound quality that has been related to the amplitude modulation characteristics of the acoustic stimulus. Roughness is also considered one of the primary elements of voice quality associated with natural variations across normal voices and is a salient feature of many dysphonic voices. It is known that the roughness of tonal stimuli depends on the frequency and depth of amplitude modulation and on the carrier frequency. Here, it is determined whether similar dependencies exist for voiced speech stimuli. Knowledge of such dependencies can lead to a better understanding of the acoustic characteristics of vocal roughness along the continuum from normal to dysphonic and may facilitate computational estimates of vocal roughness. Synthetic vowel stimuli were modeled after talkers selected from the Satloff/Heman-Ackah disordered voice database. To parametrically control amplitude modulation frequency and depth, synthesized stimuli had minimal amplitude fluctuations, and amplitude modulation was superimposed with the desired frequency and depth. Perceptual roughness judgments depended on amplitude modulation frequency and depth in a manner that closely matched the roughness of sinusoidal carriers reported by Fastl and Zwicker [(2007) Psychoacoustics: Facts and Models, 3rd ed. (Springer, New York)].
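The stimulus construction described above, superimposing amplitude modulation with controlled frequency and depth, can be sketched for a tonal carrier; the study's synthetic-vowel case would substitute a vowel waveform for the sine carrier:

```python
import math

def am_tone(fc_hz, fm_hz, depth, fs=44100, dur_s=0.5):
    """Sinusoidal carrier at fc_hz, amplitude-modulated at fm_hz with
    modulation depth in [0, 1]. Per Fastl and Zwicker, roughness of such
    tones peaks for modulation frequencies around 70 Hz and grows with
    modulation depth."""
    n = int(fs * dur_s)
    return [(1 + depth * math.sin(2 * math.pi * fm_hz * t / fs))
            * math.sin(2 * math.pi * fc_hz * t / fs) for t in range(n)]
```

Sweeping `fm_hz` and `depth` over a listening-test grid is the kind of parametric control the abstract describes.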
Topics: Acoustic Stimulation; Acoustics; Adolescent; Adult; Audiometry, Pure-Tone; Audiometry, Speech; Auditory Threshold; Dysphonia; Female; Humans; Male; Psychoacoustics; Speech Acoustics; Speech Perception; Speech Production Measurement; Voice Quality; Young Adult
PubMed: 26723336
DOI: 10.1121/1.4937753