JMIR Research Protocols, Jun 2024 (Review)
BACKGROUND
Sound therapy methods have seen a surge in popularity, with a predominant focus on music among all types of sound stimulation. There is substantial evidence documenting the integrative impact of music therapy on psycho-emotional and physiological outcomes, rendering it beneficial for addressing stress-related conditions such as pain syndromes, depression, and anxiety. Despite these advancements, the therapeutic aspects of sound, as well as the mechanisms underlying its efficacy, remain incompletely understood. Existing research on music as a holistic cultural phenomenon often overlooks crucial aspects of sound therapy mechanisms, particularly those related to speech acoustics or the so-called "music of speech."
OBJECTIVE
This study aims to provide an overview of empirical research on sound interventions to elucidate the mechanism underlying their positive effects. Specifically, we will focus on identifying therapeutic factors and mechanisms of change associated with sound interventions. Our analysis will compare the most prevalent types of sound interventions reported in clinical studies and experiments. Moreover, we will explore the therapeutic effects of sound beyond music, encompassing natural human speech and intermediate forms such as traditional poetry performances.
METHODS
This review adheres to the methodological guidance of the Joanna Briggs Institute and follows the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) checklist for reporting review studies, which is adapted from the Arksey and O'Malley framework. Our search strategy encompasses PubMed, Web of Science, Scopus, and PsycINFO (via EBSCOhost), covering literature from 1990 to the present. Among the different study types, randomized controlled trials, clinical trials, laboratory experiments, and field experiments were included.
RESULTS
Data collection began in October 2022. We found a total of 2027 items. Our initial search uncovered an asymmetry in the distribution of studies, with a larger number focused on music therapy compared with those exploring prosody in spoken interventions such as guided meditation or hypnosis. We extracted and selected papers using Rayyan software and identified 41 eligible papers after title and abstract screening. The completion of the scoping review is anticipated by October 2024, with key steps comprising the analysis of findings by May 2024, drafting and revising the study by July 2024, and submitting the paper for publication in October 2024.
CONCLUSIONS
In the next step, we will conduct a quality evaluation of the papers and then chart and group the therapeutic factors extracted from them. This process aims to unveil conceptual gaps in existing studies. Gray literature sources, such as Google Scholar, ClinicalTrials.gov, nonindexed conferences, and reference list searches of retrieved studies, will be added to our search strategy to increase the number of relevant papers that we cover.
INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID)
DERR1-10.2196/54030.
Topics: Humans; Stress, Psychological; Music Therapy; Adult
PubMed: 38935945
DOI: 10.2196/54030
Developmental Science, Jun 2024
Infants begin to segment word forms from fluent speech, a crucial task in lexical processing, between 4 and 7 months of age. Prior work has established that infants rely on a variety of cues available in the speech signal (i.e., prosodic, statistical, acoustic-segmental, and lexical) to accomplish this task. In two experiments with French-learning 6- and 10-month-olds, we use a psychoacoustic approach to examine if and how degradation of the two fundamental acoustic components extracted from speech by the auditory system, namely, temporal (both frequency and amplitude modulation) and spectral information, impacts word form segmentation. Infants were familiarized with passages containing target words, in which frequency modulation (FM) information was replaced with pure tones using a vocoder, while amplitude modulation (AM) was preserved in either 8 or 16 spectral bands. Infants were then tested on their recognition of the target versus novel control words. While the 6-month-olds were unable to segment in either condition, the 10-month-olds succeeded, although only in the 16 spectral band condition. These findings suggest that 6-month-olds need FM temporal cues for speech segmentation while 10-month-olds do not, although they need the AM cues to be presented in enough spectral bands (i.e., 16). This developmental change in infants' sensitivity to spectrotemporal cues likely results from an increase in the range of available segmentation procedures, and/or a shift from a vowel to a consonant bias in lexical processing between the two ages, as vowels are more affected by our acoustic manipulations.
RESEARCH HIGHLIGHTS
Although segmenting speech into word forms is crucial for lexical acquisition, the acoustic information that infants' auditory system extracts to process continuous speech remains unknown. We examined infants' sensitivity to spectrotemporal cues in speech segmentation using vocoded speech and revealed a developmental change between 6 and 10 months of age. We showed that FM information, that is, the fast temporal modulations of speech, is necessary for 6- but not 10-month-old infants to segment word forms. Moreover, reducing the number of spectral bands impacts 10-month-olds' segmentation: they succeed when 16 bands are preserved but fail with 8 bands.
PubMed: 38853379
DOI: 10.1111/desc.13533
Language, Speech, and Hearing Services..., May 2024
PURPOSE
This study aimed to conduct a scoping review of research exploring the effects of slight hearing loss on auditory and speech perception in children.
METHOD
A comprehensive search conducted in August 2023 identified a total of 402 potential articles sourced from eight prominent bibliographic databases. These articles were rigorously evaluated against the inclusion criteria, specifically their reporting of speech or auditory perception using psychoacoustic tasks. The selected studies exclusively examined school-age children, encompassing those between 5 and 18 years of age. Following this evaluation, 10 articles meeting the criteria were included in the review.
RESULTS
The analysis of included articles consistently shows that even slight hearing loss in school-age children significantly affects their speech and auditory perception. Notably, most of the included articles highlighted a common trend, demonstrating that perceptual deficits originating from slight hearing loss in children are particularly observable under challenging experimental conditions and/or in cognitively demanding listening tasks. Recent evidence further underscores that the negative impacts of slight hearing loss in school-age children cannot be predicted by their pure-tone thresholds alone. However, there is limited evidence concerning the effect of slight hearing loss on the segregation of competing speech, which may be a better representation of listening in the classroom.
CONCLUSION
This scoping review discusses the perceptual consequences of slight hearing loss in school-age children and provides insights into an array of methodological issues associated with studying perceptual skills in school-age children with slight hearing losses, offering guidance for future research endeavors.
PubMed: 38787321
DOI: 10.1044/2024_LSHSS-23-00165
Hearing Research, Jul 2024
Combining cochlear implants with binaural acoustic hearing via preserved hearing in the implanted ear(s) is commonly referred to as combined electric and acoustic stimulation (EAS). EAS fittings can provide patients with significant benefit for speech recognition in complex noise, perceived listening difficulty, and horizontal-plane localization as compared to traditional bimodal hearing conditions with contralateral and monaural acoustic hearing. However, EAS benefit varies across patients, and the degree of benefit is not reliably related to the underlying audiogram. Previous research has indicated that EAS benefit for speech recognition in complex listening scenarios and localization is significantly correlated with patients' binaural cue sensitivity, namely to interaural time differences (ITD). In the context of pure tones, interaural phase differences (IPD) and ITD can be understood as two perspectives on the same phenomenon: through simple mathematical conversion, one can be transformed into the other, illustrating their inherent interrelation for spatial hearing abilities. However, assessing binaural cue sensitivity is not part of the clinical assessment battery: psychophysical tasks are time-consuming, require training to reach asymptotic performance, and demand specialized programming and software, all of which render them clinically unfeasible. In this study, we investigated the possibility of objectively measuring binaural cue sensitivity with the acoustic change complex (ACC), elicited by imposing an IPD of varying degrees at the stimulus midpoint. Ten adult listeners with normal hearing were assessed on tasks of behavioral and objective binaural cue sensitivity for carrier frequencies of 250 and 1000 Hz. Results suggest that 1) ACC amplitude increases with IPD; 2) ACC-based IPD sensitivity at 250 Hz is significantly correlated with behavioral ITD sensitivity; and 3) participants were more sensitive to IPDs at 250 Hz than at 1000 Hz.
Thus, this objective measure of IPD sensitivity may hold clinical application for pre- and postoperative assessment of individuals meeting candidacy indications for cochlear implantation with low-frequency acoustic hearing preservation, as this relatively quick, objective measure may help clinicians identify the patients most likely to derive benefit from EAS technology.
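The IPD/ITD equivalence for pure tones that this study leans on can be made concrete with a simple conversion; a minimal sketch (function names are ours, not the authors'):

```python
def ipd_to_itd(ipd_deg, freq_hz):
    """Interaural phase difference (degrees) -> equivalent interaural
    time difference (seconds) for a pure tone of `freq_hz`."""
    return (ipd_deg / 360.0) / freq_hz

def itd_to_ipd(itd_s, freq_hz):
    """Inverse conversion: ITD (seconds) -> IPD (degrees)."""
    return itd_s * freq_hz * 360.0

# The same 90-degree IPD maps to a four-times-larger ITD at 250 Hz
# than at 1000 Hz, one reason low-frequency carriers are informative.
itd_low = ipd_to_itd(90.0, 250.0)    # 0.001 s (1000 microseconds)
itd_high = ipd_to_itd(90.0, 1000.0)  # 0.00025 s (250 microseconds)
```

This also illustrates why the study's two carrier frequencies are not interchangeable: a fixed phase shift carries a frequency-dependent time cue.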
Topics: Humans; Acoustic Stimulation; Cochlear Implants; Cues; Sound Localization; Female; Speech Perception; Male; Cochlear Implantation; Adult; Middle Aged; Auditory Threshold; Electric Stimulation; Audiometry, Pure-Tone; Persons With Hearing Impairments; Time Factors; Aged; Noise; Perceptual Masking; Young Adult; Hearing; Psychoacoustics
PubMed: 38763034
DOI: 10.1016/j.heares.2024.109020
Journal of Voice: Official Journal of..., May 2024
Timbre is a central quality of singing, yet it remains a complex notion poorly understood in psychoacoustic studies. Previous studies note that no single acoustic variable or combination of variables consistently predicts timbre dimensions. Timbre varies on a continuum from darkest to lightest. These extremes are associated with laryngeal and vocal tract adjustments related to smaller and larger vocal tract area and variations in vocal fold vibratory characteristics. Perceptually, timbre assessment is influenced by spectral characteristics and formant frequency adjustments, though these dimensions are not independently perceived. Perceptual studies repeatedly demonstrate difficulties in correlating variations in timbre stimuli to specific measures. A recent study demonstrated how the acoustic predictive salience of voice category and voice weight across pitches contributes to timbre assessments and concluded that timbre may be related to as-yet-unknown factor(s). The purpose of this study was to test four different models for assessing timbre: one model focused on specific anatomy, one on listener intuition, one utilizing auditory anchors, and one using expert raters in a deconstructed timbre model with five specific dimensions.
METHODS
Four independent panels were conducted with separate cohorts of professional singing teachers. Forty-one assessors took part in the anatomically focused panel, 54 in the intuition-based panel, 30 in the anchored panel, and 12 in the expert listener panel. Stimuli taken from live performances of well-known singers were used for all panels, representing all genders, genres, and styles across a large pitch range. All stimuli are available as Supplementary Materials. Fleiss' kappa values, descriptive statistics, and significance tests are reported for all panel assessments.
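The panel agreement reported below is summarized with Fleiss' kappa; its computation can be sketched as follows, assuming a complete items-by-categories count matrix with the same number of raters per item (the example data are hypothetical, not the study's):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a matrix of shape (items x categories), where
    ratings[i][j] counts the raters who assigned item i to category j.
    Assumes every item was rated by the same number of raters."""
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    total = n_items * n_raters

    # Proportion of all assignments falling in each category.
    p_j = [sum(row[j] for row in ratings) / total
           for j in range(len(ratings[0]))]

    # Per-item agreement: agreeing rater pairs / possible rater pairs.
    p_i = [(sum(c * c for c in row) - n_raters) /
           (n_raters * (n_raters - 1)) for row in ratings]

    p_bar = sum(p_i) / n_items       # mean observed agreement
    p_e = sum(p * p for p in p_j)    # agreement expected by chance
    return (p_bar - p_e) / (1 - p_e)

# Three hypothetical raters, two items, two timbre categories:
# perfect agreement yields kappa = 1.0.
kappa = fleiss_kappa([[3, 0], [0, 3]])  # -> 1.0
```

The denominator (1 - p_e) rescales observed agreement so that chance-level agreement maps to zero, which is why kappa values near 0.3 (as in the intuition panel) indicate only fair agreement despite nontrivial raw accuracy.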
RESULTS
Panels 1 through 4 varied in overall accuracy and agreement. The intuition-based model showed 45% average accuracy (SD ± 4%), k = 0.289 (p < .001), compared with 71% average accuracy (SD ± 3%), k = 0.368 (p < .001), for the anatomically focused panel. The auditory-anchored model showed 75% average accuracy (SD ± 8%), k = 0.54 (p < .001), compared with 83% average accuracy and agreement of k = 0.63 (p < .001) for the expert listener panel. Results revealed that the highest accuracy and reliability were achieved with the deconstructed timbre model and that providing anchoring improved reliability but with no further increase in accuracy.
CONCLUSION
Deconstructing timbre into specific parameters improved auditory perceptual accuracy and overall agreement. Assessing timbre along with other perceptual dimensions improves accuracy and reliability. Panel assessors' expert level of listening skill remains an important factor in obtaining reliable and accurate assessments of auditory stimuli for timbre dimensions. Anchoring improved reliability but with no further increase in accuracy. The study suggests that timbre assessment can be improved by approaching the percept through a prism of five specific dimensions, each related to specific physiology and auditory-perceptual subcategories. Further tests with framework-naïve listeners, nonmusically educated listeners, artificial intelligence comparisons, and synthetic stimuli are needed to probe reliability further.
PubMed: 38755075
DOI: 10.1016/j.jvoice.2024.03.039
Journal of Comparative Physiology. A,... May 2024
Auditory streaming underlies a receiver's ability to organize complex mixtures of auditory input into distinct perceptual "streams" that represent different sound sources in the environment. During auditory streaming, sounds produced by the same source are integrated through time into a single, coherent auditory stream that is perceptually segregated from other concurrent sounds. Based on human psychoacoustic studies, one hypothesis regarding auditory streaming is that any sufficiently salient perceptual difference may lead to stream segregation. Here, we used the eastern grey treefrog, Hyla versicolor, to test this hypothesis in the context of vocal communication in a non-human animal. In this system, females choose their mate based on perceiving species-specific features of a male's pulsatile advertisement calls in social environments (choruses) characterized by mixtures of overlapping vocalizations. We employed an experimental paradigm from human psychoacoustics to design interleaved pulsatile sequences (ABAB…) that mimicked key features of the species' advertisement call, and in which alternating pulses differed in pulse rise time, which is a robust species recognition cue in eastern grey treefrogs. Using phonotaxis assays, we found no evidence that perceptually salient differences in pulse rise time promoted the segregation of interleaved pulse sequences into distinct auditory streams. These results do not support the hypothesis that any perceptually salient acoustic difference can be exploited as a cue for stream segregation in all species. We discuss these findings in the context of cues used for species recognition and auditory streaming.
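The interleaved ABAB paradigm described above can be illustrated with a simple stimulus generator; a sketch under assumed values (pulse duration, carrier frequency, gap, and sample rate are illustrative placeholders, not the study's parameters):

```python
import math

def pulse(rise_ms, dur_ms=10.0, freq=2500.0, sr=44100):
    """One tone pulse with a linear onset ramp of `rise_ms` milliseconds.
    Duration, carrier frequency, and sample rate are illustrative."""
    n = int(sr * dur_ms / 1000)
    n_rise = max(1, int(sr * rise_ms / 1000))
    samples = []
    for i in range(n):
        env = min(1.0, i / n_rise)  # linear rise, then flat
        samples.append(env * math.sin(2 * math.pi * freq * i / sr))
    return samples

def interleaved_abab(rise_a_ms, rise_b_ms, n_pairs=4, gap_ms=10.0, sr=44100):
    """ABAB... sequence whose alternating pulses differ only in rise time,
    mimicking the species-recognition cue manipulated in the experiment."""
    gap = [0.0] * int(sr * gap_ms / 1000)
    seq = []
    for _ in range(n_pairs):
        seq += pulse(rise_a_ms, sr=sr) + gap + pulse(rise_b_ms, sr=sr) + gap
    return seq

# Alternating 1 ms and 8 ms rise times, four AB pairs.
seq = interleaved_abab(1.0, 8.0)
```

If the rise-time difference supported streaming, a listener would perceive two slower sequences (A...A and B...B) rather than one fast pulse train; the phonotaxis results suggest treefrogs do not make that split.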
PubMed: 38733407
DOI: 10.1007/s00359-024-01702-9
The Journal of the Acoustical Society..., May 2024 (Comparative Study)
Speakers can place their prosodic prominence on any locations within a sentence, generating focus prosody for listeners to perceive new information. This study aimed to investigate age-related changes in the bottom-up processing of focus perception in Jianghuai Mandarin by clarifying the perceptual cues and the auditory processing abilities involved in the identification of focus locations. Young, middle-aged, and older speakers of Jianghuai Mandarin completed a focus identification task and an auditory perception task. The results showed that increasing age led to a decrease in listeners' accuracy rate in identifying focus locations, with all participants performing the worst when dynamic pitch cues were inaccessible. Auditory processing abilities did not predict focus perception performance in young and middle-aged listeners but accounted significantly for the variance in older adults' performance. These findings suggest that age-related deteriorations in focus perception can be largely attributed to declined auditory processing of perceptual cues. Poor ability to extract frequency modulation cues may be the most important underlying psychoacoustic factor for older adults' difficulties in perceiving focus prosody in Jianghuai Mandarin. The results contribute to our understanding of the bottom-up mechanisms involved in linguistic prosody processing in aging adults, particularly in tonal languages.
Topics: Humans; Middle Aged; Aged; Male; Female; Aging; Young Adult; Adult; Speech Perception; Cues; Age Factors; Speech Acoustics; Acoustic Stimulation; Pitch Perception; Language; Voice Quality; Psychoacoustics; Audiometry, Speech
PubMed: 38717206
DOI: 10.1121/10.0025928
Brain and Behavior, May 2024 (Randomized Controlled Trial)
OBJECTIVE
In previous animal studies, sound enhancement reduced tinnitus perception in cases associated with hearing loss. The aim of this study was to investigate the efficacy of sound enrichment therapy in tinnitus treatment by developing a protocol that includes criteria for psychoacoustic characteristics of tinnitus to determine whether the etiology is related to hearing loss.
METHODS
A total of 96 patients with chronic tinnitus were included in the study. The 52 patients in the study group and 44 patients in the placebo group were assessed for residual inhibition (RI) outcomes and tinnitus pitch. Both groups received sound enrichment treatment with different spectrum contents. Tinnitus handicap inventory (THI), visual analog scale (VAS), minimum masking level (MML), and tinnitus loudness level (TLL) results were compared before treatment and at 1, 3, and 6 months after treatment.
RESULTS
There was a statistically significant difference between the groups in THI, VAS, MML, and TLL scores at every month after treatment, beginning with the first (p < .01). In the study group, there was a statistically significant decrease in THI, VAS, MML, and TLL scores in the first month (p < .01). This decrease remained statistically significant in the third month after treatment for THI (p < .05) and at all months for VAS-1 (tinnitus severity) (p < .05) and VAS-2 (tinnitus discomfort) (p < .05).
CONCLUSION
In clinical practice, after excluding other factors related to the tinnitus etiology, sound enrichment treatment can be effective in tinnitus cases where RI is positive and the tinnitus pitch is matched with a hearing loss between 45 and 55 dB HL in a relatively short period of 1 month.
Topics: Tinnitus; Humans; Male; Female; Middle Aged; Adult; Hearing Loss; Treatment Outcome; Aged; Acoustic Stimulation; Sound; Psychoacoustics
PubMed: 38715412
DOI: 10.1002/brb3.3520
Behavior Research Methods, May 2024
PSYCHOACOUSTICS-WEB is an online tool written in JavaScript and PHP that enables the estimation of auditory sensory thresholds via adaptive threshold tracking. The toolbox implements the transformed up-down methods proposed by Levitt (Journal of the Acoustical Society of America, 49, 467-477, 1971) for a set of classic psychoacoustical tasks: frequency, intensity, and duration discrimination of pure tones; duration discrimination and gap detection of noise; and amplitude modulation detection with noise carriers. The toolbox can be used through a common web browser; it works with both fixed and mobile devices and requires no programming skills. PSYCHOACOUSTICS-WEB is suitable for laboratory, classroom, and online testing and is designed for two main types of users: an occasional user and, above all, an experimenter using the toolbox for their own research. The latter can create a personal account, customise existing experiments, and share them in the form of direct links with further users (e.g., the participants of a hypothetical experiment). Finally, because data storage is centralised, the toolbox offers the potential for creating a database of auditory skills.
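The transformed up-down logic the toolbox implements can be sketched as a generic 2-down/1-up track, which converges on the level yielding roughly 70.7% correct; the function name, parameters, and simulated listener below are our illustration, not the toolbox's API:

```python
def two_down_one_up(respond, start, step, n_reversals=8):
    """Track a stimulus level with Levitt's 2-down/1-up rule.

    `respond(level)` returns True for a correct response.  The track
    lowers the level after two consecutive correct responses and raises
    it after each error, then returns the mean level at the last
    `n_reversals` reversal points as the threshold estimate."""
    level, streak, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            streak += 1
            if streak == 2:              # two correct in a row: harder
                streak = 0
                if direction == +1:      # direction flipped: a reversal
                    reversals.append(level)
                direction = -1
                level -= step
        else:                            # one incorrect: easier
            streak = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    return sum(reversals) / len(reversals)

# Simulated deterministic listener: correct at levels >= 10.
estimate = two_down_one_up(lambda lvl: lvl >= 10, start=20, step=2)  # -> 9.0
```

With a deterministic listener the track oscillates around the true boundary, so the reversal mean lands between the last levels above and below it; with a real (stochastic) listener the same rule converges on the 70.7%-correct point.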
PubMed: 38709452
DOI: 10.3758/s13428-024-02430-3
Journal of Communication Disorders, 2024
INTRODUCTION
Central auditory processing disorders (CAPD) can significantly affect the daily functioning of a child, and the first step in determining whether rehabilitation procedures are required is a proper diagnosis. Different diagnostic guidelines have been published in the literature, and various centers have used internally developed normative values for psychoacoustic tests of CAPD. The material presented in this paper is based on more than 1000 children and is the largest collection published so far. The aim of this study is to present normative values for tests assessing CAPD in children aged 6 to 12 years, divided by age at last birthday.
METHOD
We tested 1037 children aged 6 to 12 years who were attending primary schools and kindergartens. The criteria for inclusion were a normal audiogram, normal intellectual development, no developmental problems, and no difficulties in auditory processing. To evaluate auditory processing, all children were given three tests on the Senses Examination Platform: the Frequency Pattern Test (FPT), Duration Pattern Test (DPT), and Dichotic Digit Test (DDT).
RESULTS
The results from 1037 children allowed us to determine normative values for the FPT, DPT, and DDT in seven age groups (6 through 12 years). We developed a new approach, based on quantile-based norms, to determine normative values in each group. Three categories - average, below-average, and above-average - allow for a broader but more realistic interpretation than those used previously. We compare our results with published standards.
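The quantile-based norming described above can be sketched as follows; the 10th/90th-percentile cutoffs, the three-way labels, and the example scores are illustrative assumptions, not the paper's exact cut points:

```python
from statistics import quantiles

def quantile_norms(scores, lower_pct=10, upper_pct=90):
    """Derive quantile-based cutoffs from one age group's scores and
    return them with a classifier into the three categories.
    The 10th/90th-percentile defaults are illustrative."""
    cuts = quantiles(scores, n=100)  # 99 percentile cut points
    lo, hi = cuts[lower_pct - 1], cuts[upper_pct - 1]

    def classify(score):
        if score < lo:
            return "below-average"
        if score > hi:
            return "above-average"
        return "average"

    return lo, hi, classify

# Hypothetical scores for one age group (e.g., FPT percent correct).
lo, hi, classify = quantile_norms(list(range(1, 101)))
```

Because the cutoffs come from each age group's own score distribution rather than a fixed mean-and-SD criterion, skewed test-score distributions do not distort the category boundaries.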
CONCLUSIONS
Our study is the largest normative database published to date for CAPD testing, setting a standard for each child by age in years. We used the Senses Examination Platform, a universal tool, to unify standards for the classification of CAPD. Our study can serve as a basis for the development of a Polish model for the diagnosis of CAPD.
Topics: Humans; Child; Female; Reference Values; Male; Auditory Perceptual Disorders
PubMed: 38692192
DOI: 10.1016/j.jcomdis.2024.106426