CoDAS 2024
PURPOSE
To map the vocal risk in professional classical singers, analyzing their self-assessment of voice and self-perception of singing voice handicap and vocal fatigue.
METHODS
The study sample comprised 52 professional classical choir singers, aged 31 to 72 years. They answered an online questionnaire in Google Forms addressing sample characterization, self-assessment of voice, the Voice Handicap Index-10 (VHI-10), the Classical Singing Handicap Index (CSHI), and the Vocal Fatigue Index (VFI).
RESULTS
The mean self-assessment of voice fell between "Good" and "Very good" (1.2). The mean total VHI-10 score was 1.35, below the cutoff. The mean total CSHI score was 10.04, and the mean total VFI score was 10.83, near the cutoff value. Classical singers who use their own voice to give examples to students in class had higher scores on the VHI-10 (p = 0.013), the VFI voice restriction factor (p = 0.011), and the VFI total (p = 0.015). In addition, classical singers who had previously visited a Speech-Language Pathologist for voice problems had higher scores on VFI voice restriction (p = 0.040) and VFI recovery with voice rest (p = 0.019), and the instruments' scores correlated with one another.
CONCLUSION
Professional classical singers did not have voice handicaps. However, self-perceived vocal fatigue was greater when the singing voice was used for teaching, such as when giving examples with their own voice in class. Having had voice problems and visited a Speech-Language Pathologist in the past was associated with a greater perception of vocal recovery with rest.
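The between-group comparisons reported in the results can be sketched as follows. The abstract does not name the statistical test used, so the Mann-Whitney U test and the synthetic scores below are illustrative assumptions, not the study's data or method.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Synthetic VHI-10 totals (0-40 scale) for two hypothetical groups:
# singers who demonstrate with their own voice in class vs. those who
# do not. Values are illustrative only.
demonstrators = rng.poisson(lam=3.0, size=30)
non_demonstrators = rng.poisson(lam=1.0, size=22)

# Two-sided Mann-Whitney U test, a common choice for skewed
# questionnaire scores (the abstract does not name the test used).
stat, p = mannwhitneyu(demonstrators, non_demonstrators, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")
```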
Topics: Humans; Voice Quality; Singing; Middle Aged; Adult; Voice Disorders; Male; Self Concept; Female; Aged; Surveys and Questionnaires; Occupational Diseases; Self-Assessment; Disability Evaluation
PubMed: 38896630
DOI: 10.1590/2317-1782/20242023088pt
Scientific Reports Jun 2024
Musical activities (MA) such as singing, playing instruments, and listening to music may be associated with health benefits. However, evidence from epidemiological studies is still limited. This study aims to describe the relation between MA and both sociodemographic and health-related factors in a cross-sectional approach. A total of 6717 adults (50.3% women, 49.7% men; median age: 51 years, IQR 43-60) were recruited from the Berlin-Mitte study center of the German National Cohort (NAKO), a population-based prospective study, drawing on a sample randomly selected from the population registry of Berlin, Germany, aged 20 to 69 years. Overall, 53% of the participants had been musically active at least once in their life (56.1% women, 43.9% men). Playing keyboard instruments (30%) and singing (21%) were the most frequent MA. Participants listened to music for a median of 90 min per day (IQR 30.0-150.0). Compared with musically inactive individuals, musically active individuals were more likely to have a higher education and higher alcohol consumption, were less likely to be physically active, and had a lower BMI. This large population-based study offers a comprehensive description of demographic, health, and lifestyle characteristics associated with MA. Our findings may aid in assessing the long-term health consequences of MA.
Topics: Humans; Middle Aged; Female; Male; Music; Adult; Germany; Aged; Prospective Studies; Cross-Sectional Studies; Singing; Young Adult; Cohort Studies; Life Style
PubMed: 38890477
DOI: 10.1038/s41598-024-64773-3
JASA Express Letters Jun 2024
Singing is socially important but constrains voice acoustics, potentially masking certain aspects of vocal identity. Little is known about how well listeners extract talker details from sung speech or identify talkers across the sung and spoken modalities. Here, listeners (n = 149) were trained to recognize sung or spoken voices and then tested on their identification of these voices in both modalities. Learning vocal identities was initially easier through speech than song. At test, cross-modality voice recognition was above chance, but weaker than within-modality recognition. We conclude that talker information is accessible in sung speech, despite acoustic constraints in song.
Topics: Humans; Singing; Male; Female; Adult; Speech Perception; Voice; Young Adult; Recognition, Psychology; Speech
PubMed: 38888432
DOI: 10.1121/10.0026385
Proceedings of Meetings on Acoustics... Dec 2023
The vocal folds experience repeated collision during phonation. The resulting contact pressure is often considered to play an important role in vocal fold injury, and has been the focus of many experimental studies. In this study, vocal fold contact pattern and contact pressure during phonation were numerically investigated. The results show that vocal fold contact in general occurs within a horizontal strip on the medial surface, first appearing at the inferior medial surface and propagating upward. Because of the localized and travelling nature of vocal fold contact, sensors of a finite size may significantly underestimate the peak vocal fold contact pressure, particularly for vocal folds of low transverse stiffness. This underestimation also makes it difficult to identify the contact pressure peak in the intraglottal pressure waveform. These results suggest that the vocal fold contact pressure reported in previous experimental studies may have significantly underestimated the actual values. It is recommended that contact pressure sensors with a diameter no greater than 0.4 mm be used in future experiments to ensure adequate accuracy in measuring the peak vocal fold contact pressure during phonation.
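The averaging effect described above can be illustrated in one dimension: if the contact patch is a narrow pressure strip, a sensor face wider than the strip reads the spatial mean rather than the peak. The strip width, profile shape, and sensor sizes below are assumptions for illustration, not values from the study.

```python
import numpy as np

# 1-D sketch of why a finite-size sensor underestimates peak contact
# pressure. All numbers are illustrative.
x = np.linspace(-2.0, 2.0, 4001)          # position along medial surface, mm
strip_width = 0.3                          # assumed contact strip width, mm
pressure = np.exp(-0.5 * (x / (strip_width / 2)) ** 2)  # normalized peak = 1

def sensor_reading(diameter_mm):
    """Average the pressure profile over a sensor of the given diameter,
    centered on the peak (the best case for the sensor)."""
    mask = np.abs(x) <= diameter_mm / 2
    return pressure[mask].mean()

true_peak = pressure.max()
for d in (0.4, 1.0, 2.0):
    print(f"{d} mm sensor reads {sensor_reading(d) / true_peak:.0%} of the peak")
```

Even the smallest sensor reads below the true peak; larger faces lose most of it, which is the motivation for the 0.4 mm diameter recommendation.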
PubMed: 38872712
DOI: 10.1121/2.0001894
Cureus Jun 2024
A patient with multiple comorbidities and an eight-year history of tracheostomy was being treated for tracheitis. At this point, she became unable to use regular speaking valves, and multiple attempts to reintroduce the speaking valve failed. A Ferrer adjustable speaking valve (FASV) was designed with gradations of outflow closure, allowing air to pass through the vocal cords for phonation. The FASV was offered to her through the compassionate use program at the FDA. At 20% initial closure, the patient was able to tolerate the valve and was advanced to 50% closure, at which point she could phonate partially. Use of the valve was terminated at the time of her transfer, 23 days after initiation. This suggests the safety and possible efficacy of using an adjustable speaking valve earlier than regular valves, allowing patients to communicate earlier and further exercise their diaphragms.
PubMed: 38868548
DOI: 10.7759/cureus.62081
NPJ Parkinson's Disease Jun 2024
Review
Approximately 90% of patients with Parkinson's disease (PD) suffer from dysarthria. However, there is currently a lack of research on acoustic measurements and speech impairment patterns among Mandarin-speaking individuals with PD. This study aims to assess the feasibility of diagnosis and disease monitoring in Mandarin-speaking PD patients through the speech paradigm recommended for non-tonal languages, and to explore the anatomical and functional substrates. We examined a total of 160 native Mandarin-speaking Chinese participants: 80 PD patients, 40 healthy controls (HC), and 40 MRI controls. We screened for the optimal combination of acoustic metrics for PD diagnosis. Finally, we used the objective metrics to predict patients' motor status using a Naïve Bayes model and analyzed the correlations between cortical thickness, subcortical volumes, functional connectivity, and network properties. Comprehensive acoustic screening based on prosodic, articulation, and phonation abnormalities allows differentiation between HC and PD with an area under the curve of 0.931. Patients with slowed reading exhibited atrophy of the fusiform gyrus (FDR p = 0.010, R = 0.391), reduced functional connectivity between the fusiform gyrus and motor cortex, and increased nodal local efficiency (NLE) and nodal efficiency (NE) in the bilateral pallidum. Patients with prolonged pauses demonstrated atrophy in the left hippocampus, along with decreased NLE and NE. Acoustic assessment in Mandarin thus proves effective for diagnosis and disease monitoring in Mandarin-speaking PD patients, generalizing standardized acoustic guidelines beyond non-tonal languages. The speech impairment in Mandarin-speaking PD patients involves not only the motor aspects of speech but also the cognitive processes underlying language generation.
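The classification step named in the abstract (a Naïve Bayes model over acoustic metrics) can be sketched as follows. The feature values below are synthetic stand-ins; the study's actual prosodic, articulation, and phonation metrics are not reproduced here.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Synthetic stand-ins for three acoustic metrics (e.g., speech rate,
# pause ratio, jitter); distributions are fabricated for illustration.
n_pd, n_hc = 80, 40
X_pd = rng.normal(loc=[-0.5, 0.6, 0.4], scale=1.0, size=(n_pd, 3))
X_hc = rng.normal(loc=[0.5, -0.6, -0.4], scale=1.0, size=(n_hc, 3))
X = np.vstack([X_pd, X_hc])
y = np.array([1] * n_pd + [0] * n_hc)   # 1 = PD, 0 = healthy control

# Gaussian Naïve Bayes, evaluated with 5-fold cross-validation.
scores = cross_val_score(GaussianNB(), X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```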
PubMed: 38866758
DOI: 10.1038/s41531-024-00720-3
The Pan African Medical Journal 2024
Guillain-Barré syndrome/Miller-Fisher syndrome (GBS/MFS) overlap syndrome is an extremely rare variant of Guillain-Barré syndrome (GBS) in which Miller-Fisher syndrome (MFS) coexists with other characteristics of GBS, such as limb weakness, paresthesia, and facial paralysis. We report the clinical case of a 12-year-old patient, with no pathological history, who acutely presented with ophthalmoplegia, areflexia, facial diplegia, and swallowing and phonation disorders, followed by progressive, descending, and symmetrical paresis affecting first the upper limbs and then the lower limbs. Albuminocytological dissociation was found in the cerebrospinal fluid study. Magnetic resonance imaging of the spinal cord showed enhancement and thickening of the cauda equina roots. The patient was treated with immunoglobulins with a favorable clinical outcome.
Topics: Humans; Miller Fisher Syndrome; Guillain-Barre Syndrome; Child; Magnetic Resonance Imaging; Male; Immunoglobulins; Treatment Outcome
PubMed: 38854867
DOI: 10.11604/pamj.2024.47.127.42985
Scientific Reports Jun 2024
Voice production of humans and most mammals is governed by the MyoElastic-AeroDynamic (MEAD) principle, where an air stream is modulated by self-sustained vocal fold oscillation to generate audible air pressure fluctuations. An alternative mechanism is found in ultrasonic vocalizations of rodents, which are established by an aeroacoustic (AA) phenomenon without vibration of laryngeal tissue. Previously, some authors argued that high-pitched human vocalization is also produced by the AA principle. Here, we investigate so-called "whistle register" voice production in nine professional female operatic sopranos singing a scale from C6 (≈ 1047 Hz) to G6 (≈ 1568 Hz). Super-high-speed videolaryngoscopy revealed vocal fold collision in all participants, with closed quotients from 30 to 73%. Computational modeling showed that the biomechanical requirements to produce such a high-pitched voice include increased contraction of the cricothyroid muscle, vocal fold strain of about 50%, and high subglottal pressure. Our data suggest that high-pitched operatic soprano singing uses the MEAD mechanism. Consequently, the commonly used term "whistle register" does not reflect the physical principle of a whistle with regard to voice generation in high-pitched classical singing.
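The scale of tension implied by these pitches can be illustrated with the ideal-string approximation of vocal fold fundamental frequency, F0 = (1/2L)·sqrt(σ/ρ), rearranged for the longitudinal stress σ. The tissue density and vibrating length below are assumed round numbers, not values from the study, and the string model itself is a simplification of the computational modeling the authors performed.

```python
# Ideal-string approximation of vocal fold pitch:
#   F0 = (1 / 2L) * sqrt(sigma / rho)
# rearranged to estimate the longitudinal stress needed for a target pitch.
rho = 1040.0   # tissue density, kg/m^3 (assumed)
L = 0.015      # vibrating vocal fold length, m (assumed)

def stress_for_pitch(f0_hz):
    """Longitudinal stress (Pa) required to sustain f0 in the string model."""
    return rho * (2 * L * f0_hz) ** 2

for note, f0 in (("C6", 1047.0), ("G6", 1568.0)):
    print(f"{note} ({f0:.0f} Hz): ~{stress_for_pitch(f0) / 1e6:.2f} MPa")
```

Under these assumptions the required stress is on the megapascal scale, far above resting tissue stress, which is consistent with the reported need for strong cricothyroid contraction, ~50% strain, and high subglottal pressure.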
Topics: Humans; Female; Singing; Biomechanical Phenomena; Vocal Cords; Adult; Sound; Voice; Phonation
PubMed: 38849382
DOI: 10.1038/s41598-024-62598-8
JASA Express Letters Jun 2024
The automatic classification of phonation types in the singing voice is essential for tasks such as identifying singing style. This study proposes wavelet scattering network (WSN)-based features for classifying phonation types in the singing voice. The WSN, which closely parallels physiological models of the auditory system, generates acoustic features that characterize information related to pitch, formants, and timbre. Hence, WSN-based features can effectively capture the discriminative information across phonation types. The experimental results show that the proposed WSN-based features improved phonation classification accuracy by at least 9% compared to state-of-the-art features.
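The core operation of a first-order scattering transform (band-pass filtering, complex modulus, time averaging) can be sketched as below. This is a toy stand-in for a full wavelet scattering network, not the authors' implementation; filter shapes, bandwidths, and center frequencies are assumptions.

```python
import numpy as np

def scattering_features(x, fs, n_filters=8):
    """Minimal first-order scattering sketch: log-spaced band-pass
    filter bank, modulus, then time averaging. A toy stand-in for a
    full wavelet scattering network."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    centers = np.geomspace(100, fs / 2.5, n_filters)  # log-spaced, assumed range
    feats = []
    for fc in centers:
        # Gaussian band-pass around fc with roughly constant-Q bandwidth.
        h = np.exp(-0.5 * ((freqs - fc) / (0.25 * fc)) ** 2)
        band = np.fft.irfft(spectrum * h, n=len(x))
        feats.append(np.abs(band).mean())             # modulus + averaging
    return np.array(feats)

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t)  # stand-in for a sung vowel
print(scattering_features(tone, fs).round(4))
```

The feature vector peaks in the band nearest the tone's pitch; on real singing, second-order coefficients (scattering of the modulus signals) add the timbre and modulation detail the study exploits.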
PubMed: 38847582
DOI: 10.1121/10.0026241
Nature Communications Jun 2024
Humans produce two forms of cognitively complex vocalizations: speech and song. It is debated whether these differ based primarily on culturally specific, learned features, or if acoustical features can reliably distinguish them. We study the spectro-temporal modulation patterns of vocalizations produced by 369 people living in 21 urban, rural, and small-scale societies across six continents. Specific ranges of spectral and temporal modulations, overlapping within categories and across societies, significantly differentiate speech from song. Machine-learning classification shows that this effect is cross-culturally robust, vocalizations being reliably classified solely from their spectro-temporal features across all 21 societies. Listeners unfamiliar with the cultures classify these vocalizations using similar spectro-temporal cues as the machine learning algorithm. Finally, spectro-temporal features are better able to discriminate song from speech than a broad range of other acoustical variables, suggesting that spectro-temporal modulation, a key feature of auditory neuronal tuning, accounts for a fundamental difference between these categories.
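Spectro-temporal modulation features of the kind used here are commonly computed as the 2-D Fourier transform of a (log) spectrogram, yielding axes of spectral and temporal modulation. The sketch below is a simplified stand-in for such an analysis; the window sizes and the "song-like" test signal are assumptions for illustration.

```python
import numpy as np
from scipy.signal import spectrogram

def modulation_spectrum(x, fs):
    """Spectro-temporal modulation sketch: 2-D FFT of a mean-removed
    log spectrogram. Rows index spectral modulation, columns temporal
    modulation. A simplified stand-in for a full modulation analysis."""
    f, frames, S = spectrogram(x, fs=fs, nperseg=256, noverlap=192)
    logS = np.log(S + 1e-10)
    logS -= logS.mean()                              # remove DC component
    return np.abs(np.fft.fftshift(np.fft.fft2(logS))) ** 2

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
# Stand-in "song": slow 3 Hz amplitude modulation of a tone; speech
# typically shows faster temporal modulation (~4-8 Hz syllable rate).
song_like = (1 + 0.8 * np.sin(2 * np.pi * 3 * t)) * np.sin(2 * np.pi * 220 * t)
M = modulation_spectrum(song_like, fs)
print(M.shape)
```

Classifying song vs. speech then reduces to feeding such modulation maps (or summary statistics of them) to an off-the-shelf classifier, as in the study's machine-learning analysis.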
Topics: Humans; Speech; Male; Female; Machine Learning; Adult; Acoustics; Cross-Cultural Comparison; Auditory Perception; Sound Spectrography; Singing; Music; Middle Aged; Young Adult
PubMed: 38844457
DOI: 10.1038/s41467-024-49040-3