American Journal of Health Promotion, May 2024
PURPOSE
To examine the relative importance of social cognitive predictors (ie, performance accomplishment, vicarious learning, verbal persuasion, affective state) on health promotion self-efficacy among older adults during COVID-19.
DESIGN
Cross-sectional.
SETTING
Data collected online from participants in British Columbia (BC), Canada.
SUBJECTS
Seventy-five adults (n = 75) aged ≥65 years.
MEASURES
Health promotion self-efficacy was measured using the Self-Rated Abilities for Health Practices Scale. Performance accomplishment was assessed using the health directed behavior subscale of the Health Education Impact Questionnaire; vicarious learning was measured using the positive social interaction subscale of the Medical Outcomes Survey - Social Support Scale (MOS-SSS); verbal persuasion was assessed using the informational support subscale from the MOS-SSS; and affective state was assessed using the depression subscale from the Depression Anxiety Stress Scale (DASS-21).
ANALYSIS
Multiple linear regression was used to investigate the relative importance of each social cognitive predictor on self-efficacy, after controlling for age.
RESULTS
Our analyses revealed statistically significant associations between self-efficacy and performance accomplishment (health-directed behavior; β = .20), verbal persuasion (informational support; β = .41), and affective state (depressive symptoms; β = -.44) at P < .05. Vicarious learning (β = -.15) did not significantly predict self-efficacy. The model was statistically significant (P < .001), explaining 43% of the self-efficacy variance.
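As a sketch of the analysis described above: standardized (beta) coefficients from a multiple linear regression can be obtained by z-scoring all variables before fitting. The variable layout and data below are hypothetical stand-ins for the study's four social cognitive predictors plus age, not the actual study data.

```python
import numpy as np

def standardized_betas(X, y):
    """Fit y ~ X after z-scoring all variables; return the standardized
    coefficients (betas) and the model R^2."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    r2 = 1 - np.sum((yz - Xz @ beta) ** 2) / np.sum(yz ** 2)
    return beta, r2

# Hypothetical data: 75 participants, 5 predictors (4 social cognitive
# variables + age). The coefficients are illustrative only.
rng = np.random.default_rng(0)
X = rng.normal(size=(75, 5))
y = (0.2 * X[:, 0] - 0.15 * X[:, 1] + 0.41 * X[:, 2] - 0.44 * X[:, 3]
     + rng.normal(scale=0.8, size=75))
betas, r2 = standardized_betas(X, y)
```

Because both sides are z-scored and centered, no intercept term is needed, and the resulting coefficients are directly comparable in magnitude, which is what "relative importance" rests on here.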
CONCLUSION
Performance accomplishment experiences, verbal persuasion strategies, and affective states may be suitable targets for interventions to modify health promotion self-efficacy among older adults in environments that require physical and social distancing.
PubMed: 38816954
DOI: 10.1177/08901171241256703
Frontiers in Public Health, 2024
BACKGROUND
The dominance of the Contemporary Commercial Music (CCM) industry in music markets has led to a significant increase in the number of CCM performers. Performing in a wide variety of singing styles exposes CCM singers to specific risk factors that can lead to voice problems. This, in turn, necessitates the consideration of this particular group of voice users within the Occupational Health framework. The aim of the present research was threefold. First, it sought to profile the group of Polish CCM singers. Second, it was designed to explore the prevalence of self-reported voice problems and voice quality in this population, in both speech and singing. Third, it aimed to explore the relationships between voice problems and lifetime singing involvement, occupational voice use, smoking, alcohol consumption, vocal training, and microphone use, as potential voice risk factors.
MATERIALS AND METHODS
The study was conducted in Poland from January 2020 to April 2023. An online survey collected socio-demographic information, singing involvement characteristics, and singers' voice self-assessments. The prevalence of voice problems was assessed with the Polish versions of the Vocal Tract Discomfort Scale (VTDS) and the Singing Voice Handicap Index (SVHI). A self-reported dysphonia symptoms protocol was also applied. Perceived overall voice quality was assessed with a 100-mm Visual Analogue Scale (VAS).
RESULTS
In total, 412 singers (310 women and 102 men) completed the survey. Nearly half of the studied population declared lifetime singing experience of over 10 years, with an average daily singing time of 1 or 2 h. A total of 283 participants had received vocal training. For 11.4% of respondents, singing was the primary income source, and 42% defined their career goals as voice-related. The median scores of the VTDS were 11.00 (range 0-44) and 12.00 (range 0-40) for the Frequency and Severity subscales, respectively. The median SVHI score of 33 (range 0-139) was significantly higher than the normative values determined in a systematic review and meta-analysis (2018). Strong positive correlations were observed between the SVHI and both VTD subscales: Frequency (r = 0.632, p < 0.001) and Severity (r = 0.611, p < 0.001). The relationships between most of the other variables studied were weak or negligible.
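Rank correlations of the kind reported above (between SVHI totals and VTD subscale scores) can be computed as the Pearson correlation of the ranks. A minimal sketch, using hypothetical paired scores rather than the study's data; note this simple double-argsort ranking does not average tied ranks, which a full Spearman implementation would.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks.
    (Double argsort yields 0-based ranks; ties are not averaged here.)"""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical paired scores: SVHI totals vs. VTD Frequency subscale.
svhi = np.array([10, 33, 55, 21, 70, 44])
vtd_freq = np.array([2, 11, 19, 8, 30, 14])
rho = spearman_rho(svhi, vtd_freq)
```

Because the illustrative pairs above are perfectly monotone, rho comes out at 1.0; real questionnaire data with ties would call for averaged (fractional) ranks.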
CONCLUSION
The examined CCM singers exhibited substantial diversity with regard to musical genre preferences, aspirations pertaining to singing endeavors, career affiliations, and sources of income. Singing voice assessment revealed a greater degree of voice problems in the examined cohort than reported so far in the literature, based on the SVHI and VTDS.
Topics: Humans; Poland; Singing; Male; Female; Adult; Cross-Sectional Studies; Middle Aged; Music; Voice Quality; Voice Disorders; Self-Assessment; Surveys and Questionnaires; Prevalence; Risk Factors; Young Adult; Speech
PubMed: 38813421
DOI: 10.3389/fpubh.2024.1256152
Communicative & Integrative Biology, 2024
It is generally assumed that verbal communication can articulate concepts like 'fact' and 'truth' accurately. However, language is fundamentally inaccurate and ambiguous, and it is not possible to express exact propositions accurately in an ambiguous medium. Whether truth exists or not, language cannot express it in any exact way. A major problem for verbal communication is that words are interpreted in fundamentally different ways by the sender and the receiver. In addition, intrapersonal verbal communication - the voice in our head - is a useless extension to the thought process and results in misunderstanding our own thoughts. The evolution of language has had a profound impact on human life. Most consequential has been that it allowed people to question the old human rules of behavior - the pre-language way of living. As language could not accurately express the old rules, they lost their authority and disappeared. A long period without any rules of how to live together must have followed, probably accompanied by complete chaos. Later, new rules were devised in language, but these too were questioned and had to be enforced by punishment. Language changed the peaceful human way of living under the old rules into violent and aggressive forms of living under punitive control. Religion then tried to incorporate the old rules into the harsh verbal world. The rules were expressed in language through parables: imaginary beings - the gods - who possessed the power of the old rules but could be related to through their human appearance and behavior.
PubMed: 38812722
DOI: 10.1080/19420889.2024.2353197
Cognition, Aug 2024
Do speakers use less redundant language with more proficient interlocutors? Both the communicative efficiency framework and the language development literature predict that speech directed to younger infants should be more redundant than speech directed to older infants. Here, we test this by quantifying redundancy in infant-directed speech (IDS) using entropy rate - an information-theoretic measure reflecting the average degree of repetitiveness. While IDS is often described as repetitive, entropy rate provides a novel holistic measure of redundancy in this speech genre. Using two developmental corpora, we compare the entropy rates of samples taken at different ages. We find that parents use less redundant speech when talking to older children, illustrating an effect of perceived interlocutor proficiency on redundancy. The developmental decrease in redundancy reflects a decrease in lexical repetition, but also a decrease in repetitions of multi-word sequences, highlighting the importance of larger sequences in early language learning.
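To make the entropy-rate idea concrete: one simple plug-in estimator (not necessarily the one used in the study) is the conditional entropy of each word given the previous word, computed from bigram counts. A fully repetitive utterance scores 0 bits/word; sequences where the same context leads to varied continuations score higher.

```python
from collections import Counter
from math import log2

def bigram_entropy_rate(words):
    """Estimate entropy rate (bits/word) as the conditional entropy
    H(W_t | W_{t-1}) from bigram counts -- a simple plug-in estimator."""
    bigrams = Counter(zip(words, words[1:]))
    contexts = Counter(words[:-1])
    n = sum(bigrams.values())
    h = 0.0
    for (w1, w2), c in bigrams.items():
        p_joint = c / n            # P(w1, w2)
        p_cond = c / contexts[w1]  # P(w2 | w1)
        h -= p_joint * log2(p_cond)
    return h

# A maximally repetitive "utterance" has entropy rate 0; a sequence
# where "the" is followed by three different words scores higher.
low = bigram_entropy_rate("look look look look".split())
high = bigram_entropy_rate("the dog saw the cat saw the bird".split())
```

Plug-in estimates like this are biased on short samples, which is why corpus studies typically compare samples of matched length or use bias-corrected estimators.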
Topics: Humans; Language Development; Infant; Speech; Male; Female; Child, Preschool; Learning; Child Development
PubMed: 38810427
DOI: 10.1016/j.cognition.2024.105817
Acta Psychologica, Jul 2024
This study investigates the impact of segmental accuracy and nucleus placement on the comprehensibility of English as an International Language (EIL), with the aim of informing phonological norms and teaching models. Speech samples from 59 EIL speakers with varying levels of segmental accuracy were collected during a reading task involving a passage read in three different versions, each lasting approximately 30 to 40 s. To directly compare the impact of nuclear stress placement on comprehensibility, two versions of stimuli were created from these samples, differing only in their placement of nuclear stress - either correct or incorrect. The correctness of placements was determined by seven native speakers of English. Eight native English speakers, aged 19-24, and eight EIL speakers, aged 20-24, with upper-intermediate to advanced proficiency, rated the comprehensibility of the two versions of speech. Results suggest that while correct nucleus placement enhances comprehensibility for native English listeners, it has little influence on EIL listeners. Segmental accuracy in EIL speech affects comprehensibility substantially more than nucleus placement for both native and EIL listeners, indicating that English language teaching should focus on minimizing segmental errors to improve comprehensibility for EIL speakers, despite the benefits of correct nucleus placement.
Topics: Humans; Male; Female; Reading; Young Adult; Comprehension; Speech Perception; Phonetics; Language; Adult; Multilingualism; Speech
PubMed: 38810356
DOI: 10.1016/j.actpsy.2024.104313
PloS One, 2024
Non-random exploration of infant speech-like vocalizations (e.g., squeals, growls, and vowel-like sounds or "vocants") is pivotal in speech development. This type of vocal exploration, often noticed when infants produce particular vocal types in clusters, serves two crucial purposes: it establishes a foundation for speech, because speech requires the formation of new vocal categories, and it serves as a basis for vocal signaling of wellness and interaction with caregivers. Despite the significance of clustering, existing research has largely relied on subjective descriptions and anecdotal observations regarding early vocal category formation. In this study, we address this gap by presenting the first large-scale empirical evidence of vocal category exploration and clustering throughout the first year of life. We observed infant vocalizations longitudinally using all-day home recordings from 130 typically developing infants across the entire first year of life. To identify clustering patterns, we conducted Fisher's exact tests to compare the occurrence of squeals versus vocants, as well as growls versus vocants. We found that across the first year, infants demonstrated clear clustering patterns of squeals and growls, indicating that these categories were not randomly produced; rather, infants appeared to engage actively in practicing these specific categories. The findings lend support to the concept of infants actively engaging in vocal exploration and category formation, a key foundation for vocal language.
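Fisher's exact test, as used above to detect clustering, compares a 2x2 contingency table against the hypergeometric null of no association. A minimal one-sided sketch with stdlib only; the table counts below (squeal vs. vocant occurrences in two recording segments) are illustrative, not the study's data.

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    probability under the hypergeometric null of a count in cell (0,0)
    at least as large as the observed `a`."""
    row1, col1, n = a + b, a + c, a + b + c + d
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        p += comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)
    return p

# Hypothetical counts: squeals vs. vocants in two time windows.
# Rows: window 1 / window 2; columns: squeals / vocants.
p_squeal = fisher_exact_one_sided(12, 3, 4, 11)
```

A small p-value here indicates that squeals bunch into one window more than chance allows, i.e., clustering; the published analysis may have used a two-sided version, which sums all tables with probability no greater than the observed one.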
Topics: Humans; Infant; Male; Female; Speech; Language Development; Voice; Longitudinal Studies; Phonetics
PubMed: 38809807
DOI: 10.1371/journal.pone.0299140
CoDAS, 2024
PURPOSE
To present the content and response process validity evidence of the Speaking in Public Coping Scale (ECOFAP).
METHODS
A methodological study to develop and validate the instrument. It followed the instrument development method with theoretical, empirical, and analytical procedures, based on the validity criteria of the Standards for Educational and Psychological Testing (SEPT). The process of obtaining content validity evidence had two stages: 1) conceptual definition of the construct, based on theoretical precepts of speaking in public and the Motivational Theory of Coping (MTC); 2) developing items and response keys, structuring the instrument, assessment by a committee of 10 specialists, restructuring of scale items, and development of the ECOFAP pilot version. Item representativeness was analyzed through the item content validity index. The response process step was conducted in a single stage with a convenience sample of 30 people with and without difficulty speaking in public, recruited on the campus of a Brazilian university and belonging to various social and professional strata. In this process, the respondents' verbal and nonverbal reactions were qualitatively analyzed.
RESULTS
The initial version of ECOFAP, consisting of 46 items, was evaluated by the judges and later reformulated, resulting in a second version with 60 items. This second version was again submitted for expert analysis, and the content validity index per item was calculated. Eighteen items were excluded, yielding a third version of 42 items. Validity evidence based on response processes was then gathered by applying the 42-item version to a sample of 30 individuals, which led to the rewriting of one item and the inclusion of six more, producing the pilot version of ECOFAP with 48 items.
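The item-level content validity index (I-CVI) mentioned above is conventionally the proportion of experts rating an item relevant (3 or 4 on a 4-point scale). A minimal sketch with hypothetical ratings; the 0.78 cutoff is a commonly cited threshold for panels of about 10 judges, not a value stated in this abstract.

```python
def item_cvi(ratings, relevant=(3, 4)):
    """Item-level Content Validity Index: the share of experts who
    rate the item 'relevant' (3 or 4 on a 4-point relevance scale)."""
    return sum(r in relevant for r in ratings) / len(ratings)

# Ten hypothetical expert ratings for one scale item (1-4 scale).
ratings = [4, 4, 3, 4, 2, 3, 4, 3, 4, 4]
cvi = item_cvi(ratings)   # 9 of 10 experts rated the item relevant
keep = cvi >= 0.78        # assumed cutoff; the study's threshold is not given
```

Items falling below the chosen cutoff are candidates for exclusion or rewriting, which matches the winnowing from 60 to 42 items described in the results.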
CONCLUSION
The ECOFAP pilot version has items with well-structured semantics and syntax, representing strategies to cope with speaking in public.
Topics: Humans; Adaptation, Psychological; Reproducibility of Results; Male; Female; Brazil; Surveys and Questionnaires; Psychometrics; Adult; Young Adult; Middle Aged; Speech
PubMed: 38808778
DOI: 10.1590/2317-1782/20242023200pt
Frontiers in Human Neuroscience, 2024
INTRODUCTION
Traumatic brain injury (TBI) negatively impacts social communication in part due to social cognitive difficulties, which may include reduced mental state term (MST) use in some discourse genres. As social cognitive difficulties can negatively impact relationships, employment, and meaningful everyday activities, assessing and treating these difficulties post-TBI is crucial. To address knowledge gaps, the present study examined MST use in the narrative retells of adults with and without severe TBI to compare between-group performance, evaluate changes over the first two years post-TBI, and investigate the impact of participant and injury-related variables.
METHODS
The total number of MSTs, the ratio of MSTs to total utterances, and the diversity of MSTs were identified in the Cinderella narratives of 57 participants with no brain injury and 57 with TBI at 3, 6, 9, 12, and 24 months post-TBI.
RESULTS
Reduced MST use in participants with TBI was found at 3, 6, 9, and 12 months post-TBI, but these reductions disappeared when story length (total utterances) was accounted for. Further, MST diversity did not differ between groups. Similarly, although the total number of MSTs increased over time post-TBI, no changes were observed in the ratio of MSTs to total utterances or in MST diversity over time. Injury severity (post-traumatic amnesia duration), years of education, and verbal reasoning abilities were all related to MST use.
DISCUSSION
Overall, although individuals used fewer MSTs in complex story retells across the first year following severe TBI, this reduction reflected impoverished story content, rather than the use of a lower ratio of MSTs. Further, key prognostic factors related to MST use included injury severity, educational attainment, and verbal reasoning ability. These findings have important implications for social communication assessment and treatment targeting social cognition post-TBI.
PubMed: 38807634
DOI: 10.3389/fnhum.2024.1386227
PLoS Biology, May 2024
Music and speech are complex and distinct auditory signals that are both foundational to the human experience. The mechanisms underpinning each domain are widely investigated. However, what perceptual mechanism transforms a sound into music or speech, and what basic acoustic information is required to distinguish between them, remain open questions. Here, we hypothesized that a sound's amplitude modulation (AM), an essential temporal acoustic feature driving the auditory system across processing levels, is critical for distinguishing music and speech. Specifically, in contrast to paradigms using naturalistic acoustic signals (which can be challenging to interpret), we used a noise-probing approach to untangle the auditory mechanism: if AM rate and regularity are critical for perceptually distinguishing music and speech, judgments of artificially noise-synthesized ambiguous audio signals should align with their AM parameters. Across 4 experiments (N = 335), signals with a higher peak AM frequency tended to be judged as speech, and those with a lower peak AM frequency as music. Interestingly, this principle was consistently used by all listeners for speech judgments, but only by musically sophisticated listeners for music. In addition, signals with more regular AM were judged as music over speech, and this feature was more critical for music judgment, regardless of musical sophistication. The data suggest that the auditory system can rely on an acoustic property as low-level as AM to distinguish music from speech, a simple principle that provokes both neurophysiological and evolutionary experiments and speculations.
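The "peak AM frequency" of a signal can be estimated by extracting the amplitude envelope and locating the dominant frequency in its spectrum. A crude numpy-only sketch (rectify-and-smooth envelope rather than a Hilbert transform, and not the study's actual stimulus-synthesis pipeline), demonstrated on a noise carrier modulated at 4 Hz, a rate typical of speech syllable rhythms.

```python
import numpy as np

def peak_am_frequency(signal, sr, max_hz=32):
    """Estimate the dominant amplitude-modulation rate of a signal:
    crude envelope (rectify + 256-sample moving average), then the
    peak of the envelope spectrum below `max_hz`."""
    env = np.convolve(np.abs(signal), np.ones(256) / 256, mode="same")
    env = env - env.mean()                      # drop the DC component
    spec = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(len(env), 1 / sr)
    band = freqs <= max_hz
    return freqs[band][np.argmax(spec[band])]

# White-noise carrier amplitude-modulated at 4 Hz (illustrative signal).
sr = 8000
t = np.arange(0, 2.0, 1 / sr)
carrier = np.random.default_rng(1).normal(size=t.size)
am = (1 + np.sin(2 * np.pi * 4 * t)) * carrier
peak = peak_am_frequency(am, sr)
```

With a 2-second sample the spectral resolution is 0.5 Hz, so the estimate lands close to the true 4 Hz modulation rate; the study's contrast between speech-like (faster) and music-like (slower) AM rates rests on exactly this kind of envelope-spectrum peak.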
Topics: Humans; Music; Male; Female; Adult; Auditory Perception; Acoustic Stimulation; Speech Perception; Young Adult; Speech; Adolescent
PubMed: 38805517
DOI: 10.1371/journal.pbio.3002631
PloS One, 2024
When comprehending speech, listeners can use information encoded in visual cues from a face to enhance auditory speech comprehension. For example, prior work has shown that mouth movements reflect articulatory features of speech segments and durational information, while pitch and speech amplitude are primarily cued by eyebrow and head movements. Little is known about how the visual perception of segmental and prosodic speech information is influenced by linguistic experience. Using eye-tracking, we studied how perceivers' visual scanning of different regions of a talking face predicts accuracy in a task targeting segmental versus prosodic information, and asked how this was influenced by language familiarity. Twenty-four native English perceivers heard two audio sentences in either English or Mandarin (an unfamiliar, non-native language), which sometimes differed in segmental or prosodic information (or both). Perceivers then saw a silent video of a talking face and judged whether that video matched the first or the second audio sentence (or whether both sentences were the same). First, increased looking to the mouth predicted correct responses only for non-native language trials. Second, the start of a successful search for speech information in the mouth area was significantly delayed in non-native versus native trials, but only when there were solely prosodic differences in the auditory sentences, not when there were segmental differences. Third, in correct trials, the saccade amplitude in native language trials was significantly greater than in non-native trials, indicating more intensely focused fixations in the latter.
Taken together, these results suggest that mouth-looking was generally more evident when processing a non-native versus native language in all analyses, but fascinatingly, when measuring perceivers' latency to fixate the mouth, this language effect was largest in trials where only prosodic information was useful for the task.
Topics: Humans; Female; Male; Language; Adult; Speech Perception; Phonetics; Young Adult; Face; Visual Perception; Eye Movements; Speech; Eye-Tracking Technology
PubMed: 38805447
DOI: 10.1371/journal.pone.0304150