PloS One 2024
Language is rooted in our ability to compose: We link words together, fusing their meanings. Links are not limited to neighboring words but often span intervening words. The ability to process these non-adjacent dependencies (NADs) conflicts with the brain's sampling of speech: We consume speech in chunks that are limited in time, containing only a limited number of words. It is unknown how we link words together that belong to separate chunks. Here, we report that we cannot, at least not so well. In our electroencephalography (EEG) study, 37 human listeners learned chunks and dependencies from an artificial grammar (AG) composed of syllables. Multi-syllable chunks to be learned were equal-sized, allowing us to employ a frequency-tagging approach. On top of chunks, syllable streams contained NADs that were either confined to a single chunk or crossed a chunk boundary. Frequency analyses of the EEG revealed a spectral peak at the chunk rate, showing that participants learned the chunks. NADs that cross boundaries were associated with smaller electrophysiological responses than within-chunk NADs. This shows that NADs are processed readily when they are confined to the same chunk, but not as well when crossing a chunk boundary. Our findings help to reconcile the classical notion that language is processed incrementally with recent evidence for discrete perceptual sampling of speech. This has implications for language acquisition and processing as well as for the general view of syntax in human language.
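The frequency-tagging logic the abstract describes can be sketched in a few lines: with equal-sized chunks presented at a fixed rate, a learned chunk representation shows up as a narrow spectral peak at the chunk rate in the EEG spectrum. The rates, durations, and amplitudes below are hypothetical stand-ins for illustration, not the study's parameters.

```python
import numpy as np

# Hypothetical rates: 4 syllables/s, chunks of 4 syllables -> 1 chunk/s
fs = 250          # sampling rate (Hz)
dur = 60          # seconds of simulated EEG
t = np.arange(0, dur, 1 / fs)
syll_rate, chunk_rate = 4.0, 1.0

# Simulated EEG: a response at the syllable rate plus a weaker
# chunk-rate component (the signature of learned chunks), buried in noise.
rng = np.random.default_rng(0)
eeg = (np.sin(2 * np.pi * syll_rate * t)
       + 0.5 * np.sin(2 * np.pi * chunk_rate * t)
       + 0.8 * rng.standard_normal(t.size))

# Frequency tagging: over a long recording, the stimulus-locked
# response concentrates into narrow spectral peaks.
spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# With 60 s of data the frequency resolution is 1/60 Hz, so the
# chunk-rate frequency falls exactly on an FFT bin.
chunk_bin = int(np.argmin(np.abs(freqs - chunk_rate)))
neighbors = np.r_[spectrum[chunk_bin - 5:chunk_bin - 1],
                  spectrum[chunk_bin + 2:chunk_bin + 6]]
print(spectrum[chunk_bin] > 3 * neighbors.mean())  # peak stands out above the noise floor
```

Comparing the tagged bin against neighboring bins, as above, is a common way to test whether the chunk-rate peak exceeds the noise floor.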
Topics: Humans; Electroencephalography; Female; Male; Adult; Language; Young Adult; Speech Perception; Speech; Learning; Brain
PubMed: 38889141
DOI: 10.1371/journal.pone.0305333
JASA Express Letters Jun 2024
Singing is socially important but constrains voice acoustics, potentially masking certain aspects of vocal identity. Little is known about how well listeners extract talker details from sung speech or identify talkers across the sung and spoken modalities. Here, listeners (n = 149) were trained to recognize sung or spoken voices and then tested on their identification of these voices in both modalities. Learning vocal identities was initially easier through speech than song. At test, cross-modality voice recognition was above chance, but weaker than within-modality recognition. We conclude that talker information is accessible in sung speech, despite acoustic constraints in song.
Topics: Humans; Singing; Male; Female; Adult; Speech Perception; Voice; Young Adult; Recognition, Psychology; Speech
PubMed: 38888432
DOI: 10.1121/10.0026385
Child Development Jun 2024
The study examined how children's self-regulation skills measured by the strengths and weaknesses of ADHD symptoms and normal behavior rating are associated with story comprehension and how verbal engagement and e-book discussion prompts moderate this relation. Children aged 3-7 (N = 111, 50% female, Chinese as first language) read an interactive Chinese-English bilingual story e-book with or without discussion prompts twice with their parents (2020-2021). Results demonstrated that the lower children's self-regulation skills, the more they struggled with story comprehension. Critically, our data suggest that embedding e-book discussion prompts and more verbalization in English can mitigate this negative association for children with inattention/hyperactivity. These findings have critical implications for future e-book design, interventions, and home reading practice for children with inattention/hyperactivity and those at risk for attention deficit/hyperactivity disorder.
PubMed: 38887788
DOI: 10.1111/cdev.14128
Tobacco Induced Diseases 2024
INTRODUCTION
In this study, we investigate the effects of smoking on pain scores, vital signs, and analgesic consumption in the intraoperative and postoperative period in patients undergoing tympanomastoidectomy surgery.
METHODS
A total of 100 patients with American Society of Anesthesiologists I-II status, aged 18-55 years, and who were planned to undergo tympanomastoidectomy surgery were divided into two groups: smokers (Group 1) and non-smokers (Group 2). The patients were compared for preoperative, intraoperative, and 24-hour postoperative carboxyhemoglobin, blood pressure, oxygen saturation, respiratory rate, heart rate, pain intensity and verbal numerical rating scales, the extent of patient-controlled tramadol dose, nausea, and vomiting.
RESULTS
There were 50 individuals in each group. Postoperative analgesic consumption and pain scores were higher in Group 1, and the first postoperative pain was felt earlier. Furthermore, Group 1 had statistically higher carboxyhemoglobin levels before induction, after induction, and at the tenth minute after induction, as well as more postoperative nausea, whereas oxygen saturation was lower. The two groups showed no statistical difference in intraoperative and postoperative vital signs. Postoperative analgesic consumption was not affected by age or gender.
CONCLUSIONS
Smoking changes postoperative pain management, especially for this kind of operation, and these patients feel more pain and need more postoperative analgesic doses. Therefore, effective postoperative pain control should take account of smoking behavior, and analgesic doses may need to be adjusted for patients who smoke.
PubMed: 38887600
DOI: 10.18332/tid/189301
Journal of Neural Engineering Jun 2024
Brain-computer interfaces (BCIs) are technologies that bypass damaged or disrupted neural pathways and directly decode brain signals to perform intended actions. BCIs for speech have the potential to restore communication by decoding intended speech directly. Many studies have demonstrated promising results using invasive micro-electrode arrays and electrocorticography. However, the use of stereo-electroencephalography (sEEG) for speech decoding has not been fully explored. In this research, recently released sEEG data were used to decode Dutch words spoken by epileptic participants. We decoded speech waveforms from sEEG data using deep-learning methods. Three methods were implemented: a linear regression method, a recurrent neural network (RNN)-based sequence-to-sequence model, and a transformer model. The RNN and transformer models significantly outperformed linear regression, while no significant difference was found between the two deep-learning methods. Further investigation of individual electrodes showed that the same decoding result can be obtained using only a few of the electrodes. This study demonstrated that decoding speech from sEEG signals is possible and that the location of the electrodes is critical to decoding performance.
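The linear regression baseline mentioned in the abstract can be sketched as a closed-form ridge fit from electrode features to a speech representation (here a 1-D envelope). The data below are synthetic and the shapes (2000 samples, 32 electrodes) are hypothetical choices for illustration, not the released sEEG dataset.

```python
import numpy as np

# Hypothetical setup: neural features (n_samples x n_electrodes) and a
# speech envelope generated from a known linear mapping plus noise.
rng = np.random.default_rng(1)
n_samples, n_electrodes = 2000, 32
X = rng.standard_normal((n_samples, n_electrodes))
true_w = rng.standard_normal(n_electrodes)
envelope = X @ true_w + 0.5 * rng.standard_normal(n_samples)

# Closed-form ridge regression: w = (X^T X + lam * I)^{-1} X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_electrodes), X.T @ envelope)
pred = X @ w

# Pearson correlation between decoded and actual envelope, a common
# evaluation metric for speech reconstruction from neural data.
r = np.corrcoef(pred, envelope)[0, 1]
print(r)
```

Deep-learning decoders such as the RNN and transformer models replace the single linear map with nonlinear sequence models, but are typically evaluated with the same kind of correlation or spectral-distance metrics.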
Topics: Humans; Deep Learning; Electroencephalography; Speech; Brain-Computer Interfaces; Male; Female; Epilepsy; Stereotaxic Techniques; Adult; Neural Networks, Computer
PubMed: 38885688
DOI: 10.1088/1741-2552/ad593a
Psychopathology Jun 2024
OBJECTIVE
This study aimed to investigate the influence of familial predisposition on substance-induced psychosis among healthy siblings of patients diagnosed with substance-induced psychotic disorder, who themselves lack any family history of psychotic disorders. Additionally, the study aimed to explore clinical high-risk states for psychosis, schizotypal features, and neurocognitive functions in comparison to a healthy control group.
METHOD
The study compared healthy siblings of 41 patients diagnosed with substance-induced psychotic disorder with 41 healthy volunteers without a family history of psychotic disorders, matched for age, gender, and education. Sociodemographic and clinical characteristics of participants were obtained using data collection forms. The Comprehensive Assessment of At-Risk Mental States (CAARMS) and the Structured Interview for Schizotypy-Revised Form (SIS-R) scales were utilized to assess clinical high risk for psychosis. Neurocognitive functions were evaluated with the digit span test (DST), trail making test parts A and B (TMT), verbal fluency test (VFT), and Stroop test (ST).
RESULTS
Analysis using the CAARMS scale revealed that 39% of siblings and 7.3% of the control group were at clinically high risk for psychosis, indicating a significant difference in rates of psychotic vulnerability. Comparison between siblings and the control group showed significant differences in mean SIS-R subscale scores, including social behavior, hypersensitivity, referential thinking, suspiciousness, illusions, and overall oddness, as well as in mean neurocognitive function scores, including errors in TMT-A, TMT-B, and VFT out-of-category errors, with siblings exhibiting poorer performance.
CONCLUSION
Our study suggests that healthy siblings of patients with substance-induced psychosis exhibit more schizotypal features and have a higher risk of developing psychosis compared to healthy controls. Additionally, siblings demonstrate greater impairment in attention, response inhibition, and executive functions compared to healthy controls, indicating the potential role of genetic predisposition in the development of substance-induced psychotic disorder.
PubMed: 38885619
DOI: 10.1159/000538478
Psychological Services Jun 2024
Coercive, controlling behavior toward intimate partners correlates with physical intimate partner violence (IPV). We examined whether it also predicts subsequent IPV or other aggression. We conducted a secondary analysis of self-reports by 1,039 women and 509 men who participated in the first two waves of the Interpersonal Conflict and Resolution Study (Mumford et al., 2019). We defined coercive control as any reported perpetration at Wave 1 of threat to physically harm, threat to use information to control, or put down or disrespect their partner. The participants also reported perpetration of verbal abuse and physical or sexual aggression against intimate partners. We tested correlations of these behaviors with similar acts toward nonintimates (friends or unfamiliar persons) in Wave 1 and the prediction of physical violence in Wave 2, approximately 5 months later. Coercive control (14% of men, 26% of women) was correlated with physical or sexual IPV (8% of men, 15% of women) in both women and men and with physical violence and coercive control to nonintimates. In logistic regressions entering Wave 1 physical IPV on the first step, Wave 1 coercive control was a significant independent predictor of Wave 2 physical IPV overall, and for men but not women. Coercive control did not independently predict nonintimate physical violence. Coercive control toward an intimate partner is a unique predictor of physical IPV among men. Future research should use improved measures of coercive control and further examine coercive control as an indicator of general antisociality.
PubMed: 38884951
DOI: 10.1037/ser0000881
The Journal of the Acoustical Society... Jun 2024
For cochlear implant (CI) listeners, holding a conversation in noisy and reverberant environments is often challenging. Deep-learning algorithms can potentially mitigate these difficulties by enhancing speech in everyday listening environments. This study compared several deep-learning algorithms with access to one, two unilateral, or six bilateral microphones that were trained to recover speech signals by jointly removing noise and reverberation. The noisy-reverberant speech and an ideal noise reduction algorithm served as lower and upper references, respectively. Objective signal metrics were compared with results from two listening tests, including 15 typical hearing listeners with CI simulations and 12 CI listeners. Large and statistically significant improvements in speech reception thresholds of 7.4 and 10.3 dB were found for the multi-microphone algorithms. For the single-microphone algorithm, there was an improvement of 2.3 dB but only for the CI listener group. The objective signal metrics correctly predicted the rank order of results for CI listeners, and there was an overall agreement for most effects and variances between results for CI simulations and CI listeners. These algorithms hold promise to improve speech intelligibility for CI listeners in environments with noise and reverberation and benefit from a boost in performance when using features extracted from multiple microphones.
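The "ideal noise reduction" upper reference in studies like this is typically an oracle mask computed with access to the clean signals. A minimal sketch, assuming an oracle ratio mask over non-overlapping STFT frames (all parameters below are hypothetical, and real CI processing chains and the study's trained algorithms are far more involved):

```python
import numpy as np

# Hypothetical signals: a tone as stand-in "speech" plus white noise.
rng = np.random.default_rng(2)
fs, frame = 16000, 256
speech = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
noise = 0.5 * rng.standard_normal(fs)
mix = speech + noise

def stft_frames(x, frame):
    # Non-overlapping rectangular frames -> exact inverse on synthesis.
    n = len(x) // frame
    return np.fft.rfft(x[:n * frame].reshape(n, frame), axis=1)

S, N, M = (stft_frames(x, frame) for x in (speech, noise, mix))

# Oracle ratio mask: speech power / (speech + noise power) per bin,
# computable only with access to the clean components.
mask = np.abs(S) ** 2 / (np.abs(S) ** 2 + np.abs(N) ** 2 + 1e-12)
enhanced = np.fft.irfft(mask * M, n=frame, axis=1).ravel()

def snr_db(ref, est):
    err = ref[:len(est)] - est
    return 10 * np.log10(np.sum(ref[:len(est)] ** 2) / np.sum(err ** 2))

# The oracle mask raises the SNR relative to the noisy mixture.
print(snr_db(speech, mix[:len(enhanced)]), snr_db(speech, enhanced))
```

Trained single- and multi-microphone networks are then benchmarked between the unprocessed mixture (lower reference) and such an oracle (upper reference), which is the framing the abstract describes.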
Topics: Humans; Cochlear Implants; Speech Intelligibility; Noise; Deep Learning; Female; Male; Adult; Speech Perception; Middle Aged; Aged; Algorithms; Young Adult; Cochlear Implantation
PubMed: 38884525
DOI: 10.1121/10.0026218
The Journal of Social Psychology Jun 2024
Risk communication involves conveying potential risks to an audience and is crucial for shaping behavior and influencing individual well-being. Previous research predominantly focused on the verbal and written aspects of risk communication, with less emphasis on nonverbal cues such as vocal tone. Addressing this gap, our study explores the impact of competent and warm vocal tones on risk communication across two risky decision-making paradigms: the Balloon Analogue Risk Task (BART) in Study 1 and the Gambling Task in Study 2. Results show that competent and warm vocal tones are more persuasive than neutral tones, and that their effectiveness varies across decision-making scenarios. Additionally, participants' perceived competence and warmth of the vocal tones mediate this persuasiveness. This study enhances our theoretical understanding of risk communication by incorporating the impact of vocal tones. It also carries practical implications for marketers and practitioners, demonstrating the importance of voice as a medium of persuasion in real-world scenarios.
PubMed: 38884469
DOI: 10.1080/00224545.2024.2368015
Journal of the American Medical... Jun 2024
OBJECTIVES
The main objectives of this research are (1) to uniquely design assistive behaviors for socially assistive robots using the principles of persuasion from behavioral psychology, and (2) to investigate caregivers' perspectives and opinions on the use of these behaviors to engage and motivate older adults in cognitive activities.
DESIGN
We developed 10 unique robot persuasive assistive behavior strategies for the social robot Pepper using both verbal and nonverbal communication modes. Robot verbal behaviors were designed using Cialdini's principles of persuasion; nonverbal behaviors included expansive movements of the body. Care providers' perceptions of the quality, strength, and persuasiveness of these robot persuasive behaviors were assessed based on the Perceived Argument Strength Likert scale.
SETTING AND PARTICIPANTS
Eighteen formal and informal care providers caring for older adults, including those living with mild cognitive impairments, participated.
METHODS
An online survey was designed consisting of short videos of the Pepper robot displaying each behavior. After viewing each video, care providers completed the Perceived Argument Strength Likert scale to evaluate 6 attributes for each behavior. They also provided comments.
RESULTS
Results show that robot assistive behaviors combining praise with emotion, and commitment with emotion, were the most positively rated by care providers. Qualitative responses indicate that robot body language and speech quality influenced how a person perceives assistance in human-robot interactions.
CONCLUSIONS AND IMPLICATIONS
Our findings provide new insights into incorporating persuasive strategies into the design of assistive social robot behaviors with the aim of engaging and motivating older adults in an activity. The majority of care providers rated the robot persuasive behaviors positively. In designing a persuasive socially assistive robot for older adults, it is beneficial to display a combination of persuasive strategies, such as praise and commitment with emotion, to address individual users' needs and cognitive levels.
PubMed: 38880121
DOI: 10.1016/j.jamda.2024.105084