Journal of Experimental Child Psychology Jun 2024
Preschoolers are notoriously poor at delaying gratification and saving limited resources, yet evidence-based methods of improving these behaviors are lacking. Using the marble game saving paradigm, we examined whether young children's saving behavior would increase as a result of engaging in future-oriented imagination using a storyboard. Participants were 115 typically developing 4-year-olds from a midwestern U.S. metropolitan area (M = 53.48 months, SD = 4.14, range = 47-60; 54.8% female; 84.5% White; 7.3% Hispanic/Latino ethnicity; median annual household income = $150,000-$174,999). Children were randomly assigned to one of four storyboard conditions prior to the marble game: Positive Future Simulation, Negative Future Simulation, Positive Routine, or Negative Routine. In each condition, children were asked to imagine how they would feel in the future situation using a smiley face rating scale. Results showed that children were significantly more likely to save (and to save more marbles) in the experimental conditions compared with the control conditions (medium effect sizes). Moreover, imagining saving for the future (and how good that would feel) was more effective at increasing saving behaviors than imagining not saving (and how bad that would feel). Emotion ratings were consistent with the assigned condition, but positive emotion alone did not account for these effects. Results held after accounting for game order and verbal IQ. Implications of temporal psychological distancing and emotion anticipation for children's future-oriented decision making are discussed.
PubMed: 38852402
DOI: 10.1016/j.jecp.2024.105966
Child Abuse & Neglect Jun 2024
BACKGROUND
Previous studies on maternal parenting styles and children's callous-unemotional behavior (CU behavior) have focused on the West, and few studies have examined the longitudinal relationship between maternal parenting styles and CU behavior using Chinese preschoolers as subjects.
OBJECTIVE
Through a 1.5-year longitudinal lens, this study probed the relations between maternal parenting styles and CU behavior in the Chinese cultural setting.
PARTICIPANTS
Participants were N = 492 Chinese young children (mean age = 52.44 months, SD = 5.00, 48% girls).
METHODS
At Time 1 (T1), mothers reported their use of authoritative parenting styles (i.e., warmth, reasoning, and autonomy), authoritarian parenting styles (i.e., physical coercion, verbal hostility, and nonreasoning) and children's CU behavior. At Time 2 (T2; approximately 1.5 years later), mothers again reported the above variables.
RESULTS
Cross-lagged models indicated that maternal warmth, reasoning, autonomy, and nonreasoning at T1 predicted CU behavior at T2. However, not only did maternal physical coercion and verbal hostility at T1 predict CU behavior at T2, but CU behavior at T1 also predicted maternal physical coercion and verbal hostility at T2. Additionally, there were no gender differences in the relationship between dimensions of maternal parenting styles and CU behavior.
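The cross-lagged logic above can be sketched numerically: each T2 variable is regressed on both T1 variables, so the off-diagonal coefficients estimate the directional paths. Below is a minimal sketch on simulated data; the variable names, effect sizes, and use of plain least squares are illustrative assumptions, not the authors' model, which would typically be fit as a structural equation model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 492  # sample size matching the study

# Simulated two-wave data: maternal warmth and child CU behavior at T1/T2.
warmth_t1 = rng.normal(size=n)
cu_t1 = -0.3 * warmth_t1 + rng.normal(size=n)
warmth_t2 = 0.6 * warmth_t1 + rng.normal(scale=0.5, size=n)
cu_t2 = 0.5 * cu_t1 - 0.2 * warmth_t1 + rng.normal(scale=0.5, size=n)

def ols(y, predictors):
    """Least-squares coefficients with an intercept prepended."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [intercept, b1, b2]

# Cross-lagged panel: regress each T2 variable on both T1 variables.
b_cu = ols(cu_t2, [cu_t1, warmth_t1])          # [icpt, stability, warmth->CU]
b_warmth = ols(warmth_t2, [warmth_t1, cu_t1])  # [icpt, stability, CU->warmth]

warmth_to_cu = b_cu[2]       # simulated as negative (warmth lowers later CU)
cu_to_warmth = b_warmth[2]   # simulated as absent (~0)
```

The asymmetry the study reports for physical coercion and verbal hostility would show up here as both off-diagonal coefficients being nonzero.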
CONCLUSIONS
These findings underscore the potential of authoritative parenting to mitigate CU behavior, whereas authoritarian approaches may exacerbate it. The absence of gender differences suggests these dynamics apply broadly across genders. The results have significant implications for parenting strategies aimed at addressing CU behavior in children, emphasizing the need for warmth, reasoning, and autonomy in parenting practices.
PubMed: 38850750
DOI: 10.1016/j.chiabu.2024.106865
Scientific Reports Jun 2024
In human-computer interaction systems, speech emotion recognition (SER) plays a crucial role because it enables computers to understand and react to users' emotions. In the past, SER has relied heavily on acoustic properties extracted from speech signals. Recent developments in deep learning and computer vision, however, have made it possible to use visual representations to enhance SER performance. This work proposes a novel method for improving speech emotion recognition using a lightweight Vision Transformer (ViT) model. We leverage the ViT model's ability to capture spatial dependencies and high-level features, which are adequate indicators of emotional states, from mel spectrograms fed into the model as images. To determine the efficiency of the proposed approach, we conduct a comprehensive experiment on two benchmark speech emotion datasets, the Toronto Emotional Speech Set (TESS) and the Berlin Emotional Database (EMODB). The results demonstrate a considerable improvement in speech emotion recognition accuracy, with 98% on TESS, 91% on EMODB, and 93% on the combined TESS-EMODB set, attesting to the approach's generalizability. A comparative experiment shows that the non-overlapping patch-based feature extraction method substantially improves on other state-of-the-art techniques. Our research indicates the potential of integrating vision transformer models into SER systems, opening up fresh opportunities for real-world applications requiring accurate emotion recognition from speech.
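The non-overlapping patch-based feature extraction credited above can be sketched independently of any particular ViT implementation: the mel spectrogram is tiled into fixed-size patches, and each patch is flattened into a token vector before linear projection and attention. A minimal numpy sketch follows; the patch size and spectrogram dimensions are illustrative assumptions:

```python
import numpy as np

def extract_patches(spec, patch=16):
    """Tile a (freq, time) mel spectrogram into non-overlapping
    patch x patch squares and flatten each into a token vector,
    mirroring a ViT patch-embedding stage before linear projection."""
    f, t = spec.shape
    f_trim, t_trim = f - f % patch, t - t % patch
    spec = spec[:f_trim, :t_trim]                 # drop ragged edges
    tiles = spec.reshape(f_trim // patch, patch, t_trim // patch, patch)
    tiles = tiles.transpose(0, 2, 1, 3)           # (nf, nt, patch, patch)
    return tiles.reshape(-1, patch * patch)       # (num_tokens, patch**2)

# A 128-band, 100-frame spectrogram yields 8 x 6 = 48 tokens of length 256
# (the trailing 4 frames that do not fill a patch are discarded).
spec = np.random.default_rng(1).random((128, 100))
tokens = extract_patches(spec, patch=16)
```

Each row of `tokens` would then be linearly projected to the model dimension and fed to the transformer encoder.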
Topics: Humans; Emotions; Speech; Deep Learning; Speech Recognition Software; Databases, Factual; Algorithms
PubMed: 38849422
DOI: 10.1038/s41598-024-63776-4
Journal of Alzheimer's Disease: JAD 2024
BACKGROUND
Dementia is a general term for several progressive neurodegenerative disorders including Alzheimer's disease. Timely and accurate detection is crucial for early intervention. Advancements in artificial intelligence present significant potential for using machine learning to aid in early detection.
OBJECTIVE
To summarize state-of-the-art machine learning-based approaches for dementia prediction, focusing on non-invasive methods, which place a lower burden on patients. Specifically, analysis of gait and speech performance can offer insight into cognitive health through clinically cost-effective screening methods.
METHODS
A systematic literature review was conducted following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) protocol. The search was performed on three electronic databases (Scopus, Web of Science, and PubMed) to identify relevant studies published between 2017 and 2022. A total of 40 papers were selected for review.
RESULTS
The most common machine learning methods employed were support vector machines, followed by deep learning. Studies suggested the use of multimodal approaches, as they can provide comprehensive information and better prediction performance. The application of deep learning in gait studies is still in its early stages, as few studies have applied it. Moreover, including features of whole-body movement contributes to better classification accuracy. Regarding speech studies, the combination of different parameters (acoustic, linguistic, cognitive testing) produced better results.
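The multimodal idea, fusing gait and speech features before classification, can be illustrated with a toy sketch. Everything here is assumed for illustration: the features are synthetic, and a nearest-centroid classifier stands in for the SVMs the review found most common:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic multimodal features: 3 gait measures (e.g., stride-time
# statistics) and 4 speech measures (e.g., pause rate); all names and
# effect sizes are invented for illustration.
n = 50
gait_ctrl = rng.normal(0.0, 1.0, size=(n, 3))
gait_dem = rng.normal(1.5, 1.0, size=(n, 3))
speech_ctrl = rng.normal(0.0, 1.0, size=(n, 4))
speech_dem = rng.normal(1.5, 1.0, size=(n, 4))

# Multimodal fusion by simple feature concatenation.
X = np.vstack([np.hstack([gait_ctrl, speech_ctrl]),
               np.hstack([gait_dem, speech_dem])])
y = np.array([0] * n + [1] * n)   # 0 = control, 1 = dementia

def nearest_centroid_predict(X_train, y_train, X_test):
    """Classify by distance to per-class mean feature vectors."""
    centroids = np.stack([X_train[y_train == c].mean(axis=0)
                          for c in (0, 1)])
    dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :],
                           axis=2)
    return dists.argmin(axis=1)

pred = nearest_centroid_predict(X, y, X)   # in-sample, for illustration
accuracy = float((pred == y).mean())
```

The fusion step is the point: concatenating both modalities gives the classifier more separating dimensions than either modality alone, which is the intuition behind the multimodal recommendation.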
CONCLUSIONS
The review highlights the potential of machine learning, particularly non-invasive approaches, in the early prediction of dementia. The comparable prediction accuracies of manual and automatic speech analysis suggest that a fully automated approach to dementia detection is within reach.
Topics: Humans; Machine Learning; Dementia; Speech; Gait Analysis
PubMed: 38848181
DOI: 10.3233/JAD-231459
Nature Communications Jun 2024
Humans produce two forms of cognitively complex vocalizations: speech and song. It is debated whether these differ based primarily on culturally specific, learned features, or if acoustical features can reliably distinguish them. We study the spectro-temporal modulation patterns of vocalizations produced by 369 people living in 21 urban, rural, and small-scale societies across six continents. Specific ranges of spectral and temporal modulations, overlapping within categories and across societies, significantly differentiate speech from song. Machine-learning classification shows that this effect is cross-culturally robust, vocalizations being reliably classified solely from their spectro-temporal features across all 21 societies. Listeners unfamiliar with the cultures classify these vocalizations using similar spectro-temporal cues as the machine learning algorithm. Finally, spectro-temporal features are better able to discriminate song from speech than a broad range of other acoustical variables, suggesting that spectro-temporal modulation-a key feature of auditory neuronal tuning-accounts for a fundamental difference between these categories.
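The spectro-temporal modulation features described above are commonly computed as the 2D Fourier transform of a spectrogram. A minimal sketch follows; the toy signal and frame rate are assumptions, and published pipelines typically use a cochlear-model filterbank rather than a raw spectrogram:

```python
import numpy as np

def modulation_spectrum(spectrogram):
    """Spectro-temporal modulation power: magnitude of the 2D FFT of a
    (frequency x time) spectrogram, with the mean removed. One axis of
    the result indexes spectral modulation, the other temporal
    modulation (e.g., Hz of amplitude modulation)."""
    centered = spectrogram - spectrogram.mean()
    return np.abs(np.fft.fftshift(np.fft.fft2(centered)))

# Toy check: a 4 Hz amplitude modulation (a speech-like syllable rate)
# concentrates energy at the matching temporal-modulation bin.
frame_rate = 100                     # spectrogram frames per second
t = np.arange(200) / frame_rate      # 2 seconds of frames
spec = np.outer(np.ones(64), 1.0 + np.sin(2 * np.pi * 4 * t))
ms = modulation_spectrum(spec)
temporal_profile = ms.sum(axis=0)    # collapse over spectral modulation
peak_bin = int(np.argmax(temporal_profile))
# With 200 frames, +/-4 Hz falls 8 bins either side of center (index 100).
```

Speech tends to concentrate energy at fast temporal modulations (around the syllable rate), while song concentrates energy at slower temporal and finer spectral modulations, which is the kind of separation the classifier exploits.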
Topics: Humans; Speech; Male; Female; Machine Learning; Adult; Acoustics; Cross-Cultural Comparison; Auditory Perception; Sound Spectrography; Singing; Music; Middle Aged; Young Adult
PubMed: 38844457
DOI: 10.1038/s41467-024-49040-3
PloS One 2024
Dementia can disrupt how people experience and describe events as well as their own role in them. Alzheimer's disease (AD) compromises the processing of entities expressed by nouns, while behavioral variant frontotemporal dementia (bvFTD) entails a depersonalized perspective with increased third-person references. Yet, no study has examined whether these patterns can be captured in connected speech via natural language processing tools. To tackle such gaps, we asked 96 participants (32 AD patients, 32 bvFTD patients, 32 healthy controls [HCs]) to narrate a typical day of their lives and calculated the proportion of nouns, verbs, and first- or third-person markers (via part-of-speech and morphological tagging). We also extracted objective properties (frequency, phonological neighborhood, length, semantic variability) from each content word. In our main study (with 21 AD patients, 21 bvFTD patients, and 21 healthy controls), we used inferential statistics and machine learning for group-level and subject-level discrimination. The above linguistic features were correlated with patients' scores in tests of general cognitive status and executive functions. We found that, compared with HCs, (i) AD (but not bvFTD) patients produced significantly fewer nouns, (ii) bvFTD (but not AD) patients used significantly more third-person markers, and (iii) both patient groups produced more frequent words. Machine learning analyses showed that these features identified individuals with AD and bvFTD (AUC = 0.71). A generalizability test, with a model trained on the entire main study sample and tested on hold-out samples (11 AD patients, 11 bvFTD patients, 11 healthy controls), showed even better performance, with AUCs of 0.76 and 0.83 for AD and bvFTD, respectively. No linguistic feature was significantly correlated with cognitive test scores in either patient group.
These results suggest that specific cognitive traits of each disorder can be captured automatically in connected speech, favoring interpretability for enhanced syndrome characterization, diagnosis, and monitoring.
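The feature pipeline described above, proportions of part-of-speech categories and person markers, scored with AUC, can be sketched in a few lines. The tag set, the pronoun list, and the toy transcript are assumptions for illustration; the study's actual taggers and classifiers are not reproduced here:

```python
def pos_proportions(tagged_tokens):
    """Proportion of nouns and of third-person pronoun markers in a
    transcript. `tagged_tokens` is a list of (token, tag) pairs from any
    part-of-speech tagger; Universal POS tags and an English pronoun
    list are assumed here for illustration."""
    third_person = {"he", "she", "it", "they", "him", "her", "them"}
    n = len(tagged_tokens)
    nouns = sum(1 for _, tag in tagged_tokens if tag == "NOUN")
    third = sum(1 for tok, tag in tagged_tokens
                if tag == "PRON" and tok.lower() in third_person)
    return nouns / n, third / n

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney formulation: the probability that a
    positive case outranks a negative one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Toy pre-tagged fragment of a "typical day" narrative.
tagged = [("she", "PRON"), ("makes", "VERB"), ("coffee", "NOUN"),
          ("every", "DET"), ("morning", "NOUN")]
noun_prop, third_prop = pos_proportions(tagged)
```

In a full pipeline, each participant's transcript would yield one feature vector of such proportions, and `roc_auc` would score a classifier's per-subject probabilities against diagnostic labels.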
Topics: Humans; Frontotemporal Dementia; Alzheimer Disease; Female; Male; Aged; Speech; Middle Aged; Case-Control Studies; Biomarkers; Natural Language Processing; Machine Learning; Neuropsychological Tests; Executive Function
PubMed: 38843210
DOI: 10.1371/journal.pone.0304272
Journal of the Experimental Analysis of... Jul 2024
The current study examined 98 participants' preferences for five pictorial stimuli. The researchers used a verbal multiple-stimulus-without-replacement (VMSWO) preference assessment with each participant to identify high-preference and low-preference pictorial stimuli. Next, participants viewed each pictorial stimulus in a randomized order on a computer while using a hand dynamometer that measured the amount of force they exerted to increase or maintain the visual clarity of each image. The results indicate that over 75% of participants' force response ranks corresponded with participants' VMSWO high-preference stimuli, VMSWO low-preference stimuli, or both. The results of the current study provide further evidence for the use of conjugate schedules in the assessment of stimulus preference with potential for use as a reinforcer assessment. Implications along with directions for future research and limitations of the findings are discussed.
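The conjugate arrangement described, where visual clarity varies moment to moment with grip force, can be sketched as a simple update rule. All parameter values (threshold, gain, decay) below are hypothetical, chosen only to illustrate the contingency, not taken from the study:

```python
def clarity_step(force_newtons, current_clarity,
                 threshold=5.0, gain=0.02, decay=0.05):
    """One update of a conjugate schedule: image clarity (0-1) rises in
    proportion to force above a threshold and decays otherwise, so
    sustained squeezing is needed to keep the picture visible. All
    parameter values are hypothetical, not taken from the study."""
    if force_newtons > threshold:
        current_clarity += gain * (force_newtons - threshold)
    else:
        current_clarity -= decay
    return min(1.0, max(0.0, current_clarity))

# Squeezing hard drives clarity to full; releasing lets it fade.
clarity = 0.0
for _ in range(10):
    clarity = clarity_step(30.0, clarity)   # strong, sustained grip
faded = clarity_step(0.0, 1.0)              # grip released once
```

The defining property of a conjugate schedule, visible in the rule above, is that reinforcement magnitude is a continuous function of response intensity rather than a discrete consequence of a discrete response.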
Topics: Humans; Male; Female; Choice Behavior; Young Adult; Reinforcement, Psychology; Adult; Photic Stimulation; Adolescent; Reinforcement Schedule; Psychomotor Performance
PubMed: 38837371
DOI: 10.1002/jeab.926
Low testosterone levels relate to poorer cognitive function in women in an APOE-ε4-dependant manner. Biology of Sex Differences Jun 2024
BACKGROUND
Past research suggests that low testosterone levels relate to poorer cognitive function and higher Alzheimer's disease (AD) risk; however, these findings are inconsistent and are mostly derived from male samples, despite similar age-related testosterone decline in females. Both animal and human studies demonstrate that testosterone's effects on brain health may be moderated by apolipoprotein E ε4 allele (APOE-ε4) carrier status, which may explain some previous inconsistencies. We examined how testosterone relates to cognitive function in older women versus men across healthy aging and the AD continuum and the moderating role of APOE-ε4 genotype.
METHODS
Five hundred sixty-one participants aged 55-90 (155 cognitively normal (CN), 294 with mild cognitive impairment (MCI), 112 with AD dementia) from the Alzheimer's Disease Neuroimaging Initiative (ADNI) who had baseline cognitive and plasma testosterone data, measured by the Rules Based Medicine Human DiscoveryMAP Panel, were included. There were 213 females and 348 males (self-reported sex assigned at birth), and 52% of the overall sample were APOE-ε4 carriers. We tested the relationship of plasma testosterone levels and its interaction with APOE-ε4 status on clinical diagnostic group (CN vs. MCI vs. AD), global, and domain-specific cognitive performance using ANOVAs and linear regression models in sex-stratified samples. Cognitive domains included verbal memory, executive function, processing speed, and language.
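The testosterone by APOE-ε4 interaction test in a sex-stratified sample can be sketched as an ordinary least-squares model with a product term. The simulated effect sizes below are assumptions chosen to mirror the reported pattern, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 213  # female subsample size reported in the study

testosterone = rng.normal(0.0, 1.0, size=n)   # standardized plasma level
apoe4 = rng.integers(0, 2, size=n)            # 1 = epsilon-4 carrier
# Simulate the reported pattern: testosterone relates to cognition
# only among carriers (effect size chosen arbitrarily).
cognition = 0.8 * testosterone * apoe4 + rng.normal(0.0, 0.5, size=n)

# Design matrix: intercept, testosterone, carrier status, interaction.
X = np.column_stack([np.ones(n), testosterone, apoe4,
                     testosterone * apoe4])
beta, *_ = np.linalg.lstsq(X, cognition, rcond=None)
interaction_coef = beta[3]  # nonzero => slope differs by carrier status
```

A significant positive interaction coefficient in the female stratum corresponds to the reported finding that lower testosterone relates to worse cognition only in ε4 carriers.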
RESULTS
We did not observe a significant difference in testosterone levels between clinical diagnostic groups in either sex, regardless of APOE-ε4 status. Across clinical diagnostic groups, we found a significant testosterone by APOE-ε4 interaction in females, such that lower testosterone levels related to worse global cognition, processing speed, and verbal memory in APOE-ε4 carriers only. Neither testosterone nor its interaction with APOE-ε4 related to cognitive outcomes in males.
CONCLUSIONS
Findings suggest that low testosterone levels in older female APOE-ε4 carriers across the aging-MCI-AD continuum may have deleterious, domain-specific effects on cognitive performance. Although future studies including additional sex hormones and longitudinal cognitive trajectories are needed, our results highlight the importance of including both sexes and considering APOE-ε4 carrier status when examining testosterone's role in cognitive health.
Topics: Aged; Aged, 80 and over; Female; Humans; Male; Middle Aged; Alzheimer Disease; Apolipoprotein E4; Cognition; Cognitive Dysfunction; Sex Characteristics; Testosterone
PubMed: 38835072
DOI: 10.1186/s13293-024-00620-4
Frontiers in Psychiatry 2024
Transgressive incidents directed at staff by forensic patients occur frequently, causing detrimental psychological and physical harm and underscoring the urgency of preventive measures. These incidents, which emerge within therapeutic relationships, involve complex interactions between patient and staff behavior. This study aimed to identify clusters of transgressive incidents based on incident characteristics such as impact, severity, (presumed) cause, type of aggression, and consequences, using latent class analysis (LCA). Additionally, variations in incident clusters based on staff, patient, and context characteristics were investigated. A total of 1,184 transgressive incidents targeting staff, reported by staff between 2018 and 2022, were extracted from a digital incident reporting system at Fivoor, a Dutch forensic psychiatric healthcare organisation. Latent class analysis revealed six incident classes. Significant differences in age and gender of both staff and patients, staff function, and patient diagnoses were observed among these classes. Incidents with higher impact were more prevalent in high security clinics, while lower-impact incidents were more common in clinics for patients with intellectual disabilities. Despite limitations such as missing information, the findings indicate that tailored prevention approaches are needed, given the varying types of transgressive incidents across patients, staff, and units.
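Latent class analysis over categorical incident characteristics can be sketched as an expectation-maximization routine under local independence. This is a generic textbook LCA for binary indicators, not the authors' implementation, and the toy incident profiles are invented for illustration:

```python
import numpy as np

def lca_em(X, n_classes, n_iter=200, seed=0):
    """Latent class analysis for binary indicators via EM under local
    independence. X is an (n_obs, n_items) 0/1 matrix of incident
    characteristics (e.g., 'physical aggression present', 'high impact').
    Returns class weights, per-class item probabilities, posteriors."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)              # class weights
    theta = rng.uniform(0.25, 0.75, size=(n_classes, m))  # P(item=1|class)
    for _ in range(n_iter):
        # E-step: posterior P(class | observation).
        log_lik = (X @ np.log(theta).T
                   + (1 - X) @ np.log(1 - theta).T
                   + np.log(pi))
        log_lik -= log_lik.max(axis=1, keepdims=True)
        post = np.exp(log_lik)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights and item probabilities.
        pi = post.mean(axis=0)
        theta = np.clip((post.T @ X) / post.sum(axis=0)[:, None],
                        1e-6, 1 - 1e-6)
    return pi, theta, post

# Toy data: two well-separated incident profiles, 100 incidents each.
rng = np.random.default_rng(4)
profile_a = (rng.random((100, 5)) < 0.9).astype(float)
profile_b = (rng.random((100, 5)) < 0.1).astype(float)
X = np.vstack([profile_a, profile_b])
pi, theta, post = lca_em(X, n_classes=2)
labels = post.argmax(axis=1)
```

In practice, the number of classes (six in the study) is chosen by comparing fit criteria such as BIC across candidate models, and incidents are assigned to their highest-posterior class for follow-up comparisons.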
PubMed: 38832326
DOI: 10.3389/fpsyt.2024.1394535
Trends in Hearing 2024
The extent to which active noise cancelation (ANC), when combined with hearing assistance, can improve speech intelligibility in noise is not well understood. One possible source of benefit is ANC's ability to reduce the sound level of the direct (i.e., vent-transmitted) path. This reduction lowers the "floor" imposed by the direct path, thereby allowing any increases to the signal-to-noise ratio (SNR) created in the amplified path to be "realized" at the eardrum. Here we used a modeling approach to estimate this benefit. We compared pairs of simulated hearing aids that differ only in terms of their ability to provide ANC and computed intelligibility metrics on their outputs. The difference in metric scores between simulated devices is termed the "ANC Benefit." These simulations show that ANC Benefit increases as (1) the environmental sound level increases, (2) the ability of the hearing aid to improve SNR increases, (3) the strength of the ANC increases, and (4) the hearing loss severity decreases. The predicted size of the ANC Benefit can be substantial. For a moderate hearing loss, the model predicts improvement in intelligibility metrics of >30% when environments are moderately loud (>70 dB SPL) and devices are moderately capable of increasing SNR (by >4 dB). It appears that ANC can be a critical ingredient in hearing devices that attempt to improve SNR in loud environments. ANC will become more and more important as advanced SNR-improving algorithms (e.g., artificial intelligence speech enhancement) are included in hearing devices.
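The direct-path "floor" argument above can be sketched with a toy power-summation model: speech and noise reach the eardrum via both paths, and ANC attenuates only the direct one, so the amplified path's SNR improvement shows through. All dB values and the model form below are illustrative assumptions, not the authors' simulation:

```python
import numpy as np

def db_to_power(db):
    """Convert a dB level to linear power."""
    return 10.0 ** (db / 10.0)

def eardrum_snr(env_speech_db, env_noise_db, aid_gain_db,
                snr_improve_db, anc_atten_db):
    """SNR at the eardrum when speech and noise arrive via a direct
    (vent-transmitted) path and an amplified path, with powers summing.
    ANC attenuates only the direct path, so the amplified path's SNR
    improvement is 'realized' at the eardrum."""
    s_direct = db_to_power(env_speech_db - anc_atten_db)
    n_direct = db_to_power(env_noise_db - anc_atten_db)
    s_amp = db_to_power(env_speech_db + aid_gain_db)
    n_amp = db_to_power(env_noise_db + aid_gain_db - snr_improve_db)
    return 10.0 * np.log10((s_direct + s_amp) / (n_direct + n_amp))

# ANC Benefit = eardrum SNR with ANC minus without, all else fixed.
with_anc = eardrum_snr(70, 70, aid_gain_db=10, snr_improve_db=6,
                       anc_atten_db=20)
without_anc = eardrum_snr(70, 70, aid_gain_db=10, snr_improve_db=6,
                          anc_atten_db=0)
anc_benefit = with_anc - without_anc   # positive: ANC helps
```

Consistent with the trends the abstract lists, the benefit in this toy model grows as the environment gets louder (the direct-path noise floor matters more) and as `snr_improve_db` increases (there is more improvement to realize), and it can never exceed the SNR improvement of the amplified path.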
Topics: Humans; Hearing Aids; Signal-To-Noise Ratio; Speech Intelligibility; Noise; Perceptual Masking; Speech Perception; Computer Simulation; Acoustic Stimulation; Correction of Hearing Impairment; Persons With Hearing Impairments; Hearing Loss; Equipment Design; Signal Processing, Computer-Assisted
PubMed: 38831646
DOI: 10.1177/23312165241260029