Frontiers in Aging Neuroscience 2024
BACKGROUND
Amid the backdrop of global aging, the increasing prevalence of cognitive decline among the elderly, particularly within the female demographic, represents a considerable public health concern. Physical activity (PA) is recognized as an effective non-pharmacological intervention for mitigating cognitive decline in older adults. However, the relationship between different PA patterns and cognitive function (CF) in elderly women remains unclear.
METHODS
This study utilized data from the National Health and Nutrition Examination Survey (NHANES) 2011-2014 to investigate the associations of PA, PA patterns [inactive, Weekend Warrior (WW), and Regular Exercise (RE)], and PA intensity with CF in elderly women. Multivariate regression analysis served as the primary analytical method.
RESULTS
There was a significant positive correlation between PA and CF among elderly women (β-PA: 0.003, 95% CI: 0.000-0.006, p = 0.03143). Additionally, the WW and RE activity patterns were associated with markedly better cognitive performance compared to the inactive group (β-WW: 0.451, 95% CI: 0.216-0.685, p = 0.00017; β-RE: 0.153, 95% CI: 0.085-0.221, p = 0.00001). Furthermore, our results indicate a progressive increase in CF with increasing PA intensity (β-MPA-dominated: 0.16, 95% CI: 0.02-0.09, p = 0.0208; β-VPA-dominated: 0.21, 95% CI: 0.09-0.34, p = 0.0011; β-Total VPA: 0.31, 95% CI: -0.01 to 0.63, p = 0.0566).
CONCLUSION
Our study confirms a positive association between PA and CF in elderly women, with even intermittent but intensive PA models like WW being correlated with improved CF. These findings underscore the significant role that varying intensities and patterns of PA play in promoting cognitive health among older age groups, highlighting the need for adaptable PA strategies in public health initiatives targeting this population.
PubMed: 38934018
DOI: 10.3389/fnagi.2024.1407423
Therapeutics and Clinical Risk Management 2024
BACKGROUND
Incorporating unfamiliar therapies into practice requires effective longitudinal learning, and the optimal way to achieve this is debated. Though not a novel therapy, ketamine in critical care has a paucity of data and variable acceptance, with limited research describing intensivist perceptions and utilization. The COVID-19 pandemic presented a particular crisis in which providers rapidly adapted analgosedation strategies to achieve prolonged, deep sedation due to a surge of severe acute respiratory distress syndrome (ARDS).
QUESTION
How does clinical experience with ketamine impact the perception and attitude of clinicians toward this therapy?
METHODS
We conducted a mixed-methods study using quantitative ketamine prescription data and qualitative focus group data. We analyzed prescription patterns of ketamine in a tertiary academic ICU during two different time points: pre-COVID-19 (March 1-June 30, 2019) and during the COVID-19 surge (March 1-June 30, 2020). Two focus groups (FG) of critical care attendings were held, and data were analyzed using the Framework Method for content analysis.
RESULTS
Four hundred forty-six medical ICU patients were mechanically ventilated (195 pre-COVID-19 and 251 during COVID-19). The COVID-19 population was more likely to receive ketamine (81 [32.3%] vs 4 [2.1%], p < 0.001). Thirteen respondents participated across two FG sessions (pre-COVID = 8, post-COVID = 5). The most prevalent attitude among our respondents was discomfort, with three key themes identified: 1) lack of evidence regarding ketamine, 2) lack of personal experience, and 3) desire for more education and protocols.
CONCLUSION
Despite a substantial increase in ketamine prescription during COVID-19, intensivists continued to feel discomfort with utilization. Factors contributing to this discomfort include a lack of evidence, a lack of experience, and a desire for more education and protocols. Increase in experience with ketamine alone was not sufficient to minimize provider discomfort. These findings should inform future curricula and call for process improvement to optimize continuing education.
PubMed: 38934016
DOI: 10.2147/TCRM.S462760
Frontiers in Psychology 2023
The cognitive distortion scale (CDS) is a self-rated measure of the degree of cognitive distortion, covering 10 thinking errors commonly seen in depression. However, no scale measuring these 10 types of cognitive distortions specific to depression existed in Japan. Therefore, this study translated the CDS into Japanese (CDS-J) and examined its factor structure, validity, and reliability in a Japanese population. A total of 237 healthy individuals and 39 individuals with depression participated in this study. Confirmatory factor analysis supported the appropriateness of the CDS-J's 10-factor structure. Regarding convergent validity, the CDS-J was significantly correlated with dysfunctional attitudes, negative automatic thoughts, and depression. Regarding discriminant validity, the CDS-J showed no significant correlation with positive automatic thoughts. The total CDS-J scores of the healthy participants and of those with major depression were compared, and the results showed significant differences between the groups. Finally, the CDS-J was found to have high test-retest reliability. Therefore, the CDS-J is a valid and reliable tool for assessing cognitive distortions in Japan.
PubMed: 38933743
DOI: 10.3389/fpsyg.2023.1261166
Trauma Surgery & Acute Care Open 2024
BACKGROUND
The decision to undertake a surgical intervention for an emergency general surgery (EGS) condition (appendicitis, diverticulitis, cholecystitis, hernia, peptic ulcer, bowel obstruction, ischemic bowel) involves a complex consideration of factors, particularly in older adults. We hypothesized that identifying variability in the application of operative management could highlight a potential pathway to improve patient survival and outcomes.
METHODS
We included adults aged 65+ years with an EGS condition from the 2016-2017 National Inpatient Sample. Operative management was determined from procedure codes. Each patient was assigned a propensity score (PS) for the likelihood of undergoing an operation, modeled from patient and hospital factors: EGS diagnosis, age, gender, race, presence of shock, comorbidities, and hospital EGS volumes. Low and high probability for surgery were defined using a PS cut-off of 0.5. We identified two model-concordant groups (no surgery-low probability, surgery-high probability) and two model-discordant groups (no surgery-high probability, surgery-low probability). Logistic regression estimated the adjusted OR (AOR) of in-hospital mortality for each group.
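The concordance grouping described above can be sketched as a small classifier over the fitted propensity score; the 0.5 cut-off and group labels come from the abstract, while the function itself is an illustration, not the authors' code.

```python
def classify_concordance(propensity, operated, cutoff=0.5):
    """Assign a patient to a model-concordant or model-discordant group.

    propensity: fitted probability of undergoing an operation (0-1)
    operated:   whether the patient actually had surgery
    cutoff:     probability threshold used in the study (0.5)
    """
    high = propensity >= cutoff
    if operated and high:
        return "surgery-high probability"      # model-concordant
    if not operated and not high:
        return "no surgery-low probability"    # model-concordant
    if operated and not high:
        return "surgery-low probability"       # model-discordant
    return "no surgery-high probability"       # model-discordant
```

Each of the four labels then becomes an indicator in the mortality regression.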
RESULTS
Of 375 546 admissions, 21.2% underwent surgery. Model-discordant care occurred in 14.6%; 5.9% had no surgery despite a high PS and 8.7% received surgery with low PS. In the adjusted regression, model-discordant care was associated with significantly increased mortality: no surgery-high probability AOR 2.06 (1.86 to 2.27), surgery-low probability AOR 1.57 (1.49 to 1.65). Model-concordant care showed a protective effect against mortality (AOR 0.83, 0.74 to 0.92).
CONCLUSIONS
Nearly one in seven EGS patients received model-discordant care, which was associated with higher mortality. Our study suggests that streamlined treatment protocols can be applied in EGS patients as a means to save lives.
LEVEL OF EVIDENCE
III.
PubMed: 38933602
DOI: 10.1136/tsaco-2023-001288
Frontiers in Neurology 2024
BACKGROUND
For children who are unable to cooperate due to severe dental anxiety (DA), dental treatment of childhood caries under Dental General Anesthesia (DGA) is a safe and high-quality treatment method. This study aims to evaluate the impact on neurocognitive functions and on the growth and development of children 2 years after the dental procedure, building on previous research, and to further establish a causal relationship between general anesthesia (GA) and changes in children's neurocognitive functions by incorporating Mendelian Randomization (MR) analysis.
METHODS
Data were collected and analyzed from 340 severe early childhood caries (S-ECC) procedures in preschool children conducted in 2019. This involved comparing the neurocognitive outcomes of preschool children 2 years after receiving dental procedures under general anesthesia or local anesthesia. Physical development indicators such as height, weight, and body mass index (BMI) were also compared at baseline, half a year post-operation, and 2 years post-operation. We performed a Mendelian randomization analysis of the causal relationship between children's cognitive development and general anesthesia, drawing on a large-scale meta-analysis of GWAS for anesthesia, including multiple general anesthesia datasets.
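The inverse-variance weighted (IVW) method referenced in the results pools per-variant Wald ratios, weighting each by its inverse squared standard error. A minimal fixed-effect sketch of that estimator follows; it is illustrative only, as the authors would typically use a dedicated MR package rather than this hand-rolled version.

```python
import math

def ivw_estimate(betas, ses):
    """Fixed-effect inverse-variance weighted (IVW) causal estimate.

    betas: per-variant Wald ratios (SNP-outcome effect / SNP-exposure effect)
    ses:   standard errors of those ratios
    Returns (pooled beta, pooled standard error).
    """
    weights = [1.0 / se ** 2 for se in ses]
    beta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return beta, se
```

The pooled beta is exponentiated to report an odds ratio, as in the abstract's OR = 1.04.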
RESULTS
Outcome data were obtained for 111 children in the general anesthesia group and 121 children in the local anesthesia group. The mean FSIQ score was 106.77 (SD 6.96) in the general anesthesia group and 106.36 (SD 5.88) in the local anesthesia group; FSIQ scores were equivalent between the two groups. The incidence of malnutrition in the general anesthesia group was 27.93% (p < 0.001) before surgery and decreased to 15.32% (p > 0.05) after 2 years, no longer differing from the general population. The IVW method suggested that the causal estimate (p = 0.99 > 0.05, OR = 1.04, 95% CI = 5.98 × 10-1.82 × 10) was not statistically significant for disease prevalence. This indicates no genetic cause-and-effect relationship between anesthesia and childhood intelligence.
CONCLUSION
There were no adverse neurocognitive developmental outcomes 2 years after the severe early childhood caries (S-ECC) procedure under total sevoflurane inhalation anesthesia in preschool children. Malnutrition in children can improve after an S-ECC procedure under general anesthesia. Limited MR evidence does not support an association between genetic susceptibility to anesthesia and an increased risk to intelligence in children.
PubMed: 38933327
DOI: 10.3389/fneur.2024.1389203
Frontiers in Immunology 2024
OBJECTIVE
Programmed cell death protein-1 (PD-1) inhibitor-based therapy has demonstrated promising results in metastatic gastric cancer (MGC). However, previous research consists mostly of clinical trials that have reached varying conclusions. Our objective is to investigate the efficacy of PD-1 inhibitor-based treatment as first-line therapy for MGC, utilizing real-world data from China, and to further analyze predictive biomarkers for efficacy.
METHODS
This retrospective study comprised 105 patients diagnosed with MGC who underwent various PD-1 inhibitor-based treatments as first-line therapy at West China Hospital of Sichuan University from January 2018 to December 2022. Patient characteristics, treatment regimens, and tumor responses were extracted. We also conducted univariate and multivariate analyses to assess the relationship between clinical features and treatment outcomes. Additionally, we evaluated the predictive efficacy of several commonly used biomarkers for PD-1 inhibitor treatments.
RESULTS
Overall, after 28.0 months of follow-up among the 105 patients included in our study, the objective response rate (ORR) was 30.5%, and the disease control rate (DCR) was 89.5% post-treatment, with two individuals (1.9%) achieving complete response (CR). The median progression-free survival (mPFS) was 9.0 months, and the median overall survival (mOS) was 22.0 months. According to both univariate and multivariate analyses, favorable OS was associated with patients having Eastern Cooperative Oncology Group performance status (ECOG PS) of 0-1. Additionally, normal baseline levels of carcinoembryonic antigen (CEA), as well as the combination of PD-1 inhibitors with chemotherapy and trastuzumab in patients with human epidermal growth factor receptor 2 (HER2)-positive MGC, independently predicted longer PFS and OS. However, microsatellite instability/mismatch repair (MSI/MMR) status and Epstein-Barr virus (EBV) infection status were not significantly correlated with PFS or OS extension.
CONCLUSION
As the first-line treatment, PD-1 inhibitors, either as monotherapy or in combination therapy, are promising to prolong survival for patients with metastatic gastric cancer. Additionally, baseline level of CEA is a potential predictive biomarker for identifying patients mostly responsive to PD-1 inhibitors.
Topics: Humans; Stomach Neoplasms; Male; Female; Retrospective Studies; Middle Aged; Aged; Immune Checkpoint Inhibitors; Programmed Cell Death 1 Receptor; Adult; China; Biomarkers, Tumor; Treatment Outcome; Neoplasm Metastasis; Antineoplastic Combined Chemotherapy Protocols; East Asian People
PubMed: 38933261
DOI: 10.3389/fimmu.2024.1370860
Frontiers in Medicine 2024
BACKGROUND
The coexistence of diabetes mellitus (DM) and pulmonary tuberculosis (PTB) poses a significant health concern globally, with their convergence presenting a considerable challenge to healthcare systems. Previous research has highlighted that comorbidities can mutually influence and exacerbate immune disorders. However, there is a paucity of data on the impact of DM on immunological features and treatment responses in the TB population in China.
METHODS
From January 2020 to June 2022, 264 patients with pulmonary tuberculosis (82 with DM and 182 without DM) hospitalized in our center were selected. Eighty patients with TB and DM (TB-DM) and 80 patients with TB without DM (TB-NDM) were enrolled in the final analysis by 1:1 propensity score matching for age, gender, and involved lung field. The clinical characteristics, immunological features, and treatment responses were compared between the two groups.
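As an illustration of the 1:1 matching step, a minimal greedy nearest-neighbour match on fitted propensity scores might look like the following. The caliper value, data layout, and function name are assumptions for the sketch, not details from the study, which fitted scores on age, gender, and involved lung field.

```python
def greedy_match(treated, controls, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on propensity scores.

    treated / controls: dicts mapping patient id -> propensity score.
    Each treated patient is paired with the closest unused control
    within the caliper; patients left unmatched are dropped.
    """
    pairs = []
    unused = dict(controls)
    # Process treated patients in order of propensity score for determinism.
    for tid, ps in sorted(treated.items(), key=lambda kv: kv[1]):
        if not unused:
            break
        cid = min(unused, key=lambda c: abs(unused[c] - ps))
        if abs(unused[cid] - ps) <= caliper:
            pairs.append((tid, cid))
            del unused[cid]   # each control is used at most once
    return pairs
```

Group comparisons are then run only on the matched pairs, which is what makes the post-matching covariates comparable.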
RESULTS
After propensity score matching, no differences in general features such as age, gender, involved lung field, incidence of retreatment, and WBC count were found between the two groups. Compared to the TB-NDM group, the TB-DM group exhibited a higher sputum smear positivity rate and incidence of cavitary lesions. Immunological analysis revealed that TB-DM patients had higher levels of TNF-α [pg/ml; 8.56 (7.08-13.35) vs. 7.64 (6.38-10.14), p = 0.033] and IL-8 [pg/ml; 25.85 (11.63-58.40) vs. 17.56 (6.44-39.08), p = 0.003] but a lower CD8+ T lymphocyte count [cells/mm3; 334.02 (249.35-420.71) vs. 380.95 (291.73-471.25), p = 0.038]. However, there was no significant difference in serum IL-6 concentration or CD4+ T lymphocyte count between the two groups. After 2 months of anti-tuberculosis treatment, 39 (24.4%) cases had a suboptimal treatment response, including 23 (28.7%) TB-DM patients and 16 (20%) TB-NDM patients; no difference in suboptimal response rate (SRR) was found between the two groups (p = 0.269). Multivariate logistic regression analysis indicated that retreatment for TB [AOR: 5.68 (95% CI: 2.01-16.08), p = 0.001] and sputum smear positivity [AOR: 8.01 (95% CI: 2.62-24.50), p = 0.001] were associated with SRR in all participants, whereas in the TB-DM group only sputum smear positivity [AOR: 16.47 (95% CI: 1.75-155.12), p = 0.014] was positively associated with SRR.
CONCLUSION
DM is a risk factor for pulmonary cavity formation and sputum smear positivity in the TB population. Additionally, TB-DM patients are characterized by enhanced cytokine responses and decreased CD8+ T lymphocytes. Retreatment for TB and sputum smear positivity were associated with the occurrence of a suboptimal treatment response.
PubMed: 38933114
DOI: 10.3389/fmed.2024.1386124
Frontiers in Medicine 2024
OBJECTIVE
The objective of this research was to create a machine learning predictive model that could be easily interpreted in order to precisely determine the risk of premature death in patients receiving intensive care after pulmonary inflammation.
METHODS
In this study, information from the China intensive care units (ICU) Open Source database was used to examine data from 2790 patients who had infections between January 2019 and December 2020. A 7:3 ratio was used to randomly assign the whole patient population to training and validation groups. This study used six machine learning techniques: logistic regression, random forest, gradient boosting tree, extreme gradient boosting tree (XGBoost), multilayer perceptron, and K-nearest neighbor. A cross-validated grid search was used to tune the parameters of each model. Eight metrics were used to assess the models' performance: accuracy, precision, recall, F1 score, area under the curve (AUC), Brier score, Youden's index, and calibration slope. The machine learning methods were ranked based on how well they performed on each of these metrics. The best-performing models were selected for interpretation using both the Shapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) techniques.
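Most of the threshold-based metrics listed above can be computed from a single 2x2 confusion matrix. A small sketch follows; it is a simplified illustration, not the study's pipeline, and AUC, Brier score, and calibration slope are omitted because they require the full predicted-probability output rather than binary predictions.

```python
def classification_metrics(tp, fp, fn, tn):
    """Threshold metrics from a 2x2 confusion matrix.

    tp/fp/fn/tn: true positives, false positives,
                 false negatives, true negatives.
    """
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)              # sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    youden = recall + specificity - 1    # Youden's J index
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "specificity": specificity,
            "f1": f1, "youden": youden}
```

Ranking models across such metrics, as the study does, guards against a model that optimizes one metric at the expense of the others.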
RESULTS
A subset of the study cohort's patients (120/1668, or 7.19%) died in the hospital following screening for inclusion and exclusion criteria. In the cross-validated grid search across the six machine learning techniques, XGBoost showed good discriminative ability, achieving an accuracy of 0.889 (0.874-0.904), precision of 0.871 (0.849-0.893), recall of 0.913 (0.890-0.936), F1 score of 0.891 (0.876-0.906), and AUC of 0.956 (0.939-0.973). Additionally, XGBoost exhibited excellent performance with a Brier score of 0.050, a Youden index of 0.947, and a calibration slope of 1.074. An interactive web page was also built from the XGBoost model.
CONCLUSION
By identifying patients at higher risk of early mortality, machine learning-based mortality risk prediction models have the potential to significantly improve patient care by directing clinical decision making and enabling early detection of survival and mortality issues in patients with pulmonary inflammation disease.
PubMed: 38933112
DOI: 10.3389/fmed.2024.1399527
Frontiers in Medicine 2024
INTRODUCTION
The fight against SARS-CoV-2 has been a major task worldwide since the virus was first identified in December 2019. An imperative preventive measure is the availability of efficacious vaccines, while there is also significant interest in the protective effect of a previous SARS-CoV-2 infection on a subsequent infection (natural protection rate).
METHODS
To compare protection rates after infection and vaccination, researchers consider different effect measures, such as 1 minus the hazard ratio, 1 minus the odds ratio, or 1 minus the risk ratio. These measures differ in a setting with competing risks. Nevertheless, as there is no unique definition, these metrics are frequently used in studies examining protection rates. Comparing protection rates from vaccination and natural infection poses several challenges. For instance, many publications adopt the epidemiological definition that a reinfection after a SARS-CoV-2 infection is only possible after 90 days, whereas there is no such constraint after vaccination. Furthermore, death is more prominent as a competing event during the first 90 days after infection than after vaccination. In this work we discuss the statistical issues that arise when investigating protection rates, comparing vaccination with infection. We explore different aspects of effect measures and provide insights drawn from different analyses, distinguishing between the first and the second 90 days post-infection or vaccination.
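That these effect measures need not agree is easy to demonstrate on 2x2 count data. The sketch below is illustrative only (the function and data layout are not from the paper), and 1 minus the hazard ratio is omitted because it requires time-to-event data.

```python
def protection_rates(events_exposed, n_exposed, events_control, n_control):
    """Two common 'protection rate' definitions from 2x2 count data.

    events_exposed / n_exposed:  infections and group size among the
                                 previously infected or vaccinated
    events_control / n_control:  infections and group size among the naive
    Returns 1 - risk ratio and 1 - odds ratio, which generally differ.
    """
    risk_ratio = (events_exposed / n_exposed) / (events_control / n_control)
    odds_exposed = events_exposed / (n_exposed - events_exposed)
    odds_control = events_control / (n_control - events_control)
    odds_ratio = odds_exposed / odds_control
    return {"1-RR": 1 - risk_ratio, "1-OR": 1 - odds_ratio}
```

For example, 10 events in 100 exposed versus 20 in 100 controls gives a protection rate of 50% on the risk-ratio scale but about 56% on the odds-ratio scale, which is why the paper stresses stating the chosen measure explicitly.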
RESULTS
In this study, we have access to real-world data on almost two million people from Stockholm County, Sweden. For the main analysis, data on over 52,000 people are considered. The infected group is younger, includes more men, and is less morbid than the vaccinated group. After the first 90 days, these differences increased. Analysis of the second 90 days shows differences between analysis approaches and between age groups. There are age-related differences in mortality. Considering the outcome SARS-CoV-2 infection, the effect of vaccination versus infection varies by age, showing a disadvantage for the vaccinated in the younger population, while no significant difference was found in the elderly.
DISCUSSION
To compare the effects of immunization through infection or vaccination, we emphasize the need for several complementary analyses. It is crucial to examine two observation periods: the first and second 90-day intervals following infection or vaccination. Additionally, methods to address baseline imbalances between groups are essential. This approach supports fair comparisons, allows for more comprehensive conclusions, and helps prevent biased interpretations.
PubMed: 38933111
DOI: 10.3389/fmed.2024.1376275
Ecology and Evolution Jun 2024
Eco-evolutionary experiments are typically conducted in semi-unnatural controlled settings, such as mesocosms; yet inferences about how evolution and ecology interact in the real world would surely benefit from experiments in natural, uncontrolled settings. Opportunities for such experiments are rare but do arise in the context of restoration ecology, where different "types" of a given species can be introduced into different "replicate" locations. Designing such experiments requires wrestling with consequential questions. (Q1) Which specific "types" of a focal species should be introduced to the restoration location? (Q2) How many sources of each type should be used, and should they be mixed together? (Q3) Which source populations should be used? (Q4) Which type(s) or population(s) should be introduced into which restoration sites? We recently grappled with these questions when designing an eco-evolutionary experiment with threespine stickleback (Gasterosteus aculeatus) introduced into nine small lakes and ponds on the Kenai Peninsula in Alaska that required restoration. After considering the options at length, we decided to use benthic versus limnetic ecotypes (Q1), to create a mixed group of colonists from four source populations of each ecotype (Q2), where ecotypes were identified based on trophic morphology (Q3), and then to introduce them into nine restoration lakes scaled by lake size (Q4). We hope that outlining the alternatives and resulting choices will make the rationales clear for future studies leveraging our experiment, while also proving useful for investigators considering similar experiments in the future.
PubMed: 38932947
DOI: 10.1002/ece3.11503