International Journal of Nursing Studies May 2021
Beyond ratios - flexible and resilient nurse staffing options to deliver cost-effective hospital care and address staff shortages: A simulation and economic modelling study.
BACKGROUND
In the face of pressure to contain costs and make best use of scarce nurses, flexible staff deployment (floating staff between units and temporary hires) guided by a patient classification system may appear an efficient approach to meeting variable demand for care in hospitals.
OBJECTIVES
We modelled the cost-effectiveness of different approaches to planning baseline numbers of nurses to roster on general medical/surgical units while using flexible staff to respond to fluctuating demand.
DESIGN AND SETTING
We developed an agent-based simulation, where hospital inpatient units move between being understaffed, adequately staffed or overstaffed as staff supply and demand (as measured by the Safer Nursing Care Tool patient classification system) varies. Staffing shortfalls are addressed by floating staff from overstaffed units or hiring temporary staff. We compared a standard staffing plan (baseline rosters set to match average demand) with a higher baseline 'resilient' plan set to match higher than average demand, and a low baseline 'flexible' plan. We varied assumptions about temporary staff availability and estimated the effect of unresolved low staffing on length of stay and death, calculating cost per life saved.
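The deployment logic described above (baseline rosters, floating surplus staff, limited temporary hires) can be sketched as a toy simulation. This is illustrative only: the unit count, demand distribution, and temporary-staff cap below are invented assumptions, not the study's parameters.

```python
import random

def simulate(baseline, n_units=8, n_shifts=1000, temp_cap=2, seed=1):
    """Toy shift-by-shift model: each unit's demand varies around a mean
    of 10 nurses; surplus staff float from overstaffed to understaffed
    units first, then up to temp_cap temporary hires fill what remains.
    Returns the fraction of shifts left understaffed."""
    rng = random.Random(seed)
    understaffed = 0
    for _ in range(n_shifts):
        demand = [max(0, round(rng.gauss(10, 2))) for _ in range(n_units)]
        pool = sum(max(0, baseline - d) for d in demand)       # floatable surplus
        shortfall = sum(max(0, d - baseline) for d in demand)  # unmet demand
        shortfall = max(0, shortfall - pool)      # floats fill gaps first
        shortfall = max(0, shortfall - temp_cap)  # then limited temps
        if shortfall > 0:
            understaffed += 1
    return understaffed / n_shifts

# Higher baseline rosters leave fewer shifts understaffed
low, standard, resilient = simulate(9), simulate(10), simulate(11)
```

Even this crude sketch reproduces the qualitative pattern in the results: a roster set below or at mean demand leaves many shifts short once floats and a small temp pool are exhausted, while a higher baseline absorbs demand variation.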
RESULTS
Staffing plans with higher baseline rosters led to higher costs but improved outcomes. Cost savings from lower baseline staffing arose mainly because shifts were left understaffed, and much of the staff cost saving was offset by the cost of longer patient stays. With limited temporary staff available, changing from the low baseline 'flexible' plan to the standard plan cost £13,117 per life saved, and changing from the standard plan to the higher baseline 'resilient' plan cost £8,653 per life saved. Although adverse outcomes from low baseline staffing were reduced when more temporary staff were available, higher baselines became even more cost-effective because the saving on staff costs also shrank. With unlimited temporary staff, changing from the low baseline plan to the standard plan cost £4,520 per life saved, and changing from the standard plan to the higher baseline plan cost £3,693 per life saved.
CONCLUSION
Shift-by-shift measurement of patient demand can guide flexible staff deployment, but the baseline number of staff rostered must be sufficient. Higher baseline rosters are more resilient in the face of variation and appear cost-effective. Staffing plans that minimise the number of nurses rostered in advance are likely to harm patients because temporary staff may not be available at short notice. Such plans, which rely heavily on flexible deployments, do not represent an efficient or effective use of nurses.
STUDY REGISTRATION
ISRCTN 12307968
Tweetable abstract: Economic simulation model of hospital units shows low baseline staff levels with high use of flexible staff are not cost-effective and don't solve nursing shortages.
Topics: Cost-Benefit Analysis; Hospitals; Humans; Nurses; Nursing Staff, Hospital; Personnel Staffing and Scheduling; Workforce
PubMed: 33677251
DOI: 10.1016/j.ijnurstu.2021.103901
Value in Health : the Journal of the... Aug 2010
BACKGROUND
The methods used to estimate health-state utility values (HSUV) for multiple health conditions can produce very different values. Economic results generated using baselines of perfect health are not comparable with those generated using baselines adjusted to reflect the HSUVs associated with the health condition. Despite this, there is no guidance on the preferred techniques and little research describing the effect on cost per quality adjusted life-year (QALY) results when using the different methods.
METHODS
Using a cardiovascular disease (CVD) model and cost per QALY thresholds, we assess the consequence of using different baseline health-state utility profiles (perfect health, no history of CVD, general population) in conjunction with models (minimum, additive, multiplicative) frequently used to approximate scores for health states with multiple health conditions. HSUVs are calculated using the EQ-5D UK preference-based algorithm.
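The three combination models named above (minimum, additive, multiplicative) have standard forms for approximating the utility of joint health states from single-condition utilities. A minimal sketch; the function name and example values are illustrative, not from the paper:

```python
def combined_hsuv(u_a, u_b, method):
    """Approximate the health-state utility of having conditions A and B
    together from the single-condition utilities u_a, u_b (1 = full health)."""
    if method == "minimum":         # take the worse condition's utility
        return min(u_a, u_b)
    if method == "additive":        # subtract both decrements from 1
        return 1 - ((1 - u_a) + (1 - u_b))
    if method == "multiplicative":  # multiply the utilities
        return u_a * u_b
    raise ValueError(f"unknown method: {method}")

# e.g. u_a = 0.8, u_b = 0.7:
# minimum -> 0.70, additive -> 0.50, multiplicative -> 0.56
```

As the example shows, the additive model penalizes joint states more heavily than the multiplicative model, while the minimum model ignores the milder condition entirely, which is consistent with the abstract's finding that the additive and multiplicative models give similar results while the minimum model behaves differently.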
RESULTS
Assuming a baseline of perfect health ignores the natural decline in quality of life associated with age, overestimating the benefits of treatment. The results generated using baselines from the general population are comparable to those obtained using baselines from individuals with no history of CVD. The minimum model biases results in favor of younger-aged cohorts. The additive and multiplicative models give similar results.
CONCLUSION
Although further research in additional health conditions is required to support our findings, our results highlight the need for analysts to conform to an agreed reference case. We demonstrate that in CVD, if data are not available from individuals without the health condition, HSUVs from the general population provide a reasonable approximation.
Topics: Adolescent; Adult; Age Factors; Aged; Aged, 80 and over; Algorithms; Cardiovascular Diseases; Data Collection; Decision Making; England; Female; Health Status; Health Status Indicators; Humans; Male; Markov Chains; Middle Aged; Models, Economic; Pilot Projects; Quality of Life; Quality-Adjusted Life Years; Research Design; Surveys and Questionnaires; Young Adult
PubMed: 20230546
DOI: 10.1111/j.1524-4733.2010.00700.x
NDSS Symposium 2023
When sharing relational databases with other parties, in addition to providing a high-quality (high-utility) database to the recipients, a database owner also aims to have (i) privacy guarantees for the data entries and (ii) liability guarantees (via fingerprinting) in case of unauthorized redistribution. However, (i) and (ii) are orthogonal objectives: when sharing a database with multiple recipients, privacy via data sanitization requires adding noise once (and sharing the same noisy version with all recipients), whereas liability via unique fingerprint insertion requires adding different noise to each shared copy to distinguish the recipients. Although achieving (i) and (ii) together is possible in a naïve way (e.g., differentially-private database perturbation or synthesis followed by fingerprinting), this approach results in significant degradation in the utility of the shared databases. In this paper, we achieve privacy and liability guarantees simultaneously by proposing a novel entry-level differentially-private (DP) fingerprinting mechanism for relational databases that does not cause large utility degradation. The proposed mechanism fulfills the privacy and liability requirements by leveraging the inherent randomization of fingerprinting and transforming it into provable privacy guarantees. Specifically, we devise a bit-level random response scheme to achieve a differential privacy guarantee for arbitrary data entries when sharing the entire database, and then, based on this, we develop an entry-level DP fingerprinting mechanism. We theoretically analyze the connections between privacy, fingerprint robustness, and database utility by deriving closed-form expressions. We also propose a sparse vector technique-based solution to control the cumulative privacy loss when fingerprinted copies of a database are shared with multiple recipients.
We experimentally show that our mechanism achieves strong fingerprint robustness (e.g., the fingerprint cannot be compromised even if the malicious database recipient modifies/distorts more than half of the entries in its received fingerprinted copy), and higher database utility compared to various baseline methods (e.g., application-dependent database utility of the shared database achieved by the proposed mechanism is higher than that of the considered baselines).
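The bit-level random response scheme described above builds on the classic binary randomized-response mechanism from the differential-privacy literature. The following is a rough sketch of that underlying primitive under invented assumptions (per-bit epsilon, function names, and the seed-as-recipient-identifier idea are illustrative, not the paper's actual construction):

```python
import math
import random

def rr_bit(bit, epsilon, rng):
    """Binary randomized response: keep the bit with probability
    e^eps / (1 + e^eps), flip it otherwise. This satisfies
    eps-differential privacy for that single bit."""
    p_keep = math.exp(epsilon) / (1 + math.exp(epsilon))
    return bit if rng.random() < p_keep else 1 - bit

def fingerprint_entry(value, n_bits, epsilon, seed):
    """Perturb each bit of an integer data entry independently.
    A recipient-specific seed makes each shared copy's noise pattern
    unique, which is what lets the noise double as a fingerprint."""
    rng = random.Random(seed)
    bits = [(value >> i) & 1 for i in range(n_bits)]
    noisy = [rr_bit(b, epsilon, rng) for b in bits]
    return sum(b << i for i, b in enumerate(noisy))
```

The key intuition the paper exploits is visible even here: the same random flips that identify a recipient (liability) are also a source of plausible deniability for the data subjects (privacy), so one noise budget can serve both goals instead of being spent twice.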
PubMed: 37275390
DOI: 10.14722/ndss.2023.24693
Perspectives on Behavior Science Sep 2022
Multiple baseline designs, both concurrent and nonconcurrent, are the predominant experimental design in modern applied behavior analytic research and are increasingly employed in other disciplines. In the past, there was significant controversy regarding the relative rigor of concurrent and nonconcurrent multiple baseline designs. The consensus in recent textbooks and methodological papers is that nonconcurrent designs are less rigorous than concurrent designs because of their presumed limited ability to address the threat of coincidental events (i.e., history). This skepticism of nonconcurrent designs stems from an emphasis on the importance of across-tier comparisons and the relatively low importance placed on replicated within-tier comparisons for addressing threats to internal validity and establishing experimental control. In this article, we argue that the primary reliance on across-tier comparisons, and the resulting deprecation of nonconcurrent designs, is not well-justified. We first define multiple baseline designs, describe common threats to internal validity, and delineate the two bases for controlling these threats. Second, we briefly summarize historical methodological writing and current textbook treatment of these designs. Third, we explore how concurrent and nonconcurrent multiple baselines address each of the main threats to internal validity. Finally, we make recommendations for more rigorous use, reporting, and evaluation of multiple baseline designs.
PubMed: 36249165
DOI: 10.1007/s40614-022-00326-1
Brain Sciences Aug 2021
Event-related mu-rhythm activity has become a common tool for the investigation of different socio-cognitive processes in pediatric populations. The estimation of mu-rhythm desynchronization/synchronization (mu-ERD/ERS) in a specific task is usually computed in relation to a baseline condition. In the present study, we investigated the effect that different types of baseline might have on toddler mu-ERD/ERS in an action observation (AO) and action execution (AE) task. Specifically, we compared mu-ERD/ERS values computed using as a baseline (1) the observation of a static image (BL1) and (2) a period of stillness (BL2). Our results showed that the majority of the subjects suppressed the mu-rhythm in response to the task and presented a greater mu-ERD for one of the two baselines. In some cases, one of the two baselines did not even produce a significant mu-ERD, and the preferred baseline varied among subjects, although most were more sensitive to BL1, suggesting that a static image could be a good baseline for eliciting mu-rhythm modulations in toddlers. These results suggest several considerations for the design and analysis of mu-rhythm studies involving pediatric subjects: the importance of verifying mu-rhythm activity during the baseline, the relevance of single-subject analysis, the possibility of including more than one baseline condition, and caution in the choice of baseline and in the interpretation of results in studies investigating mu-rhythm activity in pediatric populations.
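The mu-ERD/ERS quantity the study computes relative to a baseline is conventionally expressed as a percentage power change. A minimal sketch of that convention (the function name is illustrative; the study's exact pipeline is not specified in the abstract):

```python
def mu_erd_percent(task_power, baseline_power):
    """Percentage change in mu-band power relative to a baseline:
    negative values indicate event-related desynchronization (mu-ERD),
    positive values indicate event-related synchronization (mu-ERS)."""
    return 100.0 * (task_power - baseline_power) / baseline_power

# e.g. task power 0.8 vs. baseline power 1.0 -> about -20% (an ERD)
```

Because the baseline power sits in the denominator, any difference between the two candidate baselines (BL1 vs. BL2) directly rescales and can even flip the sign of the reported effect, which is why the choice of baseline matters so much here.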
PubMed: 34573178
DOI: 10.3390/brainsci11091159
International Journal of Medical... Mar 2020
OBJECTIVE
To simulate the clinical reasoning of doctors, retrieve analogous patients of an index patient automatically and predict diagnoses by the similar/dissimilar patients.
METHODS
We proposed a novel patient-similarity-based framework for diagnostic prediction, inspired by the structure-mapping theory of analogical reasoning in psychology. Patient similarity is defined as the similarity between two patients' diagnosis sets rather than a dichotomous judgement (absence/presence of a single disease). The multilabel classification problem is converted to a single-value regression problem by integrating the pairwise patients' clinical features into a vector and taking that vector as the input and the patient similarity as the output. In contrast to the common k-NN method, which considers only the nearest neighbors, we utilize not only similar patients (positive analogy) to generate diagnostic hypotheses but also dissimilar patients (negative analogy) to reject them.
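The two ingredients of the regression setup above can be sketched as follows. Both choices here are invented for illustration: the abstract does not say which set-similarity measure or pairwise feature encoding the authors used.

```python
def diagnosis_similarity(dx_a, dx_b):
    """Similarity between two patients' diagnosis sets (the regression
    target). Jaccard overlap is one natural choice; the paper's actual
    measure is not specified in the abstract."""
    a, b = set(dx_a), set(dx_b)
    return len(a & b) / len(a | b) if a | b else 1.0

def pair_features(x_a, x_b):
    """Fold two patients' clinical feature vectors into a single input
    vector for the similarity regressor. Element-wise absolute
    differences are an invented encoding for illustration."""
    return [abs(p - q) for p, q in zip(x_a, x_b)]
```

A regressor trained on `(pair_features, diagnosis_similarity)` pairs can then score an index patient against the database: high-scoring patients supply candidate diagnoses (positive analogy), while low-scoring patients argue against candidates they carry (negative analogy).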
RESULTS
The patient-similarity-based models perform better than the one-vs-all baseline and traditional k-NN methods. The F1 score of positive-analogy-based prediction is 0.698, significantly higher than the baselines' scores, which range from 0.368 to 0.661. It increases to 0.703 when the negative analogy method is applied to modify the predictions of positive analogy. The performance of this method is highly promising for larger datasets.
CONCLUSION
The patient-similarity-based model provides diagnostic decision support that is more accurate, generalizable, and interpretable than those of previous methods and is based on heterogeneous and incomplete data. The model also serves as a new application for the use of clinical big data through artificial intelligence technology.
Topics: Artificial Intelligence; Cluster Analysis; Diagnosis; Female; Humans; Male; Middle Aged; Patients
PubMed: 31923816
DOI: 10.1016/j.ijmedinf.2019.104073
Journal of Biomedical Informatics May 2022
BACKGROUND
Medical decision-making impacts both individual and public health. Clinical scores are commonly used among various decision-making models to determine the degree of disease deterioration at the bedside. AutoScore was proposed as a useful clinical score generator based on machine learning and a generalized linear model. However, its current framework still leaves room for improvement when addressing unbalanced data of rare events.
METHODS
Using machine intelligence approaches, we developed AutoScore-Imbalance, which comprises three components: training dataset optimization, sample weight optimization, and adjusted AutoScore. Baseline techniques for performance comparison included the original AutoScore, full logistic regression, stepwise logistic regression, least absolute shrinkage and selection operator (LASSO), full random forest, and random forest with a reduced number of variables. These models were evaluated based on their area under the curve (AUC) in the receiver operating characteristic analysis and balanced accuracy (i.e., mean value of sensitivity and specificity). By utilizing a publicly accessible dataset from Beth Israel Deaconess Medical Center, we assessed the proposed model and baseline approaches to predict inpatient mortality.
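Balanced accuracy, the second evaluation metric above, is simply the mean of sensitivity and specificity. A minimal sketch showing why it is preferred over raw accuracy for rare events such as inpatient mortality (the confusion-matrix counts below are invented):

```python
def balanced_accuracy(tp, fn, tn, fp):
    """Mean of sensitivity and specificity. Unlike raw accuracy, a
    trivial 'predict the majority class' model scores only 0.5."""
    sensitivity = tp / (tp + fn)   # recall on the rare (positive) class
    specificity = tn / (tn + fp)   # recall on the common (negative) class
    return (sensitivity + specificity) / 2

# ~1% event rate: raw accuracy looks high even if rare events are handled poorly
tp, fn, tn, fp = 8, 2, 900, 90
raw = (tp + tn) / (tp + fn + tn + fp)   # 0.908
bal = balanced_accuracy(tp, fn, tn, fp) # ~0.855, a more honest summary
```

This is exactly the failure mode AutoScore-Imbalance targets: on unbalanced data, a model can post a high raw accuracy while missing the rare events that matter clinically, so balanced accuracy is the fairer comparison.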
RESULTS
AutoScore-Imbalance outperformed baselines in terms of AUC and balanced accuracy. The nine-variable AutoScore-Imbalance sub-model achieved the highest AUC of 0.786 (0.732-0.839), while the eleven-variable original AutoScore obtained an AUC of 0.723 (0.663-0.783), and the logistic regression with 21 variables obtained an AUC of 0.743 (0.685-0.801). The AutoScore-Imbalance sub-model (using a down-sampling algorithm) yielded an AUC of 0.771 (0.718-0.823) with only five variables, demonstrating a good balance between performance and variable sparsity. Furthermore, AutoScore-Imbalance obtained the highest balanced accuracy of 0.757 (0.702-0.805), compared to 0.698 (0.643-0.753) by the original AutoScore and the maximum of 0.720 (0.664-0.769) by other baseline models.
CONCLUSIONS
We have developed an interpretable tool to handle clinical data imbalance, presented its structure, and demonstrated its superiority over baselines. The AutoScore-Imbalance tool can be applied to highly unbalanced datasets to gain further insight into rare medical events and facilitate real-world clinical decision-making.
Topics: Algorithms; Clinical Decision-Making; Logistic Models; Machine Learning; ROC Curve
PubMed: 35421602
DOI: 10.1016/j.jbi.2022.104072
Brain : a Journal of Neurology Jan 2019
The proportional recovery rule asserts that most stroke survivors recover a fixed proportion of lost function. To the extent that this is true, recovery from stroke can be predicted accurately from baseline measures of acute post-stroke impairment alone. Reports that baseline scores explain more than 80%, and sometimes more than 90%, of the variance in the patients' recoveries, are rapidly accumulating. Here, we show that these headline effect sizes are likely inflated. The key effects in this literature are typically expressed as, or reducible to, correlation coefficients between baseline scores and recovery (outcome scores minus baseline scores). Using formal analyses and simulations, we show that these correlations will be extreme when outcomes are significantly less variable than baselines, which they often will be in practice regardless of the real relationship between outcomes and baselines. We show that these effect sizes are likely to be over-optimistic in every empirical study that we found that reported enough information for us to make the judgement, and argue that the same is likely to be true in other studies as well. The implication is that recovery after stroke may not be as proportional as recent studies suggest.
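The statistical artifact described here is easy to reproduce: when outcomes cluster near a scale ceiling and therefore vary much less than baselines, the correlation between baseline impairment and recovery (outcome minus baseline) is extreme even when outcomes are generated with no relationship to baselines at all. A minimal simulation (all numbers invented; the 66-point ceiling merely echoes a Fugl-Meyer-style scale):

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient, computed directly."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(0)
ceiling = 66                                      # scale maximum
baseline = [rng.uniform(10, 60) for _ in range(500)]
# outcomes cluster near the ceiling, drawn INDEPENDENTLY of baseline
outcome = [rng.uniform(60, 66) for _ in baseline]
impairment = [ceiling - b for b in baseline]      # lost function at baseline
recovery = [o - b for o, b in zip(outcome, baseline)]
r = pearson(impairment, recovery)  # extreme despite no real relationship
```

Because `recovery` is approximately a constant minus `baseline` whenever outcome variance is small, the correlation with baseline impairment approaches 1 by construction, which is the paper's argument for why the headline effect sizes are inflated.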
Topics: Humans; Recovery of Function; Statistics as Topic; Stroke
PubMed: 30535098
DOI: 10.1093/brain/awy302
Heliyon Apr 2022
Achieving human-level performance on some Machine Reading Comprehension (MRC) datasets is no longer challenging with the help of powerful Pre-trained Language Models (PLMs). However, it is necessary to provide both answer prediction and its explanation to further improve the MRC system's reliability, especially for real-life applications. In this paper, we propose a new benchmark called ExpMRC for evaluating the textual explainability of MRC systems. ExpMRC contains four subsets, including SQuAD, CMRC 2018, RACE, and C3, with additional annotations of the answer's evidence. The MRC systems are required to give not only the correct answer but also its explanation. We use state-of-the-art PLMs to build baseline systems and adopt various unsupervised approaches to extract both answer and evidence spans without human-annotated evidence spans. The experimental results show that these models are still far from human performance, suggesting that ExpMRC is challenging. Resources (data and baselines) are available through https://github.com/ymcui/expmrc.
PubMed: 35497046
DOI: 10.1016/j.heliyon.2022.e09290
Veterinary World Jul 2017
AIM
The aim of the study was to establish the baseline hematology and serum biochemistry values for Indian leopards, and to assess the possible variations in these parameters based on age and gender.
MATERIALS AND METHODS
Hemato-biochemical test reports from a total of 83 healthy leopards, carried out as part of routine health evaluation in Bannerghatta Biological Park and Manikdoh Leopard Rescue Center, were used to establish baseline hematology and serum biochemistry parameters for the subspecies. The hematological parameters considered for the analysis included hemoglobin (Hb), packed cell volume, total erythrocyte count (TEC), total leukocyte count (TLC), mean corpuscular volume (MCV), mean corpuscular Hb (MCH), and MCH concentration. The serum biochemistry parameters considered included total protein (TP), albumin, globulin, aspartate aminotransferase, alanine aminotransferase (ALT), blood urea nitrogen, creatinine, triglycerides, calcium, and phosphorus.
RESULTS
Even though a few differences were observed in hematologic and biochemistry values between male and female Indian leopards, the differences were not statistically significant. Effects of age, however, were evident in many hematologic and biochemical parameters. Sub-adults had significantly greater values for Hb, TEC, and TLC compared to the adult and geriatric groups, whereas they had significantly lower MCV and MCH. Among the serum biochemistry parameters, the sub-adult age group was observed to have significantly lower values for TP and ALT than adult and geriatric leopards.
CONCLUSION
The study provides a comprehensive analysis of hematologic and biochemical parameters for Indian leopards. Baselines established here will permit better captive management of the subspecies, serve as a guide to assess the health and physiological status of the free ranging leopards, and may contribute valuable information for making effective management decisions during translocation or rehabilitation process.
PubMed: 28831229
DOI: 10.14202/vetworld.2017.818-824