International Journal of Nursing Studies, May 2021
Beyond ratios - flexible and resilient nurse staffing options to deliver cost-effective hospital care and address staff shortages: A simulation and economic modelling study.
BACKGROUND
In the face of pressure to contain costs and make best use of scarce nurses, flexible staff deployment (floating staff between units and temporary hires) guided by a patient classification system may appear an efficient approach to meeting variable demand for care in hospitals.
OBJECTIVES
We modelled the cost-effectiveness of different approaches to planning baseline numbers of nurses to roster on general medical/surgical units while using flexible staff to respond to fluctuating demand.
DESIGN AND SETTING
We developed an agent-based simulation in which hospital inpatient units move between being understaffed, adequately staffed, or overstaffed as staff supply and demand (measured by the Safer Nursing Care Tool patient classification system) vary. Staffing shortfalls are addressed by floating staff from overstaffed units or hiring temporary staff. We compared a standard staffing plan (baseline rosters set to match average demand) with a higher baseline 'resilient' plan set to match higher-than-average demand and a low baseline 'flexible' plan. We varied assumptions about temporary staff availability and estimated the effect of unresolved low staffing on length of stay and death, calculating the cost per life saved.
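The deployment logic described above (cover shortfalls by floating staff from overstaffed units first, then by hiring from a limited temporary pool) can be sketched as a minimal single-shift simulation. The unit structure, demand distribution, and pool sizes below are illustrative assumptions, not the study's calibrated model.

```python
import random

def simulate_shift(units, temp_available=2, rng=None):
    """One shift of a floating/temporary staffing model: each unit compares
    rostered supply with randomly varying demand; shortfalls are covered first
    by surplus staff floated from overstaffed units, then by a limited pool of
    temporary hires. Returns the unresolved shortfall per unit."""
    rng = rng or random.Random(0)  # fixed seed for a reproducible sketch
    supply = {u: units[u]["rostered"] for u in units}
    # Demand fluctuates around its mean by an illustrative +/- step
    demand = {u: max(0, units[u]["mean_demand"] + rng.choice([-1, 0, 0, 1, 2]))
              for u in units}
    shortfall = {u: max(0, demand[u] - supply[u]) for u in units}
    surplus = sum(max(0, supply[u] - demand[u]) for u in units)
    for u in units:  # float surplus staff to understaffed units
        if shortfall[u] and surplus:
            moved = min(shortfall[u], surplus)
            shortfall[u] -= moved
            surplus -= moved
    for u in units:  # then draw on the temporary staff pool
        if shortfall[u] and temp_available:
            hired = min(shortfall[u], temp_available)
            shortfall[u] -= hired
            temp_available -= hired
    return shortfall
```

A higher baseline roster makes the first line of defence (the rostered supply itself) absorb demand peaks, which is the mechanism behind the 'resilient' plan's better outcomes.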
RESULTS
Staffing plans with higher baseline rosters led to higher costs but improved outcomes. Cost savings from lower baseline staffing arose mainly because shifts were left understaffed, and much of the staff cost saving was offset by the costs of longer patient stays. With limited temporary staff available, changing from the low baseline 'flexible' plan to the standard plan cost £13,117 per life saved, and changing from the standard plan to the higher baseline 'resilient' plan cost £8,653 per life saved. Although adverse outcomes from low baseline staffing were reduced when more temporary staff were available, higher baselines were even more cost-effective because the saving on staff costs was also reduced. With unlimited temporary staff, changing from the low baseline plan to the standard plan cost £4,520 per life saved and changing from the standard plan to the higher baseline plan cost £3,693 per life saved.
CONCLUSION
Shift-by-shift measurement of patient demand can guide flexible staff deployment, but the baseline number of staff rostered must be sufficient. Higher baseline rosters are more resilient in the face of variation and appear cost-effective. Staffing plans that minimise the number of nurses rostered in advance are likely to harm patients because temporary staff may not be available at short notice. Such plans, which rely heavily on flexible deployments, do not represent an efficient or effective use of nurses.
STUDY REGISTRATION
ISRCTN 12307968
Tweetable abstract: Economic simulation model of hospital units shows low baseline staff levels with high use of flexible staff are not cost-effective and don't solve nursing shortages.
Topics: Cost-Benefit Analysis; Hospitals; Humans; Nurses; Nursing Staff, Hospital; Personnel Staffing and Scheduling; Workforce
PubMed: 33677251
DOI: 10.1016/j.ijnurstu.2021.103901
NDSS Symposium, 2023
When sharing relational databases with other parties, in addition to providing a high-quality (utility) database to the recipients, a database owner also aims to have (i) privacy guarantees for the data entries and (ii) liability guarantees (via fingerprinting) in case of unauthorized redistribution. However, (i) and (ii) are orthogonal objectives, because when sharing a database with multiple recipients, privacy via data sanitization requires adding noise once (and sharing the same noisy version with all recipients), whereas liability via unique fingerprint insertion requires adding different noise to each shared copy to distinguish the recipients. Although achieving (i) and (ii) together is possible in a naïve way (e.g., either differentially-private database perturbation or synthesis followed by fingerprinting), this approach results in significant degradation in the utility of the shared databases. In this paper, we achieve privacy and liability guarantees simultaneously by proposing a novel entry-level differentially-private (DP) fingerprinting mechanism for relational databases without causing large utility degradation. The proposed mechanism fulfills the privacy and liability requirements by leveraging the inherent randomization of fingerprinting and transforming it into provable privacy guarantees. Specifically, we devise a bit-level randomized response scheme to achieve a differential privacy guarantee for arbitrary data entries when sharing the entire database, and then, based on this, we develop an entry-level DP fingerprinting mechanism. We theoretically analyze the connections between privacy, fingerprint robustness, and database utility by deriving closed-form expressions. We also propose a sparse vector technique-based solution to control the cumulative privacy loss when fingerprinted copies of a database are shared with multiple recipients.
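A bit-level randomized response of this kind can be sketched as follows. The per-bit keep probability e^ε/(1+e^ε) is the textbook randomized-response choice that yields ε-DP per bit; seeding the generator per recipient, so that the noise pattern doubles as a mark, is an illustrative simplification of the paper's mechanism, not its exact construction.

```python
import math
import random

def randomized_response_bits(value, n_bits=8, epsilon=1.0, rng=None):
    """Bit-level randomized response: each bit of the entry is kept with
    probability e^eps / (1 + e^eps) and flipped otherwise, giving an eps-DP
    guarantee per bit. Seeding rng per recipient makes the flipped-bit
    pattern recipient-specific, so the same perturbation that provides
    privacy can also serve as a fingerprint (illustrative simplification)."""
    rng = rng or random.Random()
    p_keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    out = 0
    for i in range(n_bits):
        bit = (value >> i) & 1
        if rng.random() >= p_keep:
            bit ^= 1  # flip: this perturbation is both noise and mark
        out |= bit << i
    return out
```

With large ε almost no bits flip (high utility, weak privacy); with ε near 0 each bit is close to a fair coin.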
We experimentally show that our mechanism achieves strong fingerprint robustness (e.g., the fingerprint cannot be compromised even if the malicious database recipient modifies/distorts more than half of the entries in its received fingerprinted copy), and higher database utility compared to various baseline methods (e.g., application-dependent database utility of the shared database achieved by the proposed mechanism is higher than that of the considered baselines).
PubMed: 37275390
DOI: 10.14722/ndss.2023.24693
Bioinformatics (Oxford, England), Mar 2022
MOTIVATION
The majority of biomedical knowledge is stored in structured databases or as unstructured text in scientific publications. This vast amount of information has led to numerous machine learning-based biological applications using either text through natural language processing (NLP) or structured data through knowledge graph embedding models. However, representations based on a single modality are inherently limited.
RESULTS
To generate better representations of biological knowledge, we propose STonKGs, a Sophisticated Transformer trained on biomedical text and Knowledge Graphs (KGs). This multimodal Transformer uses combined input sequences of structured information from KGs and unstructured text data from biomedical literature to learn joint representations in a shared embedding space. First, we pre-trained STonKGs on a knowledge base assembled by the Integrated Network and Dynamical Reasoning Assembler consisting of millions of text-triple pairs extracted from biomedical literature by multiple NLP systems. Then, we benchmarked STonKGs against three baseline models trained on either one of the modalities (i.e. text or KG) across eight different classification tasks, each corresponding to a different biological application. Our results demonstrate that STonKGs outperforms both baselines, especially on the more challenging tasks with respect to the number of classes, improving upon the F1-score of the best baseline by up to 0.084 (i.e. from 0.881 to 0.965). Finally, our pre-trained model as well as the model architecture can be adapted to various other transfer learning applications.
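The combined input sequence idea can be illustrated with a toy preprocessing step. The token layout below ([CLS] text [SEP] triple [SEP]) is an assumption for illustration, not the exact STonKGs encoding.

```python
def combined_input(text_tokens, triple, cls="[CLS]", sep="[SEP]"):
    """Illustrative sketch (not the exact STonKGs preprocessing): pair the
    text tokens of a sentence with the (subject, relation, object) triple
    extracted from it, yielding one sequence a multimodal Transformer can
    embed jointly in a shared space."""
    subj, rel, obj = triple
    return [cls] + list(text_tokens) + [sep] + [subj, rel, obj] + [sep]
```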
AVAILABILITY AND IMPLEMENTATION
We make the source code and the Python package of STonKGs available at GitHub (https://github.com/stonkgs/stonkgs) and PyPI (https://pypi.org/project/stonkgs/). The pre-trained STonKGs models and the task-specific classification models are respectively available at https://huggingface.co/stonkgs/stonkgs-150k and https://zenodo.org/communities/stonkgs.
SUPPLEMENTARY INFORMATION
Supplementary data are available at Bioinformatics online.
Topics: Pattern Recognition, Automated; Software; Machine Learning; Natural Language Processing; Publications
PubMed: 34986221
DOI: 10.1093/bioinformatics/btac001
Perspectives on Behavior Science, Sep 2022
Multiple baseline designs, both concurrent and nonconcurrent, are the predominant experimental design in modern applied behavior analytic research and are increasingly employed in other disciplines. In the past, there was significant controversy regarding the relative rigor of concurrent and nonconcurrent multiple baseline designs. The consensus in recent textbooks and methodological papers is that nonconcurrent designs are less rigorous than concurrent designs because of their presumed limited ability to address the threat of coincidental events (i.e., history). This skepticism of nonconcurrent designs stems from an emphasis on the importance of across-tier comparisons and the relatively low importance placed on replicated within-tier comparisons for addressing threats to internal validity and establishing experimental control. In this article, we argue that the primary reliance on across-tier comparisons and the resulting deprecation of nonconcurrent designs are not well justified. We first define multiple baseline designs, describe common threats to internal validity, and delineate the two bases for controlling these threats. Second, we briefly summarize historical methodological writing and current textbook treatment of these designs. Third, we explore how concurrent and nonconcurrent multiple baselines address each of the main threats to internal validity. Finally, we make recommendations for more rigorous use, reporting, and evaluation of multiple baseline designs.
PubMed: 36249165
DOI: 10.1007/s40614-022-00326-1
Proceedings of Machine Learning Research, Aug 2022
Survival analysis, the art of time-to-event modeling, plays an important role in clinical treatment decisions. Recently, continuous-time models built from neural ODEs have been proposed for survival analysis. However, training neural ODEs is slow due to the high computational complexity of neural ODE solvers. Here, we propose an efficient alternative for flexible continuous-time models, called Survival Mixture Density Networks (Survival MDNs). Survival MDN applies an invertible positive function to the output of Mixture Density Networks (MDNs). While MDNs produce flexible real-valued distributions, the invertible positive function maps the model into the time domain while preserving a tractable density. Using four datasets, we show that Survival MDN performs better than, or similarly to, continuous- and discrete-time baselines on concordance, integrated Brier score, and integrated binomial log-likelihood. Meanwhile, Survival MDNs are also faster than ODE-based models and circumvent binning issues in discrete models.
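The change-of-variables step behind "invertible positive function with a tractable density" can be made concrete with a small sketch: a Gaussian mixture on the real line is pushed through softplus as the invertible positive map (an illustrative choice; the paper's exact transform may differ), giving a density on event times t > 0 via f_T(t) = f_X(g⁻¹(t)) · |d g⁻¹/dt|.

```python
import math

def softplus_inv(t):
    """Inverse of softplus g(x) = log(1 + e^x), defined for t > 0."""
    return math.log(math.expm1(t))

def mixture_pdf(x, weights, means, sigmas):
    """Density of a Gaussian mixture on the real line (the MDN output)."""
    return sum(w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
               for w, m, s in zip(weights, means, sigmas))

def survival_mdn_pdf(t, weights, means, sigmas):
    """Push the real-valued mixture through softplus to the time domain:
    f_T(t) = f_X(softplus_inv(t)) * |d softplus_inv / dt|,
    where d/dt log(e^t - 1) = e^t / (e^t - 1)."""
    x = softplus_inv(t)
    jac = math.exp(t) / math.expm1(t)
    return mixture_pdf(x, weights, means, sigmas) * jac
```

Because the map is invertible and its Jacobian is in closed form, the transformed density still integrates to one over t > 0, so likelihood-based training remains exact.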
PubMed: 37706207
DOI: No ID Found
Brain Sciences, Aug 2021
Event-related mu-rhythm activity has become a common tool for investigating different socio-cognitive processes in pediatric populations. Mu-rhythm desynchronization/synchronization (mu-ERD/ERS) in a specific task is usually estimated relative to a baseline condition. In the present study, we investigated the effect that different types of baseline might have on toddler mu-ERD/ERS in an action observation (AO) and action execution (AE) task. Specifically, we compared mu-ERD/ERS values computed using as a baseline: (1) the observation of a static image (BL1) and (2) a period of stillness (BL2). Our results showed that the majority of subjects suppressed the mu-rhythm in response to the task and presented a greater mu-ERD for one of the two baselines. In some cases, one of the two baselines failed to produce a significant mu-ERD, and the preferred baseline varied among subjects, although most were more sensitive to BL1, suggesting that it could be a good baseline for eliciting mu-rhythm modulations in toddlers. These results suggest several considerations for the design and analysis of mu-rhythm studies involving pediatric subjects: the importance of verifying mu-rhythm activity during the baseline, the relevance of single-subject analysis, the possibility of including more than one baseline condition, and caution in both the choice of baseline and the interpretation of results.
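The task-versus-baseline computation at the heart of this comparison is the standard ERD/ERS percentage; a minimal sketch (band-power estimation itself omitted):

```python
def mu_erd_percent(task_power, baseline_power):
    """Event-related (de)synchronization relative to a baseline window:
    negative values indicate mu-rhythm suppression (ERD), positive values
    synchronization (ERS). Inputs are mean mu-band power in each window."""
    if baseline_power <= 0:
        raise ValueError("baseline power must be positive")
    return 100.0 * (task_power - baseline_power) / baseline_power
```

Swapping BL1 for BL2 changes only `baseline_power`, which is exactly why the choice of baseline can flip whether a subject shows significant mu-ERD.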
PubMed: 34573178
DOI: 10.3390/brainsci11091159
Scientific Reports, Apr 2022
Videos, especially short videos, have become an increasingly important source of information in recent years. However, many videos spread on video-sharing platforms are misleading, with negative social impacts. It is therefore necessary to find methods to automatically identify misleading videos. In this paper, three categories of features (content features, uploader features, and environment features) are proposed to construct a convolutional neural network (CNN) for misleading video detection. The experiments showed that all three proposed categories of features play a vital role in detecting misleading videos. Our proposed approach combining the three categories of features achieved the best performance, with an accuracy of 0.90 and an F1 score of 0.89. It also outperformed other baselines such as SVM, k-NN, decision tree, and random forest models by more than 22%.
Topics: Communications Media; Neural Networks, Computer; Video Recording
PubMed: 35414095
DOI: 10.1038/s41598-022-10117-y
International Journal of Medical..., Mar 2020
OBJECTIVE
To simulate the clinical reasoning of doctors, retrieve analogous patients of an index patient automatically and predict diagnoses by the similar/dissimilar patients.
METHODS
We proposed a novel patient-similarity-based framework for diagnostic prediction, inspired by the structure-mapping theory of analogical reasoning in psychology. Patient similarity is defined as the similarity between two patients' diagnosis sets rather than a dichotomous judgement (absence/presence of just one disease). The multilabel classification problem is converted into a single-value regression problem by integrating the pairwise patients' clinical features into a vector and taking that vector as the input and the patient similarity as the output. In contrast to the common k-NN method, which considers only the nearest neighbors, we utilize not only similar patients (positive analogy) to generate diagnostic hypotheses but also dissimilar patients (negative analogy) to reject them.
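The conversion described above can be sketched with illustrative choices: Jaccard index as the graded diagnosis-set similarity and simple concatenation as the pairwise feature integration. The paper's exact similarity measure and integration scheme may differ.

```python
def diagnosis_similarity(diag_a, diag_b):
    """Graded similarity between two patients' diagnosis sets (Jaccard index
    here, as an illustrative choice) rather than a dichotomous match on the
    presence/absence of a single disease."""
    a, b = set(diag_a), set(diag_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

def pairwise_example(features_a, features_b, diag_a, diag_b):
    """One training example for the single-value regression: the two
    patients' clinical features integrated into one vector as input, their
    diagnosis-set similarity as the regression target."""
    x = list(features_a) + list(features_b)
    y = diagnosis_similarity(diag_a, diag_b)
    return x, y
```

At prediction time, patients scored as highly similar to the index patient support diagnostic hypotheses (positive analogy), while patients scored as highly dissimilar argue against them (negative analogy).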
RESULTS
The patient-similarity-based models perform better than the one-vs-all baseline and traditional k-NN methods. The F1 score of the positive-analogy-based prediction is 0.698, significantly higher than the scores of the baselines, which range from 0.368 to 0.661. It increases to 0.703 when the negative analogy method is applied to modify the prediction results of the positive analogy. The performance of this method is highly promising for larger datasets.
CONCLUSION
The patient-similarity-based model provides diagnostic decision support that is more accurate, generalizable, and interpretable than those of previous methods and is based on heterogeneous and incomplete data. The model also serves as a new application for the use of clinical big data through artificial intelligence technology.
Topics: Artificial Intelligence; Cluster Analysis; Diagnosis; Female; Humans; Male; Middle Aged; Patients
PubMed: 31923816
DOI: 10.1016/j.ijmedinf.2019.104073
Journal of Biomedical Informatics, May 2022
BACKGROUND
Medical decision-making impacts both individual and public health. Clinical scores are commonly used among various decision-making models to determine the degree of disease deterioration at the bedside. AutoScore was proposed as a useful clinical score generator based on machine learning and a generalized linear model. However, its current framework still leaves room for improvement when addressing unbalanced data of rare events.
METHODS
Using machine intelligence approaches, we developed AutoScore-Imbalance, which comprises three components: training dataset optimization, sample weight optimization, and adjusted AutoScore. Baseline techniques for performance comparison included the original AutoScore, full logistic regression, stepwise logistic regression, least absolute shrinkage and selection operator (LASSO), full random forest, and random forest with a reduced number of variables. These models were evaluated based on their area under the curve (AUC) in the receiver operating characteristic analysis and balanced accuracy (i.e., mean value of sensitivity and specificity). By utilizing a publicly accessible dataset from Beth Israel Deaconess Medical Center, we assessed the proposed model and baseline approaches to predict inpatient mortality.
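The balanced accuracy metric used for evaluation, the mean of sensitivity and specificity, can be stated as a minimal sketch for binary labels:

```python
def balanced_accuracy(y_true, y_pred):
    """Balanced accuracy as defined in the abstract: the mean of sensitivity
    (recall on the positive class) and specificity (recall on the negative
    class). Unlike plain accuracy, it is not inflated by always predicting
    the majority class, which matters for rare-event data."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return (sensitivity + specificity) / 2
```

On a dataset with 1% mortality, a model that predicts "survived" for everyone scores 0.99 plain accuracy but only 0.5 balanced accuracy, which is why the metric suits unbalanced clinical data.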
RESULTS
AutoScore-Imbalance outperformed baselines in terms of AUC and balanced accuracy. The nine-variable AutoScore-Imbalance sub-model achieved the highest AUC of 0.786 (0.732-0.839), while the eleven-variable original AutoScore obtained an AUC of 0.723 (0.663-0.783), and the logistic regression with 21 variables obtained an AUC of 0.743 (0.685-0.801). The AutoScore-Imbalance sub-model (using a down-sampling algorithm) yielded an AUC of 0.771 (0.718-0.823) with only five variables, demonstrating a good balance between performance and variable sparsity. Furthermore, AutoScore-Imbalance obtained the highest balanced accuracy of 0.757 (0.702-0.805), compared to 0.698 (0.643-0.753) by the original AutoScore and the maximum of 0.720 (0.664-0.769) by other baseline models.
CONCLUSIONS
We have developed an interpretable tool to handle clinical data imbalance, presented its structure, and demonstrated its superiority over baselines. The AutoScore-Imbalance tool can be applied to highly unbalanced datasets to gain further insight into rare medical events and facilitate real-world clinical decision-making.
Topics: Algorithms; Clinical Decision-Making; Logistic Models; Machine Learning; ROC Curve
PubMed: 35421602
DOI: 10.1016/j.jbi.2022.104072
Human Vaccines & Immunotherapeutics, Nov 2022
Randomized Controlled Trial
A phase I randomized, double-blind, placebo-controlled study to evaluate the safety, tolerability, and immunogenicity of a live-attenuated quadrivalent dengue vaccine in flavivirus-naïve and flavivirus-experienced healthy adults.
Dengue (DENV) is a mosquito-borne virus with four serotypes causing substantial morbidity in tropical and subtropical areas worldwide. V181 is an investigational, live, attenuated, quadrivalent dengue vaccine. In this phase 1 double-blind, placebo-controlled study, the safety, tolerability, and immunogenicity of V181 in baseline flavivirus-naïve (BFN) and flavivirus-experienced (BFE) healthy adults were evaluated in two formulations: TV003 and TV005. TV005 contains a 10-fold higher DENV2 level than TV003. Two hundred adults were randomized 2:2:1 to receive TV003, TV005, or placebo on Days 1 and 180. Immunogenicity against the four DENV serotypes was measured using a Virus Reduction Neutralization Test (VRNT) after each vaccination and out to 1 year after the second dose. There were no discontinuations due to adverse events (AEs) or serious vaccine-related AEs in the study. The most common AEs after TV003 or TV005 were headache, rash, fatigue, and myalgia. Tri- or tetravalent vaccine viremia was detected in 63.9% and 25.6% of BFN TV003 and TV005 participants, respectively, post-dose 1 (PD1). Tri- or tetravalent dengue VRNT seropositivity was demonstrated in 92.6% of BFN TV003, 74.2% of BFN TV005, and 100% of BFE TV003 and TV005 participants PD1. Increases in VRNT GMTs were observed after the first vaccination with TV003 and TV005 in both flavivirus subgroups for all dengue serotypes, and minimal increases were measured PD2. GMTs in the TV003 and TV005 BFE and BFN groups remained above the respective baselines and placebo through 1 year PD2. These data support further development of V181 as a single-dose vaccine for the prevention of dengue disease.
Topics: Adult; Antibodies, Viral; Dengue; Dengue Vaccines; Dengue Virus; Double-Blind Method; Flavivirus; Humans; Immunogenicity, Vaccine; Vaccines, Attenuated; Vaccines, Combined
PubMed: 35290152
DOI: 10.1080/21645515.2022.2046960