Malaria Journal, Mar 2016 (Meta-Analysis)
Meta-Analysis Review
BACKGROUND
The World Health Organization recommends that malaria be confirmed by either microscopy or a rapid diagnostic test (RDT) before treatment. The correct use of RDTs in resource-limited settings allows treatment to be based on a confirmed diagnosis, speeds up consideration of a correct alternative diagnosis, prevents overprescription of anti-malarial drugs, reduces costs, and avoids unnecessary exposure to adverse drug effects. This review aims to evaluate health workers' compliance with RDT results and the factors contributing to compliance.
METHODS
A PROSPERO-registered systematic review was conducted to evaluate health workers' compliance with RDT results in sub-Saharan Africa, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Studies published up to November 2015 were searched without language restrictions in Medline/Ovid, Embase, the Cochrane Central Register of Controlled Trials, Web of Science, LILACS, Biosis Previews and the African Index Medicus. The primary outcome was health workers treating patients according to the RDT results obtained.
RESULTS
The literature search identified 474 reports; 14 studies were eligible and included in the quantitative analysis. From the meta-analysis, health workers' overall compliance, in terms of initiating or withholding treatment in accordance with the respective RDT results, was 83% (95% CI 80-86%). Compliance with positive and negative results was 97% (95% CI 94-99%) and 78% (95% CI 66-89%), respectively. Community health workers had higher compliance rates with negative test results than clinicians. Patient expectations, work experience, scepticism about results, health workers' cadres and the perceived effectiveness of the test influenced compliance.
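The pooled compliance figures above are random-effects summaries of per-study proportions. A minimal sketch of DerSimonian-Laird pooling on the logit scale, using invented study counts rather than the review's actual data, might look like this:

```python
import math

# Hypothetical per-study data: (compliant, total) — illustrative only,
# not the 14 studies included in the review.
studies = [(410, 480), (150, 190), (900, 1050), (75, 100)]

# Logit-transform each study's compliance proportion; variance from the
# delta method: var(logit p) = 1/x + 1/(n - x).
effects, variances = [], []
for x, n in studies:
    p = x / n
    effects.append(math.log(p / (1 - p)))
    variances.append(1 / x + 1 / (n - x))

# Fixed-effect (inverse-variance) pooling, then DerSimonian-Laird tau^2.
w = [1 / v for v in variances]
fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects weights incorporate the between-study variance tau^2.
w_re = [1 / (v + tau2) for v in variances]
pooled_logit = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))

# Back-transform the pooled logit and its 95% CI to proportions.
pooled = 1 / (1 + math.exp(-pooled_logit))
lo = 1 / (1 + math.exp(-(pooled_logit - 1.96 * se)))
hi = 1 / (1 + math.exp(-(pooled_logit + 1.96 * se)))
print(f"pooled compliance {pooled:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

The review's actual pooling method may differ (e.g. arcsine transformation or a generalized linear mixed model); this shows only the general shape of such an analysis.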
CONCLUSIONS
Based on published data, compliance with RDT results appears to be generally fair in sub-Saharan Africa; compliance with negative results will need to improve to prevent mismanagement of patients and overprescribing of anti-malarial drugs. Improving diagnostic capacity for other febrile illnesses and developing local evidence-based guidelines may help improve compliance and the management of negative RDT results.
TRIAL REGISTRATION
CRD42015016151 (PROSPERO).
Topics: Africa South of the Sahara; Antimalarials; Attitude of Health Personnel; Chromatography, Affinity; Diagnostic Tests, Routine; Guideline Adherence; Humans; Malaria
PubMed: 26979286
DOI: 10.1186/s12936-016-1218-5
Health Technology Assessment, Dec 2012 (Review)
Review
What is the value of routinely testing full blood count, electrolytes and urea, and pulmonary function tests before elective surgery in patients with no apparent clinical indication and in subgroups of patients with common comorbidities: a systematic review of the clinical and cost-effective...
BACKGROUND
The evidence base which supported the National Institute for Health and Clinical Excellence (NICE) published Clinical Guideline 3 was limited and 50% was graded as amber. However, the use of tests as part of pre-operative work-up remains a low-cost but high-volume activity within the NHS, with substantial resource implications. The objective of this study was to identify, evaluate and synthesise the published evidence on the clinical effectiveness and cost-effectiveness of the routine use of three tests, full blood counts (FBCs), urea and electrolytes tests (U&Es) and pulmonary function tests, in the pre-operative work-up of otherwise healthy patients undergoing minor or intermediate surgery in the NHS.
OBJECTIVE
The aims of this study were to estimate the clinical effectiveness and cost-effectiveness of routine pre-operative testing of FBC, electrolytes and renal function and pulmonary function in adult patients classified as American Society of Anaesthesiologists (ASA) grades 1 and 2 undergoing elective minor (grade 1) or intermediate (grade 2) surgical procedures; to compare NICE recommendations with current practice; to evaluate the cost-effectiveness of mandating or withdrawing each of these tests in this patient group; and to identify the expected value of information and whether or not it has value to the NHS in commissioning further primary research into the use of these tests in this group of patients.
DATA SOURCES
The following electronic bibliographic databases were searched: (1) BIOSIS; (2) Cumulative Index to Nursing and Allied Health Literature; (3) Cochrane Database of Systematic Reviews; (4) Cochrane Central Register of Controlled Trials; (5) EMBASE; (6) MEDLINE; (7) MEDLINE In-Process & Other Non-Indexed Citations; (8) NHS Database of Abstracts of Reviews of Effects; (9) NHS Health Technology Assessment Database; and (10) Science Citation Index. To identify grey and unpublished literature, the Cochrane Register of Controlled Trials, National Research Register Archive, National Institute for Health Research Clinical Research Network Portfolio database and the Copernic Meta-search Engine were searched. A large routine data set which recorded the results of tests was obtained from Leeds Teaching Hospitals Trust.
REVIEW METHODS
A systematic review of the literature was carried out. The searches were undertaken in March to April 2008 and June 2009. Searches were designed to retrieve studies that evaluated the clinical effectiveness and cost-effectiveness of routine pre-operative testing of FBC, electrolytes and renal function and pulmonary function in the above group of patients. A postal survey of current practice in testing patients in this group pre-operatively was undertaken in 2008. An exemplar cost-effectiveness model was constructed to demonstrate what form this would have taken had there been sufficient data. A large routine data set that recorded the results of tests was obtained from Leeds Teaching Hospitals Trust. This was linked to individual patient data with surgical outcomes, and regression models were estimated.
RESULTS
A comprehensive and systematic search of both the clinical effectiveness and cost-effectiveness literature identified a large number of potentially relevant studies. However, when these studies were subjected to detailed review and quality assessment, it became clear that the literature provides no evidence on the clinical effectiveness and cost-effectiveness of these specific tests in the specific patient groups. The postal survey had a 17% response rate. Respondents reported that ASA grade 1 patients aged < 40 years with no comorbidities undergoing minor surgery did not routinely have FBC, electrolyte and renal function, or pulmonary function tests. The results from the regression model showed that the frequency of test use was not consistent with the hypothesis of their routine use. FBC tests were performed in only 58% of patients in the data set and U&E testing was carried out in only 57%.
LIMITATIONS
Systematic searches of the clinical effectiveness and cost-effectiveness literature found that there is no evidence on the clinical effectiveness or cost-effectiveness of these tests in this specific clinical context for the NHS. A survey of NHS hospitals found that respondent trusts were implementing current NICE guidance in relation to pre-operative testing generally, and a de novo analysis of routine data on test utilisation and post-operative outcome found that the tests were not being used in routine practice; rather, use was related to an expectation of a more complex clinical case. The paucity of published evidence is a limitation of this study. The studies included relied on non-UK health-care systems data, which may not be transferable. The inclusion of non-randomised studies is associated with an increased risk of bias and confounding. Scoping work to establish the likely mechanism of action by which tests would impact upon outcomes and resource utilisation established that the cause of an abnormal test result is likely to be a pivotal determinant of the cost-effectiveness of a pre-operative test, and therefore evaluations would need to consider tests in the context of the underlying risk of specific clinical problems (i.e. risk-guided rather than routine use).
CONCLUSIONS
The time of universal utilisation of pre-operative tests for all surgical patients is likely to have passed. The evidence we have identified, though weak, indicates that tests are increasingly utilised in patients in whom there is a reason to consider an underlying raised risk of a clinical abnormality that should be taken into account in their clinical management. It is likely that this strategy has led to substantial resource savings for the NHS, although there is not a published evidence base to establish that this is the case. The total expenditure on pre-operative tests across the NHS remains significant. Evidence on current practice indicates that clinical practice has changed to such a degree that the original research question is no longer relevant to UK practice. Future research on the value of these tests in pre-operative work-up should be couched in terms of the clinical effectiveness and cost-effectiveness in the identification of specific clinical abnormalities in patients with a known underlying risk. We suggest that undertaking a multicentre study making use of linked, routinely collected data sets would identify the extent and nature of pre-operative testing in this group of patients.
FUNDING
The National Institute for Health Research Health Technology Assessment programme.
Topics: Adolescent; Adult; Aged; Aged, 80 and over; Blood Cell Count; Comorbidity; Cost-Benefit Analysis; Diagnostic Tests, Routine; Elective Surgical Procedures; Electrolytes; Female; Humans; Male; Middle Aged; Preoperative Care; Respiratory Function Tests; State Medicine; United Kingdom; Urea; Young Adult
PubMed: 23302507
DOI: 10.3310/hta16500
The Medical Journal of Australia, Dec 2023 (Review)
Review
COVID-19 rapid antigen tests approved for self-testing in Australia: published diagnostic test accuracy studies and manufacturer-supplied information. A systematic review.
OBJECTIVES
To review evaluations of the diagnostic accuracy of coronavirus disease 2019 (COVID-19) rapid antigen tests (RATs) approved by the Therapeutic Goods Administration (TGA) for self-testing by ambulatory people in Australia; to compare these estimates with values reported by test manufacturers.
STUDY DESIGN
Systematic review of publications in any language that reported cross-sectional, case-control, or cohort studies in which the participants were ambulatory people in the community or health care workers in hospitals in whom severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection was suspected, and the results of testing self-collected biological samples with a TGA-approved COVID-19 RAT were compared with those of reverse transcription-polymerase chain reaction (RT-PCR) testing for SARS-CoV-2. Estimates of diagnostic accuracy (sensitivity, specificity) were checked and compared with manufacturer estimates published on the TGA website.
DATA SOURCES
Publications (to 1 September 2022) identified in the Cochrane COVID-19 Study Register and the World Health Organization COVID-19 research database. Information on manufacturer diagnostic accuracy evaluations was obtained from the TGA website.
DATA SYNTHESIS
Twelve publications that reported a total of eighteen evaluations of eight RATs approved by the TGA for self-testing (manufacturers: All Test, Roche, Flowflex, MP Biomedicals, Clungene, Panbio, V-Chek, Whistling) were identified. Five studies were undertaken in the Netherlands, two each in Germany and the United States, and one each in Denmark, Belgium, and Canada; test sample collection was unsupervised in twelve evaluations, and supervised by health care workers or researchers in six. Estimated sensitivity with unsupervised sample collection ranged from 20.9% (MP Biomedicals) to 74.3% (Roche), and with supervised collection from 7.7% (V-Chek) to 84.4% (Panbio); the estimates were between 8.2 and 88 percentage points lower than the values reported by the manufacturers. Test specificity was high for all RATs (97.9-100%).
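Sensitivity and specificity in such evaluations come from a 2x2 cross-tabulation of RAT results against RT-PCR. A small illustration with hypothetical counts (not taken from any of the eighteen evaluations), including the percentage-point gap against a hypothetical manufacturer claim:

```python
# Hypothetical 2x2 counts for one RAT evaluation against RT-PCR:
# tp = RAT+/PCR+, fp = RAT+/PCR-, fn = RAT-/PCR+, tn = RAT-/PCR-.
tp, fp, fn, tn = 52, 3, 18, 427

sensitivity = tp / (tp + fn)  # fraction of PCR-confirmed infections detected
specificity = tn / (tn + fp)  # fraction of PCR-negative people correctly cleared

# Gap between the field estimate and a hypothetical packaging claim,
# expressed in percentage points as in the abstract.
manufacturer_sensitivity = 0.96
gap_pp = (manufacturer_sensitivity - sensitivity) * 100

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, "
      f"gap vs claim {gap_pp:.1f} percentage points")
```

With these invented counts, sensitivity falls in the mid-70s while specificity stays high, mirroring the pattern the review reports.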
CONCLUSIONS
The risk of false negative results when using COVID-19 RATs for self-testing may be considerably higher than apparent in manufacturer reports on the TGA website, with implications for the reliability of these tests for ruling out infection.
Topics: Humans; COVID-19; SARS-CoV-2; Self-Testing; Cross-Sectional Studies; Reproducibility of Results; Sensitivity and Specificity; Diagnostic Tests, Routine; COVID-19 Testing
PubMed: 37903650
DOI: 10.5694/mja2.52151
Pain Physician, 2009 (Review)
Review
Evidence-based medicine, systematic reviews, and guidelines in interventional pain management: Part 7: systematic reviews and meta-analyses of diagnostic accuracy studies.
Appropriate diagnosis is essential in providing proper and effective therapy. The field of diagnostic accuracy tests is dynamic, with new tests being developed at a fast pace and the technology of existing tests improving continuously. Well-designed diagnostic test accuracy studies can help in making appropriate health care decisions, provided that they transparently and fully report their participants, tests, methods, and results. Exaggerated and biased results from poorly designed and reported diagnostic test studies can trigger their premature dissemination and lead physicians into making incorrect treatment decisions. Consequently, a diagnostic test is useful only to the extent that it distinguishes between conditions or disorders that might otherwise be confused. Since it is unlikely that clinicians, patients, and policy makers have the time, skills, and resources to find, appraise, and interpret the evidence and incorporate it into their health care decisions, systematic reviews and meta-analyses provide an accurate and reliable synthesis of vast quantities of data. A systematic review can identify what is known and what is unknown, giving guidance for future research. Systematic reviews have been considered a vital link in the great chain of evidence that stretches from the laboratory to the bedside by helping to separate the insignificant, unsound, or redundant deadwood from the salient and critical studies that are worthy of reflection. A dangerous discrepancy exists between experts and evidence with all types of evidence. Historically, it has been reported that in only 15% of all cases can a pathoanatomical explanation be found for patients with chronic low back pain of more than 3 months' duration, resulting in the assumption that very little can be done in our present state of ignorance to treat these patients and improve their natural histories.
On the other end of the spectrum, due to lack of sound diagnostic information, excessive health care is utilized with exploding costs. The validity of all diagnostic techniques has been described with variable accuracy and reliability. Lack of understanding of reference standards and their unavailability with interventional diagnostic techniques and misinterpretation secondary to interpretation bias may adversely influence the applicability of diagnostic interventions. This manuscript provides a review of the literature, a checklist, and a flow diagram describing the preferred way to present the abstract, introduction, methods, results, and discussion sections of the report of an analysis in a systematic review of diagnostic accuracy studies.
Topics: Diagnostic Tests, Routine; Evidence-Based Medicine; Guidelines as Topic; Humans; Meta-Analysis as Topic; Pain; Review Literature as Topic
PubMed: 19935980
DOI: not available
The Lancet Child & Adolescent Health, May 2024 (Meta-Analysis)
Meta-Analysis
Diagnostic test accuracy of procalcitonin and C-reactive protein for predicting invasive and serious bacterial infections in young febrile infants: a systematic review and meta-analysis.
BACKGROUND
Febrile infants presenting in the first 90 days of life are at higher risk of invasive and serious bacterial infections than older children. Modern clinical practice guidelines, mostly using procalcitonin as a diagnostic biomarker, can identify infants who are at low risk and therefore suitable for tailored management. C-reactive protein, by comparison, is widely available, but whether C-reactive protein and procalcitonin have similar diagnostic accuracy is unclear. We aimed to compare the test accuracy of procalcitonin and C-reactive protein in the prediction of invasive or serious bacterial infections in febrile infants.
METHODS
For this systematic review and meta-analysis, we searched MEDLINE, EMBASE, Web of Science, and The Cochrane Library for diagnostic test accuracy studies up to June 19, 2023, using MeSH terms "procalcitonin", and "bacterial infection" or "fever" and keywords "invasive bacterial infection*" and "serious bacterial infection*", without language or date restrictions. Studies were selected by independent authors against eligibility criteria. Eligible studies included participants aged 90 days or younger presenting to hospital with a fever (≥38°C) or history of fever within the preceding 48 h. The primary index test was procalcitonin, and the secondary index test was C-reactive protein. Test kits had to be commercially available, and test samples had to be collected upon presentation to hospital. Invasive bacterial infection was defined as the presence of a bacterial pathogen in blood or cerebrospinal fluid, as detected by culture or quantitative PCR; authors' definitions of serious bacterial infection were used. Data were extracted from selected studies, and the detection of invasive or serious bacterial infections was analysed with two models for each biomarker. Diagnostic accuracy was determined against internationally recognised cutoff values (0·5 ng/mL for procalcitonin, 20 mg/L for C-reactive protein) and pooled to calculate partial area under the curve (pAUC) values for each biomarker. Optimum cutoff values were identified for each biomarker. This study is registered with PROSPERO, CRD42022293284.
FINDINGS
Of 734 studies derived from the literature search, 14 studies (n=7755) were included in the meta-analysis. For the detection of invasive bacterial infections, pAUC values were greater for procalcitonin (0·72, 95% CI 0·56-0·79) than C-reactive protein (0·28, 0·17-0·61; p=0·016). Optimal cutoffs for detecting invasive bacterial infections were 0·49 ng/mL for procalcitonin and 13·12 mg/L for C-reactive protein. For the detection of serious bacterial infections, procalcitonin and C-reactive protein had similar pAUC values (0·55, 0·44-0·69 vs 0·54, 0·40-0·61; p=0·92). For serious bacterial infections, the optimal cutoffs for procalcitonin and C-reactive protein were 0·17 ng/mL and 16·18 mg/L, respectively. Heterogeneity was low for studies investigating the test accuracy of procalcitonin in detecting invasive bacterial infection (I²=23·5%), high for studies investigating procalcitonin for serious bacterial infection (I²=75·5%), and moderate for studies investigating C-reactive protein for invasive bacterial infection (I²=49·5%) and serious bacterial infection (I²=28·3%). The absence of a single definition of serious bacterial infection across studies was the greatest source of interstudy variability and potential bias.
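An "optimal cutoff" of the kind reported here is commonly chosen by maximizing Youden's J (sensitivity + specificity − 1) across candidate thresholds; whether this review used exactly that criterion is not stated, so the following is a generic sketch with invented procalcitonin values, purely to show the mechanics:

```python
# Hypothetical (procalcitonin ng/mL, invasive-infection label) pairs —
# invented data, not from the 14 included studies.
data = [(0.1, 0), (0.2, 0), (0.3, 1), (0.4, 0),
        (0.6, 1), (1.2, 1), (2.0, 1), (0.15, 0)]

def youden_best_cutoff(data):
    """Pick the threshold maximizing Youden's J = sensitivity + specificity - 1,
    scanning the observed values as candidate cutoffs (test positive if >= cutoff)."""
    best = None
    for cut, _ in data:
        tp = sum(1 for v, y in data if v >= cut and y == 1)
        fn = sum(1 for v, y in data if v < cut and y == 1)
        tn = sum(1 for v, y in data if v < cut and y == 0)
        fp = sum(1 for v, y in data if v >= cut and y == 0)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if best is None or j > best[1]:
            best = (cut, j)
    return best

cut, j = youden_best_cutoff(data)
print(f"optimal cutoff {cut} ng/mL (Youden J = {j:.2f})")
```

In practice, pooled analyses derive cutoffs from the summary ROC curve rather than raw pooled data points, but the threshold-scanning idea is the same.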
INTERPRETATION
Within a large cohort of febrile infants, a procalcitonin cutoff of 0·5 ng/mL had a superior pAUC value to a C-reactive protein cutoff of 20 mg/L for identifying invasive bacterial infections. In settings without access to procalcitonin, C-reactive protein should therefore be used cautiously for the identification of invasive bacterial infections, and a cutoff value below 20 mg/L should be considered. C-reactive protein and procalcitonin showed similar test accuracy for the identification of serious bacterial infection with internationally recognised cutoff values. This might reflect the challenges involved in confirming serious bacterial infection and the absence of a universally accepted definition of serious bacterial infection.
FUNDING
None.
Topics: Infant; Child; Humans; Adolescent; C-Reactive Protein; Procalcitonin; Fever; Biomarkers; Bacterial Infections; Diagnostic Tests, Routine
PubMed: 38499017
DOI: 10.1016/S2352-4642(24)00021-X
BMJ Global Health, Jun 2021 (Meta-Analysis)
Meta-Analysis
BACKGROUND
During the last decade, many studies have assessed the performance of malaria tests on non-invasively collected specimens, but no systematic review has hitherto estimated the overall performance of these tests. We report here the first meta-analysis estimating the diagnostic performance of malaria diagnostic tests performed on saliva, urine, faeces, skin odour ('sniff and tell') and hair, using either microscopy or PCR on blood sample as reference test.
METHODS
We searched PubMed, EMBASE, African Journals Online and Cochrane Infectious Diseases from inception until 19 January 2021 for relevant primary studies. A random effects model was used to estimate the overall performance of various diagnostic methods on different types of specimen.
RESULTS
Eighteen studies providing 30 data sets were included in the meta-analysis. For PCR, the overall sensitivity, specificity and diagnostic OR (DOR) were 84.5% (95% CI 79.3% to 88.6%), 97.3% (95% CI 95.3% to 98.5%) and 184.9 (95% CI 95.8 to 356.9) in saliva, respectively, and 57.4% (95% CI 41.4% to 72.1%), 98.6% (95% CI 97.3% to 99.3%) and 47.2 (95% CI 22.1 to 101.1) in urine, respectively. The overall sensitivity, specificity and DOR of rapid diagnostic testing for malaria in urine were 59.8% (95% CI 40.0% to 76.9%), 96.9% (95% CI 91.0% to 99.0%) and 30.8 (95% CI 23.5 to 40.4), respectively.
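At the level of point estimates, the DOR follows directly from sensitivity and specificity: it is the odds of a positive test in infected versus uninfected people. Note that applying the formula to the pooled summary estimates will not exactly reproduce the pooled DORs above, because the review pools DORs at the study level:

```python
def diagnostic_odds_ratio(sens: float, spec: float) -> float:
    """DOR = odds of a positive test among the diseased divided by
    odds of a positive test among the non-diseased."""
    return (sens / (1 - sens)) / ((1 - spec) / spec)

# Pooled point estimates for PCR on saliva from the abstract (0.845, 0.973);
# the result lands near, but not exactly at, the pooled DOR of 184.9.
print(round(diagnostic_odds_ratio(0.845, 0.973), 1))
```

A DOR of 1 means the test carries no information; values in the hundreds, as for PCR on saliva, indicate strong discrimination.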
CONCLUSION
In settings where PCR is available, saliva and urine samples should be considered for PCR-based malaria diagnosis only if blood samples cannot be collected. The performance of rapid diagnostic testing on urine is limited, especially its sensitivity. Malaria testing on non-invasively collected specimens still needs substantial improvement.
Topics: Diagnostic Tests, Routine; Humans; Malaria; Microscopy; Polymerase Chain Reaction; Sensitivity and Specificity
PubMed: 34078631
DOI: 10.1136/bmjgh-2021-005634
Scientific Reports, Dec 2016 (Meta-Analysis)
Meta-Analysis Review
Diagnostic test accuracy of the loop-mediated isothermal amplification (LAMP) assay for culture-proven tuberculosis is unclear. We searched electronic databases for both cohort and case-control studies that provided data to calculate sensitivity and specificity. The index test was any LAMP assay, including both commercialized kits and in-house assays. Culture-proven M. tuberculosis was considered a positive reference test. We included 26 studies on 9330 sputum samples and one study on 315 extra-pulmonary specimens. For sputum samples, 26 studies yielded summary estimates of sensitivity of 89.6% (95% CI 85.6-92.6%), specificity of 94.0% (95% CI 91.0-96.1%), and a diagnostic odds ratio of 145 (95% CI 93-226). Nine studies focusing on Loopamp MTBC yielded summary estimates of sensitivity of 80.9% (95% CI 76.0-85.1%) and specificity of 96.5% (95% CI 94.7-97.7%). Loopamp MTBC had higher sensitivity and lower specificity for smear-positive sputa compared to smear-negative sputa. In-house assays showed higher sensitivity and lower specificity compared to Loopamp MTBC. LAMP promises to be a useful test for the diagnosis of TB; however, the assay still needs to be made simpler, cheaper, and more efficient to be competitive with other PCR methods already available.
Topics: Cross-Sectional Studies; Diagnostic Tests, Routine; Humans; Mycobacterium tuberculosis; Nucleic Acid Amplification Techniques; Odds Ratio; Sensitivity and Specificity; Sputum; Tuberculosis
PubMed: 27958360
DOI: 10.1038/srep39090
Alzheimer Disease and Associated Disorders, 2011 (Review)
Review
OBJECTIVE
The purpose of this study was to review the relationship between education and dementia.
METHODS
A systematic literature review was conducted of all published studies examining the relationship between education and dementia listed in the PubMed and PsycINFO databases from January 1985 to July 2010. The inclusion criteria were a measure of education and a dementia diagnosis by a standardized diagnostic procedure. Alzheimer disease and Total Dementia were the outcomes.
RESULTS
A total of 88 study populations from 71 studies met inclusion criteria. Overall, 51 study populations (58%) reported significant effects of lower education on risk for dementia, whereas 37 (42%) reported no significant relationship. A relationship between education and risk for dementia was more consistent in developed regions compared with developing regions. Age, sex, race/ethnicity, and geographical region moderated the relationship.
CONCLUSIONS
Lower education was associated with a greater risk for dementia in many but not all studies. The level of education associated with risk for dementia varied by study population and more years of education did not uniformly attenuate the risk for dementia. It seemed that a more consistent relationship with dementia occurred when years of education reflected cognitive capacity, suggesting that the effect of education on risk for dementia may be best evaluated within the context of a lifespan developmental model.
Topics: Alzheimer Disease; Dementia; Educational Measurement; Educational Status; Humans; Randomized Controlled Trials as Topic; Risk Factors
PubMed: 21750453
DOI: 10.1097/WAD.0b013e318211c83c
Academic Emergency Medicine, Nov 2022 (Meta-Analysis)
Meta-Analysis Review
BACKGROUND
The Clinical Frailty Scale (CFS) is a representative frailty assessment tool in medicine. This systematic review and meta-analysis aimed to examine whether frailty defined based on the CFS could adequately predict short-term mortality in emergency department (ED) patients.
METHODS
PubMed, EMBASE, and the Cochrane Library were searched for eligible studies until December 23, 2021. We included studies in which frailty was measured by the CFS and short-term mortality was reported for ED patients. All studies were screened by two independent researchers. Sensitivity, specificity, positive likelihood ratio (PLR), and negative likelihood ratio (NLR) values were calculated based on the data extracted from each study. Additionally, the diagnostic odds ratio (DOR) was calculated for effect size analysis, and the area under the curve (AUC) of the summary receiver operating characteristic was calculated. Outcomes were in-hospital and 1-month mortality for patients with CFS scores of ≥5, ≥6, and ≥7.
RESULTS
Overall, 17 studies (n = 45,022) were included. Although there was no evidence of publication bias, a high degree of heterogeneity was observed. For the CFS score of ≥5, the PLR, NLR, and DOR values for in-hospital mortality were 1.446 (95% confidence interval [CI] 1.325-1.578), 0.563 (95% CI 0.355-0.893), and 2.728 (95% CI 1.872-3.976), respectively. In addition, the pooled statistics for 1-month mortality were 1.566 (95% CI 1.241-1.976), 0.582 (95% CI 0.430-0.789), and 2.696 (95% CI 1.673-4.345), respectively. Subgroup analysis of trauma patients revealed that the CFS score of ≥5 could adequately predict in-hospital mortality (PLR 1.641, 95% CI 1.242-2.170; NLR 0.580, 95% CI 0.461-0.729; DOR 2.883, 95% CI 1.994-4.168). The AUC values indicated sufficient-to-good diagnostic accuracy.
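The PLR, NLR, and DOR statistics are all derived from the same 2x2 table of CFS cutoff versus outcome. A sketch with invented counts (not data from the included studies):

```python
def likelihood_ratios(tp: int, fp: int, fn: int, tn: int):
    """Positive/negative likelihood ratios and the DOR from a 2x2 table.
    Here 'test positive' means CFS at or above the chosen cutoff, and
    'disease' means short-term death."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    plr = sens / (1 - spec)     # how much a high CFS raises the odds of death
    nlr = (1 - sens) / spec     # how much a low CFS lowers the odds of death
    return plr, nlr, plr / nlr  # DOR is the ratio of the two

# Illustrative counts only: 180 deaths and 2900 survivors split by a cutoff.
plr, nlr, dor = likelihood_ratios(tp=120, fp=900, fn=60, tn=2000)
print(f"PLR {plr:.3f}, NLR {nlr:.3f}, DOR {dor:.3f}")
```

PLR values only slightly above 1 and NLR values well below 1, as in this review, mean the CFS is more useful for ruling out short-term death than for ruling it in.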
CONCLUSIONS
Evidence published to date suggests that the CFS is an accurate and reliable tool for predicting short-term mortality in emergency patients.
Topics: Humans; Frailty; Diagnostic Tests, Routine; ROC Curve; Hospital Mortality
PubMed: 35349205
DOI: 10.1111/acem.14493
Value in Health, Dec 2011 (Review)
Review
OBJECTIVE
To review and evaluate the literature of cost-utility analyses (CUAs) regarding diagnostic laboratory testing.
METHODS
We reviewed all articles related to diagnostic laboratory testing in the Tufts Medical Center Cost-Effectiveness Analysis Registry (www.cearegistry.org), which contains detailed information on over 2000 published CUAs through 2008. We analyzed the extent to which the studies adhered to recommended practices for conducting and reporting cost-effectiveness analyses. We also recorded whether the studies contained information on diagnostic test accuracy and costs, and whether any account was taken of potential benefits or harms of testing that are unrelated to subsequent treatment, such as the reassurance value of testing.
RESULTS
We identified 141 published CUAs pertaining to diagnostic laboratory testing published through 2008, which contained 433 separate incremental cost-effectiveness ratios (ICERs). Prior to 2000, only 20 CUAs had been published, but the number averaged 13.4 annually thereafter. Most studies focused on hematology/oncology (n = 42, 30%) and obstetrics/gynecology (n = 36, 26%) applications. Approximately 63% (n = 89) of studies clearly reported information about the accuracy of the test, but only 10% (n = 14) mentioned test safety or possible risks. A small number (n = 10, 7%) mentioned or considered the potential value or harm of testing unrelated to treatment consequences. Over 55% of the reported ICERs were either dominant (more quality-adjusted life years for less cost) or below $50,000 per quality-adjusted life year gained (in 2008 US dollars).
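"Dominant" and "$50,000 per QALY" refer to how incremental cost-effectiveness ratios are computed and classified. A minimal sketch with hypothetical strategy costs and QALYs (not figures from the registry):

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio of a new testing strategy versus
    a reference strategy; returns 'dominant' when the new strategy is cheaper
    (or cost-neutral) and more effective, 'dominated' in the reverse case."""
    d_cost = cost_new - cost_ref
    d_qaly = qaly_new - qaly_ref
    if d_qaly > 0 and d_cost <= 0:
        return "dominant"    # more QALYs for less (or equal) cost
    if d_qaly <= 0 and d_cost >= 0:
        return "dominated"   # costlier and no more effective
    return d_cost / d_qaly   # dollars per QALY gained

# Hypothetical comparison: a test strategy costing $800 more per patient
# and yielding 0.5 extra QALYs.
print(icer(cost_new=1200, qaly_new=10.5, cost_ref=400, qaly_ref=10.0))  # 1600.0
```

An ICER of $1,600/QALY would fall comfortably under the $50,000/QALY benchmark the review uses as its "good value" threshold.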
CONCLUSION
The number of CUAs investigating laboratory diagnostic testing has increased substantially with applications to diverse clinical areas. The literature reveals many areas in which testing represents good value for money. The vast majority of studies have not considered preferences for test information unrelated to treatment consequences.
Topics: Clinical Laboratory Techniques; Cost-Benefit Analysis; Diagnostic Tests, Routine; Guideline Adherence; Guidelines as Topic; Humans; Quality-Adjusted Life Years; Registries
PubMed: 22152169
DOI: 10.1016/j.jval.2011.05.044