Clinical and Vaccine Immunology : CVI Apr 2016
Review
A concern during the early AIDS epidemic was the lack of a test to identify individuals who carried the virus. The first HIV antibody test, developed in 1985, was designed to screen blood products, not to diagnose AIDS. The first-generation assays detected IgG antibody and became positive 6 to 12 weeks postinfection. False-positive results occurred; thus, a two-test algorithm was developed using a Western blot or immunofluorescence test as a confirmatory procedure. The second-generation HIV test added recombinant antigens, and the third-generation HIV tests included IgM detection, reducing the test-negative window to approximately 3 weeks postinfection. Fourth- and fifth-generation HIV assays added p24 antigen detection to the screening assay, reducing the test-negative window to 11 to 14 days. A new algorithm addressed the fourth-generation assay's ability to detect both antibody and antigen and yet not differentiate between them. The fifth-generation HIV assay provides separate antigen and antibody results and will require yet another algorithm. HIV infection may now be detected approximately 2 weeks postexposure, with a reduced number of false-positive results.
Topics: Diagnostic Tests, Routine; HIV; HIV Infections; Humans; Immunoassay
PubMed: 26936099
DOI: 10.1128/CVI.00053-16
Critical Care Nursing Clinics of North... Mar 2010
Topics: Critical Care; Diagnostic Tests, Routine; Humans; Nurse's Role; Patient Care Planning; Patient Education as Topic; Sensitivity and Specificity
PubMed: 20193874
DOI: 10.1016/j.ccell.2009.10.014
Clinical Microbiology and Infection :... Mar 2019
Review
BACKGROUND
Parasitic diseases are among the world's most devastating and prevalent infections, causing millions of illnesses and deaths annually. In the past, many of these infections were linked predominantly to tropical or subtropical areas. Nowadays, however, changes in climate and vector ecology, a significant increase in international travel, armed conflicts, and the migration of humans and animals have moved the transmission of some parasitic diseases from textbook pages to reality in developed countries. It has also been noted that many patients who have never travelled to endemic areas suffer from blood-borne infections caused by protozoa. In the light of existing knowledge, this trend can be explained by the fact that, through migration, a large number of asymptomatic carriers become part of the blood-donor and transplant-donor populations. Accurate and rapid diagnosis is the crucial weapon in the fight against parasitic infections.
AIMS
To review old and new approaches for rapid diagnosis of parasitic infections.
SOURCES
Data for this review were obtained through searches of PubMed using combinations of the following terms: parasitological diagnostics, microscopy, lateral flow assays, immunochromatographic assays, multiplex-PCR, and transplantation.
CONTENT
In this review, we provide a brief account of the advantages and limitations of rapid methods for diagnosis of parasitic diseases and focus our attention on current and future research in this area. The approximate costs associated with the use of different techniques and their applicability in endemic and non-endemic areas are also discussed.
IMPLICATIONS
Microscopy remains the cornerstone of parasitological diagnostics, especially in the field and in low-resource settings, and provides epidemiological assessment of parasite burden. However, the increased use and availability of point-of-care tests and molecular assays in the modern era allow more rapid and accurate diagnoses and increased sensitivity in the identification of parasitic infections.
Topics: Animals; Diagnostic Tests, Routine; Humans; Microscopy; Molecular Diagnostic Techniques; Parasites; Parasitic Diseases; Parasitology; Point-of-Care Testing
PubMed: 29730224
DOI: 10.1016/j.cmi.2018.04.028
The Lancet. Infectious Diseases Mar 2014
Review
The aim of diagnostic point-of-care testing is to minimise the time to obtain a test result, thereby allowing clinicians and patients to make a quick clinical decision. Because point-of-care tests are used in resource-limited settings, the benefits need to outweigh the costs. To optimise point-of-care testing in resource-limited settings, diagnostic tests need rigorous assessments focused on relevant clinical outcomes and operational costs, which differ from assessments of conventional diagnostic tests. We reviewed published studies on point-of-care testing in resource-limited settings, and found no clearly defined metric for the clinical usefulness of point-of-care testing. Therefore, we propose a framework for the assessment of point-of-care tests, and suggest and define the term test efficacy to describe the ability of a diagnostic test to support a clinical decision within its operational context. We also propose revised criteria for an ideal diagnostic point-of-care test in resource-limited settings. Through systematic assessments, comparisons between centralised testing and novel point-of-care technologies can be more formalised, and health officials can better establish which point-of-care technologies represent valuable additions to their clinical programmes.
Topics: Diagnostic Tests, Routine; Health Resources; Humans; Point-of-Care Systems
PubMed: 24332389
DOI: 10.1016/S1473-3099(13)70250-0
Presse Medicale (Paris, France : 1983) Sep 2016
The diagnosis of a perioperative allergic reaction is based on clinical features associated with a suggestive timeline, the exclusion of other diagnoses, elevated concentrations of degranulation markers (histamine, tryptase), and positive allergy assessments (skin tests, specific IgE). After initiating appropriate treatment, the anesthesiologist should take blood samples to measure histamine and tryptase concentrations just after the reaction and repeat them 1-2 hours later to validate the diagnosis of immediate hypersensitivity. A delayed measurement of basal tryptase is useful to rule out mastocytosis and to interpret moderate tryptase levels. The anesthesiologist must inform the patient of the reaction to obtain the patient's agreement and consent to subsequent investigations, and must record the timing of the reaction and of the blood sampling, the possible causal agents, and the treatment administered. These data must be shared with the laboratory and the allergist. An adverse drug reaction report must be filed. The gold standard for allergy assessment is skin testing. These tests should be done in an appropriate facility, with experienced staff and in compliance with current guidelines. Specific IgE assays and cellular assays can help when clinical features and skin tests are discordant. Provocation tests are sometimes required. After allergy assessment, the safest protocol for subsequent anesthesia is determined in collaboration with the anesthesiologist. The patient must be informed and carry an allergy alert card.
Topics: Decision Trees; Diagnostic Tests, Routine; Humans; Hypersensitivity, Immediate; Intraoperative Complications; Operating Rooms
PubMed: 27374263
DOI: 10.1016/j.lpm.2016.05.016
Neonatology 2019
Review
Recent advances in molecular and mass screening technologies have paved the way for discovery of novel diagnostic tests and/or biomarkers for accurate identification of specific diseases and organ injuries. However, new diagnostic tests/biomarkers should be subjected to thorough evaluation before introduction into routine clinical practice. This review focuses on the up-to-date methodology in designing and evaluating diagnostic tests/biomarkers, and assessing their clinical utilities for improving health care efficiency, cost-effectiveness and outcomes. In addition to improved diagnostic utilities, future diagnostic tests should be developed in collaboration with our industrial partners and be applicable at the bedside for disease surveillance.
Topics: Biomarkers; Cooperative Behavior; Cost-Benefit Analysis; Diagnostic Tests, Routine; Humans; Research Design
PubMed: 30580336
DOI: 10.1159/000492777
Biomarkers in Medicine 2015
Review
The emergence of companion diagnostic devices has been spurred by drug discovery and development efforts towards targeted therapies, particularly in oncology. Companion diagnostics and their corresponding therapeutics are often codeveloped, or developed in parallel, to ensure the safe and effective use of the products. The regulatory framework for companion diagnostics has gradually evolved as a result of the essential role of diagnostic tests to identify the intended population for a corresponding treatment. Here, we describe the current regulatory model for companion diagnostics in the US and outline key strategies for a successful codevelopment program from the device perspective. We also discuss how technological advances and changes in clinical management may challenge the regulatory model in the future.
Topics: Biomarkers; Diagnostic Tests, Routine; Government Regulation; Humans; Neoplasms; Precision Medicine
PubMed: 25605456
DOI: 10.2217/bmm.14.98
European Journal of Endocrinology Feb 2021
Diagnostic accuracy studies are fundamental to the assessment of diagnostic tests. Researchers need to understand the implications of their chosen design, opting for comparative designs where possible. They should analyse test accuracy studies using appropriate methods, acknowledge the uncertainty of the results, avoid overstating conclusions, and not ignore the clinical situation, which should inform the trade-off between sensitivity and specificity. Test accuracy studies should be reported transparently using the STAndards for the Reporting of Diagnostic accuracy studies (STARD) checklist.
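The sensitivity-specificity trade-off discussed in this abstract can be made concrete with a minimal sketch: computing both metrics from confusion-matrix counts and sweeping a decision threshold. All function names, scores, and counts below are hypothetical illustrations, not from the cited study.

```python
# Minimal sketch (hypothetical data): sensitivity and specificity
# from a 2x2 confusion matrix, with a threshold sweep that shows
# the trade-off between the two.

def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true-positive rate among diseased
    specificity = tn / (tn + fp)   # true-negative rate among healthy
    return sensitivity, specificity

def confusion_counts(scores_diseased, scores_healthy, threshold):
    """Classify score >= threshold as test-positive; return (tp, fn, tn, fp)."""
    tp = sum(s >= threshold for s in scores_diseased)
    fn = len(scores_diseased) - tp
    fp = sum(s >= threshold for s in scores_healthy)
    tn = len(scores_healthy) - fp
    return tp, fn, tn, fp

if __name__ == "__main__":
    diseased = [0.9, 0.8, 0.75, 0.6, 0.4]   # hypothetical test scores
    healthy = [0.5, 0.35, 0.3, 0.2, 0.1]
    for thr in (0.3, 0.5, 0.7):
        tp, fn, tn, fp = confusion_counts(diseased, healthy, thr)
        sens, spec = sensitivity_specificity(tp, fn, tn, fp)
        print(f"threshold={thr:.1f}  sensitivity={sens:.2f}  specificity={spec:.2f}")
```

Lowering the threshold raises sensitivity at the cost of specificity, and vice versa; as the abstract notes, which point on that curve is acceptable depends on the clinical situation.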
Topics: Checklist; Diagnostic Techniques, Endocrine; Diagnostic Tests, Routine; Humans; Random Allocation; Reference Values; Research Design; Sample Size; Sensitivity and Specificity
PubMed: 33410763
DOI: 10.1530/EJE-20-1239
Trials Aug 2012
Clinicians, patients, governments, third-party payers, and the public take for granted that diagnostic tests are accurate, safe, and effective. However, without robust study designs we may be seriously misled about whether diagnostic tests actually are accurate, safe, and effective. Properly conducted randomized controlled trials are the gold standard for assessing the effectiveness and safety of interventions, yet they are rarely conducted in the assessment of diagnostic tests. Instead, diagnostic cohort studies are commonly performed to assess the characteristics of a diagnostic test, including sensitivity and specificity. While diagnostic cohort studies can inform us about the relative accuracy of an experimental diagnostic intervention compared with a reference standard, they do not tell us whether the differences in accuracy are clinically important, or how important (in other words, their impact on patient outcomes). In this commentary we outline the advantages of the diagnostic randomized controlled trial and call for greater awareness and uptake of this design. Doing so will better ensure that patients are offered diagnostic procedures that make a clinical difference.
Topics: Diagnostic Tests, Routine; Humans; Observer Variation; Predictive Value of Tests; Prognosis; Randomized Controlled Trials as Topic; Reference Standards; Reproducibility of Results; Research Design
PubMed: 22897974
DOI: 10.1186/1745-6215-13-137
Lancet (London, England) Nov 1989
Topics: Attitude of Health Personnel; Clinical Competence; Consultants; Diagnostic Tests, Routine; Economics, Hospital; Humans; Medical History Taking; Physical Examination
PubMed: 2572906
DOI: No ID Found