International Journal of Environmental... May 2023
Review
(1) Background: Immunological laboratory testing is known to be complex, and it is usually performed in tertiary referral centers. Many criticalities affect diagnostic immunological testing, such as limited availability, the need for specifically trained laboratory staff, and potential difficulties in collecting blood samples, especially in the most vulnerable patients, i.e., the elderly and children. For this reason, the identification of a new feasible and reliable methodology for autoantibody detection is urgently needed. (2) Methods: We designed a systematic review to investigate the available literature on the use of saliva samples for immunological testing. (3) Results: A total of 170 articles were identified. Eighteen studies met the inclusion criteria, accounting for 1059 patients and 671 controls. The most common saliva collection method was passive drooling (11/18, 61%), and the most frequently described methodology for antibody detection was ELISA (12/18, 67%). The analysis included 392 patients with rheumatoid arthritis, 161 with systemic lupus erythematosus, 131 with type 1 diabetes mellitus, 116 with primary biliary cholangitis, 100 with pemphigus vulgaris, 50 with bullous pemphigoid, 49 with Sjogren syndrome, 39 with celiac disease, 10 with primary antiphospholipid syndromes, 8 with undifferentiated connective tissue disease, 2 with systemic sclerosis, and 1 with autoimmune thyroiditis. The majority of the reviewed studies involved adequate controls, and saliva testing allowed for a clear distinction of patients (10/12 studies, 83%). More than half of the papers showed a correlation between saliva and serum results (10/18, 55%) for autoantibody detection, with varying rates of correlation, sensitivity, and specificity. Interestingly, many papers showed a correlation between saliva antibody results and clinical manifestations.
(4) Conclusions: Saliva testing might represent an appealing alternative to serum-based testing for autoantibody detection, considering the correspondence with serum testing results and the correlation with clinical manifestations. Nonetheless, standardization of sample collection, processing, maintenance, and detection methodology has yet to be fully addressed.
Topics: Child; Humans; Aged; Saliva; Lupus Erythematosus, Systemic; Sjogren's Syndrome; Autoantibodies; Arthritis, Rheumatoid
PubMed: 37239511
DOI: 10.3390/ijerph20105782
Health Technology Assessment... Oct 2022
BACKGROUND
Coeliac disease is an autoimmune disorder triggered by ingesting gluten. It affects approximately 1% of the UK population, but only one in three people is thought to have a diagnosis. Untreated coeliac disease may lead to malnutrition, anaemia, osteoporosis and lymphoma.
OBJECTIVES
The objectives were to define at-risk groups and determine the cost-effectiveness of active case-finding strategies in primary care.
DESIGN
(1) Systematic review of the accuracy of potential diagnostic indicators for coeliac disease. (2) Routine data analysis to develop prediction models for identification of people who may benefit from testing for coeliac disease. (3) Systematic review of the accuracy of diagnostic tests for coeliac disease. (4) Systematic review of the accuracy of genetic tests for coeliac disease (literature search conducted in April 2021). (5) Online survey to identify diagnostic thresholds for testing, starting treatment and referral for biopsy. (6) Economic modelling to identify the cost-effectiveness of different active case-finding strategies, informed by the findings from previous objectives.
DATA SOURCES
For the first systematic review, the following databases were searched from 1997 to April 2021: MEDLINE (National Library of Medicine, Bethesda, MD, USA), Embase (Elsevier, Amsterdam, the Netherlands), Cochrane Library, Web of Science™ (Clarivate™, Philadelphia, PA, USA), the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP) and the National Institutes of Health Clinical Trials database. For the second systematic review, the following databases were searched from January 1990 to August 2020: MEDLINE, Embase, Cochrane Library, Web of Science, Kleijnen Systematic Reviews (KSR) Evidence, WHO ICTRP and the National Institutes of Health Clinical Trials database. For prediction model development, Clinical Practice Research Datalink GOLD, Clinical Practice Research Datalink Aurum and a subcohort of the Avon Longitudinal Study of Parents and Children were used; for estimates for the economic models, Clinical Practice Research Datalink Aurum was used.
REVIEW METHODS
For review 1, cohort and case-control studies reporting on a diagnostic indicator in a population with and a population without coeliac disease were eligible. For review 2, diagnostic cohort studies including patients presenting with coeliac disease symptoms who were tested with serological tests for coeliac disease and underwent a duodenal biopsy as the reference standard were eligible. In both reviews, risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool. Bivariate random-effects meta-analyses were fitted, in which binomial likelihoods for the numbers of true positives and true negatives were assumed.
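As a hedged sketch of the study-level quantities that feed such a bivariate meta-analysis, the snippet below computes per-study sensitivity, specificity, and their logits from 2×2 counts; the counts are invented for illustration and are not taken from the review:

```python
import math

def logit(p):
    """Log-odds transform applied to study-level sensitivity and specificity."""
    return math.log(p / (1 - p))

# Hypothetical per-study 2x2 counts: (true pos, false neg, true neg, false pos).
studies = [(90, 10, 80, 20), (45, 5, 60, 15)]

for tp, fn, tn, fp in studies:
    # Binomial likelihoods: TP ~ Bin(TP + FN, sens), TN ~ Bin(TN + FP, spec).
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    print(f"sens={sens:.3f} spec={spec:.3f} "
          f"logit_sens={logit(sens):.3f} logit_spec={logit(spec):.3f}")
```

A full bivariate random-effects fit would then model the two logits jointly with a between-study normal distribution; implementations exist in packages such as R's `mada` or Stata's `midas`.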
RESULTS
People with dermatitis herpetiformis, a family history of coeliac disease, migraine, anaemia, type 1 diabetes, osteoporosis or chronic liver disease are 1.5-2 times more likely than the general population to have coeliac disease; individual gastrointestinal symptoms were not useful for identifying coeliac disease. For children, women and men, prediction models included 24, 24 and 21 indicators of coeliac disease, respectively. The models showed good discrimination between patients with and patients without coeliac disease, but performed less well when externally validated. Serological tests were found to have good diagnostic accuracy for coeliac disease. Immunoglobulin A tissue transglutaminase had the highest sensitivity and endomysial antibody the highest specificity. There was little improvement when tests were used in combination. Survey respondents (n = 472) wanted to be 66% certain of the diagnosis from a blood test before starting a gluten-free diet if symptomatic, and 90% certain if asymptomatic. Cost-effectiveness analyses found that, among adults, and using serological testing alone, immunoglobulin A tissue transglutaminase was most cost-effective at a 1% pre-test probability (equivalent to population screening). Strategies using immunoglobulin A endomysial antibody plus human leucocyte antigen or human leucocyte antigen plus immunoglobulin A tissue transglutaminase with any pre-test probability had similar cost-effectiveness results, which were also similar to the cost-effectiveness results of immunoglobulin A tissue transglutaminase at a 1% pre-test probability. The most practical alternative for implementation within the NHS is likely to be a combination of human leucocyte antigen and immunoglobulin A tissue transglutaminase testing among those with a pre-test probability above 1.5%.
Among children, the most cost-effective strategy was a 10% pre-test probability with human leucocyte antigen plus immunoglobulin A tissue transglutaminase, but there was uncertainty around the most cost-effective pre-test probability. There was substantial uncertainty in economic model results, which means that there would be great value in conducting further research.
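The interplay between pre-test probability and test accuracy that drives these results can be illustrated with Bayes' rule; the sensitivity and specificity below are hypothetical values for illustration, not figures from the report:

```python
def post_test_probability(pre, sens, spec):
    """Probability of disease after a positive test, by Bayes' rule."""
    true_pos = pre * sens
    false_pos = (1 - pre) * (1 - spec)
    return true_pos / (true_pos + false_pos)

# Hypothetical serological test characteristics.
sens, spec = 0.90, 0.95
for pre in (0.01, 0.015, 0.10):
    post = post_test_probability(pre, sens, spec)
    print(f"pre-test {pre:.1%} -> post-test {post:.1%}")
```

Under these assumed figures, raising the pre-test probability from 1% to 10% roughly quadruples the certainty conferred by a positive result, which illustrates why targeting higher-risk groups makes a positive test far more conclusive.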
LIMITATIONS
The interpretation of meta-analyses was limited by the substantial heterogeneity between the included studies, and most included studies were judged to be at high risk of bias. The main limitations of the prediction models were that we were restricted to diagnostic indicators that were recorded by general practitioners and that, because coeliac disease is underdiagnosed, it is also under-reported in health-care data. The cost-effectiveness model is a simplification of coeliac disease and modelled an average cohort rather than individuals. Evidence was weak on the probability of routine coeliac disease diagnosis, the accuracy of serological and genetic tests and the utility of a gluten-free diet.
CONCLUSIONS
Population screening with immunoglobulin A tissue transglutaminase (1% pre-test probability), and strategies of immunoglobulin A endomysial antibody followed by human leucocyte antigen testing or human leucocyte antigen testing followed by immunoglobulin A tissue transglutaminase with any pre-test probability, appear to have similar cost-effectiveness results. As decisions to implement population screening cannot be made based on our economic analysis alone, and given the practical challenges of identifying patients with higher pre-test probabilities, we recommend that human leucocyte antigen testing combined with immunoglobulin A tissue transglutaminase testing should be considered for adults with at least a 1.5% pre-test probability of coeliac disease, equivalent to having at least one predictor. A more targeted strategy of 10% pre-test probability is recommended for children (e.g. children with anaemia).
FUTURE WORK
Future work should consider whether or not population-based screening for coeliac disease could meet the UK National Screening Committee criteria and whether or not it necessitates a long-term randomised controlled trial of screening strategies. Large prospective cohort studies in which all participants receive accurate tests for coeliac disease are needed.
STUDY REGISTRATION
This study is registered as PROSPERO CRD42019115506 and CRD42020170766.
FUNDING
This project was funded by the National Institute for Health and Care Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 26, No. 44. See the NIHR Journals Library website for further project information.
Topics: United States; Adult; Child; Male; Humans; Female; Celiac Disease; Longitudinal Studies; Prospective Studies; Skin Neoplasms; Immunoglobulin A; Osteoporosis; Randomized Controlled Trials as Topic
PubMed: 36321689
DOI: 10.3310/ZUCE8371
Blood Advances Jun 2023
Paroxysmal cold hemoglobinuria (PCH) is a rare autoimmune hemolytic anemia often overlooked as a potential etiology of hemolysis and is challenging to diagnose because of the complicated testing methods required. We performed a systematic review of all reported cases to better assess the clinical, immunohematologic, and therapeutic characteristics of PCH. We systematically analyzed PubMed, Medline, and EMBASE to identify all cases of PCH confirmed by Donath-Landsteiner (DL) testing. Three authors independently screened articles for inclusion, and systematically extracted epidemiologic, clinical, laboratory, treatment, and outcomes data. Discrepancies were adjudicated by a fourth author. We identified 230 cases, with median presentation hemoglobin of 6.5 g/dL and nadir of 5.5 g/dL. The most common direct antiglobulin test (DAT) result was the presence of complement and absence of immunoglobulin G (IgG) bound to red blood cells, although other findings were observed in one-third of cases. DL antibody class and specificity were reported for 71 patients, of which 83.1% were IgG anti-P. The use of corticosteroids is common, although we found no significant difference in the length of hospitalization for patients with and without steroid therapy. Recent reports have highlighted the use of complement inhibitors. Among patients with follow-up, 99% (213 of 216) were alive at the time of reporting. To our knowledge, this represents the largest compilation of PCH cases to date. We discovered that contemporary PCH most commonly occurs in children with a preceding viral infection, corticosteroid use is frequent (but potentially ineffective), and DAT results are more disparate than traditionally reported.
Topics: Child; Humans; Hemoglobinuria, Paroxysmal; Erythrocytes; Anemia, Hemolytic, Autoimmune; Adrenal Cortex Hormones; Immunoglobulin G
PubMed: 36716137
DOI: 10.1182/bloodadvances.2022009516
BioMed Research International 2023
Meta-Analysis Review
INTRODUCTION
Schistosomiasis causes high morbidity and significant mortality in endemic areas. The Kato-Katz stool examination and urine filtration techniques are the conventional methods for the detection of intestinal and urinary schistosomiasis, respectively. The most appropriate diagnostic tools should be used for the detection of schistosomiasis, especially in low-prevalence settings. Therefore, this study aimed to investigate the diagnostic accuracy of diagnostic tools for intestinal and urinary schistosomiasis in sub-Saharan Africa.
METHODS
Electronic databases, including PubMed, PubMed Central/Medline, HINARI, Scopus, EMBASE, Science Direct, Google Scholar, and the Cochrane Library, were searched. The pooled estimates and heterogeneity were determined using Midas in Stata 14.0. The diagnostic accuracy of the index tests was compared using the hierarchical summary receiver operating characteristic (HSROC) curve in Stata 14.0.
RESULTS
Twenty-four studies comprising 12,370 individuals were included to evaluate the accuracy of antigen, antibody, and molecular test methods for the detection of intestinal and urinary schistosomiasis. The pooled sensitivity and specificity of CCA were 88% (95% CI: 83-92) and 72% (95% CI: 62-80), respectively, when compared with parasitological stool examination for the detection of intestinal schistosomiasis. On the other hand, ELISA showed a pooled sensitivity and specificity of 95% (95% CI: 93-96) and 35% (95% CI: 21-52), respectively, with stool examination as the reference test. With regard to urinary schistosomiasis, the pooled sensitivity and specificity of polymerase chain reaction were 97% (95% CI: 78-100) and 94% (95% CI: 74-99), respectively. Moreover, the sensitivity and specificity of urine CCA varied between 41-80% and 55-91%, respectively, compared to urine microscopy.
CONCLUSION
The effort to eliminate schistosomiasis requires accurate case identification, especially in low-intensity infections. This study showed that CCA had the highest sensitivity and moderate specificity for the diagnosis of intestinal schistosomiasis. Similarly, the sensitivity of ELISA was excellent, but its specificity was low. The diagnostic accuracy of PCR for the detection of urinary schistosomiasis was excellent compared to urine microscopic examination.
Topics: Humans; Animals; Microscopy; Schistosoma mansoni; Urinalysis; Africa South of the Sahara; Diagnostic Tests, Routine
PubMed: 37621699
DOI: 10.1155/2023/3769931
F1000Research 2021
Mass testing and adequate management are essential to terminate the spread of coronavirus disease 2019 (COVID-19). Such testing is needed because of the possibility of unidentified cases, especially ones without COVID-19-related symptoms. This review aimed to examine the outcomes of existing studies on ways of identifying COVID-19 cases, and to determine the populations at risk and the symptom and diagnostic test management of COVID-19. The articles reviewed were scientific publications in the PubMed, Science Direct, ProQuest, and Scopus databases. The keywords used to obtain the data were COVID-19, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and case detection, case management or diagnostic test. We applied the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and Population, Intervention, Control and Outcomes (PICO) approaches. A total of 21 articles from 13 countries met the inclusion criteria and were analyzed qualitatively. Notably, 62% of the articles used a rapid antibody test for screening rather than a rapid antigen test. According to the rapid antigen test, 51.3% of cases were positive, with men aged above 50 years recording the highest number of cases. Furthermore, 57.1% of patients were symptomatic, and the diagnostic tests' sensitivity and specificity increased to 100% at 14 days after onset. Real-time polymerase chain reaction (RT-PCR) is recommended by the World Health Organization for the detection of COVID-19. If it is unavailable, the rapid antigen test should be used as an alternative rather than the rapid antibody test. To achieve an early diagnosis of COVID-19, rapid tests can be used in stages according to disease progression, such as a rapid antigen test in the first week and an antibody test in the second week, with confirmation by RT-PCR and serological assay.
Topics: COVID-19; COVID-19 Testing; Clinical Laboratory Techniques; Humans; Male; SARS-CoV-2; Sensitivity and Specificity
PubMed: 35719313
DOI: 10.12688/f1000research.50929.3
PLoS Neglected Tropical Diseases 2013
Review
BACKGROUND
Strongyloidiasis is frequently underdiagnosed, since many infections remain asymptomatic and conventional diagnostic tests based on parasitological examination are not sufficiently sensitive. Serology is useful but is still only available in reference laboratories. The need for improved diagnostic tests in terms of sensitivity and specificity is clear, particularly in immunocompromised patients or candidates for immunosuppressive treatment. This review aims to evaluate both conventional and novel techniques for the diagnosis of strongyloidiasis, as well as available cure markers for this parasitic infection.
METHODOLOGY/PRINCIPAL FINDINGS
The search strategy was based on the database sources MEDLINE, the Cochrane Library register for systematic reviews, Embase, Global Health and LILACS, and was limited to articles published from 1960 to August 2012 in English, Spanish, French, Portuguese or German. Case reports, case series and animal studies were excluded. A total of 2003 potentially relevant citations were selected for retrieval, of which 1649 were selected for review of the abstract and 143 were eligible for final inclusion.
CONCLUSIONS
The sensitivity of microscopy-based techniques is not good enough, particularly in chronic infections. Furthermore, techniques such as the Baermann method or agar plate culture are cumbersome and time-consuming, and several specimens should be collected on different days to improve the detection rate. Serology is a useful tool, but it might overestimate the prevalence of disease due to cross-reactivity with other nematode infections and its difficulty distinguishing recent from past (and cured) infections. Evaluating treatment efficacy remains a major concern, because direct parasitological methods might overestimate it and serology has not yet been well evaluated; even though antibody titres decline after treatment, the decline is slow and must be assessed 6 to 12 months after treatment, which can cause a substantial loss to follow-up in a clinical trial.
Topics: Animals; Clinical Laboratory Techniques; Drug Monitoring; Humans; Parasitology; Sensitivity and Specificity; Strongyloides; Strongyloidiasis
PubMed: 23350004
DOI: 10.1371/journal.pntd.0002002
Inflammatory Bowel Diseases Oct 2012
Review
BACKGROUND
Anti-glycan antibody serologic markers may serve as a useful adjunct in the diagnosis/prognosis of inflammatory bowel disease (IBD), including Crohn's disease (CD) and ulcerative colitis (UC). This meta-analysis/systematic review aimed to evaluate the diagnostic value of anti-glycan biomarkers, as well as their association with IBD-susceptibility gene variants, disease complications, and the need for surgery in IBD.
METHODS
The diagnostic odds ratio (DOR), 95% confidence interval (CI), and sensitivity/specificity were used to compare the diagnostic value of individual anti-glycan markers and their combinations, and their association with disease course (complications and/or need for surgery).
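As a minimal sketch of how a diagnostic odds ratio relates to a 2×2 table and to sensitivity/specificity (the counts below are hypothetical, chosen only to show that the two forms agree):

```python
def dor_from_counts(tp, fn, tn, fp):
    """DOR = (TP/FN) / (FP/TN): odds of a positive test in disease vs. non-disease."""
    return (tp / fn) / (fp / tn)

def dor_from_sens_spec(sens, spec):
    """Equivalent expression in terms of sensitivity and specificity."""
    return (sens * spec) / ((1 - sens) * (1 - spec))

# Hypothetical counts: sens = 80/100 = 0.80, spec = 90/100 = 0.90.
print(round(dor_from_counts(80, 20, 90, 10), 6))   # 36.0
print(round(dor_from_sens_spec(0.80, 0.90), 6))    # 36.0
```

A DOR of 1 means the test carries no diagnostic information; the DOR values quoted below (e.g. 21.1 for ASCA) sit well above that.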
RESULTS
Fourteen studies were included in the systematic review and nine in the meta-analysis. Individually, anti-Saccharomyces cerevisiae antibodies (ASCA) had the highest DOR for differentiating IBD from healthy controls (DOR 21.1; CI 1.8-247.3; two studies) and CD from UC (DOR 10.2; CI 7.7-13.7; seven studies). For combinations of ≥2 markers, the DOR was 2.8 (CI 2.2-3.6; two studies) for CD-related surgery, higher than for any individual marker, while the DOR for differentiating CD from UC was 10.2 (CI 5.6-18.5; three studies) and for complications was 2.8 (CI 2.2-3.7; two studies), similar to individual markers.
CONCLUSIONS
ASCA had the highest diagnostic value among individual anti-glycan markers. While anti-chitobioside carbohydrate antibody (ACCA) had the highest association with complications, ASCA and ACCA were equally associated with the need for surgery. Although in most individual studies the combination of ≥2 markers had a better diagnostic value as well as a higher association with complications and the need for surgery, in our meta-analysis the combination performed only slightly better than any individual marker.
Topics: Antibodies, Anti-Idiotypic; Biomarkers; Case-Control Studies; Disease Progression; Humans; Inflammatory Bowel Diseases; Meta-Analysis as Topic; Polysaccharides; Prognosis
PubMed: 22294465
DOI: 10.1002/ibd.22862
PLoS One 2016
Meta-Analysis Review
With an expected sensitivity (Se) of 96% and specificity (Sp) of 98%, the immunofluorescence antibody test (IFAT) is frequently used as a reference test to validate new diagnostic methods and to estimate the true prevalence of canine leishmaniasis (CanL) in the Mediterranean basin. To review the diagnostic accuracy of IFAT for diagnosing CanL in this area with reference to its Se and Sp, and to elucidate the potential causes of their variation, a systematic review was conducted (31 studies over a 26-year period). Three IFAT validation methods stood out: the classical contingency-table method, methods based on statistical models, and those based on experimental studies. Variation in the IFAT Se and Sp values and in cut-off values was observed. For the classical validation method, based on a meta-analysis, the Se of IFAT in this area was estimated as 89.86% in symptomatic dogs and 31.25% in asymptomatic dogs. The Sp of IFAT was estimated as 98.12% in non-endemic areas and 96.57% in endemic areas. IFAT can be considered a good standard test for CanL in non-endemic areas, but its accuracy declines in endemic areas due to the complexity of the disease. Indeed, the accuracy of IFAT derives from the negative results obtained in non-infected dogs from non-endemic areas and the positive results obtained in sera of symptomatic dogs living in endemic areas; IFAT results are not unequivocal when it comes to determining CanL infection in asymptomatic dogs living in endemic areas. Statistical methods might be a solution to overcome the lack of a gold standard, to better categorize the groups of animals investigated, to assess optimal cut-off values, and to allow a better estimate of the true prevalence, informing preventive/control measures for CanL.
Topics: Animals; Dog Diseases; Dogs; Fluorescent Antibody Technique, Direct; Leishmaniasis; Mediterranean Region; Reproducibility of Results; Sensitivity and Specificity
PubMed: 27537405
DOI: 10.1371/journal.pone.0161051
The Cochrane Database of Systematic... Jun 2020
BACKGROUND
Plague is a severe disease associated with high mortality. Late diagnosis leads to advanced-stage disease, with worse outcomes and a higher risk of spread. A rapid diagnostic test (RDT) could help establish a prompt diagnosis of plague. This would improve patient care and support an appropriate public health response.
OBJECTIVES
To determine the diagnostic accuracy of the RDT based on the antigen F1 (F1RDT) for detecting plague in people with suspected disease.
SEARCH METHODS
We searched the CENTRAL, Embase, Science Citation Index, Google Scholar, the World Health Organization International Clinical Trials Registry Platform and ClinicalTrials.gov up to 15 May 2019, and PubMed (MEDLINE) up to 27 August 2019, regardless of language, publication status, or publication date. We handsearched the reference lists of relevant papers and contacted researchers working in the field.
SELECTION CRITERIA
We included cross-sectional studies that assessed the accuracy of the F1RDT for diagnosing plague, where participants were tested with both the F1RDT and at least one reference standard. The reference standards were bacterial isolation by culture, polymerase chain reaction (PCR), and paired serology (defined as a four-fold difference in F1 antibody titres between paired samples from the acute and convalescent phases).
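The paired-serology criterion can be expressed as a small check. Titres are given here as reciprocal dilutions (e.g. 1:64 recorded as 64), and this sketch assumes a four-fold change in either direction counts as positive; both conventions are assumptions for illustration:

```python
def paired_serology_positive(acute_titre, convalescent_titre):
    """Reference-standard criterion: at least a four-fold difference in F1
    antibody titre between the acute- and convalescent-phase samples."""
    lo, hi = sorted((acute_titre, convalescent_titre))
    return hi >= 4 * lo

print(paired_serology_positive(64, 256))  # True: a four-fold rise
print(paired_serology_positive(64, 128))  # False: only a two-fold change
```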
DATA COLLECTION AND ANALYSIS
Two review authors independently selected studies and extracted data. We appraised the methodological quality and applicability of each selected study using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. When meta-analysis was appropriate, we used the bivariate model to obtain pooled estimates of sensitivity and specificity. We stratified all analyses by the reference standard used and presented disaggregated data for the different forms of plague. We assessed the certainty of the evidence using GRADE.
MAIN RESULTS
We included eight manuscripts reporting seven studies. Studies were conducted in three countries in Africa among adults and children with any form of plague. All studies except one assessed the F1RDT produced at the Institut Pasteur of Madagascar (F1RDT-IPM); one study assessed a F1RDT produced by New Horizons (F1RDT-NH), utilized by the US Centers for Disease Control and Prevention. We could not pool the findings for the F1RDT-NH in meta-analyses due to a lack of raw data and a positivity threshold different from that of the F1RDT-IPM. Risk of bias was high for participant selection (retrospective studies, recruitment of participants not consecutive or random, unclear exclusion criteria), low or unclear for the index test (blinding of F1RDT interpretation unknown), low for reference standards, and high or unclear for flow and timing (sample transportation times longer than seven days can lead to decreased viability of the pathogen and overgrowth of contaminating bacteria, with subsequent false-negative results and misclassification of the target condition).
F1RDT for diagnosing all forms of plague: F1RDT-IPM pooled sensitivity against culture was 100% (95% confidence interval (CI) 82 to 100; 4 studies, 1692 participants; very low-certainty evidence) and pooled specificity was 70.3% (95% CI 65 to 75; 4 studies, 2004 participants; very low-certainty evidence). The performance of F1RDT-IPM against PCR was calculated from a single study in participants with bubonic plague (see below). There were limited data on the performance of F1RDT against paired serology.
F1RDT for diagnosing pneumonic plague: performed in sputum, F1RDT-IPM pooled sensitivity against culture was 100% (95% CI 0 to 100; 2 studies, 56 participants; very low-certainty evidence) and pooled specificity was 71% (95% CI 59 to 80; 2 studies, 297 participants; very low-certainty evidence). There were limited data on the performance of F1RDT against PCR or against paired serology for diagnosing pneumonic plague.
F1RDT for diagnosing bubonic plague: performed in bubo aspirate, F1RDT-IPM pooled sensitivity against culture was 100% (95% CI not calculable; 2 studies, 1454 participants; low-certainty evidence) and pooled specificity was 67% (95% CI 65 to 70; 2 studies, 1198 participants; very low-certainty evidence). Performed in bubo aspirate, F1RDT-IPM sensitivity against PCR for the caf1 gene was 95% (95% CI 89 to 99; 1 study, 88 participants; very low-certainty evidence) and specificity was 93% (95% CI 84 to 98; 1 study, 61 participants; very low-certainty evidence). There were no studies providing data on both F1RDT and paired serology for diagnosing bubonic plague.
AUTHORS' CONCLUSIONS
Against culture, the F1RDT appeared highly sensitive for diagnosing either pneumonic or bubonic plague, and it can help detect plague in remote areas, ensuring appropriate management and enabling a public health response. False-positive results mean that culture or PCR confirmation may be needed. F1RDT does not replace culture, which provides additional information on antibiotic resistance and bacterial strains.
Topics: Adult; Antigens, Bacterial; Child; Confidence Intervals; Cross-Sectional Studies; False Negative Reactions; False Positive Reactions; Humans; Plague; Sensitivity and Specificity; Time Factors; Yersinia pestis
PubMed: 32597510
DOI: 10.1002/14651858.CD013459.pub2
Autoimmunity Reviews Jun 2022
Meta-Analysis Review
BACKGROUND
Antinuclear antibodies (ANA) detected in juvenile idiopathic arthritis (JIA) sera are considered a biomarker for JIA-related uveitis. There is no clear consensus on which screening dilutions of ANA, as detected by the HEp-2 indirect immunofluorescence assay (IFA), should be used when predicting the risk of uveitis in JIA. The primary aim of this systematic review and meta-analysis was to summarize the evidence regarding ANA prevalence and performance in JIA and JIA-associated uveitis.
METHODS
A search of five databases identified 1766 abstracts, using the search terms juvenile idiopathic arthritis; pediatric; sensitivity or diagnostic; and ANA. Studies that met inclusion/exclusion criteria were analyzed for the proportion of JIA patients with a positive ANA. Forest plots and pooled estimates were generated for the proportion of JIA patients and those with uveitis who were positive for ANA stratified by screening dilution. Study heterogeneity was also assessed.
RESULTS
Twenty-eight studies met the inclusion criteria, yielding 6250 unique patients; 5902 had JIA and 348 were healthy controls or were known to have other autoimmune diseases. The most commonly reported IFA serum screening dilution was ≥1:80, representing 41.9% of patients, and this screening dilution had the highest proportion of JIA ANA positivity (41.0%; 95% CI 25.0%-57.0%). For JIA-associated uveitis, ANA at a screening dilution of ≥1:40 had a sensitivity of 75% (95% CI 46%-100%) and a specificity of 66% (95% CI 39%-93%). There was significant study heterogeneity across both JIA subtypes and ANA titres.
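The effect of the screening dilution on positivity can be sketched as a simple threshold on the endpoint titre (reciprocal dilutions, e.g. 1:80 recorded as 80); this is a hypothetical illustration, not the review's methodology:

```python
def ana_positive(endpoint_titre, screening_dilution):
    """A sample is called ANA-positive if fluorescence persists at or
    beyond the screening dilution."""
    return endpoint_titre >= screening_dilution

# The same sample can flip from positive to negative as the screening
# dilution is raised, which is why pooled positivity varies by dilution.
print(ana_positive(40, 40))   # True at a 1:40 screen
print(ana_positive(40, 80))   # False at a 1:80 screen
```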
CONCLUSIONS
Although there was large variation in the ANA IFA screening dilutions used for the investigation of JIA, the most common dilution was 1:80. The current literature has several important deficiencies, identified in this review, and additional studies are required to inform the ANA screening dilutions of clinical value in JIA and JIA-associated uveitis.
Topics: Antibodies, Antinuclear; Arthritis, Juvenile; Child; Fluorescent Antibody Technique, Indirect; Humans; Prevalence; Retrospective Studies; Uveitis
PubMed: 35398272
DOI: 10.1016/j.autrev.2022.103086