Clinical Orthopaedics and Related..., Jun 2015
Review
BACKGROUND
MRI is the gold standard for evaluating the relationship of disc material to soft tissue and neural structures. However, terminologies used to describe lumbar disc herniation and nerve root compression have always been a source of confusion. A clear understanding of lumbar disc terminology among clinicians, radiologists, and researchers is vital for patient care and future research.
QUESTIONS/PURPOSES
Through a systematic review of the literature, the purpose of this article is to describe lumbar disc terminology and comment on the reliability of various nomenclature systems and their application to clinical practice.
METHODS
PubMed was used for our literature search using the following MeSH headings: "Magnetic Resonance Imaging and Intervertebral Disc Displacement" and "Lumbar Vertebrae" and terms "nomenclature" or "grading" or "classification". Ten papers evaluating lumbar disc herniation/nerve root compression using different grading criteria and providing information regarding intraobserver and interobserver agreement were identified.
RESULTS
To date, the Combined Task Force (CTF) and van Rijn classification systems are the most reliable methods for describing lumbar disc herniation and nerve root compression, respectively. The van Rijn system dichotomizes a five-point scale ("definitely no root compression, possibly no root compression, indeterminate root compression, possible root compression, and definite root compression") into no root compression (first three categories) and root compression (last two categories). The CTF classification defines lumbar discs as normal, focal protrusion, broad-based protrusion, or extrusion. The CTF classification excludes "disc bulges," a term that is a source of confusion and disagreement among many practitioners; this potentially accounts for its improved reliability compared with other proposed nomenclature systems.
CONCLUSIONS
The main issue in the management of patients with lumbar disc disease and nerve root compression is correlation of imaging findings with clinical presentation and symptomatology to guide treatment and intervention. Although it appears that the most commonly supported nomenclatures have strong interobserver reliability, the classification term "disc bulges" is a source of confusion and disagreement among many practitioners. Additional research should focus on the clinical application of the various nomenclatures.
Topics: Humans; Intervertebral Disc; Intervertebral Disc Displacement; Lumbar Vertebrae; Magnetic Resonance Imaging; Observer Variation; Predictive Value of Tests; Prognosis; Radiculopathy; Reproducibility of Results; Severity of Illness Index; Terminology as Topic
PubMed: 24825130
DOI: 10.1007/s11999-014-3674-y

JAMA Network Open, Mar 2023
Meta-Analysis
IMPORTANCE
Artificial intelligence (AI) enables powerful models for the establishment of clinical diagnostic and prognostic tools for hip fractures; however, the performance and potential impact of these newly developed algorithms are currently unknown.
OBJECTIVE
To evaluate the performance of AI algorithms designed to diagnose hip fractures on radiographs and predict postoperative clinical outcomes following hip fracture surgery relative to current practices.
DATA SOURCES
A systematic review of the literature was performed using the MEDLINE, Embase, and Cochrane Library databases for all articles published from database inception to January 23, 2023. A manual reference search of included articles was also undertaken to identify any additional relevant articles.
STUDY SELECTION
Studies developing machine learning (ML) models for the diagnosis of hip fractures from hip or pelvic radiographs or to predict any postoperative patient outcome following hip fracture surgery were included.
DATA EXTRACTION AND SYNTHESIS
This study followed the Preferred Reporting Items for Systematic Reviews and Meta-analyses and was registered with PROSPERO. Eligible full-text articles were evaluated and relevant data extracted independently using a template data extraction form. For studies that predicted postoperative outcomes, the performance of traditional predictive statistical models, either multivariable logistic or linear regression, was recorded and compared with the performance of the best ML model on the same out-of-sample data set.
MAIN OUTCOMES AND MEASURES
Diagnostic accuracy of AI models was compared with the diagnostic accuracy of expert clinicians using odds ratios (ORs) with 95% CIs. Areas under the curve for postoperative outcome prediction between traditional statistical models (multivariable linear or logistic regression) and ML models were compared.
RESULTS
Of 39 studies that met all criteria and were included in this analysis, 18 (46.2%) used AI models to diagnose hip fractures on plain radiographs and 21 (53.8%) used AI models to predict patient outcomes following hip fracture surgery. A total of 39 598 plain radiographs and 714 939 hip fractures were used for training, validating, and testing ML models specific to diagnosis and postoperative outcome prediction, respectively. Mortality and length of hospital stay were the most predicted outcomes. On pooled data analysis, compared with clinicians, the OR for diagnostic error of ML models was 0.79 (95% CI, 0.48-1.31; P = .36; I2 = 60%) for hip fracture radiographs. For the ML models, the mean (SD) sensitivity was 89.3% (8.5%), specificity was 87.5% (9.9%), and F1 score was 0.90 (0.06). The mean area under the curve for mortality prediction was 0.84 with ML models compared with 0.79 for alternative controls (P = .09).
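The summary metrics reported above (sensitivity, specificity, F1 score) all derive from a 2x2 confusion matrix. A minimal sketch in Python, using hypothetical counts chosen for illustration only (not the study's data):

```python
# Diagnostic metrics from a 2x2 confusion matrix.
# Counts below are hypothetical, for illustration only.
tp, fn = 89, 11   # fractures correctly flagged / missed
tn, fp = 87, 13   # non-fractures correctly cleared / over-called

sensitivity = tp / (tp + fn)   # recall: fraction of fractures detected
specificity = tn / (tn + fp)   # fraction of non-fractures correctly cleared
precision = tp / (tp + fp)     # fraction of positive calls that are correct
f1 = 2 * precision * sensitivity / (precision + sensitivity)

print(round(sensitivity, 2), round(specificity, 2), round(f1, 2))
```

With these hypothetical counts the sketch lands near the pooled means quoted above, which is why an F1 score can sit close to sensitivity even when specificity is lower: F1 depends on precision and recall, not on true negatives.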
CONCLUSIONS AND RELEVANCE
The findings of this systematic review and meta-analysis suggest that the potential applications of AI to aid with diagnosis from hip radiographs are promising. The performance of AI in diagnosing hip fractures was comparable with that of expert radiologists and surgeons. However, current implementations of AI for outcome prediction do not seem to provide substantial benefit over traditional multivariable predictive statistics.
Topics: Humans; Artificial Intelligence; Hip Fractures; Prognosis; Algorithms; Length of Stay
PubMed: 36930153
DOI: 10.1001/jamanetworkopen.2023.3391

Radiology, Jun 2023
Meta-Analysis
BACKGROUND
There is considerable interest in the potential use of artificial intelligence (AI) systems in mammographic screening. However, it is essential to critically evaluate the performance of AI before it can become a modality used for independent mammographic interpretation.
PURPOSE
To evaluate the reported standalone performances of AI for interpretation of digital mammography and digital breast tomosynthesis (DBT).
MATERIALS AND METHODS
A systematic search was conducted in PubMed, Google Scholar, Embase (Ovid), and Web of Science databases for studies published from January 2017 to June 2022. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) values were reviewed. Study quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies 2 and Comparative (QUADAS-2 and QUADAS-C, respectively). A random effects meta-analysis and meta-regression analysis were performed for overall studies and for different study types (reader studies vs historic cohort studies) and imaging techniques (digital mammography vs DBT).
RESULTS
In total, 16 studies that include 1 108 328 examinations in 497 091 women were analyzed (six reader studies, seven historic cohort studies on digital mammography, and four studies on DBT). Pooled AUCs were significantly higher for standalone AI than radiologists in the six reader studies on digital mammography (0.87 vs 0.81, P = .002), but not for historic cohort studies (0.89 vs 0.96, P = .152). Four studies on DBT showed significantly higher AUCs in AI compared with radiologists (0.90 vs 0.79, P < .001). Higher sensitivity and lower specificity were seen for standalone AI compared with radiologists.
CONCLUSION
Standalone AI for screening digital mammography performed as well as or better than radiologists. Compared with digital mammography, there is an insufficient number of studies to assess the performance of AI systems in the interpretation of DBT screening examinations.
Topics: Female; Humans; Artificial Intelligence; Breast Neoplasms; Early Detection of Cancer; Mammography; Breast; Retrospective Studies
PubMed: 37219445
DOI: 10.1148/radiol.222639

BMJ (Clinical Research Ed.), Sep 2021
OBJECTIVE
To examine the accuracy of artificial intelligence (AI) for the detection of breast cancer in mammography screening practice.
DESIGN
Systematic review of test accuracy studies.
DATA SOURCES
Medline, Embase, Web of Science, and Cochrane Database of Systematic Reviews from 1 January 2010 to 17 May 2021.
ELIGIBILITY CRITERIA
Studies reporting test accuracy of AI algorithms, alone or in combination with radiologists, to detect cancer in women's digital mammograms in screening practice, or in test sets. Reference standard was biopsy with histology or follow-up (for screen negative women). Outcomes included test accuracy and cancer type detected.
STUDY SELECTION AND SYNTHESIS
Two reviewers independently assessed articles for inclusion and assessed the methodological quality of included studies using the QUality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. A single reviewer extracted data, which were checked by a second reviewer. Narrative data synthesis was performed.
RESULTS
Twelve studies totalling 131 822 screened women were included. No prospective studies measuring test accuracy of AI in screening practice were found. Studies were of poor methodological quality. Three retrospective studies compared AI systems with the clinical decisions of the original radiologist, including 79 910 women, of whom 1878 had screen-detected cancer or interval cancer within 12 months of screening. Thirty-four (94%) of 36 AI systems evaluated in these studies were less accurate than a single radiologist, and all were less accurate than consensus of two or more radiologists. Five smaller studies (1086 women, 520 cancers) at high risk of bias and low generalisability to the clinical context reported that all five evaluated AI systems (as standalone systems to replace the radiologist or as a reader aid) were more accurate than a single radiologist reading a test set in the laboratory. In three studies, AI used for triage screened out 53%, 45%, and 50% of women at low risk but also 10%, 4%, and 0% of cancers detected by radiologists.
CONCLUSIONS
Current evidence for AI does not yet allow judgement of its accuracy in breast cancer screening programmes, and it is unclear where on the clinical pathway AI might be of most benefit. AI systems are not sufficiently specific to replace radiologist double reading in screening programmes. Promising results in smaller studies are not replicated in larger studies. Prospective studies are required to measure the effect of AI in clinical practice. Such studies will require clear stopping rules to ensure that AI does not reduce programme specificity.
STUDY REGISTRATION
Protocol registered as PROSPERO CRD42020213590.
Topics: Artificial Intelligence; Breast Neoplasms; Early Detection of Cancer; Female; Humans; Mammography; Mass Screening
PubMed: 34470740
DOI: 10.1136/bmj.n1872

The Cochrane Database of Systematic..., Nov 2019
Meta-Analysis
BACKGROUND
Diagnosing acute appendicitis (appendicitis) based on clinical evaluation, blood testing, and urinalysis can be difficult. Therefore, in persons with suspected appendicitis, abdominopelvic computed tomography (CT) is often used as an add-on test following the initial evaluation to reduce remaining diagnostic uncertainty. The aim of using CT is to assist the clinician in discriminating between persons who need surgery with appendicectomy and persons who do not.
OBJECTIVES
Primary objective: to evaluate the accuracy of CT for diagnosing appendicitis in adults with suspected appendicitis. Secondary objectives: to compare the accuracy of contrast-enhanced versus non-contrast-enhanced CT, to compare the accuracy of low-dose versus standard-dose CT, and to explore the influence of CT-scanner generation, radiologist experience, degree of clinical suspicion of appendicitis, and aspects of methodological quality on diagnostic accuracy.
SEARCH METHODS
We searched MEDLINE, Embase, and Science Citation Index until 16 June 2017. We also searched reference lists. We did not exclude studies on the basis of language or publication status.
SELECTION CRITERIA
We included prospective studies that compared results of CT versus outcomes of a reference standard in adults (> 14 years of age) with suspected appendicitis. We excluded studies recruiting only pregnant women; studies in persons with abdominal pain at any location and with no particular suspicion of appendicitis; studies in which all participants had undergone ultrasonography (US) before CT and the decision to perform CT depended on the US outcome; studies using a case-control design; studies with fewer than 10 participants; and studies that did not report the numbers of true-positives, false-positives, false-negatives, and true-negatives. Two review authors independently screened and selected studies for inclusion.
DATA COLLECTION AND ANALYSIS
Two review authors independently collected the data from each study and evaluated methodological quality according to the Quality Assessment of Studies of Diagnostic Accuracy - Revised (QUADAS-2) tool. We used the bivariate random-effects model to obtain summary estimates of sensitivity and specificity.
MAIN RESULTS
We identified 64 studies including 71 separate study populations with a total of 10,280 participants (4583 with and 5697 without acute appendicitis). Estimates of sensitivity ranged from 0.72 to 1.0 and estimates of specificity ranged from 0.5 to 1.0 across the 71 study populations. Summary sensitivity was 0.95 (95% confidence interval (CI) 0.93 to 0.96), and summary specificity was 0.94 (95% CI 0.92 to 0.95). At the median prevalence of appendicitis (0.43), the probability of having appendicitis following a positive CT result was 0.92 (95% CI 0.90 to 0.94), and the probability of having appendicitis following a negative CT result was 0.04 (95% CI 0.03 to 0.05). In subgroup analyses according to contrast enhancement, summary sensitivity was higher for CT with intravenous contrast (0.96, 95% CI 0.92 to 0.98), CT with rectal contrast (0.97, 95% CI 0.93 to 0.99), and CT with intravenous and oral contrast enhancement (0.96, 95% CI 0.93 to 0.98) than for unenhanced CT (0.91, 95% CI 0.87 to 0.93). Summary sensitivity of CT with oral contrast enhancement (0.89, 95% CI 0.81 to 0.94) and unenhanced CT was similar. Results show practically no differences in summary specificity, which varied from 0.93 (95% CI 0.90 to 0.95) to 0.95 (95% CI 0.90 to 0.98) between subgroups. Summary sensitivity for low-dose CT (0.94, 95% CI 0.90 to 0.97) was similar to summary sensitivity for standard-dose or unspecified-dose CT (0.95, 95% CI 0.93 to 0.96); summary specificity did not differ between low-dose and standard-dose or unspecified-dose CT. No studies had high methodological quality as evaluated by the QUADAS-2 tool. Major methodological problems were poor reference standards and partial verification primarily due to inadequate and incomplete follow-up in persons who did not have surgery.
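The post-test probabilities quoted above follow from Bayes' rule applied to the summary sensitivity, specificity, and median prevalence. A quick check in Python (point estimates only; the confidence intervals require the full bivariate model):

```python
# Post-test probability of appendicitis from summary accuracy estimates.
sens, spec, prev = 0.95, 0.94, 0.43  # summary values from the review

# P(disease | positive CT): true positives over all positive results
p_pos = sens * prev + (1 - spec) * (1 - prev)
ppv = sens * prev / p_pos

# P(disease | negative CT): false negatives over all negative results
p_neg = (1 - sens) * prev + spec * (1 - prev)
p_disease_given_neg = (1 - sens) * prev / p_neg

print(round(ppv, 2), round(p_disease_given_neg, 2))  # 0.92 0.04
```

This reproduces the review's 0.92 and 0.04 figures, and makes the dependence on prevalence explicit: at a lower pre-test probability than 0.43, the same sensitivity and specificity would yield a lower post-test probability after a positive scan.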
AUTHORS' CONCLUSIONS
The sensitivity and specificity of CT for diagnosing appendicitis in adults are high. Unenhanced standard-dose CT appears to have lower sensitivity than standard-dose CT with intravenous, rectal, or oral and intravenous contrast enhancement. Use of different types of contrast enhancement or no enhancement does not appear to affect specificity. Differences in sensitivity and specificity between low-dose and standard-dose CT appear to be negligible. The results of this review should be interpreted with caution for two reasons. First, these results are based on studies of low methodological quality. Second, the comparisons between types of contrast enhancement and radiation dose may be unreliable because they are based on indirect comparisons that may be confounded by other factors.
Topics: Acute Disease; Adult; Appendicitis; Humans; Randomized Controlled Trials as Topic; Tomography, X-Ray Computed
PubMed: 31743429
DOI: 10.1002/14651858.CD009977.pub2

BMC Cancer, Sep 2021
Meta-Analysis
BACKGROUND
Artificial intelligence (AI) is increasingly being used in medical imaging analysis. We aimed to evaluate the diagnostic accuracy of AI models used for detection of lymph node metastasis on pre-operative staging imaging for colorectal cancer.
METHODS
A systematic review was conducted according to PRISMA guidelines using a literature search of PubMed (MEDLINE), EMBASE, IEEE Xplore and the Cochrane Library for studies published from January 2010 to October 2020. Studies reporting on the accuracy of radiomics models and/or deep learning for the detection of lymph node metastasis in colorectal cancer by CT/MRI were included. Conference abstracts and studies reporting accuracy of image segmentation rather than nodal classification were excluded. The quality of the studies was assessed using a modified questionnaire of the QUADAS-2 criteria. Characteristics and diagnostic measures from each study were extracted. Pooling of area under the receiver operating characteristic curve (AUROC) was calculated in a meta-analysis.
RESULTS
Seventeen eligible studies were identified for inclusion in the systematic review, of which 12 used radiomics models and five used deep learning models. High risk of bias was found in two studies, and there was significant heterogeneity among radiomics papers (73.0%). In rectal cancer, there was a per-patient AUROC of 0.808 (0.739-0.876) and 0.917 (0.882-0.952) for radiomics and deep learning models, respectively. Both models performed better than the radiologists, who had an AUROC of 0.688 (0.603 to 0.772). Similarly, in colorectal cancer, radiomics models with a per-patient AUROC of 0.727 (0.633-0.821) outperformed the radiologists, who had an AUROC of 0.676 (0.627-0.725).
CONCLUSION
AI models have the potential to predict lymph node metastasis more accurately in rectal and colorectal cancer; however, radiomics studies are heterogeneous and deep learning studies are scarce.
TRIAL REGISTRATION
PROSPERO CRD42020218004.
Topics: Artificial Intelligence; Bias; Colorectal Neoplasms; Deep Learning; Humans; Lymph Nodes; Lymphatic Metastasis; Magnetic Resonance Imaging; Preoperative Care; Publication Bias; ROC Curve; Radiologists; Rectal Neoplasms; Sensitivity and Specificity; Tomography, X-Ray Computed
PubMed: 34565338
DOI: 10.1186/s12885-021-08773-w

Alimentary Pharmacology & Therapeutics, Aug 2018
BACKGROUND
Fibrotic stricture is a common complication of Crohn's disease (CD) affecting approximately half of all patients. No specific anti-fibrotic therapies are available; however, several therapies are currently under evaluation. Drug development for the indication of stricturing CD is hampered by a lack of standardised definitions, diagnostic modalities, clinical trial eligibility criteria, endpoints and treatment targets in stricturing CD.
AIM
To standardise definitions, diagnosis and treatment targets for anti-fibrotic stricture therapies in Crohn's disease.
METHODS
An interdisciplinary expert panel consisting of 15 gastroenterologists and radiologists was assembled. Using modified RAND/University of California Los Angeles appropriateness methodology, 109 candidate items derived from systematic review and expert opinion focusing on small intestinal strictures were anonymously rated as inappropriate, uncertain or appropriate. Survey results were discussed as a group before a second and third round of voting.
RESULTS
Fibrotic strictures are defined by the combination of luminal narrowing, wall thickening and pre-stenotic dilation. Definitions of anastomotic (at site of prior intestinal resection with anastomosis) and naïve small bowel strictures were similar; however, there was uncertainty regarding wall thickness in anastomotic strictures. Magnetic resonance imaging is considered the optimal technique to define fibrotic strictures and assess response to therapy. Symptomatic strictures are defined by abdominal distension, cramping, dietary restrictions, nausea, vomiting, abdominal pain and post-prandial abdominal pain. Need for intervention (endoscopic balloon dilation or surgery) within 24-48 weeks is considered the appropriate endpoint in pharmacological trials.
CONCLUSIONS
Consensus criteria for diagnosis and response to therapy in stricturing Crohn's disease should inform both clinical practice and trial design.
Topics: Catheterization; Clinical Trials as Topic; Colon; Consensus; Constriction, Pathologic; Crohn Disease; Dilatation; Endoscopy; Expert Testimony; Fibrosis; Humans; Intestinal Obstruction; Intestine, Small; Practice Guidelines as Topic; Reference Standards
PubMed: 29920726
DOI: 10.1111/apt.14853

Neurology India, 2021
Review
BACKGROUND
Multiple sclerosis is a chronic demyelinating disorder with a myriad of imaging and clinical features that overlap with a number of other neurological conditions. Incorrect diagnosis poses a significant risk to patients: it may lead to delays in management and increased morbidity, and it also adds to the financial cost.
OBJECTIVE
The aim of this study was to highlight strategies for the efficient differentiation of multiple sclerosis from other diseases that may masquerade as MS clinically and radiologically.
MATERIAL AND METHODS
A systematic literature review was conducted through online databases, including PubMed and Medline. Relevant publications on the radiological aspects of multiple sclerosis, white matter diseases, and mimickers of multiple sclerosis were included in the analysis.
RESULTS
Common mimickers of MS include small vessel disease, acute disseminated encephalomyelitis, neuromyelitis optica, anti-MOG encephalomyelitis, vasculitis, and CADASIL. A contrast-enhanced MRI study acquired with a dedicated MS protocol on a high-field-strength system and evaluated according to a structured protocol, together with clinical correlation, is effective in differentiating MS from its mimickers.
CONCLUSIONS
Contrast-enhanced MRI performed on a high-field-strength scanner using a dedicated MS protocol and a structured protocol for evaluation, along with better collaboration between radiologists and clinicians, may help minimize errors in the diagnosis of multiple sclerosis.
Topics: Encephalomyelitis; Encephalomyelitis, Acute Disseminated; Humans; Magnetic Resonance Imaging; Multiple Sclerosis; Neuromyelitis Optica
PubMed: 34979638
DOI: 10.4103/0028-3886.333497

The Cochrane Database of Systematic..., Jul 2020
Comparative Study Meta-Analysis
BACKGROUND
Chest X-ray (CXR) is a longstanding method for the diagnosis of pneumothorax but chest ultrasonography (CUS) may be a safer, more rapid, and more accurate modality in trauma patients at the bedside that does not expose the patient to ionizing radiation. This may lead to improved and expedited management of traumatic pneumothorax and improved patient safety and clinical outcomes.
OBJECTIVES
To compare the diagnostic accuracy of chest ultrasonography (CUS) by frontline non-radiologist physicians versus chest X-ray (CXR) for diagnosis of pneumothorax in trauma patients in the emergency department (ED). To investigate the effects of potential sources of heterogeneity such as type of CUS operator (frontline non-radiologist physicians), type of trauma (blunt vs penetrating), and type of US probe on test accuracy.
SEARCH METHODS
We conducted a comprehensive search of the following electronic databases from database inception to 10 April 2020: Cochrane Database of Systematic Reviews, Cochrane Central Register of Controlled Trials, MEDLINE, Embase, Cumulative Index to Nursing and Allied Health Literature (CINAHL) Plus, Database of Abstracts of Reviews of Effects, Web of Science Core Collection and Clinicaltrials.gov. We handsearched reference lists of included articles and reviews retrieved via electronic searching; and we carried out forward citation searching of relevant articles in Google Scholar and looked at the "Related articles" on PubMed.
SELECTION CRITERIA
We included prospective, paired comparative accuracy studies comparing CUS performed by frontline non-radiologist physicians to supine CXR in trauma patients in the emergency department (ED) suspected of having pneumothorax, and with computed tomography (CT) of the chest or tube thoracostomy as the reference standard.
DATA COLLECTION AND ANALYSIS
Two review authors independently extracted data from each included study using a data extraction form. We included studies using patients as the unit of analysis in the main analysis and we included those using lung fields in the secondary analysis. We performed meta-analyses by using a bivariate model to estimate and compare summary sensitivities and specificities.
MAIN RESULTS
We included 13 studies of which nine (410 traumatic pneumothorax patients out of 1271 patients) used patients as the unit of analysis; we thus included them in the primary analysis. The remaining four studies used lung field as the unit of analysis and we included them in the secondary analysis. We judged all studies to be at high or unclear risk of bias in one or more domains, with most studies (11/13, 85%) being judged at high or unclear risk of bias in the patient selection domain. There was substantial heterogeneity in the sensitivity of supine CXR amongst the included studies. In the primary analysis, the summary sensitivity and specificity of CUS were 0.91 (95% confidence interval (CI) 0.85 to 0.94) and 0.99 (95% CI 0.97 to 1.00); and the summary sensitivity and specificity of supine CXR were 0.47 (95% CI 0.31 to 0.63) and 1.00 (95% CI 0.97 to 1.00). There was a significant difference in the sensitivity of CUS compared to CXR with an absolute difference in sensitivity of 0.44 (95% CI 0.27 to 0.61; P < 0.001). In contrast, CUS and CXR had similar specificities: comparing CUS to CXR, the absolute difference in specificity was -0.007 (95% CI -0.018 to 0.005, P = 0.35). The findings imply that in a hypothetical cohort of 100 patients if 30 patients have traumatic pneumothorax (i.e. prevalence of 30%), CUS would miss 3 (95% CI 2 to 4) cases (false negatives) and overdiagnose 1 (95% CI 0 to 2) of those without pneumothorax (false positives); while CXR would miss 16 (95% CI 11 to 21) cases with 0 (95% CI 0 to 2) overdiagnosis of those who do not have pneumothorax.
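The hypothetical 100-patient cohort above is simple arithmetic on the summary point estimates (confidence intervals omitted). A short sketch:

```python
# Expected errors in a hypothetical cohort of 100 trauma patients,
# 30% of whom have pneumothorax, using the summary point estimates.
n, prev = 100, 0.30
with_ptx = n * prev            # 30 patients with pneumothorax
without_ptx = n - with_ptx     # 70 patients without

sens_cus, spec_cus = 0.91, 0.99  # summary estimates for CUS
sens_cxr, spec_cxr = 0.47, 1.00  # summary estimates for supine CXR

cus_missed = with_ptx * (1 - sens_cus)     # false negatives with CUS
cus_overdx = without_ptx * (1 - spec_cus)  # false positives with CUS
cxr_missed = with_ptx * (1 - sens_cxr)     # false negatives with CXR
cxr_overdx = without_ptx * (1 - spec_cxr)  # false positives with CXR

print(round(cus_missed), round(cus_overdx))  # 3 1
print(round(cxr_missed), round(cxr_overdx))  # 16 0
```

This reproduces the review's figures of 3 missed cases for CUS versus 16 for CXR, making clear that the clinical difference is driven almost entirely by sensitivity, since both modalities have near-perfect specificity.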
AUTHORS' CONCLUSIONS
The diagnostic accuracy of CUS performed by frontline non-radiologist physicians for the diagnosis of pneumothorax in ED trauma patients is superior to supine CXR, independent of the type of trauma, type of CUS operator, or type of CUS probe used. These findings suggest that CUS for the diagnosis of traumatic pneumothorax should be incorporated into trauma protocols and algorithms in future medical training programmes; and that CUS may beneficially change routine management of trauma.
Topics: Bias; Confidence Intervals; Emergency Service, Hospital; Humans; Pneumothorax; Prospective Studies; Radiography, Thoracic; Sensitivity and Specificity; Supine Position; Thoracic Injuries; Ultrasonography; Wounds, Nonpenetrating; Wounds, Penetrating
PubMed: 32702777
DOI: 10.1002/14651858.CD013031.pub2

International Journal of Environmental..., Dec 2021
Meta-Analysis Review
BACKGROUND
Smoking is a major public health problem. Although physicians have a key role in the fight against smoking, some of them still smoke. Thus, we aimed to conduct a systematic review and meta-analysis on the prevalence of smoking among physicians.
METHODS
PubMed, Cochrane, and Embase databases were searched. The prevalence of smoking among physicians was estimated and stratified, where possible, by specialties, continents, and periods of time. Then, meta-regressions were performed regarding putative influencing factors such as age and sex.
RESULTS
Across 246 studies including 497,081 physicians, the pooled smoking prevalence was 21% (95% CI 20 to 23%). Prevalence of smoking was 25% in medical students, 24% in family practitioners, 18% in surgical specialties, 17% in psychiatrists, 16% in medical specialties, 11% in anesthesiologists, 9% in radiologists, and 8% in pediatricians. Physicians in Europe and Asia had a higher smoking prevalence than those in Oceania. The smoking prevalence among physicians has decreased over time. Male physicians had a higher smoking prevalence than female physicians. Age did not influence smoking prevalence.
CONCLUSION
Prevalence of smoking among physicians is high, around 21%. Family practitioners and medical students have the highest percentage of smokers. All physicians should benefit from targeted preventive strategies.
Topics: Humans; Male; Physicians; Prevalence; Smoking; Students, Medical; Tobacco Smoking
PubMed: 34948936
DOI: 10.3390/ijerph182413328