BMJ (Clinical Research Ed.) Jul 2009
Systematic reviews and meta-analyses are essential to summarise evidence relating to efficacy and safety of healthcare interventions accurately and reliably. The clarity and transparency of these reports, however, are not optimal. Poor reporting of systematic reviews diminishes their value to clinicians, policy makers, and other users. Since the development of the QUOROM (quality of reporting of meta-analysis) statement, a reporting guideline published in 1999, there have been several conceptual, methodological, and practical advances regarding the conduct and reporting of systematic reviews and meta-analyses. Also, reviews of published systematic reviews have found that key information about these studies is often poorly reported. Realising these issues, an international group that included experienced authors and methodologists developed PRISMA (preferred reporting items for systematic reviews and meta-analyses) as an evolution of the original QUOROM guideline for systematic reviews and meta-analyses of evaluations of health care interventions. The PRISMA statement consists of a 27-item checklist and a four-phase flow diagram. The checklist includes items deemed essential for transparent reporting of a systematic review. In this explanation and elaboration document, we explain the meaning and rationale for each checklist item. For each item, we include an example of good reporting and, where possible, references to relevant empirical studies and methodological literature. The PRISMA statement, this document, and the associated website (www.prisma-statement.org/) should be helpful resources to improve reporting of systematic reviews and meta-analyses.
Topics: Evidence-Based Medicine; Humans; Meta-Analysis as Topic; Publishing; Quality Control; Review Literature as Topic; Terminology as Topic
PubMed: 19622552
DOI: 10.1136/bmj.b2700
PLoS Medicine Jul 2009
Systematic reviews and meta-analyses are essential to summarize evidence relating to efficacy and safety of health care interventions accurately and reliably. The clarity and transparency of these reports, however, are not optimal. Poor reporting of systematic reviews diminishes their value to clinicians, policy makers, and other users. Since the development of the QUOROM (QUality Of Reporting Of Meta-analysis) Statement, a reporting guideline published in 1999, there have been several conceptual, methodological, and practical advances regarding the conduct and reporting of systematic reviews and meta-analyses. Also, reviews of published systematic reviews have found that key information about these studies is often poorly reported. Realizing these issues, an international group that included experienced authors and methodologists developed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) as an evolution of the original QUOROM guideline for systematic reviews and meta-analyses of evaluations of health care interventions. The PRISMA Statement consists of a 27-item checklist and a four-phase flow diagram. The checklist includes items deemed essential for transparent reporting of a systematic review. In this Explanation and Elaboration document, we explain the meaning and rationale for each checklist item. For each item, we include an example of good reporting and, where possible, references to relevant empirical studies and methodological literature. The PRISMA Statement, this document, and the associated Web site (http://www.prisma-statement.org/) should be helpful resources to improve reporting of systematic reviews and meta-analyses.
Topics: Evidence-Based Medicine; Humans; Meta-Analysis as Topic; Publishing; Quality Control; Review Literature as Topic; Terminology as Topic
PubMed: 19621070
DOI: 10.1371/journal.pmed.1000100
Journal of Clinical Epidemiology Oct 2009
Systematic reviews and meta-analyses are essential to summarize evidence relating to efficacy and safety of health care interventions accurately and reliably. The clarity and transparency of these reports, however, are not optimal. Poor reporting of systematic reviews diminishes their value to clinicians, policy makers, and other users. Since the development of the QUOROM (QUality Of Reporting Of Meta-analysis) Statement, a reporting guideline published in 1999, there have been several conceptual, methodological, and practical advances regarding the conduct and reporting of systematic reviews and meta-analyses. Also, reviews of published systematic reviews have found that key information about these studies is often poorly reported. Realizing these issues, an international group that included experienced authors and methodologists developed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) as an evolution of the original QUOROM guideline for systematic reviews and meta-analyses of evaluations of health care interventions. The PRISMA Statement consists of a 27-item checklist and a four-phase flow diagram. The checklist includes items deemed essential for transparent reporting of a systematic review. In this Explanation and Elaboration document, we explain the meaning and rationale for each checklist item. For each item, we include an example of good reporting and, where possible, references to relevant empirical studies and methodological literature. The PRISMA Statement, this document, and the associated Web site (http://www.prisma-statement.org/) should be helpful resources to improve reporting of systematic reviews and meta-analyses.
Topics: Evidence-Based Medicine; Humans; Meta-Analysis as Topic; Publishing; Quality Control; Review Literature as Topic; Terminology as Topic
PubMed: 19631507
DOI: 10.1016/j.jclinepi.2009.06.006
The American Journal of Sports Medicine Nov 2014
Review
BACKGROUND
The role of evidence-based medicine in sports medicine and orthopaedic surgery is rapidly growing. Systematic reviews and meta-analyses are also proliferating in the medical literature.
PURPOSE
To provide the outline necessary for a practitioner to properly understand and/or conduct a systematic review for publication in a sports medicine journal.
STUDY DESIGN
Review.
METHODS
The steps of a successful systematic review include the following: identification of an unanswered answerable question; explicit definitions of the investigation's participant(s), intervention(s), comparison(s), and outcome(s); utilization of PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines and PROSPERO registration; thorough systematic data extraction; and appropriate grading of the evidence and strength of the recommendations.
RESULTS
An outline to understand and conduct a systematic review is provided, and the difference between meta-analyses and systematic reviews is described. The steps necessary to perform a systematic review are fully explained, including the study purpose, search methodology, data extraction, reporting of results, identification of bias, and reporting of the study's main findings.
CONCLUSION
Systematic reviews or meta-analyses critically appraise and formally synthesize the best existing evidence to provide a statement of conclusion that answers specific clinical questions. Readers and reviewers, however, must recognize that the quality and strength of recommendations in a review are only as strong as the quality of studies that it analyzes. Thus, great care must be used in the interpretation of bias and extrapolation of the review's findings to translation to clinical practice. Without advanced education on the topic, the reader may follow the steps discussed herein to perform a systematic review.
Topics: Evidence-Based Medicine; Humans; Medical Writing; Meta-Analysis as Topic; Orthopedics; Publishing; Review Literature as Topic; Sports Medicine
PubMed: 23925575
DOI: 10.1177/0363546513497567
Medical Education Jun 2021
Review
OBJECTIVES
Over the last two decades, the number of scoping reviews in core medical education journals has increased by 4200%. Despite this growth, research on scoping reviews provides limited information about their nature, including how they are conducted or why medical educators undertake this knowledge synthesis type. This gap makes it difficult to know where the field stands and may hamper attempts to improve the conduct, reporting and utility of scoping reviews. Thus, this review characterises the nature of medical education scoping reviews to identify areas for improvement and highlight future research opportunities.
METHOD
The authors searched PubMed for scoping reviews published between 1/1999 and 4/2020 in 14 medical education journals. The authors extracted and summarised key bibliometric data, the rationales given for conducting a scoping review, the research questions and key reporting elements as described in the PRISMA-ScR. Rationales and research questions were mapped to Arksey and O'Malley's reasons for conducting a scoping review.
RESULTS
One hundred and one scoping reviews were included. On average, 10.1 scoping reviews (SD = 13.1, median = 4) were published annually with the most reviews published in 2019 (n = 42). Authors described multiple reasons for undertaking scoping reviews; the most prevalent being to summarise and disseminate research findings (n = 77). In 11 reviews, the rationales for the scoping review and the research questions aligned. No review addressed all elements of the PRISMA-ScR, with few authors publishing a protocol (n = 2) or including stakeholders (n = 20). Authors identified shortcomings of scoping reviews, including lack of critical appraisal.
CONCLUSIONS
Scoping reviews are increasingly conducted in medical education and published by most core journals. Scoping reviews aim to map the depth and breadth of emerging topics; as such, they have the potential to play a critical role in the practice, policy and research of medical education. However, these results suggest improvements are needed for this role to be fully realised.
Topics: Education, Medical; Humans; Knowledge; Publications
PubMed: 33300124
DOI: 10.1111/medu.14431
Academic Medicine : Journal of the... Aug 2020
Meta-Analysis
PURPOSE
Academic medical faculty members are assessed on their research productivity for hiring, promotion, grant, and award decisions. The current work systematically reviews, synthesizes, and analyzes the available literature on publication productivity by academic rank across medical specialties.
METHOD
The authors searched PubMed for medical literature, including observational studies, published in English from 2005 to 2018, using the term "h-index," on July 1, 2018. Studies had to report on h-indices for faculty in academic medicine and, if available, other publication metrics, including number of citations, number of publications, and m-indices, stratified by academic rank. The DerSimonian and Laird method was used to perform meta-analyses for the primary (h-index) and secondary (m-index) outcome measures.
RESULTS
The systematic review included 21 studies. The meta-analysis included 19 studies and data on 14,567 academic physicians. Both h- and m-indices increased with academic rank. The weighted random effects summary effect sizes for mean h-indices were 5.22 (95% confidence interval [CI]: 4.21-6.23, n = 6,609) for assistant professors, 11.22 (95% CI: 9.65-12.78, n = 3,508) for associate professors, 20.77 (95% CI: 17.94-23.60, n = 3,626) for full professors, and 22.08 (95% CI: 17.73-26.44, n = 816) for department chairs. Mean m-indices were 0.53 (95% CI: 0.40-0.65, n = 1,653) for assistant professors, 0.72 (95% CI: 0.58-0.85, n = 883) for associate professors, 0.99 (95% CI: 0.75-1.22, n = 854) for full professors, and 1.16 (95% CI: 0.81-1.51, n = 195) for department chairs.
CONCLUSIONS
Both h- and m-indices increase with successive academic rank. There are unique distributions of these metrics among medical specialties. The h- and m-indices should be used in conjunction with other measures of academic success to evaluate faculty members for hiring, promotion, grant, and award decisions.
Topics: Bibliometrics; Canada; Career Mobility; Efficiency; Faculty, Medical; Humans; Periodicals as Topic; Publishing; United States
PubMed: 32028299
DOI: 10.1097/ACM.0000000000003185
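The record above pools h- and m-indices across ranks. For readers unfamiliar with these metrics, a minimal sketch of how they are computed (the citation counts and career length below are invented for illustration, not data from the study):

```python
# Illustrative sketch of the h-index and m-index (Hirsch's m quotient)
# summarized in the meta-analysis above. Example inputs are hypothetical.

def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:   # the rank-th most-cited paper still has >= rank citations
            h = rank
        else:
            break
    return h

def m_index(citations, years_since_first_paper):
    """h-index divided by career length in years; normalizes for seniority."""
    return h_index(citations) / years_since_first_paper

# A hypothetical early-career author: five papers, publishing for 8 years.
print(h_index([10, 8, 5, 4, 3]))      # -> 4
print(m_index([10, 8, 5, 4, 3], 8))   # -> 0.5
```

This makes concrete why both indices rise with academic rank: h can only grow as papers accumulate citations, while m divides that growth by career length.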
American Journal of Obstetrics &... Sep 2022
Meta-Analysis
Topics: Bibliometrics; Obstetrics; Publication Bias
PubMed: 35413471
DOI: 10.1016/j.ajogmf.2022.100644
European Spine Journal : Official... Nov 2023
Review
PURPOSE
The number of articles retracted by peer-reviewed journals has increased in recent years. This study systematically reviews retracted publications in the spine surgery literature.
METHODS
A search of PubMed MEDLINE, Ovid EMBASE, Retraction Watch, and the independent websites of 15 spine surgery-related journals from inception to September of 2022 was performed without language restrictions. PRISMA guidelines were followed, with title/abstract screening and full-text screening conducted independently and in duplicate by two reviewers. Study characteristics and bibliometric information for each publication were extracted.
RESULTS
Of 250 studies collected from the search, 65 met the inclusion criteria. The most common reason for retraction was data error (n = 15, 21.13%), followed by plagiarism (n = 14, 19.72%) and submission to another journal (n = 14, 19.72%). Most studies pertained to degenerative pathologies of the spine (n = 32, 80.00%). Most articles had no indication of retraction in their manuscript (n = 24, 36.92%), while others had a watermark or notice at the beginning of the article. The median number of citations per retracted publication was 10.0 (IQR 3-29), and the median 4-year impact factor of the journals was 5.05 (IQR 3.20-6.50). On multivariable linear regression, the difference in years from publication to retraction (p = 0.0343, β = 6.56, 95% CI 0.50-12.62) and the journal 4-year impact factor (p = 0.0029, β = 7.47, 95% CI 2.66-12.28) were positively associated with the total number of citations per retracted publication. Most articles originated from China (n = 30, 46.15%) followed by the United States (n = 12, 18.46%) and Germany (n = 3, 4.62%). The most common study design was retrospective cohort studies (n = 14, 21.54%).
CONCLUSIONS
The retraction of publications has increased in recent years in spine surgery. Researchers consulting this body of literature should remain vigilant. Institutions and journals should collaborate to increase publication transparency and scientific integrity.
Topics: Humans; Scientific Misconduct; Retrospective Studies; Plagiarism; Journal Impact Factor; Research Design; Biomedical Research
PubMed: 37725162
DOI: 10.1007/s00586-023-07927-7
BMC Medical Research Methodology Jun 2020
BACKGROUND
Publication and related biases (including publication bias, time-lag bias, outcome reporting bias and p-hacking) have been well documented in clinical research, but relatively little is known about their presence and extent in health services research (HSR). This paper aims to systematically review evidence concerning publication and related bias in quantitative HSR.
METHODS
Databases including MEDLINE, EMBASE, HMIC, CINAHL, Web of Science, Health Systems Evidence, Cochrane EPOC Review Group and several websites were searched to July 2018. Information was obtained from: (1) Methodological studies that set out to investigate publication and related biases in HSR; (2) Systematic reviews of HSR topics which examined such biases as part of the review process. Relevant information was extracted from included studies by one reviewer and checked by another. Studies were appraised according to commonly accepted scientific principles due to lack of suitable checklists. Data were synthesised narratively.
RESULTS
After screening 6155 citations, four methodological studies investigating publication bias in HSR and 184 systematic reviews of HSR topics (including three comparing published with unpublished evidence) were examined. Evidence suggestive of publication bias was reported in some of the methodological studies, but the evidence presented was weak and limited in both quality and scope. Reliable data on outcome reporting bias and p-hacking were scant. HSR systematic reviews in which published literature was compared with unpublished evidence found significant differences in the estimated intervention effects or association in some but not all cases.
CONCLUSIONS
Methodological research on publication and related biases in HSR is sparse. Evidence from available literature suggests that such biases may exist in HSR but their scale and impact are difficult to estimate for various reasons discussed in this paper.
SYSTEMATIC REVIEW REGISTRATION
PROSPERO 2016 CRD42016052333.
Topics: Bias; Health Services Research; Humans; Publication Bias; Research Design
PubMed: 32487022
DOI: 10.1186/s12874-020-01010-1
Neurosurgery Mar 2022
BACKGROUND
Statistically significant positive results are more likely to be published than negative or insignificant outcomes. This phenomenon, also termed publication bias, can skew the interpretation of meta-analyses. The widespread presence of publication bias in the biomedical literature has led to the development of various statistical approaches, such as the visual inspection of funnel plots, Begg test, and Egger test, to assess and account for it.
OBJECTIVE
To determine how well publication bias is assessed for in meta-analyses of the neurosurgical literature.
METHODS
A systematic search for meta-analyses from the top neurosurgery journals was conducted. Data relevant to the presence, assessment, and adjustments for publication bias were extracted.
RESULTS
The search yielded 190 articles. Most of the articles (n = 108, 56.8%) were assessed for publication bias, of which 40 (37.0%) found evidence for publication bias whereas 61 (56.5%) did not. In the former case, only 11 (27.5%) made corrections for the bias using the trim-and-fill method, whereas 29 (72.5%) made no correction. Thus, 111 meta-analyses (58.4%) either did not assess for publication bias or, if assessed to be present, did not adjust for it.
CONCLUSION
Taken together, these results indicate that publication bias remains largely unaccounted for in neurosurgical meta-analyses.
Topics: Humans; Meta-Analysis as Topic; Neurosurgery; Neurosurgical Procedures; Publication Bias; Research Design
PubMed: 35849494
DOI: 10.1227/NEU.0000000000001788
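The record above names Egger's regression test as one of the standard checks for funnel-plot asymmetry. A minimal sketch of that test, implemented as plain ordinary least squares (the effect sizes and standard errors are invented for illustration; a real analysis would also test the intercept against its standard error, which is omitted here for brevity):

```python
# Hedged sketch of Egger's regression test: regress the standardized effect
# (effect / SE) on precision (1 / SE). An intercept far from zero suggests
# small-study effects consistent with publication bias. Illustration only.

def egger_intercept(effects, ses):
    """OLS fit of (effect/se) on (1/se); returns (intercept, slope).

    The slope estimates the pooled effect; the intercept is Egger's bias term.
    """
    y = [e / s for e, s in zip(effects, ses)]  # standardized effects
    x = [1.0 / s for s in ses]                 # precisions
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Hypothetical symmetric data: every study estimates the same effect (0.5),
# so the intercept should be ~0 (no asymmetry) and the slope ~0.5.
intercept, slope = egger_intercept([0.5, 0.5, 0.5, 0.5], [0.1, 0.2, 0.25, 0.5])
```

When small studies (large SE) systematically report larger effects, the points drift away from this line at low precision and the fitted intercept moves away from zero, which is exactly the asymmetry the funnel plot shows visually.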