Systematic Reviews, Dec 2023
BACKGROUND AND OBJECTIVE
The living systematic review (LSR) approach is based on ongoing surveillance of the literature and continual updating. Most currently available guidance documents address the conduct, reporting, publishing, and appraisal of systematic reviews (SRs), but are not suitable for LSRs per se and miss additional LSR-specific considerations. In this scoping review, we aim to systematically collate methodological guidance literature on how to conduct, report, publish, and appraise the quality of LSRs and identify current gaps in guidance.
METHODS
A standard scoping review methodology was used. We searched MEDLINE (Ovid), EMBASE (Ovid), and The Cochrane Library on August 28, 2021. For gray literature, we searched for existing guidelines and handbooks on LSRs from organizations that conduct evidence syntheses. Screening was conducted by two authors independently in Rayyan, and data extraction was done in duplicate using a pilot-tested data extraction form in Excel. Data were extracted according to four pre-defined categories: (i) conducting, (ii) reporting, (iii) publishing, and (iv) appraising LSRs. We mapped the findings by visualizing overview tables created in Microsoft Word.
RESULTS
Of the 21 included papers, methodological guidance was found in 17 papers for conducting, in six papers for reporting, in 15 papers for publishing, and in two papers for appraising LSRs. Some of the identified key items for (i) conducting LSRs were identifying the rationale, screening tools, or re-evaluating inclusion criteria. Identified items of (ii) the original PRISMA checklist included reporting the registration and protocol, title, or synthesis methods. For (iii) publishing, there was guidance available on publication type and frequency or update trigger, and for (iv) appraising, guidance on the appropriate use of bias assessment or reporting funding of included studies was found. Our search revealed major evidence gaps, particularly for guidance on certain PRISMA items such as reporting results, discussion, support and funding, and availability of data and material of an LSR.
CONCLUSION
Important evidence gaps were identified for guidance on how to report in LSRs and appraise their quality. Our findings were applied to inform and prepare a PRISMA 2020 extension for LSRs.
Topics: Humans; Publishing; Bias; Checklist; Research Report; MEDLINE
PubMed: 38098023
DOI: 10.1186/s13643-023-02396-x
Gut and Liver, Nov 2015
Review
A systematic review (SR) provides the best and most objective analysis of the existing evidence in a particular field. SRs and derived conclusions are essential for evidence-based strategies in medicine and evidence-based guidelines in clinical practice. The popularity of SRs has also increased markedly in the field of hepatology. However, although SRs are considered to provide a higher level of evidence with greater confidence than original articles, there have been no reports on the quality of SRs and meta-analyses (MAs) in the field of hepatology. Therefore, we performed a quality assessment of 225 SRs and MAs that were recently published in the field of hepatology (January 2011 to September 2014) using A MeaSurement Tool to Assess systematic Reviews (AMSTAR). Using AMSTAR, we found that many SRs and MAs lacked assessments of the scientific quality of individual studies and showed publication bias. This review addresses the concern that SRs and MAs need to be conducted in a stricter and more objective manner to minimize bias and random errors. Thus, SRs and MAs should be supported by a multidisciplinary approach that includes clinical experts, methodologists, and statisticians.
Topics: Gastroenterology; Humans; Meta-Analysis as Topic; Publication Bias; Review Literature as Topic
PubMed: 26503570
DOI: 10.5009/gnl14451
BMJ Open, Mar 2018
Review
Are methodological quality and completeness of reporting associated with citation-based measures of publication impact? A secondary analysis of a systematic review of dementia biomarker studies.
OBJECTIVE
To determine whether methodological and reporting quality are associated with surrogate measures of publication impact in the field of dementia biomarker studies.
METHODS
We assessed dementia biomarker studies included in a previous systematic review in terms of methodological and reporting quality using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) and Standards for Reporting of Diagnostic Accuracy (STARD), respectively. We extracted additional study and journal-related data from each publication to account for factors shown to be associated with impact in previous research. We explored associations between potential determinants and measures of publication impact in univariable and stepwise multivariable linear regression analyses.
OUTCOME MEASURES
We aimed to collect data on four measures of publication impact: two traditional measures (average number of citations per year and the 5-year impact factor of the publishing journal) and two alternative measures (the Altmetric Attention Score and counts of electronic downloads).
RESULTS
The systematic review included 142 studies. Due to limited data, Altmetric Attention Scores and electronic downloads were excluded from the analysis, leaving traditional metrics as the only analysed outcome measures. We found no relationship between QUADAS and traditional metrics. Citation rates were independently associated with 5-year journal impact factor (β=0.42; p<0.001), journal subject area (β=0.39; p<0.001), number of years since publication (β=-0.29; p<0.001) and STARD (β=0.13; p<0.05). Independent determinants of 5-year journal impact factor were citation rates (β=0.45; p<0.001), statement on conflict of interest (β=0.22; p<0.01) and baseline sample size (β=0.15; p<0.05).
CONCLUSIONS
Citation rates and 5-year journal impact factor appear to measure different dimensions of impact. Citation rates were weakly associated with completeness of reporting, while neither traditional metric was related to methodological rigour. Our results suggest that high publication usage and a prestigious journal outlet are no guarantee of quality, and readers should critically appraise all papers regardless of presumed impact.
Topics: Bibliometrics; Biomarkers; Conflict of Interest; Dementia; Humans; Journal Impact Factor; Periodicals as Topic; Sample Size
PubMed: 29572396
DOI: 10.1136/bmjopen-2017-020331
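The coefficients reported above (e.g., β=0.42 for journal impact factor) are standardized regression coefficients from linear regression on z-scored variables. A minimal pure-Python sketch, using made-up toy data (not the authors' dataset), illustrating why a univariable standardized slope equals the Pearson correlation:

```python
# Hedged sketch (illustrative, not the authors' analysis): computing a
# standardized regression coefficient ("beta") for one predictor.

def zscore(xs):
    # Convert values to z-scores using the sample standard deviation.
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / (n - 1)) ** 0.5
    return [(x - mean) / sd for x in xs]

def ols_slope(xs, ys):
    # Least-squares slope b for the model y = a + b*x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def standardized_beta(xs, ys):
    # Slope after z-scoring both variables; comparable across predictors,
    # and for a single predictor it equals the Pearson correlation r.
    return ols_slope(zscore(xs), zscore(ys))

# Hypothetical data standing in for, e.g., STARD score vs. citation rate.
x = [10, 12, 15, 18, 20, 22, 25]
y = [1.0, 1.4, 1.3, 2.0, 2.2, 2.1, 3.0]
print(round(standardized_beta(x, y), 3))
```

Stepwise multivariable selection, as used in the study, repeats this fitting while adding or dropping predictors; the univariable case above is the building block.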
Journal of Clinical Epidemiology, Dec 2017
Meta-Analysis Review
OBJECTIVES
The aim of this study was to identify and quantify the characteristics of studies associated with the likelihood of publication.
STUDY DESIGN AND SETTING
We searched for manuscripts that tracked cohorts of clinical studies ("cohorts") from launch to publication. We explored the association of study characteristics with the probability of publication via traditional meta-analyses and meta-regression using random effects models.
RESULTS
The literature review identified 85 cohorts of studies that met our inclusion criteria. The probability of publication was significantly higher for studies whose results were favorable (odds ratio [OR] = 2.04; 95% confidence interval [CI]: 1.62, 2.57) or statistically significant (OR = 2.07; 95% CI: 1.52, 2.81), for studies with a multicenter design (OR = 1.32; 95% CI: 1.16, 1.45), and for studies of later regulatory phase (3/4 vs. 1/2: OR = 1.34; 95% CI: 1.14, 1.49). Industry funding was modestly associated with a lower probability of publication (OR = 0.81; 95% CI: 0.67, 0.99). An exploratory analysis of effect modification revealed that the effect of favorable results on the likelihood of publication was stronger for industry-funded studies.
CONCLUSION
Favorable and statistically significant results were associated with a greater probability of publication.
Topics: Confidence Intervals; Financial Management; Odds Ratio; Probability; Publication Bias; Publications; United States
PubMed: 28842289
DOI: 10.1016/j.jclinepi.2017.08.004
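The pooled odds ratios above come from random-effects meta-analysis. A minimal sketch of the standard DerSimonian-Laird estimator, with hypothetical per-cohort numbers (not the paper's data), showing how log odds ratios are combined into a summary OR with a 95% CI:

```python
import math

# Hedged sketch (not the paper's actual code): DerSimonian-Laird
# random-effects pooling of log odds ratios.

def dersimonian_laird(log_ors, variances):
    # Fixed-effect (inverse-variance) weights and Cochran's Q statistic.
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sw
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))
    df = len(log_ors) - 1
    # Method-of-moments between-study variance (tau^2), floored at zero.
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights, pooled estimate, and 95% CI on the OR scale.
    w_re = [1.0 / (v + tau2) for v in variances]
    sw_re = sum(w_re)
    mu = sum(wi * y for wi, y in zip(w_re, log_ors)) / sw_re
    se = sw_re ** -0.5
    return math.exp(mu), math.exp(mu - 1.96 * se), math.exp(mu + 1.96 * se)

# Toy per-cohort odds ratios and standard errors of log(OR), for illustration.
ors = [1.8, 2.5, 1.6, 2.2]
ses = [0.30, 0.25, 0.40, 0.20]
pooled, lo, hi = dersimonian_laird([math.log(o) for o in ors],
                                   [s ** 2 for s in ses])
print(f"OR = {pooled:.2f} (95% CI: {lo:.2f}, {hi:.2f})")
```

When the between-study variance tau² is zero, the estimate collapses to the fixed-effect result; larger heterogeneity widens the confidence interval.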
BMC Research Notes, Dec 2017
Meta-Analysis
OBJECTIVE
PROSPERO, an international prospective register of systematic reviews, was launched in February 2011 to reduce publication bias of systematic reviews (SRs). A questionnaire survey of SR researchers conducted in 2005 indicated the existence of unpublished SRs and the potential influence of lack of funding as a reason for non-publication. Here, we investigated the publication status of SRs registered during PROSPERO's first year and assessed the association between publication and the existence of funding or conflicts of interest (COIs).
RESULTS
We identified 326 SRs registered in PROSPERO from February 2011 through February 2012. Among them, 85 SRs (26%) remained unpublished at least 65 months after registration. We found 241 published reports, including four conference abstracts and one poster presentation. Median time to publication from protocol registration was 16.3 months. Funding for SRs was associated with publication [odds ratio (OR) = 2.10; 95% confidence interval (CI) = 1.26 to 3.50]. We found no significant association of author-reported COIs with publication (OR = 2.35; 95% CI = 0.67 to 8.20). Twenty SRs were not published despite the authors reporting completion of the reviews in PROSPERO.
Topics: Epidemiologic Studies; Financial Support; Review Literature as Topic
PubMed: 29208054
DOI: 10.1186/s13104-017-3043-5
The Cochrane Database of Systematic Reviews, Apr 2016
Review
BACKGROUND
Improper practices and unprofessional conduct in clinical research have been shown to waste a significant portion of healthcare funds and harm public health.
OBJECTIVES
Our objective was to evaluate the effectiveness of educational or policy interventions in research integrity or responsible conduct of research on the behaviour and attitudes of researchers in health and other research areas.
SEARCH METHODS
We searched the CENTRAL, MEDLINE, LILACS and CINAHL health research bibliographical databases, as well as the Academic Search Complete, AGRICOLA, GeoRef, PsycINFO, ERIC, SCOPUS and Web of Science databases. We performed the last search on 15 April 2015 and the search was limited to articles published between 1990 and 2014, inclusive. We also searched conference proceedings and abstracts from research integrity conferences and specialized websites. We handsearched 14 journals that regularly publish research integrity research.
SELECTION CRITERIA
We included studies that measured the effects of one or more interventions, i.e. any direct or indirect procedure that may have an impact on research integrity and responsible conduct of research in its broadest sense, where participants were any stakeholders in research and publication processes, from students to policy makers. We included randomized and non-randomized controlled trials, such as controlled before-and-after studies, with comparisons of outcomes in the intervention versus non-intervention group or before versus after the intervention. Studies without a control group were not included in the review.
DATA COLLECTION AND ANALYSIS
We used the standard methodological procedures expected by Cochrane. To assess the risk of bias in non-randomized studies, we used a modified Cochrane tool, in which we used four out of six original domains (blinding, incomplete outcome data, selective outcome reporting, other sources of bias) and two additional domains (comparability of groups and confounding factors). We categorized our primary outcome into the following levels: 1) organizational change attributable to intervention, 2) behavioural change, 3) acquisition of knowledge/skills and 4) modification of attitudes/perceptions. The secondary outcome was participants' reaction to the intervention.
MAIN RESULTS
Thirty-one studies involving 9571 participants, described in 33 articles, met the inclusion criteria. All were published in English. Fifteen studies were randomized controlled trials, nine were controlled before-and-after studies, four were non-equivalent controlled studies with a historical control, one was a non-equivalent controlled study with a post-test only and two were non-equivalent controlled studies with pre- and post-test findings for the intervention group and post-test for the control group. Twenty-one studies assessed the effects of interventions related to plagiarism and 10 studies assessed interventions in research integrity/ethics. Participants included undergraduates, postgraduates and academics from a range of research disciplines and countries, and the studies assessed different types of outcomes.
We judged most of the included randomized controlled trials to have a high risk of bias in at least one of the assessed domains, and in the case of non-randomized trials there were no attempts to alleviate the potential biases inherent in the non-randomized designs.
We identified a range of interventions aimed at reducing research misconduct. Most interventions involved some kind of training, but methods and content varied greatly and included face-to-face and online lectures, interactive online modules, discussion groups, homework and practical exercises. Most studies did not use standardized or validated outcome measures, and it was impossible to synthesize findings from studies with such diverse interventions, outcomes and participants. Overall, there is very low quality evidence that various methods of training in research integrity had some effects on participants' attitudes to ethical issues but minimal (or short-lived) effects on their knowledge.
Training about plagiarism and paraphrasing had varying effects on participants' attitudes towards plagiarism and their confidence in avoiding it, but training that included practical exercises appeared to be more effective. Training on plagiarism had inconsistent effects on participants' knowledge about and ability to recognize plagiarism. Active training, particularly if it involved practical exercises or use of text-matching software, generally decreased the occurrence of plagiarism although results were not consistent. The design of a journal's author contribution form affected the truthfulness of information supplied about individuals' contributions and the proportion of listed contributors who met authorship criteria. We identified no studies testing interventions for outcomes at the organizational level. The numbers of events and the magnitude of intervention effects were generally small, so the evidence is likely to be imprecise. No adverse effects were reported.
AUTHORS' CONCLUSIONS
The evidence base relating to interventions to improve research integrity is incomplete, and the studies that have been done are heterogeneous, inappropriate for meta-analysis, and of uncertain applicability to other settings and populations. Many studies had a high risk of bias because of the choice of study design, and interventions were often inadequately reported. Even when randomized designs were used, findings were difficult to generalize. Due to the very low quality of evidence, the effects of training in responsible conduct of research on reducing research misconduct are uncertain. Low quality evidence indicates that training about plagiarism, especially if it involves practical exercises and use of text-matching software, may reduce the occurrence of plagiarism.
Topics: Attitude; Biomedical Research; Controlled Before-After Studies; Controlled Clinical Trials as Topic; Humans; Plagiarism; Publishing; Randomized Controlled Trials as Topic; Research Personnel; Scientific Misconduct
PubMed: 27040721
DOI: 10.1002/14651858.MR000038.pub2
The Journal of Hand Surgery, Mar 2010
Review
PURPOSE
Kienböck's disease is considered rare and currently affects fewer than 200,000 people in the United States. Given the inherent challenges associated with researching rare diseases, the intense effort in hand surgery to treat this uncommon disorder may be influenced by publication bias in which positive outcomes are preferentially published. The specific aim of this project was to conduct a systematic review of the literature with the hypothesis that publication bias is present for the treatment of Kienböck's disease.
METHODS
We conducted a systematic review of all available abstracts associated with published manuscripts (English and non-English) and abstracts accepted to the 1992 to 2004 American Society for Surgery of the Hand (ASSH) annual meetings. Data collection included various study characteristics, direction of outcome (positive, neutral/negative), complication rates, mean follow-up time, time to publication, and length of patient enrollment.
RESULTS
Our study included 175 (124 English, 51 non-English) published manuscripts and 14 abstracts from the 1992 to 2004 annual ASSH meetings. Abstracts from published manuscripts were associated with a 53% positive outcome rate, which is lower than the 74% positive outcome rate found among other surgically treated disorders. Over the past 40 years, studies have become more positive (36% to 68%, p=.007) and are more likely to incorporate statistical analysis testing (0% to 55%, p<.001). Of the 14 abstracts accepted to ASSH, 11 were published in peer-reviewed journals. Ten of the 14 accepted abstracts were considered positive, and there was no significant difference in publication rate between studies with positive (n = 10) and negative (n = 4) outcomes (p>.999).
CONCLUSIONS
The acceptance rate for negative-outcome studies regarding Kienböck's disease is higher than for other surgical disorders. This may indicate a relative decrease in positive outcome bias among published Kienböck's disease studies compared with other surgical disorders. However, the increasing positive outcome rate for published Kienböck's disease studies over time may suggest a trend of increasing publication bias among journals toward Kienböck's disease studies.
Topics: Abstracting and Indexing; Bibliometrics; Humans; Osteonecrosis; Periodicals as Topic; Publication Bias; Publishing
PubMed: 20193856
DOI: 10.1016/j.jhsa.2009.12.003
Journal of Dental Research, Oct 2015
Review
Economic evaluation (EE) studies have been undertaken in dentistry since the late 20th century because economic data provide additional information to policy makers to develop guidelines and set future directions for oral health services. The objective of this study was to assess the methodological quality of EEs in oral health. Electronic searches of Ovid MEDLINE, the Cochrane Library, and the NHS Economic Evaluation Database from 1975 to 2013 were undertaken to identify publications that include costs and outcomes in dentistry. Relevant reference lists were also searched for additional studies. Studies were retrieved and reviewed independently for inclusion by 3 authors. Furthermore, to appraise the EE methods, 1 author applied the Drummond 10-item (13-criteria) checklist tool to each study. Of the 114 publications identified, 79 studies were considered full EE and 35 partial. Twenty-eight studies (30%) were published between the years 2011 and 2013. Sixty-four (53%) studies focused on dental caries prevention or treatment. Median appraisal scores calculated for full and partial EE studies were 11 and 9 out of 13, respectively. Quality assessment scores showed that the quality of partial EE studies published after 2000 significantly improved (P = 0.02) compared to those published before 2000. Significant quality improvement was not found in full EE studies. Common methodological limitations were identified: absence of sensitivity analysis, discounting, and insufficient information on how costs and outcomes were measured and valued. EE studies in dentistry increased over the last 40 y in both quantity and quality, but a number of publications failed to satisfy some components of standard EE research methods, such as sensitivity analysis and discounting.
Topics: Cost-Benefit Analysis; Dental Caries; Dental Research; Dentistry; Economics, Dental; Humans; Publications; Quality Assurance, Health Care
PubMed: 26082388
DOI: 10.1177/0022034515589958
BMJ Open, Jul 2014
Review
BACKGROUND
Ghostwriting of industry-sponsored articles is unethical and is perceived to be common practice.
OBJECTIVE
To systematically review how evidence for the prevalence of ghostwriting is reported in the medical literature.
DATA SOURCES
MEDLINE via PubMed 1966+, EMBASE 1966+, The Cochrane Library 1988+, Medical Writing 1998+, The American Medical Writers Association (AMWA) Journal 1986+, Council of Science Editors Annual Meetings 2007+, and the Peer Review Congress 1994+ were searched electronically (23 May 2013) using the search terms ghostwrit*, ghostauthor*, ghost AND writ*, ghost AND author*.
ELIGIBILITY CRITERIA
All publication types were considered; only publications reporting a numerical estimate of possible ghostwriting prevalence were included.
DATA EXTRACTION
Two independent reviewers screened the publications; discrepancies were resolved by consensus. Data to be collected included a numerical estimate of the prevalence of possible ghostwriting (primary outcome measure), definitions of ghostwriting reported, source of the reported prevalence, publication type and year, study design and sample population.
RESULTS
Of the 848 publications retrieved and screened for eligibility, 48 reported numerical estimates for the prevalence of possible ghostwriting. Sixteen primary publications reported findings from cross-sectional surveys or descriptive analyses of published articles; 32 secondary publications cited published or unpublished evidence. Estimates on the prevalence of possible ghostwriting in primary and secondary publications varied markedly. Primary estimates were not suitable for meta-analysis because of the various definitions of ghostwriting used, study designs and types of populations or samples. Secondary estimates were not always reported or cited correctly or appropriately.
CONCLUSIONS
Evidence for the prevalence of ghostwriting in the medical literature is limited and can be outdated, misleading or mistaken. Researchers should not inflate estimates using non-standard definitions of ghostwriting nor conflate ghostwriting with other unethical authorship practices. Editors and peer reviewers should not accept articles that incorrectly cite or interpret primary publications that report the prevalence of ghostwriting.
Topics: Authorship; Biomedical Research; Consensus; Humans; Periodicals as Topic; Publishing
PubMed: 25023129
DOI: 10.1136/bmjopen-2013-004777
The Australian and New Zealand Journal of Psychiatry, Sep 2022
Meta-Analysis
OBJECTIVE
This review aimed to measure the degree of placebo response in panic disorder.
DATA SOURCES
We searched major databases up to 31 January 2021, for randomized pharmacotherapy trials published in English.
STUDY SELECTION
A total of 43 studies (with 174 separate outcome measurements) met the inclusion criteria for the analysis.
DATA EXTRACTION
Changes in outcome measures from baseline in the placebo group were used to estimate modified Cohen's effect size.
RESULTS
A total of 43 trials (2392 subjects, 174 outcomes using 27 rating scales) were included in the meta-analysis. Overall placebo effect size was 0.57 (95% confidence interval = [0.50, 0.64]; heterogeneity I² = 96.3%). Higher placebo effect size was observed among clinician-rated scales compared to patient reports (0.75 vs 0.35) and among general symptom and anxiety scales compared to panic symptoms and depression scales (0.92 and 0.64 vs 0.56 and 0.54, respectively). There was an upward trend in effect size over the publication period (β = 0.02, p = 0.002) that was only significant among clinician-rated scales (β = 0.02, p = 0.011). There was no significant publication bias on Egger's test (p = 0.08).
CONCLUSION
We observed a substantial placebo effect size in panic disorder. This effect was more prominent for some aspects of panic disorder psychopathology than for others and was correlated with the source of the assessment and publication year. This finding has implications both for research design, to address the heterogeneity and diversity in placebo responses, and for clinical practice to ensure optimal quality of care.
SYSTEMATIC REVIEW REGISTRATION NUMBER
PROSPERO, CRD42019125979.
Topics: Humans; Outcome Assessment, Health Care; Panic Disorder; Placebo Effect; Publication Bias
PubMed: 34996304
DOI: 10.1177/00048674211068793
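The review above pools within-group standardized mean changes in the placebo arm and reports I² for heterogeneity. A small illustrative sketch with made-up numbers (the scale means and SD below are hypothetical, not taken from the review):

```python
# Hedged sketch (illustrative only): the two quantities behind the
# abstract's headline numbers - a standardized mean change for a placebo
# arm, and the I^2 heterogeneity statistic.

def standardized_mean_change(baseline_mean, endpoint_mean, baseline_sd):
    # Effect size of improvement from baseline within one group,
    # expressed in baseline standard-deviation units.
    return (baseline_mean - endpoint_mean) / baseline_sd

def i_squared(q, df):
    # Percentage of total variability in effect estimates that is due to
    # between-study heterogeneity rather than chance (floored at 0).
    return max(0.0, (q - df) / q) * 100.0

# Toy numbers: a panic symptom scale dropping from 24.0 to 17.5 (SD 11.4).
print(round(standardized_mean_change(24.0, 17.5, 11.4), 2))
# Toy heterogeneity: Cochran's Q = 500.0 on 42 degrees of freedom.
print(round(i_squared(500.0, 42), 1))
```

An I² near the review's 96.3% means almost all of the spread between placebo-arm effect sizes reflects genuine between-study differences, which is why the authors explored moderators such as rater type and publication year.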