The British Journal of Radiology, Jun 2016 (Meta-Analysis Review)
OBJECTIVE
Systematic reviews require comprehensive literature search strategies to avoid publication bias. This study aimed to evaluate the reporting quality of search strategies within systematic reviews published in the field of stereotactic radiosurgery (SRS).
METHODS
Three electronic databases (Ovid MEDLINE(®), Ovid EMBASE(®) and the Cochrane Library) were searched to identify systematic reviews addressing SRS interventions, with the last search performed in October 2014. Manual searches of the reference lists of included systematic reviews were conducted. The search strategies of the included systematic reviews were assessed using a standardized nine-question form based on the Cochrane Collaboration guidelines and Assessment of Multiple Systematic Reviews checklist. Multiple linear regression analyses were performed to identify the important predictors of search quality.
RESULTS
A total of 85 systematic reviews were included. The median quality score of search strategies was 2 (interquartile range = 2). Whilst 89% of systematic reviews reported the use of search terms, only 14% reported searching the grey literature. Multiple linear regression analyses identified publication year (continuous variable), performance of a meta-analysis, and journal impact factor (continuous variable) as predictors of higher mean quality scores.
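The regression described above can be illustrated with a toy ordinary-least-squares fit. The data below are invented for illustration only, and the variable names merely mirror the predictors reported in the abstract; this is a sketch of the general technique, not the authors' actual model:

```python
import numpy as np

# Invented illustrative data: one row per systematic review.
year = np.array([2008, 2009, 2010, 2011, 2012, 2013, 2014], dtype=float)
meta_analysis = np.array([0, 0, 1, 0, 1, 1, 1], dtype=float)  # 1 = meta-analysis performed
impact_factor = np.array([1.2, 1.5, 3.5, 1.8, 4.1, 2.9, 5.0])
quality_score = np.array([1, 1, 4, 2, 5, 4, 6], dtype=float)  # search-quality score (0-9)

# Multiple linear regression via least squares; centering year keeps the
# intercept interpretable as the expected score at the mean publication year.
X = np.column_stack([np.ones_like(year), year - year.mean(), meta_analysis, impact_factor])
coef, *_ = np.linalg.lstsq(X, quality_score, rcond=None)
intercept, b_year, b_meta, b_impact = coef
```

Each fitted coefficient estimates the change in mean quality score per unit change in its predictor, holding the others fixed.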
CONCLUSION
This study identified the urgent need to improve the quality of search strategies within systematic reviews published in the field of SRS.
ADVANCES IN KNOWLEDGE
This study is the first to address how authors performed searches to select clinical studies for inclusion in their systematic reviews. Comprehensive and well-implemented search strategies are pivotal to reduce the chance of publication bias and consequently generate more reliable systematic review findings.
Topics: Guideline Adherence; Guidelines as Topic; Peer Review, Research; Publication Bias; Radiosurgery; Review Literature as Topic
PubMed: 26986458
DOI: 10.1259/bjr.20150878
Journal of Medical Internet Research, Dec 2022
BACKGROUND
The introduction of new medical technologies such as sensors has accelerated the process of collecting patient data for relevant clinical decisions, which has led to the introduction of a new technology known as digital biomarkers.
OBJECTIVE
This study aims to assess the methodological quality and quality of evidence from meta-analyses of digital biomarker-based interventions.
METHODS
This study follows the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guideline for reporting systematic reviews. We included original English-language publications of systematic reviews reporting meta-analyses of clinical outcomes (efficacy and safety endpoints) of digital biomarker-based interventions compared with alternative interventions without digital biomarkers; imaging and other technologies that do not measure objective physiological or behavioral data were excluded. A literature search of PubMed and the Cochrane Library was conducted, limited to 2019-2020. The methodological quality and the quality of evidence synthesis of the meta-analyses were assessed using AMSTAR-2 (A Measurement Tool to Assess Systematic Reviews 2) and GRADE (Grading of Recommendations, Assessment, Development, and Evaluations), respectively. This study was funded by the National Research, Development and Innovation Fund of Hungary.
RESULTS
A total of 25 studies with 91 reported outcomes were included in the final analysis; 1 (4%), 1 (4%), and 23 (92%) studies had high, low, and critically low methodologic quality, respectively. Six clinical outcomes (7%) had high-quality evidence and 80 outcomes (88%) had moderate-quality evidence; 5 outcomes (5%) were rated with a low level of certainty, mainly due to risk of bias (85/91, 93%), inconsistency (27/91, 30%), and imprecision (27/91, 30%). There is high-quality evidence of improvements in mortality, transplant risk, cardiac arrhythmia detection, and stroke incidence with cardiac devices, albeit with low reporting quality. High-quality reviews of pedometers reported moderate-quality evidence, including effects on physical activity and BMI. No reports with both high-quality evidence and high methodological quality were found.
CONCLUSIONS
Researchers in this field should consider the AMSTAR-2 criteria and GRADE to produce high-quality studies in the future. In addition, patients, clinicians, and policymakers are advised to consider the results of this study before making clinical decisions regarding digital biomarkers to be informed of the degree of certainty of the various interventions investigated in this study. The results of this study should be considered with its limitations, such as the narrow time frame.
INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID)
RR2-10.2196/28204.
Topics: Humans; Bias; Hungary; Technology; Systematic Reviews as Topic; Biomarkers
PubMed: 36542427
DOI: 10.2196/41042
The Cochrane Database of Systematic Reviews, Apr 2016 (Review)
BACKGROUND
Improper practices and unprofessional conduct in clinical research have been shown to waste a significant portion of healthcare funds and harm public health.
OBJECTIVES
Our objective was to evaluate the effectiveness of educational or policy interventions in research integrity or responsible conduct of research on the behaviour and attitudes of researchers in health and other research areas.
SEARCH METHODS
We searched the CENTRAL, MEDLINE, LILACS and CINAHL health research bibliographical databases, as well as the Academic Search Complete, AGRICOLA, GeoRef, PsycINFO, ERIC, SCOPUS and Web of Science databases. We performed the last search on 15 April 2015 and the search was limited to articles published between 1990 and 2014, inclusive. We also searched conference proceedings and abstracts from research integrity conferences and specialized websites. We handsearched 14 journals that regularly publish research integrity research.
SELECTION CRITERIA
We included studies that measured the effects of one or more interventions, i.e. any direct or indirect procedure that may have an impact on research integrity and responsible conduct of research in its broadest sense, where participants were any stakeholders in research and publication processes, from students to policy makers. We included randomized and non-randomized controlled trials, such as controlled before-and-after studies, with comparisons of outcomes in the intervention versus non-intervention group or before versus after the intervention. Studies without a control group were not included in the review.
DATA COLLECTION AND ANALYSIS
We used the standard methodological procedures expected by Cochrane. To assess the risk of bias in non-randomized studies, we used a modified Cochrane tool, in which we used four out of six original domains (blinding, incomplete outcome data, selective outcome reporting, other sources of bias) and two additional domains (comparability of groups and confounding factors). We categorized our primary outcome into the following levels: 1) organizational change attributable to intervention, 2) behavioural change, 3) acquisition of knowledge/skills and 4) modification of attitudes/perceptions. The secondary outcome was participants' reaction to the intervention.
MAIN RESULTS
Thirty-one studies involving 9571 participants, described in 33 articles, met the inclusion criteria. All were published in English. Fifteen studies were randomized controlled trials, nine were controlled before-and-after studies, four were non-equivalent controlled studies with a historical control, one was a non-equivalent controlled study with a post-test only and two were non-equivalent controlled studies with pre- and post-test findings for the intervention group and post-test for the control group. Twenty-one studies assessed the effects of interventions related to plagiarism and 10 studies assessed interventions in research integrity/ethics. Participants included undergraduates, postgraduates and academics from a range of research disciplines and countries, and the studies assessed different types of outcomes.
We judged most of the included randomized controlled trials to have a high risk of bias in at least one of the assessed domains, and in the case of non-randomized trials there were no attempts to alleviate the potential biases inherent in the non-randomized designs.
We identified a range of interventions aimed at reducing research misconduct. Most interventions involved some kind of training, but methods and content varied greatly and included face-to-face and online lectures, interactive online modules, discussion groups, homework and practical exercises. Most studies did not use standardized or validated outcome measures and it was impossible to synthesize findings from studies with such diverse interventions, outcomes and participants. Overall, there is very low quality evidence that various methods of training in research integrity had some effects on participants' attitudes to ethical issues but minimal (or short-lived) effects on their knowledge.
Training about plagiarism and paraphrasing had varying effects on participants' attitudes towards plagiarism and their confidence in avoiding it, but training that included practical exercises appeared to be more effective. Training on plagiarism had inconsistent effects on participants' knowledge about and ability to recognize plagiarism. Active training, particularly if it involved practical exercises or use of text-matching software, generally decreased the occurrence of plagiarism although results were not consistent. The design of a journal's author contribution form affected the truthfulness of information supplied about individuals' contributions and the proportion of listed contributors who met authorship criteria. We identified no studies testing interventions for outcomes at the organizational level. The numbers of events and the magnitude of intervention effects were generally small, so the evidence is likely to be imprecise. No adverse effects were reported.
AUTHORS' CONCLUSIONS
The evidence base relating to interventions to improve research integrity is incomplete: the studies that have been done are heterogeneous, unsuitable for meta-analysis, and of uncertain applicability to other settings and populations. Many studies had a high risk of bias because of the choice of study design, and interventions were often inadequately reported. Even when randomized designs were used, findings were difficult to generalize. Due to the very low quality of evidence, the effects of training in responsible conduct of research on reducing research misconduct are uncertain. Low quality evidence indicates that training about plagiarism, especially if it involves practical exercises and use of text-matching software, may reduce the occurrence of plagiarism.
Topics: Attitude; Biomedical Research; Controlled Before-After Studies; Controlled Clinical Trials as Topic; Humans; Plagiarism; Publishing; Randomized Controlled Trials as Topic; Research Personnel; Scientific Misconduct
PubMed: 27040721
DOI: 10.1002/14651858.MR000038.pub2
Nursing Outlook, 2015 (Review)
OBJECTIVES
Systematic reviews (SRs) and meta-analyses (MAs) of nursing interventions have become increasingly popular in China. This review provides the first examination of epidemiological characteristics of these SRs as well as compliance with the Preferred Reporting Items for Systematic Reviews and Meta-analyses and Assessment of Multiple Systematic Reviews guidelines. The purpose of this study was to examine epidemiologic and reporting characteristics as well as the methodologic quality of SRs and MAs of nursing interventions published in Chinese journals.
METHODS
Four Chinese databases were searched (the Chinese Biomedicine Literature Database, Chinese Scientific Journal Full-text Database, Chinese Journal Full-text Database, and Wanfang Database) for SRs and MAs of nursing intervention from inception through June 2013. Data were extracted into Excel (Microsoft, Redmond, WA). The Assessment of Multiple Systematic Reviews and Preferred Reporting Items for Systematic Reviews and Meta-analyses checklists were used to assess methodologic quality and reporting characteristics, respectively.
RESULTS
A total of 144 SRs were identified, most (97.2%) of which used "systematic review" or "meta-analyses" in their titles. None of the reviews had been updated. Two-fifths (41%) were written by nurses, and more than half (61%) were published in specialist journals. The most common conditions studied were endocrine, nutritional and metabolic diseases, and neoplasms. Most (70.8%) reported information about quality assessment, whereas only a quarter (25%) reported assessing for publication bias. None of the reviews reported a conflict of interest.
CONCLUSIONS
Although many SRs of nursing interventions have been published in Chinese journals, the quality of these reviews is of concern. As a potential key source of information for nurses and nursing administrators, not only were many of these reviews incomplete in the information they provided, but also some results were misleading. Improving the quality of SRs of nursing interventions conducted and published by nurses in China is urgently needed in order to increase the value of these studies.
Topics: China; Humans; Meta-Analysis as Topic; Nursing; Periodicals as Topic; Publishing; Quality Control; Review Literature as Topic
PubMed: 26187084
DOI: 10.1016/j.outlook.2014.11.020
Annals of the Royal College of Surgeons of England, Jul 2018 (Review)
INTRODUCTION
Surgeon-specific outcome data, or consultant outcome publication, refers to public access to named surgeons' procedural outcomes. Consultant outcome publication originates from cardiothoracic surgery, having been introduced to US and UK surgery in 1991 and 2005, respectively. It has been associated with an improvement in patient outcomes. However, there is concern that it may also have led to changes in surgeon behaviour. This review assesses the literature for evidence of risk-averse behaviour, upgrading of patient risk factors and cessation of low-volume or poorly performing surgeons.
MATERIALS AND METHODS
A systematic literature review of the Embase and Medline databases was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Original studies including data on consultant outcome publication and its potential effect on surgeon behaviour were included.
RESULTS
Twenty-five studies were identified from the literature search. Studies suggesting the presence of risk-averse behaviour and upgrading of risk factors tended to be survey based, whereas studies contrary to these findings used recognised regional and national databases.
DISCUSSION AND CONCLUSION
Our review includes instances of consultant outcome publication leading to risk-averse behaviour, upgrading of risk factors and cessation of low-volume or poorly performing surgeons. As UK data on consultant outcome publication matures, further research is essential to ensure that high-risk patients are not inappropriately turned down for surgery.
Topics: Humans; Outcome Assessment, Health Care; Patient Selection; Practice Patterns, Physicians'; Publishing; Quality Improvement; Risk Assessment; Risk-Taking; Surgeons; United Kingdom; United States
PubMed: 29962298
DOI: 10.1308/rcsann.2018.0052
Neurosurgery, Mar 2022
BACKGROUND
Statistically significant positive results are more likely to be published than negative or insignificant outcomes. This phenomenon, also termed publication bias, can skew the interpretation of meta-analyses. The widespread presence of publication bias in the biomedical literature has led to the development of various statistical approaches, such as the visual inspection of funnel plots, Begg test, and Egger test, to assess and account for it.
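The background above names Egger's regression test as one of the standard statistical checks for funnel-plot asymmetry. As a hedged illustration (a generic sketch of the technique, not the implementation used in any study reviewed here), the test regresses each study's standardized effect on its precision and asks whether the intercept differs from zero:

```python
import numpy as np
from scipy import stats

def egger_test(effects, std_errors):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses each study's standardized effect (effect / SE) on its
    precision (1 / SE); an intercept far from zero suggests small-study
    effects such as publication bias. Returns (intercept, two-sided p).
    """
    effects = np.asarray(effects, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    z = effects / se          # standardized effects
    precision = 1.0 / se

    # Ordinary least squares with an explicit intercept column.
    X = np.column_stack([np.ones_like(precision), precision])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    n = len(z)
    sigma2 = resid @ resid / (n - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t_stat = beta[0] / np.sqrt(cov[0, 0])
    p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)
    return beta[0], p_value
```

The trim-and-fill method mentioned in the results goes a step further: rather than merely detecting asymmetry, it imputes the studies presumed missing from the sparse side of the funnel and re-estimates the pooled effect.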
OBJECTIVE
To determine how well publication bias is assessed in meta-analyses of the neurosurgical literature.
METHODS
A systematic search for meta-analyses from the top neurosurgery journals was conducted. Data relevant to the presence, assessment, and adjustments for publication bias were extracted.
RESULTS
The search yielded 190 articles. Most of the articles (n = 108, 56.8%) assessed publication bias; of these, 40 (37.0%) found evidence of publication bias whereas 61 (56.5%) did not. Of the 40 that found evidence, only 11 (27.5%) made corrections for the bias using the trim-and-fill method, whereas 29 (72.5%) made no correction. Thus, 111 meta-analyses (58.4%) either did not assess publication bias or, where it was found to be present, did not adjust for it.
CONCLUSION
Taken together, these results indicate that publication bias remains largely unaccounted for in neurosurgical meta-analyses.
Topics: Humans; Meta-Analysis as Topic; Neurosurgery; Neurosurgical Procedures; Publication Bias; Research Design
PubMed: 35849494
DOI: 10.1227/NEU.0000000000001788
BMC Medical Research Methodology, Jun 2020
BACKGROUND
Publication and related biases (including publication bias, time-lag bias, outcome reporting bias and p-hacking) have been well documented in clinical research, but relatively little is known about their presence and extent in health services research (HSR). This paper aims to systematically review evidence concerning publication and related bias in quantitative HSR.
METHODS
Databases including MEDLINE, EMBASE, HMIC, CINAHL, Web of Science, Health Systems Evidence, Cochrane EPOC Review Group and several websites were searched to July 2018. Information was obtained from: (1) Methodological studies that set out to investigate publication and related biases in HSR; (2) Systematic reviews of HSR topics which examined such biases as part of the review process. Relevant information was extracted from included studies by one reviewer and checked by another. Studies were appraised according to commonly accepted scientific principles due to lack of suitable checklists. Data were synthesised narratively.
RESULTS
After screening 6155 citations, four methodological studies investigating publication bias in HSR and 184 systematic reviews of HSR topics (including three comparing published with unpublished evidence) were examined. Evidence suggestive of publication bias was reported in some of the methodological studies, but evidence presented was very weak, limited in both quality and scope. Reliable data on outcome reporting bias and p-hacking were scant. HSR systematic reviews in which published literature was compared with unpublished evidence found significant differences in the estimated intervention effects or association in some but not all cases.
CONCLUSIONS
Methodological research on publication and related biases in HSR is sparse. Evidence from available literature suggests that such biases may exist in HSR but their scale and impact are difficult to estimate for various reasons discussed in this paper.
SYSTEMATIC REVIEW REGISTRATION
PROSPERO 2016 CRD42016052333.
Topics: Bias; Health Services Research; Humans; Publication Bias; Research Design
PubMed: 32487022
DOI: 10.1186/s12874-020-01010-1
Defining the publication source of high-quality evidence in urology: an analysis of EvidenceUpdates. BJU International, Jun 2016 (Review)
OBJECTIVES
To determine the publication sources of urology articles within EvidenceUpdates, a second-order peer review system of the medical literature designed to identify high-quality articles to support up-to-date and evidence-based clinical decisions.
MATERIALS AND METHODS
Using administrator-level access, all EvidenceUpdates citations from 2005 to 2014 were downloaded from the topics 'Surgery-Urology' and 'Oncology-Genitourinary'. Data fields accessed included PubMed unique reference identifier, study title, abstract, journal and date of publication, as well as clinical relevance and newsworthiness ratings as determined by discipline-specific physician raters. The citations were then coded by clinical topic (oncology, voiding dysfunction, erectile dysfunction/infertility, infection/inflammation, stones/endourology/laparoscopy, trauma/reconstruction, transplant, or other), journal category (general medical journal, oncology journal, urology journal, non-urology specialty journal, Cochrane review, or other), and study design (randomised controlled trial [RCT], systematic review, observational study, or other). Articles that were perceived to be misclassified and/or of no direct interest to urologists were excluded. Descriptive statistics using proportions and 95% confidence intervals, as well as means and standard deviations (SDs) were used to characterise the overall data cohort and to analyse trends over time.
RESULTS
We identified 731 unique citations classified under either 'Surgery-Urology' or 'Oncology-Genitourinary' for analysis after exclusions. Between 2005 and 2014, the most common topics were oncology (48.6%, 355 articles) and voiding dysfunction (21.8%, 159). Within the topic of oncology, prostate cancer contributed over half the studies (54.6%, n = 194). The most common study types were RCTs (42.3%, 309 articles) and systematic reviews (39.6%, 290). Systematic reviews had a nearly fourfold relative increase within less than a decade. The largest proportion of studies relevant to urology were published in general oncology journals (20.0%, n = 146), followed by the Cochrane Library (19.3%, n = 141) and general medical journals (17.2%, n = 126). Urology-specific journals contributed to only approximately one-tenth of EvidenceUpdates alerts (9.4%, n = 69), with the highest contribution occurring during the 2013/2014 period. For clinical relevance and newsworthiness scores (each graded on scales of 1-7), urology journals scored the highest in clinical relevance with a mean (SD) of 5.9 (0.75) and general medical journals scored highest for newsworthiness at 5.3 (0.94). On average, RCTs scored highest both for clinical relevance and newsworthiness with mean (SD) scores of 5.71 (0.81) and 5.22 (0.91), respectively.
CONCLUSION
A large number of high-quality, clinically relevant, and newsworthy peer-reviewed urology publications are published outside of traditional urology journals. This requires urologists to implement well-defined strategies to stay abreast of current best evidence.
Topics: Clinical Decision-Making; Evidence-Based Practice; Humans; Medical Oncology; Peer Review, Research; Periodicals as Topic; Publications; Quality Improvement; Urology
PubMed: 26663761
DOI: 10.1111/bju.13392
World Neurosurgery, Jul 2023 (Review)
To develop a research overview of brain tumor classification using machine learning, we conducted a systematic review with a bibliometric analysis. Our systematic review and bibliometric analysis included 1747 studies of automated brain tumor detection using machine learning reported in the previous 5 years (2019-2023) from 679 different sources and authored by 6632 investigators. Bibliographic data were collected from the Scopus database, and a comprehensive bibliometric analysis was conducted using Biblioshiny and the R platform. The most productive and collaborative institutes, reports, journals, and countries were determined using citation analysis. In addition, various collaboration metrics were determined at the institute, country, and author level. Lotka's law was tested using the authors' performance. Analysis showed that the authors' publication trends followed Lotka's inverse square law. An annual publication analysis showed that 36.46% of the studies had been reported in 2022, with steady growth from previous years. Most of the cited authors had focused on multiclass classification and novel convolutional neural network models that are efficient for small training sets. A keyword analysis showed that "deep learning," "magnetic resonance imaging," "nuclear magnetic resonance imaging," and "glioma" appeared most often, indicating that of the several brain tumor types, most studies had focused on glioma. India, China, and the United States were among the highest collaborative countries in terms of both authors and institutes. The University of Toronto and Harvard Medical School had the highest number of affiliations with 132 and 87 publications, respectively.
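The abstract above reports that author productivity followed Lotka's inverse-square law, under which the share of authors with exactly k publications falls off as 1/k². A minimal sketch of the expected distribution (using the standard normalizing constant 6/π² for the pure inverse-square form, so the shares over all k ≥ 1 sum to 1):

```python
import math

def lotka_expected_share(k: int) -> float:
    """Expected share of authors with exactly k publications under
    Lotka's inverse-square law: f(k) = C / k**2, with C = 6 / pi**2
    chosen so that the shares over all k >= 1 sum to 1."""
    C = 6 / math.pi ** 2
    return C / k ** 2
```

Under this law roughly 61% of authors contribute a single paper, and an author with twice as many papers is four times rarer, which is the pattern the bibliometric test checks against the observed author counts.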
Topics: Humans; Brain; Brain Neoplasms; Glioma; Machine Learning; Bibliometrics; Radiopharmaceuticals
PubMed: 37019303
DOI: 10.1016/j.wneu.2023.03.115
Systematic Reviews, Dec 2023
BACKGROUND AND OBJECTIVE
The living systematic review (LSR) approach is based on ongoing surveillance of the literature and continual updating. Most currently available guidance documents address the conduct, reporting, publishing, and appraisal of systematic reviews (SRs), but are not suitable for LSRs per se and miss additional LSR-specific considerations. In this scoping review, we aim to systematically collate methodological guidance literature on how to conduct, report, publish, and appraise the quality of LSRs and identify current gaps in guidance.
METHODS
A standard scoping review methodology was used. We searched MEDLINE (Ovid), EMBASE (Ovid), and The Cochrane Library on August 28, 2021. For gray literature, we searched for existing guidelines and handbooks on LSRs from organizations that conduct evidence syntheses. Screening was conducted by two authors independently in Rayyan, and data extraction was done in duplicate using a pilot-tested data extraction form in Excel. Data were extracted according to four pre-defined categories for (i) conducting, (ii) reporting, (iii) publishing, and (iv) appraising LSRs. We mapped the findings in overview tables created in Microsoft Word.
RESULTS
Of the 21 included papers, methodological guidance was found in 17 papers for conducting, in six papers for reporting, in 15 papers for publishing, and in two papers for appraising LSRs. Some of the identified key items for (i) conducting LSRs were identifying the rationale, screening tools, or re-evaluating inclusion criteria. For (ii) reporting, identified items from the original PRISMA checklist included reporting the registration and protocol, title, or synthesis methods. For (iii) publishing, there was guidance available on publication type and frequency or update trigger, and for (iv) appraising, guidance was found on the appropriate use of bias assessment or reporting funding of included studies. Our search revealed major evidence gaps, particularly for guidance on certain PRISMA items such as reporting results, discussion, support and funding, and availability of data and material of an LSR.
CONCLUSION
Important evidence gaps were identified in guidance on how to report LSRs and appraise their quality. Our findings were applied to inform and prepare a PRISMA 2020 extension for LSRs.
Topics: Humans; Publishing; Bias; Checklist; Research Report; MEDLINE
PubMed: 38098023
DOI: 10.1186/s13643-023-02396-x