Critical Care (London, England) Dec 2020
Topics: Bayes Theorem; COVID-19; Critical Illness; Humans; Intensive Care Units; Research Design
PubMed: 33298134
DOI: 10.1186/s13054-020-03393-5 -
Behavior Research Methods Feb 2021
When two independent means μ₁ and μ₂ are compared, H₀: μ₁ = μ₂, H₁: μ₁ ≠ μ₂, and H₂: μ₁ > μ₂ are the hypotheses of interest. This paper introduces the R package SSDbain, which can be used to determine the sample size needed to evaluate these hypotheses using the approximate adjusted fractional Bayes factor (AAFBF) implemented in the R package bain. Both the Bayesian t test and the Bayesian Welch's test are available in this R package. The sample size required will be calculated such that the probability that the Bayes factor is larger than a threshold value is at least η if either the null or alternative hypothesis is true. Using the R package SSDbain and/or the tables provided in this paper, psychological researchers can easily determine the required sample size for their experiments.
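The logic of such a simulation-based sample size determination can be sketched in a few lines. The sketch below is illustrative only: it substitutes a BIC-based Bayes factor approximation for bain's AAFBF, considers only H₀ versus the two-sided H₁, and the function names, effect size, threshold, and grid are invented defaults, not SSDbain's.

```python
import math
import random
import statistics

def bf01_two_sample(x, y):
    """Approximate BF01 for H0: mu1 == mu2 vs H1: mu1 != mu2 via the
    BIC approximation exp((BIC1 - BIC0) / 2) -- a stand-in for the AAFBF."""
    n = len(x) + len(y)
    pooled = x + y
    grand = statistics.fmean(pooled)
    sse0 = sum((v - grand) ** 2 for v in pooled)        # common-mean model
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sse1 = sum((v - mx) ** 2 for v in x) + sum((v - my) ** 2 for v in y)
    bic0 = n * math.log(sse0 / n) + 1 * math.log(n)     # 1 mean parameter
    bic1 = n * math.log(sse1 / n) + 2 * math.log(n)     # 2 mean parameters
    return math.exp((bic1 - bic0) / 2)

def required_n(effect=0.5, threshold=3.0, eta=0.8, reps=400, seed=1):
    """Smallest per-group n (on a coarse grid) such that
    P(BF01 > threshold | H0) >= eta and P(BF10 > threshold | H1) >= eta."""
    rng = random.Random(seed)
    for n in range(10, 500, 10):
        hits_h0 = hits_h1 = 0
        for _ in range(reps):
            # Data generated under H0: both groups share mean 0.
            x0 = [rng.gauss(0, 1) for _ in range(n)]
            y0 = [rng.gauss(0, 1) for _ in range(n)]
            hits_h0 += bf01_two_sample(x0, y0) > threshold
            # Data generated under H1: means differ by `effect` SDs.
            x1 = [rng.gauss(0, 1) for _ in range(n)]
            y1 = [rng.gauss(effect, 1) for _ in range(n)]
            hits_h1 += 1.0 / bf01_two_sample(x1, y1) > threshold
        if hits_h0 / reps >= eta and hits_h1 / reps >= eta:
            return n
    return None
```

As in the paper, the search treats both error directions symmetrically: n must give compelling evidence with probability at least η whichever hypothesis is true.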
Topics: Bayes Theorem; Humans; Probability; Research Design; Sample Size
PubMed: 32632740
DOI: 10.3758/s13428-020-01408-1 -
Journal of Neurochemistry Oct 2016
Review
The most obvious difference in science publishing between 'then' and 'now' is the dramatic change in the communication of data and in their interpretation. The democratization of science via the Internet has brought not only benefits but also challenges to publishing, including fraudulent behavior and plagiarism, data and statistics reporting standards, authorship confirmation and other issues which affect authors, readers, and publishers in different ways. The wide accessibility of data on a global scale permits acquisition and meta-analysis to mine for novel synergies, and has created a highly commercialized environment. As we illustrate here, identifying unacceptable practices leads to changes in the standards for data reporting. This article reviews these benefits and challenges with the aim of providing readers with practical examples and hands-on guidelines. It is part of the 60th Anniversary special issue.
Topics: Authorship; Humans; Periodicals as Topic; Plagiarism; Publishing; Research Design; Scientific Misconduct
PubMed: 26997145
DOI: 10.1111/jnc.13550 -
Clinical Infectious Diseases : An... Mar 2018
Review
Clinical trials with adaptive designs use data that accumulate during the course of the study to modify study elements in a prespecified manner. The goal is to provide flexibility such that a trial can serve as a definitive test of its primary hypothesis, preferably in a shorter time period, involving fewer human subjects, and at lower cost. Elements that may be modified include the sample size, end points, eligible population, randomization ratio, and interventions. Accumulating data used to drive these modifications include the outcomes, subject enrollment (including factors associated with the outcomes), and information about the application of the interventions. This review discusses the types of adaptive designs for clinical trials, emphasizing their advantages and limitations in comparison with conventional designs, and opportunities for applying these designs to healthcare epidemiology research, including studies of interventions to prevent healthcare-associated infections, combat antimicrobial resistance, and improve antimicrobial stewardship.
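As a concrete illustration of one adaptive element, sample size modification, the sketch below recomputes the required sample size from the variability observed at an interim look. This is blinded sample size re-estimation in its simplest textbook form; the function names and defaults are illustrative and not drawn from any specific trial design.

```python
import math

def required_n_per_arm(sd, delta, z_alpha=1.96, z_beta=0.84):
    """Normal-approximation sample size per arm for a two-sample comparison:
    n = 2 * sd^2 * (z_alpha + z_beta)^2 / delta^2
    (defaults: two-sided alpha = 0.05, power = 0.80)."""
    return math.ceil(2 * (sd * (z_alpha + z_beta) / delta) ** 2)

def reestimate(interim_values, delta, n_planned):
    """Blinded sample size re-estimation: update n per arm using the pooled
    SD observed at the interim look, never dropping below the planned size."""
    m = sum(interim_values) / len(interim_values)
    var = sum((v - m) ** 2 for v in interim_values) / (len(interim_values) - 1)
    return max(n_planned, required_n_per_arm(math.sqrt(var), delta))
```

For example, planning with SD 1.0 to detect a difference of 0.5 gives 63 patients per arm; if the interim data suggest a larger SD, the target is raised accordingly. Because the re-estimation uses pooled (blinded) data, it preserves the type I error in a way that unblinded adaptations need special methods to achieve.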
Topics: Clinical Trials as Topic; Epidemiologic Studies; Health Services Research; Humans; Research Design; Sample Size
PubMed: 29121202
DOI: 10.1093/cid/cix907 -
ELife Jan 2021
The purpose of preclinical research is to inform the development of novel diagnostics or therapeutics, and the results of experiments on animal models of disease often inform the decision to conduct studies in humans. However, a substantial number of clinical trials fail, even when preclinical studies have apparently demonstrated the efficacy of a given intervention. A number of large-scale replication studies are currently trying to identify the factors that influence the robustness of preclinical research. Here, we discuss replications in the context of preclinical research trajectories, and argue that increasing validity should be a priority when selecting experiments to replicate and when performing the replication. We conclude that systematically improving three domains of validity - internal, external and translational - will result in a more efficient allocation of resources, will be more ethical, and will ultimately increase the chances of successful translation.
Topics: Animals; Disease Models, Animal; Humans; Research Design
PubMed: 33432925
DOI: 10.7554/eLife.62101 -
BMC Bioinformatics Jan 2022
Cellular heterogeneity underlies cancer evolution and metastasis. Advances in single-cell technologies such as single-cell RNA sequencing and mass cytometry have enabled interrogation of cell type-specific expression profiles and abundance across heterogeneous cancer samples obtained from clinical trials and preclinical studies. However, challenges remain in determining sample sizes needed for ascertaining changes in cell type abundances in a controlled study. To address this statistical challenge, we have developed a new approach, named Sensei, to determine the number of samples and the number of cells that are required to ascertain such changes between two groups of samples in single-cell studies. Sensei expands the t-test and models the cell abundances using a beta-binomial distribution. We evaluate the mathematical accuracy of Sensei and provide practical guidelines on over 20 cell types in over 30 cancer types based on knowledge acquired from The Cancer Genome Atlas (TCGA) and prior single-cell studies. We provide a web application to enable user-friendly study design via https://kchen-lab.github.io/sensei/table_beta.html.
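The core calculation can be sketched with a beta-binomial simulation. This is not Sensei's method: the parameterization (a mean fraction plus an overdispersion parameter rho), the plain two-sample test on fractions, and all numbers below are illustrative assumptions.

```python
import math
import random
import statistics

def betabinom_fraction(rng, n_cells, mean, rho):
    """One sample's observed cell-type fraction: draw a Beta-distributed
    sample-level fraction (mean `mean`, overdispersion `rho` in (0, 1)),
    then count cells of that type among n_cells sequenced cells."""
    s = (1 - rho) / rho                      # Beta concentration parameter
    p = rng.betavariate(mean * s, (1 - mean) * s)
    return sum(rng.random() < p for _ in range(n_cells)) / n_cells

def power(n_samples, n_cells, mean1, mean2, rho, reps=300, seed=7):
    """Monte Carlo power of a two-sample test on per-sample fractions
    (normal critical value 1.96 -- a Welch t would be used in practice)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        g1 = [betabinom_fraction(rng, n_cells, mean1, rho)
              for _ in range(n_samples)]
        g2 = [betabinom_fraction(rng, n_cells, mean2, rho)
              for _ in range(n_samples)]
        se = math.sqrt(statistics.variance(g1) / n_samples +
                       statistics.variance(g2) / n_samples)
        hits += abs(statistics.fmean(g1) - statistics.fmean(g2)) / se > 1.96
    return hits / reps
```

Scanning `n_samples` (and `n_cells`) upward until `power(...)` clears a target such as 0.8 mirrors the sample size question the paper addresses; the beta layer captures the biological sample-to-sample variability that a plain binomial model would miss.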
Topics: Binomial Distribution; Humans; Neoplasms; Research Design; Sample Size; Software
PubMed: 34983369
DOI: 10.1186/s12859-021-04526-5 -
Statistical Methods in Medical Research Jun 2020
BACKGROUND
Among various design aspects, the choice of randomization procedure has to be agreed on when planning a clinical trial stratified by center. The aim of this paper is to present a methodological approach for evaluating whether a randomization procedure mitigates the impact of bias on the test decision in a clinical trial stratified by center.
METHODS
We use the weighted test to analyze the data from a clinical trial stratified by center with a two-arm parallel group design and an intended 1:1 allocation ratio, aiming to prove a superiority hypothesis with a continuous normal endpoint, without interim analysis and with no adaptation in the randomization process. The derivation is based on the weighted test under misclassification, i.e. ignoring bias. An additive bias model combining selection bias and time-trend bias is linked to different stratified randomization procedures.
RESULTS
Various aspects of formulating stratified versions of randomization procedures are discussed. A formula for sample size calculation of the weighted test is derived and used to specify the tolerated imbalance allowed by some randomization procedures. The distribution of the weighted test under misclassification is deduced, taking the sequence of patient allocation to treatment, i.e. the randomization sequence, into account. An additive bias model combining selection bias and time-trend bias at strata level, linked to the applied randomization sequence, is proposed. With these components, the potential impact of bias on the type I error probability, depending on the selected randomization sequence and thus the randomization procedure, is formally derived and exemplarily calculated within a numerical evaluation study.
CONCLUSION
The proposed biasing policy and test distribution are necessary to conduct an evaluation of the comparative performance of (stratified) randomization procedures in multi-center clinical trials with a two-arm parallel group design. This enables the choice of a best-practice procedure. The evaluation stimulates discussion about the level of evidence resulting from these kinds of clinical trials.
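The kind of numerical evaluation described here can be sketched by simulation. The sketch below is a single-stratum simplification with invented defaults: permuted-block randomization, a convergence-strategy selection bias of size nu, a linear time trend of total size theta, and a plain z-type two-sample test in place of the paper's weighted test.

```python
import math
import random
import statistics

def permuted_blocks(rng, n, block=4):
    """1:1 allocation sequence from permuted blocks."""
    seq = []
    while len(seq) < n:
        b = ["E"] * (block // 2) + ["C"] * (block // 2)
        rng.shuffle(b)
        seq += b
    return seq[:n]

def type1_error(n=40, nu=0.4, theta=0.4, reps=800, seed=3):
    """Rejection rate under H0 when responses carry an additive bias:
    selection bias from a convergence guessing strategy plus a time trend."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        seq = permuted_blocks(rng, n)
        xe, xc = [], []
        ne = nc = 0
        for i, arm in enumerate(seq):
            # The investigator guesses the arm lagging in the running count
            # (Blackwell-Hodges style convergence strategy) and enrolls a
            # favourable patient when expecting "E"; no guess on a tie.
            guess = "E" if ne < nc else ("C" if nc < ne else None)
            bias = nu if guess == "E" else (-nu if guess == "C" else 0.0)
            y = bias + theta * i / (n - 1) + rng.gauss(0, 1)
            (xe if arm == "E" else xc).append(y)
            ne += arm == "E"
            nc += arm == "C"
        se = math.sqrt(statistics.variance(xe) / len(xe) +
                       statistics.variance(xc) / len(xc))
        rejections += abs(statistics.fmean(xe) - statistics.fmean(xc)) / se > 1.96
    return rejections / reps
```

Comparing this rate against an unbiased run (`nu=0, theta=0`) shows how strongly a given randomization procedure lets the biasing policy inflate the nominal 5% level; repeating the comparison across procedures is the evaluation the paper formalizes analytically.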
Topics: Bias; Humans; Probability; Random Allocation; Research Design
PubMed: 31074333
DOI: 10.1177/0962280219846146 -
Globalization and Health Feb 2017
Priority gaps and promising areas in maternal health research in low- and middle-income countries: summary findings of a mapping of 2292 publications between 2000 and 2012.
This commentary summarizes the findings of a series of papers on a study that mapped the global research agenda for maternal health. The mapping reviewed published interventional research across low- and middle-income countries (LMICs) from 2000 to 2012, specifically focusing on investigating the topics covered by this research, the methodologies applied, the funding landscape and trends in authorship attribution. The overarching aim underpinning the mapping activities was to evaluate whether research and funding align with causes of maternal mortality, and thereby highlight gaps in research priorities and governance. Fifteen reviewers from 8 countries screened 35,078 titles and abstracts, and extracted data from 2292 full-text articles. Over the period reviewed, the volume of publications rose several-fold, especially from 2004 to 2007. The methodologies broadened, increasingly encompassing qualitative research and systematic reviews. Malaria and HIV research dominated over other topics, while sexually-transmitted infection research progressively diminished. Health systems and health promotion research increased rapidly, but were less frequently evaluated in trials or published in high-impact journals. Relative to disease burden, hypertension had double the publications of haemorrhage. Many Latin American countries, China and Russia had relatively few papers per billion US dollars of Gross Domestic Product. Total LMIC lead authorships rose substantially, but only a quarter of countries had a local first author lead on >75% of their research, with levels lowest in sub-Saharan Africa. The median Impact Factor of papers led by high-income countries was 3.1, versus 1.8 for LMIC-led papers.
The NIH, USAID and Gates Foundation constituted 40% of funder acknowledgements, and addressed similar topics and countries. The commentary notes that increases in outputs and broadening of methodologies suggest research capacity has expanded considerably, allowing for more nuanced, systems-based and context-specific studies. However, funders seemingly duplicate efforts, with topics and countries receiving either excessive or little attention. Better coordinated funding might reduce duplication and allow researchers to develop highly specialised expertise. Repeated scrutiny of research agendas and funding may prompt shifts in priorities. Building leadership capacity in LMICs and reconsidering authorship guidelines are needed.
Topics: Developing Countries; Humans; Maternal Health; Publications; Research; Research Design
PubMed: 28153038
DOI: 10.1186/s12992-016-0227-z -
BMJ (Clinical Research Ed.) Feb 2022
Examines the problems with designing and implementing trials of acupuncture.
Topics: Acupuncture Therapy; Humans; Randomized Controlled Trials as Topic; Research Design
PubMed: 35217507
DOI: 10.1136/bmj-2021-064345 -
Journal of Medical Internet Research Aug 2022
Review
BACKGROUND
The RAND/UCLA Appropriateness Method (RAM), a variant of the Delphi Method, was developed to synthesize existing evidence and elicit the clinical judgement of medical experts on the appropriate treatment of specific clinical presentations. Technological advances now allow researchers to conduct expert panels on the internet, offering a cost-effective and convenient alternative to the traditional RAM. For example, the Department of Veterans Affairs recently used a web-based RAM to validate clinical recommendations for de-intensifying routine primary care services. A substantial literature describes and tests various aspects of the traditional RAM in health research, yet comparatively little is known about how researchers implement web-based expert panels.
OBJECTIVE
The objectives of this study are twofold: (1) to understand how the web-based RAM process is currently used and reported in health research and (2) to provide preliminary reporting guidance for researchers to improve the transparency and reproducibility of reporting practices.
METHODS
The PubMed database was searched to identify studies published between 2009 and 2019 that used a web-based RAM to measure the appropriateness of medical care. Methodological data from each article were abstracted. The following categories were assessed: composition and characteristics of the web-based expert panels, characteristics of panel procedures, results, and panel satisfaction and engagement.
RESULTS
Of the 12 studies meeting the eligibility criteria and reviewed, only 42% (5/12) implemented the full RAM process, with the remaining studies opting for a partial approach. Among those studies reporting, the median number of participants at first rating was 42. While 92% (11/12) of studies involved clinicians, 50% (6/12) involved multiple stakeholder types. Our review revealed that the studies failed to report on critical aspects of the RAM process. For example, no studies reported response rates with the denominator of previous rounds, 42% (5/12) did not provide panelists with feedback between rating periods, 50% (6/12) either did not have or did not report on the panel discussion period, and 25% (3/12) did not report on quality measures to assess aspects of the panel process (eg, satisfaction with the process).
CONCLUSIONS
Conducting web-based RAM panels will continue to be an appealing option for researchers seeking a safe, efficient, and democratic process of expert agreement. Our literature review uncovered inconsistent reporting frameworks and insufficient detail to evaluate study outcomes. We provide preliminary recommendations for reporting that are both timely and important for producing replicable, high-quality findings. The need for reporting standards is especially critical given that more people may prefer to participate in web-based rather than in-person panels due to the ongoing COVID-19 pandemic.
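For readers unfamiliar with RAM outputs, the classification step that the reviewed panels implement can be sketched as follows. The 1-9 scale and the median cut-points are standard RAM conventions; the specific disagreement rule (at least three panelists in each extreme third, a common choice for nine-member panels) is an assumption here, as studies vary in which rule they adopt.

```python
from statistics import median

def classify(ratings):
    """RAM-style appropriateness classification for one indication,
    given panelists' final-round ratings on the 1-9 scale."""
    low = sum(1 for r in ratings if r <= 3)     # rated toward inappropriate
    high = sum(1 for r in ratings if r >= 7)    # rated toward appropriate
    if low >= 3 and high >= 3:                  # disagreement rule (assumed)
        return "uncertain (disagreement)"
    med = median(ratings)
    if med >= 7:
        return "appropriate"
    if med <= 3:
        return "inappropriate"
    return "uncertain"
```

A reporting framework of the kind this review calls for would state exactly these choices: the scale, the cut-points, and the disagreement rule applied in each rating round.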
Topics: COVID-19; Delphi Technique; Expert Testimony; Humans; Internet; Pandemics; Patient Care; Reproducibility of Results; Research Design
PubMed: 36018626
DOI: 10.2196/33898