Journal of Thrombosis and Haemostasis... Aug 2017
Topics: Biomedical Research; Comprehension; Humans; Periodicals as Topic; Prospective Studies; Research Design; Retrospective Studies; Terminology as Topic
PubMed: 28762625
DOI: 10.1111/jth.13776 -
Korean Journal of Anesthesiology Aug 2019
Most parametric tests start from a basic assumption about the distribution of the populations. The conditions required to conduct the t-test include measurement on a ratio or interval scale, simple random sampling, normally distributed data, an appropriate sample size, and homogeneity of variance. The normality test is itself a hypothesis test, subject to Type I and Type II errors like any other, which means that sample size influences both its power and its reliability. No established sample size guarantees adequate power for the normality test. In the current article, the relationships between normality, power, and sample size are discussed. As the sample size decreased, the normality test could not guarantee sufficient power even at the same significance level. In the independent t-test, the change in power with sample size and with the sample size ratio between groups was examined. When the sample size of one group was fixed and that of the other increased, power increased to some extent, but this was less efficient than increasing the sample sizes of both groups equally. Sufficient sample size is required to ensure the power of the normality test, and for a fixed total sample size, the power of the t-test is maximized when the ratio between the two groups is 1:1.
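The abstract's central claim about allocation ratios can be checked with a short Monte Carlo sketch. All parameters below (effect size, sample sizes, number of simulations) are illustrative choices, not values from the paper: for a fixed total sample size, the empirical power of the independent t-test is highest when the two groups are equal in size.

```python
# Monte Carlo sketch (hypothetical parameters): for a fixed total sample
# size, empirical power of the independent t-test peaks at a 1:1 split.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def empirical_power(n1, n2, effect=0.8, sims=4000, alpha=0.05):
    """Fraction of simulations in which the t-test rejects H0."""
    hits = 0
    for _ in range(sims):
        a = rng.normal(0.0, 1.0, n1)
        b = rng.normal(effect, 1.0, n2)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / sims

# Same total N = 60, different allocation ratios
p_equal = empirical_power(30, 30)   # 1:1 split
p_skewed = empirical_power(10, 50)  # 1:5 split
print(f"1:1 power = {p_equal:.2f}, 1:5 power = {p_skewed:.2f}")
```

With these toy settings the 1:1 design clearly outperforms the 1:5 design, matching the abstract's conclusion.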
Topics: Data Interpretation, Statistical; Normal Distribution; Reproducibility of Results; Research Design; Sample Size
PubMed: 30929413
DOI: 10.4097/kja.d.18.00292 -
American Journal of Physiology. Heart... Mar 2022
The number of research studies investigating whether similar or different cardiovascular responses or adaptations exist between males and females is increasing. Traditionally, difference-based statistical methods, e.g., the t test, ANOVA, etc., have been implemented to compare cardiovascular function between males and females, with a P value of >0.05 used to denote similarity between sexes. However, an absence of evidence, i.e., a large P value, is not evidence of absence, i.e., of no sex differences. Equivalence testing determines whether two measures or groups provide statistically equivalent outcomes, in that they differ by less than an "ideally prespecified" smallest effect size of interest. Our perspective discusses the applicability and utility of integrating equivalence testing when conducting sex comparisons in cardiovascular research. An emphasis is placed on how cardiovascular researchers may conduct equivalence testing across multiple study designs, e.g., cross-sectional comparisons, repeated-measures interventions, etc. The strengths and weaknesses of this statistical tool are discussed. Equivalence analyses are relatively simple to conduct, may be used in conjunction with traditional hypothesis testing to interpret findings, and permit the determination of statistically equivalent responses between sexes. We recommend that cardiovascular researchers consider implementing equivalence testing to better our understanding of similar and different cardiovascular processes between sexes.
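The equivalence-testing idea the abstract describes is commonly implemented as TOST (two one-sided tests). The sketch below is a minimal, generic TOST for two independent groups; the simulated data, group sizes, and the ±5-unit equivalence bounds are all illustrative assumptions, whereas the paper stresses that the bounds must be prespecified as the smallest effect size of interest.

```python
# Minimal TOST (two one-sided tests) sketch for two independent groups.
# Data and equivalence bounds are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
males = rng.normal(100.0, 10.0, 200)    # simulated outcome, group 1
females = rng.normal(100.5, 10.0, 200)  # simulated outcome, group 2

low, high = -5.0, 5.0  # hypothetical equivalence bounds on the mean difference

n1, n2 = len(males), len(females)
diff = males.mean() - females.mean()
se = np.sqrt(males.var(ddof=1) / n1 + females.var(ddof=1) / n2)
df = n1 + n2 - 2

# Equivalence is declared only if BOTH one-sided tests reject:
# H0a: diff <= low   and   H0b: diff >= high
t_low = (diff - low) / se
t_high = (diff - high) / se
p_low = 1 - stats.t.cdf(t_low, df)
p_high = stats.t.cdf(t_high, df)
p_tost = max(p_low, p_high)
print(f"TOST p = {p_tost:.3f}; equivalent at alpha=0.05: {p_tost < 0.05}")
```

Note the asymmetry with difference testing: here a small p-value supports *similarity* within the bounds, which is why the abstract recommends pairing it with traditional hypothesis tests.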
Topics: Animals; Cardiovascular Physiological Phenomena; Humans; Research Design; Sex; Sex Characteristics
PubMed: 34995165
DOI: 10.1152/ajpheart.00687.2021 -
Journal of Cerebral Blood Flow and... Jun 2019
Whenever an experiment yields a statistically significant outcome, you should ask yourself: to what extent can I trust this result? This is especially important for pre-clinical drug studies because of the frequent failures of phase III clinical trials in neurological diseases, which have put the reliability of pre-clinical research into question. Two important factors, the pre-study likelihood of treatment benefit and statistical power, affect the reliability of the result in a quantifiable way. This can be used to assess the extent to which the result of a study can be trusted (discovery reliability) and to guide the design of pre-clinical research.
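The quantifiable relationship the abstract refers to can be sketched with Bayes' rule: the post-study probability that a significant result reflects a true effect depends on the pre-study probability of benefit and the power. The function name and the example numbers below are illustrative, not taken from the paper.

```python
# Sketch of "discovery reliability": P(true effect | significant result),
# computed from a pre-study prior, statistical power, and alpha.
def discovery_reliability(prior, power, alpha=0.05):
    """Posterior probability of a true effect given significance (Bayes' rule)."""
    true_pos = prior * power          # true effects that reach significance
    false_pos = (1 - prior) * alpha   # null effects that reach significance
    return true_pos / (true_pos + false_pos)

# A long-shot hypothesis tested with an underpowered design...
low = discovery_reliability(prior=0.10, power=0.30)
# ...versus a well-motivated hypothesis with an adequately powered design
high = discovery_reliability(prior=0.50, power=0.80)
print(f"{low:.2f} vs {high:.2f}")  # 0.40 vs 0.94
```

Even with "significance" in hand, the long-shot, underpowered study yields a result that is wrong more often than not relative to the well-powered one, which is the abstract's point about trusting pre-clinical findings.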
Topics: Animals; Drug Evaluation, Preclinical; Humans; Models, Statistical; Research Design; Sample Size
PubMed: 30866739
DOI: 10.1177/0271678X19837015 -
eLife Jan 2020
Arguments in support of open science tend to focus on confirmatory research practices. Here we argue that exploratory research should also be encouraged within the framework of open science. We lay out the benefits of 'open exploration' and propose two complementary ways to implement this with little infrastructural change.
Topics: Research Design; Research Personnel
PubMed: 31916934
DOI: 10.7554/eLife.52157 -
International Journal of Environmental... Jun 2019
Review
Since the early 1960s, long-term care (LTC) has attracted a broad range of attention from public health practitioners and researchers worldwide and produced a large volume of literature. We conducted a comprehensive scientometric review based on 14,019 LTC articles retrieved from the Web of Science Core Collection database from 1963 to 2018, to explore the status and trends of global LTC research. Using CiteSpace software, we conducted collaboration analysis, document co-citation analysis, and keyword co-occurrence analysis. The results showed a rapid increase in annual LTC publications, while the annual citation counts exhibited an inverted U-shaped relationship with years. The most productive LTC research institutions and authors are located primarily in North American and European countries. A simultaneous analysis of both references and keywords revealed that common LTC hot topics include dementia care, quality of care, prevalence and risk factors, mortality, and randomized controlled trial. In addition, LTC research trends have shifted from the demand side to the supply side, and from basic studies to practical applications. The new research frontiers are frailty in elderly people and dementia care. This study provides an in-depth understanding of the current state, popular themes, trends, and future directions of LTC research worldwide.
Topics: Aged; Aged, 80 and over; Biomedical Research; Europe; Female; Forecasting; Global Health; Humans; Long-Term Care; Male; Middle Aged; Research Design
PubMed: 31212782
DOI: 10.3390/ijerph16122077 -
Bioinformatics (Oxford, England) May 2021
MOTIVATION
There is growing interest in the biomedical research community in incorporating retrospective data, available in healthcare systems, to shed light on associations between different biomarkers. Understanding the association between various types of biomedical data, such as genetics, blood biomarkers, imaging, etc., can provide a holistic understanding of human diseases. To formally test a hypothesized association between two types of data in Electronic Health Records (EHRs), one requires a substantial sample size with both data modalities to achieve reasonable power. Current association test methods only allow using data from individuals who have both data modalities. Hence, researchers cannot take advantage of much larger EHR samples that include individuals with at least one of the data types, which limits the power of the association test.
RESULTS
We present a new method called the Semi-paired Association Test (SAT) that makes use of both paired and unpaired data. In contrast to classical approaches, incorporating unpaired data allows SAT to produce better control of false discovery and to improve the power of the association test. We study the properties of the new test theoretically and empirically, through a series of simulations and by applying our method to real studies in the context of Chronic Obstructive Pulmonary Disease. We are able to identify an association between the high-dimensional characterization of Computed Tomography chest images and several blood biomarkers, as well as the expression of dozens of genes involved in the immune system.
AVAILABILITY AND IMPLEMENTATION
Code is available on https://github.com/batmanlab/Semi-paired-Association-Test.
SUPPLEMENTARY INFORMATION
Supplementary data are available at Bioinformatics online.
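The limitation described in the MOTIVATION section can be illustrated with toy data: a classical association test can only use the subset of individuals with both modalities, discarding everything else. The sketch below shows that power gap; it does not implement SAT itself, whose actual machinery is in the authors' linked repository, and all data and proportions here are invented.

```python
# Sketch of the limitation SAT addresses: a classical correlation test
# uses only individuals with BOTH modalities, discarding unpaired records.
# (Toy data; the SAT method itself lives in the authors' repository.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)            # modality 1 (e.g., an imaging feature)
y = 0.2 * x + rng.normal(size=n)  # modality 2 (e.g., a blood biomarker)

# Suppose only ~20% of individuals have both modalities recorded
paired = rng.random(n) < 0.2
r_paired, p_paired = stats.pearsonr(x[paired], y[paired])

# With the (hypothetically complete) full sample, the same association
# is tested with far more data -- the power SAT tries to recover
r_full, p_full = stats.pearsonr(x, y)
print(f"paired-only p = {p_paired:.3g}, full-sample p = {p_full:.3g}")
```

In a real EHR setting the full pairing never materializes, which is why SAT's use of the unpaired records matters.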
Topics: Electronic Health Records; Humans; Research Design; Retrospective Studies; Sample Size
PubMed: 33070196
DOI: 10.1093/bioinformatics/btaa886 -
Developmental Cognitive Neuroscience Oct 2018
Review
There has been a large spike in longitudinal fMRI studies in recent years, so it is essential that researchers carefully assess the limitations and challenges posed by longitudinal designs. In this article, we provide an overview of important considerations for longitudinal fMRI research in developmental samples, including task design, sampling strategies, and group-level analyses. We first discuss considerations for task designs, weighing the pros and cons of many commonly used tasks and outlining how they may be affected by repeated exposure. Second, we review the types of group-level analyses that can be conducted on longitudinal fMRI data, analyses which must account for repeated measures. Finally, we review and critique recent longitudinal studies that have emerged in the past few years.
Topics: Humans; Longitudinal Studies; Magnetic Resonance Imaging; Research Design
PubMed: 29456104
DOI: 10.1016/j.dcn.2018.02.004 -
Circulation Oct 2019
Review
Reports highlighting the problems with the standard practice of using bar graphs to show continuous data have prompted many journals to adopt new visualization policies. These policies encourage authors to avoid bar graphs and use graphics that show the data distribution; however, they provide little guidance on how to effectively display data. We conducted a systematic review of studies published in top peripheral vascular disease journals to determine what types of figures are used and to assess the prevalence of suboptimal data visualization practices. Among papers with data figures, 47.7% used bar graphs to present continuous data. This primer provides a detailed overview of strategies for addressing this issue by (1) outlining strategies for selecting the correct type of figure depending on the study design, sample size, and type of variable; (2) examining techniques for making effective dot plots, box plots, and violin plots; and (3) illustrating how to avoid sending mixed messages by aligning the figure structure with the study design and statistical analysis. We also present solutions to other common problems identified in the systematic review. Resources include a list of free tools and templates that authors can use to create more informative figures and an online simulator that illustrates why summary statistics are meaningful only when there are enough data to summarize. Last, we consider steps that investigators can take to improve figures in the scientific literature.
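A minimal sketch of the recommendation above: plot the raw data points with a summary line rather than a bar. The group labels, sample sizes, and values below are invented for illustration, and this is only one of several figure types (dot, box, violin) the primer discusses.

```python
# Sketch: a jittered dot plot with group medians instead of a bar graph,
# so the reader sees the full distribution. Data are made up.
import matplotlib
matplotlib.use("Agg")  # headless backend for scripted use
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(3)
groups = {
    "Control": rng.normal(5.0, 1.0, 12),
    "Treated": rng.normal(6.5, 1.5, 12),
}

fig, ax = plt.subplots()
for i, (label, values) in enumerate(groups.items()):
    jitter = rng.uniform(-0.08, 0.08, len(values))  # avoid overplotting
    ax.plot(np.full(len(values), float(i)) + jitter, values, "o", alpha=0.6)
    ax.hlines(np.median(values), i - 0.2, i + 0.2)  # median summary bar
ax.set_xticks(range(len(groups)))
ax.set_xticklabels(list(groups))
ax.set_ylabel("Outcome (arbitrary units)")
fig.savefig("dotplot.png")
```

With n = 12 per group, every observation remains visible, which a bar of the mean would hide.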
Topics: Biomedical Research; Data Interpretation, Statistical; Data Visualization; Humans; Research Design; Sample Size
PubMed: 31657957
DOI: 10.1161/CIRCULATIONAHA.118.037777 -
Statistics in Medicine Apr 2020
Two methods for designing adaptive multiarm multistage (MAMS) clinical trials, originating from conceptually different group sequential frameworks, are presented, and their operating characteristics are compared. In both methods, pairwise comparisons are made, stage by stage, between each treatment arm and a common control arm with the goal of identifying active treatments and dropping inactive ones. At any stage one may alter the future course of the trial through adaptive changes to the prespecified decision rules for treatment selection and sample size reestimation, and notwithstanding such changes, both methods guarantee strong control of the family-wise error rate. The stage-wise MAMS approach was historically the first to be developed and remains the standard method for designing inferentially seamless phase 2-3 clinical trials. In this approach, at each stage, the data from each treatment comparison are summarized by a single multiplicity-adjusted P-value. These stage-wise P-values are combined by a prespecified combination function, and the resultant test statistic is monitored with respect to the classical two-arm group sequential efficacy boundaries. The cumulative MAMS approach is a more recent development in which a separate test statistic is constructed for each treatment comparison from the cumulative data at each stage. These statistics are then monitored with respect to multiplicity-adjusted group sequential efficacy boundaries. We compared the powers of the two methods for designs with two and three active treatment arms, under commonly utilized decision rules for treatment selection, sample size reestimation, and early stopping. In our investigations, which covered a reasonably exhaustive exploration of the parameter space, the cumulative MAMS designs were more powerful than the stage-wise MAMS designs, except in the homogeneous case of equal treatment effects, where a small power advantage was discernible for the stage-wise designs.
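The multiplicity problem that both MAMS frameworks must handle can be illustrated generically. The sketch below is NOT the paper's stage-wise or cumulative machinery; it is a single-stage toy showing why comparing two active arms against a shared control without adjustment inflates the family-wise error rate, and how a simple Bonferroni adjustment (a hypothetical stand-in for the paper's group sequential boundaries) restores control under the global null.

```python
# Generic multiplicity sketch (not the paper's MAMS methods): two active
# arms vs. a shared control, all truly null. Unadjusted testing inflates
# the family-wise error rate; a Bonferroni adjustment controls it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
alpha, sims, n = 0.05, 4000, 50
fwer_raw = fwer_bonf = 0
for _ in range(sims):
    control = rng.normal(size=n)
    pvals = [stats.ttest_ind(rng.normal(size=n), control).pvalue
             for _ in range(2)]  # two pairwise comparisons
    fwer_raw += min(pvals) < alpha        # any unadjusted rejection
    fwer_bonf += min(pvals) < alpha / 2   # Bonferroni for 2 comparisons
print(f"unadjusted FWER ~ {fwer_raw/sims:.3f}, Bonferroni ~ {fwer_bonf/sims:.3f}")
```

The actual MAMS designs replace this crude adjustment with stage-wise combination tests or cumulative group sequential boundaries, which is precisely where the two frameworks differ.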
Topics: Research Design; Sample Size
PubMed: 32048313
DOI: 10.1002/sim.8464