Journal of Educational Evaluation For... 2021 (Review)
Review
Appropriate sample size calculation and power analysis have become major issues in research and publication processes. However, calculating sample size and power is complex and requires broad statistical knowledge, personnel with programming skills are in short supply, and commercial programs are often too expensive to use in practice. This review article aimed to explain the basic concepts of sample size calculation and power analysis; the process of sample estimation; and how to calculate sample size using the G*Power software (latest ver. 3.1.9.7; Heinrich-Heine-Universität Düsseldorf, Düsseldorf, Germany) with 5 statistical examples. The null and alternative hypotheses, effect size, power, alpha, type I error, and type II error should be described when calculating the sample size or power. G*Power is recommended for sample size and power calculations for various statistical methods (F, t, χ2, z, and exact tests) because it is easy to use and free. The process of sample estimation consists of establishing research goals and hypotheses, choosing appropriate statistical tests, choosing one of 5 possible power analysis methods, inputting the required variables for analysis, and selecting the "calculate" button. This software helps researchers estimate sample size and conduct power analysis.
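The a priori analysis described above can be approximated by hand. The sketch below (function name and defaults are illustrative, not part of G*Power) uses the normal approximation for a two-sided, two-sample t-test; G*Power's exact noncentral-t routine yields slightly larger values (the commonly cited exact answer for d = 0.5, α = 0.05, power = 0.80 is 64 per group, versus 63 from this approximation):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size for a two-sided, two-sample t-test,
    using the normal approximation: n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2."""
    z = NormalDist().inv_cdf  # standard normal quantile function
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2)

# Medium effect (d = 0.5), alpha = 0.05, power = 0.80 -> 63 per group (approx.)
# Larger effects need fewer subjects: d = 0.8 -> 25 per group (approx.)
```

The same three inputs (effect size, alpha, power) are what G*Power asks for in its "a priori" mode.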
Topics: Humans; Research Design; Sample Size; Software
PubMed: 34325496
DOI: 10.3352/jeehp.2021.18.17
American Journal of Ophthalmology Sep 2005 (Review)
Review
PURPOSE
To begin a process of standardizing the methods for reporting clinical data in the field of uveitis.
DESIGN
Consensus workshop.
METHODS
Members of an international working group were surveyed about diagnostic terminology, inflammation grading schema, and outcome measures, and the results used to develop a series of proposals to better standardize the use of these entities. Small groups employed nominal group techniques to achieve consensus on several of these issues.
RESULTS
The group affirmed that an anatomic classification of uveitis should be used as a framework for subsequent work on diagnostic criteria for specific uveitic syndromes, and that the classification of uveitis entities should be based on the location of the inflammation and not on the presence of structural complications. Issues regarding the use of the terms "intermediate uveitis," "pars planitis," "panuveitis," and descriptors of the onset and course of the uveitis were addressed. The following were adopted: standardized grading schema for anterior chamber cells, anterior chamber flare, and vitreous haze; standardized methods of recording structural complications of uveitis; standardized definitions of outcomes, including "inactive" inflammation, "improvement" and "worsening" of the inflammation, and "corticosteroid sparing"; and standardized guidelines for reporting visual acuity outcomes.
CONCLUSIONS
A process of standardizing the approach to reporting clinical data in uveitis research has begun, and several terms have been standardized.
Topics: Humans; Ophthalmology; Research Design; Terminology as Topic; United States; Uveitis
PubMed: 16196117
DOI: 10.1016/j.ajo.2005.03.057
Annals of Internal Medicine Jun 2010
The CONSORT (Consolidated Standards of Reporting Trials) statement is used worldwide to improve the reporting of randomized, controlled trials. Schulz and colleagues describe the latest version, CONSORT 2010, which updates the reporting guideline based on new methodological evidence and accumulating experience.
Topics: Publishing; Randomized Controlled Trials as Topic; Research Design
PubMed: 20335313
DOI: 10.7326/0003-4819-152-11-201006010-00232
BMC Medicine Mar 2010
The CONSORT statement is used worldwide to improve the reporting of randomised controlled trials. Kenneth Schulz and colleagues describe the latest version, CONSORT 2010, which updates the reporting guideline based on new methodological evidence and accumulating experience. To encourage dissemination of the CONSORT 2010 Statement, this article is freely accessible on bmj.com and will also be published in the Lancet, Obstetrics and Gynecology, PLoS Medicine, Annals of Internal Medicine, Open Medicine, Journal of Clinical Epidemiology, BMC Medicine, and Trials.
Topics: Publishing; Randomized Controlled Trials as Topic; Research Design
PubMed: 20334633
DOI: 10.1186/1741-7015-8-18
BMJ (Clinical Research Ed.) Mar 2010
Topics: Consensus; Practice Guidelines as Topic; Randomized Controlled Trials as Topic; Research Design
PubMed: 20332511
DOI: 10.1136/bmj.c869
Nephron. Clinical Practice 2011 (Review)
Review
The sample size is the number of patients or other experimental units that need to be included in a study to answer the research question. Pre-study calculation of the sample size is important; if a sample size is too small, one will not be able to detect an effect, while a sample that is too large may be a waste of time and money. Methods to calculate the sample size are explained in statistical textbooks, but because there are many different formulas available, it can be difficult for investigators to decide which method to use. Moreover, these calculations are prone to errors, because small changes in the selected parameters can lead to large differences in the sample size. This paper explains the basic principles of sample size calculations and demonstrates how to perform such a calculation for a simple study design.
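As a minimal sketch of the kind of simple-design calculation the paper describes (comparing two means with a known common SD; the function name and normal approximation are our illustrative choices, not the paper's), the formula also shows how sensitive the result is to small parameter changes:

```python
from math import ceil
from statistics import NormalDist

def n_two_means(delta: float, sigma: float,
                alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group n to detect a true mean difference `delta` given a common
    standard deviation `sigma`, two-sided z-approximation."""
    z = NormalDist().inv_cdf  # standard normal quantile function
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) * sigma / delta) ** 2)

# Shrinking the detectable difference from 5 to 4 (sigma = 10) raises the
# per-group requirement from 63 to 99 -- a roughly 57% increase in sample
# size for a 20% change in a single input parameter.
```

This sensitivity is why small changes in the selected parameters can lead to large differences in the calculated sample size.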
Topics: Clinical Trials as Topic; Humans; Mathematical Concepts; Research Design; Sample Size
PubMed: 21293154
DOI: 10.1159/000322830
International Journal of Surgery... 2010
Topics: Evidence-Based Practice; Humans; Meta-Analysis as Topic; Periodicals as Topic; Publication Bias; Publishing; Quality Control; Research Design; Review Literature as Topic; Terminology as Topic
PubMed: 20171303
DOI: 10.1016/j.ijsu.2010.02.007
Military Medical Research Feb 2020 (Review)
Review
Methodological quality (risk of bias) assessment is an important step before a study is initiated or its findings are used. Accurately judging the study type is therefore the first priority, and choosing the proper tool is also important. In this review, we introduce methodological quality assessment tools for randomized controlled trials (individual and cluster), animal studies, non-randomized interventional studies (follow-up studies, controlled before-and-after studies, before-after/pre-post studies, uncontrolled longitudinal studies, and interrupted time series studies), cohort studies, case-control studies, cross-sectional studies (analytical and descriptive), observational case series and case reports, comparative effectiveness research, diagnostic studies, health economic evaluations, prediction studies (predictor finding studies, prediction model impact studies, and prognostic prediction model studies), qualitative studies, outcome measurement instruments (patient-reported outcome measure development, content validity, structural validity, internal consistency, cross-cultural validity/measurement invariance, reliability, measurement error, criterion validity, hypothesis testing for construct validity, and responsiveness), systematic reviews and meta-analyses, and clinical practice guidelines. Readers of this review can distinguish the types of medical studies and choose the appropriate tools. In short, comprehensively mastering the relevant knowledge and practicing extensively are basic requirements for correctly assessing methodological quality.
Topics: Animals; Bias; Humans; Psychometrics; Reproducibility of Results; Research; Research Design
PubMed: 32111253
DOI: 10.1186/s40779-020-00238-8
Korean Journal of Anesthesiology Apr 2020 (Review)
Review
A properly determined sample size is one of the important factors in scientific and persuasive research. A sample size that can guarantee both clinically significant differences and adequate power for the phenomena of interest to the investigator, without imposing excessive financial or medical burdens, will always be the object of concern. In this paper, we review the essential factors for sample size calculation. We describe the primary endpoints that are the main concern of the study and the basis for calculating sample size, the statistics used to analyze the primary endpoints, type I error and power, and the effect size and its rationale. We also describe a method of calculating an adjusted sample size that accounts for the dropout inevitably occurring during research. Finally, examples of sample size calculations that are appropriately and incorrectly described in published papers are presented with explanations.
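The dropout adjustment mentioned above divides the calculated size by the expected retention fraction, so that the planned number of evaluable subjects survives the anticipated attrition. A minimal sketch (function and parameter names are ours, for illustration):

```python
from math import ceil

def adjust_for_dropout(n_required: int, dropout_rate: float) -> int:
    """Inflate a calculated sample size so that, after losing the expected
    `dropout_rate` fraction of subjects, at least `n_required` evaluable
    subjects are expected to remain: n_adj = n / (1 - dropout_rate)."""
    if not 0.0 <= dropout_rate < 1.0:
        raise ValueError("dropout_rate must be in [0, 1)")
    return ceil(n_required / (1.0 - dropout_rate))

# e.g. 64 evaluable subjects per group with 20% anticipated dropout
# means enrolling 80 per group.
```

Note that the division (not a simple percentage markup) is what guarantees the target: enrolling 64 × 1.2 = 77 would leave only about 62 subjects after 20% dropout.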
Topics: Biometry; Humans; Patient Dropouts; Research Design; Sample Size
PubMed: 32229812
DOI: 10.4097/kja.19497
PLoS Biology Mar 2020
Credibility of scientific claims is established with evidence for their replicability using new data. According to common understanding, replication is repeating a study's procedure and observing whether the prior finding recurs. This definition is intuitive, easy to apply, and incorrect. We propose that replication is a study for which any outcome would be considered diagnostic evidence about a claim from prior research. This definition reduces emphasis on operational characteristics of the study and increases emphasis on the interpretation of possible outcomes. The purpose of replication is to advance theory by confronting existing understanding with new evidence. Ironically, the value of replication may be strongest when existing understanding is weakest. Successful replication provides evidence of generalizability across the conditions that inevitably differ from the original study; unsuccessful replication indicates that the reliability of the finding may be more constrained than recognized previously. Defining replication as a confrontation of current theoretical expectations clarifies its important, exciting, and generative role in scientific progress.
Topics: Data Interpretation, Statistical; Humans; Reproducibility of Results; Research Design; Statistics as Topic
PubMed: 32218571
DOI: 10.1371/journal.pbio.3000691