Nephron. Clinical Practice, 2010 (Review)
The internal validity of an epidemiological study can be affected by random error and systematic error. Random error reflects a problem of precision in assessing a given exposure-disease relationship and can be reduced by increasing the sample size. Systematic error, or bias, reflects a problem of validity: it arises from errors in how the investigator recruits individuals for the study, from factors affecting study participation (selection bias), or from systematic distortions in collecting information about exposures and outcomes (information bias). Another important factor that may affect the internal validity of a clinical study is confounding. In this article, we focus on two categories of bias: selection bias and information bias. Confounding will be described in a future article of this series.
Topics: Animals; Bias; Biomedical Research; Humans; Kidney Diseases; Reproducibility of Results; Selection Bias
PubMed: 20407272
DOI: 10.1159/000312871 -
Korean Journal of Anesthesiology, Jun 2019 (Review)
The randomized controlled trial is widely accepted as the best design for evaluating the efficacy of a new treatment because of the advantages of randomization (random allocation). Randomization eliminates accidental bias, including selection bias, and provides a basis for applying probability theory. Despite its importance, randomization is often not properly understood. This article introduces the different randomization methods with examples: simple randomization; block randomization; adaptive randomization, including minimization; and response-adaptive randomization. Ethical issues related to randomization are also discussed. The article is helpful for understanding the basic concepts of randomization and how to use R software.
Topics: Bias; Humans; Random Allocation; Randomized Controlled Trials as Topic; Research Design; Selection Bias
PubMed: 30929415
DOI: 10.4097/kja.19049 -
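The simple and block schemes introduced above can be sketched in a few lines. The article's own examples use R, so the Python below is an illustrative sketch rather than the authors' code; the function names, arm labels, and block size of 4 are assumptions.

```python
import random

def simple_randomization(n, arms=("A", "B"), seed=0):
    # Each participant is assigned independently with equal probability;
    # by chance, group sizes may drift apart.
    rng = random.Random(seed)
    return [rng.choice(arms) for _ in range(n)]

def block_randomization(n, arms=("A", "B"), block_size=4, seed=0):
    # Permuted blocks: each block contains every arm equally often,
    # so the running imbalance never exceeds block_size / 2.
    assert block_size % len(arms) == 0
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n]
```

With n a multiple of the block size, block randomization yields exactly equal arm counts, which is the balance property the abstract describes.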
Journal of Clinical Epidemiology, Nov 2016 (Review)
Many analyses of observational data are attempts to emulate a target trial. The emulation of the target trial may fail when researchers deviate from simple principles that guide the design and analysis of randomized experiments. We review a framework to describe and prevent biases, including immortal time bias, that result from a failure to align start of follow-up, specification of eligibility, and treatment assignment. We review some analytic approaches to avoid these problems in comparative effectiveness or safety research.
Topics: Bias; Comparative Effectiveness Research; Epidemiologic Research Design; Humans; Observational Studies as Topic; Selection Bias; Time Factors
PubMed: 27237061
DOI: 10.1016/j.jclinepi.2016.04.014 -
Acta Obstetricia Et Gynecologica..., Apr 2018 (Review)
Longitudinal cohort studies can provide important evidence about preventable causes of disease, but their success relies heavily on the commitment of participants, both at recruitment and during follow-up. Initial participation rates have decreased in recent decades, as has willingness to take part in subsequent follow-ups. It is important to examine how such selection affects the validity of the results. In this article, we describe the conceptual framework for selection bias due to nonparticipation and loss to follow-up in cohort studies, using both a traditional epidemiological approach and directed acyclic graphs. Methods to quantify selection bias are introduced, together with analytical strategies to adjust for the bias, including controlling for covariates associated with selection, inverse probability weighting, and bias analysis. We use several studies conducted in the Danish National Birth Cohort as examples of how to quantify selection bias and understand the underlying selection mechanisms. Although women who chose to participate in this cohort were typically of higher social status, healthier, and with less disease than all those eligible for study, differential selection was modest, and the influence of selection bias on several selected exposure-outcome associations was limited. These findings are reassuring and support enrolling a subset of motivated participants who will engage in long-term follow-up rather than prioritizing representativeness. Some of the presented methods are applicable even with limited data on nonparticipants and those lost to follow-up, and can also be applied to other study designs such as case-control studies and surveys.
Topics: Cohort Studies; Data Interpretation, Statistical; Gynecology; Humans; Obstetrics; Research Design; Selection Bias
PubMed: 29415329
DOI: 10.1111/aogs.13319 -
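Inverse probability weighting, one of the adjustment strategies mentioned above, can be sketched by weighting each participant by the inverse of their stratum-specific participation probability. This is a minimal illustration with made-up data; real analyses would typically model participation with logistic regression rather than raw stratum counts.

```python
from collections import defaultdict

def ipw_estimate(records):
    # records: list of (stratum, participated, outcome); outcome may be
    # None for nonparticipants, whose outcomes are unobserved.
    totals, participants = defaultdict(int), defaultdict(int)
    for stratum, participated, _ in records:
        totals[stratum] += 1
        if participated:
            participants[stratum] += 1
    # Estimated Pr(participation | stratum) from the counts.
    p = {s: participants[s] / totals[s] for s in totals}
    # Weight each observed outcome by 1 / Pr(participation | stratum),
    # so strata under-represented among participants count more.
    num = den = 0.0
    for stratum, participated, outcome in records:
        if participated:
            w = 1.0 / p[stratum]
            num += w * outcome
            den += w
    return num / den
```

If outcomes differ by stratum and participation is differential, the naive mean among participants is biased toward the over-represented stratum, while the weighted mean recovers the mean in the full eligible cohort.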
Emergency Medicine Journal : EMJ, Jan 2003 (Review)
Cohort, cross-sectional, and case-control studies are collectively referred to as observational studies. Often these studies are the only practicable method of studying various problems, for example, studies of aetiology, instances where a randomised controlled trial would be unethical, or where the condition to be studied is rare. Cohort studies are used to study incidence, causes, and prognosis. Because they measure events in chronological order, they can be used to distinguish between cause and effect. Cross-sectional studies are used to determine prevalence. They are relatively quick and easy but do not permit distinction between cause and effect. Case-control studies compare groups retrospectively. They seek to identify possible predictors of outcome and are useful for studying rare diseases or outcomes. They are often used to generate hypotheses that can then be studied via prospective cohort or other studies.
Topics: Case-Control Studies; Cohort Studies; Cross-Sectional Studies; Data Collection; Databases as Topic; Selection Bias
PubMed: 12533370
DOI: 10.1136/emj.20.1.54 -
Journal of Clinical Epidemiology, Feb 2022
Directed acyclic graphs (DAGs) are an intuitive yet rigorous tool to communicate about causal questions in clinical and epidemiologic research and to inform study design and statistical analysis. DAGs are constructed to depict prior knowledge about biological and behavioral systems related to specific causal research questions. DAG components portray who receives treatment or experiences exposures; mechanisms by which treatments and exposures operate; and other factors that influence the outcome of interest or which persons are included in an analysis. Once assembled, DAGs, via a few simple rules, guide the researcher in identifying whether the causal effect of interest can be identified without bias and, if so, what must be done in the study design or data analysis to achieve this. Specifically, DAGs can identify variables that, if controlled for in the design or analysis phase, are sufficient to eliminate confounding and some forms of selection bias. DAGs also help recognize variables that, if controlled for, bias the analysis (e.g., mediators, or factors influenced by both exposure and outcome). Finally, DAGs help researchers recognize insidious sources of bias introduced by selection of individuals into studies or by failure to observe all individuals until study outcomes are reached. DAGs, however, are not infallible, largely owing to limitations in prior knowledge about the system in question. In such instances, several alternative DAGs are plausible, and researchers should assess whether results differ meaningfully across analyses guided by different DAGs and be forthright about uncertainty. DAGs are powerful tools to guide the conduct of clinical research.
Topics: Bias; Causality; Confounding Factors, Epidemiologic; Data Interpretation, Statistical; Humans; Selection Bias
PubMed: 34371103
DOI: 10.1016/j.jclinepi.2021.08.001 -
Epidemiology (Cambridge, Mass.), Sep 2022
Selection bias remains a subject of controversy, and existing definitions of it are ambiguous. To improve communication and the conduct of epidemiologic research focused on estimating causal effects, we propose to unify the various existing definitions of selection bias in the literature by considering any bias away from the true causal effect in the referent population (the population before the selection process), due to selecting the sample from that population, as selection bias. Given this unified definition, selection bias can be further categorized into two broad types: type 1 selection bias, owing to restriction to one or more levels of a collider (or a descendant of a collider), and type 2 selection bias, owing to restriction to one or more levels of an effect measure modifier. To aid in explaining these two types, which can co-occur, we start by reviewing the concepts of the target population, the study sample, and the analytic sample. Then, we illustrate both types of selection bias using causal diagrams. In addition, we explore the differences between these two types of selection bias and describe methods to minimize selection bias. Finally, we use an example of "M-bias" to demonstrate the advantage of classifying selection bias into these two types.
Topics: Bias; Causality; Humans; Selection Bias
PubMed: 35700187
DOI: 10.1097/EDE.0000000000001516 -
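Type 1 selection bias, restriction to one level of a collider, can be demonstrated with a small simulation: two independent variables jointly cause a third, and restricting the sample on that third variable induces a spurious association between the first two. This sketch is not from the article; the variable names, sample size, and cutoff are illustrative.

```python
import random

def correlation(xs, ys):
    # Pearson correlation coefficient, computed by hand from the samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

rng = random.Random(42)
x = [rng.gauss(0, 1) for _ in range(20000)]
y = [rng.gauss(0, 1) for _ in range(20000)]          # independent of x
c = [xi + yi + rng.gauss(0, 1) for xi, yi in zip(x, y)]  # collider: caused by both

r_all = correlation(x, y)  # near zero: x and y are truly independent
# Restrict the analytic sample to one level of the collider (c > 1).
kept = [(xi, yi) for xi, yi, ci in zip(x, y, c) if ci > 1.0]
r_sel = correlation([k[0] for k in kept], [k[1] for k in kept])
# r_sel is clearly negative: selection on the collider manufactures
# an association between variables that share no causal connection.
```

This is the mechanism behind M-bias as well: conditioning on (or selecting by) a collider opens a noncausal path between its causes.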
International Journal of Environmental..., Jan 2011
When planning a randomized clinical trial, careful consideration must be given to how participants are selected for the various arms of a study. Selection and accidental bias may occur when participants are not assigned to study groups with equal probability. A simple random allocation scheme is a process by which each participant has an equal likelihood of being assigned to the treatment versus referent group. However, by chance an unequal number of individuals may be assigned to each arm of the study, decreasing the power to detect statistically significant differences between groups. Block randomization is a commonly used technique in clinical trial design to reduce bias and achieve balance in the allocation of participants to treatment arms, especially when the sample size is small. This method increases the probability that each arm will contain an equal number of individuals by sequencing participant assignments by block. Even so, the allocation process may be predictable, for example when the investigator is not blinded and the block size is fixed. This paper provides an overview of blocked randomization and illustrates how to avoid selection bias by using random block sizes.
Topics: Bias; Humans; Random Allocation; Randomized Controlled Trials as Topic; Sample Size; Selection Bias
PubMed: 21318011
DOI: 10.3390/ijerph8010015 -
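The paper's remedy of randomly varying block sizes can be sketched as follows. This is an illustrative sketch rather than the authors' procedure; the candidate sizes 2, 4, and 6 are assumptions (each must be a multiple of the number of arms).

```python
import random

def randomized_block_sizes(n, arms=("A", "B"), sizes=(2, 4, 6), seed=1):
    # Permuted-block randomization with a randomly chosen size per block.
    # Balance is preserved (each complete block is exactly balanced), but an
    # unblinded investigator can no longer deduce where a block ends and
    # therefore cannot predict the final assignments within it.
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n:
        size = rng.choice(sizes)
        block = list(arms) * (size // len(arms))
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n]
```

With a fixed block size of 4, anyone who has seen three assignments in a block knows the fourth; drawing the size at random from {2, 4, 6} removes that certainty while capping the arm imbalance at half the largest block size.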
The Journal of Investigative Dermatology, Nov 2016 (Review)
Systematic reviews are increasingly utilized in the medical literature to summarize available evidence on a research question. Like other studies, systematic reviews are at risk for bias from a number of sources. A systematic review should be based on a formal protocol developed and made publicly available before the conduct of the review; deviations from a protocol with selective presentation of data can result in reporting bias. Evidence selection bias occurs when a systematic review does not identify all available data on a topic. This can arise from publication bias, where data from statistically significant studies are more likely to be published than those that are not statistically significant. Systematic reviews are also susceptible to bias that arises in any of the included primary studies, each of which needs to be critically appraised. Finally, competing interests can lead to bias in favor of a particular intervention. Awareness of these sources of bias is important for authors and consumers of the scientific literature as they conduct and read systematic reviews and incorporate their findings into clinical practice and policy making.
Topics: Dermatology; Disease Management; Humans; Research Design; Selection Bias; Skin Diseases
PubMed: 27772550
DOI: 10.1016/j.jid.2016.08.021 -
CMAJ : Canadian Medical Association..., May 2017
Topics: Humans; Selection Bias
PubMed: 28483850
DOI: 10.1503/cmaj.732958