BMJ (Clinical Research Ed.) Aug 1997
Review
Topics: Case-Control Studies; Clinical Trials as Topic; Cohort Studies; Follow-Up Studies; Patient Selection; Publishing; Research Design; Sampling Studies; Selection Bias
PubMed: 9274555
DOI: 10.1136/bmj.315.7103.305
Fertility and Sterility Dec 2020
Topics: COVID-19; Humans; Pandemics; Parents; SARS-CoV-2; Selection Bias
PubMed: 33280724
DOI: 10.1016/j.fertnstert.2020.10.057
Journal of the American Geriatrics... Sep 2019
Review
OBJECTIVES
Selection bias is a well-known concern in research on older adults. We discuss two common forms of selection bias in aging research: (1) survivor bias and (2) bias due to loss to follow-up. Our objective was to review these two forms of selection bias in geriatrics research. In clinical aging research, selection bias is a particular concern because all participants must have survived to old age, and be healthy enough, to take part in a research study in geriatrics.
DESIGN
We demonstrate the key issues related to selection bias using three case studies focused on obesity, a common clinical risk factor in older adults. We also created a Selection Bias Toolkit that includes strategies to prevent selection bias when designing a research study in older adults and analytic techniques that can be used to examine, and correct for, the influence of selection bias in geriatrics research.
RESULTS
Survivor bias and bias due to loss to follow-up can distort study results in geriatric populations. Key steps to avoid selection bias at the study design stage include creating causal diagrams, minimizing barriers to participation, and measuring variables that predict loss to follow-up. The Selection Bias Toolkit details several analytic strategies available to geriatrics researchers to examine and correct for selection bias (eg, regression modeling and sensitivity analysis).
CONCLUSION
The toolkit is designed to provide a broad overview of methods available to examine and correct for selection bias. It is specifically intended for use in the context of aging research. J Am Geriatr Soc 67:1970-1976, 2019.
Topics: Aged; Aged, 80 and over; Female; Geriatrics; Humans; Lost to Follow-Up; Male; Patient Selection; Research Design; Selection Bias; Survivors
PubMed: 31211407
DOI: 10.1111/jgs.16022
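The attrition-weighting strategy described in this toolkit can be illustrated with a toy simulation (a sketch with an invented model, variable names, and parameters, not the authors' code): dropout depends on a measured frailty-like predictor that also affects the outcome, so a complete-case analysis is biased, while inverse-probability-of-attrition weights recover the true exposure effect.

```python
import math
import random

random.seed(2)
N = 300_000
EFFECT = 1.0  # true effect of the exposure on the outcome (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

rows = []
for _ in range(N):
    exposed = random.random() < 0.5  # binary exposure (e.g. obesity)
    frailty = random.gauss(0, 1)     # measured predictor of both dropout and outcome
    y = EFFECT * exposed + frailty + random.gauss(0, 1)
    p_stay = sigmoid(1.0 - exposed - frailty)  # frailer, exposed participants drop out more
    observed = random.random() < p_stay
    rows.append((exposed, y, p_stay, observed))

def diff_in_means(data, weighted):
    """(Weighted) difference in mean outcome, exposed minus unexposed, completers only."""
    sums = {True: [0.0, 0.0], False: [0.0, 0.0]}  # exposure -> [sum w*y, sum w]
    for exposed, y, p_stay, observed in data:
        if observed:
            w = 1.0 / p_stay if weighted else 1.0
            sums[exposed][0] += w * y
            sums[exposed][1] += w
    return sums[True][0] / sums[True][1] - sums[False][0] / sums[False][1]

naive = diff_in_means(rows, weighted=False)  # complete-case analysis: biased
ipw = diff_in_means(rows, weighted=True)     # inverse-probability-of-attrition weights
print(naive, ipw)
```

In this sketch the complete-case estimate is noticeably below the true effect of 1.0, while the weighted estimate is close to it; the weights only work because the predictor of dropout was measured, which is why the abstract stresses measuring such variables at the design stage.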
Pharmaceutical Statistics Nov 2021
When making decisions regarding the investment and design for a Phase 3 programme in the development of a new drug, the results from preceding Phase 2 trials are an important source of information. However, only projects in which the Phase 2 results show promising treatment effects will typically be considered for a Phase 3 investment decision. This implies that, for those projects where Phase 3 is pursued, the underlying Phase 2 estimates are subject to selection bias. We will in this article investigate the nature of this selection bias based on a selection of distributions for the treatment effect. We illustrate some properties of Bayesian estimates, providing shrinkage of the Phase 2 estimate to counteract the selection bias. We further give some empirical guidance regarding the choice of prior distribution and comment on the consequences for decision-making in investment and planning for Phase 3 programmes.
Topics: Bayes Theorem; Bias; Humans; Selection Bias
PubMed: 34002467
DOI: 10.1002/pst.2132
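The shrinkage mechanism can be sketched with a normal-normal model (assumed prior and error parameters, purely illustrative, not the article's empirical guidance): selecting only "promising" Phase 2 estimates makes the raw estimate biased upward among selected projects, while the posterior mean, which shrinks toward the prior mean of zero, remains unbiased on average even after selection.

```python
import random

random.seed(0)

TAU = 0.5        # prior SD of true treatment effects (assumed value)
SE = 0.3         # standard error of the Phase 2 estimate (assumed value)
THRESHOLD = 0.3  # Phase 3 pursued only if the Phase 2 estimate exceeds this

# posterior mean under a N(0, TAU^2) prior is shrink * estimate
shrink = TAU**2 / (TAU**2 + SE**2)

naive_bias = bayes_bias = 0.0
n_selected = 0
for _ in range(200_000):
    theta = random.gauss(0.0, TAU)  # true effect drawn from the prior
    est = random.gauss(theta, SE)   # Phase 2 estimate
    if est > THRESHOLD:             # selection: only promising projects continue
        naive_bias += est - theta
        bayes_bias += shrink * est - theta
        n_selected += 1

naive_avg = naive_bias / n_selected  # clearly positive: raw estimates overstate the effect
bayes_avg = bayes_bias / n_selected  # near zero: shrinkage counteracts the selection
print(naive_avg, bayes_avg)
```

The intuition is that selection is a function of the estimate alone, and the posterior mean is unbiased conditional on the estimate, so it stays unbiased within any estimate-based selection rule; the raw estimate does not have that property.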
Nephrology (Carlton, Vic.) Jun 2020
Review
Study quality depends on a number of factors, one of them being internal validity. Such validity can be affected by random and systematic error, the latter also known as bias. Both make it more difficult to assess a correct frequency or the true relationship between exposure and outcome. Where random error can be addressed by increasing the sample size, a systematic error in the design, the conduct or the reporting of a study is more problematic. In this article, we will focus on bias, discuss different types of selection bias (sampling bias, confounding by indication, incidence-prevalence bias, attrition bias, collider stratification bias and publication bias) and information bias (recall bias, interviewer bias, observer bias and lead-time bias), indicate the type of studies where they most frequently occur and provide suggestions for their prevention.
Topics: Biomedical Research; Humans; Interviews as Topic; Observer Variation; Research Design; Selection Bias; Self Report
PubMed: 32133725
DOI: 10.1111/nep.13706
Epidemiology (Cambridge, Mass.) May 2017
Instrumental variables (IV) are used to draw causal conclusions about the effect of exposure E on outcome Y in the presence of unmeasured confounders. IV assumptions have been well described: (1) IV affects E; (2) IV affects Y only through E; (3) IV shares no common cause with Y. Even when these assumptions are met, biased effect estimates can result if selection bias allows a noncausal path from E to Y. We demonstrate the presence of bias in IV analyses on a sample from a simulated dataset, where selection into the sample was a collider on a noncausal path from E to Y. By applying inverse probability of selection weights, we were able to eliminate the selection bias. IV approaches may protect against unmeasured confounding but are not immune from selection bias. Inverse probability of selection weights used with IV approaches can minimize bias.
Topics: Causality; Computer Simulation; Confounding Factors, Epidemiologic; Humans; Probability; Selection Bias
PubMed: 28169934
DOI: 10.1097/EDE.0000000000000639
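A minimal simulation conveys the idea (a simplified stand-in for the authors' setup, with invented parameters: here selection depends on the exposure and the unmeasured confounder, which opens a noncausal path once the sample is conditioned on selection). Because the data are simulated, the true selection probabilities are known and can be used directly as inverse probability of selection weights.

```python
import math
import random

random.seed(1)
N = 400_000
BETA = 1.0  # true causal effect of exposure E on outcome Y (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

Z, E, Y, P, S = [], [], [], [], []
for _ in range(N):
    z = random.gauss(0, 1)          # instrument: affects E only
    u = random.gauss(0, 1)          # unmeasured confounder of E and Y
    e = z + u + random.gauss(0, 1)  # exposure
    y = BETA * e + u + random.gauss(0, 1)
    p = sigmoid(e + u)              # selection depends on E and U: collider
    Z.append(z); E.append(e); Y.append(y); P.append(p)
    S.append(random.random() < p)

def wald(zs, es, ys, ws):
    """Weighted Wald/IV estimate: cov_w(Z, Y) / cov_w(Z, E)."""
    total = sum(ws)
    mz = sum(w * z for w, z in zip(ws, zs)) / total
    me = sum(w * e for w, e in zip(ws, es)) / total
    my = sum(w * y for w, y in zip(ws, ys)) / total
    czy = sum(w * (z - mz) * (y - my) for w, z, y in zip(ws, zs, ys))
    cze = sum(w * (z - mz) * (e - me) for w, z, e in zip(ws, zs, es))
    return czy / cze

idx = [i for i in range(N) if S[i]]  # selected sample only
zs = [Z[i] for i in idx]
es = [E[i] for i in idx]
ys = [Y[i] for i in idx]

naive = wald(zs, es, ys, [1.0] * len(idx))         # biased by collider selection
ipw = wald(zs, es, ys, [1.0 / P[i] for i in idx])  # selection weights remove the bias
print(naive, ipw)
```

In practice the selection probabilities would have to be modeled from variables that predict selection rather than read off the data-generating process, so this sketch shows the best case in which the selection model is correct.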
International Journal of Epidemiology Aug 2023
BACKGROUND
Adjusting for multiple biases usually involves adjusting for one bias at a time, with careful attention to the order in which these biases are adjusted. A novel, alternative approach to multiple-bias adjustment involves the simultaneous adjustment of all biases via imputation and/or regression weighting. The imputed value or weight corresponds to the probability of the missing data and serves to 'reconstruct' the unbiased data that would be observed based on the provided assumptions of the degree of bias.
METHODS
We motivate and describe the steps necessary to implement this method. We also demonstrate the validity of this method through a simulation study with an exposure-outcome relationship that is biased by uncontrolled confounding, exposure misclassification, and selection bias.
RESULTS
The study revealed that a non-biased effect estimate can be obtained when correct bias parameters are applied. It also found that incorrect specification of every bias parameter by +/-25% still produced an effect estimate with less bias than the observed, biased effect.
CONCLUSIONS
Simultaneous multi-bias analysis is a useful way of investigating and understanding how multiple sources of bias may affect naive effect estimates. This new method can be used to enhance the validity and transparency of real-world evidence obtained from observational, longitudinal studies.
Topics: Humans; Selection Bias; Bias; Computer Simulation; Probability; Longitudinal Studies
PubMed: 36718093
DOI: 10.1093/ije/dyad001
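The weighting step can be sketched for the selection-bias component alone (a deliberately reduced example with invented bias parameters; the article's method handles several biases simultaneously): given assumed cell-specific selection probabilities, weighting each observed count by the inverse of its probability "reconstructs" the unbiased 2x2 table.

```python
# Hypothetical true risks and selection probabilities (invented bias parameters).
N = 100_000
risk = {1: 0.20, 0: 0.10}           # P(Y=1 | X); true risk ratio = 2.0
p_sel = {(1, 1): 0.9, (1, 0): 0.6,  # P(selected | X, Y): depends on exposure
         (0, 1): 0.7, (0, 0): 0.8}  # and outcome, so selection is differential

# Expected cell counts in the selected (observed) 2x2 table.
selected = {}
for x in (0, 1):
    for y in (0, 1):
        full = (N / 2) * (risk[x] if y == 1 else 1 - risk[x])
        selected[(x, y)] = full * p_sel[(x, y)]

def risk_ratio(cells):
    r1 = cells[(1, 1)] / (cells[(1, 1)] + cells[(1, 0)])
    r0 = cells[(0, 1)] / (cells[(0, 1)] + cells[(0, 0)])
    return r1 / r0

naive_rr = risk_ratio(selected)  # distorted by differential selection
# Reweight each cell by the inverse of its selection probability to
# reconstruct the full, unbiased table.
adj_rr = risk_ratio({k: v / p_sel[k] for k, v in selected.items()})
print(naive_rr, adj_rr)  # adjusted value returns to the true risk ratio 2.0
```

The abstract's point about misspecified bias parameters can be explored by perturbing the entries of `p_sel`: the adjusted estimate then misses 2.0, but typically by less than the naive estimate does.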
Medical Care Apr 2016
Comparative effectiveness research (CER) aims to provide patients and physicians with evidence-based guidance on treatment decisions. As researchers conduct CER they face myriad challenges. Although inadequate control of confounding is the most-often cited source of potential bias, selection bias that arises when patients are differentially excluded from analyses is a distinct phenomenon with distinct consequences: confounding bias compromises internal validity, whereas selection bias compromises external validity. Despite this distinction, however, the label "treatment-selection bias" is being used in the CER literature to denote the phenomenon of confounding bias. Motivated by an ongoing study of treatment choice for depression on weight change over time, this paper formally distinguishes selection and confounding bias in CER. By formally distinguishing selection and confounding bias, this paper clarifies important scientific, design, and analysis issues relevant to ensuring validity. First is that the 2 types of biases may arise simultaneously in any given study; even if confounding bias is completely controlled, a study may nevertheless suffer from selection bias so that the results are not generalizable to the patient population of interest. Second is that the statistical methods used to mitigate the 2 biases are themselves distinct; methods developed to control one type of bias should not be expected to address the other. Finally, the control of selection and confounding bias will often require distinct covariate information. Consequently, as researchers plan future studies of comparative effectiveness, care must be taken to ensure that all data elements relevant to both confounding and selection bias are collected.
Topics: Comparative Effectiveness Research; Confounding Factors, Epidemiologic; Humans; Models, Statistical; Patient Selection; Reproducibility of Results; Research Design; Selection Bias
PubMed: 24309675
DOI: 10.1097/MLR.0000000000000011
The Iowa Orthopaedic Journal 2019
Comparative Study
BACKGROUND
Patient satisfaction surveys are increasingly utilized to measure the patient experience and as a tool to assess the quality of care delivered by medical providers. Press Ganey (PG) is the largest provider of tools for patient satisfaction measurement and analysis. The purpose of this study was to determine if patient satisfaction surveys were subject to selection and/or nonresponse bias.
METHODS
Patients seen in an outpatient academic orthopedic clinic were included in this retrospective cohort study. Demographic data included age, race, gender, marital status, primary payer, and native language. All surveys were administered by PG Associates per internal protocols adhering to exclusion criteria within the institutional contract with PG Associates.
RESULTS
PG survey data were generated by 3.5% of outpatient encounters, corresponding to 9.1% of all patients evaluated. Both the patients who were administered the patient satisfaction survey and the patients who responded to it represented a unique population with regard to age, race, gender, marital status, insurance status, and native language.
CONCLUSIONS
Demographically, patients who were administered PG surveys and patients who responded to them differed from the overall population of patients seen in an outpatient orthopedic setting, evidencing both selection and non-response bias. Because of these differences, and considering the small number of surveys returned, caution should be exercised when interpreting and applying these data. Level of Evidence: III.
Topics: Adult; Aged; Delivery of Health Care; Evidence-Based Medicine; Female; Humans; Male; Middle Aged; Orthopedic Procedures; Outpatients; Patient Satisfaction; Selection Bias; Surveys and Questionnaires; United States
PubMed: 31413694
DOI: No ID Found
Epidemiology (Cambridge, Mass.) Sep 2021
Confounding, selection bias, and measurement error are well-known sources of bias in epidemiologic research. Methods for assessing these biases have their own limitations. Many quantitative sensitivity analysis approaches consider each type of bias individually, although more complex approaches are harder to implement or require numerous assumptions. By failing to consider multiple biases at once, researchers can underestimate or overestimate their joint impact. We show that it is possible to bound the total composite bias owing to these three sources and to use that bound to assess the sensitivity of a risk ratio to any combination of these biases. We derive bounds for the total composite bias under a variety of scenarios, providing researchers with tools to assess their total potential impact. We apply this technique to a study where unmeasured confounding and selection bias are both concerns and to another study in which possible differential exposure misclassification and confounding are concerns. The approach we describe, though conservative, is easier to implement and makes simpler assumptions than quantitative bias analysis. We provide R functions to aid implementation.
Topics: Bias; Confounding Factors, Epidemiologic; Epidemiologic Studies; Humans; Research Design; Selection Bias
PubMed: 34224471
DOI: 10.1097/EDE.0000000000001380