Scientific Reports, May 2023
The purpose of this retrospective, longitudinal study is to evaluate the relationship between MD slope from visual field tests collected over a short period of time (2 years) and the current United States Food and Drug Administration (FDA) recommended endpoints for visual field outcomes. If this correlation is strong and highly predictive, MD slopes could be employed as primary endpoints in neuroprotection clinical trials of shorter duration and help expedite the development of novel IOP-independent therapies. Visual field tests of patients with glaucoma or suspected glaucoma were selected from an academic institution and evaluated against two functional progression endpoints: (A) five or more locations worsening by at least 7 dB, and (B) at least five test locations based upon the GCP algorithm. A total of 271 (57.6%) and 278 (59.1%) eyes reached Endpoints A and B, respectively, during the follow-up period. The median (IQR) MD slopes of eyes reaching vs. not reaching Endpoints A and B were -1.19 (-2.00 to -0.41) vs. 0.36 (0.00 to 1.00) dB/year and -1.16 (-1.98 to -0.40) vs. 0.41 (0.02 to 1.03) dB/year, respectively (P < 0.001). Eyes experiencing rapid 24-2 visual field MD slopes over a 2-year period were, on average, tenfold more likely to reach one of the FDA-accepted endpoints during or soon after that period.
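As a rough illustration of the trend statistic this study relies on (a generic sketch, not code from the paper), an MD slope in dB/year is the ordinary-least-squares slope of mean deviation against test date; the test dates and MD values below are hypothetical.

```python
# Illustrative sketch: estimating a visual-field MD slope (dB/year)
# by ordinary least squares over a 2-year series of tests.

def md_slope(years, md_values):
    """Least-squares slope of mean deviation (dB) against time (years)."""
    n = len(years)
    mean_t = sum(years) / n
    mean_md = sum(md_values) / n
    num = sum((t - mean_t) * (m - mean_md) for t, m in zip(years, md_values))
    den = sum((t - mean_t) ** 2 for t in years)
    return num / den

# Hypothetical eye tested 5 times over 2 years, worsening steadily:
times = [0.0, 0.5, 1.0, 1.5, 2.0]   # years from baseline
mds = [-2.0, -2.6, -3.1, -3.8, -4.4]  # MD in dB
slope = md_slope(times, mds)          # ≈ -1.2 dB/year, in the "rapid" range
```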
Topics: Humans; Retrospective Studies; Longitudinal Studies; Neuroprotection; Intraocular Pressure; Vision Disorders; Glaucoma; Visual Field Tests; Disease Progression; Follow-Up Studies
PubMed: 37130950
DOI: 10.1038/s41598-023-34009-x
BMJ Open, Oct 2022
INTRODUCTION
Using a surrogate endpoint as a substitute for a primary patient-relevant outcome enables randomised controlled trials (RCTs) to be conducted more efficiently, that is, in less time, with a smaller sample size and at lower cost. However, there is currently no consensus-driven guideline for the reporting of RCTs using a surrogate endpoint as a primary outcome; we therefore seek to develop SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) and CONSORT (Consolidated Standards of Reporting Trials) extensions to improve the design and reporting of these trials. As an initial step, scoping and targeted reviews will identify potential items for inclusion in the extensions, as well as participants to contribute to a Delphi consensus process.
METHODS AND ANALYSIS
The scoping review will search for and include literature reporting on the current understanding, limitations and guidance on using surrogate endpoints in trials. Relevant literature will be identified through: (1) bibliographic databases; (2) grey literature; (3) handsearching of reference lists; and (4) solicitation from experts. Data from eligible records will be thematically analysed into potential items for inclusion in the extensions. The targeted review will search for RCT reports and protocols published from 2017 to 2021 in six high-impact general medical journals. The corresponding authors of these trials will be listed as potential participants for the Delphi exercise.
ETHICS AND DISSEMINATION
Ethical approval is not required. The reviews will support the development of SPIRIT and CONSORT extensions for reporting surrogate primary endpoints (a surrogate endpoint used as the primary outcome). The findings will be published in open-access publications. This review has been prospectively registered on the Open Science Framework (DOI: 10.17605/OSF.IO/WP3QH).
Topics: Consensus; Humans; Publications; Randomized Controlled Trials as Topic; Reference Standards; Research Design; Review Literature as Topic; Treatment Outcome
PubMed: 36229145
DOI: 10.1136/bmjopen-2022-062798
Orphanet Journal of Rare Diseases, Apr 2022
BACKGROUND
The small patient populations inherent to rare genetic diseases present many challenges to the traditional drug development paradigm. One major challenge is generating sufficient data in early phase studies to inform dose selection for later phase studies and dose optimization for clinical use of the drug. However, optimizing the benefit-risk profile of drugs through appropriate dose selection during drug development is critical for all drugs, including those being developed to treat rare diseases. Recognizing the challenges of conducting dose-finding studies in rare disease populations and the importance of dose selection and optimization for successful drug development, we assessed the dose-finding studies and analyses conducted for drugs recently approved for rare genetic diseases.
RESULTS
Of the 40 marketing applications for new molecular entity (NME) drugs and biologics approved by the United States Food and Drug Administration for rare genetic diseases from 2015 to 2020, 21 (53%) of the development programs conducted at least one dedicated dose-finding study. In addition, the majority of drug development programs conducted clinical studies in healthy subjects and included population pharmacokinetic and exposure-response analyses; some programs also conducted clinical studies in patient populations other than the disease for which the drug was initially approved. The majority of primary endpoints utilized in dedicated dose-finding studies were biomarkers, and the primary endpoint of the safety and efficacy study matched the primary endpoint used in the dose-finding study in 9 of 13 (69%) drug development programs where primary study endpoints were assessed.
CONCLUSIONS
Our study showed that NME drug development programs for rare genetic diseases utilize multiple data sources for dosing information, including studies in healthy subjects, population pharmacokinetic analyses, and exposure-response analyses. In addition, our results indicate that biomarkers play a key role in dose-finding studies for rare genetic disease drug development programs. Our findings highlight the need to develop study designs and methods to allow adequate dose-finding efforts within rare disease drug development programs that help overcome the challenges presented by low patient prevalence and other factors. Furthermore, the frequent reliance on biomarkers as endpoints for dose-finding studies underscores the importance of biomarker development in rare diseases.
Topics: Biological Products; Drug Approval; Drug Development; Humans; Rare Diseases; Research Design; United States; United States Food and Drug Administration
PubMed: 35382851
DOI: 10.1186/s13023-022-02298-6
Cardiology, 2022
Review
BACKGROUND
Unstable angina (UA) is a component of acute coronary syndrome that is only occasionally included in primary composite endpoints in clinical cardiovascular trials. The aim of this paper is to elucidate the potential benefits and disadvantages of including UA in such contexts.
SUMMARY
UA comprises <10% of patients with acute coronary syndromes in contemporary settings. Given its pathophysiological similarities to myocardial infarction (MI), it is well suited as part of a composite endpoint along with MI. Adding UA as a component of a primary composite endpoint should increase the number of events and the feasibility of the trial, thus decreasing trial size and cost. Furthermore, UA has both economic and quality-of-life implications at societal and individual levels. However, there are important challenges associated with the use of UA as an endpoint. With the introduction of high-sensitivity troponins, the number of individuals diagnosed with UA has decreased to rather low levels, with a reciprocal increase in the number of MIs. In addition, UA is particularly challenging to define given the subjective assessment of the index symptoms, creating a high risk of bias. To minimize bias, strict criteria are warranted, and events should be adjudicated by a blinded endpoint adjudication committee.
KEY MESSAGES
UA should only be chosen as a component of a primary composite endpoint in cardiovascular trials after thoroughly evaluating the pros and cons. If UA is included, appropriate precautions should be taken to minimize possible bias.
Topics: Acute Coronary Syndrome; Angina, Unstable; Clinical Trials as Topic; Humans; Myocardial Infarction; Quality of Life; Troponin
PubMed: 35537418
DOI: 10.1159/000524948
Translational Lung Cancer Research, Mar 2012
Review
Statistical considerations and endpoints for clinical lung cancer studies: Can progression free survival (PFS) substitute overall survival (OS) as a valid endpoint in clinical trials for advanced non-small-cell lung cancer?
In recent decades, significant progress has been achieved in the biological understanding of non-small-cell lung cancer (NSCLC), and its tumor heterogeneity has become more evident. The identification of novel tumor targets acting through different pathways has stimulated the search for anti-tumor agents with a specific, target-directed mode of action, creating the need to test these agents in clinical trials with an appropriately chosen study endpoint. Overall survival (OS) has so far been the gold-standard endpoint. Three categories of classical endpoints are generally applied in clinical lung cancer studies: survival-time endpoints, symptom endpoints, and endpoints relying on patient reporting. Besides classical endpoints like OS, which tend to show the direct clinical effect of treatment, efforts have been made to substitute these classical endpoints with surrogates. As a surrogate candidate for OS, progression-free survival (PFS) has the considerable inherent advantage that it can detect subpopulations with longer PFS intervals early. Depending on the (sub)population treated, and bearing in mind the risk-benefit profile of the drug under consideration, PFS can be considered for regulatory decision making. If accompanied by independent measures such as quality of life or treatment toxicity, PFS should be able to capture the clinical benefit achieved by treatment. Selecting PFS as the primary endpoint in phase III trials of advanced NSCLC may rest on a number of questions, such as: Does the definition of PFS fit the setting used by other trials? Are there accepted consensus standards? Are surveillance intervals consistent? Is validation planned for each agent group? Is the incremental improvement in PFS large enough (≥30%)? And are there additional measures to confirm clinical benefit? OS remains accepted as the gold standard in trials investigating advanced NSCLC.
OS is easy to measure and precise, but it may be difficult to interpret if the treatment effect occurs only in a small subinterval of overall survival. PFS, together with additional measures, has become attractive when it seems advisable to make study results available earlier. Candidate "additional measures" supporting PFS include treatment toxicity and quality-of-life measures. PFS allows more precise detection and attribution of effects of the investigational treatment, without being compromised by subsequent treatments. Therefore, "enriched PFS" can be considered as an alternative primary endpoint replacing OS in studies investigating advanced NSCLC. The endpoint-selection process should always be performed carefully, considering all true and surrogate endpoint options with respect to the hypotheses to be proven.
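For readers less familiar with how PFS and OS are summarized (a generic sketch, not part of this review), both are time-to-event endpoints typically estimated with the Kaplan-Meier method on right-censored data; the PFS times below are hypothetical.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival steps for right-censored data.
    events[i] = 1 if progression/death observed at times[i], 0 if censored.
    Returns a list of (time, survival probability) at each event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        # count events and total departures from the risk set at time t
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        leaving = sum(1 for tt, _ in data if tt == t)
        if deaths:
            surv *= 1.0 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= leaving
        while i < len(data) and data[i][0] == t:
            i += 1
    return curve

# Hypothetical PFS data (months); 0 = censored at last follow-up
pfs = kaplan_meier([2, 4, 4, 6, 9], [1, 1, 0, 0, 1])
# steps: S(2) = 4/5 = 0.8, S(4) = 0.8 * 3/4 = 0.6, S(9) = 0.6 * 0 = 0.0
```

The same estimator applies to OS by swapping progression events for deaths, which is why the two endpoints differ mainly in what counts as an event and how early the curve matures.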
PubMed: 25806152
DOI: 10.3978/j.issn.2218-6751.2011.12.08
Investigative Ophthalmology & Visual Science, May 2017
Review
PURPOSE
The purpose of this study was to investigate the use of imaging biomarkers in published clinical trials (CTs) in ophthalmology and its eventual changes during the past 10 years.
METHODS
We sampled from published CTs in the fields of cornea, retina, and glaucoma between 2005-2006 and 2015-2016. Data collected included year of publication, phase, subspecialty, location, compliance with Consolidated Standards of Reporting Trials, impact factor, presence and use of imaging biomarkers (diagnostic, prognostic and predictive; primary and secondary surrogate endpoints), and use of centralized reading centers.
RESULTS
We included 652 articles for analysis, equally distributed across three timeframes (2005-2006, 2010-2011, and 2015-2016), mainly reporting phase IV CTs and trials on procedures (42.2% and 35.4%, respectively). Imaging biomarkers were included in 46.3% of the analyzed CTs, and their use significantly increased over time (P < 0.05). Optical coherence tomography was the most frequently used device (27.7%), whereas diagnostic biomarkers and secondary surrogate endpoints were the most frequent biomarker types (19.5% and 22.5%, respectively). Early-phase CTs showed an increase in the use of biomarkers for patient selection and stratification over time (P < 0.05), but not in the use of imaging surrogate endpoints (P = 0.90). Only 3 of 59 (5.1%) phase III CTs included primary surrogate imaging endpoints, whereas secondary surrogate imaging endpoints were present in 50.8% of these trials (P < 0.001). Retinal CTs had the highest prevalence of each type of imaging biomarker (P < 0.001). Reading centers were used in 52 of 302 CTs (17.2%), with no significant time-related increase.
CONCLUSIONS
Imaging biomarkers are increasingly used in published CTs in ophthalmology. Additional efforts, including centralized reading centers, are needed to improve their validation and use, allowing a wider use of these tools as primary surrogate endpoints in phase III CTs.
Topics: Clinical Trials as Topic; Diagnostic Techniques, Ophthalmological; Endpoint Determination; Eye Diseases; Forecasting; Humans; Ophthalmology; Prognosis
PubMed: 28525561
DOI: 10.1167/iovs.17-21790
World Journal of Hepatology, Sep 2023
Review
Surrogate endpoints are needed to estimate clinical outcomes in primary sclerosing cholangitis (PSC). Serum alkaline phosphatase was among the first markers studied, but there is substantial variability in alkaline phosphatase levels during the natural history of PSC without intervention. The Mayo risk score incorporates noninvasive variables and has served as a surrogate endpoint for survival for more than two decades. Newer models have better test performance than the Mayo risk score, including the primary sclerosing risk estimate tool (PREsTo) model and UK-PSC score, which estimate hepatic decompensation and transplant-free survival, respectively. The c-statistics for transplant-free survival for the Mayo risk model and the long-term UK-PSC model are 0.68 and 0.85, respectively. The c-statistics for hepatic decompensation for the Mayo risk model and PREsTo model are 0.85 and 0.90, respectively. The Amsterdam-Oxford model included patients with large duct and small duct PSC and patients with PSC-autoimmune hepatitis overlap, and had a c-statistic of 0.68 for transplant-free survival. Other noninvasive tests that warrant further validation include magnetic resonance imaging, elastography and the enhanced liver fibrosis score. Prognostic models, noninvasive tests or a combination of these surrogate endpoints may not only prove useful in clinical trials of investigational agents but also help inform patients about their prognosis.
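For context on the c-statistics quoted above (a generic sketch, not from this review), Harrell's c-statistic measures how often a model's predicted risk correctly orders comparable pairs of patients in right-censored survival data; the times, event flags and risk scores below are hypothetical.

```python
def harrell_c(times, events, risk):
    """Harrell's c-statistic for right-censored survival data: among
    comparable pairs (the earlier time is an observed event), the fraction
    in which the higher-risk patient fails first; risk ties count 0.5."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        if events[i] != 1:
            continue  # a censored subject cannot anchor a comparable pair
        for j in range(n):
            if i != j and times[i] < times[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# Hypothetical transplant-free survival data: a score that ranks
# earlier failures as higher risk is perfectly concordant (c = 1.0)
c = harrell_c([1, 3, 5, 7], [1, 1, 0, 1], [0.9, 0.7, 0.4, 0.2])
```

A c-statistic of 0.5 corresponds to a coin flip and 1.0 to perfect ranking, which is why the jump from 0.68 (Mayo) to 0.85 (long-term UK-PSC) is a substantial gain in discrimination.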
PubMed: 37900215
DOI: 10.4254/wjh.v15.i9.1013
The Journal of Antimicrobial Chemotherapy, Dec 1995
Review
In 1994, an international group of interested clinicians and biostatisticians met to discuss the design of clinical trials in herpes zoster. They agreed that trials in herpes zoster should have prospectively agreed definitions of all outcome measures and plans for data analysis. In immunocompetent individuals, in whom pain is the major outcome measure, trials should include only patients over the age of 50 years and, for those recruited within 72 h of rash onset, should be designed to demonstrate superiority of any new therapy over existing antivirals. The primary endpoint should be the time to cessation of pain for at least 4 weeks and, for the purposes of statistical analysis of its duration, the pain associated with herpes zoster ought to be considered as a continuum. All other variables, including the incidence of post-herpetic neuralgia and effects upon quality of life, should be considered secondary endpoints. Evaluation of treatment effects on primary endpoints should be based upon an intent-to-treat (ITT) analysis, and subgroup analysis should be used only to support the findings of the ITT analysis. These elements of good study design should be borne in mind in the evaluation of current and future trials of antiviral drugs in herpes zoster.
Topics: Clinical Trials as Topic; Herpes Zoster; Humans; Research Design; Treatment Outcome
PubMed: 8821612
DOI: 10.1093/jac/36.6.1089
BMC Cardiovascular Disorders, Sep 2022
BACKGROUND
Elevated lactate dehydrogenase (LDH) has been reported in multiple heart diseases. Herein, we explored the prognostic effects of preoperative LDH on adverse outcomes in cardiac surgery patients.
METHODS
Retrospective data analysis was conducted from two large medical databases: Medical Information Mart for Intensive Care (MIMIC) III and MIMIC IV databases. The primary outcome was in-hospital mortality, whereas the secondary outcomes were 1-year mortality, continuous renal replacement therapy, prolonged ventilation, and prolonged length of intensive care unit and hospital stay.
RESULTS
Patients reaching the primary endpoint had significantly higher levels of LDH (p < 0.001). Multivariate regression analysis showed that elevated LDH was independently associated with increased risk of the primary and secondary endpoints (all p < 0.001). Subgroup analyses showed that high LDH was consistently associated with the primary endpoint. Moreover, LDH exhibited the highest area under the curve (0.768) for prediction of the primary endpoint compared with the other indicators, including neutrophil-lymphocyte ratio (NLR), lymphocyte-monocyte ratio (LMR), platelet-lymphocyte ratio (PLR), lactate, and simplified acute physiology score (SAPS) II. These results were further confirmed in the MIMIC IV dataset.
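As background on the AUC comparison reported here (a generic sketch, not the study's code), the area under the ROC curve for a binary outcome equals the Mann-Whitney probability that a randomly chosen patient who reached the endpoint has a higher marker value than one who did not; the LDH values below are hypothetical.

```python
def auc(case_scores, control_scores):
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen case (e.g., in-hospital death) has a higher marker value than a
    randomly chosen control; ties count one half."""
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Hypothetical preoperative LDH values (U/L):
ldh_died = [480, 610, 350]              # patients reaching the primary endpoint
ldh_survived = [220, 300, 350, 410]
print(auc(ldh_died, ldh_survived))      # → 0.875
```

Computing each candidate marker's AUC this way, on the same patients, is what makes the head-to-head comparison with NLR, LMR, PLR, lactate, and SAPS II meaningful.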
CONCLUSIONS
Elevated preoperative LDH may be a robust predictor of poor prognosis in cardiac surgery patients, and its predictive ability is superior to NLR, LMR, PLR, lactate, and SAPS II.
Topics: Cardiac Surgical Procedures; Humans; L-Lactate Dehydrogenase; Lactates; Prognosis; Retrospective Studies
PubMed: 36088306
DOI: 10.1186/s12872-022-02848-7
The prognostic role of anticoagulants in COVID-19 patients: national COVID-19 cohort in South Korea. Annals of Palliative Medicine, Apr 2022
BACKGROUND
There is currently a paucity of data on whether pre-admission anticoagulant use may benefit COVID-19 patients by preventing COVID-19-associated thromboembolism. The aim of this study was to assess the association between pre-admission anticoagulant use and COVID-19 adverse outcomes.
METHODS
We conducted a population-based cohort study using the Health Insurance Review and Assessment Service (HIRA) claims data released by the South Korean government. Our study population consisted of South Koreans aged 40 years or older who were hospitalized with COVID-19 between 1 January and 15 May 2020. We defined anticoagulant users as individuals with inpatient or outpatient prescription records in the 120 days before cohort entry. Our primary endpoint was a composite of all-cause death, intensive care unit (ICU) admission, and mechanical ventilation use. The individual components of the primary endpoint were secondary endpoints. We compared the risk of these endpoints between anticoagulant users and non-users using logistic regression models with standardized mortality ratio weighting (SMRW) adjustment.
RESULTS
In our cohort of 4,349 patients, no difference was noted between anticoagulant users and non-users for the composite primary endpoint of death, mechanical ventilation, and ICU admission (SMRW OR 1.11, 95% CI: 0.60-2.05). No differences were noted among the individual components. No effect modification was observed by age, sex, history of atrial fibrillation or thromboembolism, or history of cardiovascular disease. When the inverse probability of treatment weighting (IPTW) and SMRW with doubly robust methods were applied in sensitivity analyses, anticoagulant use was associated with increased odds of the primary endpoint.
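For readers unfamiliar with the weighting scheme named above (a generic sketch, not the study's code), SMRW standardizes the comparison to the treated population: treated patients receive weight 1 and untreated patients receive their propensity odds p/(1-p); the treatment flags and propensity scores below are hypothetical.

```python
def smrw_weights(treated, propensity):
    """Standardized mortality ratio weights (ATT weighting): treated
    patients get weight 1; untreated patients get p/(1-p), their odds of
    treatment, reweighting them to resemble the treated population."""
    return [1.0 if t == 1 else p / (1.0 - p)
            for t, p in zip(treated, propensity)]

# Hypothetical patients: treatment flag and model-estimated propensity score
treated = [1, 1, 0, 0, 0]
propensity = [0.6, 0.4, 0.5, 0.2, 0.8]
weights = smrw_weights(treated, propensity)
# untreated weights: 0.5/0.5 = 1.0, 0.2/0.8 = 0.25, 0.8/0.2 = 4.0
```

These weights are then fed into a weighted logistic regression of the outcome on treatment, which is how the SMRW-adjusted odds ratio above would be obtained.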
CONCLUSIONS
Pre-admission anticoagulant use was not found to have a protective role against severe COVID-19 outcomes.
Topics: Anticoagulants; COVID-19; Humans; Prognosis; SARS-CoV-2; Thromboembolism
PubMed: 35400157
DOI: 10.21037/apm-21-3466