BMC Medical Research Methodology, Jul 2021 (Randomized Controlled Trial)
BACKGROUND
Data monitoring of clinical trials is a tool aimed at reducing the risks of random errors (e.g. clerical errors) and systematic errors, which include misinterpretation, misunderstandings, and fabrication. Traditional 'good clinical practice data monitoring' with on-site monitors increases trial costs and is time-consuming for the local investigators. This paper aims to outline our approach to time-effective central data monitoring for the SafeBoosC-III multicentre randomised clinical trial and to present the results from the first three central data monitoring meetings.
METHODS
The present approach to central data monitoring was implemented for the SafeBoosC-III trial, a large, pragmatic, multicentre, randomised clinical trial evaluating the benefits and harms of treatment based on cerebral oxygenation monitoring in preterm infants during the first days of life versus monitoring and treatment as usual. We aimed to optimise completeness and quality and to minimise deviations, thereby limiting random and systematic errors. We designed an automated report, blinded to group allocation, to ease the work of data monitoring. The central data monitoring group first reviewed the data using summary plots only, and thereafter included the results of the multivariate Mahalanobis distance of each centre from the common mean. The decisions of the group were manually added to the reports for dissemination, information, correcting errors, preventing future errors, and documentation.
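As a sketch of the distance-based screening described above: given per-centre means of the monitored variables and a pooled covariance estimate, each centre's multivariate distance from the common mean can be computed as a Mahalanobis distance. This is an illustrative reconstruction, not the trial's actual implementation; the function names and toy data are hypothetical.

```python
import numpy as np

def centre_mahalanobis(centre_means, pooled_cov, common_mean):
    """Mahalanobis distance of each centre's variable means from the common mean."""
    inv_cov = np.linalg.inv(pooled_cov)
    diffs = centre_means - common_mean              # shape (n_centres, n_vars)
    # d_i = sqrt((x_i - mu)^T S^{-1} (x_i - mu))
    return np.sqrt(np.einsum("ij,jk,ik->i", diffs, inv_cov, diffs))

# Toy data: three centres, two monitored variables. With an identity
# covariance the distances reduce to Euclidean distances from the mean.
centre_means = np.array([[1.0, 2.0], [1.1, 2.1], [3.0, 5.0]])
common_mean = centre_means.mean(axis=0)
pooled_cov = np.eye(2)
distances = centre_mahalanobis(centre_means, pooled_cov, common_mean)
# the third centre stands out as farthest from the common mean
```

In practice the pooled within-centre covariance would be estimated from the trial data, and centres with unusually large distances would be flagged for review at the monitoring meeting.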
RESULTS
The first three central monitoring meetings identified 156 entries of interest and decided to contact the local investigators about 146 of these, which resulted in the correction of 53 entries. Multiple systematic errors and protocol violations were identified, one of which involved 103 of 818 randomised participants. Accordingly, the electronic participant record form (ePRF) was improved to reduce ambiguity.
DISCUSSION
We present a methodology for central data monitoring to optimise quality control and quality development. The initial results included identification of random errors in data entries leading to correction of the ePRF, systematic protocol violations, and potential protocol adherence issues. Central data monitoring may optimise concurrent data completeness and may help timely detection of data deviations due to misunderstandings or fabricated data.
Topics: Humans; Infant, Newborn; Infant, Premature; Monitoring, Physiologic
PubMed: 34332547
DOI: 10.1186/s12874-021-01344-4
Resuscitation Plus, Mar 2022
AIM
Automated external defibrillators (AEDs) use various shock protocols with different characteristics when deployed in pediatric mode. The aim of this study is to assess and compare the safety and efficacy of different AED pediatric protocols using novel experimental approaches.
METHODS
Two defibrillation protocols (A and B) were assessed across two studies. Protocol A: escalating (50-75-90 J) defibrillation waveform with higher voltage, shorter duration, and equal phase durations. Protocol B: non-escalating (50-50-50 J) defibrillation waveform with lower voltage, longer duration, and unequal phase durations. Experiment 1: isolated shock damage was assessed following shocks to 12 anesthetized pigs. Animals were randomized into two groups, receiving three shocks from Protocol A (50-75-90 J) or B (50-50-50 J). Cardiac function, cardiac troponin I (cTnI), creatine phosphokinase (CPK), and histopathology were analyzed. Experiment 2: defibrillation safety and efficacy were assessed through shock success, ROSC, ST-segment deviation, and contractility following 16 randomized shocks from Protocol A or B delivered to 10 anesthetized pigs in VF.
RESULTS
Experiment 1: No clinically meaningful difference in cTnI, CPK, ST-segment deviation, ejection fraction or histopathological damage was observed following defibrillation with either protocol. No difference was observed between protocols at any timepoint. Experiment 2: all defibrillation types demonstrated shock success and ROSC ≥ 97.5%. Post-ROSC contractility was similar between protocols.
CONCLUSIONS
There is no evidence that administration of clinically relevant shock sequences, without experimental confounders, results in significant myocardial damage in this model of pediatric resuscitation. Typical variations in AED pediatric mode settings do not affect defibrillation safety and efficacy.
PubMed: 35146463
DOI: 10.1016/j.resplu.2022.100203
International Journal of Molecular... May 2019 (Review)
The refinement of predicted 3D protein models is crucial in bringing them closer to experimental accuracy for further computational studies. Refinement approaches can be divided into two main stages: sampling and scoring. Sampling strategies, such as the popular Molecular Dynamics (MD)-based protocols, aim to generate improved 3D models. However, generating 3D models that are closer to the native structure than the initial model remains challenging, as deviations from the native basin can arise from force-field inaccuracies. Therefore, different restraint strategies have been applied to avoid deviations away from the native structure. For example, accurate prediction of local errors and/or contacts in the initial models can be used to guide restraints. MD-based protocols using physics-based force fields and smart restraints have made significant progress towards more consistent refinement of 3D models. The scoring stage, including energy functions and Model Quality Assessment Programs (MQAPs), is used to discriminate near-native conformations from non-native ones. Nevertheless, differences among the 3D models generated in refinement pipelines are often very small, which makes model discrimination and selection problematic. For this reason, identifying the most native-like conformations remains a major challenge.
Topics: Computational Biology; Models, Molecular; Proteins
PubMed: 31075942
DOI: 10.3390/ijms20092301
Acta Ophthalmologica, May 2022
PURPOSE
To assess the diagnostic accuracy and agreement between a paediatric electroretinography protocol used at Great Ormond Street Hospital (GOSH-ERG) and the 'gold standard' international protocol (ISCEV-ERG) in health and disease.
METHODS
Patient databases between 2010 and 2020 were screened to identify children with an ISCEV-ERG recorded within four years of a GOSH-ERG. Electroretinogram (ERG) component peak times and amplitudes were re-measured, and data were analysed in terms of absolute abnormality and proportional deviation from respective reference ranges. Abnormality was defined by the retinal system affected and by individual ERG a- and b-wave component analysis.
RESULTS
A total of 59 patients were included: 38 patients had retinal disease defined by an abnormal ISCEV-ERG and 21 had normal ISCEV-ERGs. When absolute abnormality was defined by combined retinal systems, the GOSH-ERG showed an excellent overall sensitivity of 95% (accuracy 86%). Individual retinal systems showed good to excellent sensitivity (67%-100%) and specificity (68%-97%). ERG component sensitivities ranged between 60% and 97% and specificities between 79% and 97%, depending on the protocol step. The proportional relationship appeared mostly linear between protocols. ERG morphology was comparable for both protocols in a range of retinal diseases, including those with pathognomonic ERGs.
CONCLUSION
We demonstrate the high diagnostic accuracy of a paediatric ERG protocol (GOSH-ERG) relative to ISCEV standard ERGs. The close proportional deviation and similar waveform morphology indicate ERGs from each protocol are similarly affected in disease. This encourages the use of the GOSH-ERG protocol in the screening, diagnosis and monitoring of retinal disease in children who are unable to comply with the rigorous ISCEV-ERG protocol.
Topics: Child; Electroretinography; Humans; Photic Stimulation; Retina; Retinal Diseases; Societies, Medical
PubMed: 34126657
DOI: 10.1111/aos.14938
Sleep, Jan 2021
STUDY OBJECTIVES
The psychomotor vigilance test (PVT) is frequently used to measure behavioral alertness in sleep research on various software and hardware platforms. In contrast to many other cognitive tests, PVT response time (RT) shifts of a few milliseconds can be meaningful. It is, therefore, important to use calibrated systems, but calibration standards are currently missing. This study investigated the influence of system latency bias and its variability on two frequently used PVT performance metrics, attentional lapses (RTs ≥500 ms) and response speed, in sleep-deprived and alert participants.
METHODS
PVT data from one acute total (N = 31 participants) and one chronic partial (N = 43 participants) sleep deprivation protocol were the basis for simulations in which response bias (±15 ms) and its variability (0-50 ms) were systematically varied and transgressions of predefined thresholds (i.e. ±1 for lapses, ±0.1/s for response speed) were recorded.
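The kind of simulation described above can be sketched roughly as follows: add a fixed latency bias plus Gaussian jitter to a set of response times, recompute the two performance metrics, and test whether the shifts cross the predefined thresholds. This is a hedged illustration with hypothetical names and toy data, not the study's actual code.

```python
import random

def pvt_metrics(rts_ms):
    """Lapses (RT >= 500 ms) and mean response speed (1/RT, in 1/s)."""
    lapses = sum(1 for rt in rts_ms if rt >= 500)
    speed = sum(1000.0 / rt for rt in rts_ms) / len(rts_ms)
    return lapses, speed

def add_latency_bias(rts_ms, bias_ms, sd_ms, rng):
    """Add a systematic device latency bias plus Gaussian variability to each RT."""
    return [rt + bias_ms + rng.gauss(0.0, sd_ms) for rt in rts_ms]

rng = random.Random(42)
true_rts = [rng.uniform(200.0, 600.0) for _ in range(100)]  # toy RT sample
true_lapses, true_speed = pvt_metrics(true_rts)

biased_rts = add_latency_bias(true_rts, bias_ms=15.0, sd_ms=50.0, rng=rng)
biased_lapses, biased_speed = pvt_metrics(biased_rts)

# Threshold transgressions as defined in the study: ±1 lapse, ±0.1/s speed.
lapse_transgression = abs(biased_lapses - true_lapses) >= 1
speed_transgression = abs(biased_speed - true_speed) >= 0.1
```

Repeating this over a grid of bias and variability values, separately for sleep-deprived and alert RT distributions, gives the transgression rates the study reports.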
RESULTS
Both increasing bias and its variability caused deviations from true scores that were higher for the number of lapses in sleep-deprived participants and for response speed in alert participants. Threshold transgressions were typically rare (i.e. <5%) if system latency bias was less than ±5 ms and its standard deviation was ≤10 ms.
CONCLUSIONS
A bias of ±5 ms with a standard deviation of ≤10 ms could be considered maximally allowable margins for calibrating PVT systems for timing accuracy. Future studies should report the average system latency and its standard deviation in addition to adhering to published standards for administering and analyzing the PVT.
Topics: Attention; Humans; Psychomotor Performance; Reaction Time; Sleep Deprivation; Wakefulness
PubMed: 32556295
DOI: 10.1093/sleep/zsaa121
Perspectives in Clinical Research, 2016
INTRODUCTION
Deviations from the approved trial protocol are common during clinical trials. They have been conventionally classified as deviations or violations, depending on their impact on the trial.
METHODS
A new method has been proposed by which deviations are classified into five grades, from 1 to 5. A Grade 1 deviation has no impact on the subjects' well-being or on the quality of data. At the other extreme, a Grade 5 deviation leads to the death of the subject. This method of classification was applied to deviations noted at the center over the last 3 years.
RESULTS
It was observed that most deviations were of Grades 1 and 2, with fewer falling in Grades 3 and 4. There were no deviations that led to the death of the subject (Grade 5).
DISCUSSION
This method of classification would help trial managers decide, based on impact, what action to take when deviations occur.
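One way such impact-based decisions could be operationalized is a simple grade-to-action lookup. Only Grades 1 and 5 are defined in the abstract; the intermediate cut-offs and action labels below are hypothetical, purely for illustration.

```python
def deviation_action(grade: int) -> str:
    """Map a protocol deviation grade (1-5) to a follow-up action tier.

    Per the abstract, Grade 1 has no impact on the subject's well-being or
    data quality, and Grade 5 corresponds to the death of the subject.
    The cut-offs and action labels here are hypothetical.
    """
    if grade not in (1, 2, 3, 4, 5):
        raise ValueError("grade must be between 1 and 5")
    if grade <= 2:
        return "document and monitor"
    if grade <= 4:
        return "corrective and preventive action"
    return "immediate report to sponsor and ethics committee"
```

A lookup like this makes the response to a deviation a function of its graded impact rather than an ad hoc judgment, which is the core idea of the proposed scheme.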
PubMed: 27453830
DOI: 10.4103/2229-3485.184817
Antimicrobial Resistance and Infection... 2018 (Comparative Study)
BACKGROUND
Healthcare workers (HCWs) use personal protective equipment (PPE) in Ebola virus disease (EVD) situations. However, improved strategies are crucially needed to prevent contamination of HCWs and the environment during PPE removal. This study aimed to compare the efficacy of three PPE ensembles, namely, the Hospital Authority (HA) Standard Ebola PPE set (PPE1), the Dupont Tyvek Model, style 1422A (PPE2), and the HA isolation gown for routine patient care and performing aerosol-generating procedures (PPE3), in preventing EVD transmission by measuring the degree of contamination of HCWs and the environment.
METHODS
A total of 59 participants randomly performed PPE donning and doffing. The trial consisted of PPE donning, applying fluorescent solution on the PPE surface, PPE doffing of participants, and estimation of the degree of contamination as indicated by the number of fluorescent stains on the working clothes and environment. Protocol deviations during PPE donning and doffing were monitored.
RESULTS
PPE2 and PPE3 presented higher contamination risks than PPE1. Environmental contaminations such as those originating from rubbish bin covers, chairs, faucets, and sinks were detected. Procedure deviations were observed during PPE donning and doffing, with PPE1 presenting the lowest overall deviation rate (%) among the three PPE ensembles (P < 0.05).
CONCLUSION
Contamination of the subjects' working clothes and surrounding environment occurred frequently during PPE doffing, and procedure deviations were observed during both PPE donning and doffing. Although PPE1 presented a lower contamination risk and fewer protocol deviations than PPE2 and PPE3, its design can still be further improved. Future directions should focus on designing a high-coverage-area PPE with simple ergonomic features and on evaluating the doffing procedure to minimise the risk of recontamination. Regular training for users should be emphasised to minimise protocol deviations and, in turn, guarantee the best protection to HCWs.
Topics: Adult; Aerosols; Environmental Exposure; Female; Health Personnel; Hemorrhagic Fever, Ebola; Humans; Infectious Disease Transmission, Patient-to-Professional; Male; Middle Aged; Personal Protective Equipment; Random Allocation; Young Adult
PubMed: 30607244
DOI: 10.1186/s13756-018-0433-y
Fertility and Sterility, Sep 2020
OBJECTIVE
To identify and treat the gamete responsible for complete fertilization failure with intracytoplasmic sperm injection (ICSI) using a newly proposed assisted gamete treatment (AGT).
DESIGN
Prospective cohort study.
SETTING
Center for reproductive medicine.
PATIENT(S)
One-hundred and fourteen couples with an adequate number of spermatozoa for ICSI and a fertilization rate of ≤10%, after controlling for maternal age.
INTERVENTION(S)
Couples with an oocyte-related oocyte activation deficiency (OAD) underwent a subsequent cycle with a modified superovulation protocol; couples with sperm-related OAD had an additional genetic and epigenetic assessment to identify mutations and expression levels of the corresponding genes.
MAIN OUTCOME MEASURE(S)
Treatment cycle outcome for couples undergoing ICSI with either a modified superovulation protocol or AGT compared with their historical cycle.
RESULT(S)
A total of 114 couples matched the inclusion criteria, representing approximately 1.3% of the total ICSI cycles performed at our center, with age-matched controls. Fifty-two couples were confirmed negative for sperm-related OAD by the phospholipase Cζ (PLCζ) assay, indicating oocyte-related factors in their failed fertilization cycles. Couples were treated by one of two AGT protocols, AGT-initial or AGT-revised, in a subsequent attempt that was compared with their historical cycle. Subsequent ICSI cycles with a tailored superovulation protocol yielded significantly higher fertilization (59.0% vs. 2.1%) and clinical pregnancy (28.6% vs. 0%) rates. In 24 couples (mean ± standard deviation: maternal age, 35.6 ± 5 years; paternal age, 39.8 ± 6 years), sperm-related OAD was confirmed; in four men, a deletion in the PLCZ1 gene was identified. Additional mutations were identified in genes supporting spermiogenesis and embryo development (PIWIL1, BSX, NLRP5), along with gene deletions confirming a complete absence of the subacrosomal perinuclear theca (PICK1, SPATA16, DPY19L). Subsequent AGT treatment yielded higher fertilization (42.1% vs. 9.1%) and clinical pregnancy (36% vs. 0%) rates than the couples' historical cycles. A comparison of the two AGT protocols revealed that AGT-revised yielded even more favorable fertilization (45.9% vs. 37.6%) and clinical pregnancy (83.3% vs. 21.1%) rates than AGT-initial.
CONCLUSION(S)
In couples with an oocyte-related OAD, tailoring the superovulation protocol resulted in successful fertilization, term pregnancies, and deliveries. In couples with a sperm-related OAD as determined by PLCζ assay, mouse oocyte activation test, and the assessment of gene mutations and function, AGT was successful. The AGT-revised protocol yielded an even higher fertilization rate than the AGT-initial protocol, resulting in the birth of healthy offspring in all couples who achieved a clinical pregnancy.
Topics: Adult; Female; Fertility; Genetic Predisposition to Disease; Humans; Infertility, Male; Live Birth; Male; Mutation; Phenotype; Phosphoinositide Phospholipase C; Pregnancy; Pregnancy Rate; Prospective Studies; Retreatment; Sperm Injections, Intracytoplasmic; Sperm-Ovum Interactions; Spermatozoa; Superovulation; Treatment Failure
PubMed: 32712020
DOI: 10.1016/j.fertnstert.2020.04.044
Journal of Visualized Experiments: JoVE, Mar 2018
The human intestinal microbiome plays a central role in protecting cells from injury, in processing energy and nutrients, and in promoting immunity. Deviations from what is considered a healthy microbiota composition (dysbiosis) may impair vital functions, leading to pathologic conditions. Recent and ongoing research efforts have been directed toward the characterization of associations between microbial composition and human health and disease. Advances in high-throughput sequencing technologies enable characterization of the gut microbial composition. These methods include 16S rRNA-amplicon sequencing and shotgun sequencing. 16S rRNA-amplicon sequencing is used to profile taxonomical composition, while shotgun sequencing provides additional information about gene predictions and functional annotation. An advantage of a targeted sequencing method for the 16S rRNA gene variable region is its substantially lower cost compared to shotgun sequencing. Sequence differences in the 16S rRNA gene are used as a microbial fingerprint to identify and quantify different taxa within an individual sample. Major international efforts have established standards for 16S rRNA-amplicon sequencing. However, several studies report a common source of variation caused by batch effects. To minimize this effect, uniform protocols for sample collection, processing, and sequencing must be implemented. This protocol proposes the integration of broadly used protocols, starting from fecal sample collection through data analysis. It includes a column-free, direct-PCR approach that enables simultaneous handling and DNA extraction of large numbers of fecal samples, along with PCR amplification of the V4 region. In addition, the protocol describes the analysis pipeline and provides a script using the latest version of QIIME (QIIME 2 version 2017.7.0 and DADA2).
This step-by-step protocol aims to guide those interested in adopting 16S rRNA-amplicon sequencing in a robust, reproducible, easy-to-use, and detailed way.
Topics: Feces; High-Throughput Nucleotide Sequencing; Humans; RNA, Ribosomal, 16S
PubMed: 29608151
DOI: 10.3791/56845
Scientific Reports, Nov 2019
Medetomidine has become a popular choice for anesthetizing rats during long-lasting sessions of blood-oxygen-level dependent (BOLD) functional magnetic resonance imaging (fMRI). Despite this, it has not yet been thoroughly established how commonly reported fMRI readouts evolve over several hours of medetomidine anesthesia and how they are affected by the precise timing, dose, and route of administration. We used four different protocols of medetomidine administration to anesthetize rats for up to six hours and repeatedly evaluated somatosensory stimulus-evoked BOLD responses and resting-state functional connectivity. We found that the temporal evolution of fMRI readouts strongly depended on the method of administration. Intravenous administration of a medetomidine bolus (0.05 mg/kg), combined with a subsequent continuous infusion (0.1 mg/kg/h), led to temporally stable measures of stimulus-evoked activity and functional connectivity throughout the anesthesia. Deviating from the above protocol (by omitting the bolus, lowering the medetomidine dose, or using the subcutaneous route) compromised the stability of these measures in the initial two-hour period. We conclude that both an appropriate protocol of medetomidine administration and a suitable timing of fMRI experiments are crucial for obtaining consistent results. These factors should be considered in the design and interpretation of future rat fMRI studies.
Topics: Animals; Brain; Brain Mapping; Evoked Potentials, Somatosensory; Female; Hypnotics and Sedatives; Magnetic Resonance Imaging; Male; Medetomidine; Rats; Rats, Wistar; Rest
PubMed: 31723186
DOI: 10.1038/s41598-019-53144-y