PLoS One, 2024
INTRODUCTION
Burns are tissue traumas caused by energy transfer and occur with a variable inflammatory response. The consequences of burns represent a public health problem worldwide. Inhalation injury (II) is a severity factor when associated with burns, leading to a worse prognosis. Its treatment is complex and often involves invasive mechanical ventilation (IMV). The primary purpose of this study will be to assess the evidence regarding the frequency and mortality of II in burn patients. The secondary purposes will be to assess the evidence regarding the association between IIs and respiratory complications (pneumonia, airway obstruction, acute respiratory failure, acute respiratory distress syndrome), the need for IMV, and complications in other organ systems, and to highlight factors associated with IIs in burn patients as well as prognostic factors associated with acute respiratory failure, the need for IMV, and mortality of II in burn patients.
METHODS
This is a systematic literature review and meta-analysis, conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). The PubMed/MEDLINE, Embase, LILACS/VHL, Scopus, Web of Science, and CINAHL databases will be consulted without restrictions on language or publication date. Studies presenting incomplete data and those including patients under 19 years of age will be excluded. Data will be synthesized through continuous (mean and standard deviation) and dichotomous (relative risk) variables and the total number of participants. The means, sample sizes, standard deviations, and relative risks will be entered into the Review Manager web analysis software (The Cochrane Collaboration).
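The dichotomous synthesis described above hinges on the relative risk. As a minimal sketch with hypothetical event counts (not data from this review, which would be pooled in Review Manager), a single-study relative risk and its 95% confidence interval on the log scale can be computed as:

```python
import math

def relative_risk(events_t, total_t, events_c, total_c):
    """Relative risk with a 95% CI computed on the log scale.

    Counts are hypothetical; a real synthesis would pool studies in
    Review Manager or an equivalent meta-analysis package."""
    rr = (events_t / total_t) / (events_c / total_c)
    # Standard error of log(RR) for a single 2x2 table
    se = math.sqrt(1 / events_t - 1 / total_t + 1 / events_c - 1 / total_c)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi
```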
DISCUSSION
Despite the extensive experience in managing IIs in burn patients, they still represent an important cause of morbidity and mortality. Diagnosis and accurate measurement of the damage are complex, and therapies are essentially based on supportive measures. Considering this challenge, their impact, and their potential severity, IIs represent a promising area for research, and further studies are needed to better understand them and improve their outcomes. The protocol of this review is registered on the International Prospective Register of Systematic Reviews (PROSPERO) platform of the Centre for Reviews and Dissemination of the University of York, United Kingdom (https://www.crd.york.ac.uk/prospero), under number CRD42022343944.
Topics: Humans; Systematic Reviews as Topic; Meta-Analysis as Topic; Burns; Respiration, Artificial; Burns, Inhalation; Prognosis; Smoke Inhalation Injury
PubMed: 38652713
DOI: 10.1371/journal.pone.0295318
Trials, Oct 2017 (Review)
BACKGROUND
When a randomised trial is subject to deviations from randomised treatment, analysis according to intention-to-treat does not estimate two important quantities: relative treatment efficacy and effectiveness in a setting different from that in the trial. Even in trials of a predominantly pragmatic nature, there may be numerous reasons to consider the extent, and impact on analysis, of such deviations from protocol. Simple methods such as per-protocol or as-treated analyses, which exclude or censor patients on the basis of their adherence, usually introduce selection and confounding biases. However, there exist appropriate causal estimation methods which seek to overcome these inherent biases, but these methods remain relatively unfamiliar and are rarely implemented in trials.
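One of the causal estimators alluded to above is the complier average causal effect (CACE), obtained by instrumental-variable reasoning: the intention-to-treat contrast is rescaled by the between-arm difference in treatment receipt. A minimal sketch, with illustrative summary statistics not tied to any trial discussed here:

```python
def cace_wald(mean_y_t, mean_y_c, receipt_t, receipt_c):
    """Wald/IV estimate of the complier average causal effect.

    mean_y_*: mean outcome by randomised arm (treatment / control);
    receipt_*: proportion actually receiving treatment in each arm.
    Assumes randomisation is a valid instrument (exclusion
    restriction, no defiers)."""
    itt_effect = mean_y_t - mean_y_c      # intention-to-treat contrast
    uptake_diff = receipt_t - receipt_c   # first-stage contrast
    return itt_effect / uptake_diff
```

With one-sided noncompliance (no control-arm access to treatment), this reduces to the ITT effect divided by adherence in the treatment arm.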
METHODS
This paper demonstrates, through illustrative case studies, when it may be of interest to look beyond intention-to-treat analysis for answers to alternative causal research questions. We seek to guide trialists on how to handle treatment changes in the design, conduct, and analysis planning of a trial; these changes may be planned or unplanned, and may or may not be permitted in the protocol. We highlight issues that must be considered at the trial planning stage relating to: the definition of nonadherence and the causal research question of interest, trial design, data collection, monitoring, statistical analysis and sample size.
RESULTS AND CONCLUSIONS
During trial planning, trialists should define their causal research questions of interest, anticipate the likely extent of treatment changes and use these to inform trial design, including the extent of data collection and data monitoring. A series of concise recommendations is presented to guide trialists when considering undertaking causal analyses.
Topics: Bias; Data Interpretation, Statistical; Drug Substitution; Endpoint Determination; Guideline Adherence; Humans; Infant; Intention to Treat Analysis; Patient Compliance; Practice Guidelines as Topic; Randomized Controlled Trials as Topic; Research Design; Sample Size; Treatment Outcome
PubMed: 29070048
DOI: 10.1186/s13063-017-2240-9
Journal of Applied Clinical Medical..., Nov 2022
PURPOSE
The 4D computed tomography (CT) simulation is an essential procedure for tumors exhibiting breathing-induced motion. However, to date there are no established guidelines to assess the characteristics of existing systems and to describe meaningful performance. We propose a commissioning quality assurance (QA) protocol consisting of measurements and acquisitions that assess the mechanical and computational operation for 4D CT with both phase and amplitude-based reconstructions, for regular and irregular respiratory patterns.
METHODS
The 4D CT scans of a QUASAR motion phantom were acquired for both regular and irregular breathing patterns. The hardware consisted of the Canon Aquilion Exceed LB CT scanner used in conjunction with the Anzai laser motion monitoring system. The nominal machine performance and reconstruction were demonstrated with measurements using regular breathing patterns. For irregular breathing patterns, the performance was quantified through the analysis of the target motion in the superior-inferior direction and the size of the internal target volume (ITV). Acquisitions were performed using multiple pitches, and the reconstructions were performed using both phase and amplitude-based binning.
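Phase-based binning of the respiratory trace can be illustrated with a toy sketch (the peak finding here is a naive local-maximum test, purely for illustration; clinical systems such as the Anzai use far more robust signal processing):

```python
import numpy as np

def phase_bins(signal, n_bins=10):
    """Assign each sample of a periodic respiratory trace to a phase bin.

    Peaks are taken as 0% phase and phase advances linearly between
    consecutive peaks. Peak detection is a naive local-maximum test
    above the signal mean, for illustration only."""
    s = np.asarray(signal, dtype=float)
    peaks = [i for i in range(1, len(s) - 1)
             if s[i] > s[i - 1] and s[i] >= s[i + 1] and s[i] > s.mean()]
    phase = np.zeros(len(s))
    for a, b in zip(peaks[:-1], peaks[1:]):
        # Linear phase ramp over one breathing cycle
        phase[a:b] = np.linspace(0.0, 1.0, b - a, endpoint=False)
    return (phase * n_bins).astype(int) % n_bins
```

Amplitude-based binning would instead threshold the signal value itself into amplitude ranges, which is why the two schemes can differ for irregular traces.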
RESULTS
The target was accurately captured during regular breathing. For irregular breathing, the measured ITV exceeded the nominal ITV parameters in all scenarios, but all deviations were less than the reconstructed slice thickness. A mismatch between the nominal pitch and the actual breathing rate did not markedly affect the size of the ITV. Phase and normalized amplitude binning performed similarly.
CONCLUSIONS
We demonstrated a framework for measuring and quantifying the initial performance of 4D CT simulation scans that can also be applied during periodic QA. The regular-breathing measurements provided confidence that the hardware and software of the combined systems perform adequately. The irregular-breathing data suggest that the system may be expected to overestimate the target motion and geometry, but the deviation is expected to be within the slice thickness.
Topics: Humans; Four-Dimensional Computed Tomography; Lung Neoplasms; Phantoms, Imaging; Respiration; Motion; Radiotherapy Planning, Computer-Assisted
PubMed: 36057944
DOI: 10.1002/acm2.13764
Sensors (Basel, Switzerland), Jul 2023
Clustering is considered to be one of the most effective ways of preserving energy and maximizing lifetime in wireless sensor networks (WSNs), because the sensor nodes are equipped with limited energy. Thus, energy efficiency and energy balance have always been the main challenges faced by clustering approaches. To overcome these challenges, a distributed particle swarm optimization-based fuzzy clustering protocol called DPFCP is proposed in this paper to reduce and balance energy consumption and thereby extend the network lifetime as long as possible. To this end, in DPFCP cluster heads (CHs) are nominated by a Mamdani fuzzy logic system using residual energy, node degree, distance to the base station (BS), and distance to the centroid as descriptors. Moreover, a particle swarm optimization (PSO) algorithm is applied to optimize the fuzzy rules instead of conventional manual design, ensuring that the best nodes are selected as CHs for energy reduction. Once the CHs are selected, distance to the CH, residual energy, and deviation in the CH's number of members are considered when non-CH nodes join clusters, in order to form energy-balanced clusters. Finally, an on-demand mechanism, instead of periodic re-clustering, is used to maintain clusters locally and globally based on local information, further reducing computation and message overheads and thereby saving energy. The performance of DPFCP was verified by extensive simulation experiments against existing relevant protocols. The results show that, on average, DPFCP improves energy consumption by 38.20%, 15.85%, 21.15%, and 13.06% compared to LEACH, LEACH-SF, FLS-PSO, and KM-PSO, respectively, and increases network lifetime by 46.19%, 20.69%, 20.44%, and 10.99% over the same protocols. Moreover, the standard deviation of the residual network energy was reduced by 61.88%, 55.36%, 54.02%, and 19.39% compared to LEACH, LEACH-SF, FLS-PSO, and KM-PSO.
It is thus clear that the proposed DPFCP protocol efficiently balances energy consumption to improve the overall network performance and maximize the network lifetime.
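As a very rough illustration of how the four CH-selection descriptors combine (this is a crisp weighted score, not the paper's actual Mamdani rule base or its PSO-tuned rules; the weights are made up):

```python
def ch_score(residual_energy, node_degree, dist_bs, dist_centroid,
             w=(0.4, 0.2, 0.2, 0.2)):
    """Crisp stand-in for a fuzzy cluster-head chance: higher energy
    and degree, shorter distances -> higher score.

    All inputs are assumed normalized to [0, 1]; the weights `w` are
    illustrative placeholders."""
    return (w[0] * residual_energy + w[1] * node_degree
            + w[2] * (1.0 - dist_bs) + w[3] * (1.0 - dist_centroid))
```

In the protocol itself, PSO would tune the fuzzy rules that play the role these fixed weights play here.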
PubMed: 37571483
DOI: 10.3390/s23156699
Contemporary Clinical Trials, May 2021 (Randomized Controlled Trial)
INTRODUCTION
With the growing use of online study management systems and rapid availability of data, timely data review and quality assessments are necessary to ensure proper clinical trial implementation. In this report we describe central monitoring used to ensure protocol compliance and accurate data reporting, implemented during a large phase 3 clinical trial.
MATERIAL AND METHODS
The Tuberculosis Trials Consortium (TBTC) Study 31/AIDS Clinical Trials Group (ACTG) study A5349 (S31) is an international, multi-site, randomized, open-label, controlled, non-inferiority phase 3 clinical trial comparing two 4-month regimens to a standard 6-month regimen for the treatment of drug-susceptible tuberculosis (TB) among adolescents and adults, with a sample size of 2500 participants.
RESULTS
Central monitoring utilized primary study data in a five-tiered approach, including (1) real-time data checks and topic-specific intervention reports, (2) missing forms reports, (3) quality assurance metrics, (4) critical data reports, and (5) protocol deviation identification, aimed at detecting and resolving quality challenges. Over the course of the study, 240 data checks and reports were programmed across the five tiers.
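Conceptually, a tier-2-style missing-forms check reduces to a set difference between expected and received forms. A minimal sketch, using a hypothetical (participant, visit, form) data model rather than the study's actual database schema:

```python
def missing_forms(expected, received):
    """Return (participant, visit, form) tuples that are expected
    but absent from the received set.

    The tuple data model is hypothetical, for illustration only."""
    return sorted(set(expected) - set(received))
```

Real-time checks and critical-data reports follow the same pattern: a programmed query over primary study data that flags records needing site follow-up.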
DISCUSSION
This use of primary study data to identify issues rapidly allowed the study sponsor to focus quality assurance and data cleaning activities on prioritized data, related to protocol compliance and accurate reporting of study results. Our approach enabled us to become more efficient and effective as we informed sites about deviations, resolved missing or inconsistent data, provided targeted guidance, and gained a deeper understanding of challenges experienced at clinical trial sites.
TRIAL REGISTRATION
This trial was registered with ClinicalTrials.gov (Identifier: NCT02410772) on April 8, 2015.
Topics: Adolescent; Adult; Antitubercular Agents; Clinical Protocols; Humans; Treatment Outcome; Tuberculosis, Pulmonary
PubMed: 33713841
DOI: 10.1016/j.cct.2021.106355
BMC Infectious Diseases, Apr 2018
BACKGROUND
A method for rapid detection of dengue virus using the reverse-transcription recombinase polymerase amplification (RT-RPA) was recently developed, evaluated and made ready for deployment. However, reliance solely on the evaluation performed by experienced researchers in a well-structured and well-equipped reference laboratory may overlook the potential intrinsic problems that may arise during deployment of the assay into new application sites, especially for users unfamiliar with the test. Appropriate assessment of this newly developed assay by users who are unfamiliar with the assay is, therefore, vital.
METHODS
An operational utility test to elucidate the efficiency and effectiveness of the dengue RT-RPA assay was conducted among a group of researchers new to the assay. Nineteen volunteer researchers with different research experience were recruited. The participants performed the RT-RPA assay and interpreted the test results according to the protocol provided. Deviation from the protocol was identified and tabulated by trained facilitators. Post-test questionnaires were conducted to determine the user satisfaction and acceptability of the dengue RT-RPA assay.
RESULTS
All the participants completed the test and successfully interpreted the results according to the provided instructions, regardless of their research experience. Of the 19 participants, three (15.8%) performed the assay with no deviations and 16 (84.2%) with only one to five deviations. The number of deviations from the protocol, however, was not correlated with the users' laboratory experience. The accuracy of the results was also not affected by laboratory experience. The concordance of the assay results with the expected results was 89.3%. User satisfaction with the RT-RPA protocol and with the interpretation of results was 90% and 100%, respectively.
CONCLUSIONS
The dengue RT-RPA assay can be successfully performed by simply following the provided written instructions. Deviations from the written protocols did not adversely affect the outcome of the assay. These suggest that the RT-RPA assay is indeed a simple, robust and efficient laboratory method for detection of dengue virus. Furthermore, high new user acceptance of the RT-RPA assay suggests that this assay could be successfully deployed into new laboratories where RT-RPA was not previously performed.
Topics: Dengue; Dengue Virus; Humans; Nucleic Acid Amplification Techniques; RNA, Viral; Recombinases; Reverse Transcription
PubMed: 29642856
DOI: 10.1186/s12879-018-3065-1
European Journal of Radiology Open, 2022
AIM
The exposure index (EI) is used in routine quality control (QC) tests performed on the radiographic equipment installed in our hospitals. This study aimed at investigating the factors affecting the calculation of the EI in QC and clinical images, and the implementation of the target EI (EIT) and deviation index (DI) in clinical practice.
METHODS
The EI is defined as 100 times the incident air kerma (IAK) in μGy on the image receptor, using the RQA-5 X-ray beam quality. Conformance to this relationship was investigated in QC images and in clinical images acquired using anthropomorphic phantom body parts and different examination protocols, tube potential settings, and radiation field sizes. Furthermore, a survey of EIT and DI data from clinical images was performed.
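The deviation index relating an achieved EI to the target EIT follows the IEC 62494-1 definition, DI = 10 · log10(EI / EIT); a minimal sketch:

```python
import math

def deviation_index(ei, ei_target):
    """Deviation index per IEC 62494-1: DI = 10 * log10(EI / EI_T).

    DI = 0 means the exposure hit the target; roughly +3 corresponds
    to double, and -3 to half, the target detector exposure."""
    return 10.0 * math.log10(ei / ei_target)
```

For an AEC adjusted to an IAK of 2.5 μGy (EIT = 250, per the definition above), an image with EI = 500 would therefore report a DI of about +3.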
RESULTS
Though automatic exposure control (AEC) systems had been adjusted for an IAK of 2.5 μGy, for most anthropomorphic phantom images the EIs were far from 250, depending on the manufacturer, the anatomy imaged, and the examination protocol. Regarding the survey results, DI calculation was feasible in only 38% of the systems, since for the rest EIT values had not been set. However, the rationale on which the EIT values had been selected is unclear; some systems use only one EIT value while others use many different ones.
CONCLUSION
Before using the EI for quality control of clinical images, all image receptors and AEC systems should be properly calibrated. Then, the methodology for selecting appropriate EIT values should be refined, since the EI calculation may vary depending on the manufacturer, the anatomy imaged, and the examination protocol.
PubMed: 36386764
DOI: 10.1016/j.ejro.2022.100454
Skin Health and Disease, Feb 2023
BACKGROUND
Precision is crucial in determining the appropriate procedure for implementing further trials. We conducted a study to explore the reliability of a novel measuring system for human skin color.
METHODS
The novel skin color measuring system was used to capture the skin color of four volunteers (two males and two females), from the same location on each subject and by the same operator. The measurement was repeated for different poses and instrument factors (camera and shooting protocol) in the red, green, and blue (RGB) system. The average color depth in each image was calculated and converted to a 0-255 scale. The spread of the measures and Bland-Altman plots were displayed to determine the random error of each variance source, with intraclass correlation coefficients applied to reflect reliability.
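The per-channel average color depth described above can be sketched as follows (the array layout and scaling convention are assumptions for illustration, not the device's actual pipeline):

```python
import numpy as np

def mean_rgb_depth(image):
    """Average color depth per RGB channel on a 0-255 scale.

    `image` is assumed to be an H x W x 3 array; float images in
    [0, 1] are rescaled to [0, 255] first."""
    img = np.asarray(image, dtype=float)
    if img.max() <= 1.0:
        img = img * 255.0
    return img.reshape(-1, 3).mean(axis=0)
```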
RESULT
The RGB color depth in the experiment ranged from 190, 152, and 122 to 208, 170, and 142. The 95% confidence intervals of the differences from the means in RGB colors for the different protocols were ±2.8, ±2.6, and ±2.1, respectively. The largest variation in the replicate trials was observed when subjects were in a supine position (standard deviation: 2). The intraclass correlation coefficients were greater than 90%, suggesting that the developed system is highly precise.
CONCLUSION
This study demonstrated that the developed device could stably and reliably detect human skin color across different common sources of variation, and thus could be applied clinically to explore relationships between health/disease and skin color changes.
PubMed: 36751325
DOI: 10.1002/ski2.182
Journal of Clinical and Translational..., 2024
OBJECTIVE
Research study complexity refers to variables that contribute to the difficulty of a clinical trial or study. This includes variables such as intervention type, design, sample, and data management. High complexity often requires more resources, advanced planning, and specialized expertise to execute studies effectively. However, there are limited instruments that scale study complexity across research designs. The purpose of this study was to develop and establish initial psychometric properties of an instrument that scales research study complexity.
METHODS
Technical and grammatical principles were followed to produce clear, concise items using language familiar to researchers. Items underwent face, content, and cognitive validity testing through quantitative surveys and qualitative interviews. Content validity indices were calculated, and iterative scale revision was performed. The instrument underwent pilot testing using two exemplar protocols, asking participants (n = 31) to score 25 items (e.g., study arms, data collection procedures).
RESULTS
The instrument (Research Complexity Index) demonstrated face, content, and cognitive validity. Item means and standard deviations ranged from 1.0 to 2.75 (Protocol 1) and 1.31 to 2.86 (Protocol 2). Corrected item-total correlations ranged from .030 to .618. Eight elements appeared to be undercorrelated with the other elements. Cronbach's alpha was 0.586 (Protocol 1) and 0.764 (Protocol 2). Inter-rater reliability was fair (kappa = 0.338).
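The internal-consistency figure reported above, Cronbach's alpha, can be computed directly from item-score columns; a self-contained sketch with illustrative data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha from item-score columns (one list per item).

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)),
    using sample (n-1) variances. Data here are illustrative only."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))
```

Perfectly redundant items give alpha = 1, while weakly correlated items (such as the eight undercorrelated elements noted above) pull alpha down.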
CONCLUSION
Initial pilot testing demonstrates face, content, and cognitive validity, moderate internal consistency reliability and fair inter-rater reliability. Further refinement of the instrument may increase reliability thus providing a comprehensive method to assess study complexity and related resource quantification (e.g., staffing requirements).
PubMed: 38836248
DOI: 10.1017/cts.2024.534
Journal of Bone and Mineral Research, Nov 2023
Opportunistic screening is a promising new technique to identify individuals at high risk for osteoporotic fracture using computed tomography (CT) scans originally acquired for a clinical purpose unrelated to osteoporosis. In these CT scans, the calibration phantom traditionally required to convert measured CT values to bone mineral density (BMD) is missing. As an alternative, phantomless calibration has been developed. This study aimed to review the principles of four existing phantomless calibration methods and to compare their performance against the gold standard of simultaneous calibration (ΔBMD). All methods were applied to a dataset of 350 females scanned with a highly standardized CT protocol (DS1) and to a second dataset of 114 patients (38 female) from clinical routine covering a large range of CT acquisition and reconstruction parameters (DS2). Three of the phantomless calibration methods must be precalibrated with a reference dataset containing a calibration phantom. Sixty scans from DS1 and 57 from DS2 were randomly selected for this precalibration. For each phantomless calibration method, the best combination of internal reference materials (IMs) was selected first. These were either air and blood, or subcutaneous adipose tissue, blood, and cortical bone. In addition, a fifth phantomless calibration method, based on average calibration parameters derived from the reference dataset, was applied. For DS1, ΔBMD results (mean ± standard deviation) for the phantomless calibration methods requiring precalibration ranged from 0.1 ± 2.7 mg/cm³ to 2.4 ± 3.5 mg/cm³, with similar means but significantly higher standard deviations for DS2. Performance of the phantomless calibration method that does not require precalibration was worse (ΔBMD DS1: 12.6 ± 13.2 mg/cm³; DS2: 0.5 ± 8.8 mg/cm³). In conclusion, phantomless BMD calibration performs well if precalibrated with a reference dataset. © 2023 The Authors.
Journal of Bone and Mineral Research published by Wiley Periodicals LLC on behalf of American Society for Bone and Mineral Research (ASBMR).
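The core of any internal-calibration scheme is a linear map from measured CT values to BMD, anchored at reference materials of known equivalent density. A minimal two-point sketch with placeholder values (the methods reviewed above differ in how the references are segmented and precalibrated):

```python
def ct_to_bmd(ct_value, ref_low, ref_high):
    """Linear calibration from a measured CT value to BMD (mg/cm^3).

    ref_low / ref_high: (measured_ct_value, known_equivalent_density)
    pairs for two internal reference materials, e.g. air and blood.
    All numbers used here are placeholders, not validated
    calibration data."""
    (c1, d1), (c2, d2) = ref_low, ref_high
    slope = (d2 - d1) / (c2 - c1)
    return d1 + slope * (ct_value - c1)
```

Phantom-based simultaneous calibration fits the same linear relationship, but against phantom inserts scanned with the patient rather than internal tissues.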
Topics: Humans; Female; Bone Density; Calibration; Tomography, X-Ray Computed; Osteoporosis; Minerals; Absorptiometry, Photon
PubMed: 37732678
DOI: 10.1002/jbmr.4917