Pharmaceutical Statistics, Jan 2022 (Review)
Exposure-adjusted event rate is a quantity often used in clinical trials to describe the average event count per unit of person-time. The event count may represent the number of patients experiencing a first (incident) event episode, or the total number of event episodes, including recurrent events. For inference about the difference in exposure-adjusted rates between interventions, many interval-estimation methods rely on the assumption of a Poisson distribution for the event counts. These intervals may suffer from substantial undercoverage, both asymptotically, due to extra-Poisson variation, and in settings with rare events, even when the Poisson assumption is satisfied. We review asymptotically robust methods of interval estimation for the rate difference that do not depend on distributional assumptions for the event counts, and propose a modification of one of these methods. The new interval estimator has asymptotically nominal coverage for the rate difference under an arbitrary distribution of event counts, and good finite-sample properties, avoiding substantial undercoverage with small samples, rare events, or over-dispersed data. The proposed method can handle covariate adjustment and can be implemented with commonly available software. The method is illustrated using real data on adverse events in a clinical trial.
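Concretely, the exposure-adjusted rate in each arm is the total event count divided by the total person-time, and a distribution-free Wald interval for the rate difference can be sketched as below. This is a generic construction for illustration, not the authors' proposed modification (which the abstract does not specify); the ratio-estimator variance and the normal approximation are assumptions.

```python
import numpy as np
from scipy.stats import norm

def rate_and_var(events, time):
    """Exposure-adjusted event rate (total events / total person-time) and an
    empirical variance estimate that imposes no Poisson assumption."""
    events, time = np.asarray(events, float), np.asarray(time, float)
    n, total_time = len(events), time.sum()
    rate = events.sum() / total_time            # events per unit person-time
    resid = events - rate * time                # per-patient ratio-estimator residuals
    var = n / (n - 1) * np.sum(resid ** 2) / total_time ** 2
    return rate, var

def rate_difference_ci(events_a, time_a, events_b, time_b, alpha=0.05):
    """Wald-type confidence interval for the difference in rates."""
    rate_a, var_a = rate_and_var(events_a, time_a)
    rate_b, var_b = rate_and_var(events_b, time_b)
    diff = rate_a - rate_b
    half_width = norm.ppf(1 - alpha / 2) * np.sqrt(var_a + var_b)
    return diff, (diff - half_width, diff + half_width)

# Per-patient event counts and years of exposure in two hypothetical arms.
diff, ci = rate_difference_ci([0, 2, 1, 0, 3], [1.0, 2.0, 1.5, 0.5, 2.0],
                              [0, 0, 1, 0, 1], [1.5, 2.0, 1.0, 1.0, 2.5])
```

With small samples or rare events this naive interval exhibits exactly the undercoverage the abstract warns about, which is what the reviewed robust methods aim to avoid.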
Topics: Causality; Confidence Intervals; Humans; Poisson Distribution; Randomized Controlled Trials as Topic; Software
PubMed: 34342122
DOI: 10.1002/pst.2155
Journal of Speech, Language, and Hearing Research, Oct 2022
PURPOSE
Almost 90% of people with Parkinson's disease (PD) develop voice and speech disorders during the course of the disease. Ventilatory dysfunction is one of the main causes. We aimed to evaluate relationships between respiratory impairments and speech/voice changes in PD.
METHOD
On Day 15 after admission, in consecutive clinically stable PD patients in a neurorehabilitation unit, we collected the following clinical data: comorbidities, PD severity, motor function and balance, respiratory function at rest (including muscle strength and cough ability), during exercise (exercise-induced desaturation), and at night, voice function (Voice Handicap Index [VHI] and acoustic analysis [Praat]), speech disorders (Robertson Dysarthria Profile [RDP]), and postural abnormalities. Based on an arbitrary RDP cutoff, two groups with different degrees of dysarthria (moderate-severe versus no-mild) were identified and compared.
RESULTS
Of the 55 patients analyzed (median Unified Parkinson's Disease Rating Scale Part II score 9, Part III score 17), we found significant impairments in inspiratory and expiratory muscle pressure (both > 90%), exercise tolerance on the 6-min walking distance (96%), nocturnal (12.7%) and exercise-induced (21.8%) desaturation, VHI (34%), and Praat Shimmer% (89%). Patients with moderate-severe dysarthria (16% of the total sample) had more comorbidities/disabilities and worse respiratory patterns and postural abnormalities (camptocormia) than those with no-mild dysarthria. Moreover, the risks of nocturnal desaturation, reduced peak expiratory flow, and reduced cough ability were about 11, 13, and 8 times higher, respectively, in the moderate-severe group.
CONCLUSIONS
Dysarthria and respiratory dysfunction, particularly nocturnal desaturation and reduced cough ability, are closely associated in PD patients. In addition, postural abnormalities could underlie both the respiratory and the voice impairments.
SUPPLEMENTAL MATERIAL
https://doi.org/10.23641/asha.21210944.
Topics: Cough; Dysarthria; Humans; Parkinson Disease; Speech Disorders; Voice Disorders
PubMed: 36194769
DOI: 10.1044/2022_JSLHR-21-00539
Clinical Toxicology (Philadelphia, Pa.), Jul 2023
THE GOIÂNIA INCIDENT
In September 1987, two men in Goiânia, Brazil, discovered an abandoned international standard capsule containing less than 100 g of cesium-137 chloride. The material was unguarded, and the warning systems were inadequate and inscrutable. The men took the capsule and sold it for scrap, and within days the city would be contaminated with highly radioactive material. Within weeks, 112,000 individuals would be screened for radioactive contamination, 249 would be exposed to radioactive materials, 46 would receive medical treatment for radioactive contamination, and four would die from acute radiation sickness. The citywide radioactive contamination occurred, in part, due to arbitrary and unfamiliar written warning systems. The individuals who discovered the cesium-137 capsule were illiterate and unfamiliar with the radiation trefoil logo, which was first used in 1946 in California, United States of America. As a result, written language and visual symbols were useless warnings against the dangerous contents of the capsule.
MANAGEMENT OF CESIUM-137 EXPOSURE IN 2023
Cesium-137 enters the body through ingestion or inhalation. This isotope emits beta and gamma radiation, both forms of ionizing radiation that damage living tissue. The radiation dose lethal to 50% of an exposed population within 60 days (LD50/60) is approximately 3.5 to 4 Gray (Gy) without medical intervention. With medical support, which typically includes antibiotics, blood transfusions, granulocyte-macrophage colony-stimulating factor, and Prussian blue, this dose increases to around 6-7 Gy. Prussian blue binds cesium, thereby facilitating its elimination from the body.
LESSONS LEARNED REGARDING RADIOACTIVE WASTE DISPOSAL AND THE NEXT 10,000 YEARS
The radiological disaster in Goiânia was due in large part to the failures of various agencies to warn of danger and minimize access to radioactive material. Barriers to risk communication included a lack of a universal semiotic language regarding radioactive hazards, which was compounded by the illiteracy of the scrappers and their inability to recognize the radioactivity warning trefoil. There is no society in which every member understands written language or recognizes every symbol. Given that the teletherapy unit was abandoned in an urban environment, there were no administrative or engineering controls in place to prevent human beings from becoming exposed to radioactive material.
CONCLUSIONS
As little as 100 g of highly radioactive material, such as cesium-137, may cause massive environmental contamination, fatalities and permanent disability due to acute radiation sickness, and havoc and disruption to society on a scale that is challenging for public health officials to manage. Thousands of tons of radioactive waste from the manufacture of nuclear weapons and from power plants will have to be stored for at least 100,000 years to prevent danger to human life and society. Public health officials and governments must build systems to keep humans safe from, and physically isolated from, these radioactive materials for as long as possible.
Topics: Male; Humans; Cesium Radioisotopes; Ferrocyanides; Radiation Injuries
PubMed: 37535035
DOI: 10.1080/15563650.2023.2235889
PLoS Computational Biology, Nov 2021
We present artificial neural networks as a feasible replacement for a mechanistic model of mosquito abundance. We develop a feed-forward neural network, a long short-term memory recurrent neural network, and a gated recurrent unit network. We evaluate the networks in their ability to replicate the spatiotemporal features of mosquito populations predicted by the mechanistic model, and discuss how augmenting the training data with time series that emphasize specific dynamical behaviors affects model performance. We conclude with an outlook on how such equation-free models may facilitate vector control or the estimation of disease risk at arbitrary spatial scales.
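For concreteness, here is a minimal PyTorch sketch of the LSTM surrogate idea (the layer sizes, the single abundance output per time step, and the dummy training data are illustrative assumptions, not the authors' configuration):

```python
import torch
import torch.nn as nn

class AbundanceLSTM(nn.Module):
    """LSTM surrogate mapping a driver time series (e.g. weather features)
    to mosquito abundance, standing in for the mechanistic model."""
    def __init__(self, n_features=4, hidden=64, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)       # abundance at each time step

    def forward(self, x):                      # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out).squeeze(-1)      # (batch, time)

# Train on trajectories produced by the mechanistic model (dummy data here).
model = AbundanceLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 365, 4)                     # 8 sites, one year of daily drivers
y = torch.rand(8, 365)                         # mechanistic-model abundance output
for _ in range(100):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
```

The feed-forward and GRU variants differ only in the recurrent module; augmenting `x` and `y` with trajectories that emphasize specific dynamical behaviors is what the abstract refers to as enriching the training data.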
Topics: Aedes; Animals; Computational Biology; Databases, Factual; Humans; Models, Biological; Mosquito Vectors; Neural Networks, Computer; Population Dynamics; Spatio-Temporal Analysis; Stochastic Processes; Systems Analysis; United States; Vector Borne Diseases; Weather
PubMed: 34797822
DOI: 10.1371/journal.pcbi.1009467
Lab on a Chip, Jun 2021
Unsteady and pulsatile flows are receiving increasing attention due to their potential to enhance various microscale processes; they are also highly relevant for microfluidic studies under physiological flow conditions. However, generating a precise time-dependent flow field with commercial, pneumatically operated pressure controllers remains challenging and can lead to significant deviations from the desired waveform. In this study, we present a method to correct such deviations and thus optimize pulsatile flows in microfluidic experiments using two commercial pressure pumps. To this end, we first analyze the linear response of the systems to a sinusoidal pressure input, which allows us to predict the time-dependent pressure output for arbitrary pulsatile input signals. Second, we explain how to derive an adapted input signal that significantly reduces deviations between the desired and actual output pressure signals for various waveforms. We demonstrate that this adapted pressure input improves the time-dependent flow of red blood cells in microchannels. The presented method does not rely on any hardware modifications and can be easily implemented in standard pressure-driven microfluidic setups to generate accurate pulsatile flows with arbitrary waveforms.
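Read in linear-systems terms, the procedure amounts to measuring the pump's complex frequency response from sinusoidal calibration runs and then pre-distorting the desired waveform with a regularized inverse filter. Here is a minimal sketch of that interpretation (the assumed first-order low-pass response and the regularization constant are illustrative, not the paper's measured calibration):

```python
import numpy as np

def adapted_input(desired, H, eps=1e-3):
    """Pre-distort one period of a desired output waveform.

    desired: samples of the target output pressure over one period.
    H: complex frequency response at the matching rfft bins, measured
       beforehand from sinusoidal calibration runs.
    eps: regularization so bins where |H| is tiny (frequencies the pump
         cannot reproduce) are not amplified without bound.
    """
    D = np.fft.rfft(desired)
    X = D * np.conj(H) / (np.abs(H) ** 2 + eps)   # regularized inverse filter
    return np.fft.irfft(X, n=len(desired))

# Example: a 2 Hz pulsatile target through an assumed 20 Hz low-pass system.
n, fs = 1000, 1000.0                       # one 1 s period sampled at 1 kHz
f = np.fft.rfftfreq(n, d=1 / fs)
H = 1.0 / (1.0 + 1j * f / 20.0)            # assumed first-order lag response
t = np.arange(n) / fs
desired = 1.0 + 0.5 * np.sin(2 * np.pi * 2 * t)
drive = adapted_input(desired, H)          # signal to send to the controller
```

Because the correction is purely computational, it needs no hardware modification, consistent with the claim above.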
Topics: Lab-On-A-Chip Devices; Microfluidics; Pulsatile Flow
PubMed: 34008605
DOI: 10.1039/d0lc01297a
Critical Reviews in Food Science and Nutrition, 2023 (Review)
Mechanical damage to fresh fruit occurs throughout the postharvest supply chain, leading to poor consumer acceptance and marketability. In this review, the mechanisms of damage development are discussed first. Mathematical modeling provides advanced ways to describe and predict the deformation of fruit with arbitrary geometry, which is important for understanding their mechanical responses to external forces. The effects of damage at the cellular and molecular levels are also discussed, as this provides insight into fruit physiological responses to damage. Next, direct measurement methods for damage, including manual evaluation, optical detection, magnetic resonance imaging, and X-ray computed tomography, are examined, as well as indirect methods based on physiochemical indexes. Methods to measure fruit susceptibility to mechanical damage, based on the bruise threshold and the amount of damage per unit of impact energy, are also reviewed. Further, commonly used external and interior packaging and their applications in reducing damage are summarized, along with a recent biomimetic approach for designing novel lightweight packaging inspired by the fruit pericarp. Finally, future research directions are provided.
HIGHLIGHTS
Mathematical modeling has been increasingly used to calculate damage to fruit.
Cellular and molecular responses to fruit damage are an under-explored area.
Measuring susceptibility to different mechanical forces has received attention.
Customized design of reusable and biodegradable packaging is a hot topic of research.
Topics: Fruit; Mechanical Phenomena
PubMed: 35647708
DOI: 10.1080/10408398.2022.2078783
Medical Physics, Aug 2023
BACKGROUND
Agatston scoring, the traditional method for measuring coronary artery calcium, is limited in its ability to accurately quantify low-density calcifications, among other limitations. This inaccuracy is likely due in part to Agatston scoring's requirement of an arbitrary intensity threshold.
PURPOSE
A calcium quantification technique that removes the need for arbitrary thresholding and is more accurate, sensitive, reproducible, and robust is needed. Improvements to calcium scoring will likely improve patient risk stratification and outcomes.
METHODS
The integrated Hounsfield technique was adapted for calcium scoring (integrated calcium mass). Integrated calcium mass requires no thresholding and includes all calcium information within an image. This study utilized phantom images acquired by G. van Praagh et al., with calcium hydroxyapatite (HA) densities in the range of 200-800 mg HA cm⁻³, to measure calcium according to integrated calcium mass and Agatston scoring. The calcium mass was known, which allowed for accuracy, reproducibility, sensitivity, and robustness comparisons between integrated calcium mass and Agatston scoring. Multiple CT vendors (Canon, GE, Philips, Siemens) were used during the image acquisition phase, providing a more robust comparison between the two calcium scoring techniques. Three calcification inserts of different diameters (1, 3, and 5 mm) and different HA densities (200, 400, and 800 mg HA cm⁻³) were placed within the phantom. The effect of motion was also analyzed using a dynamic phantom. All dynamic phantom calcium inserts were 5.0 ± 0.1 mm in diameter with a length of 10.0 ± 0.1 mm. The four densities were 196 ± 3, 380 ± 2, 408 ± 2, and 800 ± 2 mg HA cm⁻³.
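Computationally, the threshold-free idea is that every voxel in the region of interest contributes mass through a linear intensity-to-density calibration, rather than being discarded below Agatston's 130 HU cutoff. A minimal sketch of that idea follows (the calibration constants and the dummy ROI are illustrative assumptions, not the study's implementation):

```python
import numpy as np

def integrated_calcium_mass(roi_hu, voxel_volume_cm3, slope, intercept):
    """Threshold-free calcium mass over a CT region of interest.

    roi_hu: array of Hounsfield units covering the ROI.
    slope, intercept: linear calibration from HU to calcium density
        (mg HA/cm^3), fit beforehand on inserts of known density.
    Every voxel contributes, so low-density and partial-volume calcium
    that a fixed threshold would discard is still counted.
    """
    density = slope * np.asarray(roi_hu, float) + intercept  # mg/cm^3 per voxel
    return float(density.sum() * voxel_volume_cm3)           # total mg

# Example with made-up numbers: 0.5 mm isotropic voxels around one insert.
voxel_volume = 0.05 ** 3                                     # cm^3
roi = np.random.normal(300.0, 50.0, size=(20, 20, 4))        # dummy HU values
mass_mg = integrated_calcium_mass(roi, voxel_volume, slope=1.2, intercept=-30.0)
```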
RESULTS
Integrated calcium mass was more accurate than Agatston scoring for both stationary and motion-affected scans. On average, integrated calcium mass was more reproducible than Agatston scoring for two of the CT vendors. The percentages of false-negative and false-positive calcium scores were lower for integrated calcium mass (15.00% and 0.00%) than for Agatston scoring (28.33% and 6.67%). Integrated calcium mass was also more robust to changes in scan parameters than Agatston scoring.
CONCLUSIONS
The results of this study indicate that integrated calcium mass is more accurate, reproducible, and sensitive than Agatston scoring across a variety of CT vendors. The substantial reduction in false-negative scores with integrated calcium mass is likely to improve risk stratification and potential outcomes for patients undergoing calcium scoring.
Topics: Humans; Calcium; Coronary Vessels; Reproducibility of Results; Calcinosis; Motion
PubMed: 36852776
DOI: 10.1002/mp.16326
Advanced Materials (Deerfield Beach, Fla.), Nov 2023
Film-type shape-configurable speakers with tunable sound directivity are in high demand for wearable electronics. Flexible, thin thermoacoustic (TA) loudspeakers, which are free from bulky vibrating diaphragms, show promise in this regard. However, configuring thin TA loudspeakers into arbitrary shapes is challenging because of their low sound pressure level (SPL) under mechanical deformation and low conformability to other surfaces. By carefully controlling the heat capacity per unit area of an MXene conductor and the thermal effusivity of the substrates, an ultrathin MXene-based TA loudspeaker is fabricated that exhibits high SPL output (74.5 dB at 15 kHz) and stable sound performance over 14 days. Loudspeakers with a parylene substrate, whose thickness is less than the thermal penetration depth, generate bidirectional and deformation-independent sound in bent, twisted, cylindrical, and stretched-kirigami configurations. Furthermore, parabolic and spherical versions of ultrathin, large-area (20 cm × 20 cm) MXene-based TA loudspeakers are constructed, which display sound-focusing and 3D omnidirectional sound-generating attributes, respectively.
PubMed: 37740254
DOI: 10.1002/adma.202306637
Environmental Science and Pollution Research, Aug 2021
This paper contributes to the environmental literature by (i) demonstrating that the estimated coefficients and the statistical significance of the non-leading terms in quadratic, cubic, and quartic logarithmic environmental Kuznets curve (EKC) specifications are arbitrary and should therefore not be used to choose the preferred specification and (ii) detailing a proposed general-to-specific type methodology for choosing the appropriate specification when attempting to estimate higher-order polynomial relationships such as cubic and quartic logarithmic EKCs. Testing for the existence and shape of the well-known EKC phenomenon is a hot topic in the environmental economics literature. The conventional approach widely employs quadratic and cubic specifications, and more recently also the quartic specification, where the variables are in logarithmic form. However, it is important that researchers understand whether the estimated EKC coefficients, turning points, and elasticities are statistically acceptable, economically interpretable, and comparable. In addition, it is vital that researchers have a clear, structured, non-arbitrary methodology for determining the preferred specification and hence the shape of the estimated EKC. We therefore show, mathematically and empirically, the arbitrary nature of the estimated non-leading coefficients in quadratic, cubic, and quartic logarithmic EKC specifications, which depend upon the units of measurement chosen for the independent variables (e.g. upon a rescaling of the variables such as moving from $m to $bn). Consequently, the practice followed in many previous papers, whereby the estimates of the non-leading terms are used in the decision to choose the preferred specification of an estimated EKC relationship, is incorrect and should not be followed, since it could lead to misleading conclusions. Instead, the choice should be based upon the sign and statistical significance of the estimated coefficients of the leading terms, the location of the turning point(s), and the sign and statistical significance of the estimated elasticities. Furthermore, we suggest that researchers follow the proposed general-to-specific type methodology for choosing the appropriate order of polynomial when attempting to estimate higher-order polynomial logarithmic EKCs.
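The rescaling argument can be made explicit in a few lines; here is the quadratic case (the cubic and quartic cases work the same way, with all but the leading coefficient absorbing powers of the scale factor):

```latex
% Quadratic logarithmic EKC, with emissions y and income x:
%   y = \beta_0 + \beta_1 \ln x + \beta_2 (\ln x)^2 .
% Rescale the units of x by a factor k (e.g. from $m to $bn): x' = kx,
% so \ln x = \ln x' - \ln k. Substituting and collecting terms gives
\begin{aligned}
y &= \beta_0 + \beta_1 (\ln x' - \ln k) + \beta_2 (\ln x' - \ln k)^2 \\
  &= \underbrace{\bigl(\beta_0 - \beta_1 \ln k + \beta_2 (\ln k)^2\bigr)}_{\beta_0'}
   + \underbrace{\bigl(\beta_1 - 2\beta_2 \ln k\bigr)}_{\beta_1'} \, \ln x'
   + \beta_2 \, (\ln x')^2 .
\end{aligned}
% The leading coefficient \beta_2 is invariant, while \beta_0' and \beta_1'
% depend on the arbitrary choice of k, so their magnitudes, signs, and
% significance carry no information about the correct specification.
```

The turning point, $\ln x^{*} = -\beta_1 / (2\beta_2)$, shifts by exactly $\ln k$ under the rescaling and therefore corresponds to the same income level in the original units, which is why turning points and elasticities, unlike the non-leading coefficients, remain usable for specification choice.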
Topics: Algorithms; Carbon Dioxide; Economic Development; Humans
PubMed: 33797046
DOI: 10.1007/s11356-021-13463-y
The European Journal of Health Economics, Aug 2022 (Review)
From the point of view of both methodology and its standardization, little attention has been paid to the estimation of direct costs in the evaluation of healthcare technologies. The objective is to review the recommendations on direct costs provided in European economic evaluation guidelines and to identify the commonalities and divergences among them. To achieve this, a comprehensive search of several online databases was performed, yielding 41 documents, either economic evaluation guidelines or costing guidelines, from 26 European countries. The results show a large disparity in the methodologies that European countries recommend for estimating the direct costs to be included in economic evaluations of health technologies. This lack of standardization of cost-estimation methodologies leads to arbitrariness in selecting the resource costs included in economic evaluations of medicinal products or other technologies and, therefore, in the decision-making process necessary to introduce a new technology. In addition, this heterogeneity poses a major challenge for identifying factors that could affect the variability of unit costs across countries.
Topics: Biomedical Technology; Cost-Benefit Analysis; Europe; Humans
PubMed: 34825296
DOI: 10.1007/s10198-021-01414-w