Journal of the Mechanical Behavior of... Feb 2024
Aseptic loosening due to mechanical failure of bone cement is considered a leading cause of revision of joint replacement systems. Detailed quantified information on the number, size, and distribution pattern of pores can help develop a deeper understanding of bone cement's fatigue behavior. The objective of this study was to provide statistical descriptions of the pore distribution characteristics of laboratory bone cement specimens with different antibiotic contents. For four groups of bone cement (Palacos) specimens, containing 0.3, 0.6, 1.2, and 2.4 wt/wt% of telavancin antibiotic, seven samples per group were scanned with micro-computed tomography (38.97 μm voxel size). The images were first preprocessed in Mimics and then analyzed in Dragonfly, with the threshold level set such that single-pixel pores became visible. The normalized pore volume data of the specimens were then used to extract logarithmic histograms of the pore densities for the antibiotic groups, as well as their three-parameter Weibull probability density functions. Statistical comparison of the pore distribution data of the antibiotic groups using the non-parametric Mann-Whitney test revealed significantly larger porosity (p < 0.05) in groups with larger added antibiotic contents (2.4 and 0.6 wt/wt% vs 0.3 wt/wt%). Further analysis revealed that this effect was associated with a significantly larger frequency of micropores of 0.1-0.5 mm diameter (p < 0.05) in groups with larger antibiotic content (2.4 wt/wt% vs 0.6 and 0.3 wt/wt%), implying that elution of the added antibiotic mainly produces micropores in this diameter range. Based on this observation and fatigue test results in the literature, it is suggested that micropore clusters have a detrimental effect on the mechanical properties of bone cement and play a major role in initiating fatigue cracks in specimens with high antibiotic content.
Topics: Animals; Polymethyl Methacrylate; Anti-Bacterial Agents; Bone Cements; Odonata; X-Ray Microtomography; Statistical Distributions
PubMed: 38100980
DOI: 10.1016/j.jmbbm.2023.106297
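The pore-statistics pipeline in the abstract above (three-parameter Weibull fits plus a Mann-Whitney comparison of porosity) can be sketched roughly as follows; the data and all parameter values are synthetic stand-ins, not the study's measurements:

```python
# Illustrative sketch only: fit a three-parameter Weibull to synthetic
# normalized pore-volume data and compare two antibiotic groups with the
# Mann-Whitney U test, mirroring the analysis the abstract describes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical normalized pore volumes for a low- and a high-antibiotic group.
low_ab = stats.weibull_min.rvs(1.5, loc=0.05, scale=0.2, size=200, random_state=rng)
high_ab = stats.weibull_min.rvs(1.5, loc=0.05, scale=0.35, size=200, random_state=rng)

# Three-parameter fit: shape c, location (threshold) loc, and scale.
c, loc, scale = stats.weibull_min.fit(low_ab)

# Non-parametric comparison of the two porosity samples.
u_stat, p_value = stats.mannwhitneyu(low_ab, high_ab, alternative="two-sided")
```

With the larger scale parameter in the high-antibiotic group, the test flags a significant porosity difference, which is the pattern the study reports for its real specimens.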
Cognitive Psychology Sep 2023
We present results from five visual working memory (VWM) experiments in which participants were briefly shown between 2 and 6 colored squares. They were then cued to recall the color of one of the squares and responded by choosing the color on a continuous color wheel. The experiments provided response proportions and response time (RT) measures as a function of angle for the choices. Current VWM models for this task include discrete models that assume an item is either within working memory or not and resource models that assume that memory strength varies as a function of the number of items. Because these models do not include processes that allow them to account for RT data, we implemented them within the spatially continuous diffusion model (SCDM; Ratcliff, 2018) and used the experimental data to evaluate these combined models. In the SCDM, evidence retrieved from memory is represented as a spatially continuous normal distribution, and this drives the decision process until a criterion (represented as a 1-D line) is reached, which produces a decision. Noise in the accumulation process is represented by continuous Gaussian process noise over spatial position. The models that fit best from the discrete and resource-based classes converged on a common model that had a guessing component and that allowed the height of the normal memory-strength distribution to vary with the number of items. The guessing component was implemented as a regular decision process driven by a flat evidence distribution, a zero-drift process. The combination of choice and RT data allows models that were not identifiable based on choice data alone to be discriminated.
Topics: Humans; Memory, Short-Term; Mental Recall; Cues; Normal Distribution; Reaction Time
PubMed: 37659278
DOI: 10.1016/j.cogpsych.2023.101595
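The zero-drift "guessing" process described above can be caricatured in one dimension. The SCDM itself is spatially continuous, so this is only a sketch of the accumulate-to-criterion idea, with illustrative (assumed) parameter values:

```python
# One-dimensional caricature of diffusion-to-criterion: evidence
# accumulates with Gaussian noise until a criterion is reached. A
# zero-drift process models guessing; a positive drift models
# memory-driven responding. All parameters here are illustrative.
import numpy as np

def diffusion_rt(drift, criterion=1.0, dt=0.001, sigma=1.0, rng=None, max_t=10.0):
    """Simulate one accumulation path; return (decision time, bound reached)."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < criterion and t < max_t:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, np.sign(x)

rng = np.random.default_rng(1)
# Memory-driven trials (positive drift) vs. zero-drift guessing trials.
memory_rts = [diffusion_rt(2.0, rng=rng)[0] for _ in range(200)]
guess_rts = [diffusion_rt(0.0, rng=rng)[0] for _ in range(200)]
```

Guessing trials take longer on average than memory-driven trials, which is one way RT data can separate models that choice proportions alone cannot.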
American Heart Journal Aug 2024
Review
Clinicians often suspect that a treatment effect can vary across individuals. However, they usually lack "evidence-based" guidance regarding potential heterogeneity of treatment effects (HTE). Potentially actionable HTE is rarely discovered in clinical trials and is widely believed (or rationalized) by researchers to be rare. Conventional statistical methods to test for possible HTE are extremely conservative and tend to reinforce this belief. In truth, though, there is no realistic way to know whether a common, or average, effect estimated from a clinical trial is relevant for all, or even most, patients. This absence of evidence, misinterpreted as evidence of absence, may be resulting in sub-optimal treatment for many individuals. We first summarize the historical context in which current statistical methods for randomized controlled trials (RCTs) were developed, focusing on the conceptual and technical limitations that shaped, and restricted, these methods. In particular, we explain how the common-effect assumption came to be virtually unchallenged. Second, we propose a simple graphical method for exploratory data analysis that can provide useful visual evidence of possible HTE. The basic approach is to display the complete distribution of outcome data rather than relying uncritically on simple summary statistics. Modern graphical methods, unavailable when statistical methods were initially formulated a century ago, now render fine-grained interrogation of the data feasible. We propose comparing observed treatment-group data to "pseudo data" engineered to mimic that which would be expected under a particular HTE model, such as the common-effect model. A clear discrepancy between the distributions of the common-effect pseudo data and the actual treatment-effect data provides prima facie evidence of HTE to motivate additional confirmatory investigation. 
Artificial data are used to illustrate implications of ignoring heterogeneity in practice and how the graphical method can be useful.
Topics: Humans; Randomized Controlled Trials as Topic; Evidence-Based Medicine; Treatment Outcome; Data Interpretation, Statistical; Treatment Effect Heterogeneity
PubMed: 38701962
DOI: 10.1016/j.ahj.2024.04.020
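A minimal sketch of the pseudo-data comparison proposed above, assuming a simple shift-by-average-effect construction (the exact construction is my assumption, not necessarily the authors' procedure):

```python
# Sketch: build "pseudo data" by shifting control outcomes by the
# estimated average effect (the common-effect model), then compare the
# pseudo distribution with the observed treatment arm. A clear
# distributional discrepancy hints at heterogeneity of treatment effects.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
control = rng.normal(0.0, 1.0, 2000)
# Heterogeneous truth: half the patients respond strongly, half not at all.
effect = np.where(rng.random(2000) < 0.5, 2.0, 0.0)
treated = rng.normal(0.0, 1.0, 2000) + effect

avg_effect = treated.mean() - control.mean()
pseudo = control + avg_effect          # what a common effect would look like

# Kolmogorov-Smirnov distance as one numeric summary of the visual gap.
ks_stat, ks_p = stats.ks_2samp(pseudo, treated)
```

Here the treated arm is a two-component mixture, so its distribution is visibly wider than the common-effect pseudo data even though the means match, which is exactly the kind of discrepancy the graphical method is meant to surface.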
Plants (Basel, Switzerland) Sep 2023
The quantitative description of growth rings is still incomplete, including the functional division into earlywood and latewood. Methods developed to date, such as the Mork criterion for conifers, can be biased and arbitrary depending on species and growth conditions. We proposed modeling the statistical distribution of tracheids to derive a universal criterion applicable to all conifer species. This study was based on 50-year anatomical measurements of L., Du Tour, and Ledeb. near the upper tree line in the Western Sayan Mountains (South Siberia). Statistical distributions of the cell wall thickness (CWT)-to-radial-diameter (D) ratio and its slope were investigated for raw and standardized data (divided by the mean). The bimodal distribution of the slope for standardized CWT and D was modeled with beta distributions for earlywood and latewood tracheids and a generalized normal distribution for transition wood, to account for the gradual shift in cell traits. The model can describe with high accuracy the growth ring structure for species characterized by various proportions of latewood, histometric traits, and gradual or abrupt transitions. The proportion of the two (or three, including transition wood) zones in the modeled distribution is proposed as the desired criterion.
PubMed: 37836196
DOI: 10.3390/plants12193454
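The mixture model described above (beta components for earlywood and latewood plus a generalized normal component for transition wood) can be sketched as follows; the weights and shape parameters are illustrative assumptions, not the fitted values from the study:

```python
# Sketch of a three-component mixture density on the standardized
# CWT/D slope axis: two beta components (earlywood, latewood) and a
# generalized normal component (transition wood). Parameters are
# illustrative, chosen only to produce a plausible bimodal shape.
import numpy as np
from scipy import stats

def mixture_pdf(x, w=(0.55, 0.30, 0.15)):
    early = stats.beta.pdf(x, a=2.0, b=6.0)                       # mode near 0.2
    late = stats.beta.pdf(x, a=6.0, b=2.0)                        # mode near 0.8
    trans = stats.gennorm.pdf(x, beta=2.5, loc=0.5, scale=0.15)   # transition wood
    return w[0] * early + w[1] * late + w[2] * trans

x = np.linspace(0.001, 0.999, 999)
density = mixture_pdf(x)
# Trapezoid-rule check that the mixture integrates to ~1 on (0, 1).
area = float(np.sum((density[1:] + density[:-1]) * np.diff(x)) / 2.0)
```

The component weights play the role of the zone proportions proposed as the classification criterion; fitting them to measured tracheid data would replace the hand-picked values used here.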
PloS One 2023
Gul and Mohsin (2021) developed a new modified form of the renowned "half logistic" distribution introduced by Balakrishnan (1991), naming it the half logistic-truncated exponential distribution (HL-TEXPD). Several mathematical characteristics are studied, including the hazard function, pth percentile, moment generating function, and Shannon entropy. A simulation study is performed to examine the behaviour of the parameter estimates. The proposed model is fitted to three real data sets to check its efficacy. Additionally, TTT (total time on test) plots are drawn to study the failure rates of the three data sets. The results indicate that, for the data sets under study, HL-TEXPD can be utilized efficiently in engineering and the medical sciences, outperforming the classical and baseline models.
Topics: Computer Simulation; Statistical Distributions; Entropy
PubMed: 37963157
DOI: 10.1371/journal.pone.0285992
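The TTT plot mentioned above is built from the scaled total-time-on-test transform, which can be computed directly from a sample; the data here are synthetic, not the paper's data sets:

```python
# Sketch of the scaled TTT (total time on test) transform: a concave
# TTT curve suggests an increasing failure rate, a convex one a
# decreasing failure rate, and a diagonal an (approximately)
# exponential sample. Data below are synthetic.
import numpy as np

def scaled_ttt(sample):
    """Return (i/n, T(i/n)) points of the scaled TTT transform."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    total = x.sum()
    ttt = np.array([(x[:i].sum() + (n - i) * x[i - 1]) / total
                    for i in range(1, n + 1)])
    return np.arange(1, n + 1) / n, ttt

rng = np.random.default_rng(3)
# Exponential lifetimes have a constant hazard, so the curve hugs the diagonal.
u, t = scaled_ttt(rng.exponential(scale=2.0, size=300))
```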
Psychometrika Dec 2023
Establishing the invariance property of an instrument (e.g., a questionnaire or test) is a key step for establishing its measurement validity. Measurement invariance is typically assessed by differential item functioning (DIF) analysis, i.e., detecting DIF items whose response distribution depends not only on the latent trait measured by the instrument but also on the group membership. DIF analysis is confounded by the group difference in the latent trait distributions. Many DIF analyses require knowing several anchor items that are DIF-free in order to draw inferences on whether each of the rest is a DIF item, where the anchor items are used to identify the latent trait distributions. When no prior information on anchor items is available, or some anchor items are misspecified, item purification methods and regularized estimation methods can be used. The former iteratively purifies the anchor set by a stepwise model selection procedure, and the latter selects the DIF-free items by a LASSO-type regularization approach. Unfortunately, unlike the methods based on a correctly specified anchor set, these methods are not guaranteed to provide valid statistical inference (e.g., confidence intervals and p-values). In this paper, we propose a new method for DIF analysis under a multiple indicators and multiple causes (MIMIC) model for DIF. This method adopts a minimal [Formula: see text] norm condition for identifying the latent trait distributions. Without requiring prior knowledge about an anchor set, it can accurately estimate the DIF effects of individual items and further draw valid statistical inferences for quantifying the uncertainty. Specifically, the inference results allow us to control the type-I error for DIF detection, which may not be possible with item purification and regularized estimation methods. 
We conduct simulation studies to evaluate the performance of the proposed method and compare it with the anchor-set-based likelihood ratio test approach and the LASSO approach. The proposed method is applied to analysing the three personality scales of the Eysenck personality questionnaire-revised (EPQ-R).
Topics: Psychometrics; Surveys and Questionnaires; Likelihood Functions; Uncertainty
PubMed: 37550561
DOI: 10.1007/s11336-023-09930-9
Journal of Racial and Ethnic Health... Dec 2023
COVID-19 is a disease that affects the whole world, and countries with weak economies are especially vulnerable. A proper understanding of how COVID-19 spreads, identifying high-risk areas, and discovering factors influencing the spread of the disease are crucial to improving disease control. This study evaluates the geo-statistical distribution of COVID-19 to identify critical areas of Africa using spatial clustering pattern analysis. In addition, the spatial correlation between infected cases and variables such as the unemployment rate, gross domestic product (GDP), population, and vaccination rate is calculated using Geographically Weighted Regression (GWR) analysis. The hot-spot map showed a statistically significant cluster of high values in southern and northern Africa. Moreover, the GWR analysis revealed that GDP and population had the most significant correlations with the spread of COVID-19, with local R2 values of 0.01-0.99 and 0-0.89, respectively.
Topics: Humans; COVID-19; Spatial Analysis; Africa; Socioeconomic Factors; Demography
PubMed: 36394796
DOI: 10.1007/s40615-022-01453-w
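A bare-bones sketch of the GWR idea used above: at each location, ordinary least squares is replaced by weighted least squares with a Gaussian distance kernel, yielding a local coefficient per site. Coordinates, bandwidth, and variables below are synthetic illustrations, not the study's data:

```python
# Minimal GWR sketch: local weighted least squares with a Gaussian
# distance kernel. The spatially varying "GDP effect" is fabricated so
# we can check that the local slopes track it.
import numpy as np

rng = np.random.default_rng(4)
coords = rng.uniform(0, 10, size=(100, 2))           # hypothetical site locations
gdp = rng.normal(5, 1, 100)                          # hypothetical predictor
# Spatially varying coefficient: effect of GDP grows from west to east.
beta_true = 0.2 + 0.15 * coords[:, 0]
cases = beta_true * gdp + rng.normal(0, 0.1, 100)

def gwr_coefficients(coords, x, y, bandwidth=2.0):
    """Fit intercept + slope at every location with kernel-weighted OLS."""
    X = np.column_stack([np.ones_like(x), x])
    betas = np.empty((len(y), 2))
    for i, c in enumerate(coords):
        d2 = ((coords - c) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))       # Gaussian kernel weights
        W = np.diag(w)
        betas[i] = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return betas

local_betas = gwr_coefficients(coords, gdp, cases)
```

Mapping the local slopes (here, `local_betas[:, 1]`) is what produces the spatially varying R2 and coefficient surfaces reported in GWR studies.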
The Science of the Total Environment Jul 2023
Understanding the probability distributions of precipitation is crucial for predicting climatic events and constructing hydraulic facilities. To overcome the inadequacy of precipitation data, regional frequency analysis has commonly been used, "trading space for time". However, with the increasing availability of gridded precipitation datasets with high spatial and temporal resolutions, the probability distributions of precipitation for these datasets have been less explored. We used L-moments and goodness-of-fit criteria to identify the probability distributions of annual, seasonal, and monthly precipitation for a 0.5° × 0.5° dataset across the Loess Plateau (LP). We examined five 3-parameter distributions, namely generalized extreme value (GEV), generalized logistic (GLO), generalized Pareto (GPA), generalized normal (GNO), and Pearson type III (PE3), and evaluated the accuracy of estimated rainfall using the leave-one-out method. We also presented pixel-wise fit parameters and quantiles of precipitation as supplements. Our findings indicated that precipitation probability distributions vary by location and time scale, and the fitted probability distribution functions are reliable for estimating precipitation under various return periods. Specifically, for annual precipitation, GLO was prevalent in humid and semi-humid areas, GEV in semi-arid and arid areas, and PE3 in cold-arid areas. For seasonal precipitation, spring precipitation mainly conforms to the GLO distribution, summer precipitation around the 400 mm isohyet prevalently follows the GEV distribution, autumn precipitation primarily meets the GPA and PE3 distributions, and winter precipitation in the northwest, south, and east of the LP mainly conforms to the GPA, PE3, and GEV distributions, respectively.
Regarding monthly precipitation, the common distribution functions are PE3 and GPA for months with less precipitation, whereas the distribution functions for months with more precipitation vary substantially across different regions of the LP. Our study contributes to a better understanding of precipitation probability distributions in the LP and provides insights for future studies on gridded precipitation datasets using robust statistical methods.
PubMed: 37100144
DOI: 10.1016/j.scitotenv.2023.163528
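The L-moment machinery underlying the distribution selection above starts from sample L-moments, which can be computed directly from order statistics; the precipitation sample here is synthetic:

```python
# Sketch of the first three sample L-moments using the standard
# unbiased estimators based on probability-weighted moments b0, b1, b2.
# These are the quantities matched against candidate distributions
# (GEV, GLO, GPA, GNO, PE3) in L-moment frequency analysis.
import numpy as np

def sample_l_moments(data):
    """Return (l1, l2, t3): L-location, L-scale, and L-skewness ratio."""
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2

rng = np.random.default_rng(5)
# Hypothetical 60-year annual precipitation record (mm/yr), gamma-like.
annual_precip = rng.gamma(shape=4.0, scale=120.0, size=60)
l1, l2, t3 = sample_l_moments(annual_precip)
```

Plotting the sample L-skewness against L-kurtosis for each grid cell and comparing with the theoretical curves of the candidate distributions is the usual goodness-of-fit step that follows.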
Biometrika Mar 2024
Rooted and ranked phylogenetic trees are mathematical objects that are useful in modelling hierarchical data and evolutionary relationships with applications to many fields such as evolutionary biology and genetic epidemiology. Bayesian phylogenetic inference usually explores the posterior distribution of trees via Markov chain Monte Carlo methods. However, assessing uncertainty and summarizing distributions remains challenging for these types of structures. While labelled phylogenetic trees have been extensively studied, relatively less literature exists for unlabelled trees that are increasingly useful, for example when one seeks to summarize samples of trees obtained with different methods, or from different samples and environments, and wishes to assess the stability and generalizability of these summaries. In our paper, we exploit recently proposed distance metrics of unlabelled ranked binary trees and unlabelled ranked genealogies, or trees equipped with branch lengths, to define the Fréchet mean, variance and interquartile sets as summaries of these tree distributions. We provide an efficient combinatorial optimization algorithm for computing the Fréchet mean of a sample or of distributions on unlabelled ranked tree shapes and unlabelled ranked genealogies. We show the applicability of our summary statistics for studying popular tree distributions and for comparing the SARS-CoV-2 evolutionary trees across different locations during the COVID-19 epidemic in 2020. Our current implementations are publicly available at https://github.com/RSamyak/fmatrix.
PubMed: 38352626
DOI: 10.1093/biomet/asad025
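The sample Fréchet mean used above generalizes the ordinary mean to metric spaces: it is the element minimizing the sum of squared distances to the sample. A generic sketch over a finite sample, with a toy scalar distance standing in for the ranked-tree metric actually used in the paper:

```python
# Generic sketch of a sample Frechet mean: pick the sample element that
# minimizes the sum of squared distances to all others; the minimized
# average is the Frechet variance. The toy |a - b| distance below is a
# stand-in, not the unlabelled ranked-tree metric from the paper.
import numpy as np

def frechet_mean_index(items, dist):
    """Return the index of the sample Frechet mean and the Frechet variance."""
    n = len(items)
    costs = np.array([sum(dist(items[i], items[j]) ** 2 for j in range(n))
                      for i in range(n)])
    best = int(np.argmin(costs))
    return best, costs[best] / n

items = [1.0, 2.0, 2.5, 3.0, 10.0]
idx, var = frechet_mean_index(items, lambda a, b: abs(a - b))
```

With a tree metric in place of `abs(a - b)`, the same minimization defines the mean tree; the paper's contribution is a combinatorial algorithm that makes this search efficient for ranked tree shapes and genealogies.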
Gait & Posture Sep 2023
BACKGROUND
Net joint moments (NJM) are typically normalized for a (combination of) physical body characteristics such as mass, height, and limb length using ratio scaling to account for differences in body characteristics between individuals. Four assumptions must be met when normalizing NJM data this way to ensure valid conclusions. First, the relationship between the non-normalized NJM and participant characteristic should be linear. Second, the regression line between NJM and the characteristic(s) used should pass through the origin. Third, scaling should not significantly perturb the statistical distribution of the data. Fourth, normalizing a NJM should eliminate its correlation with the characteristic(s) normalized for.
RESEARCH QUESTION
This study assessed these assumptions using data collected from 59 individuals running at 10 km/h.
METHODS
Standard inverse dynamics analyses were conducted, and ratios were computed between the sagittal-plane hip, knee, and ankle NJMs and the participant's mass, height, leg length, mass × height, and mass × leg length.
RESULTS
The most important finding of this study was that none of the scaling variables fulfilled all assumptions across all joints. However, scaling by mass, mass × height, and mass × leg length satisfied the assumptions for the knee joint moment and the log-transformed hip joint moment, suggesting these methods generally performed best.
SIGNIFICANCE
Our findings suggest that scaling by mass, mass × height, and mass × leg length may be considered to normalize joint moments during running. Nevertheless, we urge researchers to check the statistical assumptions to ensure valid conclusions. We provide supplementary code to check these assumptions and discuss the consequences of inappropriate scaling.
Topics: Humans; Lower Extremity; Knee Joint; Hip Joint; Running; Ankle Joint; Biomechanical Phenomena
PubMed: 37494781
DOI: 10.1016/j.gaitpost.2023.07.278
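The assumption checks described in this abstract can be sketched on synthetic data: test for a linear relation with a near-zero intercept via regression, and verify that ratio scaling removes the correlation with the body characteristic. All values below are illustrative, not the study's measurements:

```python
# Sketch of ratio-scaling assumption checks on synthetic data: (1-2) the
# NJM-mass relation should be linear with an intercept near zero, and
# (4) dividing by mass should eliminate the correlation with mass.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
mass = rng.normal(70, 10, 59)                     # body mass, kg (59 runners)
njm = 2.0 * mass + rng.normal(0, 8, 59)           # hypothetical knee NJM, N*m

# Assumptions 1-2: strong linear relation whose intercept is close to zero.
fit = stats.linregress(mass, njm)

# Assumption 4: after ratio scaling, the correlation with mass should vanish.
r_scaled, p_scaled = stats.pearsonr(njm / mass, mass)
```

Because the fabricated NJM is proportional to mass, ratio scaling is appropriate here; a clearly non-zero intercept or a surviving correlation would argue for an alternative (e.g., allometric) scaling instead.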