Dental Materials : Official Publication... May 2012
Comparative Study; Randomized Controlled Trial
OBJECTIVES
The aim of this study was to compare the fracture load of veneered anterior zirconia crowns using normal and Weibull distribution of complete and censored data.
METHODS
Standardized zirconia frameworks for maxillary canines were milled using a CAD/CAM system and randomly divided into three groups (N=90, n=30 per group). They were veneered with three veneering ceramics, namely GC Initial ZR, Vita VM9, and IPS e.max Ceram, using the layering technique. The crowns were cemented with glass ionomer cement on metal abutments. The specimens were then loaded to fracture (1 mm/min) in a universal testing machine. The data were analyzed using the classical method (normal data distribution (μ, σ); Levene's test and one-way ANOVA) and according to Weibull statistics (s, m). In addition, fracture load results were analyzed depending on complete and censored failure types (chipping only vs. total fracture together with chipping).
RESULTS
When computed with complete data, significantly higher mean fracture loads (N) were observed for GC Initial ZR (μ=978, σ=157; s=1043, m=7.2) and VITA VM9 (μ=1074, σ=179; s=1139, m=7.8) than for IPS e.max Ceram (μ=798, σ=174; s=859, m=5.8) (p<0.05) by classical and Weibull statistics, respectively. When the data were censored for total fracture only, IPS e.max Ceram presented the lowest fracture load for chipping with both the classical distribution (μ=790, σ=160) and Weibull statistics (s=836, m=6.5). When total fracture together with chipping (classical distribution) was considered as failure, IPS e.max Ceram did not show a significantly different fracture load for total fracture (μ=1054, σ=110) compared to the other groups (GC Initial ZR: μ=1039, σ=152; VITA VM9: μ=1170, σ=166). According to the Weibull-distributed data, VITA VM9 showed a significantly higher fracture load (s=1228, m=9.4) than the other groups.
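As a sketch of the two analyses compared above, the following Python snippet fits both a normal distribution (μ, σ) and a Weibull distribution (characteristic strength s, Weibull modulus m) to a fracture-load sample. The data here are synthetic stand-ins mimicking the study's scale, not the study's measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical fracture-load sample (N); size and scale mimic one of the study's groups
loads = rng.weibull(7.2, size=30) * 1043.0

# Classical parameters: sample mean and standard deviation
mu, sigma = loads.mean(), loads.std(ddof=1)

# Weibull parameters: m = shape (Weibull modulus), s = scale (characteristic load)
m, loc, s = stats.weibull_min.fit(loads, floc=0)
```

With a two-parameter Weibull model the location is fixed at zero (`floc=0`), so the fit returns the modulus m and characteristic load s directly.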
SIGNIFICANCE
Both classical distribution and Weibull statistics for complete data yielded similar outcomes. Censored data analysis of all ceramic systems based on failure types is essential and brings additional information regarding the susceptibility to chipping or total fracture.
Topics: Cementation; Computer-Aided Design; Crowns; Data Interpretation, Statistical; Dental Abutments; Dental Bonding; Dental Materials; Dental Porcelain; Dental Restoration Failure; Dental Stress Analysis; Dental Veneers; Glass Ionomer Cements; Humans; Materials Testing; Metal Ceramic Alloys; Normal Distribution; Probability; Statistical Distributions; Stress, Mechanical; Zirconium
PubMed: 22196897
DOI: 10.1016/j.dental.2011.11.023
Ecotoxicology and Environmental Safety Jun 2024
Species sensitivity distributions (SSDs) estimated by fitting a statistical distribution to ecotoxicity data are indispensable tools used to derive the hazardous concentration for 5 % of species (HC5) and thereby a predicted no-effect concentration in environmental risk assessment. Whereas various statistical distributions are available for SSD estimation, the fundamental question of which statistical distribution should be used has received limited systematic analysis. We aimed to address this knowledge gap by applying four frequently used statistical distributions (log-normal, log-logistic, Burr type III, and Weibull distributions) to acute and chronic SSD estimation using aquatic toxicity data for 191 and 31 chemicals, respectively. Based on the differences in the corrected Akaike's information criterion (AICc) as well as visual inspection of the fitting of the lower tails of SSD curves, the log-normal SSD was generally better or equally good for the majority of chemicals examined. Together with the fact that the ratios of HC5 values of other alternative SSDs to those of log-normal SSDs generally fell within the range 0.1-10, our findings indicate that the log-normal distribution can be a reasonable first candidate for SSD derivation, which does not contest the existing widespread use of log-normal SSDs.
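A minimal sketch of log-normal SSD fitting and HC5 derivation as described above, using hypothetical toxicity endpoints rather than the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical acute toxicity endpoints (e.g., LC50 in mg/L) for ten species
tox = np.array([0.8, 1.5, 2.1, 3.4, 4.0, 6.2, 7.5, 9.8, 12.0, 20.5])

# Log-normal SSD: fit a normal distribution to the log10-transformed endpoints
log_tox = np.log10(tox)
mu, sigma = log_tox.mean(), log_tox.std(ddof=1)

# HC5: the concentration hazardous to 5% of species (5th percentile of the SSD)
hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)
```

Candidate distributions (log-logistic, Burr III, Weibull) would be compared by fitting each to the same data and ranking them by AICc, as the study does.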
Topics: Risk Assessment; Animals; Water Pollutants, Chemical; Ecotoxicology; Species Specificity; Toxicity Tests, Acute; Aquatic Organisms; Toxicity Tests, Chronic; Models, Statistical
PubMed: 38714082
DOI: 10.1016/j.ecoenv.2024.116379
Anais Da Academia Brasileira de Ciencias 2021
In this paper, a new three-parameter lifetime model called the Topp-Leone odd log-logistic exponential distribution is proposed. Its density function can be expressed as a linear mixture of exponentiated exponential densities and can be reversed-J shaped, skewed to the left, or skewed to the right. Further, the hazard rate function of the new model can be monotone, unimodal, constant, J-shaped, constant-increasing-decreasing, decreasing-increasing-decreasing, or bathtub-shaped. Our main focus is on estimation from a frequentist point of view; nevertheless, some statistical and reliability characteristics of the proposed model are derived. We briefly describe different estimators, namely the maximum likelihood estimators, ordinary least-squares estimators, weighted least-squares estimators, percentile estimators, maximum product of spacings estimators, Cramér-von Mises minimum distance estimators, Anderson-Darling estimators, and right-tail Anderson-Darling estimators. Monte Carlo simulations are performed to compare the performance of the proposed methods of estimation for both small and large samples. We illustrate the performance of the proposed distribution by means of two real data sets; both data sets show that the new distribution is more appropriate than some other well-known distributions.
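Of the estimators listed above, maximum product of spacings is perhaps the least familiar. The sketch below illustrates the idea for a simple exponential model (not the proposed distribution): the estimate maximizes the product (sum of logs) of the spacings between consecutive fitted CDF values at the sorted sample:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
x = np.sort(rng.exponential(scale=2.0, size=50))  # simulated lifetimes, true rate 0.5

def neg_log_spacings(rate):
    # CDF of the exponential model at the sorted sample, padded with 0 and 1
    F = 1.0 - np.exp(-rate * x)
    D = np.diff(np.concatenate(([0.0], F, [1.0])))  # spacings between CDF values
    return -np.sum(np.log(np.maximum(D, 1e-300)))   # guard against log(0)

# Maximum product of spacings: minimize the negative sum of log-spacings
res = minimize_scalar(neg_log_spacings, bounds=(1e-3, 10.0), method="bounded")
rate_mps = res.x
```

For well-behaved models the MPS estimate is asymptotically equivalent to maximum likelihood, but it remains consistent in some cases where the likelihood is unbounded.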
Topics: Least-Squares Analysis; Likelihood Functions; Monte Carlo Method; Reproducibility of Results; Statistical Distributions
PubMed: 34550163
DOI: 10.1590/0001-3765202120190586
PLoS Computational Biology Apr 2015
We uncovered the universal statistical laws for the biomolecular recognition/binding process. We quantified the statistical energy landscapes for binding, from which we can characterize the distributions of the binding free energy (affinity), the equilibrium constants, the kinetics and the specificity by exploring the different ligands binding with a particular receptor. The results of the analytical studies are confirmed by the microscopic flexible docking simulations. The distribution of binding affinity is Gaussian around the mean and becomes exponential near the tail. The equilibrium constants of the binding follow a log-normal distribution around the mean and a power law distribution in the tail. The intrinsic specificity for biomolecular recognition measures the degree of discrimination of native versus non-native binding and the optimization of which becomes the maximization of the ratio of the free energy gap between the native state and the average of non-native states versus the roughness measured by the variance of the free energy landscape around its mean. The intrinsic specificity obeys a Gaussian distribution near the mean and an exponential distribution near the tail. Furthermore, the kinetics of binding follows a log-normal distribution near the mean and a power law distribution at the tail. Our study provides new insights into the statistical nature of thermodynamics, kinetics and function from different ligands binding with a specific receptor or equivalently specific ligand binding with different receptors. The elucidation of distributions of the kinetics and free energy has guiding roles in studying biomolecular recognition and function through small-molecule evolution and chemical genetics.
Topics: Computational Biology; Kinetics; Ligands; Models, Theoretical; Protein Binding; Statistical Distributions; Thermodynamics
PubMed: 25885453
DOI: 10.1371/journal.pcbi.1004212
PloS One 2022
This study proposes a new four-parameter Exponentiated Odd Lomax Exponential (EOLE) distribution by compounding an exponentiated odd function with the Lomax distribution as a generator. The proposed model is unimodal and positively skewed, whereas the hazard rate function is monotonically increasing or inverted bathtub-shaped. Some important properties of the new distribution are derived, such as the quantile function and median; asymptotic properties and mode; moments; mean residual life and mean past lifetime; mean deviation; order statistics; and the Bonferroni and Lorenz curves. The parameter values are obtained by maximum likelihood estimation, least-squares estimation, and the Cramér-von Mises method. A simulation study and two real data sets, "the number of deaths per day due to COVID-19 of the first wave in Nepal" and "failure stresses (in GPa) of single carbon fibers of lengths 50 mm", are used to validate the theoretical findings. The number of COVID-19 deaths over 153 days in Nepal obeys the proposed distribution, and there is a significantly positive relationship between the predicted test-positive rate and the predicted number of deaths per day. Therefore, the intended model is an alternative model for survival data and lifetime data analysis.
Topics: COVID-19; Humans; Least-Squares Analysis; Likelihood Functions; Nepal; Statistical Distributions
PubMed: 35657989
DOI: 10.1371/journal.pone.0269450
Computational and Mathematical Methods... 2022
In this work, we present the type I half logistic Burr-Weibull distribution, a new continuous distribution. It offers several benefits in fitting various sorts of data. Estimates of the model parameters based on classical and non-classical approaches are offered, and Bayesian estimates of the model parameters are also examined. The Bayesian estimation method employs a Markov chain Monte Carlo approach because the posterior distribution is not available in closed form, and Monte Carlo simulation is used to assess the parameter estimates. We establish the superiority of the proposed distribution by utilising real COVID-19 data from countries such as Saudi Arabia and Italy, highlighting the relevance and flexibility of the provided technique.
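As an illustration of the MCMC approach mentioned above, the following sketch runs a random-walk Metropolis sampler for the rate of a simple exponential model with a Gamma(1, 1) prior. The model and data are illustrative stand-ins, not the paper's Burr-Weibull posterior:

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.exponential(scale=2.0, size=100)  # hypothetical lifetimes, true rate 0.5

def log_post(rate):
    if rate <= 0:
        return -np.inf
    # Exponential log-likelihood plus a Gamma(1, 1) log-prior on the rate
    return len(data) * np.log(rate) - rate * data.sum() - rate

# Random-walk Metropolis sampler
rate, chain = 1.0, []
for _ in range(5000):
    prop = rate + rng.normal(0.0, 0.1)
    if np.log(rng.uniform()) < log_post(prop) - log_post(rate):
        rate = prop  # accept the proposal
    chain.append(rate)

post_mean = np.mean(chain[1000:])  # posterior mean after discarding burn-in
```

The same accept/reject loop works for any model: only `log_post` changes when the likelihood or prior does.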
Topics: Bayes Theorem; COVID-19; Humans; Markov Chains; Monte Carlo Method; Statistical Distributions
PubMed: 36035288
DOI: 10.1155/2022/1444859
G3 (Bethesda, Md.) Jan 2022
Gene-set analysis (GSA) is a standard procedure for exploring potential biological functions of a group of genes. The development of its methodology has been an active research topic in recent decades. Many GSA methods, when newly proposed, rely on simulation studies to evaluate their performance with an implicit assumption that the multivariate expression values are normally distributed. This assumption is commonly adopted in GSAs, particularly those in the group of functional class scoring (FCS) methods. The validity of the normality assumption, however, has been disputed in several studies, yet no systematic analysis has been carried out to assess the effect of this distributional assumption. Our goal in this study is not to propose a new GSA method but to first examine if the multi-dimensional gene expression data in gene sets follow a multivariate normal (MVN) distribution. Six statistical methods in three categories of MVN tests were considered and applied to a total of 24 RNA data sets. These RNA values were collected from cancer patients as well as normal subjects, and the values were derived from microarray experiments, RNA sequencing, and single-cell RNA sequencing. Our first finding is that the MVN assumption is not always satisfied: it does not hold in many of the applications tested here. In the second part of this research, we evaluated the influence of non-normality on the statistical power of current FCS methods, both parametric and nonparametric. Specifically, we considered the scenario of mixture distributions representing more than one population for the RNA values. This second investigation demonstrates that non-normality of the RNA values causes a loss in the statistical power of these GSA tests, especially when subtypes exist. Among the FCS GSA tools and scenarios examined in this research, the N-statistics outperform the others.
Based on the results from these two investigations, we conclude that the assumption of MVN should be used with caution when evaluating new GSA tools, since this assumption cannot be guaranteed and violation may lead to spurious results, loss of power, and incorrect comparison between methods. If a newly proposed GSA tool is to be evaluated, we recommend the incorporation of a wide range of multivariate non-normal distributions or sampling from large databases if available.
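One classical MVN test of the kind considered above is Mardia's multivariate skewness test. A compact sketch on synthetic data that is multivariate normal by construction (so the test should not reject):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, p = 200, 3
X = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n)  # MVN by construction

# Mardia's multivariate skewness statistic
Xc = X - X.mean(axis=0)
S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))  # MLE covariance
G = Xc @ S_inv @ Xc.T          # Mahalanobis cross-products between observations
b1 = (G ** 3).sum() / n**2     # sample multivariate skewness
stat = n * b1 / 6.0            # asymptotically chi-squared under MVN
df = p * (p + 1) * (p + 2) // 6
p_value = stats.chi2.sf(stat, df)
```

A small p-value would indicate departure from multivariate normality; applying such tests gene-set by gene-set is the kind of check the study performs on real RNA data.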
Topics: Computer Simulation; Humans; Normal Distribution; RNA; Sequence Analysis, RNA
PubMed: 34791175
DOI: 10.1093/g3journal/jkab365
PloS One 2012
Kernel density estimation is a widely used method for estimating a distribution based on a sample of points drawn from that distribution. Generally, in practice some form of error contaminates the sample of observed points. Such error can be the result of imprecise measurements or observation bias. Often this error is negligible and may be disregarded in analysis. In cases where the error is non-negligible, estimation methods should be adjusted to reduce resulting bias. Several modifications of kernel density estimation have been developed to address specific forms of errors. One form of error that has not yet been addressed is the case where observations are nominally placed at the centers of areas from which the points are assumed to have been drawn, where these areas are of varying sizes. In this scenario, the bias arises because the size of the error can vary among points and some subset of points can be known to have smaller error than another subset or the form of the error may change among points. This paper proposes a "contingent kernel density estimation" technique to address this form of error. This new technique adjusts the standard kernel on a point-by-point basis in an adaptive response to changing structure and magnitude of error. In this paper, equations for our contingent kernel technique are derived, the technique is validated using numerical simulations, and an example using the geographic locations of social networking users is worked to demonstrate the utility of the method.
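A minimal sketch of the point-by-point idea: a Gaussian KDE whose kernel width varies per observation with that observation's error scale. This is an illustrative variable-bandwidth estimator under assumed synthetic data, not the paper's exact contingent kernel construction:

```python
import numpy as np

rng = np.random.default_rng(4)
points = rng.normal(0.0, 1.0, size=100)    # observed (nominal) point locations
widths = rng.uniform(0.1, 0.5, size=100)   # per-point error scale, e.g. area half-width

def per_point_kde(x, pts, h):
    # Gaussian kernel whose bandwidth varies per observation
    z = (x[:, None] - pts[None, :]) / h[None, :]
    k = np.exp(-0.5 * z**2) / (h[None, :] * np.sqrt(2.0 * np.pi))
    return k.mean(axis=1)  # average of the per-point kernels

grid = np.linspace(-4.0, 4.0, 81)
dens = per_point_kde(grid, points, widths)
```

Points known to come from small areas get narrow kernels and contribute sharp density, while points from large areas are smoothed more heavily.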
Topics: Algorithms; Blogging; Computer Simulation; Data Interpretation, Statistical; Geography; Humans; Models, Statistical; Models, Theoretical; Probability; Reproducibility of Results; Social Networking; Species Specificity; Statistical Distributions
PubMed: 22383966
DOI: 10.1371/journal.pone.0030549
Neural Networks : the Official Journal... May 2022
Probabilistic finite mixture models are widely used for unsupervised clustering. These models can often be improved by adapting them to the topology of the data. For instance, in order to classify spatially adjacent data points similarly, it is common to introduce a Laplacian constraint on the posterior probability that each data point belongs to a class. Alternatively, the mixing probabilities can be treated as free parameters, while assuming Gauss-Markov or more complex priors to regularize those mixing probabilities. However, these approaches are constrained by the shape of the prior and often lead to complicated or intractable inference. Here, we propose a new parametrization of the Dirichlet distribution to flexibly regularize the mixing probabilities of over-parametrized mixture distributions. Using the Expectation-Maximization algorithm, we show that our approach allows us to define any linear update rule for the mixing probabilities, including spatial smoothing regularization as a special case. We then show that this flexible design can be extended to share class information between multiple mixture models. We apply our algorithm to artificial and natural image segmentation tasks, and we provide quantitative and qualitative comparison of the performance of Gaussian and Student-t mixtures on the Berkeley Segmentation Dataset. We also demonstrate how to propagate class information across the layers of deep convolutional neural networks in a probabilistically optimal way, suggesting a new interpretation for feedback signals in biological visual systems. Our flexible approach can be easily generalized to adapt probabilistic mixture models to arbitrary data topologies.
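The Dirichlet regularization of mixing probabilities can be sketched in a plain (non-spatial) EM for a two-component Gaussian mixture: a symmetric Dirichlet(alpha) prior adds pseudo-counts to the M-step update of the mixing probabilities. All settings here are illustrative, not the paper's parametrization:

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-2.0, 1.0, 150), rng.normal(3.0, 1.0, 50)])

mu = np.array([-1.0, 1.0])    # component means (initial guess)
sig = np.array([1.0, 1.0])    # component standard deviations
pi = np.array([0.5, 0.5])     # mixing probabilities
alpha = 2.0                   # Dirichlet concentration; alpha = 1 gives plain EM

for _ in range(100):
    # E-step: responsibilities of each component for each point
    lik = pi * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    r = lik / lik.sum(axis=1, keepdims=True)
    # M-step, with Dirichlet pseudo-counts (alpha - 1) regularizing pi
    Nk = r.sum(axis=0)
    pi = (Nk + alpha - 1) / (Nk.sum() + 2 * (alpha - 1))
    mu = (r * x[:, None]).sum(axis=0) / Nk
    sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)
```

The paper's contribution is to generalize this kind of update so that the mixing probabilities can also be smoothed across spatially adjacent data points and shared between mixture models.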
Topics: Algorithms; Cluster Analysis; Humans; Models, Statistical; Neural Networks, Computer; Normal Distribution
PubMed: 35228148
DOI: 10.1016/j.neunet.2022.02.010
Medical Image Analysis Feb 2005
Almost all diseases affect blood vessel attributes (vessel number, radius, tortuosity, and branching pattern). Quantitative measurement of vessel attributes over relevant vessel populations could thus provide an important means of diagnosing and staging disease. Unfortunately, little is known about the statistical properties of vessel attributes. In particular, it is unclear whether vessel attributes fit a Gaussian distribution, how dependent these values are upon anatomical location, and how best to represent the attribute values of the multiple vessels comprising a population of interest in a single patient. The purpose of this report is to explore the distributions of several vessel attributes over vessel populations located in different parts of the head. In 13 healthy subjects, we extract vessels from MRA data, define vessel trees comprising the anterior cerebral, right and left middle cerebral, and posterior cerebral circulations, and, for each of these four populations, analyze the vessel number, average radius, branching frequency, and tortuosity. For the parameters analyzed, we conclude that statistical methods employing summary measures for each attribute within each region of interest for each patient are preferable to methods that deal with individual vessels, that the distributions of the summary measures are indeed Gaussian, and that attribute values may differ by anatomical location. These results should be useful in designing studies that compare patients with suspected disease to a database of healthy subjects and are relevant to groups interested in atlas formation and in the statistics of tubular objects.
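Checking whether a per-subject summary measure is Gaussian, as the study recommends, can be done with a univariate normality test. A sketch with hypothetical summary values (the variable name and values are illustrative, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# Hypothetical per-subject summary measure (e.g., average tortuosity) for 13 subjects
summary = rng.normal(loc=1.1, scale=0.05, size=13)

# Shapiro-Wilk test of the Gaussian assumption for the summary measure
stat, p = stats.shapiro(summary)
```

A non-small p-value fails to reject normality, supporting the use of Gaussian-based comparisons against a database of healthy subjects.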
Topics: Cerebral Arteries; Humans; Magnetic Resonance Angiography; Normal Distribution
PubMed: 15581811
DOI: 10.1016/j.media.2004.06.024