Computational Intelligence and... 2021
In the field of life testing, it is very important to study the reliability of any component under testing. One of the most important subjects is "stress-strength reliability," which refers to the quantity R = P(X > Y) in the statistical literature. It describes a system with random strength (X) that is subjected to a random stress (Y), such that the system fails whenever the stress exceeds the strength. In this study, we consider stress-strength reliability where the strength (X) follows the Rayleigh-half-normal distribution and the stress (Y1, Y2, Y3, and Y4) follows the Rayleigh-half-normal, exponential, Rayleigh, and half-normal distributions, respectively. This effort comprises deriving general formulations of the reliability of such a system. The maximum likelihood estimation approach and the method of moments (MOM) are used to estimate the parameters. Finally, the reliability is computed for various values of the stress and strength parameters.
Topics: Humans; Normal Distribution; Reproducibility of Results; Statistical Distributions
PubMed: 34285693
DOI: 10.1155/2021/7653581
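The stress-strength quantity R = P(X > Y) described in this abstract can be illustrated with a quick Monte Carlo sketch. The Rayleigh-half-normal distribution is not available in standard libraries, so this example substitutes a Rayleigh strength and an exponential stress; the distributions and parameter values are illustrative stand-ins, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Illustrative stand-in distributions (the paper's Rayleigh-half-normal
# is not a standard library distribution): Rayleigh strength, exponential stress.
strength = rng.rayleigh(scale=2.0, size=n)   # X
stress = rng.exponential(scale=1.0, size=n)  # Y

# Monte Carlo estimate of R = P(X > Y): the fraction of draws
# in which the strength exceeds the stress.
r_hat = np.mean(strength > stress)
print(f"estimated reliability R = {r_hat:.3f}")
```

For these stand-in distributions the integral P(X > Y) can also be computed in closed form, which makes the simulation easy to sanity-check.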
Scientific Reports Dec 2017
A taxonomy is a standardized framework to classify and organize items into categories. Hierarchical taxonomies are ubiquitous, ranging from the classification of organisms to the file system on a computer. Characterizing the typical distribution of items within taxonomic categories is an important question with applications in many disciplines. Ecologists have long sought to account for the patterns observed in species-abundance distributions (the number of individuals per species found in some sample), and computer scientists study the distribution of files per directory. Is there a universal statistical distribution describing how many items are typically found in each category in large taxonomies? Here, we analyze a wide array of large, real-world datasets - including items lost and found on the New York City transit system, library books, and a bacterial microbiome - and discover such an underlying commonality. A simple, non-parametric branching model that randomly categorizes items and takes as input only the total number of items and the total number of categories is quite successful in reproducing the observed abundance distributions. This result may shed light on patterns in species-abundance distributions long observed in ecology. The model also predicts the number of taxonomic categories that remain unrepresented in a finite sample.
Topics: Databases, Factual; Models, Biological; Statistical Distributions
PubMed: 29213056
DOI: 10.1038/s41598-017-17168-6
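As a toy illustration of abundance distributions and unrepresented categories, one can randomly assign items to categories given only the two totals the paper's model takes as input. The uniform random assignment below is a deliberate simplification for illustration, not the paper's branching model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_cats = 1_000, 500

# Uniform random categorization: each item independently lands in a
# uniformly chosen category. Inputs are only the two totals.
assignment = rng.integers(0, n_cats, size=n_items)
abundance = np.bincount(assignment, minlength=n_cats)  # items per category

# Categories unrepresented in this finite sample, versus the
# expected count under uniform assignment, C * (1 - 1/C)^N.
empty = int(np.sum(abundance == 0))
expected_empty = n_cats * (1 - 1 / n_cats) ** n_items
print(empty, round(expected_empty, 1))
```

Even this crude model predicts a substantial number of empty categories for a finite sample, the same qualitative phenomenon the paper's branching model quantifies.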
BMC Medical Research Methodology Jul 2017
BACKGROUND
In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval.
METHODS
We focus on problems caused by an inappropriate normality assumption for the random effects distribution, and propose a novel random effects meta-analysis model in which a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When the sampling distributions are approximately normal, non-normality in the overall distribution will be due mainly to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating the parameters of the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable.
RESULTS
A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and the conventional I² statistic from the normal random effects model can be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model.
CONCLUSIONS
The random effects meta-analysis with the Box-Cox transformation may be an important tool for examining robustness of traditional meta-analysis results against skewness on the observed treatment effect estimates. Further critical evaluation of the method is needed.
Topics: Algorithms; Bayes Theorem; Computer Simulation; Humans; Meta-Analysis as Topic; Models, Statistical; Multivariate Analysis; Normal Distribution
PubMed: 28724350
DOI: 10.1186/s12874-017-0376-7
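The core transformation this paper builds on can be shown in isolation. The sketch below applies a maximum-likelihood Box-Cox transformation to a skewed synthetic sample standing in for observed treatment effect estimates; the paper embeds the transformation in a full Bayesian random effects model, which this does not attempt.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Skewed stand-in for observed treatment effect estimates; Box-Cox
# requires strictly positive data.
effects = rng.lognormal(mean=0.5, sigma=0.8, size=200)

# Box-Cox transform with lambda chosen by maximum likelihood.
transformed, lam = stats.boxcox(effects)
print(f"estimated lambda = {lam:.2f}")
print(f"skewness before: {stats.skew(effects):.2f}, "
      f"after: {stats.skew(transformed):.2f}")
```

For lognormal data the fitted lambda lands near zero, i.e. the transform behaves like a log, which is exactly the kind of data-driven normalisation the abstract describes.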
Cerebral Cortex (New York, N.Y. : 1991) Aug 2023
Numbers of neurons and their spatial variation are fundamental organizational features of the brain. Despite the large corpus of cytoarchitectonic data available in the literature, the statistical distributions of neuron densities within and across brain areas remain largely uncharacterized. Here, we show that neuron densities are compatible with a lognormal distribution across cortical areas in several mammalian species, and find that this also holds true within cortical areas. A minimal model of noisy cell division, in combination with distributed proliferation times, can account for the coexistence of lognormal distributions within and across cortical areas. Our findings uncover a new organizational principle of cortical cytoarchitecture: the ubiquitous lognormal distribution of neuron densities, which adds to a long list of lognormal variables in the brain.
Topics: Animals; Neurons; Brain; Mammals; Cerebral Cortex; Statistical Distributions
PubMed: 37409647
DOI: 10.1093/cercor/bhad160
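Checking compatibility with a lognormal distribution, as the paper does for neuron densities, amounts to fitting a normal to the log-transformed values and testing goodness of fit. The densities below are synthetic and purely illustrative, not the paper's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic "neuron densities" (arbitrary units); illustrative only.
densities = rng.lognormal(mean=11.0, sigma=0.3, size=300)

# Fit a lognormal by fitting a normal to the log-densities.
mu, sigma = stats.norm.fit(np.log(densities))

# Kolmogorov-Smirnov check of the lognormal fit on the original scale
# (scipy parameterizes lognorm as s=sigma, loc=0, scale=exp(mu)).
stat, p = stats.kstest(densities, "lognorm", args=(sigma, 0, np.exp(mu)))
print(f"mu={mu:.2f}, sigma={sigma:.2f}, KS p-value={p:.3f}")
```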
PLoS Computational Biology Apr 2015
We uncovered universal statistical laws for the biomolecular recognition/binding process. We quantified the statistical energy landscapes for binding, from which we can characterize the distributions of the binding free energy (affinity), the equilibrium constants, the kinetics and the specificity by exploring different ligands binding with a particular receptor. The results of the analytical studies are confirmed by microscopic flexible docking simulations. The distribution of binding affinity is Gaussian around the mean and becomes exponential near the tail. The equilibrium constants of binding follow a log-normal distribution around the mean and a power-law distribution in the tail. The intrinsic specificity for biomolecular recognition measures the degree of discrimination of native versus non-native binding; optimizing it amounts to maximizing the ratio of the free energy gap between the native state and the average of the non-native states to the roughness, measured by the variance of the free energy landscape around its mean. The intrinsic specificity obeys a Gaussian distribution near the mean and an exponential distribution near the tail. Furthermore, the kinetics of binding follow a log-normal distribution near the mean and a power-law distribution at the tail. Our study provides new insights into the statistical nature of thermodynamics, kinetics and function for different ligands binding with a specific receptor or, equivalently, a specific ligand binding with different receptors. The elucidation of the distributions of the kinetics and free energy has a guiding role in studying biomolecular recognition and function through small-molecule evolution and chemical genetics.
Topics: Computational Biology; Kinetics; Ligands; Models, Theoretical; Protein Binding; Statistical Distributions; Thermodynamics
PubMed: 25885453
DOI: 10.1371/journal.pcbi.1004212
Statistics in Medicine Dec 2020
Recently developed accelerometer devices have been used in large epidemiological studies for continuous and objective monitoring of physical activity. Typically, physical movements are summarized as minutes of light, moderate, and vigorous physical activity on each wearing day. Because of the preponderance of zeros, zero-inflated distributions have been used for modeling daily moderate-or-higher levels of physical activity. Yet these models do not fully account for variations in daily physical activity and cannot be extended to model weekly physical activity explicitly, although weekly physical activity is considered an indicator of a subject's average level of physical activity. To overcome these limitations, we propose a zero-inflated Poisson mixture distribution that can model daily and weekly physical activity within the same family of mixture distributions. Under this method, the likelihood of an inactive day and the amount of exercise on an active day are modeled simultaneously by a joint random effects model to incorporate heterogeneity across participants. If needed, the method has the flexibility to include an additional random effect to address extra variation in daily physical activity. Maximum likelihood estimates can be obtained through Gaussian quadrature, which is implemented conveniently in the R package GLMMadaptive. The method's performance is examined using simulation studies. The method is applied to data from the Hispanic Community Health Study/Study of Latinos to examine the relationship between physical activity and BMI groups and, within participants, the difference in physical activity between weekends and weekdays.
Topics: Computer Simulation; Exercise; Humans; Models, Statistical; Poisson Distribution; Research Design
PubMed: 32949036
DOI: 10.1002/sim.8748
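The building block of the model above, a zero-inflated Poisson, can be sketched without the paper's random effects structure: with probability pi a day is a structural zero (inactive), otherwise the activity count is Poisson. The simulation and direct maximum-likelihood fit below use illustrative parameter values; the paper itself fits the mixed model in R via GLMMadaptive.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(3)
# Simulated daily activity counts: with probability pi the day is an
# inactive (structural) zero, otherwise counts are Poisson(lam).
pi_true, lam_true, n = 0.3, 20.0, 2000
active = rng.random(n) >= pi_true
y = np.where(active, rng.poisson(lam_true, size=n), 0)

def zip_nll(theta):
    """Negative log-likelihood of the zero-inflated Poisson."""
    pi = 1 / (1 + np.exp(-theta[0]))  # logit scale keeps pi in (0, 1)
    lam = np.exp(theta[1])            # log scale keeps lam positive
    p_zero = pi + (1 - pi) * np.exp(-lam)  # zeros: structural or Poisson
    ll = np.where(y == 0, np.log(p_zero),
                  np.log(1 - pi) + poisson.logpmf(y, lam))
    return -ll.sum()

fit = minimize(zip_nll, x0=[0.0, np.log(y.mean() + 1)], method="Nelder-Mead")
pi_hat = 1 / (1 + np.exp(-fit.x[0]))
lam_hat = np.exp(fit.x[1])
print(f"pi = {pi_hat:.2f}, lambda = {lam_hat:.1f}")
```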
Journal of the Experimental Analysis of... Jul 2002
Ideal free distribution theory predicts that foragers will form groups proportional in number to the resources available in alternative resource sites or patches, a phenomenon termed habitat matching. Three experiments tested this prediction with college students in discrete-trial simulations and a free-operant simulation. Sensitivity to differences in programmed reinforcement rates was quantified by using the sensitivity parameter of the generalized matching law (s). The first experiment, replicating prior published experiments, produced a greater degree of undermatching for the initial choice (s = 0.59) compared to final choices (s = 0.86). The second experiment, which extended prior findings by allowing only one choice per trial, produced comparable undermatching (s = 0.82). The third experiment used free-operant procedures more typical of laboratory studies of habitat matching with other species and produced the most undermatching (s = 0.71). The results of these experiments replicated previous results with human groups, supported predictions of the ideal free distribution, and suggested that undermatching represents a systematic deviation from the ideal free distribution. These results are consistent with a melioration account of individual behavior as the basis for group choice.
Topics: Adolescent; Adult; Choice Behavior; Competitive Behavior; Female; Group Processes; Humans; Male; Motivation; Reinforcement Schedule; Statistical Distributions; Students
PubMed: 12144309
DOI: 10.1901/jeab.2002.78-1
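The sensitivity parameter s reported above comes from the generalized matching law, log(B1/B2) = s·log(R1/R2) + log(b), so s is the slope of a log-log regression of behavior ratios on reinforcement ratios; s < 1 is undermatching. The data below are synthetic and illustrative, not the experiments' results.

```python
import numpy as np

rng = np.random.default_rng(4)
# Programmed reinforcement-rate ratios R1/R2 across conditions (illustrative).
log_r = np.log(np.array([0.25, 0.5, 1.0, 2.0, 4.0]))

# Simulated log behavior ratios with undermatching (s < 1) plus noise.
s_true, log_b = 0.8, 0.0
log_b_ratio = s_true * log_r + log_b + rng.normal(0, 0.05, size=log_r.size)

# Least-squares slope of the log-log regression recovers sensitivity s.
s_hat, intercept = np.polyfit(log_r, log_b_ratio, 1)
print(f"estimated sensitivity s = {s_hat:.2f}")
```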
ENeuro Jan 2023
Review
A central question in neuroscience is how sensory inputs are transformed into percepts. At this point, it is clear that this process is strongly influenced by prior knowledge of the sensory environment. Bayesian ideal observer models provide a useful link between data and theory that can help researchers evaluate how prior knowledge is represented and integrated with incoming sensory information. However, the statistical prior employed by a Bayesian observer cannot be measured directly, and must instead be inferred from behavioral measurements. Here, we review the general problem of inferring priors from psychophysical data, and the simple solution that follows from assuming a prior that is a Gaussian probability distribution. As our understanding of sensory processing advances, however, there is an increasing need for methods to flexibly recover the shape of Bayesian priors that are not well approximated by elementary functions. To address this issue, we describe a novel approach that applies to arbitrary prior shapes, which we parameterize using mixtures of Gaussian distributions. After incorporating a simple approximation, this method produces an analytical solution for psychophysical quantities that can be numerically optimized to recover the shapes of Bayesian priors. This approach offers advantages in flexibility, while still providing an analytical framework for many scenarios. We provide a MATLAB toolbox implementing key computations described herein.
Topics: Bayes Theorem; Probability; Sensation; Normal Distribution
PubMed: 36316119
DOI: 10.1523/ENEURO.0144-22.2022
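The mixture-of-Gaussians parameterization the review describes can represent prior shapes far from any elementary function while keeping the density analytic. The minimal sketch below just evaluates such a mixture prior; the weights, means and standard deviations are arbitrary illustrative choices, not values recovered from psychophysical data, and the review's MATLAB toolbox is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

# An arbitrary (here bimodal) prior expressed as a mixture of Gaussians.
weights = np.array([0.6, 0.4])  # mixture weights, sum to 1
means = np.array([-1.0, 2.0])
sds = np.array([0.5, 1.0])

def prior_pdf(x):
    """Density of the Gaussian-mixture prior at the points x."""
    x = np.atleast_1d(x)[:, None]
    return (weights * norm.pdf(x, means, sds)).sum(axis=1)

# Sanity check: the mixture density integrates to ~1 over a wide grid.
grid = np.linspace(-5.0, 7.0, 1201)
mass = prior_pdf(grid).sum() * (grid[1] - grid[0])
print(f"total prior mass = {mass:.4f}")
```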
PloS One 2020
A new generalized linear mixed quantile model for panel data is proposed. The approach applies GEE with smoothed estimating functions, which leads to asymptotically equivalent estimation of the regression coefficients. Random effects are predicted using best linear unbiased predictors (BLUPs) based on the Tweedie exponential dispersion distributions, which cover a wide range of distributions, including widely used ones such as the normal, Poisson and gamma distributions. A Taylor expansion of the quantile estimating function is used to linearize the random effects in the quantile process. Parameter estimation is based on the Newton-Raphson iteration method. The proposed quantile mixed model gives consistent estimates that have asymptotic normal distributions. Simulation studies are carried out to investigate the small-sample performance of the proposed approach. As an illustration, the proposed method is applied to analyze the epilepsy data.
Topics: Computer Simulation; Data Interpretation, Statistical; Linear Models; Normal Distribution
PubMed: 32780767
DOI: 10.1371/journal.pone.0237326
PloS One 2013
Regression analysis with a bounded outcome is a common problem in applied statistics. Typical examples include regression models for percentage outcomes and the analysis of ratings that are measured on a bounded scale. In this paper, we consider beta regression, which is a generalization of logit models to situations where the response is continuous on the interval (0,1). Consequently, beta regression is a convenient tool for analyzing percentage responses. The classical approach to fitting a beta regression model is to use maximum likelihood estimation with subsequent AIC-based variable selection. As an alternative to this established - yet unstable - approach, we propose a new estimation technique called boosted beta regression. With boosted beta regression, estimation and variable selection can be carried out simultaneously in a highly efficient way. Additionally, both the mean and the variance of a percentage response can be modeled using flexible nonlinear covariate effects. As a consequence, the new method accounts for common problems such as overdispersion and non-binomial variance structures.
Topics: Humans; Likelihood Functions; Logistic Models; Research Design; Spatio-Temporal Analysis; Statistical Distributions
PubMed: 23626706
DOI: 10.1371/journal.pone.0061623
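The classical maximum-likelihood beta regression that this paper takes as its baseline can be sketched directly: parameterize the beta distribution by a mean mu = logit⁻¹(b0 + b1·x) and a precision phi, and maximize the likelihood. The data and coefficients below are simulated and illustrative; the paper's boosting algorithm itself is not implemented here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

rng = np.random.default_rng(5)
# Simulated percentage response on (0,1): logit(mu) = b0 + b1*x,
# constant precision phi (illustrative values).
n = 500
x = rng.normal(size=n)
mu = expit(0.5 + 1.0 * x)
phi = 10.0
y = rng.beta(mu * phi, (1 - mu) * phi)

def beta_nll(theta):
    """Negative log-likelihood of the mean/precision beta regression."""
    b0, b1, log_phi = theta
    m, p = expit(b0 + b1 * x), np.exp(log_phi)
    a, b = m * p, (1 - m) * p  # standard beta shape parameters
    ll = (gammaln(p) - gammaln(a) - gammaln(b)
          + (a - 1) * np.log(y) + (b - 1) * np.log(1 - y))
    return -ll.sum()

fit = minimize(beta_nll, x0=[0.0, 0.0, 1.0], method="Nelder-Mead")
b0_hat, b1_hat = fit.x[0], fit.x[1]
print(f"b0 = {b0_hat:.2f}, b1 = {b1_hat:.2f}")
```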