Journal of Neural Engineering, Apr 2024
The extended infomax algorithm for independent component analysis (ICA) can separate sub- and super-Gaussian signals but converges slowly because it uses stochastic gradient optimization. In this paper, an improved extended infomax algorithm is presented that converges much faster. Accelerated convergence is achieved by replacing the natural gradient learning rule of extended infomax with a fully multiplicative, orthogonal-group-based update scheme for the ICA unmixing matrix, leading to an orthogonal extended infomax algorithm (OgExtInf). The computational performance of OgExtInf was compared with that of the original extended infomax and of two fast ICA algorithms: the popular FastICA and Picard, a preconditioned limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm belonging to the family of quasi-Newton methods. OgExtInf converges much faster than the original extended infomax. For small electroencephalogram (EEG) data segments, as used for example in online EEG processing, OgExtInf is also faster than FastICA and Picard. OgExtInf may therefore be useful for fast and reliable ICA, e.g. in online systems for epileptic spike and seizure detection or in brain-computer interfaces.
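For orientation, the natural-gradient learning rule that OgExtInf replaces can be written in a few lines of NumPy. This is a minimal sketch of the classic extended-infomax update (the well-known rule with a kurtosis-based sign switch), not the paper's OgExtInf code; the learning rate and batch handling are illustrative.

```python
import numpy as np

def ext_infomax_step(W, X, lr=0.01):
    """One natural-gradient step of extended infomax (illustrative).

    W : (n, n) unmixing matrix; X : (n, T) batch of whitened signals.
    """
    n, T = X.shape
    U = W @ X                                    # current source estimates
    # Sign switch: +1 for super-Gaussian, -1 for sub-Gaussian components,
    # via the standard kurtosis-based stability criterion.
    k = np.sign(np.mean(1.0 / np.cosh(U) ** 2, axis=1) * np.mean(U**2, axis=1)
                - np.mean(np.tanh(U) * U, axis=1))
    # Natural-gradient rule: dW = (I - (K tanh(U) U^T + U U^T) / T) W
    grad = np.eye(n) - (np.diag(k) @ np.tanh(U) @ U.T + U @ U.T) / T
    return W + lr * grad @ W
```

OgExtInf's contribution, per the abstract, is to replace this additive stochastic-gradient step with a fully multiplicative update that keeps the unmixing matrix on the orthogonal group.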
Topics: Algorithms; Brain-Computer Interfaces; Electroencephalography; Learning; Normal Distribution
PubMed: 38592090
DOI: 10.1088/1741-2552/ad38db

PloS One, 2023
In probability theory and statistics, it is customary to employ unit distributions to explain practical variables taking values between zero and one. This study proposes a new distribution for modelling data on the unit interval, the unit-exponentiated Lomax (UEL) distribution. The statistical aspects of the UEL distribution are presented. The parameters of the proposed distribution are estimated using widely recognized estimation techniques: Bayesian estimation, the maximum product of spacings, and maximum likelihood. The effectiveness of the various estimators is assessed in a simulation study. Using mock-juror and food-expenditure data sets, the UEL regression model is demonstrated as an alternative to unit-Weibull regression, beta regression, and ordinary linear regression models. On COVID-19 data, the novel model outperforms several other unit distributions according to different comparison criteria.
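As a pointer to the maximum-product-of-spacings technique named above, here is a minimal sketch for a generic unit-interval model. The beta family stands in for the UEL distribution, whose density is not reproduced in the abstract; function names and starting values are illustrative.

```python
import numpy as np
from scipy import stats, optimize

def mps_fit_beta(x):
    """Maximum product of spacings for a Beta(a, b) model on (0, 1).

    Maximizes the mean log-spacing of the fitted CDF evaluated at the
    sorted sample, with 0 and 1 appended as boundary points.
    """
    x = np.sort(x)

    def neg_mean_log_spacing(theta):
        a, b = np.exp(theta)                     # keep parameters positive
        u = stats.beta.cdf(x, a, b)
        spacings = np.diff(np.concatenate(([0.0], u, [1.0])))
        return -np.mean(np.log(np.clip(spacings, 1e-300, None)))

    res = optimize.minimize(neg_mean_log_spacing, x0=np.log([1.0, 1.0]),
                            method="Nelder-Mead")
    return np.exp(res.x)

# Example: recover parameters from simulated unit-interval data.
x = stats.beta.rvs(2.0, 5.0, size=500, random_state=0)
print(mps_fit_beta(x))
```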
Topics: Humans; Models, Statistical; Likelihood Functions; Bayes Theorem; COVID-19; Linear Models
PubMed: 37463159
DOI: 10.1371/journal.pone.0288635

The Journal of Applied Laboratory..., Mar 2024
BACKGROUND
Parametric statistical methods are generally better than nonparametric ones but require that the data follow a known, usually normal, distribution. One important application is finding reference limits and detection limits. Parametric analyses yield better estimates, and better measures of their uncertainty, than nonparametric approaches, which rely solely on a few extreme values. Some reference data follow normal distributions; some can be transformed to normal; some are normal or transformable to normal apart from a few extreme values; and detection and quantitation limits can lead to data censoring.
METHODS
A quantile-quantile (QQ) toolbox provides powerful general methodology for all these settings.
RESULTS
QQ methodology leads to a family of simple methods for finding optimal power transformations, testing for normality before and after transformation, estimating reference limits, and constructing confidence intervals.
CONCLUSIONS
These parametric methods have a particular appeal for clinical laboratorians because, while statistically rigorous, they require no specialized software or statistical expertise and can be implemented even in spreadsheets. We conclude with an exploration of reference values for amyloid beta proteins associated with Alzheimer disease.
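A hedged sketch of the workflow the abstract describes, in SciPy rather than a spreadsheet: find an optimal power transformation, test normality before and after, and compute parametric reference limits. The Box-Cox family, the Shapiro-Wilk test, and the central 95% interval are standard choices assumed here, not necessarily the paper's exact procedure.

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

def reference_limits(x, alpha=0.05):
    """Central 95% reference interval via a Box-Cox power transform."""
    z, lam = stats.boxcox(x)                     # requires x > 0
    print("Shapiro-Wilk p (raw, transformed):",
          stats.shapiro(x).pvalue, stats.shapiro(z).pvalue)
    # Normal quantiles on the transformed scale ...
    lo, hi = stats.norm.ppf([alpha / 2, 1 - alpha / 2],
                            loc=z.mean(), scale=z.std(ddof=1))
    # ... mapped back to the original measurement scale.
    return inv_boxcox(lo, lam), inv_boxcox(hi, lam)
```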
Topics: Humans; Amyloid beta-Peptides; Alzheimer Disease; Reference Values; Software
PubMed: 38204173
DOI: 10.1093/jalm/jfad109

HERD, Apr 2024
OBJECTIVE
The aim of this study is to analyze the consistency, variability, and potential standardization of terminology used to describe architectural variables (AVs) and health outcomes in evidence-based design (EBD) studies.
BACKGROUND
In EBD research, consistent terminology is crucial for studying the effects of AVs on health outcomes. However, there is a possibility that diverse terms have been used by researchers, which could lead to potential confusion and inconsistencies.
METHODS
Three recent large systematic reviews were used as a source of publications, and 105 publications were extracted. The analysis aimed to extract a list of the terms used to refer to the unique concepts of AVs and health outcomes, with a specific focus on people with dementia. Each term's frequency was calculated, and statistical tests, including the χ² test and post hoc tests, were employed to compare their distributions.
RESULTS
The study identified representative terms for AVs and health outcomes, revealing the variability in terminology usage within the EBD field for dementia-friendly design. The comparative analysis of the identified terms highlighted patterns of frequency and distribution, shedding light on potential areas for standardization.
CONCLUSIONS
The findings emphasize the need for standardized terminologies in EBD to improve communication, collaboration, and knowledge synthesis. Standardization of terminology can facilitate research comparability, enhance the generalizability of findings by creating a common language across studies and practitioners, and support the development of EBD guidelines. The study contributes to the ongoing discourse on standardizing terminologies in the field and provides insights into strategies for achieving consensus among researchers, practitioners, and stakeholders in health environmental research.
Topics: Terminology as Topic; Humans; Dementia; Evidence-Based Facility Design
PubMed: 38264993
DOI: 10.1177/19375867231225395

Scientific Reports, Nov 2023
Comics are a bimodal form of art involving a mixture of text and images. Since comics require a combination of various cognitive processes to comprehend, the analysis of human comic-reading behavior sheds light on how humans process such bimodal media. In this paper, we focus on the viewing time of each comic panel as a quantitative measure of attention and analyze the statistical characteristics of the distributions of panel viewing times. We created a user interface that presents comics panel by panel and measured the viewing time of each panel in a user study. We collected data from 18 participants reading 7 comic book volumes, resulting in over 99,000 viewing-time data points, which will be released publicly. The results show that average viewing times are proportional to the text length in a panel's speech bubbles, with the rate of proportionality differing for each reader despite the bimodal setting. Additionally, we find that the viewing times of all users follow a common heavy-tailed distribution.
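A minimal sketch of the two analyses reported: a per-reader regression of viewing time on speech-bubble text length, and a heavy-tailed fit to the viewing-time distribution. The toy arrays and the log-normal family are assumptions for illustration; the abstract does not name the fitted family.

```python
import numpy as np
from scipy import stats

# Hypothetical per-panel data for one reader: viewing times (s) and
# text lengths (characters); real values would come from the data set.
t = np.array([2.1, 3.8, 1.5, 6.2, 2.9, 12.4, 4.1])
n_chars = np.array([30, 55, 18, 90, 41, 170, 60])

# Per-reader rate of proportionality between viewing time and text length.
print("seconds per character:", stats.linregress(n_chars, t).slope)

# Heavy-tailed fit to the pooled viewing-time distribution.
shape, loc, scale = stats.lognorm.fit(t, floc=0)
print("log-normal sigma:", shape)
```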
PubMed: 37985682
DOI: 10.1038/s41598-023-47120-w

Nature Ecology & Evolution, Oct 2023
Whether most species are rare or have some intermediate abundance is a long-standing question in ecology. Here, we use more than one billion observations from the Global Biodiversity Information Facility to assess global species abundance distributions (gSADs) of 39 taxonomic classes of eukaryotic organisms from 1900 to 2019. We show that, as sampling effort increases through time, the shape of the gSAD is unveiled; that is, the shape of the sampled gSAD changes, revealing the underlying gSAD. The fraction of species unveiled for each class decreases with the total number of species in that class and increases with the number of individuals sampled, with some groups, such as birds, being fully unveiled. The best statistical fit for almost all classes was the Poisson log-normal distribution. This strong evidence for a universal pattern of gSADs across classes suggests that there may be general ecological or evolutionary mechanisms governing the commonness and rarity of life on Earth.
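For concreteness, a sketch of fitting the Poisson log-normal, the distribution the study reports as the best fit for almost all classes. Its pmf has no closed form, so the mixing integral is evaluated numerically; function names, starting values, and quadrature settings are illustrative.

```python
import numpy as np
from scipy import stats, optimize
from scipy.integrate import quad

def poilog_pmf(k, mu, sigma):
    """P(K = k) when the Poisson rate is LogNormal(mu, sigma)."""
    integrand = lambda lam: (stats.poisson.pmf(k, lam)
                             * stats.lognorm.pdf(lam, s=sigma, scale=np.exp(mu)))
    return quad(integrand, 0.0, np.inf, limit=200)[0]

def poilog_fit(counts):
    """Maximum-likelihood fit of (mu, sigma) to species abundance counts."""
    def nll(p):
        mu, log_sigma = p
        return -sum(np.log(poilog_pmf(k, mu, np.exp(log_sigma)) + 1e-300)
                    for k in counts)
    res = optimize.minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])
```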
Topics: Humans; Animals; Models, Biological; Biodiversity; Biological Evolution; Birds
PubMed: 37667000
DOI: 10.1038/s41559-023-02173-y

Scientific Reports, Jul 2023
Among diseases, cancer exhibits the fastest global spread, presenting a substantial challenge for patients, their families, and their communities. This paper is devoted to modeling count data from such a disease as a special case. A new distribution, the binomial-discrete Erlang-truncated exponential (BDETE), is proposed. The BDETE is a binomial mixture in which the number of trials (parameter [Formula: see text]) follows a discrete Erlang-truncated exponential distribution. A comprehensive mathematical treatment of the proposed distribution is provided, including expressions for its density, cumulative distribution function, survival function, failure rate function, quantile function, moment generating function, Shannon entropy, order statistics, and stress-strength reliability. The distribution's parameters are estimated by maximum likelihood. Two real-world lifetime count data sets on cancer, both right-skewed and over-dispersed, are fitted with the proposed BDETE distribution to evaluate its efficacy and viability. We expect the findings to be useful in probability theory and related fields.
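The construction itself (a binomial whose number of trials is random) is easy to simulate. In this sketch a geometric distribution stands in for the discrete Erlang-truncated exponential, since the abstract does not reproduce its pmf; the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def binomial_mixture_sample(size, p=0.4):
    """Draw X | N ~ Binomial(N, p) with a random number of trials N.

    Stand-in for the BDETE construction: N here is geometric, purely for
    illustration; the paper draws N from a discrete Erlang-truncated
    exponential distribution.
    """
    n = rng.geometric(0.2, size=size)
    return rng.binomial(n, p)

x = binomial_mixture_sample(10_000)
print("mean:", x.mean(), "variance:", x.var())   # over-dispersed: var > mean
```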
Topics: Humans; Reproducibility of Results; Statistical Distributions; Entropy; Neoplasms
PubMed: 37507433
DOI: 10.1038/s41598-023-38709-2

Scientific Reports, Oct 2023
Understanding the evolution mechanism of cracks helps to evaluate the behavior and performance of rock masses and provides a theoretical basis for the mechanisms of crack propagation and instability. For this purpose, a rock mechanics testing system and an acoustic emission monitoring system were used to conduct acoustic emission localization experiments on coal samples under uniaxial compression. Based on clustering theory, the distribution pattern of microcracks and the dynamic evolution of multiple cracks were studied. The reasons for changes in the spatio-temporal entropy (H) and fractal dimension (D) of a single crack were then examined. The results show that microcracks present a statistical equilibrium distribution, that a Gaussian distribution model is applicable to clustered crack patterns, and that a machine learning method can effectively identify cracks. The fractal dimension reflects the spatial characteristics of three-dimensional elliptical cracks, and low-dimensional clustered cracks are more likely to develop into macroscopic cracks. Changes in H are related to the crack-formation process, and an abnormal H (a sudden increase or decrease) can provide precursor information about the instability of coal samples. This work provides a new method for studying crack distribution and formation and demonstrates the method's competitiveness in evaluating the damage state of coal.
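As a pointer to the fractal-dimension analysis, a minimal box-counting sketch for three-dimensional acoustic-emission event locations; the grid scales and the random stand-in point cloud are illustrative.

```python
import numpy as np

def box_counting_dimension(points, n_scales=8):
    """Estimate the fractal dimension D of a 3D point cloud by box counting."""
    pts = (points - points.min(0)) / np.ptp(points, 0)  # normalize to [0, 1]^3
    sizes = 2.0 ** -np.arange(1, n_scales + 1)          # box edge lengths
    counts = [len(np.unique(np.floor(pts / s), axis=0)) for s in sizes]
    # D is the slope of log(box count) against log(1 / box size).
    return np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)[0]

rng = np.random.default_rng(2)
events = rng.random((2000, 3))        # stand-in AE hypocenter locations
print("box-counting dimension:", box_counting_dimension(events))
```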
PubMed: 37863986
DOI: 10.1038/s41598-023-45276-z

Entropy (Basel, Switzerland), Jul 2023
Review
The paper reviews the "κ-generalized distribution", a statistical model for the analysis of income data. Basic analytical properties, interrelationships with other distributions, and standard measures of inequality such as the Gini index and the Lorenz curve are covered. An extension of the basic model that best fits wealth data is also discussed. The new and old empirical evidence presented in the article shows that the κ-generalized model of income/wealth is often in very good agreement with the observed data.
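For readers who want to experiment, the κ-generalized CDF is usually written in this literature as F(x) = 1 - exp_κ(-βx^α), with exp_κ(u) = (√(1 + κ²u²) + κu)^(1/κ). The sketch below samples from it by inverting the CDF and computes an empirical Gini index; the parameter values are illustrative, not fitted to any data set.

```python
import numpy as np

def ln_k(y, k):
    """Kaniadakis kappa-logarithm, the inverse of the kappa-exponential."""
    return (y**k - y**(-k)) / (2.0 * k)

def kappa_gen_sample(n, alpha, beta, k, rng):
    """Inverse-CDF sampling from F(x) = 1 - exp_k(-beta * x**alpha)."""
    u = rng.random(n)
    return (-ln_k(1.0 - u, k) / beta) ** (1.0 / alpha)

def gini(x):
    """Empirical Gini index of a sample."""
    x = np.sort(x)
    n = len(x)
    return 2.0 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

rng = np.random.default_rng(3)
incomes = kappa_gen_sample(100_000, alpha=2.0, beta=1.0, k=0.7, rng=rng)
print("empirical Gini:", gini(incomes))
```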
PubMed: 37628171
DOI: 10.3390/e25081141

IEEE Transactions on Visualization and..., Feb 2024
Reflectance models capture many types of visual appearance. The most plausible reflectance models follow microfacet theory, which is based on statistical surface representations with an analytic visibility term. This visibility term has a significant impact on appearance. Visibility computed with the masking term proposed by Smith [1], and revisited by Ashikhmin et al. [2], is nowadays considered the most plausible in the literature. It is simple and efficient to evaluate for statistical distributions, but it relies on assumptions that are not necessarily respected by real surfaces. This paper proposes an in-depth study of masking for meshed height-field surfaces, generated either from measured real-world materials or from functions derived from distributions of surface normals. We experimentally estimate the masking (and shadowing) of surfaces using a ray-casting technique and compare the measurements with the theoretical model of Smith and Ashikhmin et al. We show that their assumptions are too restrictive for a majority of real-world surfaces. We propose a model capable of predicting how close the theoretical masking term can be to the masking term estimated by ray casting. Although most surfaces break its assumptions, our results show that the term of Smith and Ashikhmin et al. can still be reasonably employed for a fraction of a set of more than 400 measured surfaces, with low errors compared to a ray-cast masking estimation, much lower computation times, and very similar visual appearances. Our model can be used to predict the error incurred in a physically based rendering simulation with a microfacet-based BRDF created from real-world surfaces, instead of explicitly calculating the masking term from its height field.
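For context, the Smith masking term discussed above has a simple closed form for common statistical normal distributions. A sketch for the GGX distribution follows; GGX is an assumption here, since the abstract does not commit to a specific distribution of normals.

```python
import numpy as np

def smith_g1_ggx(cos_theta, alpha):
    """Smith masking term G1 for the GGX normal distribution.

    cos_theta: cosine between the view direction and the surface normal;
    alpha: roughness. Closed form: G1 = 2 / (1 + sqrt(1 + a^2 tan^2(theta))).
    """
    tan2 = (1.0 - cos_theta**2) / cos_theta**2
    return 2.0 / (1.0 + np.sqrt(1.0 + alpha**2 * tan2))

# Masking drops quickly toward grazing angles for rough surfaces.
for deg in (0, 30, 60, 80):
    print(deg, smith_g1_ggx(np.cos(np.radians(deg)), alpha=0.5))
```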
PubMed: 38329853
DOI: 10.1109/TVCG.2024.3363659