PNAS Nexus, Nov 2022
A typical model for a gyrating engine consists of an inertial wheel powered by an energy source that generates an angle-dependent torque. Examples of such engines include a pendulum with an externally applied torque, Stirling engines, and the Brownian gyrating engine. Variations in the torque are averaged out by the inertia of the system to produce limit cycle oscillations. While torque-generating mechanisms are also ubiquitous in the biological world, where they typically feed on chemical gradients, inertia is not a property that one naturally associates with such processes. In the present work, seeking ways to dispense with the need for inertial effects, we study an inertia-less concept where the combined effect of coupled torque-producing components averages out variations in the ambient potential and helps overcome dissipative forces to allow sustained operation for vanishingly small inertia. We exemplify this inertia-less concept through analysis of two of the aforementioned engines, the Stirling engine and the Brownian gyrating engine. An analogous principle may be sought in biomolecular processes as well as in modern-day technological engines, where for the latter, the coupled torque-producing components reduce vibrations that stem from the variability of the generated torque.
PubMed: 36712376
DOI: 10.1093/pnasnexus/pgac251
Gait & Posture, Oct 2022
BACKGROUND
Balance is often affected after stroke, severely impacting activities of daily life. Conventional testing methods to assess balance provide limited information, as they are subject to floor and ceiling effects. Instrumented tests, for instance using inertial measurement units, offer a feasible and promising alternative.
RESEARCH QUESTION
We examined whether postural sway can reliably be measured in sitting and standing balance in people after stroke in clinical rehabilitation using a single inertial measurement unit. Additionally, we assessed to what extent averaging two measurements would improve test-retest reliability compared to a single measurement, and if sway features can potentially be used to monitor progression.
METHOD
Forty participants performed two assessments with a test-retest interval of 24 h. Each assessment consisted of one sitting and four standing balance conditions (eyes open, feet together, eyes closed and foam). The standing balance conditions were performed twice during both assessments. In total, 35 sway features were calculated for each condition. For the standing balance conditions, these were calculated for both single test-retest measurement and the average of the two test and retest measurements. We determined the reliability using the intraclass correlation coefficient for both single and averaged measurements. Additionally, the minimal detectable change and the relative minimal detectable change were computed.
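The analysis above rests on the intraclass correlation coefficient and the minimal detectable change. A minimal sketch of how these quantities can be computed from an n-subjects-by-2-sessions table is shown below; this is an illustrative ICC(2,1) implementation (two-way random effects, absolute agreement, single measure), not the authors' actual pipeline, and the choice of agreement SEM is an assumption.

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.

    data: array of shape (n_subjects, k_sessions), e.g. test vs. retest.
    """
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * np.sum((data.mean(axis=1) - grand) ** 2)  # between subjects
    ss_cols = n * np.sum((data.mean(axis=0) - grand) ** 2)  # between sessions
    ss_err = np.sum((data - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

def mdc95(data):
    """MDC = 1.96 * sqrt(2) * SEM, using the agreement SEM (sqrt of error MS)."""
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * np.sum((data.mean(axis=1) - grand) ** 2)
    ss_cols = n * np.sum((data.mean(axis=0) - grand) ** 2)
    ss_err = np.sum((data - grand) ** 2) - ss_rows - ss_cols
    sem = np.sqrt(ss_err / ((n - 1) * (k - 1)))
    return 1.96 * np.sqrt(2) * sem
```

With perfect agreement between sessions the ICC is exactly 1 and the MDC is 0; session-to-session noise lowers the ICC and inflates the MDC.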
RESULTS
For the single and averaged measurements, respectively, 22 sitting, 30 and 32 eyes-open, 27 and 22 feet-together, 28 and 33 eyes-closed, and 23 and 13 foam sway features showed good to excellent reliability. Overall, the differences between intraclass correlation coefficient values of the single and averaged measurements were small and inconsistent. The relative minimal detectable change ranged between 0.5 and 1.5 standard deviations.
SIGNIFICANCE
Sitting and standing balance can reliably be assessed in people after stroke in clinical rehabilitation with a single measurement using one inertial measurement unit.
Topics: Humans; Stroke Rehabilitation; Reproducibility of Results; Postural Balance; Stroke
PubMed: 36055184
DOI: 10.1016/j.gaitpost.2022.08.005
eLife, Sep 2021
Actions often require the selection of a specific goal amongst a range of possibilities, like when a softball player must precisely position her glove to field a fast-approaching ground ball. Previous studies have suggested that during goal uncertainty the brain prepares for all potential goals in parallel and averages the corresponding motor plans to command an intermediate movement that is progressively refined as additional information becomes available. Although intermediate movements are widely observed, they could instead reflect a neural decision about the single best action choice given the uncertainty present. Here we systematically dissociate these possibilities using novel experimental manipulations and find that when confronted with uncertainty, humans generate a motor plan that optimizes task performance rather than averaging potential motor plans. In addition to accurate predictions of population-averaged changes in motor output, a novel computational model based on this performance-optimization theory accounted for a majority of the variance in individual differences between participants. Our findings resolve a long-standing question about how the brain selects an action to execute during goal uncertainty, providing fundamental insight into motor planning in the nervous system.
Topics: Adolescent; Adult; Brain; Decision Making; Female; Humans; Male; Models, Biological; Movement; Uncertainty; Young Adult
PubMed: 34486520
DOI: 10.7554/eLife.67019
Acta Crystallographica. Section F,... Jan 2019
Review
Biological samples are radiation-sensitive and require imaging under low-dose conditions to minimize damage. As a result, images contain a high level of noise and exhibit signal-to-noise ratios that are typically significantly smaller than 1. Averaging techniques, either implicit or explicit, are used to overcome the limitations imposed by the high level of noise. Averaging of 2D images showing the same molecule in the same orientation results in highly significant projections. A high-resolution structure can be obtained by combining the information from many single-particle images to determine a 3D structure. Similarly, averaging of multiple copies of macromolecular assembly subvolumes extracted from tomographic reconstructions can lead to a virtually noise-free high-resolution structure. Cross-correlation methods are often used in the alignment and classification steps of averaging processes for both 2D images and 3D volumes. However, the high noise level can bias alignment and certain classification results. While other approaches may be implicitly affected, sensitivity to noise is most apparent in multireference alignments, 3D reference-based projection alignments and projection-based volume alignments. Here, the influence of the image signal-to-noise ratio on the value of the cross-correlation coefficient is analyzed and a method for compensating for this effect is provided.
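The SNR-dependence of the cross-correlation coefficient described above can be illustrated numerically. For a unit-variance reference corrupted by independent Gaussian noise of variance 1/SNR, the expected normalized cross-correlation against the clean reference is attenuated to sqrt(SNR/(1 + SNR)). The simulation below sketches this attenuation effect only; it is not the paper's compensation method.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient of two mean-subtracted signals."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
reference = rng.standard_normal(200_000)  # stand-in for a noise-free projection

for snr in (0.1, 1.0, 10.0):
    noisy = reference + rng.standard_normal(reference.size) / np.sqrt(snr)
    observed = ncc(reference, noisy)
    predicted = np.sqrt(snr / (1.0 + snr))  # expected attenuation of the CCC
    print(f"SNR={snr:5.1f}  observed={observed:.3f}  predicted={predicted:.3f}")
```

Because the attenuation depends only on the SNR, a measured correlation can in principle be rescaled by the predicted factor, which is the spirit of the compensation the paper develops.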
Topics: Algorithms; Bacterial Proteins; Cryoelectron Microscopy; Electron Transport Complex I; History, 20th Century; History, 21st Century; Humans; Image Processing, Computer-Assisted; Imaging, Three-Dimensional; Signal-To-Noise Ratio; Yarrowia
PubMed: 30605121
DOI: 10.1107/S2053230X18014036
Medical Physics, Oct 2023
BACKGROUND
Dosimetry in radionuclide therapy often requires the calculation of average absorbed doses within and between spatial regions, for example, for voxel-based dosimetry methods, for paired organs, or across multiple tumors. Formation of such averages can be made in different ways, starting from different definitions.
PURPOSE
The aim of this study is to formally specify different averaging strategies for absorbed doses, and to compare their results when applied to absorbed dose distributions that are non-uniform within and between regions.
METHODS
For averaging within regions, two definitions of the average absorbed dose are considered: the simple average over the region (the region average) and the average when weighting by the mass density (density-weighted region average). The latter is shown to follow from the definition of mean absorbed dose according to the ICRU, and to be consistent with the MIRD formalism. For averaging between different spatial regions, three definitions follow: the volume-weighted, the mass-weighted, and the unweighted average. With respect to characterizing non-uniformity, the different average definitions lead to the use of dose-volume histograms (DVHs) (region average), dose-mass histograms (DMHs) (density-weighted region average), and unweighted histograms (unweighted average). Average absorbed doses are calculated for three worked examples, starting from the different definitions. The first, schematic, example concerns the calculation of the average absorbed dose between two regions with different volumes or mass densities. The second, stylized, example concerns voxel-based dosimetry, for which the average absorbed-dose rate within a region is calculated. The geometries studied include three ¹⁷⁷Lu-filled voxelized spheres, where the sphere masses are held constant while the material compositions, densities, and volumes are varied. For comparison, the mean absorbed-dose rates obtained using unit-density sphere S-values are also included. The third example concerns SPECT/CT-based tumor dosimetry for five patients undergoing therapy with ¹⁷⁷Lu-PSMA and six patients undergoing therapy with ¹⁷⁷Lu-DOTA-TATE, for which the average absorbed-dose rates across multiple tumors are calculated. For the second and third examples, analyses also include representations by histograms.
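The distinction between the first two definitions can be made concrete with a toy computation (the dose and density values below are illustrative, not taken from the paper): for equal-volume voxels, the region average weights every voxel equally, while the density-weighted region average weights each voxel by its mass and so reproduces the ICRU-style mean absorbed dose (energy imparted divided by total mass).

```python
import numpy as np

# Equal-volume voxels spanning two materials, e.g. soft tissue and lung
dose = np.array([2.0, 2.0, 1.0, 1.0])      # absorbed dose per voxel (Gy)
density = np.array([1.0, 1.0, 0.3, 0.3])   # mass density (g/cm^3)

region_average = dose.mean()                          # simple volume-weighted mean
density_weighted = np.average(dose, weights=density)  # mass-weighted (ICRU-consistent)

print(f"region average:           {region_average:.3f} Gy")
print(f"density-weighted average: {density_weighted:.3f} Gy")
```

The two averages differ whenever dose and density are correlated across voxels, as in this example (1.500 Gy vs. about 1.769 Gy); for a uniform density they coincide.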
RESULTS
Example 1 shows that the average absorbed doses, calculated using different definitions, can differ considerably if the masses and absorbed doses for two regions are markedly different. From example 2 it is seen that the density-weighted region average is stable under different activity and density distributions and is also in line with results using S-values. In contrast, the region average varies as a function of the activity distribution. In example 3, the absorbed dose rates for individual tumors differ by (1.1 ± 4.3)% and (-0.1 ± 0.4)% with maximum deviations of +34.4% and -1.4% for ¹⁷⁷Lu-PSMA and ¹⁷⁷Lu-DOTA-TATE, respectively, when calculated as region averages or density-weighted region averages, with the largest deviations obtained when the density is non-uniform. The average absorbed doses calculated across all tumors are similar when comparing mass-weighted and volume-weighted averages, but these differ substantially from unweighted averages.
CONCLUSION
Different strategies for averaging of absorbed doses within and between regions can lead to substantially different absorbed-dose estimates. At reporting of radionuclide therapy dosimetry, it is important to specify the averaging strategy applied.
Topics: Humans; Radiopharmaceuticals; Radiometry; Single Photon Emission Computed Tomography Computed Tomography; Radioisotopes; Neoplasms
PubMed: 37272586
DOI: 10.1002/mp.16528
Surgical Neurology International, 2018
BACKGROUND
Reviewing the neurosurgical literature demonstrated that spinal neurosurgeons rarely (0.78%) diagnose Chiari-1 malformation (CM-1) in adults on magnetic resonance (MR) studies, defined by tonsillar descent >5 mm below the foramen magnum (FM). Children, averaging 10 years of age, exhibit CM-1 in 96/100,000 cases. According to the literature, fewer spinal neurosurgeons additionally recognize and treat the low-lying cerebellar tonsil (LLCT) syndrome.
METHODS
The normal location of the cerebellar tonsils on cranial/cervical MR averages 2.9 mm ± 3.4 mm above or up to 3 mm below the FM. The neurosurgical literature revealed that most neurosurgeons diagnose and treat CM-1 where the tonsils are >5 mm to an average of 12 mm below the FM. Fewer spinal neurosurgeons additionally diagnose and treat the LLCT syndrome defined by <5 mm of tonsillar descent below the FM.
RESULTS
According to the neurosurgical literature, many neurosurgeons perform cranial/spinal decompression with/without fusion and/or duraplasty for CM-1. Fewer neurosurgeons perform these procedures for CM-1 and the LLCT syndrome, for which they additionally perform preoperative cervical traction under anesthesia and postoperative placement of occipital neurostimulators (ONS) for intractable headaches following Chiari-1/LLCT surgery.
CONCLUSION
Reviewing the literature revealed that spinal neurosurgeons rarely diagnose CM-1 and treat it with decompression with/without fusion and/or duraplasty. Fewer spinal neurosurgeons diagnose/treat both the CM-1 and LLCT syndromes, perform preoperative traction under anesthesia, and place ONS for persistent headaches following CM-1 surgery.
PubMed: 30105146
DOI: 10.4103/sni.sni_208_18
JSES International, Jul 2022
BACKGROUND
Superior labrum anterior-posterior (SLAP) tears can be a career-altering injury for Major League Baseball (MLB) pitchers. Surgery and postoperative rehabilitation keep pitchers on the injured list (IL) for an extended time, which results in a significant cost to a team. To date, no analyses have focused on the financial cost of SLAP repairs in MLB pitchers.
METHODS
A retrospective review of MLB pitchers with SLAP repair from 2004 to 2019 was conducted utilizing IL and financial contract data from the MLB website. Cost of injury was calculated from salary of the player. Performance metrics including earned run average, walks + hits per innings pitched, and innings pitched (IP) were averaged for one and all seasons played before and after injury. Return to play and return to prior performance rates were calculated and reported.
RESULTS
Of the 55 players identified, 22 players (40%) returned to play and 18 of these 22 players (82%) returned to prior performance. Annual cost increased over the study period (R² = 0.288), averaging $3.5 million, and a stable average of 172 days was spent on the IL (R² = 0.001). Performance changes were negligible except for IP (106.95 vs. 50.85; P < .01) for one season before and after injury. For all seasons, earned run average and walks + hits per innings pitched significantly increased (4.13 vs. 5.19; P = .030, and 1.36 vs. 1.53; P = .033, respectively), while IP trended downward without significance (P = .058).
CONCLUSION
SLAP repairs in MLB pitchers carry a significant financial impact and time spent on the IL, which surprisingly has not changed over time. It is encouraging that pitchers who return to play do so without a profound decline in performance following SLAP repair.
PubMed: 35813154
DOI: 10.1016/j.jseint.2022.03.003
PLoS One, 2023
Various methods have been developed to combine inference across multiple sets of results for unsupervised clustering, within the ensemble clustering literature. The approach of reporting results from one 'best' model out of several candidate clustering models generally ignores the uncertainty that arises from model selection, and results in inferences that are sensitive to the particular model and parameters chosen. Bayesian model averaging (BMA) is a popular approach for combining results across multiple models that offers some attractive benefits in this setting, including probabilistic interpretation of the combined cluster structure and quantification of model-based uncertainty. In this work we introduce clusterBMA, a method that enables weighted model averaging across results from multiple unsupervised clustering algorithms. We use clustering internal validation criteria to develop an approximation of the posterior model probability, used for weighting the results from each model. From a combined posterior similarity matrix representing a weighted average of the clustering solutions across models, we apply symmetric simplex matrix factorisation to calculate final probabilistic cluster allocations. In addition to outperforming other ensemble clustering methods on simulated data, clusterBMA offers unique features including probabilistic allocation to averaged clusters, combining allocation probabilities from 'hard' and 'soft' clustering algorithms, and measuring model-based uncertainty in averaged cluster allocation. This method is implemented in an accompanying R package of the same name. We use simulated datasets to explore the ability of the proposed technique to identify robust integrated clusters with varying levels of separation between subgroups, and with varying numbers of clusters between models. 
Benchmarking accuracy against four other ensemble methods previously demonstrated to be highly effective in the literature, clusterBMA matches or exceeds the performance of competing approaches under various conditions of dimensionality and cluster separation. clusterBMA substantially outperformed other ensemble methods for high dimensional simulated data with low cluster separation, with 1.16 to 7.12 times better performance as measured by the Adjusted Rand Index. We also explore the performance of this approach through a case study that aims to identify probabilistic clusters of individuals based on electroencephalography (EEG) data. In applied settings for clustering individuals based on health data, the features of probabilistic allocation and measurement of model-based uncertainty in averaged clusters are useful for clinical relevance and statistical communication.
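The core averaging step described above, a weighted combination of per-model posterior similarity matrices, can be sketched in a few lines. This is a hand-rolled Python illustration rather than the accompanying R package: the model weights, which clusterBMA approximates from internal validation criteria, are supplied by hand here, and the final simplex factorisation step is omitted.

```python
import numpy as np

def coassignment(labels):
    """Similarity matrix of a hard clustering: 1 where two points share a cluster."""
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(float)

def weighted_similarity(sim_matrices, weights):
    """Weighted average of per-model similarity matrices (weights are normalised)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * s for wi, s in zip(w, sim_matrices))

# Two candidate clusterings of five points that disagree on point 2
model_a = coassignment([0, 0, 0, 1, 1])
model_b = coassignment([0, 0, 1, 1, 1])
combined = weighted_similarity([model_a, model_b], weights=[0.7, 0.3])
print(combined[0, 2])  # 0.7: the models disagree, weighted by model probability
```

Entries where the models agree stay at 0 or 1, while disagreements land in between, which is what gives the averaged clusters their probabilistic interpretation.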
Topics: Humans; Bayes Theorem; Algorithms; Benchmarking; Clinical Relevance; Cluster Analysis
PubMed: 37603575
DOI: 10.1371/journal.pone.0288000
Cognition, Nov 2021
Average faces have been used frequently in face recognition studies, either as a theoretical concept (e.g., face norm) or as a tool to manipulate facial attributes (e.g., modifying identity strength). Nonetheless, how the face-averaging process (the creation of average faces from an increasing number of faces) changes the resulting averaged faces and our ability to differentiate between them remains to be elucidated. Here we addressed these questions by combining 3D-face averaging, eye-movement tracking, and the computation of image-based face similarity. Participants judged whether two average faces showed the same person while we systematically increased their average level (i.e., the number of faces being averaged). Our results showed, with increasing averaging, both a nonlinear increase of the computational similarity between the resulting average faces and a nonlinear decrease of face discrimination performance. Participants' performance dropped from near-ceiling level when two different faces had been averaged together to chance level when 80 faces were mixed. We also found a nonlinear relationship between face similarity and face discrimination performance, which was well fitted by an exponential function. Furthermore, when the comparison task became more challenging, participants made more fixations onto the faces. Nonetheless, the distribution of fixations across facial features (eyes, nose, mouth, and the center area of a face) remained unchanged. These results not only set new constraints on the theoretical characterization of the average face and its role in establishing face norms, but also offer practical guidance for creating approximated face norms to manipulate face identity.
Topics: Eye; Eye Movements; Face; Facial Recognition; Humans; Mouth
PubMed: 34364004
DOI: 10.1016/j.cognition.2021.104867
Journal of Vision, 2015
Review
Visual memory can draw upon averaged perceptual representations, a dependence that could be both adaptive and obligatory. In support of this idea, we review a wide range of evidence, including findings from our own lab. This evidence shows that time- and space-averaged memory representations influence detection and recognition responses, and do so without instruction to compute or report an average. Some of the work reviewed exploits fine-grained measures of retrieval from visual short-term memory to closely track the influence of stored averages on recall and recognition of briefly presented visual textures. Results show that reliance on perceptual averages is greatest when memory resources are taxed or when subjects are uncertain about the fidelity of their memory representation. We relate these findings to models of how summary statistics impact visual short-term memory, and discuss a neural signature for contexts in which perceptual averaging exerts maximal influence.
Topics: Biometry; Humans; Memory, Short-Term; Mental Recall; Models, Theoretical; Space-Time Clustering; Visual Perception
PubMed: 26406353
DOI: 10.1167/5.4.13