Journal of Clinical Epidemiology, Mar 2021
Review
Establishing an accurate diagnosis is crucial in everyday clinical practice. It forms the starting point for clinical decision-making, for instance regarding treatment options or further testing. In this context, clinicians have to deal with probabilities (instead of certainties) that are often hard to quantify. During the diagnostic process, clinicians move from the probability of disease before testing (prior or pretest probability) to the probability of disease after testing (posterior or posttest probability) based on the results of one or more diagnostic tests. This reasoning in probabilities is reflected by a statistical theorem that has an important application in diagnosis: Bayes' rule. A basic understanding of the use of Bayes' rule in diagnosis is pivotal for clinicians. This rule shows how both the prior probability (also called prevalence) and the measurement properties of diagnostic tests (sensitivity and specificity) are crucial determinants of the posterior probability of disease (predictive value), on the basis of which clinical decisions are made. This article provides a simple explanation of the interpretation and use of Bayes' rule in diagnosis.
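The move from pretest to posttest probability described above can be sketched in a few lines. The numbers (20% pretest probability, 90% sensitivity, 80% specificity) are illustrative, not taken from the article:

```python
def posttest_probability(pretest, sensitivity, specificity):
    """P(disease | positive test) by Bayes' rule."""
    true_pos = sensitivity * pretest                 # diseased and test-positive
    false_pos = (1 - specificity) * (1 - pretest)    # healthy but test-positive
    return true_pos / (true_pos + false_pos)

p = posttest_probability(pretest=0.20, sensitivity=0.90, specificity=0.80)
print(round(p, 3))  # positive predictive value
```

Note how a moderately accurate test moves a 20% prior to only a bit over 50%: the prevalence matters as much as the test's measurement properties.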
Topics: Bayes Theorem; Clinical Decision-Making; Humans; Probability; Sensitivity and Specificity
PubMed: 33741123
DOI: 10.1016/j.jclinepi.2020.12.021

Psychonomic Bulletin & Review, Feb 2022
Review
A major hypothesis about conditionals is the Equation, in which the probability of a conditional equals the corresponding conditional probability: p(if A then C) = p(C|A). Probabilistic theories often treat it as axiomatic, whereas it follows from the meanings of conditionals in the theory of mental models. In this theory, intuitive models (system 1) do not represent what is false, and so produce errors in estimates of p(if A then C), yielding instead p(A & C). Deliberative models (system 2) are normative, and yield the proportion of cases of A in which C holds, i.e., the Equation. Intuitive estimates of the probability of a conditional about unique events ("If covid-19 disappears in the USA, then Biden will run for a second term"), together with those of each of its clauses, are liable to yield joint probability distributions that sum to over 100%. The error, which is inconsistent with the probability calculus, is massive when participants estimate the joint probabilities of conditionals with each of the different possibilities to which they refer. This result and others under review corroborate the model theory.
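The contrast between the Equation (the system-2 answer) and the intuitive system-1 estimate p(A & C) can be made concrete with a made-up joint distribution over A and C; all numbers are illustrative:

```python
joint = {                 # p(A, C): a full joint distribution, sums to 1
    (True, True): 0.30,   # A and C
    (True, False): 0.10,  # A and not-C
    (False, True): 0.20,
    (False, False): 0.40,
}

p_A = joint[(True, True)] + joint[(True, False)]   # marginal p(A) = 0.40
equation = joint[(True, True)] / p_A               # p(C|A): proportion of A-cases with C
intuitive = joint[(True, True)]                    # p(A & C): the system-1 estimate

print(equation, intuitive)
```

The two estimates differ (0.75 vs. 0.30) whenever p(A) < 1, which is the signature error the model theory predicts for intuitive responding.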
Topics: COVID-19; Humans; Judgment; Logic; Models, Psychological; Probability; Problem Solving; SARS-CoV-2
PubMed: 34173186
DOI: 10.3758/s13423-021-01938-5

Studies in History and Philosophy of..., Oct 2023
Problems with uniform probabilities on an infinite support show up in contemporary cosmology. This paper focuses on the context of inflation theory, where it complicates the assignment of a probability measure over pocket universes. The measure problem in cosmology, whereby it seems impossible to pick out a uniquely well-motivated measure, is associated with a paradox that occurs in standard probability theory and crucially involves uniformity on an infinite sample space. This problem has been discussed by physicists, albeit without reference to earlier work on this topic. The aim of this article is both to introduce philosophers of probability to these recent discussions in cosmology and to familiarize physicists and philosophers working on cosmology with relevant foundational work by Kolmogorov, de Finetti, Jaynes, and other probabilists. As such, the main goal is not to solve the measure problem, but to clarify the exact origin of some of the current obstacles. The analysis of the assumptions going into the paradox indicates that there exist multiple ways of dealing consistently with uniform probabilities on infinite sample spaces. Taking a pluralist stance towards the mathematical methods used in cosmology shows there is some room for progress with assigning probabilities in cosmological theories.
Topics: Cultural Diversity; Insufflation; Probability; Probability Theory
PubMed: 37690232
DOI: 10.1016/j.shpsa.2023.08.009

PLoS Computational Biology, Feb 2022
Is it possible to learn and create a first Hidden Markov Model (HMM) without programming skills or understanding the algorithms in detail? In this concise tutorial, we present the HMM through the 2 general questions it was initially developed to answer and describe its elements. The HMM elements include variables, hidden and observed parameters, the vector of initial probabilities, and the transition and emission probability matrices. Then we suggest a set of ordered steps for modeling the variables and illustrate them with a simple exercise of modeling and predicting transmembrane segments in a protein sequence. Finally, we show how to interpret the results of the algorithms for this particular problem. To guide the process of information input and the explicit solution of the basic HMM algorithms that answer the HMM questions posed, we developed an educational webserver called HMMTeacher. Additional solved HMM modeling exercises, along with answers to frequently asked questions, can be found in the user's manual. HMMTeacher is available at https://hmmteacher.mobilomics.org, mirrored at https://hmmteacher1.mobilomics.org. A repository with the code of the tool and the webpage is available at https://gitlab.com/kmilo.f/hmmteacher.
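The HMM elements listed above (hidden states, observed symbols, initial probabilities, transition and emission matrices) can be sketched with a toy transmembrane-segment model and a minimal Viterbi decoder. The states and probabilities below are illustrative and are not taken from HMMTeacher:

```python
states = ["membrane", "loop"]                       # hidden states
init = {"membrane": 0.5, "loop": 0.5}               # vector of initial probabilities
trans = {"membrane": {"membrane": 0.8, "loop": 0.2},
         "loop":     {"membrane": 0.2, "loop": 0.8}}  # transition matrix
emit = {"membrane": {"H": 0.8, "P": 0.2},           # H = hydrophobic residue
        "loop":     {"H": 0.3, "P": 0.7}}           # P = polar residue

def viterbi(obs):
    """Most likely hidden-state path for an observation sequence."""
    v = [{s: init[s] * emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: v[-1][p] * trans[p][s])
            col[s] = v[-1][prev] * trans[prev][s] * emit[s][o]
            ptr[s] = prev
        v.append(col)
        back.append(ptr)
    last = max(states, key=lambda s: v[-1][s])
    path = [last]
    for ptr in reversed(back):       # follow backpointers to recover the path
        path.append(ptr[path[-1]])
    return path[::-1]

print(viterbi(list("HHHPPP")))
```

With these parameters the decoder labels the hydrophobic run as "membrane" and the polar run as "loop", which is the kind of segment prediction the exercise describes.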
Topics: Algorithms; Markov Chains; Probability; Software
PubMed: 35143480
DOI: 10.1371/journal.pcbi.1009703

Trends in Cognitive Sciences, Jun 2022
Review
Life in an increasingly information-rich but highly uncertain world calls for an effective means of communicating uncertainty to a range of audiences. Senders prefer to convey uncertainty using verbal (e.g., likely) rather than numeric (e.g., 75% chance) probabilities, even in consequential domains, such as climate science. However, verbal probabilities can convey something other than uncertainty, and senders may exploit this. For instance, senders can maintain credibility after making erroneous predictions. While verbal probabilities afford ease of expression, they can be easily misunderstood, and the potential for miscommunication is not effectively mitigated by assigning (imprecise) numeric probabilities to words. When making consequential decisions, recipients prefer (precise) numeric probabilities.
Topics: Communication; Humans; Probability; Uncertainty
PubMed: 35397985
DOI: 10.1016/j.tics.2022.03.002

Molecular Biology of the Cell, Nov 2015
Single-molecule detection in fluorescence nanoscopy has become a powerful tool in cell biology but can present vexing issues in image analysis, such as limited signal, unspecific background, empirically set thresholds, image filtering, and false-positive detection limiting overall detection efficiency. Here we present a framework in which expert knowledge and parameter tweaking are replaced with a probability-based hypothesis test. Our method delivers robust and threshold-free signal detection with a defined error estimate and improved detection of weaker signals. The probability value has consequences for downstream data analysis, such as weighing a series of detections and corresponding probabilities, Bayesian propagation of probability, or defining metrics in tracking applications. We show that the method outperforms all current approaches, yielding a detection efficiency of >70% and a false-positive detection rate of <5% under conditions down to 17 photons/pixel background and 180 photons/molecule signal, which is beneficial for any kind of photon-limited application. Examples include limited brightness and photostability, phototoxicity in live-cell single-molecule imaging, and use of new labels for nanoscopy. We present simulations, experimental data, and tracking of low-signal mRNAs in yeast cells.
Topics: Bayes Theorem; Computer Simulation; Microscopy, Fluorescence; Molecular Imaging; Photons; Probability; RNA, Messenger; Saccharomyces cerevisiae
PubMed: 26424801
DOI: 10.1091/mbc.E15-06-0448

Epidemiology (Cambridge, Mass.), Sep 2019
Review
A common reason given for assessing interaction is to evaluate "whether the effect is larger in one group versus another". It has long been known that the answer to this question is scale dependent: the "effect" may be larger for one subgroup on the difference scale, but smaller on the ratio scale. In this article, we show that if the relative magnitude of effects across subgroups is of interest, then there exists an "interaction continuum" that characterizes the nature of these relations. When both main effects are positive, the placement on the continuum depends on the relative magnitude of the probability of the outcome in the doubly exposed group. For high probabilities of the outcome in the doubly exposed group, the interaction may be positive-multiplicative positive-additive, the strongest form of positive interaction on the "interaction continuum". As the probability of the outcome in the doubly exposed group goes down, the form of interaction descends through the following ranks: positive-multiplicative positive-additive, no-multiplicative positive-additive, negative-multiplicative positive-additive, negative-multiplicative zero-additive, negative-multiplicative negative-additive, single pure interaction, single qualitative interaction, single-qualitative single-pure interaction, double qualitative interaction, perfect antagonism, and inverted interaction. One can thus place a particular set of outcome probabilities into one of these eleven states on the interaction continuum. Analogous results are also given when both exposures are protective, or when one is protective and one causative. The "interaction continuum" allows for inquiries into relative effect sizes, while also acknowledging the scale dependence of the notion of interaction itself.
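The scale dependence at the heart of the continuum can be checked directly from the four outcome probabilities. The risks below are illustrative, chosen so the interaction is positive on the additive scale but negative on the multiplicative scale:

```python
def interaction(p00, p10, p01, p11):
    """Interaction contrasts from the four risks p(exposure A, exposure B).

    additive:        p11 - p10 - p01 + p00 (risk-difference scale; 0 = none)
    multiplicative:  RR11 / (RR10 * RR01) = p11*p00 / (p10*p01) (1 = none)
    """
    additive = p11 - p10 - p01 + p00
    multiplicative = (p11 * p00) / (p10 * p01)
    return additive, multiplicative

add, mult = interaction(p00=0.02, p10=0.10, p01=0.10, p11=0.30)
print(add > 0, mult < 1)   # positive-additive, yet negative-multiplicative
```

The same four probabilities thus land at different points depending on the scale, which is exactly the "negative-multiplicative positive-additive" state on the continuum.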
Topics: Causality; Effect Modifier, Epidemiologic; Environmental Exposure; Humans; Probability; Protective Factors
PubMed: 31205287
DOI: 10.1097/EDE.0000000000001054

Animal Cognition, Sep 2021
When choosing among multi-attribute options, integrating the full information may be computationally costly and time-consuming. So-called non-compensatory decision rules only rely on partial information, for example when a difference on a single attribute overrides all others. Such rules may be ecologically more advantageous, despite being economically suboptimal. Here, we present a study that investigates to what extent animals rely on integrative rules (using the full information) versus non-compensatory rules when choosing where to forage. Groups of mice were trained to obtain water from dispensers varying along two reward dimensions: volume and probability. The mice's choices over the course of the experiment suggested an initial reliance on integrative rules, later displaced by a sequential rule, in which volume was evaluated before probability. Our results also demonstrate that while the evaluation of probability differences may depend on the reward volumes, the evaluation of volume differences is seemingly unaffected by the reward probabilities.
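The contrast between an integrative rule and a volume-first sequential (non-compensatory) rule can be sketched as follows; the option values and the tie-breaking tolerance are illustrative, not fitted to the mouse data:

```python
def integrative(a, b):
    """Full-information rule: maximize expected volume (volume x probability)."""
    ev = lambda opt: opt[0] * opt[1]
    return a if ev(a) >= ev(b) else b

def sequential(a, b, volume_tolerance=0.0):
    """Non-compensatory rule: volume decides first; probability
    only breaks near-ties in volume."""
    if abs(a[0] - b[0]) > volume_tolerance:
        return a if a[0] > b[0] else b
    return a if a[1] >= b[1] else b

big_risky = (40.0, 0.25)   # (volume in uL, reward probability)
small_safe = (20.0, 0.90)

print(integrative(big_risky, small_safe))  # higher expected value: the safe option
print(sequential(big_risky, small_safe))   # larger volume overrides probability
```

The two rules disagree on this pair, which is what makes such choice sets diagnostic of which rule the animals use.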
Topics: Animals; Choice Behavior; Decision Making; Mice; Probability; Reward
PubMed: 33721139
DOI: 10.1007/s10071-021-01482-8

Journal of Clinical Hypertension..., Apr 2012
Patients don't have an "individual risk" or unique probability of an outcome. Outside Mendelian inheritance, risks are conditional probabilities and differ as the risk factors included differ, at times substantially. This lack of reliability is an inherent limitation and is not resolved by including additional risk factors. Groups of like individuals need to be assembled to measure the probability of an outcome. Many groups, like any individual, can be identified, eg, groups of the same age, sex, race, or any combination of these attributes (or any others). That each of these groups may have different risk means there is no such thing as individual risk. This issue was identified by John Venn in 1866 and is known as the reference class problem. Models relate risk factors to outcomes in populations. The number calculated for an individual should not be reported as their individual or true risk, nor should it be used as the sole criterion for clinical decisions. Instead, Feinstein proposed relying on clinically important subgroups. An example would be utilizing an individual's blood pressure as the primary determinant of hypertension treatment decisions, not an unreliable individual risk estimate.
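The reference class problem can be shown in miniature: the same person's "risk" changes with the reference group assembled. The cohort below is entirely synthetic and exists only to illustrate the point:

```python
cohort = [
    # (age_group, sex, had_event)
    ("old", "F", True), ("old", "F", False), ("old", "F", True),
    ("old", "M", False), ("old", "M", False),
    ("young", "F", True), ("young", "F", False),
    ("young", "M", False),
]

def risk(predicate):
    """Observed event probability in the subgroup selected by predicate."""
    group = [event for (age, sex, event) in cohort if predicate(age, sex)]
    return sum(group) / len(group)

# One individual -- old and female -- gets three different "risks",
# one per reference class:
print(risk(lambda a, s: a == "old"))               # among the old
print(risk(lambda a, s: s == "F"))                 # among women
print(risk(lambda a, s: a == "old" and s == "F"))  # among old women
```

Each conditional probability is correct for its group, yet no single number is "the" individual's risk, which is Venn's point.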
Topics: Biomarkers; Humans; Models, Theoretical; Predictive Value of Tests; Probability; Prognosis; Risk Assessment
PubMed: 22458749
DOI: 10.1111/j.1751-7176.2012.00592.x

Cognitive Science, Jan 2023
Though individual categorization and decision processes have been studied separately in many previous investigations, few studies have investigated how they interact by using a two-stage task of first categorizing and then deciding. To address this issue, we investigated a categorization-decision task in two experiments. In both, participants were shown six faces varying in width, first asked to categorize each face, and then to decide on a course of action for it. Each experiment included three groups, and for each group we manipulated the probabilistic contingencies between stimulus, category assignments, and decision consequences. Each participant received three different sequences of category response, category feedback, decision response, and decision feedback. We found that participants were only partially responsive, in the appropriate directions, to the contingencies assigned to each group. Comparisons of results from different sequences provided evidence for empirical interference effects of categorization on decisions. The empirical interference effect is defined as the difference between the probability of taking a hostile action in decision-alone conditions and the total probability of taking a hostile action in categorization-decision conditions. To test competing accounts of multiple empirical results, including two-stage choice probabilities and empirical interference effects, we compared a quantum cognition model with a two-stage exemplar categorization model at both the aggregate and individual levels. By the Bayesian information criterion, the quantum model provided an overall better fit than the exemplar model. Although both models predicted empirical interference effects, the exemplar model generated the probabilistic deviation by incorporating first-stage category information into the feature representation of the subsequent decision stage, whereas the quantum model produced the interference effect through superposition, measurement, and quantum entanglement.
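The empirical interference effect defined in this abstract is a violation of the classical law of total probability; with purely illustrative numbers:

```python
# Categorization-then-decision condition:
p_good = 0.4                 # p(categorize face as "good guy")
p_bad = 1 - p_good
p_attack_given_good = 0.3    # p(hostile action | categorized "good")
p_attack_given_bad = 0.7     # p(hostile action | categorized "bad")

# Decision-alone condition (no prior categorization):
p_attack_alone = 0.64

# Classical total probability of attacking after categorizing:
total_prob = p_good * p_attack_given_good + p_bad * p_attack_given_bad

# Interference effect: decision-alone minus total probability.
interference = p_attack_alone - total_prob
print(round(interference, 2))  # nonzero => law of total probability is violated
```

Classically the two conditions should agree; a reliably nonzero difference is the datum both the quantum and exemplar models must reproduce.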
Topics: Humans; Bayes Theorem; Cognition; Probability; Decision Making
PubMed: 36655984
DOI: 10.1111/cogs.13235