Medical Decision Making, 1994
Most standard discussions of Bayes' formula treat sensitivity, specificity, and the prior probability as fixed parameters for probability revision, but in fact these usually have associated variability. This in turn generates predictable patterns of uncertainty in the posterior probabilities. Although these have not been investigated in detail, they have important implications for the interpretation of posterior probabilities and the clinical use of Bayesian probability revision. For a test with a high likelihood ratio for a positive result, the positive predictive value (PPV) is strongly affected by uncertainties in the prior probability when the prior probability is small, but PPV is almost independent of such uncertainties at high values of the prior probability. The PPV is more affected by changes in specificity than by changes in sensitivity, and uncertainty in specificity has its maximal impact on the PPV at low prior probability values. These patterns are most pronounced for tests with high likelihood ratios for positive results. Similar results can be shown for the negative predictive value. These results imply that for suitably good tests, probability revision in certain definable ranges of prior probability may be so strongly affected by errors in the estimation of both the prior probability and the operating characteristics that the posterior probabilities may be unstable in practice. On the other hand, at other values of the prior probability, the posterior probabilities are almost constant, and formal probability revision will not have much impact. These patterns indicate limitations to the reliability and usefulness of calculated posterior probabilities, and have important implications for the clinical use of Bayes' formula.
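The core calculation behind this abstract can be sketched in a few lines: PPV from Bayes' formula, with illustrative (not the paper's) sensitivity and specificity values, showing how the posterior swings with the prior at low prevalence but barely moves at high prevalence.

```python
def ppv(prior, sensitivity, specificity):
    """P(disease | positive test) by Bayes' formula."""
    true_pos = sensitivity * prior
    false_pos = (1.0 - specificity) * (1.0 - prior)
    return true_pos / (true_pos + false_pos)

sens, spec = 0.95, 0.95  # high positive likelihood ratio: 0.95/0.05 = 19

# At a low prior, a small shift in the prior moves the PPV a lot...
low = [round(ppv(p, sens, spec), 3) for p in (0.01, 0.05)]
# ...while at a high prior the PPV is nearly insensitive to it.
high = [round(ppv(p, sens, spec), 3) for p in (0.80, 0.90)]
print(low, high)  # → [0.161, 0.5] [0.987, 0.994]
```

Moving the prior from 0.01 to 0.05 triples the PPV, whereas the same absolute shift at high priors changes it by well under one percentage point, which is the instability pattern the abstract describes.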
Topics: Bayes Theorem; Erythrocytes; Humans; Predictive Value of Tests; Probability; Radionuclide Imaging; Technetium; Thrombophlebitis
PubMed: 8152356
DOI: 10.1177/0272989X9401400106
Perspectives on Psychological Science, Jul 2019 (Review)
Humans frequently make inferences about uncertain future events with limited data. A growing body of work suggests that infants and other primates make surprisingly sophisticated inferences under uncertainty. First, we ask what underlying cognitive mechanisms allow young learners to make such sophisticated inferences under uncertainty. We outline three candidate views and assess the empirical evidence for each. We argue that the weight of the empirical work favors the probabilistic view, in which early reasoning under uncertainty is grounded in inferences about the relationship between samples and populations as opposed to being grounded in simple heuristics. Second, we discuss the apparent contradiction between this early-emerging sensitivity to probabilities and the decades of literature suggesting that adults show limited use of base-rate and sampling principles in their inductive inferences. Third, we ask how these early inductive abilities can be harnessed for improving later mathematics education and inductive inference. We make several suggestions for future empirical work that should go a long way in addressing the many remaining open questions in this growing research area.
Topics: Cognition; Humans; Infant; Probability; Problem Solving; Psychology, Child; Uncertainty
PubMed: 31185184
DOI: 10.1177/1745691619847201
Philosophical Transactions of the Royal Society B: Biological Sciences, Feb 2019
Modern theories of decision-making typically model uncertainty about decision options using the tools of probability theory. This is exemplified by the Savage framework, the most popular framework in decision-making research. There, decision-makers are assumed to choose from among available decision options as if they maximized subjective expected utility, which is given by the utilities of outcomes in different states weighted with subjective beliefs about the occurrence of those states. Beliefs are captured by probabilities and new information is incorporated using Bayes' Law. The primary concern of the Savage framework is to ensure that decision-makers' choices are rational. Here, we use concepts from computational complexity theory to expose two major weaknesses of the framework. Firstly, we argue that in most situations, subjective utility maximization is computationally intractable, which means that the Savage axioms are implausible. We discuss empirical evidence supporting this claim. Secondly, we argue that there exist many decision situations in which the nature of uncertainty is such that (random) sampling in combination with Bayes' Law is an ineffective strategy to reduce uncertainty. We discuss several implications of these weaknesses from both an empirical and a normative perspective. This article is part of the theme issue 'Risk taking and impulsive behaviour: fundamental discoveries, theoretical perspectives and clinical implications'.
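The Savage-style subjective expected utility (SEU) calculation the abstract critiques can be sketched minimally. The states, beliefs, and utilities below are hypothetical placeholders; the paper's point is that realistic decision problems make this maximization intractable.

```python
# Subjective probabilities over states (hypothetical beliefs).
beliefs = {"rain": 0.3, "dry": 0.7}

# Utility of each act in each state (hypothetical values).
utility = {
    "take_umbrella": {"rain": 1.0, "dry": 0.5},
    "leave_it":      {"rain": 0.0, "dry": 1.0},
}

def seu(act):
    """Subjective expected utility: beliefs-weighted sum of state utilities."""
    return sum(beliefs[s] * utility[act][s] for s in beliefs)

# SEU maximization: choose the act with the highest subjective expected utility.
best = max(utility, key=seu)
print(best, seu(best))  # → leave_it 0.7
```

With two acts and two states this is trivial; the abstract's complexity argument is that the same computation over realistically large state and act spaces is intractable.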
Topics: Bayes Theorem; Decision Making; Humans; Probability; Uncertainty
PubMed: 30966921
DOI: 10.1098/rstb.2018.0138
Cognitive Psychology, Feb 1999
Empirical studies have shown that decision makers do not usually treat probabilities linearly. Instead, people tend to overweight small probabilities and underweight large probabilities. One way to model such distortions in decision making under risk is through a probability weighting function. We present a nonparametric estimation procedure for assessing the probability weighting function and value function at the level of the individual subject. The evidence in the domain of gains supports a two-parameter weighting function, where each parameter is given a psychological interpretation: one parameter measures how the decision maker discriminates probabilities, and the other parameter measures how attractive the decision maker finds gambling. These findings are consistent with a growing body of empirical and theoretical work attempting to establish a psychological rationale for the probability weighting function.
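One widely used two-parameter form of the kind described is the linear-in-log-odds weighting function; the parameter values below are illustrative assumptions, not estimates from the paper.

```python
def w(p, gamma=0.6, delta=0.8):
    """Linear-in-log-odds probability weighting function.
    gamma: discriminability (curvature); delta: elevation (attractiveness).
    Parameter values are illustrative, not fitted estimates."""
    num = delta * p**gamma
    return num / (num + (1.0 - p)**gamma)

# With gamma < 1, small probabilities are overweighted and large ones
# underweighted, matching the pattern described in the abstract.
print(round(w(0.01), 3), round(w(0.90), 3))  # → 0.048 0.749
```

With gamma = delta = 1 the function reduces to the identity w(p) = p, i.e., linear treatment of probabilities.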
Topics: Humans; Judgment; Probability
PubMed: 10090801
DOI: 10.1006/cogp.1998.0710
Topics in Cognitive Science, Jul 2022
Scientists studying decision-making often provide a set of choices, each specified with values or distributions of values, and probabilities or distributions of probabilities. For example, "Would you prefer $100 with probability 1.0 or $1 with probability 0.9 and $1,000 with probability 0.1?" Other decision research examines choices made in the absence of most quantitative information (for example, "Would you prefer a Ford now or a Porsche a year from now?" or "Which food would you prefer?") but models the findings with precise quantitative assumptions. Yet other research does neither; for example, modeling verbally stated choices with verbally stated heuristics. This article asks about the relevance of the first two research approaches for much of the decision-making made in life. The use of quantitative research and modeling is unsurprising, given that this approach underlies most of science. In life, however, values and probabilities are almost always partly or wholly vague and qualitative rather than quantitative. For example, when deciding which house to buy, there are relevant features such as size, color, neighborhood schools, construction materials, attractiveness, and many more, but the decision-maker finds it difficult and of little use to assign these precise values or weights. Nonetheless, humans have evolved to make decisions in such vaguely specified settings. I provide an example showing how a very high degree of uncertainty can defeat the application of quantitative decision-making, but such a demonstration is not critical if quantitative research and modeling produce a good understanding of and a good approximation to decision-making in the natural environment. This perspective addresses these issues.
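For the quoted gamble, the arithmetic is worth making explicit: the two options are nearly identical in expected value, which is what makes such precisely specified choices informative.

```python
# Expected values of the abstract's example gamble.
sure_thing = 1.0 * 100              # $100 with probability 1.0
gamble = 0.9 * 1 + 0.1 * 1000      # $1 w.p. 0.9, $1,000 w.p. 0.1
print(sure_thing, gamble)           # → 100.0 100.9
```

A decision maker who treats probabilities linearly should be almost indifferent; systematic preference for one option reveals attitudes toward risk or probability weighting rather than expected value.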
Topics: Decision Making; Heuristics; Humans; Probability; Uncertainty
PubMed: 34050714
DOI: 10.1111/tops.12541
Journal of Biopharmaceutical Statistics, May 2024
Bayesian predictive probabilities have become a ubiquitous tool for design and monitoring of clinical trials. The typical procedure is to average predictive probabilities over the prior or posterior distributions. In this paper, we highlight the limitations of relying solely on averaging, and propose the reporting of intervals or quantiles for the predictive probabilities. These intervals formalize the intuition that uncertainty decreases with more information. We present four different applications (Phase 1 dose escalation, early stopping for futility, sample size re-estimation, and assurance/probability of success) to demonstrate the practicality and generality of the proposed approach.
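A stylized sketch of the proposal, assuming a simple beta-binomial single-arm trial (all counts, thresholds, and priors below are made up for illustration): compute the predictive probability of final success separately for each posterior draw of the response rate, then report quantiles of those values alongside the usual single averaged number.

```python
import math
import random

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical interim data: 12 responders out of 20, Beta(1, 1) prior.
a, b = 1 + 12, 1 + 8
n_remaining, needed = 20, 24 - 12   # success requires 24/40 responders overall

random.seed(0)
draws = [random.betavariate(a, b) for _ in range(4000)]  # posterior draws
pp = sorted(binom_tail(needed, n_remaining, t) for t in draws)

mean_pp = sum(pp) / len(pp)                                  # usual summary
lo, hi = pp[int(0.025 * len(pp))], pp[int(0.975 * len(pp))]  # 95% interval
print(round(mean_pp, 3), round(lo, 3), round(hi, 3))
```

The interval makes visible what the single average hides: with little interim information the predictive probability itself is highly uncertain, and the interval narrows as data accumulate.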
Topics: Humans; Bayes Theorem; Uncertainty; Models, Statistical; Probability; Sample Size; Research Design
PubMed: 37157818
DOI: 10.1080/10543406.2023.2204943
Forensic Science International, 1985
This paper discusses the statistical interpretation of blood group findings in paternity testing. As a consequence of the large number of systems now employed, high probabilities of paternity are usual and evaluation problems arise. The purpose of this investigation was to calculate the paternity probabilities for a sample of legitimate families with a true father compared with those obtained in some cases of non-excluded men chosen randomly from the population as the accused fathers for the same mother-child pairs. The calculations were based on the Essen-Möller formula, derived from Bayes' theorem. The blood group systems taken into account were ABO, Rh, MNSs, Kell-Cellano, P1, Duffy, Lutheran, Kidd, Gc, Hp, Gm, Km, Tf, alpha 1-AT, AcP, PGM1, AK, ADA, EsD, 6-PGDH and GLO-I. Applied together, these give an exclusion probability of 97.32%. The results of probability of paternity for some mother-child-father triplets and its comparison with the chance of exclusion for the same mother-child pairs are reported.
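The Essen-Möller calculation and the combined exclusion probability can be sketched with hypothetical numbers (the per-system likelihoods and exclusion probabilities below are illustrative, not the paper's data).

```python
def essen_moller(x, y):
    """Probability of paternity W = X / (X + Y), i.e., Bayes' theorem with a
    0.5 prior. X: P(genetic findings | alleged father is the true father);
    Y: P(genetic findings | a random man is the father)."""
    return x / (x + y)

# Likelihoods multiply across independent blood-group systems (hypothetical):
x = 0.5 * 0.25 * 0.5     # per-system P(evidence | father)
y = 0.10 * 0.05 * 0.20   # per-system P(evidence | random man)
print(round(essen_moller(x, y), 4))  # → 0.9843

# Combined power of exclusion from independent systems (hypothetical values):
pe = [0.20, 0.18, 0.30]  # per-system exclusion probabilities
combined = 1.0
for p in pe:
    combined *= (1.0 - p)
print(round(1.0 - combined, 4))  # → 0.5408
```

The same product rule over the paper's 21 systems is what yields its reported combined exclusion probability of 97.32%.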
Topics: Humans; Male; Paternity; Probability
PubMed: 4076951
DOI: 10.1016/0379-0738(85)90113-6
Journal of Experimental Psychology: Learning, Memory, and Cognition, Jul 2007
Choice strategies for selecting among outcomes in multiple-cue probability learning were investigated using a simulated medical diagnosis task. Expected choice probabilities (the proportion of times each outcome was selected given each cue pattern) under alternative choice strategies were constructed from corresponding observed judged probabilities (of each outcome given each cue pattern) and compared with observed choice probabilities. Most of the participants were inferred to have responded by using a deterministic strategy, in which the outcome with the higher judged probability is consistently chosen, rather than a probabilistic strategy, in which an outcome is chosen with a probability equal to its judged probability. Extended practice in the learning environment did not affect choice strategy selection, contrary to reports from previous studies, results of which may instead be attributable to changes with practice in the variability and extremity of the perceived probabilities on which the choices were based.
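The two candidate strategies compared in the abstract can be sketched directly; the judged probabilities per cue pattern below are hypothetical.

```python
# Hypothetical judged P(outcome A | cue pattern) for a two-outcome task.
judged = {
    "pattern1": 0.70,
    "pattern2": 0.55,
    "pattern3": 0.20,
}

def deterministic(p_a):
    """Maximizing: always choose the outcome with the higher judged probability."""
    return 1.0 if p_a > 0.5 else 0.0

def probabilistic(p_a):
    """Matching: choose outcome A with probability equal to its judged probability."""
    return p_a

# Expected choice probabilities under each strategy, to be compared
# against observed choice proportions per cue pattern.
for pattern, p in judged.items():
    print(pattern, deterministic(p), probabilistic(p))
```

The paper's inference works by comparing observed choice proportions against these two predicted profiles; most participants matched the deterministic profile.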
Topics: Choice Behavior; Cues; Humans; Learning; Probability
PubMed: 17576152
DOI: 10.1037/0278-7393.33.4.757
The probability of correctly resolving a split as an experimental design criterion in phylogenetics.
Systematic Biology, Oct 2012
We illustrate how recently developed large sequence-length approximations to probabilities of correct phylogenetic reconstruction for maximum likelihood estimation can be used to evaluate experimental design strategies. The specific criterion of interest is the probability of correctly resolving an a priori defined split of interest in a phylogenetic tree. Design strategies considered include increased taxon sampling and increasing sequence length. Our analyses of specific examples strongly suggest that it is better to sample taxa that connect as closely as possible to the split of interest. Assuming this can be done, these examples suggest it is better to sample additional taxa than to add a comparable number of sites for the existing taxa. If the rates of evolution in the added taxa are slow, it is better to choose taxa connecting to a long edge, but if rates are comparable to a sister lineage, it is not necessarily the best strategy to sample taxa connected to a long edge. We also examined deleting taxa while increasing the number of sites. Although deleting a small number of taxa distant from the split of interest can be beneficial, deleting too many or making poor choices as to what should be deleted can lead to smaller probabilities of correct reconstruction than for the original sequence data.
Topics: Classification; Embryophyta; Evolution, Molecular; Genes, Plant; Likelihood Functions; Phylogeny; Probability; Research Design
PubMed: 22337142
DOI: 10.1093/sysbio/sys033
Establishing a New Link between Fuzzy Logic, Neuroscience, and Quantum Mechanics through Bayesian Probability: Perspectives in Artificial Intelligence and Unconventional Computing.
Molecules (Basel, Switzerland), Oct 2021
Human interaction with the world is dominated by uncertainty. Probability theory is a valuable tool for facing such uncertainty. According to the Bayesian definition, probabilities are personal beliefs. Experimental evidence supports the notion that human behavior is highly consistent with Bayesian probabilistic inference in the sensory, motor, and cognitive domains. All the higher-level psychophysical functions of our brain are believed to take the activities of interconnected and distributed networks of neurons in the neocortex as their physiological substrate. Neurons in the neocortex are organized in cortical columns that behave as fuzzy sets. Fuzzy set theory embraced uncertainty modeling when membership functions were reinterpreted as possibility distributions. The terms of Bayes' formula can be conceived of as fuzzy sets, and Bayes' inference becomes a fuzzy inference. According to QBism, quantum probabilities are also Bayesian: they are logical constructs rather than physical realities. It follows that the Born rule is nothing but a kind of quantum law of total probability. Wavefunctions and measurement operators are viewed epistemically; both are similar to fuzzy sets. The new link established between fuzzy logic, neuroscience, and quantum mechanics through Bayesian probability could spark new ideas for the development of artificial intelligence and unconventional computing.
Topics: Artificial Intelligence; Bayes Theorem; Brain; Fuzzy Logic; Humans; Neurosciences; Probability; Quantum Theory
PubMed: 34641530
DOI: 10.3390/molecules26195987