The American Journal of Cardiology, Aug 1989 (Comparative Study)
A new discriminant function model for estimating probabilities of angiographic coronary disease was tested for reliability and clinical utility in 3 patient test groups. This model, derived from the clinical and noninvasive test results of 303 patients undergoing angiography at the Cleveland Clinic in Cleveland, Ohio, was applied to a group of 425 patients undergoing angiography at the Hungarian Institute of Cardiology in Budapest, Hungary (disease prevalence 38%); 200 patients undergoing angiography at the Veterans Administration Medical Center in Long Beach, California (disease prevalence 75%); and 143 such patients from the University Hospitals in Zurich and Basel, Switzerland (disease prevalence 84%). The probabilities produced by the Cleveland algorithm were compared with those from CADENZA, a Bayesian algorithm derived from published medical studies, applied to the same 3 patient test groups. Both algorithms overpredicted the probability of disease at the Hungarian and American centers. Overprediction was more pronounced with CADENZA (average overestimation 16% vs 10% and 11% vs 5%, p < 0.001). In the Swiss group, the discriminant function underestimated disease probability (by 7%) and CADENZA slightly overestimated it (by 2%). Clinical utility, assessed as the percentage of patients correctly classified, was modestly superior for the new discriminant function compared with CADENZA in the Hungarian group and similar in the American and Swiss groups. It was concluded that coronary disease probabilities derived from discriminant functions are reliable and clinically useful when applied to patients with chest pain syndromes and intermediate disease prevalence.
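The abstract does not reproduce the discriminant function's coefficients, but the two moving parts it compares — mapping a score to a probability, and the prevalence dependence that drives over- or underprediction when a model is transported to a population with a different base rate — can be sketched. Both function names and the illustrative figures below are mine, not the paper's:

```python
import math

def logistic_probability(score: float) -> float:
    """Map a linear discriminant score to a disease probability.
    Illustrative link function; the paper's actual model is not reproduced here."""
    return 1.0 / (1.0 + math.exp(-score))

def adjust_for_prevalence(p_model: float, prev_train: float, prev_target: float) -> float:
    """Re-calibrate a probability produced by a model trained at one disease
    prevalence for a population with another, by swapping the prior odds
    (a standard Bayes-rule correction)."""
    odds = p_model / (1.0 - p_model)
    # divide out the training-population prior odds, multiply in the target's
    odds *= (prev_target / (1.0 - prev_target)) / (prev_train / (1.0 - prev_train))
    return odds / (1.0 + odds)

# A model probability of 0.5, trained where prevalence was 50%, drops to 0.38
# when re-anchored to a 38%-prevalence population like the Hungarian cohort.
p_adjusted = adjust_for_prevalence(0.5, 0.5, 0.38)
```

Failing to make this kind of prior correction is one mechanism by which both algorithms could systematically overpredict at the lower-prevalence Hungarian center.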
Topics: Algorithms; Angiography; Bayes Theorem; California; Coronary Angiography; Coronary Disease; Female; Humans; Hungary; Male; Middle Aged; Ohio; Probability; Switzerland
PubMed: 2756873
DOI: 10.1016/0002-9149(89)90524-9
The American Psychologist, Apr 2021 (Review)
Intelligence analysis is fundamentally an exercise in expert judgment made under conditions of uncertainty. These judgments are used to inform consequential decisions. Following the major intelligence failure that led to the 2003 war in Iraq, intelligence organizations implemented policies for communicating probability in their assessments. Virtually all chose to convey probability using standardized linguistic lexicons in which an ordered set of select probability terms (e.g., highly likely) is associated with numeric ranges (e.g., 80-90%). We review the benefits and drawbacks of this approach, drawing on psychological research on probability communication and studies that have examined the effectiveness of standardized lexicons. We further discuss how numeric probabilities can overcome many of the shortcomings of linguistic probabilities. Numeric probabilities are not without drawbacks (e.g., they are more difficult to elicit and may be misunderstood by receivers with poor numeracy). However, these drawbacks can be ameliorated with training and practice, whereas the pitfalls of linguistic probabilities are endemic to the approach. We propose that, on balance, the benefits of using numeric probabilities outweigh their drawbacks. Given the enormous costs associated with intelligence failure, the intelligence community should reconsider its reliance on using linguistic probabilities to convey probability in intelligence assessments. Our discussion also has implications for probability communication in other domains such as climate science.
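A standardized lexicon of the kind the review describes is just an ordered mapping from terms to numeric bands. The terms and ranges below are illustrative assumptions for the sketch (agencies differ on both), not any official standard:

```python
# Illustrative standardized lexicon: ordered probability terms with numeric bands.
# These particular terms and cut points are assumptions, not an agency's actual table.
LEXICON = [
    ("remote",             0.00, 0.05),
    ("very unlikely",      0.05, 0.20),
    ("unlikely",           0.20, 0.45),
    ("roughly even chance", 0.45, 0.55),
    ("likely",             0.55, 0.80),
    ("very likely",        0.80, 0.95),
    ("almost certain",     0.95, 1.00),
]

def term_for(p: float) -> str:
    """Translate a numeric probability into its lexicon term.
    Boundary values fall into the lower band (first match wins)."""
    for term, lo, hi in LEXICON:
        if lo <= p <= hi:
            return term
    raise ValueError("probability must be in [0, 1]")
```

The lossiness is visible in the interface itself: `term_for` collapses an exact number into a 5-to-25-point band, which is the information thrown away when assessments are communicated only linguistically.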
Topics: Communication; Decision Making; Humans; Judgment; Linguistics; Policy Making; Probability; Uncertainty
PubMed: 32700939
DOI: 10.1037/amp0000637
Studies in History and Philosophy of..., Oct 2023
Problems with uniform probabilities on an infinite support show up in contemporary cosmology. This paper focuses on the context of inflation theory, where it complicates the assignment of a probability measure over pocket universes. The measure problem in cosmology, whereby it seems impossible to pick out a uniquely well-motivated measure, is associated with a paradox that occurs in standard probability theory and crucially involves uniformity on an infinite sample space. This problem has been discussed by physicists, albeit without reference to earlier work on this topic. The aim of this article is both to introduce philosophers of probability to these recent discussions in cosmology and to familiarize physicists and philosophers working on cosmology with relevant foundational work by Kolmogorov, de Finetti, Jaynes, and other probabilists. As such, the main goal is not to solve the measure problem, but to clarify the exact origin of some of the current obstacles. The analysis of the assumptions going into the paradox indicates that there exist multiple ways of dealing consistently with uniform probabilities on infinite sample spaces. Taking a pluralist stance towards the mathematical methods used in cosmology shows there is some room for progress with assigning probabilities in cosmological theories.
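The core difficulty the article refers to can be stated in one line: no countably additive probability measure is uniform on a countably infinite sample space. A sketch of the standard argument (not the article's own formulation), taking $\mathbb{N}$ as the sample space:

```latex
P(\{n\}) = c \ \text{ for all } n \in \mathbb{N}
\quad\Longrightarrow\quad
P(\mathbb{N}) \;=\; \sum_{n=1}^{\infty} P(\{n\}) \;=\;
\begin{cases}
0, & c = 0,\\[2pt]
\infty, & c > 0,
\end{cases}
```

so no constant $c$ yields $P(\mathbb{N}) = 1$. Escaping the paradox means giving up one of the assumptions — uniformity, countable additivity (de Finetti's finitely additive route), or the standard normalization — which is the sense in which multiple consistent treatments exist.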
Topics: Cultural Diversity; Insufflation; Probability; Probability Theory
PubMed: 37690232
DOI: 10.1016/j.shpsa.2023.08.009
Risk Analysis: An Official Publication..., Feb 2009 (Review)
Communicating probability information about risks to the public is more difficult than might be expected. Many studies have examined this subject, but their resulting recommendations are scattered across various publications and diverse research fields, and concern different presentation formats. It would therefore be useful to integrate the empirical findings in one review, both to describe the evidence base for communicating probability information and to present the recommendations that can be made so far. We categorized the studies by presentation format: frequencies; percentages; base rates and proportions; absolute and relative risk reduction; cumulative probabilities; verbal probability information; numerical versus verbal probability information; graphs; and risk ladders. We offer several recommendations for these formats. Based on the results of our review, we show that the effects of presentation format depend not only on the type of format but also on the context in which the format is used. We argue that the presentation format has the strongest effect when the receiver processes probability information heuristically rather than systematically. We conclude that future research and risk communication practitioners should concentrate not only on the presentation format of the probability information but also on the situation in which the message is presented, as this may predict how people process the information and how this influences their interpretation of the risk.
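Two of the formats the review contrasts — absolute and relative risk reduction — describe the same treatment effect with very different rhetorical force. A minimal sketch (the function name and the example figures are mine, not the review's):

```python
def risk_reduction(p_control: float, p_treatment: float) -> tuple:
    """Return the absolute and relative risk reduction for two event rates."""
    arr = p_control - p_treatment   # absolute risk reduction (difference in rates)
    rrr = arr / p_control           # relative risk reduction (fraction of baseline risk)
    return arr, rrr

# The same effect in two formats: a drop from a 2% event rate to 1% is a
# 1-percentage-point absolute reduction, but a 50% relative reduction.
arr, rrr = risk_reduction(0.02, 0.01)
```

The relative figure sounds far larger, which illustrates the review's point that format choice matters most when receivers process the numbers heuristically rather than systematically.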
Topics: Communication; Female; Humans; Male; Models, Theoretical; Patient Education as Topic; Patient Participation; Physician-Patient Relations; Probability; Risk; Risk Assessment; Risk Factors
PubMed: 19000070
DOI: 10.1111/j.1539-6924.2008.01137.x
Neuron, Oct 2016
In this issue of Neuron, Orbán et al. (2016) test whether the brain represents probabilities by sampling: do neurons interpret the world by generating causal explanations of sense data and quickly sample different interpretations over time? Orbán et al. (2016) find agreement between this model's predictions and neural data.
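The sampling hypothesis being tested can be caricatured in a few lines: at each moment the circuit commits to one causal interpretation of the sense data, drawn with its posterior probability, so that the distribution is recovered by averaging over time. This toy (the two interpretation labels and probabilities are invented for illustration) is not Orbán et al.'s model, only the representational idea it tests:

```python
import random

def sample_interpretations(posterior: dict, n_steps: int, seed: int = 0) -> dict:
    """Toy sampling-based representation: at each time step draw one causal
    interpretation with its posterior probability; return the empirical
    frequencies, which approach the posterior as n_steps grows."""
    rng = random.Random(seed)
    labels = list(posterior)
    weights = [posterior[k] for k in labels]
    draws = rng.choices(labels, weights=weights, k=n_steps)
    return {k: draws.count(k) / n_steps for k in labels}

# A 70/30 posterior over two hypothetical interpretations of an ambiguous image.
freqs = sample_interpretations({"vertical": 0.7, "horizontal": 0.3}, 10000)
```

On this view, trial-to-trial neural variability is not noise but the sampling process itself, which is what makes the hypothesis testable against recorded activity.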
Topics: Brain; Neurons; Probability; Time
PubMed: 27764660
DOI: 10.1016/j.neuron.2016.10.007 -
TAG. Theoretical and Applied Genetics..., Apr 2022
We propose using probability concepts from Bayesian models to leverage a more informed decision-making process toward cultivar recommendation in multi-environment trials. Statistical models that capture the phenotypic plasticity of a genotype across environments are crucial in plant breeding programs to potentially identify parents, generate offspring, and obtain highly productive genotypes for target environments. In this study, our aim is to leverage concepts of Bayesian models and probability methods of stability analysis to untangle genotype-by-environment interaction (GEI). The proposed method employs the posterior distribution obtained with the No-U-Turn sampler algorithm to derive Hamiltonian Monte Carlo estimates of adaptation and stability probabilities. We applied the proposed models to two empirical tropical datasets. Our findings provide a basis to enhance our ability to account for the uncertainty of cultivar recommendation for global or specific adaptation. We further demonstrate that probability methods of stability analysis in a Bayesian framework are a powerful tool for unraveling GEI given a defined intensity of selection, resulting in a more informed decision-making process toward cultivar recommendation in multi-environment trials.
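Once posterior draws are in hand (the paper uses NUTS/HMC samples from its own model), an "adaptation probability" of the kind described reduces to counting draws. The interface below is a hypothetical simplification, not the paper's implementation:

```python
import numpy as np

def prob_superior(draws: np.ndarray, threshold: float) -> np.ndarray:
    """Posterior probability that each genotype's performance exceeds a
    threshold, estimated as the fraction of MCMC draws above it.
    `draws` has one row per posterior sample and one column per genotype."""
    return (draws > threshold).mean(axis=0)

# Synthetic posterior draws for two hypothetical genotypes: one centered at 1.0,
# one at 0.0 (arbitrary performance units invented for this sketch).
rng = np.random.default_rng(0)
draws = np.column_stack([rng.normal(1.0, 0.1, 5000),
                         rng.normal(0.0, 0.1, 5000)])
p_adapted = prob_superior(draws, 0.5)
```

Ranking genotypes by such probabilities, rather than by point estimates alone, is what lets the recommendation carry its uncertainty with it.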
Topics: Bayes Theorem; Environment; Genotype; Plant Breeding; Probability
PubMed: 35192008
DOI: 10.1007/s00122-022-04041-y
Conservation Biology: the Journal of..., Aug 2022
The Judas technique is often used in control or eradication of particular vertebrate pests. The technique exploits the tendency of individuals to form social groups. A radio collar is affixed to an individual and its subsequent monitoring facilitates the detection of other conspecifics. Efficacy of this technique would be improved if managers could estimate the probability that a Judas individual would detect conspecifics. To calculate this probability, we estimated association rates of Judas individuals with other Judas individuals, given the length of time the Judas individual has been deployed. We developed a simple model of space-use for individual Judas animals and constrained detection probabilities to those specific areas. We then combined estimates for individual Judas animals to infer the probability that a wild individual could be detected in an area of interest via Judas surveillance. We illustrated the method by using data from a feral goat eradication program on Isla Santiago, Galápagos, and a feral pig eradication program on Santa Cruz Island, California. Association probabilities declined as the proximity between individual areas of use of a Judas pair decreased. Unconditional probabilities of detection within individual areas of use averaged 0.09 per month for feral pigs and 0.11 per month for feral goats. Probabilities that eradication had been achieved, given no detections of wild conspecifics and an uninformative prior probability of eradication, were 0.79 (90% CI 0.22-0.99) for feral goats and 0.87 (90% CI 0.44-1.0) for feral pigs. We envisage several additions to the analyses used that could improve estimates of Judas detection probability. Analyses such as these can help managers increase the efficacy of eradication efforts, leading to more effective efforts to restore native biodiversity.
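The "probability that eradication had been achieved, given no detections" is a textbook Bayes update: if eradication succeeded, silence is certain; if an animal survives, silence over n survey periods has probability (1 − p)ⁿ for a per-period detection probability p. The function below is a deliberately simpler stand-in for the paper's full model:

```python
def posterior_eradication(prior: float, p_detect: float, n_periods: int) -> float:
    """Posterior probability of eradication after n survey periods with no
    detections, given a per-period detection probability for a surviving animal.
    Simplified Bayes update, not the paper's full spatial model."""
    p_silence_if_present = (1.0 - p_detect) ** n_periods
    return prior / (prior + (1.0 - prior) * p_silence_if_present)

# Illustration with the abstract's feral-pig figure of 0.09/month and an
# uninformative 0.5 prior: a year of silence gives a posterior of about 0.76.
p = posterior_eradication(0.5, 0.09, 12)
```

The update makes the management trade-off explicit: low per-period detection probabilities mean many silent periods are needed before declaring eradication with confidence.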
Topics: Animals; Animals, Wild; Biodiversity; Conservation of Natural Resources; Probability; Vertebrates
PubMed: 35122326
DOI: 10.1111/cobi.13898
Topics in Cognitive Science, Jan 2018
We describe a computational model of two central aspects of people's probabilistic reasoning: descriptive probability estimation and inferential probability judgment. This model assumes that people's reasoning follows standard frequentist probability theory, but it is subject to random noise. This random noise has a regressive effect in descriptive probability estimation, moving probability estimates away from normative probabilities and toward the center of the probability scale. This random noise has an anti-regressive effect in inferential judgment, however. These regressive and anti-regressive effects explain various reliable and systematic biases seen in people's descriptive probability estimation and inferential probability judgment. The model predicts that these contrary effects will tend to cancel out in tasks that involve both descriptive estimation and inferential judgment, leading to unbiased responses in those tasks. We test this model by applying it to one such task, described by Gallistel et al. Participants' median responses in this task were unbiased, agreeing with normative probability theory over the full range of responses. Our model captures the pattern of unbiased responses in this task, while simultaneously explaining systematic biases away from normatively correct probabilities seen in other tasks.
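The regressive effect of random noise on descriptive estimation can be simulated directly: treat each remembered instance as read with some error probability d, so the expected estimate is p(1 − 2d) + d, pulled from p toward 0.5. This is a sketch consistent with the abstract's description (the symbols p and d and the parameter values are from this sketch, not the paper):

```python
import random

def noisy_frequency_estimate(p_true: float, d: float,
                             n_events: int = 100_000, seed: int = 1) -> float:
    """Probability estimation as noisy frequency counting: each remembered
    event is read incorrectly with probability d, so the expected estimate
    is p*(1 - 2d) + d, regressed from p toward 0.5."""
    rng = random.Random(seed)
    count = 0
    for _ in range(n_events):
        event = rng.random() < p_true   # did the event actually occur?
        if rng.random() < d:            # memory trace read with error
            event = not event
        count += event
    return count / n_events

# With p = 0.9 and d = 0.2, the expected estimate is 0.9*0.6 + 0.2 = 0.74:
# a true probability of 0.9 is systematically underestimated toward 0.5.
est = noisy_frequency_estimate(0.9, 0.2)
```

The model's interesting prediction is that the same noise acts anti-regressively when the estimate feeds an inference, so tasks combining both stages can come out unbiased.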
Topics: Computer Simulation; Humans; Judgment; Models, Psychological; Probability; Probability Theory
PubMed: 29383882
DOI: 10.1111/tops.12319
British Journal of Plastic Surgery, Nov 1988 (Review)
Recent publications indicate an increasing interest in the description and assessment of risks related to medical and surgical treatment. The law may come to require fuller disclosure of therapeutic risks than was formerly the case. Against this background the mortality risks of plastic surgery procedures are considered in relation to other well documented mortality probabilities. The possible implications for clinical practice and "informed consent" are discussed.
Topics: Adolescent; Adult; Aged; Child; Child, Preschool; Humans; Infant; Middle Aged; Probability; Risk Factors; Surgery, Plastic
PubMed: 3061540
DOI: 10.1016/0007-1226(88)90179-8
Radiology, Jan 2003
In this article, a summary of the basic rules of probability using examples of their application in radiology is presented. Those rules describe how probabilities may be combined to obtain the chance of "success" with either of two diagnostic or therapeutic procedures or with both. They define independence and relate it to the conditional probability. They describe the relationship (Bayes rule) between sensitivity, specificity, and prevalence on the one hand and the positive and negative predictive values on the other. Finally, the two distributions most commonly encountered in statistical models of radiologic data are presented: the binomial and normal distributions.
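The Bayes-rule relationship the article describes — sensitivity, specificity, and prevalence on one side, predictive values on the other — is compact enough to write out (the function name and example figures are mine):

```python
def predictive_values(sens: float, spec: float, prev: float) -> tuple:
    """Bayes rule relating a test's sensitivity and specificity, plus disease
    prevalence, to its positive and negative predictive values."""
    ppv = sens * prev / (sens * prev + (1.0 - spec) * (1.0 - prev))
    npv = spec * (1.0 - prev) / (spec * (1.0 - prev) + (1.0 - sens) * prev)
    return ppv, npv

# A test with 90% sensitivity and 90% specificity at 10% prevalence: only half
# of positive results are true positives, while negatives are highly reliable.
ppv, npv = predictive_values(0.9, 0.9, 0.1)
```

The example shows why prevalence belongs in the formula: the same test characteristics yield very different positive predictive values in screening versus high-risk populations.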
Topics: Bayes Theorem; Models, Statistical; Predictive Value of Tests; Probability; Radiology
PubMed: 12511662
DOI: 10.1148/radiol.2261011712