The Journal of Neuroscience, Apr 2021
Humans do not have an accurate representation of probability information in the environment but distort it in a surprisingly stereotyped way ("probability distortion"), as shown in a wide range of judgment and decision-making tasks. Many theories hypothesize that humans automatically compensate for the uncertainty inherent in probability information ("representational uncertainty") and that probability distortion is a consequence of uncertainty compensation. Here we examined whether and how the representational uncertainty of probability is quantified in the human brain and its relevance to probability distortion behavior. Human subjects (13 female and 9 male) continuously tracked the relative frequency of one color of dot in a sequence of dot arrays while their brain activity was recorded by MEG. We found converging evidence from both neural entrainment and time-resolved decoding analyses that a mathematically derived measure of representational uncertainty is automatically computed in the brain, even though it is not explicitly required by the task. In particular, the encodings of relative frequency and its representational uncertainty occur at latencies of ∼300 and 400 ms, respectively. The relative strength of the brain responses to these two quantities correlates with probability distortion behavior. The automatic and fast encoding of representational uncertainty provides a neural basis for the uncertainty-compensation hypothesis of probability distortion. More generally, since representational uncertainty is closely related to confidence estimation, our findings exemplify how confidence might emerge before perceptual judgment. Human perception of probabilities and relative frequencies can be markedly distorted, which is a potential source of disastrous decisions. But the brain is not simply ignorant of probability; probability distortions are highly patterned and similar across different tasks.
Recent theoretical work suggests that probability distortions arise from the brain's compensation of its own uncertainty in representing probability. Is such representational uncertainty really computed in the brain? To answer this question, we asked human subjects to track an ongoing stimulus sequence of relative frequencies and recorded their brain responses using MEG. Indeed, we found that the neural encoding of representational uncertainty accompanies that of relative frequency, although the former is not explicitly required by the task.
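The stereotyped distortion described above is commonly modeled in this literature with a linear-in-log-odds (LLO) function; the sketch below is illustrative only, and the parameter values (gamma, p0) are assumptions, not estimates from this study.

```python
import math

def distort(p, gamma=0.6, p0=0.37):
    """Linear-in-log-odds (LLO) probability distortion.

    logit(pi(p)) = gamma * logit(p) + (1 - gamma) * logit(p0);
    gamma < 1 overweights small and underweights large probabilities.
    Parameter values here are illustrative, not fitted to data.
    """
    logit = lambda q: math.log(q / (1 - q))
    z = gamma * logit(p) + (1 - gamma) * logit(p0)
    return 1 / (1 + math.exp(-z))

print(distort(0.05))  # > 0.05: small probabilities are overweighted
print(distort(0.95))  # < 0.95: large probabilities are underweighted
```

The crossover point p0 is the probability represented without distortion: distort(p0) equals p0 for any gamma.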
Topics: Adolescent; Adult; Algorithms; Behavior; Brain; Brain Mapping; Color Perception; Decision Making; Female; Frontal Lobe; Humans; Judgment; Linear Models; Magnetoencephalography; Male; Models, Psychological; Perception; Probability; Reaction Time; Uncertainty; Young Adult
PubMed: 33674418
DOI: 10.1523/JNEUROSCI.2006-20.2021
Journal of Speech, Language, and..., Mar 2021
Purpose Of the three currently recognized variants of primary progressive aphasia, behavioral differentiation between the nonfluent/agrammatic (nfvPPA) and logopenic (lvPPA) variants is particularly difficult. The challenge includes uncertainty regarding diagnosis of apraxia of speech, which is subsumed within criteria for variant classification. The purpose of this study was to determine the extent to which a variety of speech articulation and prosody metrics for apraxia of speech differentiate between nfvPPA and lvPPA across diverse speech samples. Method The study involved 25 participants with progressive aphasia (10 with nfvPPA, 10 with lvPPA, and 5 with the semantic variant). Speech samples included a word repetition task, a picture description task, and a story narrative task. We completed acoustic analyses of temporal prosody and quantitative perceptual analyses based on narrow phonetic transcription and then evaluated the degree of differentiation between nfvPPA and lvPPA participants (with the semantic variant serving as a reference point for minimal speech production impairment). Results Most, but not all, articulatory and prosodic metrics differentiated statistically between the nfvPPA and lvPPA groups. Measures of distortion frequency, syllable duration, syllable scanning, and, to a limited extent, syllable stress and phonemic accuracy showed greater impairment in the nfvPPA group. Contrary to expectations, classification was most accurate in connected speech samples. A customized connected speech metric, the narrative syllable duration, yielded excellent to perfect classification accuracy. Discussion Measures of average syllable duration in multisyllabic utterances are useful diagnostic tools for differentiating between nfvPPA and lvPPA, particularly when based on connected speech samples. As such, they are suitable candidates for automatization, large-scale study, and application to clinical practice.
The observation that both speech rate and distortion frequency differentiated more effectively in connected speech than on a motor speech examination suggests that it will be important to evaluate interactions between speech and discourse production in future research.
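The headline metric, narrative syllable duration, is an average syllable duration over connected speech; a minimal sketch (the durations, syllable counts, and the 0.3 s cutoff below are illustrative, not the study's values):

```python
def mean_syllable_duration(utterance_s, n_syllables):
    """Average syllable duration in seconds for one utterance."""
    return utterance_s / n_syllables

# Illustrative values: slowed, scanned speech vs typical fluent speech
nfv_like = mean_syllable_duration(3.2, 8)  # 0.4 s/syllable
lv_like = mean_syllable_duration(1.6, 8)   # 0.2 s/syllable

# A hypothetical classification rule based on a duration cutoff
def flag_slowed_speech(duration_s_per_syllable, cutoff=0.3):
    return duration_s_per_syllable > cutoff

print(flag_slowed_speech(nfv_like), flag_slowed_speech(lv_like))  # True False
```

In the study, the actual cutoff and the syllable counting procedure would be derived from the connected speech samples themselves.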
Topics: Aphasia, Primary Progressive; Apraxias; Benchmarking; Humans; Semantics; Speech
PubMed: 33630653
DOI: 10.1044/2020_JSLHR-20-00445
Current Biology, Apr 2021
Dreams take us to a different reality, a hallucinatory world that feels as real as any waking experience. These often-bizarre episodes are emblematic of human sleep but have yet to be adequately explained. Retrospective dream reports are subject to distortion and forgetting, presenting a fundamental challenge for neuroscientific studies of dreaming. Here we show that individuals who are asleep and in the midst of a lucid dream (aware of the fact that they are currently dreaming) can perceive questions from an experimenter and provide answers using electrophysiological signals. We implemented our procedures for two-way communication during polysomnographically verified rapid-eye-movement (REM) sleep in 36 individuals. Some had minimal prior experience with lucid dreaming, others were frequent lucid dreamers, and one was a patient with narcolepsy who had frequent lucid dreams. During REM sleep, these individuals exhibited various capabilities, including performing veridical perceptual analysis of novel information, maintaining information in working memory, computing simple answers, and expressing volitional replies. Their responses included distinctive eye movements and selective facial muscle contractions, constituting correctly answered questions on 29 occasions across 6 of the individuals tested. These repeated observations of interactive dreaming, documented by four independent laboratory groups, demonstrate that phenomenological and cognitive characteristics of dreaming can be interrogated in real time. This relatively unexplored communication channel can enable a variety of practical applications and a new strategy for the empirical exploration of dreams.
Topics: Adolescent; Adult; Communication; Dreams; Female; Humans; Male; Polysomnography; Research Personnel; Research Subjects; Researcher-Subject Relations; Sleep, REM; Young Adult
PubMed: 33607035
DOI: 10.1016/j.cub.2021.01.026
Design and Comparative Performance of a Robust Lung Auscultation System for Noisy Clinical Settings. IEEE Journal of Biomedical and Health..., Jul 2021
Chest auscultation is a widely used clinical tool for respiratory disease detection. The stethoscope has undergone a number of transformative enhancements since its invention, including the introduction of electronic systems in the last two decades. Nevertheless, stethoscopes remain riddled with a number of issues that limit their signal quality and diagnostic capability, rendering both traditional and electronic stethoscopes unusable in noisy or non-traditional environments (e.g., emergency rooms, rural clinics, ambulatory vehicles). This work outlines the design and validation of an advanced electronic stethoscope that dramatically reduces external noise contamination through hardware redesign and real-time, dynamic signal processing. The proposed system takes advantage of an acoustic sensor array, an external facing microphone, and on-board processing to perform adaptive noise suppression. The proposed system is objectively compared to six commercially-available acoustic and electronic devices in varying levels of simulated noisy clinical settings and quantified using two metrics that reflect perceptual audibility and statistical similarity, normalized covariance measure (NCM) and magnitude squared coherence (MSC). The analyses highlight the major limitations of current stethoscopes and the significant improvements the proposed system makes in challenging settings by minimizing both distortion of lung sounds and contamination by ambient noise.
Topics: Auscultation; Humans; Lung; Noise; Respiratory Sounds; Stethoscopes
PubMed: 33534721
DOI: 10.1109/JBHI.2021.3056916
Clinical Linguistics & Phonetics, Dec 2021
The extent to which treatment of speech errors that are phonetic in nature (i.e., distortions) produces generalization to untrained sounds is not well understood. This case study reports a child referred for treatment of a velarized distortion of American English /ɹ/, who also demonstrated an inconsistent velarized distortion of /l/. Acoustic analysis revealed evidence of a covert contrast between /ɹ/ and /l/ prior to treatment. Ultrasound biofeedback treatment and perceptual training targeted /ɹ/ only, but progress was tracked for both /ɹ/ and /l/. Substantial improvements in perceptually rated accuracy and significant changes in acoustic features were observed for both sounds, indicating generalization. These results highlight that generalization from trained to untrained sounds is possible for children with residual speech errors characterized by phonetic distortions.
Topics: Child; Humans; Phonetics; Speech; Speech Production Measurement; Speech Sound Disorder; Ultrasonography; United States
PubMed: 33530759
DOI: 10.1080/02699206.2021.1879273
Cognitive Science, Jan 2021
In a series of three behavioral experiments, we found a systematic distortion of probability judgments concerning elementary visual stimuli. Participants were briefly shown a set of figures that had two features (e.g., a geometric shape and a color) with two possible values each (e.g., triangle or circle and black or white). A figure was then drawn, and participants were informed about the value of one of its features (e.g., that the figure was a "circle") and had to predict the value of the other feature (e.g., whether the figure was "black" or "white"). We repeated this procedure for various sets of figures and, by varying the statistical association between features in the sets, we manipulated the probability of a feature given the evidence of another (e.g., the posterior probability of hypothesis "black" given the evidence "circle") as well as the support provided by a feature to another (e.g., the impact, or confirmation, of evidence "circle" on the hypothesis "black"). Results indicated that participants' judgments were deeply affected by impact, although they only should have depended on the probability distributions over the features, and that the dissociation between evidential impact and posterior probability increased the number of errors. The implications of these findings for lower and higher level cognitive models are discussed.
Topics: Humans; Judgment; Probability
PubMed: 33398915
DOI: 10.1111/cogs.12919
Current Biology, Mar 2021
How do we estimate the position of an object in the world around us? Naturally, we would direct our gaze to that object. Accordingly, neural motor coordinates entail the distance of external objects and thus might be used to derive perceptual estimates. Several general frameworks in the history of perceptual science have offered such a view. However, a mechanism showing how motor and visual processes communicate remains elusive. Here, we report that every post-saccadic error biases visual localization in a serially dependent manner. In order to simulate a realignment of visual space through motor coordinates, we induced an artificial de-alignment between visual and motor space. We found that when saccades were performed under this distortion, post-saccadic error information clearly realigned visual and motor space, again in a serially dependent manner. These results demonstrate that the consequences of every saccade directly influence where we see objects in the world. On a neural basis, this requires that motor signals, which are generated close to the saccade production machinery, are relayed to cortical areas and shape visual space. This view is consistent with recent electrophysiological findings of post-saccadic error processing in posterior parietal cortex.
Topics: Adult; Distance Perception; Female; Humans; Male; Models, Neurological; Parietal Lobe; Photic Stimulation; Saccades
PubMed: 33290742
DOI: 10.1016/j.cub.2020.11.032
Entropy (Basel, Switzerland), Jan 2020
Data hiding is the art of embedding data into a cover image without any perceptual distortion of the cover image. Moreover, data hiding is a very crucial research topic in information security because it can be used for various applications. In this study, we proposed a high-capacity data-hiding scheme for absolute moment block truncation coding (AMBTC) decompressed images. We statistically analyzed the composition of the secret data string and developed a unique encoding and decoding dictionary search for adjusting pixel values. The dictionary was used in the embedding and extraction stages. The dictionary provides high data-hiding capacity because the secret data was compressed using dictionary-based coding. The experimental results of this study reveal that the proposed scheme is better than the existing schemes, with respect to the data-hiding capacity and visual quality.
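The paper's scheme is specific to AMBTC-compressed images and dictionary coding; as a generic illustration of the underlying idea (embedding bits while keeping pixel changes imperceptible), here is a minimal least-significant-bit sketch, which is not the proposed method:

```python
import numpy as np

def embed_lsb(cover, bits):
    """Hide a bit string in the least significant bits of pixel values.
    (Generic illustration only; the paper's AMBTC dictionary scheme differs.)"""
    stego = cover.flatten().copy()
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | b  # max per-pixel change: 1 gray level
    return stego.reshape(cover.shape)

def extract_lsb(stego, n):
    """Recover the first n embedded bits."""
    return [int(v) & 1 for v in stego.flatten()[:n]]

cover = np.arange(16, dtype=np.uint8).reshape(4, 4)  # toy 4x4 cover image
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, secret)
print(extract_lsb(stego, len(secret)))  # [1, 0, 1, 1, 0, 0, 1, 0]
print(np.abs(stego.astype(int) - cover.astype(int)).max())  # at most 1
```

Dictionary-based coding as in the paper raises capacity by compressing the secret string before embedding; the LSB step above only bounds the per-pixel distortion at one gray level.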
PubMed: 33285920
DOI: 10.3390/e22020145
Trends in Hearing, 2020
The sources and consequences of a sensorineural hearing loss are diverse. While several approaches have aimed at disentangling the physiological and perceptual consequences of different etiologies, hearing deficit characterization and rehabilitation have been dominated by the results from pure-tone audiometry. Here, we present a novel approach based on data-driven profiling of perceptual auditory deficits that attempts to represent auditory phenomena that are usually hidden by, or entangled with, audibility loss. We hypothesize that the hearing deficits of a given listener, both at hearing threshold and at suprathreshold sound levels, result from two independent types of "auditory distortions." In this two-dimensional space, four distinct "auditory profiles" can be identified. To test this hypothesis, we gathered a data set consisting of a heterogeneous group of listeners that were evaluated using measures of speech intelligibility, loudness perception, binaural processing abilities, and spectrotemporal resolution. The subsequent analysis revealed that distortion type-I was associated with elevated hearing thresholds at high frequencies and reduced temporal masking release and was significantly correlated with elevated speech reception thresholds in noise. Distortion type-II was associated with low-frequency hearing loss and abnormally steep loudness functions. The auditory profiles represent four robust subpopulations of hearing-impaired listeners that exhibit different degrees of perceptual distortions. The four auditory profiles may provide a valuable basis for improved hearing rehabilitation, for example, through profile-based hearing-aid fitting.
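The profiling idea, four subpopulations in a two-dimensional distortion space, can be sketched as a nearest-centroid assignment; all coordinates below are synthetic stand-ins, not the study's listener data:

```python
import numpy as np

# Hypothetical listener coordinates in (type-I, type-II) distortion space
rng = np.random.default_rng(2)
centers = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # four profiles
listeners = np.vstack([c + 0.05 * rng.standard_normal((20, 2)) for c in centers])

def assign_profiles(x, centers):
    """Assign each listener to the nearest profile centroid."""
    d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
    return d.argmin(axis=1)

labels = assign_profiles(listeners, centers)
print(np.bincount(labels))  # roughly 20 listeners per profile
```

In practice the centroids themselves would be estimated from the test battery (e.g., by clustering) rather than fixed in advance.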
Topics: Audiology; Audiometry, Pure-Tone; Auditory Threshold; Hearing; Hearing Loss, Sensorineural; Humans; Noise; Perceptual Masking; Speech Perception
PubMed: 33272110
DOI: 10.1177/2331216520973539
Medical Physics, Jun 2021
PURPOSE
To improve image quality and computed tomography (CT) number accuracy of daily cone beam CT (CBCT) through a deep learning methodology with generative adversarial network.
METHODS
One hundred and fifty paired pelvic CT and CBCT scans were used for model training and validation. An unsupervised deep learning method, a 2.5D pixel-to-pixel generative adversarial network (GAN) model with feature mapping, was proposed. A total of 12,000 slice pairs of CT and CBCT were used for model training, while ten-fold cross-validation was applied to verify model robustness. Paired CT-CBCT scans from an additional 15 pelvic patients and 10 head-and-neck (HN) patients, with CBCT images collected on a different machine, were used for independent testing. Besides the proposed method, other network architectures were also tested: 2D vs 2.5D; GAN model with vs without feature mapping; GAN model with vs without additional perceptual loss; and previously reported models such as U-net and cycleGAN with or without identity loss. Image quality of the deep-learning-generated synthetic CT (sCT) images was quantitatively compared against the reference CT (rCT) images using the mean absolute error (MAE) in Hounsfield units (HU) and the peak signal-to-noise ratio (PSNR). The dosimetric calculation accuracy was further evaluated with both photon and proton beams.
RESULTS
The deep-learning-generated sCTs showed improved image quality with reduced artifact distortion and improved soft-tissue contrast. The proposed 2.5D Pix2pix GAN with feature matching (FM) was the best model among all tested methods, producing the highest PSNR and the lowest MAE relative to rCT. The dose distribution demonstrated high accuracy in the scope of photon-based planning, yet more work is needed for proton-based treatment. Once the model was trained, it took 11-12 ms to process one slice, and could generate a 3D volume of dCBCT (80 slices) in less than a second using an NVIDIA GeForce GTX Titan X GPU (12 GB, Maxwell architecture).
CONCLUSION
The proposed deep learning algorithm is promising to improve CBCT image quality in an efficient way, thus has a potential to support online CBCT-based adaptive radiotherapy.
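The two image-quality metrics used here, MAE in Hounsfield units and PSNR, can be sketched directly; the HU data range assumed for PSNR and the synthetic slices below are illustrative:

```python
import numpy as np

def mae_hu(sct, rct):
    """Mean absolute error in Hounsfield units between synthetic and reference CT."""
    return float(np.mean(np.abs(sct - rct)))

def psnr(sct, rct, data_range=4096.0):
    """Peak signal-to-noise ratio in dB; data_range is an assumed HU span."""
    mse = np.mean((sct - rct) ** 2)
    return float(10 * np.log10(data_range**2 / mse))

rng = np.random.default_rng(1)
rct = rng.uniform(-1000, 2000, size=(64, 64))  # toy reference CT slice
sct = rct + rng.normal(0, 20, size=rct.shape)  # toy synthetic CT, ~20 HU error
print(mae_hu(sct, rct), psnr(sct, rct))
```

A higher PSNR and a lower MAE both indicate closer agreement with the reference CT, which is how the models above were ranked.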
Topics: Cone-Beam Computed Tomography; Deep Learning; Humans; Image Processing, Computer-Assisted; Radiotherapy Planning, Computer-Assisted; Spiral Cone-Beam Computed Tomography; Tomography, X-Ray Computed
PubMed: 33259647
DOI: 10.1002/mp.14624