Heliyon, Sep 2023
Medical video watermarking is an effective and efficient tool for protecting sensitive patient data from illicit copying and redistribution. In this paper, a new blind watermarking scheme is proposed to improve the confidentiality, integrity, authenticity, and perceptual quality of a medical video with minimal distortion. The proposed scheme is based on 2D-DWT and dual Hessenberg-QR decomposition: the input medical video is first split into frames, the frames are transformed into sub-bands using 2D-DWT, and Hessenberg-QR decomposition is then applied to the selected HL2 wavelet sub-band. The watermark is scrambled via the Arnold cat map to raise confidentiality and then concealed in the modified selected features. The watermark is extracted in a fully blind mode without referencing the original video, which reduces extraction time. Compared with existing methods, the proposed scheme maintains a favorable tradeoff between robustness and visual imperceptibility against many commonly encountered attacks. Visual imperceptibility was evaluated using the well-known metrics PSNR, SSIM, Q-index, and histogram analysis. The proposed scheme achieves a high PSNR of 70.6899 dB with minimal distortion and a high robustness level, with an average NC value of 0.9998 and a BER of 0.0023, while preserving a large payload capacity. The obtained results show superior performance over similar video watermarking methods. The limitation of this scheme is the time elapsed during the embedding process, since dual Hessenberg-QR decomposition is used; one possible way to reduce time consumption is to use simpler decompositions such as SVD or similar.
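The Arnold cat map scrambling step mentioned in this abstract can be sketched as follows. This is a minimal NumPy illustration of the standard map (x, y) → ((x + y) mod N, (x + 2y) mod N), not the paper's exact implementation; the iteration count is an assumed parameter (the paper does not specify it here).

```python
import numpy as np

def arnold_cat_map(img: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Scramble a square image with the Arnold cat map:
    (x, y) -> ((x + y) mod N, (x + 2y) mod N).

    The map is a bijection on the N x N pixel grid, so pixel values are
    permuted (never lost), and iterating long enough restores the image.
    """
    assert img.shape[0] == img.shape[1], "Arnold cat map needs a square image"
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out
```

Because the map's matrix [[1, 1], [1, 2]] has determinant 1, it is invertible modulo N, so the watermark can be descrambled exactly at extraction time.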
PubMed: 37809959
DOI: 10.1016/j.heliyon.2023.e19809
Journal of Racial and Ethnic Health..., Sep 2023
PURPOSE
The purpose of this study was to evaluate the differences in perceptual and attitudinal body image between White and African-American males and females matched for sex, age, BMI, and other body composition components using a combination of 3-dimensional mobile digital imaging analysis (DIA) and the Multidimensional Body-Self Relations Questionnaire-Appearance Scale (MBSRQ-AS).
METHODS
One hundred non-Hispanic White (n=50) and non-Hispanic African-American (n=50) adults (M=34, F=66) matched for sex, age, BMI, and body composition components completed this cross-sectional study. Participants underwent several anthropometric assessments, completed the MBSRQ-AS, and rated their perceived appearance, ideal appearance, and the appearance they believed a partner would find societally attractive using a state-of-the-art mobile 3-dimensional DIA produced using broad developmental populations. Body image distortion was measured as the perceived minus actual appearance, and body image dissatisfaction was defined as the ideal appearance, and the appearance a partner would find attractive, minus the perceived appearance.
RESULTS
Using the DIA, only African-American females demonstrated significant body image distortion (p<0.001), reporting perceived appearances significantly lower than their actual appearance. Further, African-American females demonstrated significantly larger differences between their ideal and perceived appearance (p=0.009), perceived larger bodies as more attractive to a potential partner (p=0.009), and reported higher ratings of appearance evaluation (p=0.001) and body area satisfaction (p=0.011) compared with White females.
CONCLUSIONS
After accounting for all anthropometric determinants of body image, perceptual and attitudinal body image differs between White and African-American adults with differences supporting larger body size acceptance for African-American individuals, particularly African-American females.
PubMed: 37749440
DOI: 10.1007/s40615-023-01799-9
IEEE Transactions on Image Processing..., 2023
Perceptual video quality assessment (VQA) is an integral component of many streaming and video sharing platforms. Here we consider the problem of learning perceptually relevant video quality representations in a self-supervised manner. Distortion type identification and degradation level determination is employed as an auxiliary task to train a deep learning model containing a deep Convolutional Neural Network (CNN) that extracts spatial features, as well as a recurrent unit that captures temporal information. The model is trained using a contrastive loss and we therefore refer to this training framework and resulting model as CONtrastive VIdeo Quality EstimaTor (CONVIQT). During testing, the weights of the trained model are frozen, and a linear regressor maps the learned features to quality scores in a no-reference (NR) setting. We conduct comprehensive evaluations of the proposed model against leading algorithms on multiple VQA databases containing wide ranges of spatial and temporal distortions. We analyze the correlations between model predictions and ground-truth quality ratings, and show that CONVIQT achieves competitive performance when compared to state-of-the-art NR-VQA models, even though it is not trained on those databases. Our ablation experiments demonstrate that the learned representations are highly robust and generalize well across synthetic and realistic distortions. Our results indicate that compelling representations with perceptual bearing can be obtained using self-supervised learning.
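The contrastive objective described for CONVIQT can be illustrated with a minimal InfoNCE-style loss, where embeddings of two views of the same clip form positive pairs and other clips in the batch serve as negatives. This is an illustrative NumPy sketch under assumed conventions (batch layout, temperature value, cosine similarity), not the paper's exact loss implementation.

```python
import numpy as np

def info_nce_loss(z1: np.ndarray, z2: np.ndarray, temperature: float = 0.1) -> float:
    """Minimal InfoNCE: z1[i] and z2[i] are embeddings of two views of
    clip i; every z2[j] with j != i acts as a negative for z1[i]."""
    # L2-normalize so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives sit on the diagonal; minimize their negative log-likelihood
    return float(-np.mean(np.diag(log_probs)))
```

Minimizing this loss pulls the two views of each clip together in embedding space while pushing apart embeddings of different clips, which is the mechanism by which distortion-aware representations emerge without human quality labels.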
PubMed: 37676804
DOI: 10.1109/TIP.2023.3310344
Journal of Experimental Psychology..., Jan 2024
Comparing a visual memory with new visual stimuli can bias memory content, especially when the new stimuli are perceived as similar. Perceptual comparisons of this kind may play a mechanistic role in memory updating and can explain how memories become erroneous in daily life. To test this possibility, we investigated whether comparisons can produce other types of memory distortion beyond memory bias that are commonly implicated in erroneous memories (e.g., memory misattribution). We hypothesized that the type of memory distortion induced during a comparison depends on the perceived overlap between the memory and incoming stimulus: when the input is perceived as similar, it biases memory content; when perceived as the same, it replaces memory content. Participants completed a delayed estimation task in which they compared their memories of color (Experiment 1) and shape stimuli (Experiment 2) to probe stimuli before reporting memory content. We found systematic errors in participants' memory reports following perceived similarity and sameness that were directed toward the probes and larger following perceived sameness. Simulations confirmed that these errors were not explained by noisy encoding processes that occurred before comparisons. Instead, computational modeling suggested that these errors were likely explained by probabilistic replacement of the memory by the probe following perceived sameness and integration between the memory and the probe following perceived similarity. Together, these findings suggest that perceptual comparisons can prompt distinct forms of memory updating that have been described previously and may explain how memories become erroneous during their use in everyday behavior.
Topics: Humans; Judgment; Memory; Computer Simulation
PubMed: 37650822
DOI: 10.1037/xge0001469
Quarterly Journal of Experimental..., Sep 2023
It has been proposed that autistic people experience a temporal distortion whereby the temporal binding window of multisensory integration is extended. Research to date has focused on autistic children, so whether these differences persist into adulthood remains unknown. In addition, the possibility that the previous observations have arisen from between-group differences in response bias, rather than perceptual differences, has not been addressed. Participants completed simultaneity judgements of audiovisual speech stimuli across a range of stimulus-onset asynchronies. Response times and accuracy data were fitted to a drift-diffusion model so that the drift rate (a measure of processing efficiency) and starting point (response bias) could be estimated. In Experiment 1, we tested a sample of non-autistic adults who completed the Autism Quotient questionnaire. Autism Quotient score was not correlated with either drift rate or response bias, nor were there between-group differences when splitting based on the first and third quantiles of scores. In Experiment 2, we compared the performance of a group of autistic adults with a group of non-autistic adults. There were no between-group differences in either drift rate or starting point. The results of this study do not support the previous suggestion that autistic people have an extended temporal binding window for audiovisual speech. In addition, exploratory analysis revealed that operationalising the temporal binding window in different ways influenced whether a group difference was observed, which is an important consideration for future work.
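The two parameters fitted in this study, drift rate (processing efficiency) and starting point (response bias), can be illustrated by simulating single drift-diffusion trials. This is a generic textbook-style sketch in NumPy, not the fitting procedure the authors used (they fitted response times and accuracy jointly); the boundary, noise, and time-step values are assumptions.

```python
import numpy as np

def simulate_ddm(drift: float, start: float = 0.5, threshold: float = 1.0,
                 noise: float = 1.0, dt: float = 0.001, rng=None):
    """Simulate one drift-diffusion trial.

    drift:  rate of evidence accumulation (processing efficiency)
    start:  relative starting point in (0, 1); 0.5 = unbiased
    Returns (choice, reaction_time): choice 1 if the upper boundary
    is hit first, 0 for the lower boundary.
    """
    rng = rng or np.random.default_rng()
    x = (2 * start - 1) * threshold  # map relative start onto [-threshold, threshold]
    t = 0.0
    while abs(x) < threshold:
        # Euler-Maruyama step: deterministic drift plus Gaussian diffusion noise
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= threshold else 0), t
```

Shifting `start` away from 0.5 changes which response dominates even with zero drift, which is exactly why separating response bias from drift rate matters when interpreting simultaneity-judgement differences between groups.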
PubMed: 37593957
DOI: 10.1177/17470218231197518
Royal Society Open Science, Aug 2023
Prolonged visual exposure to large bodies produces a thinning aftereffect on subsequently seen bodies, and vice versa. This visual adaptation effect could contribute to the link between media exposure and body shape misperception. Indeed, people exposed to thin bodies in the media, who experience fattening aftereffects, may internalize the distorted image of their body they see in the mirror. This preregistered study tested this internalization hypothesis by exposing 196 young women to an obese adaptor before showing them their reflection in the mirror, or to a control condition. Then, we used a psychophysical task to measure the effects of this procedure on perceptual judgements about their own body size, relative to another body and to the control mirror exposure condition. We found moderate evidence against the hypothesized self-specific effects of mirror exposure on perceptual judgements. Our work strengthens the idea that body size adaptation affects the perception of test stimuli rather than the participants' own body image. We discuss recent studies which may provide an alternative framework to study media-related distortions of perceptual body image.
PubMed: 37593706
DOI: 10.1098/rsos.221589
Journal of Vision, Aug 2023
Wearable optics have a broad range of uses, for example, in refractive spectacles and augmented/virtual reality devices. Despite the long-standing and widespread use of wearable optics in vision care and technology, user discomfort remains an enduring mystery. Some of this discomfort is thought to derive from optical image minification and magnification. However, there is limited scientific data characterizing the full range of physical and perceptual symptoms caused by minification or magnification during daily life. In this study, we aimed to evaluate sensitivity to changes in retinal image size introduced by wearable optics. Forty participants wore 0%, 2%, and 4% radially symmetric optical minifying lenses binocularly (over both eyes) and monocularly (over just one eye). Physical and perceptual symptoms were measured during tasks that required head movement, visual search, and judgment of world motion. All lens pairs except the controls (0% binocular) were consistently associated with increased discomfort along some dimension. Greater minification tended to be associated with greater discomfort, and monocular minification was often, but not always, associated with greater symptoms than binocular minification. Furthermore, our results suggest that dizziness and visual motion were the most reported physical and perceptual symptoms during naturalistic tasks. This work establishes preliminary guidelines for tolerances to binocular and monocular image size distortion in wearable optics.
Topics: Humans; Eye; Refraction, Ocular; Vision, Ocular; Vision, Low; Wearable Electronic Devices; Vision, Binocular
PubMed: 37552022
DOI: 10.1167/jov.23.8.10
Neural Networks..., Sep 2023
This paper proposes an unsupervised image-to-image (UI2I) translation model, called Perceptual Contrastive Generative Adversarial Network (PCGAN), which mitigates the distortion problem to enhance the performance of traditional UI2I methods. PCGAN is designed as a two-stage UI2I model: the first stage leverages a novel image warping to transform the shapes of objects in the input (source) images, and the second stage applies residual prediction to refine the first-stage outputs. To improve the image warping, a loss function called Perceptual Patch-Wise InfoNCE is developed in the PCGAN to effectively capture the visual correspondences between warped images and refined images. Experimental results on quantitative evaluation and visual comparison over UI2I benchmarks show that PCGAN is superior to the other existing methods considered here.
Topics: Benchmarking; Image Processing, Computer-Assisted
PubMed: 37541163
DOI: 10.1016/j.neunet.2023.07.010
Cognition, Oct 2023
Perceptual distraction distorts visual working memory representations. Previous research has shown that memory responses are systematically biased towards passively viewed visual distractors that are similar to the memoranda. However, it remains unclear whether the prioritization of one working memory representation over another reduces the impact of perceptual distractors. We designed a study with five different types of visual distraction that varied in engagement and found evidence for both subtle distortions and catastrophic failures of memory. Importantly, prioritization protected working memories from catastrophic loss (fewer "swap errors") but rendered them more vulnerable to distortion (greater attractive "biases" towards the distractor). Our findings demonstrate that prioritization does not simply protect working memory from any and all interference, but rather it reduces the likelihood of catastrophic disruption from perceptual distraction at the cost of an increased likelihood of distortion.
Topics: Humans; Memory, Short-Term; Visual Perception; Attention; Probability; Bias
PubMed: 37541028
DOI: 10.1016/j.cognition.2023.105574