European Review For Medical and..., Mar 2022
OBJECTIVE
COVID-19 has been associated with a wide range of quantitative and qualitative disorders of smell, including hyposmia/anosmia, parosmia, and phantosmia; however, no reports to date have described hyperosmia as a sequela of SARS-CoV-2 infection.
PATIENTS AND METHODS
We present two cases of subjective hyperosmia in a family from the South Tyrolean Alps, occurring within days of recovery from SARS-CoV-2 infection with transient anosmia.
RESULTS
The subjects, a mother and son, exhibited subjective hyperosmia despite normal objective olfactory testing. During independent assessments, the severity of hyperosmia and specific odors affected were highly correlated, consistent with shared genetic and environmental factors. In contrast, two other family members with COVID-19 had no perceptual distortion and normal recovery of smell.
CONCLUSIONS
Subjective hyperosmia after COVID-19 infection exhibited striking similarity in two affected family members, suggesting interaction of environment, genetics, and perception.
Topics: COVID-19; Female; Humans; Mothers; Olfaction Disorders; Perception; SARS-CoV-2; Smell
PubMed: 35363370
DOI: 10.26355/eurrev_202203_28368
Journal of Clinical Medicine, Feb 2022
LoriCorps Immersive Body Rating Scale and LoriCorps Mobile Versions: Validation to Assess Body Image Disturbances from Allocentric and Egocentric Perspectives in a Nonclinical Sample of Adolescents.
A growing number of studies have used virtual reality (VR) for the assessment and treatment of body image disturbances (BIDs). This study, conducted in a community sample of adolescents, documents the convergent and discriminant validity between (a) the traditional paper-based Figure Rating Scale (paper-based FRS), (b) the VR-based Body Rating Scale (LoriCorps-IBRS 1.1), and (c) the mobile app-based Body Rating Scale (LoriCorps-IBRS 1.1-Mobile). A total of 93 adolescents (14 to 18 years old) participated in the study. Body dissatisfaction and body distortion were assessed through the paper-based FRS, the LoriCorps-IBRS 1.1 and the LoriCorps-IBRS 1.1-Mobile. Eating disorder symptoms, body image avoidance, and social physique anxiety were also measured. Correlation analyses were performed. Overall, the results showed a good and statistically significant convergence between allocentric perspectives as measured by the paper-based FRS, the LoriCorps-IBRS 1.1 and the LoriCorps-IBRS 1.1-Mobile. As expected, the egocentric perspective measured in VR produced different results from the allocentric perspective, and from cognitive-attitudinal-affective dimensions of BIDs, with the exception of body distortion. These differences support the discriminant validity of the egocentric perspective of LoriCorps-IBRS 1.1 and are consistent with emerging evidence highlighting a difference between experiencing the body from an egocentric (i.e., the body as a subject) and allocentric (i.e., the body as an object) perspective. The egocentric perspective could reflect a perceptual-sensory-affective construction of BIDs, whereas allocentric measures seem to be more related to a cognitive-affective-attitudinal construction of BIDs. Moreover, the results support the validity of the LoriCorps-IBRS 1.1-Mobile, with promising prospects for implementation among young populations.
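As a rough illustration of the correlation analyses described above, the sketch below computes convergent and discriminant validity coefficients on synthetic data; the variable names, effect sizes, and data are invented stand-ins for the actual instruments and sample.

```python
# Synthetic illustration of convergent vs. discriminant validity via
# Pearson correlations; none of these values come from the study.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 93  # sample size reported in the study
paper_frs = rng.normal(0, 1, n)                     # placeholder scale
vr_allocentric = paper_frs + rng.normal(0, 0.5, n)  # convergent measure
vr_egocentric = rng.normal(0, 1, n)                 # discriminant measure

# Convergent validity: high correlation expected between allocentric measures.
r_conv, p_conv = pearsonr(paper_frs, vr_allocentric)
# Discriminant validity: low correlation expected with the egocentric measure.
r_disc, p_disc = pearsonr(paper_frs, vr_egocentric)
```

With this construction the allocentric pair correlates strongly while the independent egocentric variable does not, mirroring the convergent/discriminant pattern the study reports.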
PubMed: 35268247
DOI: 10.3390/jcm11051156
Journal of the Association For Research..., Apr 2022
We investigated the effect of a biasing tone close to 5, 15, or 30 Hz on the response to higher-frequency probe tones, behaviorally, and by measuring distortion-product otoacoustic emissions (DPOAEs). The amplitude of the biasing tone was adjusted for criterion suppression of cubic DPOAE elicited by probe tones presented between 0.7 and 8 kHz, or criterion loudness suppression of a train of tone-pip probes in the range 0.125-8 kHz. For DPOAEs, the biasing-tone level for criterion suppression increased with probe-tone frequency by 8-9 dB/octave, consistent with an apex-to-base gradient of biasing-tone-induced basilar membrane displacement, as we verified by computational simulation. In contrast, the biasing-tone level for criterion loudness suppression increased with probe frequency by only 1-3 dB/octave, reminiscent of previously published data on low-side suppression of auditory nerve responses to characteristic frequency tones. These slopes were independent of biasing-tone frequency, but the biasing-tone sensation level required for criterion suppression was ~ 10 dB lower for the two infrasound biasing tones than for the 30-Hz biasing tone. On average, biasing-tone sensation levels as low as 5 dB were sufficient to modulate the perception of higher frequency sounds. Our results are relevant for recent debates on perceptual effects of environmental noise with very low-frequency content and might offer insight into the mechanism underlying low-side suppression.
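The dB/octave slopes reported above follow from a simple relation: the change in level divided by the frequency separation expressed in octaves. A minimal sketch, with illustrative numbers only (not the study's data):

```python
import math

def db_per_octave(f1_hz, level1_db, f2_hz, level2_db):
    """Slope of a level-vs-frequency relation in dB per octave."""
    octaves = math.log2(f2_hz / f1_hz)
    return (level2_db - level1_db) / octaves

# Illustrative: a biasing-tone threshold rising 8.5 dB/octave from 0.7 kHz
# to 8 kHz spans log2(8000/700) ≈ 3.51 octaves, i.e. ~29.8 dB in total.
slope = db_per_octave(700, 60.0, 8000, 60.0 + 8.5 * math.log2(8000 / 700))
```

The contrast in the abstract (8-9 dB/octave for DPOAE suppression vs. 1-3 dB/octave for loudness suppression) is a contrast between two such slopes measured over the same probe-frequency range.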
Topics: Acoustic Stimulation; Basilar Membrane; Cochlea; Noise; Otoacoustic Emissions, Spontaneous; Sound
PubMed: 35132510
DOI: 10.1007/s10162-021-00830-2
BMC Ophthalmology, Feb 2022
Visual impairment and its predictors among people living with type 2 diabetes mellitus at Dessie town hospitals, Northeast Ethiopia: institution-based cross-sectional study.
BACKGROUND
Visual impairment is a functional limitation of the eye(s) that results in reduced visual acuity, visual field loss, visual distortion, perceptual difficulties, or any combination of the above. Type 2 diabetes mellitus is one of the common causes of visual impairment. Since no study has been conducted in Ethiopia in this regard so far, the current study aimed to determine the prevalence and predictors of visual impairment among people living with diabetes at Dessie town hospitals, Northeast Ethiopia.
METHODS
An institution-based cross-sectional study was carried out from 15 February to 15 March 2020, using simple random sampling to recruit participants with type 2 diabetes. Visual impairment was measured using a visual acuity test. We used EpiData 3.1 and SPSS version 22 for data entry and statistical analysis, respectively. Bivariable binary logistic regression was performed to check the independent association of each factor with visual impairment. After selecting candidate variables at p < 0.25, we computed a multivariable binary logistic regression to identify factors statistically associated with visual impairment. The degree of association was determined using adjusted odds ratios with 95% CIs. In the final model, statistical significance was declared at p < 0.05.
RESULTS
Three hundred and twenty-two people living with T2DM participated in this study, with a 97% response rate. The prevalence of visual impairment was 37.58% (95% CI: 32.3-42.9). Age (AOR = 1.06, 95% CI: 1.02-1.09, p ≤ 0.001), poor regular exercise (AOR = 2.91, 95% CI: 1.47-5.76, p ≤ 0.001), duration of DM above 5 years (AOR = 2.42, 95% CI: 1.25-4.73, p ≤ 0.01), insulin treatment (AOR = 14.05, 95% CI: 2.72-72.35, p ≤ 0.01), and poor glycemic control (AOR = 2.17, 95% CI: 1.13-4.14, p < 0.05) were statistically associated with visual impairment.
CONCLUSION
More than a third of the patients living with T2DM at Dessie town hospitals had visual impairment. Visual impairment was associated with increased age, poor regular exercise, longer duration of DM, and insulin treatment. Thus, early detection of visual impairment through screening and regular follow-up is recommended to reduce the risk of vision loss.
Topics: Cross-Sectional Studies; Diabetes Mellitus, Type 2; Ethiopia; Hospitals; Humans; Vision Disorders
PubMed: 35114950
DOI: 10.1186/s12886-022-02292-3
The Journal of the Acoustical Society..., Jan 2022
Aging, noise exposure, and ototoxic medications lead to cochlear synapse loss in animal models. As cochlear function is highly conserved across mammalian species, synaptopathy likely occurs in humans as well. Synaptopathy is predicted to result in perceptual deficits including tinnitus, hyperacusis, and difficulty understanding speech-in-noise. The lack of a method for diagnosing synaptopathy in living humans hinders studies designed to determine if noise-induced synaptopathy occurs in humans, identify the perceptual consequences of synaptopathy, or test potential drug treatments. Several physiological measures are sensitive to synaptopathy in animal models including auditory brainstem response (ABR) wave I amplitude. However, it is unclear how to translate these measures to synaptopathy diagnosis in humans. This work demonstrates how a human computational model of the auditory periphery, which can predict ABR waveforms and distortion product otoacoustic emissions (DPOAEs), can be used to predict synaptic loss in individual human participants based on their measured DPOAE levels and ABR wave I amplitudes. Lower predicted synapse numbers were associated with advancing age, higher noise exposure history, increased likelihood of tinnitus, and poorer speech-in-noise perception. These findings demonstrate the utility of this modeling approach in predicting synapse counts from physiological data in individual human subjects.
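The study fits a detailed computational model of the auditory periphery; the toy sketch below only illustrates the inversion idea with an invented forward function, not the paper's model: assume a forward model mapping a synapse count to a predicted ABR wave I amplitude, then select the count whose prediction best matches a measurement.

```python
# Toy model-inversion sketch; the forward function and all numbers are
# invented for illustration and are not the paper's auditory model.
import numpy as np

def forward_abr_wave1(n_synapses):
    """Toy forward model: amplitude grows (saturating) with synapse count."""
    return 0.6 * (1.0 - np.exp(-n_synapses / 8000.0))  # microvolts

measured_amplitude = 0.35  # hypothetical measured wave I amplitude (uV)

# Grid search: pick the synapse count whose predicted amplitude
# is closest to the measurement.
counts = np.arange(1000, 20001, 100)
errors = np.abs(forward_abr_wave1(counts) - measured_amplitude)
predicted_count = int(counts[np.argmin(errors)])
```

In the paper the same logic is applied per participant, with DPOAE levels constraining outer-hair-cell parameters before the synaptic fit.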
Topics: Animals; Auditory Threshold; Cochlea; Computer Simulation; Evoked Potentials, Auditory, Brain Stem; Hearing Loss, Noise-Induced; Humans; Otoacoustic Emissions, Spontaneous; Synapses
PubMed: 35105019
DOI: 10.1121/10.0009238
Frontiers in Neuroscience, 2021
In multi-talker listening environments, the overlap of different voice streams may distort each source's individual message, causing deficits in comprehension. Voice characteristics, such as pitch and timbre, are major dimensions of auditory perception and play a vital role in grouping and segregating incoming sounds based on their acoustic properties. The current study investigated how pitch and timbre cues (determined by fundamental frequency, notated as F0, and spectral slope, respectively) affect perceptual integration and segregation of complex-tone sequences within an auditory streaming paradigm. Twenty normal-hearing listeners participated in a traditional auditory streaming experiment using two alternating sequences of harmonic tone complexes, A and B, while F0 and spectral slope were manipulated. Grouping ranges, the F0/spectral slope ranges over which auditory grouping occurs, were measured for various F0/spectral slope differences between tones A and B. Results demonstrated that the grouping ranges were maximized in the absence of F0/spectral slope differences between tones A and B and decreased by a factor of two as the differences increased to ±1-semitone F0 and ±1-dB/octave spectral slope. In other words, increased differences in either F0 or spectral slope allowed listeners to more easily distinguish between harmonic stimuli and thus group them together less. These findings suggest that pitch/timbre difference cues play an important role in how we perceive harmonic sounds in an auditory stream, reflecting our ability to group or segregate human voices in a multi-talker listening environment.
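The two stimulus dimensions above reduce to simple formulas: a shift of n semitones scales F0 by 2^(n/12), and a spectral slope attenuates each harmonic by a fixed number of dB per octave above F0. A minimal sketch with illustrative values (the 200-Hz base F0 is an assumption, not the study's stimulus specification):

```python
import math

def shift_f0(f0_hz, semitones):
    """Scale a fundamental frequency by a given number of semitones."""
    return f0_hz * 2 ** (semitones / 12)

def harmonic_level_db(harmonic_number, slope_db_per_octave):
    """Level of the k-th harmonic relative to F0 under a spectral slope."""
    return slope_db_per_octave * math.log2(harmonic_number)

f0_b = shift_f0(200.0, 1.0)            # tone B one semitone above a 200-Hz tone A
level_h4 = harmonic_level_db(4, -1.0)  # 4th harmonic under a -1 dB/octave slope
```

A ±1-semitone F0 difference is thus about a 5.9% frequency change, and a ±1-dB/octave slope difference shifts the 4th harmonic (two octaves up) by 2 dB.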
PubMed: 35087369
DOI: 10.3389/fnins.2021.725093
The Journal of Pain, Jun 2022
Orofacial pain patients often report that the painful facial area is "swollen" without clinical signs - known as perceptual distortion (PD). The neuromodulatory effect of facilitatory repetitive transcranial magnetic stimulation (rTMS) on PD in healthy individuals was investigated, to provide further support that the primary somatosensory cortex (SI) is involved in facial PD. Participants were allocated to an active (n = 26) or sham (n = 26) rTMS group in this case-control study. PD was induced experimentally by injecting local anesthesia (LA) in the right infraorbital region. PD was measured at baseline, 6 min after LA, and immediately, 20, and 40 min after rTMS. Intermittent theta-burst stimulation (iTBS) as active rTMS, or sham rTMS, was applied to the face representation area of SI 10 min after LA. The magnitude of PD was compared between the groups. The magnitude of PD significantly increased immediately after iTBS compared with sham rTMS (P = .009). The PD was significantly higher immediately after iTBS compared to 6 min after LA (P = .004) in the active rTMS group, but not in the sham rTMS group (P = .054). iTBS applied to a somatotopic-relevant cortical region appears to facilitate facial PD, further supporting the involvement of SI in the processing of one's own face and PD. PERSPECTIVE: This study provides information on the neural substrate responsible for processing of perceptual distortion of the face, which is speculated to contribute to the chronification of orofacial pain. The findings of this study may aid in mechanism-based management of the condition in orofacial pain disorders and possibly other chronic pain states.
Topics: Case-Control Studies; Facial Pain; Humans; Perceptual Distortion; Transcranial Magnetic Stimulation
PubMed: 35041936
DOI: 10.1016/j.jpain.2021.12.013
Scientific Reports, Jan 2022
Self-related stimuli are important cues for people to recognize themselves in the external world and hold a special status in our perceptual system. Self-voice plays an important role in daily social communication and is also a frequent input for self-identification. Although many studies have been conducted on the acoustic features of self-voice, no research has examined the spatial aspect, even though the spatial perception of voice is important for humans. This study proposes a novel perspective for studying self-voice. We investigated people's distance perception of their own voice when the voice was heard from an external position. Participants heard their own voice from one of four speakers located either 90 or 180 cm from their sitting position, either immediately after uttering a short vowel (i.e., active session) or while hearing a replay of their own pronunciation (i.e., replay session). They were then asked to indicate which speaker they heard the voice from. Their voices were either pitch-shifted by ± 4 semitones (i.e., other-voice condition) or unaltered (i.e., self-voice condition). The results of spatial judgment showed that self-voice from the closer speakers was misattributed to the speakers farther away at a significantly higher proportion than other-voice. This phenomenon was also observed when the participants remained silent and heard prerecorded voices. Additional structural equation modeling using participants' schizotypal scores showed that the effect of self-voice on distance perception was significantly associated with the score of delusional thoughts (Peters Delusion Inventory) and distorted body image (Perceptual Aberration Scale) in the active speaking session but not in the replay session. The findings of this study provide important insights for understanding how people process self-related stimuli when there is a small distortion and how this may be linked to the risk of psychosis.
PubMed: 35013503
DOI: 10.1038/s41598-021-04437-8
Sensors (Basel, Switzerland), Dec 2021
This paper presents a new objective method for estimating perceived visual quality. The proposal assesses image quality without the need for a reference image or an assumption about the specific distortion. Two main processes were used to build the models: the first uses deep learning with a convolutional neural network, without any preprocessing. The second computes objective visual quality by pooling several image features extracted from different concepts: natural scene statistics in the spatial domain, the gradient magnitude, the Laplacian of Gaussian, and the spectral and spatial entropies. The features extracted from the image are used as input to machine learning techniques to build the models that estimate the visual quality level of any image. For the machine learning training phase, two main processes are proposed: the first consists of direct learning using all the selected features in a single training phase, named direct learning blind visual quality assessment (DLBQA). The second is an indirect learning process consisting of two training phases, named indirect learning blind visual quality assessment (ILBQA); it includes an additional phase that constructs intermediary metrics used to build the prediction model. The produced models are evaluated on several benchmark image databases, such as TID2013, LIVE, and the LIVE In the Wild Image Quality Challenge. The experimental results demonstrate that the proposed models produce the best visual quality predictions compared with state-of-the-art models. The models have also been implemented on an FPGA platform to demonstrate the feasibility of integrating the proposed solution on an image sensor.
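Two of the hand-crafted features named above, the gradient magnitude and the Laplacian of Gaussian, can be sketched with `scipy.ndimage` on a synthetic grayscale image; this is an illustration of the general feature types, not the paper's implementation.

```python
# Sketch of gradient-magnitude and Laplacian-of-Gaussian features on a
# synthetic image; pooled statistics could feed a learned quality model.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
img = rng.random((64, 64))  # stand-in grayscale image

# Gradient magnitude via horizontal and vertical Sobel filters.
gx = ndimage.sobel(img, axis=1)
gy = ndimage.sobel(img, axis=0)
grad_mag = np.hypot(gx, gy)

# Laplacian of Gaussian: Gaussian smoothing followed by the Laplacian.
log_response = ndimage.gaussian_laplace(img, sigma=1.5)

# Pool each feature map to scalar statistics for the learning stage.
features = np.array([grad_mag.mean(), grad_mag.std(),
                     log_response.mean(), log_response.std()])
```

In a no-reference setting such pooled statistics, together with natural-scene-statistic and entropy features, form the input vector on which a quality predictor is trained.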
Topics: Databases, Factual; Image Processing, Computer-Assisted; Machine Learning; Neural Networks, Computer; Normal Distribution
PubMed: 35009718
DOI: 10.3390/s22010175
Computational Intelligence and..., 2021
During the past two decades, many remote sensing image fusion techniques have been designed to improve the spatial resolution of low-spatial-resolution multispectral bands. The main objective is to fuse the low-resolution multispectral (MS) image and the high-spatial-resolution panchromatic (PAN) image to obtain a fused image with high spatial and spectral information. Recently, many artificial intelligence-based deep learning models have been designed to fuse remote sensing images. However, these models do not consider the inherent difference in image distribution between MS and PAN images, so the fused images may suffer from gradient and color distortion. To overcome these problems, this paper proposes an efficient artificial intelligence-based deep transfer learning model. The Inception-ResNet-v2 model is improved by using a color-aware perceptual loss (CPL). The fused images are further improved by using a gradient channel prior as a postprocessing step, which preserves color and gradient information. Extensive experiments were carried out on benchmark datasets. Performance analysis shows that the proposed model preserves color and gradient information in fused remote sensing images more effectively than existing models.
Topics: Artificial Intelligence; Remote Sensing Technology
PubMed: 34976044
DOI: 10.1155/2021/7615106