Journal of AAPOS: the Official... Dec 2023
PURPOSE
To evaluate VisuALL, a game-based automated perimetry device, utilizing virtual reality (VR) goggles, in a cohort of patients with childhood glaucoma.
METHODS
In this prospective series, the results of consecutive patients with childhood glaucoma performing both VisuALL VR field (VRF) and Humphrey visual field (HVF) 24-2 testing were compared. A masked ophthalmologist graded both VRF and HVF tests for field defects (three clustered abnormal points on the total or pattern deviation plot). VRF testing was performed binocularly and with the child's own spectacles. The two devices were assessed with respect to agreement of (1) global indices, such as mean deviation (MD) and pattern standard deviation (PSD), (2) point-by-point sensitivity, and (3) the ability to detect visual field defects determined by a grader.
RESULTS
A total of 39 children (77 eyes) were enrolled, with mean age 14.1 ± 3.6 years; 3 patients (5 eyes) could not complete the HVF. Average HVF MD was -6.3 ± 6.4 dB. There was strong correlation between VRF and HVF for MD (R = 0.68, P < 0.001), PSD (R = 0.78, P < 0.001), and point-by-point sensitivity (R = 0.63, P < 0.001). Bland-Altman analysis showed no systematic difference between VRF and HVF in assessing MD and PSD. Of 72 eyes with results for both modalities, 63 (87.5%) had agreement between VRF and HVF with respect to the presence/absence of any field defect, and 52 (72.2%) had agreement regarding the presence/absence of fixation-threatening field loss.
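The Bland-Altman agreement analysis reported above can be sketched numerically. The following is a minimal illustration with synthetic mean-deviation (MD) values (the study's data are not public), computing the bias and 95% limits of agreement between two devices:

```python
import numpy as np

# Illustrative sketch of a Bland-Altman agreement analysis using
# synthetic MD values in dB; these arrays are hypothetical, not the
# study's data.
rng = np.random.default_rng(0)
hvf_md = rng.normal(-6.3, 6.4, size=72)           # simulated HVF MD values
vrf_md = hvf_md + rng.normal(0.0, 1.5, size=72)   # simulated VRF MD values

diff = vrf_md - hvf_md                 # paired differences
mean_pair = (vrf_md + hvf_md) / 2      # per-pair means (x-axis of the plot)
bias = diff.mean()                     # systematic difference between devices
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd  # 95% limits of agreement

print(f"bias = {bias:.2f} dB, LoA = [{loa_low:.2f}, {loa_high:.2f}] dB")
```

"No systematic difference" in the abstract corresponds to a bias near zero with limits of agreement straddling it.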
CONCLUSIONS
VRF is comparable to the gold standard HVF in both identification and quantification of visual field deficits in pediatric glaucoma patients and may offer a valuable supplement or alternative to standard automated perimetry.
Topics: Humans; Child; Adolescent; Visual Field Tests; Visual Fields; Eye; Vision Disorders; Glaucoma
PubMed: 38597674
DOI: 10.1016/j.jaapos.2023.08.014
HERD Apr 2024
OBJECTIVES
Falls in hospitals pose a significant safety risk, leading to injuries, prolonged hospitalization, and lasting complications. This study explores the potential of augmented reality (AR) technology in healthcare facility design to mitigate fall risk.
BACKGROUND
Few studies have investigated the impact of hospital room layouts on falls due to the high cost of building physical prototypes. This study introduces an innovative approach using AR technology to advance methods for healthcare facility design efficiently.
METHODS
Ten healthy participants were enrolled in this study to examine different hospital room designs in AR. Factors of interest included room configuration, door type, exit side of the bed, toilet placement, and the presence of IV equipment. AR trackers captured trajectories of the body as participants navigated through these AR hospital layouts, providing insights into user behavior and preferences.
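Movement measures like those analyzed here can be derived from tracker output by decomposing each step relative to the participant's facing direction. The sketch below is a hypothetical reconstruction (positions in metres and headings in degrees are made up; the study's actual pipeline is not described in this abstract):

```python
import numpy as np

# Hypothetical sketch: split tracked steps into forward/backward and
# sideways components relative to the facing direction. Heading 90
# means facing the +y direction.
positions = np.array([[0.0, 0.0], [0.0, 0.5], [0.1, 0.5], [0.1, 0.2]])
headings = np.array([90.0, 90.0, 90.0, 90.0])

steps = np.diff(positions, axis=0)               # displacement per sample
theta = np.radians(headings[:-1])
fwd = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # unit facing vectors

along = (steps * fwd).sum(axis=1)                # signed forward component
side = np.abs(fwd[:, 0] * steps[:, 1] - fwd[:, 1] * steps[:, 0])  # |sideways|

backward_travel = float(-along[along < 0].sum())  # total backward distance
sideways_travel = float(side.sum())               # total sideways distance
print(backward_travel, sideways_travel)
```

Summing these per-step components over a trial yields the backward and sideways travel totals that can then be compared across room configurations.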
RESULTS
Door type influenced the degree of backward and sideways movement, with the presence of an IV pole intensifying the interaction between door and room type, leading to increased sideways and backward motion. Participants displayed varying patterns of backward and sideways travel depending on the specific room configurations they encountered.
CONCLUSIONS
AR can be an efficient and cost-effective method to modify room configurations to identify important design factors before conducting physical testing. The results of this study provide valuable insights into the effect of environmental factors on movement patterns in simulated hospital rooms. These results highlight the importance of considering environmental factors, such as the type of door and bathroom location, when designing healthcare facilities.
PubMed: 38591574
DOI: 10.1177/19375867241238434
The International Journal of Eating... Apr 2024
Paranjothy and Wade's (2024) meta-analysis identifying relations between self-criticism, self-compassion, and disordered eating prompted recommendations for augmenting existing front-line interventions with compassion-focused therapy (CFT) principles among self-critical individuals. While in theory this sounds promising, the reality is that the evidence supporting the use of CFT for eating disorders (EDs) is limited. I argue that before any clinical recommendations can be made, more research is needed to better understand the utility of CFT, as well as what precise role self-criticism and self-compassion play in the context of intervention. In this commentary, I present three critical avenues for future research necessary to achieve this level of understanding. These include: (1) identifying moderators of response in clinical trials so that CFT can be safely delivered to those likely to benefit from this approach and avoided for those likely to experience harm; (2) establishing mediators of change so that we can understand whether CFT works through theory-specific or common mechanisms; and (3) testing the causal impact of intervention components so that knowledge on how to most effectively trigger the probable mediators of change can be gathered. This commentary will ideally spark further discussion, collaboration, and rigorous research dedicated to improving ED outcomes. PUBLIC SIGNIFICANCE: This commentary discusses the importance of further research dedicated to enhancing understanding of the utility of compassion-focused interventions for eating disorders. It calls for more research on (1) testing moderators of response, (2) identifying mechanisms of change, and (3) establishing the most effective intervention components.
PubMed: 38581248
DOI: 10.1002/eat.24214
MethodsX Jun 2024
This article provides a step-by-step guideline for measuring and analyzing visual attention in 3D virtual reality (VR) environments based on eye-tracking data. We propose a solution to the challenges of obtaining relevant eye-tracking information in a dynamic 3D virtual environment and calculating interpretable indicators of learning and social behavior. With a method called "gaze-ray casting," we simulated 3D-gaze movements to obtain information about the gazed objects. This information was used to create graphical models of visual attention, establishing attention networks. These networks represented participants' gaze transitions between different entities in the VR environment over time. Measures of centrality, distribution, and interconnectedness of the networks were calculated to describe the network structure. The measures, derived from graph theory, allowed for statistical inference testing and the interpretation of participants' visual attention in 3D VR environments. Our method provides useful insights when analyzing students' learning in a VR classroom, as reported in a corresponding evaluation article with N = 274 participants.
•Guidelines on implementing gaze-ray casting in VR using the Unreal Engine and the HTC VIVE Pro Eye.
•Creating gaze-based attention networks and analyzing their network structure.
•Implementation tutorials and the Open Source software code are provided via OSF: https://osf.io/pxjrc/?view_only=1b6da45eb93e4f9eb7a138697b941198.
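As a minimal illustration of the attention-network idea (with hypothetical entity labels; in the actual pipeline these come from gaze-ray casting in Unreal Engine), transitions between consecutively fixated entities can be counted as weighted edges and summarized with a graph-theoretic centrality measure:

```python
from collections import Counter

# Hypothetical gaze log: the entity fixated at each time step, as would
# be produced by gaze-ray casting in a VR classroom.
gaze_sequence = ["teacher", "board", "teacher", "peer", "board",
                 "teacher", "board", "peer", "teacher"]

# Edges of the attention network: gaze transitions between entities.
edges = Counter((a, b) for a, b in zip(gaze_sequence, gaze_sequence[1:])
                if a != b)

# Weighted degree centrality: the share of all transitions that involve
# each entity (a simple summary of the network structure).
total = sum(edges.values())
degree = Counter()
for (a, b), w in edges.items():
    degree[a] += w
    degree[b] += w
centrality = {node: w / (2 * total) for node, w in degree.items()}
print(centrality)  # teacher and board dominate this toy network
```

Per-participant centrality values computed this way are plain numbers, which is what makes the statistical inference testing mentioned above possible.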
PubMed: 38577409
DOI: 10.1016/j.mex.2024.102662
Frontiers in Neuroscience 2024
INTRODUCTION
Occupational Noise Induced Hearing Loss (ONIHL) is one of the most prevalent conditions among mine workers globally. This reality is due to mine workers being exposed to noise produced by heavy machinery, rock drilling, blasting, and so on. This condition can be compounded by the fact that mine workers often work in confined workspaces for extended periods of time, where little to no attenuation of noise occurs. The objective of this research work is to present a preliminary study of the development of an early hearing-loss monitoring system for mine workers.
METHODOLOGY
The system consists of a smart watch and smart hearing muffs equipped with sound sensors that collect noise intensity levels and the frequency of exposure. The collected information is transferred to a database, where machine learning algorithms (logistic regression, support vector machines, decision trees, and a random forest classifier) are used to classify and cluster it into priority levels. Feedback is then sent from the database to a mine worker's smart watch based on the priority level. In cases where the priority level is extreme, indicating high noise levels, the smart watch vibrates to alert the miner. The developed system was tested in a mock mine environment consisting of a 67-metre tunnel located in the basement of a building whose rooftop represents the "surface" of a mine. The mock mine's shape, tunnel size, steel-support infrastructure, and ventilation system are analogous to those of a deep hard-rock mine. The wireless channel propagation of the mock mine was statistically characterized in the 2.4-2.5 GHz frequency band. Actual underground mine material was used to build the mock mine to ensure it mimics a real mine as closely as possible. The system was tested by 50 participants, male and female, aged 18 to 60 years.
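The priority-classification step described above can be sketched with a decision tree on synthetic noise-exposure data. Features, labelling rule, and thresholds below are assumptions for illustration only, not the study's:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Illustrative sketch: classify noise exposure into priority levels with
# a decision tree. All data and thresholds here are synthetic.
rng = np.random.default_rng(42)
n = 500
noise_db = rng.uniform(60, 110, n)           # noise intensity (dB)
hours = rng.uniform(0.5, 10, n)              # hours of exposure per shift

# Hypothetical labelling rule: exposure dose rises with both level and
# duration (crude equivalent-level proxy, referenced to an 8-hour shift).
dose = noise_db + 10 * np.log10(hours / 8)
priority = np.digitize(dose, [85, 95, 105])  # 0 = low .. 3 = extreme

X = np.column_stack([noise_db, hours])
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, priority)

# A loud, long exposure should land in a high-priority class.
print(clf.predict([[108.0, 9.0]]))
```

In the deployed system, a prediction in the highest class would be the trigger for the vibration alert on the smart watch.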
RESULTS AND DISCUSSION
Preliminary results show that the decision tree had the highest accuracy of the algorithms used, with an average testing accuracy of 91.25% and an average training accuracy of 99.79%. The system also showed a good response level in terms of detecting noise exposure levels, transmitting the information to the database, and communicating recommendations to the miner. The developed system is still undergoing further refinement and testing prior to being tested in an actual mine.
PubMed: 38576872
DOI: 10.3389/fnins.2024.1321357
PloS One 2024
While the musical instrument classification task is well-studied, there remains a gap in identifying non-pitched percussion instruments, which have greater overlaps in frequency bands and variation in sound quality and play style than pitched instruments. In this paper, we present a musical instrument classifier for detecting tambourines, maracas, and castanets, instruments that are often used in early childhood music education. We generated a dataset with diverse instruments (e.g., brand, materials, construction) played in different locations with varying background noise and play styles. We conducted sensitivity analyses to optimize feature selection, windowing time, and model selection. We deployed and evaluated our best model in a mixed reality music application with 12 families in a home setting. Our dataset comprised over 369,000 samples recorded in-lab and 35,361 samples recorded with families in a home setting. We observed the Light Gradient Boosting Machine (LGBM) model to perform best using an approximately 93 ms window with only 12 mel-frequency cepstral coefficients (MFCCs) and signal entropy. Our best LGBM model achieved over 84% accuracy across all three instrument families in-lab and over 73% accuracy when deployed to the home. To our knowledge, this dataset of over 369,000 samples of non-pitched instruments is the first of its kind. This work also suggests that a low feature space is sufficient for the recognition of non-pitched instruments. Lastly, the real-world deployment and testing of the algorithms with participants of diverse physical and cognitive abilities was also an important contribution towards more inclusive design practices. This paper lays the technological groundwork for a mixed reality music application that can detect children's use of non-pitched, percussion instruments to support early childhood music education and play.
Topics: Child; Humans; Child, Preschool; Percussion; Sound; Algorithms; Music; Cognition
PubMed: 38564622
DOI: 10.1371/journal.pone.0299888
Database: the Journal of Biological... Feb 2024
In this report, we analyse the use of virtual reality (VR) as a method to navigate and explore complex knowledge graphs. Over the past few decades, linked data technologies [Resource Description Framework (RDF) and Web Ontology Language (OWL)] have proven valuable for encoding such graphs, and many tools have emerged to interactively visualize RDF. However, as knowledge graphs grow larger, most of these tools struggle with the limitations of 2D screens or 3D projections. Therefore, in this paper, we evaluate the use of VR to visually explore SPARQL (SPARQL Protocol and RDF Query Language) CONSTRUCT queries, including a series of tutorial videos that demonstrate the power of VR (see the Graph2VR tutorial playlist: https://www.youtube.com/playlist?list=PLRQCsKSUyhNIdUzBNRTmE-_JmuiOEZbdH). We first review existing methods for Linked Data visualization and then report the creation of a prototype, Graph2VR. Finally, we report a first evaluation of the use of VR for exploring linked data graphs. Our results show that most participants enjoyed testing Graph2VR and found it to be a useful tool for graph exploration and data discovery. The usability study also provides valuable insights for potential future improvements to Linked Data visualization in VR.
Topics: Humans; Semantic Web; Databases, Factual; Language; Virtual Reality
PubMed: 38554132
DOI: 10.1093/database/baae008
Optometry and Vision Science: Official... Apr 2024
SIGNIFICANCE
This work shows the benefits of using two different magnification strategies to improve the reading ability of low-vision patients using a head-mounted technology.
PURPOSE
The aim of this study was to conduct a comparative clinical trial evaluating the effectiveness of two magnification strategies in a head-mounted virtual reality display.
METHODS
Eighty-eight eligible low-vision subjects were randomized into two arms: (1) the full-field magnification display or (2) the virtual bioptic telescope mode. Subjects completed baseline testing and received training on how to use the device properly and then took the device home for a 2- to 4-week intervention period. An adaptive rating scale questionnaire (Activity Inventory) was administered before and after the intervention (home trial) period to measure the effect of the system. A Simulator Sickness Questionnaire was also administered. Baseline and follow-up results were analyzed using Rasch analysis to assess overall effectiveness of each magnification mode for various functional domain categories.
RESULTS
Both magnification modes showed a positive effect for the reading, visual information, and overall goals functional domain categories, with only reading reaching statistical significance after correction for multiple comparisons. However, there were no significant differences between the two modes. The results of the Simulator Sickness Questionnaire showed that the magnification modes of the head-mounted display device were overall well tolerated among low-vision users.
CONCLUSIONS
Both the full-field and virtual bioptic magnification strategies were effective in significantly improving functional vision outcomes for self-reported reading ability.
PubMed: 38551973
DOI: 10.1097/OPX.0000000000002115
Psychoanalytic Review Mar 2024
Attention to the manifestations of death anxiety in the clinical context is often absent in the discourse of psychoanalytic training. This exchange addresses some of the causes of such an absence: a fraught relation between privacy and secrecy, primacy of psychic reality and interpretation, and cultural underpinnings of sanitization of death.
Topics: Humans; Psychoanalytic Therapy; Reality Testing; Psychoanalytic Interpretation; Psychoanalytic Theory
PubMed: 38551659
DOI: 10.1521/prev.2024.111.1.25
Clinical Case Reports Apr 2024
KEY CLINICAL MESSAGE
Subacute thyroiditis, typically characterized by cervical pain and fever, is usually caused by viral infection but has also been seen after SARS-CoV-2 vaccination. Here we report a case of subacute thyroiditis following SARS-CoV-2 vaccination.
ABSTRACT
Subacute thyroiditis (SAT) is possibly caused by a viral infection and is typically characterized by cervical pain and fever. SAT associated with SARS-CoV-2 infection or SARS-CoV-2 vaccination has been reported, albeit in limited numbers. A 34-year-old woman was referred to our clinic with typical SAT symptoms. The diagnosis was confirmed through thyroid scintigraphy after she received the SARS-CoV-2 vaccination, despite testing negative for COVID-19 via RT-PCR. There is a theoretical correlation between SARS-CoV-2 vaccination and SAT. Vaccination may have a direct or indirect impact on the thyroid, but further studies are required to confirm this relationship. A systematic review of similar cases in the literature was performed for comparison. Ultimately, the overall benefits of SARS-CoV-2 vaccination outweigh the potential adverse effects; therefore, such reports should not divert attention from this reality.
PubMed: 38550739
DOI: 10.1002/ccr3.8678