Frontiers in Bioengineering and... 2023
This systematic review offers an overview of the clinical and technical aspects of augmented reality (AR) applications in orthopedic and maxillofacial oncological surgery. The review also provides a summary of the included articles, with objectives and major findings, for both specialties. The search was conducted on the PubMed/Medline and Scopus databases and completed on 31 May 2023. All articles from the last 10 years found with the keywords augmented reality, mixed reality, maxillofacial oncology and orthopedic oncology were considered in this study. For orthopedic oncology, a total of 93 articles were found, and only 9 articles were selected following the defined inclusion criteria. These articles were further subclassified based on study type, AR display type, registration/tracking modality and involved anatomical region. Similarly, out of 958 articles on maxillofacial oncology, 27 articles were selected for this review and categorized in the same manner. The main outcomes reported for both specialties relate to registration error (i.e., how far the virtual objects displayed in AR appear from their correct position relative to the real environment) and surgical accuracy (i.e., resection error) obtained under AR navigation. However, a meta-analysis of these outcomes was not possible due to data heterogeneity. Despite certain limitations related to the still-immature technology, we believe that AR is a viable tool for oncological surgeries in the orthopedic and maxillofacial fields, especially if it is integrated with an external navigation system to improve accuracy. We further emphasize the need for more research and pre-clinical testing before the wide adoption of AR in clinical settings.
PubMed: 38076427
DOI: 10.3389/fbioe.2023.1276338
Translational Vision Science &... Oct 2021
PURPOSE
To clinically evaluate the noninferiority of a custom virtual reality (VR) perimetry system compared with a clinically and routinely used perimeter in both healthy subjects and glaucoma patients.
METHODS
We use a custom-designed VR perimetry system tailored for visual field testing. The system uses an Oculus Quest VR headset (Facebook Technologies, LLC), which includes a clicker for participant response feedback. A prospective, single-center study was conducted at the Department of Ophthalmology of Bern University Hospital (Bern, Switzerland) for 12 months. Of the 114 participants recruited, 70 subjects (36 healthy and 34 glaucoma patients with early to moderate visual field loss) were included in the study. Participants underwent perimetry tests on an Octopus 900 (Haag-Streit, Köniz, Switzerland) as well as on the custom VR perimeter. In both cases, the standard dynamic strategy (DS) was used in conjunction with the G testing pattern. The collected visual fields (VFs) from both devices were then analyzed and compared.
RESULTS
High mean defect (MD) correlations between the two systems (Spearman, ρ ≥ 0.75) were obtained. The VR system was found to slightly underestimate VF defects in glaucoma subjects (by 1.4 dB). No significant bias was found with respect to eccentricity or subject age. On average, a similar number of stimulus presentations per VF was necessary when measuring glaucoma patients and healthy subjects.
CONCLUSIONS
This study demonstrates that a clinically used perimeter and the proposed VR perimetry system have comparable performances with respect to a number of perimetry parameters in healthy and glaucoma patients with early to moderate visual field loss.
TRANSLATIONAL RELEVANCE
This suggests that VR perimeters have the potential to assess VFs with high enough confidence, thereby alleviating challenges in current perimetry practices by providing a portable and more accessible visual field test.
Topics: Glaucoma; Humans; Prospective Studies; Virtual Reality; Visual Field Tests; Visual Fields
PubMed: 34614166
DOI: 10.1167/tvst.10.12.10
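The headline statistic in the entry above is a Spearman rank correlation between mean-defect (MD) values from the two perimeters. A minimal sketch of that computation is given below; the MD values are invented for illustration and are not the study's data.

```python
# Illustrative sketch (not the study's code): Spearman rank correlation
# between mean-defect (MD) values measured by two perimeters.

def rank(xs):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of a tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

md_octopus = [0.5, 1.2, 2.8, 4.1, 6.3, 7.0, 9.5]  # hypothetical MD, dB
md_vr      = [0.9, 1.0, 3.5, 2.1, 5.0, 6.2, 8.0]  # hypothetical MD, dB
print(round(spearman(md_octopus, md_vr), 2))  # → 0.96
```

With no ties, this agrees with the familiar shortcut rho = 1 - 6·Σd²/(n(n² - 1)) on the rank differences d.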
e-Addictology: An Overview of New Technologies for Assessing and Intervening in Addictive Behaviors. Frontiers in Psychiatry 2018
Review
BACKGROUND
New technologies can profoundly change the way we understand psychiatric pathologies and addictive disorders. New concepts are emerging with the development of more accurate means of collecting live data, computerized questionnaires, and the use of passive data. Digital phenotyping, a paradigmatic example, refers to the use of computerized measurement tools to capture the characteristics of different psychiatric disorders. Similarly, machine learning, a form of artificial intelligence, can improve the classification of patients based on patterns that clinicians have not always considered in the past. Remote or automated interventions (web-based or smartphone-based apps), as well as virtual reality and neurofeedback, are already available or under development.
OBJECTIVE
These recent changes have the potential to disrupt practices, as well as practitioners' beliefs, ethics and representations, and may even call into question their professional culture. However, the impact of new technologies on health professionals' practice in addictive disorder care has yet to be determined. In the present paper, we therefore present an overview of new technology in the field of addiction medicine.
METHOD
Using the keywords [e-health], [m-health], [computer], [mobile], [smartphone], [wearable], [digital], [machine learning], [ecological momentary assessment], [biofeedback] and [virtual reality], we searched the PubMed database for the most representative articles in the field of assessment and interventions in substance use disorders.
RESULTS
We screened 595 abstracts and analyzed 92 articles, dividing them into seven categories: e-health program and web-based interventions, machine learning, computerized adaptive testing, wearable devices and digital phenotyping, ecological momentary assessment, biofeedback, and virtual reality.
CONCLUSION
This overview shows that new technologies can improve assessment and interventions in the field of addictive disorders. The precise role of connected devices, artificial intelligence and remote monitoring remains to be defined. If they are to be used effectively, these tools must be explained and adapted to the different profiles of physicians and patients. The involvement of patients, caregivers and other health professionals is essential to their design and assessment.
PubMed: 29545756
DOI: 10.3389/fpsyt.2018.00051
Nicotine & Tobacco Research: Official... May 2021
Comparative Study
BACKGROUND
Cue exposure for extinguishing conditioned urges to smoking cues has been promising in the laboratory, but difficult to implement in natural environments. The recent availability of augmented reality (AR) via smartphone provides an opportunity to overcome this limitation. Testing the ability of AR to elicit cue-provoked urges to smoke (ie, cue reactivity [CR]) is the first step to systematically testing the efficacy of AR for cue exposure therapy.
OBJECTIVES
To test CR to smoking-related AR cues compared to neutral AR cues, and compared to in vivo cues.
METHODS
A 2 × 2 within-subject design compared cue content (smoking vs. neutral) and presentation modality (AR vs. in vivo) on urge response. Seventeen smokers viewed six smoking-related and six neutral cues via an AR smartphone app, as well as six smoking and six neutral in vivo cues. Participants rated their urge to smoke and the realism/co-existence of each cue.
RESULTS
Average urge to smoke was higher following smoking-related AR images (Median = 7.50) than neutral images (Median = 3.33) (Z = -3.44; p = .001; d = 1.37). Similarly, average urge ratings for in vivo smoking-related cues (Median = 8.12) were higher than for neutral cues (Median = 2.12) (Z = -3.44; p = .001; d = 1.64). Also, greater CR was observed for in vivo cues than for AR cues (Z = -2.67, p = .008; d = .36). AR cues were generally perceived as being realistic and well-integrated.
CONCLUSIONS
CR was demonstrated with very large effect sizes in response to AR smoking cues, although slightly smaller than with in vivo smoking cues. This satisfies the first criterion for the potential use of AR for exposure therapy.
IMPLICATIONS
This study introduces AR as a novel modality for presenting smoking-related stimuli to provoke cue reactivity, and ultimately to conduct extinction-based therapy. AR cues presented via a smartphone have the advantage over other modes of cue presentation (pictures, virtual reality, in vivo, etc.) of being easily transportable, affordable, and realistic, and they can be inserted into a smoker's natural environment rather than being limited to laboratory and clinic settings. These AR features may overcome the generalizability barriers of other methods, thus increasing the clinical utility of cue exposure therapies.
Topics: Adult; Augmented Reality; Behavior, Addictive; Conditioning, Psychological; Craving; Cues; Environment; Extinction, Psychological; Female; Humans; Male; Middle Aged; Mobile Applications; Smartphone; Smoke; Smokers; Smoking; Smoking Cessation; Tobacco Smoking; Virtual Reality Exposure Therapy
PubMed: 33277653
DOI: 10.1093/ntr/ntaa259
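As a rough illustration of the paired effect sizes reported in the entry above, the sketch below computes a Cohen's d on within-subject difference scores. The urge ratings are fabricated for demonstration, and the published study may have computed its d differently, so treat this purely as a sketch of the idea.

```python
# Hypothetical paired urge ratings (0-10 scale) from the same smokers
# under AR smoking cues vs. AR neutral cues; values are invented.
smoking = [7.5, 8.0, 6.0, 9.0, 7.0, 8.5]
neutral = [3.0, 4.0, 2.5, 5.0, 3.5, 4.0]

# Paired-samples Cohen's d: mean of the difference scores divided by
# their sample standard deviation.
diffs = [s - n for s, n in zip(smoking, neutral)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = (sum((x - mean_d) ** 2 for x in diffs) / (n - 1)) ** 0.5
cohens_d = mean_d / sd_d
print(round(cohens_d, 2))  # → 8.94 (inflated: the fake data barely vary)
```

With real ratings the difference scores vary far more between subjects, which is why the study's effect sizes land near d ≈ 1.4 rather than the artificially large value here.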
Sensors (Basel, Switzerland) Feb 2021
Data and services are available anywhere at any time thanks to the Internet and mobile devices. Nowadays, there are new ways of representing data through trendy technologies such as augmented reality (AR), which extends our perception of reality through the addition of a virtual layer on top of real-time images. The great potential of unmanned aerial vehicles (UAVs) for carrying out routine and professional tasks has encouraged their use in the creation of several services, such as package delivery or industrial maintenance. Unfortunately, drone piloting is difficult to learn and requires specific training. Since regular training is performed with virtual simulations, we decided to propose a multiplatform, cloud-hosted solution based on Web AR for drone training and usability testing. This solution defines a configurable trajectory through virtual elements rendered over barcode markers placed in a real environment. The main goal is to provide an inclusive and accessible training solution that could be used by anyone who wants to learn how to pilot or to test research related to UAV control. For this paper, we reviewed drones, AR, and human-drone interaction (HDI) to propose an architecture and implement a prototype, which was built using a Raspberry Pi 3, a camera, and barcode markers. The validation was conducted using several test scenarios. The results show that a real-time AR experience for drone pilot training and usability testing is achievable through web technologies. Some of the advantages of this approach, compared to traditional methods, are its high availability by using the web and other ubiquitous devices; the minimization of technophobia related to crashes; and the development of cost-effective alternatives to train pilots and make the testing phase easier for drone researchers and developers through trendy technologies.
PubMed: 33669733
DOI: 10.3390/s21041456
North American Spine Society Journal Jun 2021
BACKGROUND
Surgical simulation is a valuable educational tool for trainees to practice in a safe, standardized, and controlled environment. Interactive feedback-based virtual reality (VR) has recently moved to the forefront of spine surgery training, with most commercial products focusing on instrumentation. There is a paucity of learning tools directed at decompression principles. The purpose of this study was to evaluate the efficacy of VR simulation and its educational role in learning spinal anatomy and decompressive techniques.
METHODS
A VR simulation module was created with custom-developed software. Orthopaedic and neurosurgical trainees were prospectively enrolled and interacted with patient-specific 3D models of lumbar spinal stenosis while wearing a headset. A surgical toolkit allowed users to perform surgical decompression, specifically removing soft tissues and bone. The module allowed users to perform various techniques in posterior decompressions and comprehend anatomic areas of stenosis. Pre- and post-module testing, and utility questionnaires were administered to provide both quantitative and qualitative evaluation of the module as a learning device.
RESULTS
28 trainees were enrolled (20 orthopaedic, 8 neurosurgery) in the study. Pre-test scores on anatomic knowledge progressively improved and showed a strong positive correlation with year-in-training (Pearson's r = 0.79). Following simulation, the average improvement in post-test scores was 11.4% in junior trainees (PGYI-III) and 1.0% in senior trainees (PGYIII-Fellows). Knowledge improvement approached statistical significance among junior trainees (p = 0.0542). 89% of participants found the VR module useful in understanding and learning the pathology of spinal stenosis. 71% found it useful in comprehending decompressive techniques. 96% believed it had utility in preoperative planning with patient-specific models.
CONCLUSIONS
Our original VR spinal decompression simulation has shown to be overwhelmingly positively received amongst trainees as both a learning module of patho-anatomy and patient-specific preoperative planning, with particular benefit for junior trainees.
PubMed: 35141628
DOI: 10.1016/j.xnsj.2021.100063
Innovation in Aging 2021
BACKGROUND AND OBJECTIVES
This study tests the feasibility of using virtual reality (VR) with older adults with mild cognitive impairment (MCI) or mild-to-moderate dementia with a family member who lives at a distance.
RESEARCH DESIGN AND METHODS
21 residents in a senior living community and a family member (who participated in the VR with the older adult from a distance) engaged in a baseline telephone call, followed by 3 weekly VR sessions.
RESULTS
Residents and family members alike found the VR safe, extremely enjoyable, and easy to use. The VR was also acceptable and highly satisfying for residents with MCI and dementia. Human and automated coding revealed that residents were more conversationally and behaviorally engaged with their family member in the VR sessions compared to the baseline telephone call and in the VR sessions that used reminiscence therapy. The results also illustrate the importance of using multiple methods to assess engagement. Residents with dementia reported greater immersion in the VR than residents with MCI. However, the automated coding indicated that residents with MCI were more kinesically engaged while using the VR than residents with dementia.
DISCUSSION AND IMPLICATIONS
Combining networking and livestreaming features in a single VR platform can allow older adults in senior living communities to still travel, relive their past, and engage fully with life and with their family members, despite geographical separation and physical and cognitive challenges.
PubMed: 34632105
DOI: 10.1093/geroni/igab014
Ear and Hearing 2020
To assess perception with and performance of modern and future hearing devices with advanced adaptive signal processing capabilities, novel evaluation methods are required that go beyond already established methods. These novel methods will simulate to a certain extent the complexity and variability of acoustic conditions and acoustic communication styles in real life. This article discusses the current state and the perspectives of virtual reality technology use in the lab for designing complex audiovisual communication environments for hearing assessment and hearing device design and evaluation. In an effort to increase the ecological validity of lab experiments, that is, to increase the degree to which lab data reflect real-life hearing-related function, and to support the development of improved hearing-related procedures and interventions, this virtual reality lab marks a transition from conventional (audio-only) lab experiments to the field. The first part of the article introduces and discusses the notion of the communication loop as a theoretical basis for understanding the factors that are relevant for acoustic communication in real life. From this, requirements are derived that allow an assessment of the extent to which a virtual reality lab reflects these factors, and which may be used as a proxy for ecological validity. The most important factor of real-life communication identified is a closed communication loop among the actively behaving participants. The second part of the article gives an overview of the current developments towards a virtual reality lab at Oldenburg University that aims at interactive and reproducible testing of subjects with and without hearing devices in challenging communication conditions. The extent to which the virtual reality lab in its current state meets the requirements defined in the first part is discussed, along with its limitations and potential further developments. 
Finally, data are presented from a qualitative study that compared subject behavior and performance in two audiovisual environments presented in the virtual reality lab-a street and a cafeteria-with the corresponding field environments. The results show similarities and differences in subject behavior and performance between the lab and the field, indicating that the virtual reality lab in its current state marks a step towards more ecological validity in lab-based hearing and hearing device research, but requires further development towards higher levels of ecological validity.
Topics: Acoustics; Comprehension; Hearing Tests; Humans; Sound; User-Computer Interface; Virtual Reality
PubMed: 33105257
DOI: 10.1097/AUD.0000000000000945
Sensors (Basel, Switzerland) Feb 2023
Navigation is often regarded as one of the most-exciting use cases for Augmented Reality (AR). Current AR Head-Mounted Displays (HMDs) are rather bulky and cumbersome to use and, therefore, do not yet offer a satisfactory user experience for the mass market. However, the latest-generation smartphones offer AR capabilities out of the box, sometimes even with pre-installed apps. Apple's framework ARKit is available on iOS devices, free to use for developers. Android similarly features a counterpart, ARCore. Both systems work well for small, spatially confined applications, but lack global positional awareness. This is a direct result of one limitation in current mobile technology: Global Navigation Satellite Systems (GNSSs) are relatively inaccurate and often do not work indoors because their signals cannot penetrate solid objects, such as walls. In this paper, we present the Pedestrian Augmented Reality Navigator (PAReNt) iOS app as a solution to this problem. The app implements a data fusion technique to increase accuracy in global positioning and showcases AR navigation as one use case for the improved data. ARKit provides data about the smartphone's motion, which is fused with GNSS data and a Bluetooth indoor positioning system via a Kalman Filter (KF). Four different KFs with different underlying models have been implemented and independently evaluated to find the best filter. The evaluation measures the app's accuracy against a ground truth under controlled circumstances. Two main testing methods were introduced and applied to determine which KF works best. Depending on the evaluation method, this novel approach improved accuracy by 57% (when GPS and AR were used) or 32% (when Bluetooth and AR were used) over the raw sensor data.
PubMed: 36850414
DOI: 10.3390/s23041816
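The sensor-fusion idea described in the entry above can be illustrated with a minimal one-dimensional Kalman filter: AR odometry drives the prediction step and noisy GNSS fixes drive the measurement update. The state model, noise variances, and data below are all invented for illustration; the PAReNt app's four filters are necessarily more elaborate than this sketch.

```python
# Minimal 1-D Kalman filter sketch: fuse per-step displacement from AR
# tracking (prediction) with noisy absolute GNSS fixes (update).
# q: process-noise variance per step; r: GNSS measurement variance.
# All values are illustrative, not taken from the paper.

def kalman_1d(positions_gnss, deltas_ar, q=0.05, r=4.0):
    """positions_gnss: noisy absolute fixes; deltas_ar: AR motion per step."""
    x, p = positions_gnss[0], r      # initialize from the first fix
    estimates = [x]
    for z, dx in zip(positions_gnss[1:], deltas_ar):
        # Predict: move by the AR-reported displacement, inflate variance.
        x, p = x + dx, p + q
        # Update: blend in the GNSS fix, weighted by the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

gnss = [0.0, 1.4, 0.7, 3.6, 3.9]   # noisy fixes around a 1 m/step walk
ar_steps = [1.0, 1.0, 1.0, 1.0]    # AR odometry reports 1 m per step
print([round(e, 2) for e in kalman_1d(gnss, ar_steps)])
```

Because the AR odometry is trusted more than the individual fixes (q ≪ r), the estimate follows the smooth 1 m/step motion and only gently corrects toward each GNSS reading, which is the qualitative behavior the paper exploits.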
Developmental Disabilities Research... 2011
Review
Down syndrome is the most common cause of intellectual disability. In the United States, it is recommended that prenatal testing for Down syndrome be offered to all women. Because of this policy and consequent public perception, having Down syndrome has become a disadvantage in the prenatal period. However, in the postnatal period, there may be some advantage in having Down syndrome. To help parents make informed decisions about screening and testing, it is crucial to reconcile divergent prenatal and postnatal perspectives. Advancements in genetic technologies will also impact the informed consent process and need to be considered.
Topics: Attitude to Health; Down Syndrome; Female; Genetic Testing; Humans; Intellectual Disability; Parents; Pregnancy; Prenatal Diagnosis; United States
PubMed: 22447752
DOI: 10.1002/ddrr.135