PLoS One, 2024
When humans navigate through complex environments, they coordinate gaze and steering to sample the visual information needed to guide movement. Gaze and steering behavior have been extensively studied in the context of automobile driving along a winding road, leading to accounts of movement along well-defined paths over flat, obstacle-free surfaces. However, humans are also capable of visually guiding self-motion in environments that are cluttered with obstacles and lack an explicit path. An extreme example of such behavior occurs during first-person view drone racing, in which pilots maneuver at high speeds through a dense forest. In this study, we explored the gaze and steering behavior of skilled drone pilots. Subjects guided a simulated quadcopter along a racecourse embedded within a custom-designed forest-like virtual environment. The environment was viewed through a head-mounted display equipped with an eye tracker to record gaze behavior. In two experiments, subjects performed the task in multiple conditions that varied in terms of the presence of obstacles (trees), waypoints (hoops to fly through), and a path to follow. Subjects often looked in the general direction of things that they wanted to steer toward, but gaze fell on nearby objects and surfaces more often than on the actual path or hoops. Nevertheless, subjects were able to perform the task successfully, steering at high speeds while remaining on the path, passing through hoops, and avoiding collisions. In conditions that contained hoops, subjects adapted how they approached the most immediate hoop in anticipation of the position of the subsequent hoop. Taken together, these findings challenge existing models of steering that assume that steering is tightly coupled to where actors look. We consider the study's broader implications as well as limitations, including the focus on a small sample of highly skilled subjects and inherent noise in measurement of gaze direction.
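The models being challenged include gaze-coupled control laws such as the two-point visual control model of Salvucci and Gray (2004), in which the steering command is driven by the visual angles of a near and a far point that the driver looks toward. A minimal Python sketch of that class of model (not the authors' implementation; the gains are illustrative assumptions, not fitted values):

```python
import numpy as np

def two_point_steering(theta_far, theta_near, dt,
                       k_far=16.0, k_near=12.0, k_i=4.0):
    """Two-point visual control law for steering (after Salvucci & Gray).

    The steering-rate command is a weighted sum of the rotation rates
    of a far point and a near point on the intended path, plus the
    near-point angle itself. Inputs are time series of visual angles
    in radians sampled every dt seconds; gains are illustrative only.
    """
    dtheta_far = np.gradient(theta_far, dt)    # far-point rotation rate
    dtheta_near = np.gradient(theta_near, dt)  # near-point rotation rate
    return k_far * dtheta_far + k_near * dtheta_near + k_i * theta_near
```

In such a scheme, where the actor looks directly determines the steering command, which is exactly the tight gaze-steering coupling that the drone-racing data call into question.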
Topics: Humans; Movement; Automobile Driving; Motion; Psychomotor Performance; Fixation, Ocular
PubMed: 38457388
DOI: 10.1371/journal.pone.0289855
Behavior Research Methods, Apr 2024
Eye movements offer valuable insights for clinical interventions, diagnostics, and understanding visual perception. The process usually involves recording a participant's eye movements and analyzing them in terms of various gaze events. Manual identification of these events is extremely time-consuming. Although the field has seen the development of automatic event detection and classification methods, these methods have primarily focused on distinguishing events when participants remain stationary. With increasing interest in studying gaze behavior in freely moving participants, such as during daily activities like walking, new methods are required to automatically classify events in data collected under unrestricted conditions. Existing methods often rely on additional information from depth cameras or inertial measurement units (IMUs), which are not typically integrated into mobile eye trackers. To address this challenge, we present a framework for classifying gaze events based solely on eye-movement signals and scene video footage. Our approach, the Automatic Classification of gaze Events in Dynamic and Natural Viewing (ACE-DNV), analyzes eye movements in terms of velocity and direction and leverages visual odometry to capture head and body motion. Additionally, ACE-DNV assesses changes in image content surrounding the point of gaze. We evaluated the performance of ACE-DNV using a publicly available dataset and showcased its ability to discriminate between gaze fixation, gaze pursuit, gaze following, and gaze shifting (saccade) events. ACE-DNV exhibited performance comparable to previous methods while eliminating the need for additional devices such as IMUs and depth cameras. In summary, ACE-DNV simplifies the automatic classification of gaze events in natural and dynamic environments. The source code is accessible at https://github.com/arnejad/ACE-DNV.
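ACE-DNV combines eye-movement velocity and direction with visual odometry and gaze-region image content; the actual implementation lives in the linked repository. For orientation, the velocity ingredient alone corresponds to a classic velocity-threshold (I-VT-style) classifier. A minimal sketch, where the 100°/s threshold is a conventional assumption rather than a value taken from the paper:

```python
import numpy as np

def ivt_classify(x_deg, y_deg, t_s, saccade_thresh=100.0):
    """Label gaze samples 'saccade' or 'fixation' by angular speed.

    x_deg, y_deg: gaze direction in degrees; t_s: timestamps in seconds.
    Velocity thresholding alone ignores head and body motion, so on
    mobile recordings it cannot separate fixation, pursuit, and gaze
    following -- the gap that ACE-DNV fills with visual odometry and
    gaze-centered image content.
    """
    vx = np.gradient(x_deg, t_s)   # horizontal gaze velocity (deg/s)
    vy = np.gradient(y_deg, t_s)   # vertical gaze velocity (deg/s)
    speed = np.hypot(vx, vy)
    return np.where(speed > saccade_thresh, "saccade", "fixation")
```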
Topics: Humans; Eye-Tracking Technology; Eye Movements; Fixation, Ocular; Visual Perception; Video Recording; Male; Adult; Female
PubMed: 38448726
DOI: 10.3758/s13428-024-02358-8
BMC Ophthalmology, Mar 2024
PURPOSE
To measure the dislocation forces in relation to haptic material, flange size and needle used.
SETTING
Hanusch Hospital, Vienna, Austria.
DESIGN
Laboratory Investigation.
METHODS, MAIN OUTCOME MEASURES
30 G (gauge) thin wall and 27 G standard needles were used for a 2 mm tangential scleral tunnel in combination with different PVDF (polyvinylidene fluoride) and PMMA (polymethylmethacrylate) haptics. Flanges were created by heating 1 mm of the haptic end, non-forceps assisted in PVDF and forceps assisted in PMMA haptics. The dislocation force was measured in non-preserved cadaver sclera using a tensiometer device.
RESULTS
PVDF flanges were mushroom-shaped, whereas PMMA flanges were conic. For 30 G needle tunnels, the dislocation forces for PVDF and PMMA haptic flanges were 1.58 ± 0.68 N (n = 10) and 0.70 ± 0.14 N (n = 9), respectively (p = 0.003). For 27 G needle tunnels, the dislocation forces for PVDF and PMMA haptic flanges were 0.31 ± 0.35 N (n = 3) and 0.0 N (n = 4), respectively. Flange size correlated with dislocation force in the 30 G needle-tunnel experiments (r = 0.92) when flanges were larger than 384 micrometres.
CONCLUSIONS
The highest dislocation forces were found for PVDF haptic flanges, with their characteristic mushroom-like shape, in 30 G thin-wall needle scleral tunnels. Forceps-assisted flange creation in PMMA haptics did not compensate for the disadvantage of their characteristic conic flange shape.
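The abstract does not say which statistical test yielded p = 0.003 for the 30 G PVDF-versus-PMMA comparison; a two-sample test on the measured forces is one natural choice. A sketch using Welch's t-test on hypothetical per-sample values (only the group means, SDs, and sizes are reported above; these raw numbers are invented to match the means):

```python
from scipy import stats

# Hypothetical dislocation forces in newtons (n = 10 PVDF, n = 9 PMMA);
# invented values whose means match the reported 1.58 N and 0.70 N.
pvdf = [1.2, 2.1, 0.9, 1.8, 2.4, 1.1, 1.5, 2.0, 1.4, 1.4]
pmma = [0.6, 0.8, 0.7, 0.5, 0.9, 0.6, 0.8, 0.7, 0.7]

t, p = stats.ttest_ind(pvdf, pmma, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4f}")

# The reported flange-size/force relationship (r = 0.92) would be a
# Pearson correlation: stats.pearsonr(flange_size_um, force_n)
```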
Topics: Humans; Haptic Technology; Polymethyl Methacrylate; Sclera; Lenses, Intraocular; Fluorocarbon Polymers; Polyvinyls
PubMed: 38443841
DOI: 10.1186/s12886-024-03369-x
Perception, May 2024
To read this article, you have to constantly direct your gaze at the words on the page. If you go for a run instead, your gaze will be less constrained, so many factors could influence where you look. We show that you are likely to spend less time looking at the path just in front of you when running alone than when running with someone else, presumably because the presence of the other runner makes foot placement more critical.
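The measure behind this claim is the proportion of gaze time spent on the path region just ahead of the runner. Given per-sample gaze labels from coded scene video, the computation is a one-liner; a minimal sketch with assumed (hypothetical) label names:

```python
def dwell_proportion(labels, target="path_near"):
    """Fraction of gaze samples on a given region of interest.

    labels: per-sample region labels (e.g., from manual or automatic
    coding of scene video); assumes a uniform sampling rate, so the
    sample fraction equals the time fraction.
    """
    labels = list(labels)
    return labels.count(target) / len(labels) if labels else 0.0

# dwell_proportion(["path_near", "sky", "path_near", "other_runner"])
# -> 0.5
```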
Topics: Humans; Running; Adult; Male; Female; Young Adult; Fixation, Ocular
PubMed: 38409958
DOI: 10.1177/03010066241235112
Sensors (Basel, Switzerland), Feb 2024
In this paper, we present and evaluate a calibration-free mobile eye-tracking system. The system's mobile device consists of three cameras: an IR eye camera, an RGB eye camera, and a front-scene RGB camera. The three cameras build a reliable corneal imaging system that is used to estimate the user's point of gaze continuously and reliably. The system auto-calibrates the device unobtrusively. Since the user is not required to follow any special instructions to calibrate the system, they can simply put on the eye tracker and start moving around using it. Deep learning algorithms together with 3D geometric computations were used to auto-calibrate the system per user. Once the model is built, a point-to-point transformation from the eye camera to the front camera is computed automatically by matching corneal and scene images, which allows the gaze point in the scene image to be estimated. The system was evaluated by users in real-life scenarios, indoors and outdoors. The average gaze error was 1.6° indoors and 1.69° outdoors, which compares favorably with state-of-the-art approaches.
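The point-to-point transformation from the eye camera to the front camera, obtained by matching corneal and scene images, can be approximated as a homography fitted to feature correspondences. A simplified OpenCV sketch of that matching step (the actual system also involves deep learning and full 3D geometry, and the cornea is a curved mirror, so a plane-to-plane homography is only a rough stand-in):

```python
import cv2
import numpy as np

def corneal_to_scene_homography(corneal_img, scene_img):
    """Fit a homography mapping corneal-image points to scene-image
    points via ORB feature matching. Expects grayscale uint8 images.
    """
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(corneal_img, None)
    kp2, des2 = orb.detectAndCompute(scene_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    src = np.float32([kp1[m.queryIdx].pt for m in matches[:200]])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches[:200]])
    H, _ = cv2.findHomography(src.reshape(-1, 1, 2),
                              dst.reshape(-1, 1, 2),
                              cv2.RANSAC, 5.0)
    return H  # apply with cv2.perspectiveTransform to map a gaze point
```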
Topics: Eye Movements; Fixation, Ocular; Eye-Tracking Technology; Cornea; Algorithms
PubMed: 38400392
DOI: 10.3390/s24041237
Journal of Vision, Feb 2024
Virtual reality (VR) technology has advanced significantly in recent years, with many potential applications. However, it is unclear how well VR simulations mimic real-world experiences, particularly in terms of eye-hand coordination. This study compares eye-hand coordination from a previously validated real-world object interaction task to the same task re-created in controller-mediated VR. We recorded eye and body movements and segmented participants' gaze data using the movement data. In the real-world condition, participants wore a head-mounted eye tracker and motion capture markers and moved a pasta box into and out of a set of shelves. In the VR condition, participants wore a VR headset and moved a virtual box using handheld controllers. Unsurprisingly, VR participants took longer to complete the task. Before picking up or dropping off the box, participants in the real world visually fixated the box about half a second before their hand arrived at the area of action. This 500-ms minimum fixation time before the hand arrived was preserved in VR. Real-world participants disengaged their eyes from the box almost immediately after their hand initiated or terminated the interaction, but VR participants stayed fixated on the box for much longer after it was picked up or dropped off. We speculate that the limited haptic feedback during object interactions in VR forces users to maintain visual fixation on objects longer than in the real world, altering eye-hand coordination. These findings suggest that current VR technology does not replicate real-world experience in terms of eye-hand coordination.
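The half-second figure is an eye-hand lead time: the interval between fixation onset on the box and the hand's arrival at the area of action. Given segmented event timestamps, the computation is direct; a sketch with assumed variable names:

```python
def eye_lead_ms(fixation_onset_s, hand_arrival_s):
    """Eye-hand lead time in milliseconds.

    Positive values mean the eyes landed on the object before the
    hand did, as in the ~500 ms lead reported for both conditions;
    the analogous disengagement latency is gaze offset minus the
    time the hand initiated or terminated the interaction.
    """
    return (hand_arrival_s - fixation_onset_s) * 1000.0

# eye_lead_ms(fixation_onset_s=12.30, hand_arrival_s=12.81) -> ~510
```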
Topics: Humans; Virtual Reality; Movement; Hand; Fixation, Ocular; Eye
PubMed: 38393742
DOI: 10.1167/jov.24.2.9
Ocular Oncology and Pathology, Aug 2023
PURPOSE
The aim of this study was to evaluate the use of tissue glue, instead of conventional suturing, to secure an I-125 plaque in human eyes with uveal melanoma.
METHODS
We studied 6 patients with choroidal melanoma undergoing plaque radiotherapy who were found to have thin sclera intraoperatively. Following tumor localization and plaque placement, tissue glue was applied over and around the plaque surface. The plaque was held securely in all cases. Conjunctivoplasty was performed with 7-0 vicryl sutures to ensure complete coverage and stability of the plaque. At the time of plaque removal, the tissue glue clot was in place with plaque secured. The clot and plaque were removed without difficulty.
RESULTS
In all 6 cases, the tissue glue secured the plaque in place for the required radiation duration (mean 117.6 h, median 103.1 h, range 101.6-162.5 h), delivering a tumor apex dose rate of 63.6 cGy/h on average (median 69.6 cGy/h, range 44.7-70.5 cGy/h). At the time of plaque removal, the plaque was in the designated position without displacement in all cases. There were no toxicities from the tissue glue.
CONCLUSIONS
Tissue glue can serve as an alternative means of fixing a radiotherapy plaque to the sclera without the need for suturing. This technique might be useful in eyes with thin sclera.
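As a rough consistency check on these figures, total apex dose is dose rate multiplied by dwell time: 63.6 cGy/h × 117.6 h ≈ 7,479 cGy, i.e., about 74.8 Gy. (Rough only, because the product of the two means need not equal the mean of the per-patient rate-times-duration products.) In code:

```python
def total_dose_gy(rate_cgy_per_h, duration_h):
    """Total delivered dose in gray, given a dose rate and dwell time."""
    return rate_cgy_per_h * duration_h / 100.0  # 100 cGy = 1 Gy

# Using the reported means: total_dose_gy(63.6, 117.6) -> ~74.8 Gy
```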
PubMed: 38376095
DOI: 10.1159/000529382
Journal of Neurology, May 2024
BACKGROUND
Disconjugate eye movements are essential for depth perception in frontal-eyed species, but their underlying neural substrates are largely unknown. Lesions in the midbrain can cause disconjugate eye movements. While vertically disconjugate eye movements have been linked to defective visuo-vestibular integration, the pathophysiology and neuroanatomy of horizontally disconjugate eye movements remain elusive.
METHODS
A patient with a solitary focal midbrain lesion was examined using detailed clinical ocular motor assessments, binocular videooculography and diffusion-weighted MRI, which was co-registered to a high-resolution cytoarchitectonic MR-atlas.
RESULTS
The patient exhibited both vertically and horizontally disconjugate eye alignment and nystagmus. Binocular videooculography showed a strong correlation of vertical and horizontal oscillations during fixation but not in darkness. Oscillation intensities and waveforms were modulated by fixation, illumination, and gaze position, suggesting shared visual- and vestibular-related mechanisms. The lesion was mapped to a functionally ill-defined area of the dorsal midbrain, adjacent to the posterior commissure and sparing nuclei with known roles in vertical gaze control.
CONCLUSION
A circumscribed region in the dorsal midbrain appears to be a key node for disconjugate eye movements in both vertical and horizontal planes. Lesioning this area produces a unique ocular motor syndrome mirroring hallmarks of developmental strabismus and nystagmus. Further circuit-level studies could offer pivotal insights into shared pathomechanisms of acquired and developmental disorders affecting eye alignment.
Topics: Humans; Eye Movements; Mesencephalon; Nystagmus, Pathologic; Ocular Motility Disorders
PubMed: 38353747
DOI: 10.1007/s00415-023-12155-6
Scientific Reports, Feb 2024
A tendency to look at the left side of faces from the observer's point of view has been found in older children and adults, but it is not known when this face-specific left gaze bias develops and what factors may influence individual differences in gaze lateralization. Therefore, the aims of this study were to estimate gaze lateralization during face observation and to more broadly estimate lateralization tendencies across a wider set of social and non-social stimuli, in early infancy. In addition, we aimed to estimate the influence of genetic and environmental factors on lateralization of gaze. We studied gaze lateralization in 592 5-month-old twins (282 females, 330 monozygotic twins) by recording their gaze while viewing faces and two other types of stimuli that consisted of either collections of dots (non-social stimuli) or faces interspersed with objects (mixed stimuli). A right gaze bias was found when viewing faces, and this measure was moderately heritable (A = 0.38, 95% CI 0.24; 0.50). A left gaze bias was observed in the non-social condition, while a right gaze bias was found in the mixed condition, suggesting that there is no general left gaze bias at this age. Genetic influence on individual differences in gaze lateralization was only found for the tendency to look at the right versus left side of faces, suggesting genetic specificity of lateralized gaze when viewing faces.
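The heritability estimate A = 0.38 comes from twin modeling, in which monozygotic (MZ) and dizygotic (DZ) twin-pair correlations are decomposed into additive genetic (A), shared environmental (C), and unique environmental (E) components. The crudest version is Falconer's formula; a sketch with made-up correlations chosen so that A ≈ 0.38 (the study itself would fit a full structural-equation ACE model, not this shortcut):

```python
def falconer_ace(r_mz, r_dz):
    """Crude ACE decomposition from twin-pair correlations.

    A = 2 * (r_mz - r_dz)   additive genetic variance share
    C = 2 * r_dz - r_mz     shared-environment share
    E = 1 - r_mz            unique-environment share (plus error)
    """
    return 2.0 * (r_mz - r_dz), 2.0 * r_dz - r_mz, 1.0 - r_mz

# Hypothetical correlations:
# falconer_ace(0.40, 0.21) -> approx. (0.38, 0.02, 0.60)
```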
Topics: Adult; Female; Infant; Child; Humans; Eye Movements; Face; Biological Phenomena; Fixation, Ocular
PubMed: 38351309
DOI: 10.1038/s41598-024-54373-6
Journal of Cataract and Refractive Surgery, Jun 2024
Publication types: Observational Study; Comparative Study
PURPOSE
To evaluate which secondary intraocular lens (IOL) implantation technique was more successful in achieving the best postoperative results and refractive outcomes between retropupillary iris-claw IOL (ICIOL) and flanged intrascleral IOL (FIIOL) fixation with the Yamane technique.
SETTING
Eye Clinic of the University of Trieste, Trieste, Italy.
DESIGN
Retrospective observational study.
METHODS
116 eyes of 110 patients who underwent ICIOL or FIIOL were analyzed. Patients with follow-up shorter than 6 months or with incomplete clinical data were excluded. Collected data included demographics, ocular comorbidity, indication of surgery, intraocular pressure, early (≤1 month) and late (>1 month) postoperative complications, corrected distance visual acuity (CDVA), and manifest refraction at the last follow-up visit.
RESULTS
50% (n = 58) of eyes underwent FIIOL and 50% (n = 58) ICIOL implantation for aphakia (n = 44, 38%) or IOL dislocation (n = 72, 62%). No statistically significant differences in demographics, comorbidity, follow-up duration, postoperative complications, or surgical indications were found. The refractive prediction error (RPE) was 0.69 ± 0.94 diopter (D) in the FIIOL group and 0.21 ± 0.75 D in the ICIOL group (P = .03), indicating residual hyperopia after both techniques. RPE, mean absolute error, and median absolute error were higher in the FIIOL group (P = .003). ICIOL implantation was more successful in obtaining an RPE between -0.50 D and +0.50 D (52% of ICIOL, n = 30, vs. 31% of FIIOL, n = 18).
CONCLUSIONS
Both techniques effectively improved CDVA relative to preoperative values, with no statistical difference between them. Although complication rates did not significantly differ, the FIIOL group exhibited less predictable refractive outcomes. Adjusting the dioptric power of the 3-piece IOL to prevent myopic shift, as is done for ciliary sulcus implantation, is not recommended.
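Refractive prediction error is the achieved postoperative spherical equivalent minus the refraction predicted by the IOL power formula; the mean, mean absolute, and median absolute errors reported above follow directly from it. A sketch with hypothetical refraction values:

```python
import numpy as np

def prediction_errors(achieved_se_d, predicted_se_d):
    """Refractive prediction error metrics in diopters.

    RPE = achieved - predicted spherical equivalent; positive values
    indicate residual hyperopia relative to the formula's prediction.
    """
    rpe = np.asarray(achieved_se_d) - np.asarray(predicted_se_d)
    return {
        "mean_rpe": rpe.mean(),
        "mean_abs_error": np.abs(rpe).mean(),
        "median_abs_error": np.median(np.abs(rpe)),
        "within_half_d": np.mean(np.abs(rpe) <= 0.50),
    }

# prediction_errors([0.25, 1.00, -0.50], [0.00, 0.00, -0.25])
# -> mean ~0.33 D, MAE 0.50 D, MedAE 0.25 D, within +/-0.50 D: 2/3
```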
Topics: Humans; Retrospective Studies; Visual Acuity; Lens Implantation, Intraocular; Iris; Male; Female; Refraction, Ocular; Lenses, Intraocular; Middle Aged; Aged; Follow-Up Studies; Phacoemulsification; Pseudophakia; Postoperative Complications; Adult; Treatment Outcome
PubMed: 38350232
DOI: 10.1097/j.jcrs.0000000000001421