Scientific Reports Jan 2021
When children have visual and/or oculomotor deficits, early diagnosis is critical for rehabilitation. The developmental eye movement (DEM) test is a visual-verbal number naming test that aims to measure oculomotor dysfunction in children by comparing scores on a horizontal and vertical subtest. However, empirical comparison of oculomotor behavior during the two subtests is missing. Here, we measured eye movements of healthy children while they performed a digital version of the DEM. In addition, we measured visual processing speed using the Speed Acuity test. We found that parameters of saccade behavior, such as the number, amplitude, and direction of saccades, correlated with performance on the horizontal, but not the vertical subtest. However, the time spent on making saccades was very short compared to the time spent on number fixations and the total time needed for either subtest. Fixation durations correlated positively with performance on both subtests and co-varied tightly with visual processing speed. Accordingly, horizontal and vertical DEM scores showed a strong positive correlation with visual processing speed. We therefore conclude that the DEM is not suitable to measure saccade behavior, but can be a useful indicator of visual-verbal naming skills, visual processing speed, and other cognitive factors of clinical relevance.
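The DEM score described above rests on comparing the horizontal and vertical subtests. As an illustration of how DEM-style scores are conventionally derived, here is a minimal Python sketch assuming the standard paper version of the test (80 numbers per subtest, horizontal time adjusted for omissions and additions); the function names and example values are hypothetical, not taken from this study:

```python
def adjusted_horizontal_time(raw_time_s, omissions=0, additions=0, n_numbers=80):
    """Adjust the raw horizontal subtest time for naming errors
    (conventional DEM scoring with 80 numbers per subtest)."""
    return raw_time_s * n_numbers / (n_numbers - omissions + additions)

def dem_ratio(horizontal_time_s, vertical_time_s, omissions=0, additions=0):
    """DEM ratio: adjusted horizontal time divided by vertical time.
    Ratios well above 1 are traditionally read as oculomotor dysfunction."""
    return adjusted_horizontal_time(horizontal_time_s, omissions, additions) / vertical_time_s

# Hypothetical example: 52 s horizontal subtest with 2 omissions, 40 s vertical
print(round(dem_ratio(52.0, 40.0, omissions=2), 3))
```

The abstract's conclusion suggests such a ratio tracks visual-verbal naming and processing speed rather than saccade behavior per se.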
Topics: Child; Eye Movements; Female; Fixation, Ocular; Humans; Male; Reading; Saccades; Visual Perception
PubMed: 33441953
DOI: 10.1038/s41598-020-80870-5
Strabismus Sep 2019
The goal of this study was to compare vertical fusion capability at different orbital eye positions in normal nonhuman primates and attempt to use this information to isolate the extraocular muscles (EOMs) that mediate vertical vergence. Scleral search coils were used to record movements of both eyes as two normal nonhuman primates (M1, M2) performed a vertical vergence task at different horizontal eye positions. In a control experiment, M1 was also tested at different angles of horizontal vergence. To elicit vertical vergence, a 50° x 50° stimulus comprising a central fixation cross and random dots elsewhere was presented separately to each eye under dichoptic viewing conditions. Vertical disparity was introduced by slowly displacing the stimulus for one eye vertically. Vertical fusion amplitude (maximum disparity that the monkey was able to fuse) and vertical vergence (maximum difference in vertical position of the two eyes) were measured. Vertical fusion capability differed at different orbital eye positions. Monkey M1 had significantly smaller vertical fusion capabilities when the right eye (RE) was abducted and the left eye (LE) adducted, while M2 had significantly smaller vertical fusion capabilities when the RE was adducted and LE abducted. M1 also showed greater vertical fusion capability for near gaze. M1 data suggested that the vertical recti mediated vertical vergence in the RE and the oblique muscles in the LE, while M2 data suggested that the oblique muscles mediated vertical vergence in the RE and the vertical recti in the LE. The variable results within the same animal and across animals suggest that EOM involvement during vertical fusional vergence is idiosyncratic and likely a weighted combination of multiple cyclovertical muscles.
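The two measures defined above, fusion amplitude (maximum fused disparity) and vertical vergence (inter-ocular difference in vertical position), can be sketched as follows; the arrays, flags, and function names are illustrative placeholders, not the authors' analysis code:

```python
import numpy as np

def vertical_vergence(left_y_deg, right_y_deg):
    """Vertical vergence trace: difference in vertical position of the two eyes."""
    return np.asarray(left_y_deg) - np.asarray(right_y_deg)

def fusion_amplitude(disparity_deg, fused_flags):
    """Maximum vertical disparity at which fusion was still maintained."""
    d = np.asarray(disparity_deg)[np.asarray(fused_flags, dtype=bool)]
    return float(d.max()) if d.size else 0.0

# Toy trial: disparity ramps up slowly; fusion breaks beyond 2.0 deg here
disp  = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
fused = [True, True, True, True, False, False]
assert fusion_amplitude(disp, fused) == 2.0
```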
Topics: Animals; Convergence, Ocular; Eye Movements; Fixation, Ocular; Macaca mulatta; Oculomotor Muscles; Vision Disparity
PubMed: 31223057
DOI: 10.1080/09273972.2019.1629465
PloS One 2018 (Clinical Trial)
BACKGROUND
During visual exploration or free-view, gaze positioning is largely determined by the tendency to maximize visual saliency: more salient locations are more likely to be fixated. However, when visual input is completely irrelevant for performance, such as with non-visual tasks, this saliency maximization strategy may be less advantageous and potentially even disruptive for task performance. Here, we examined whether visual saliency remains a strong driving force in determining gaze positions even in non-visual tasks. We tested three alternative hypotheses: a) that saliency is disadvantageous for non-visual tasks and therefore gaze would tend to shift away from it and towards non-salient locations; b) that saliency is irrelevant during non-visual tasks and therefore gaze would not be directed towards it but also not away from it; c) that saliency maximization is a strong behavioral drive that would prevail even during non-visual tasks.
METHODS
Gaze position was monitored as participants performed visual or non-visual tasks while they were presented with complex or simple images. The effect of attentional demands was examined by comparing an easy non-visual task with a more difficult one.
RESULTS
Exploratory behavior was evident, regardless of task difficulty, even when the task was non-visual and the visual input was entirely irrelevant. The observed exploratory behaviors included a strong tendency to fixate salient locations, a central fixation bias, and a gradual reduction in saliency for later fixations. These exploratory behaviors were spatially similar to those of an explicit visual exploration task but were, nevertheless, attenuated. Temporal differences were also found: the non-visual task showed longer fixations and later first fixations than the visual task, reflecting slower visual sampling.
CONCLUSION
We conclude that in the presence of a rich visual environment, visual exploration is evident even when there is no explicit instruction to explore. Compared to visually motivated tasks, exploration in non-visual tasks follows similar selection mechanisms, but occurs at a lower rate. This is consistent with the view that the non-visual task is the equivalent of a dual-task: it combines the instructed task with an uninstructed, perhaps even mandatory, exploratory behavior.
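The "tendency to fixate salient locations" reported above is commonly quantified by sampling a normalized saliency map at fixated pixels (a normalized-scanpath-saliency style score). A minimal sketch with toy data; the map, fixation lists, and function name are illustrative assumptions, not the study's materials:

```python
import numpy as np

rng = np.random.default_rng(0)

def saliency_at_fixations(saliency_map, fixations):
    """Mean z-scored saliency sampled at fixated pixels (NSS-style score)."""
    z = (saliency_map - saliency_map.mean()) / saliency_map.std()
    rows, cols = zip(*fixations)
    return z[list(rows), list(cols)].mean()

# Toy saliency map with one bright 'salient' patch
smap = rng.random((100, 100))
smap[40:60, 40:60] += 3.0

on_salient = [(50, 50), (45, 55), (58, 42)]
elsewhere = [(5, 5), (90, 10), (10, 90)]

# Fixations on the salient patch score higher than fixations elsewhere
assert saliency_at_fixations(smap, on_salient) > saliency_at_fixations(smap, elsewhere)
```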
Topics: Adult; Exploratory Behavior; Female; Fixation, Ocular; Humans; Male
PubMed: 29933381
DOI: 10.1371/journal.pone.0198242
Computational Intelligence and... 2021
In recent years, the prediction of salient regions in RGB-D images has become a focus of research. Compared to its RGB counterpart, saliency prediction for RGB-D images is more challenging. In this study, we propose a novel deep multimodal fusion autoencoder for the saliency prediction of RGB-D images. The core trainable autoencoder of the RGB-D saliency prediction model takes two raw modalities (RGB and depth/disparity information) as inputs and their corresponding eye-fixation attributes as labels. The autoencoder comprises four main networks: a color channel network, a disparity channel network, a feature concatenation network, and a feature learning network. The autoencoder can mine the complex relationship between color and disparity cues and make the most of their complementary characteristics. Finally, the saliency map is predicted via a feature combination subnetwork, which combines the deep features extracted from the prior-learning and convolutional feature-learning subnetworks. We compare the proposed autoencoder with other saliency prediction models on two publicly available benchmark datasets. The results demonstrate that the proposed autoencoder outperforms these models by a significant margin.
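To make the two-branch, feature-concatenation idea concrete, here is a toy forward pass in plain numpy: separate color and disparity channel networks, concatenated features, a feature learning layer, and a sigmoid saliency readout. This is not the authors' architecture; layer sizes and random weights are arbitrary placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

def dense(x, w, b):
    """Single fully connected layer with ReLU activation."""
    return np.maximum(0.0, x @ w + b)

# Toy shapes: a flattened 64-pixel RGB patch and a matching disparity patch
d_rgb, d_disp, d_feat = 64 * 3, 64, 32

# Color and disparity channel networks (one layer each, random weights for the sketch)
w_rgb, b_rgb = rng.normal(size=(d_rgb, d_feat)) * 0.05, np.zeros(d_feat)
w_disp, b_disp = rng.normal(size=(d_disp, d_feat)) * 0.05, np.zeros(d_feat)

# Feature concatenation network, then a 'saliency' readout per patch
w_fuse, b_fuse = rng.normal(size=(2 * d_feat, d_feat)) * 0.05, np.zeros(d_feat)
w_out, b_out = rng.normal(size=(d_feat, 1)) * 0.05, np.zeros(1)

def predict_saliency(rgb_patch, disp_patch):
    f = np.concatenate([dense(rgb_patch, w_rgb, b_rgb),
                        dense(disp_patch, w_disp, b_disp)], axis=-1)
    h = dense(f, w_fuse, b_fuse)
    return 1.0 / (1.0 + np.exp(-(h @ w_out + b_out)))  # sigmoid saliency in [0, 1]

s = predict_saliency(rng.random(d_rgb), rng.random(d_disp))
assert 0.0 <= float(s) <= 1.0
```

The design point the abstract emphasizes is that fusion happens on learned features, not raw pixels, so each modality gets its own encoder before concatenation.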
Topics: Fixation, Ocular
PubMed: 34035801
DOI: 10.1155/2021/6610997
Vision Research Jan 2015
Humans typically make use of both eyes during reading, which necessitates precise binocular coordination in order to achieve a unified perceptual representation of written text. A number of studies have explored the magnitude and effects of naturally occurring and induced horizontal fixation disparity during reading and non-reading tasks. However, the literature concerning the processing of disparities in different dimensions, particularly in the context of reading, is considerably limited. We therefore investigated vertical vergence in response to stereoscopically presented linguistic stimuli with varying levels of vertical offset. A lexical decision task was used to explore the ability of participants to fuse binocular image disparity in the vertical direction during word identification. Additionally, a lexical frequency manipulation explored the potential interplay between visual fusion processes and linguistic processes. Results indicated that no significant motor fusional responses were made in the vertical dimension (all p-values>.11), though that did not hinder successful lexical identification. In contrast, horizontal vergence movements were consistently observed on all fixations in the absence of a horizontal disparity manipulation. These findings add to the growing understanding of binocularity and its role in written language processing, and fit neatly with previous literature regarding binocular coordination in non-reading tasks.
Topics: Adult; Convergence, Ocular; Eye Movements; Fixation, Ocular; Humans; Reading; Vision Disparity; Vision, Binocular
PubMed: 25433156
DOI: 10.1016/j.visres.2014.10.034
Scientific Reports May 2024
Gaze estimation has long been recognised as having potential as the basis for human-computer interaction (HCI) systems, but usability and robustness of performance remain challenging. This work focuses on systems in which there is a live video stream showing enough of the subject's face to track eye movements, and some means to infer gaze location from detected eye features. Currently, systems generally require some form of calibration or set-up procedure at the start of each user session. Here we explore some simple strategies for enabling gaze-based HCI to operate immediately and robustly without any explicit set-up tasks. We explore different choices of coordinate origin for combining extracted features from multiple subjects, and the replacement of subject-specific calibration by system initiation based on prior models. Results show that referencing all extracted features to local coordinate origins determined by subject start position enables robust immediate operation. Combining this approach with an adaptive gaze estimation model using an interactive user interface enables continuous operation with 75th-percentile gaze errors of 0.7° and maximum gaze errors of 1.7° during prospective testing. These constitute state-of-the-art results and have the potential to enable a new generation of reliable gaze-based HCI systems.
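The key idea reported above, referencing features to a local origin determined by the subject's start position so that subject-specific offsets cancel, can be sketched as follows. The helper name and the choice of averaging the first few samples as the "start position" are assumptions for illustration:

```python
import numpy as np

def to_local_origin(features, n_init=10):
    """Re-reference extracted gaze features to a per-session local origin,
    taken here as the mean of the first n_init samples (the start position)."""
    origin = features[:n_init].mean(axis=0)
    return features - origin

# Two 'subjects' with the same gaze pattern but different constant offsets
pattern = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]] * 10)
subj_a = pattern + np.array([5.0, -3.0])
subj_b = pattern + np.array([-2.0, 7.0])

# After local referencing, the subject-specific offsets vanish,
# so features from multiple subjects can be combined in one model
a_loc = to_local_origin(subj_a)
b_loc = to_local_origin(subj_b)
assert np.allclose(a_loc, b_loc)
```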
Topics: Humans; Fixation, Ocular; Eye Movements; User-Computer Interface; Male; Eye-Tracking Technology; Female; Adult
PubMed: 38778122
DOI: 10.1038/s41598-024-62365-9
Journal of Vision Mar 2018
Small saccades, known as microsaccades, occur frequently during fixation. Several recent studies have argued that a considerable fraction of these movements are present in the traces from one eye only. This claim contrasts with the findings of older reports, which concluded that microsaccades, like larger saccades, are virtually always binocular events. Here we examined the characteristics of small saccades by means of two of the most established high-resolution eye-tracking techniques available. A binocular Dual Purkinje Image eye-tracker was used to record eye movements while observers fixated, with their head immobilized, on markers displayed on a monitor. A specially designed eye-coil system was used to measure eye movements during normal head-free viewing, while subjects fixated on markers at various distances. Monocular microsaccades were virtually absent in both datasets. In the head-fixed data, not a single monocular microsaccade was observed. In the head-free data, only one event appeared to be monocular out of more than a thousand saccades. Monocular microsaccades do not seem to occur during normal head-free or head-immobilized fixation.
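A simple way to operationalise the binocular/monocular distinction discussed above is to look for a temporally matching saccade onset in the fellow eye. The ±10 ms window and function name below are illustrative choices, not the authors' criterion:

```python
def classify_binocularity(left_onsets_ms, right_onsets_ms, window_ms=10.0):
    """Label each left-eye saccade as binocular if a right-eye saccade
    begins within +/- window_ms of it, otherwise monocular."""
    labels = []
    for t in left_onsets_ms:
        binoc = any(abs(t - r) <= window_ms for r in right_onsets_ms)
        labels.append("binocular" if binoc else "monocular")
    return labels

left = [100.0, 480.0, 900.0]
right = [103.0, 478.0]          # no partner for the 900 ms event
print(classify_binocularity(left, right))
# → ['binocular', 'binocular', 'monocular']
```

Under this kind of criterion, the study's finding amounts to essentially every detected microsaccade having a partner in the fellow eye.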
Topics: Adult; Aged; Female; Fixation, Ocular; Humans; Male; Middle Aged; Saccades; Vision, Monocular; Young Adult
PubMed: 29677334
DOI: 10.1167/18.3.18
Vision Research Aug 2007
Oculomotor behavior contributes importantly to visual search. Saccadic eye movements can direct the fovea to potentially interesting parts of the visual field. Ensuing stable fixations enable the visual system to analyze those parts. The visual system may use fixation duration and saccade amplitude as optimizers of visual search performance. Here we investigate whether the time courses of fixation duration and saccade amplitude depend on the subject's knowledge of the search stimulus, in particular target conspicuity. We analyzed 65,000 saccades and fixations in a search experiment for (possibly camouflaged) military vehicles of unknown type and size. Mean saccade amplitude decreased and mean fixation duration increased gradually as a function of the ordinal saccade and fixation number. In addition, we analyzed 162,000 saccades and fixations recorded during a search experiment in which the location of the target was the only unknown. Whether target conspicuity was constant or varied appeared to have minor influence on the time courses of fixation duration and saccade amplitude. We hypothesize an intrinsic coarse-to-fine strategy for visual search that is used even when such a strategy is not optimal.
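The time courses reported above, saccade amplitude decreasing and fixation duration increasing with ordinal number, can be computed with a small helper like this (toy data following the described pattern, not the study's recordings):

```python
import numpy as np

def time_course(ordinals, values, max_ordinal=5):
    """Mean value per ordinal fixation/saccade number within a trial."""
    ordinals, values = np.asarray(ordinals), np.asarray(values)
    return [values[ordinals == k].mean() for k in range(1, max_ordinal + 1)]

# Toy data in the reported coarse-to-fine pattern
ordinal = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
amp_deg = [8.0, 7.0, 6.0, 5.5, 4.0, 4.5, 3.0, 3.2, 2.0, 2.4]
fix_ms  = [180, 190, 210, 220, 240, 250, 270, 280, 300, 310]

amps = time_course(ordinal, amp_deg)
durs = time_course(ordinal, fix_ms)
assert all(a > b for a, b in zip(amps, amps[1:]))   # amplitude decreases
assert all(a < b for a, b in zip(durs, durs[1:]))   # fixation duration increases
```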
Topics: Adult; Female; Field Dependence-Independence; Fixation, Ocular; Humans; Male; Pattern Recognition, Visual; Photic Stimulation; Psychomotor Performance; Saccades; Time Factors
PubMed: 17617434
DOI: 10.1016/j.visres.2007.05.002
Proceedings of the National Academy of... Dec 2015
Primates live in highly social environments, where prosocial behaviors promote social bonds and cohesion and contribute to group members' fitness. Despite a growing interest in the biological basis of nonhuman primates' social interactions, their underlying motivations remain a matter of debate. We report that macaque monkeys take into account the welfare of their peers when making behavioral choices that bring about pleasant or unpleasant outcomes for a monkey partner. Two macaques took turns in making decisions that could impact their own welfare or their partner's. Most monkeys were inclined to refrain from delivering a mildly aversive airpuff and to grant juice rewards to their partner. Choice consistency between these two types of outcome suggests that monkeys display coherent motivations across different social interactions. Furthermore, spontaneous affiliative group interactions in the home environment were mostly consistent with the measured social decisions, emphasizing the impact of preexisting social bonds on decision-making. Interestingly, unique behavioral markers predicted these decisions: benevolence was associated with enhanced mutual gaze and empathic eye blinking, whereas indifference or malevolence was associated with reduced or suppressed responses. Together, our results suggest that prosocial decision-making is sustained by an intrinsic motivation for social affiliation and controlled through positive and negative vicarious reinforcements.
Topics: Animals; Blinking; Choice Behavior; Decision Making; Empathy; Fixation, Ocular; Macaca; Male; Models, Theoretical; Multilevel Analysis; Social Behavior; Task Performance and Analysis
PubMed: 26621711
DOI: 10.1073/pnas.1504454112
Scientific Reports Jun 2018
The question of how an ambiguous word is processed in context has long been studied in psycholinguistics, and the present study examined this question further by investigating the spoken word recognition of Cantonese homophones (a common type of ambiguous word) in context. Sixty native Cantonese listeners participated in an eye-tracking experiment. Listeners were instructed to listen carefully to a sentence ending with a Cantonese homophone and then look at different visual probes (either Chinese characters or line-drawing pictures) presented simultaneously on the computer screen. Two findings were observed. First, sentence context exerted an early effect on homophone processing. Second, visual probes serving as phonological competitors had only a weak effect on spoken word recognition. Consistent with previous studies, the pattern of eye-movement results appeared to support an interactive processing account of homophone recognition.
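Visual-world analyses of this kind typically summarise gaze as the proportion of fixations on each probe per time bin; here is a minimal sketch with hypothetical area-of-interest labels (not the study's data):

```python
def fixation_proportion(aoi_labels, target="target"):
    """Proportion of fixation samples on a given area of interest per time bin.
    aoi_labels: list of per-bin lists of AOI labels pooled across trials."""
    return [sum(1 for a in bin_ if a == target) / len(bin_) for bin_ in aoi_labels]

# Toy time course: looks shift toward the target probe as the homophone disambiguates
bins = [
    ["target", "competitor", "other", "competitor"],
    ["target", "target", "competitor", "other"],
    ["target", "target", "target", "competitor"],
]
props = fixation_proportion(bins)
print(props)
# → [0.25, 0.5, 0.75]
```

Comparing such curves for target versus phonological-competitor probes is how early context effects and weak competitor effects, as reported above, are usually read off.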
Topics: Eye Movements; Fixation, Ocular; Humans; Language; Phonetics; Time Factors
PubMed: 29955081
DOI: 10.1038/s41598-018-27768-5