Behavior Research Methods, Feb 2020
We present SMITE, a toolbox for the measurement of eye movements using eye trackers manufactured by SMI GmbH. The toolbox provides a wrapper around the iViewX SDK provided by SMI, allowing simple integration of SMI eye trackers into Psychophysics Toolbox and PsychoPy programs. The toolbox provides a graphical interface for participant setup and calibration that is implemented natively in Psychophysics Toolbox and PsychoPy drawing commands, as well as providing several convenience features for, inter alia, creating gaze-contingent experiments and working with two-computer setups. Given that SMI GmbH and its support department have closed down, it is expected that this toolbox will provide owners of SMI eye trackers with an important new way to continue to create experiments with their systems. The eye trackers supported by this toolbox are the SMI HiSpeed 1250, SMI RED systems, SMI RED-m, SMI RED250mobile, and SMI REDn.
Topics: Eye Movements; Humans; Psychophysics; Software
PubMed: 30937844
DOI: 10.3758/s13428-019-01226-0
Hearing Research, Dec 2009 (Review)
The nervous system has evolved to transduce different types of environmental energy independently, for example light energy is transduced by the retina whereas sound energy is transduced by the cochlea. However, the neural processing of this energy is necessarily combined, resulting in a unified percept of a real-world object or event. These percepts can be modified in the laboratory, resulting in illusions that can be used to probe how multisensory integration occurs. This paper reviews studies that have utilized such illusory percepts in order to better understand the integration of auditory and visual signals in primates. Results from human psychophysical experiments where visual stimuli alter the perception of acoustic space (the ventriloquism effect) are discussed, as are experiments probing the underlying cortical mechanisms of this integration. Similar psychophysical experiments where auditory stimuli alter the perception of visual temporal processing are also described.
Topics: Acoustics; Animals; Auditory Perception; Hearing; Humans; Models, Biological; Models, Neurological; Nervous System; Psychophysics; Reaction Time; Sound Localization; Space Perception; Time Factors; Vision, Ocular; Visual Perception
PubMed: 19393306
DOI: 10.1016/j.heares.2009.04.009
Vision Research, Dec 2019
Perception of local properties of the visual field is influenced by aftereffects of adaptation. The tilt aftereffect describes repulsion of the perceived orientation of a line from the orientation of an adapting line. Analogous effects of spatial context are often called illusions. Repulsion of the perceived orientation of a grating from the orientation of a surrounding grating is referred to as the tilt illusion. In the same manner, the size aftereffect and Ebbinghaus illusion form a complementary pair of temporal and spatial context effects of size. Here we report psychophysical evidence for a previously unknown aspect-ratio illusion which causes the perceived aspect-ratio of a rectangle to be repelled from the aspect-ratio of rectangles surrounding it. This illusion provides a spatial analogue to the aspect-ratio aftereffect.
Topics: Form Perception; Humans; Optical Illusions; Orientation, Spatial; Photic Stimulation; Psychophysics; Visual Fields
PubMed: 31678618
DOI: 10.1016/j.visres.2019.10.003
Neuropsychologia, Oct 2016
Emotion perception is known to involve multiple operations and waves of analysis, but the specific nature of these processes remains poorly understood. Combining psychophysical testing and neurometric analysis of event-related potentials (ERPs) in a fear detection task with parametrically varied fear intensities (N = 45), we sought to elucidate key processes in fear perception. Building on psychophysical estimates of fear perception thresholds, our neurometric model fitting identified several putative operations and stages. Four key processes arose in sequence following face presentation: fear-neutral categorization (P1 at 100 ms), fear detection (P300 at 320 ms), valuation (an early subcomponent of the late positive potential, LPP, at 400-500 ms), and conscious awareness (a late subcomponent of the LPP at 500-600 ms). Furthermore, within-subject brain-behavior associations suggest that initial emotion categorization was mandatory and detached from behavior, whereas valuation and conscious awareness directly impacted the behavioral outcome (explaining 17% and 31% of the total variance, respectively). The current study thus reveals the chronometry of fear perception, ascribing psychological meaning to distinct underlying processes. The combination of early categorization and late valuation of fear reconciles conflicting (categorical versus dimensional) accounts of emotion, lending support to a hybrid model. Importantly, future research could specifically interrogate these psychological processes in various behaviors and psychopathologies (e.g., anxiety and depression).
Topics: Adolescent; Adult; Brain Mapping; Electroencephalography; Emotions; Evoked Potentials; Fear; Female; Humans; Male; Perception; Photic Stimulation; Psychophysics; Reaction Time; Signal Detection, Psychological; Statistics, Nonparametric; Young Adult
PubMed: 27546075
DOI: 10.1016/j.neuropsychologia.2016.08.018
Nature Human Behaviour, Oct 2021
Perception and action are tightly coupled: visual responses at the saccade target are enhanced right before saccade onset. This phenomenon, presaccadic attention, is a form of overt attention: the deployment of visual attention with concurrent eye movements. Presaccadic attention is well documented, but its underlying computational process remains unknown. This is in stark contrast to covert attention (the deployment of visual attention without concurrent eye movements), for which the computational processes are well characterized by a normalization model. Here, a series of psychophysical experiments reveal that presaccadic attention modulates visual performance only via response gain changes. A response gain change was observed even when attention field size increased, violating the predictions of a normalization model of attention. Our empirical results and model comparisons reveal that the perceptual modulations by overt presaccadic and covert spatial attention are mediated through different computations.
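The normalization model of attention invoked here can be sketched compactly. Below is a minimal 1-D toy illustration (all parameter values and function names are arbitrary choices for exposition, not taken from the study): a narrow attention field, relative to the stimulus, boosts the excitatory drive more than the broadly pooled suppressive drive, yielding a response-gain-like change, whereas a wide attention field scales both drives together, yielding a contrast-gain-like change.

```python
import numpy as np

def normalization_model(contrast, stim_width=5.0, attn_width=2.0,
                        attn_gain=4.0, pool_width=20.0, sigma=1e-3):
    """Toy 1-D normalization model of attention (illustrative only).

    Excitatory drive = attention field x stimulus drive; suppressive
    drive = excitatory drive pooled over a broad spatial region.
    Returns the response of the unit centered on the stimulus."""
    x = np.linspace(-50.0, 50.0, 1001)
    stim_drive = contrast * np.exp(-x**2 / (2 * stim_width**2))
    attn_field = 1.0 + (attn_gain - 1.0) * np.exp(-x**2 / (2 * attn_width**2))
    excitatory = attn_field * stim_drive
    pool = np.exp(-x**2 / (2 * pool_width**2))
    pool /= pool.sum()
    suppressive = np.convolve(excitatory, pool, mode="same")
    response = excitatory / (suppressive + sigma)
    return response[len(x) // 2]

contrasts = np.logspace(-3, 0, 7)
neutral = [normalization_model(c, attn_gain=1.0) for c in contrasts]
narrow = [normalization_model(c, attn_width=2.0) for c in contrasts]   # response gain
wide = [normalization_model(c, attn_width=30.0) for c in contrasts]    # contrast gain
```

In this toy model the narrow attention field raises the saturated high-contrast response well above the neutral curve (response gain), while the wide field leaves the high-contrast response essentially unchanged and instead boosts low-contrast responses (contrast gain). The study's finding of response gain even with an enlarged attention field is what violates this pattern.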
Topics: Attention; Humans; Photic Stimulation; Psychomotor Performance; Psychophysics; Saccades; Spatial Navigation; Spatial Processing; Visual Perception
PubMed: 33875838
DOI: 10.1038/s41562-021-01099-4
Frontiers in Neural Circuits, 2019
Adaptation is a mechanism by which cortical neurons adjust their responses according to recently viewed stimuli. Visual information is processed in a circuit formed by feedforward (FF) and feedback (FB) synaptic connections of neurons in different cortical layers. Here, the functional role of FF-FB streams and their synaptic dynamics in adaptation to natural stimuli is assessed with psychophysics and a neural model. We propose a cortical model which predicts psychophysically observed motion adaptation aftereffects (MAEs) after exposure to geometrically distorted natural image sequences. The model comprises direction-selective neurons in V1 and MT connected by recurrent FF and FB dynamic synapses. Psychophysically plausible model MAEs were obtained from synaptic changes within neurons tuned to salient direction signals of the broadband natural input. It is conceived that motion disambiguation by FF-FB interactions is critical to encode this salient information. Moreover, only FF-FB dynamic synapses operating at distinct rates predicted psychophysical MAEs at different adaptation time-scales, which could not be accounted for by single-rate dynamic synapses in either of the streams. Recurrent FF-FB pathways thereby play a role during adaptation in a natural environment, specifically in inducing multilevel cortical plasticity to salient information and in mediating adaptation at different time-scales.
Topics: Adaptation, Physiological; Animals; Cerebral Cortex; Humans; Models, Neurological; Nerve Net; Neurons; Psychophysics; Synapses
PubMed: 30814934
DOI: 10.3389/fncir.2019.00009
PLoS ONE, Sep 2010 (Review)
BACKGROUND
In visual psychophysics, precise display timing, particularly for brief stimulus presentations, is often required. The aim of this study was to systematically review the commonly applied methods for the computation of stimulus durations in psychophysical experiments and to contrast them with the true luminance signals of stimuli on computer displays.
METHODOLOGY/PRINCIPAL FINDINGS
In a first step, we systematically scanned the citation index Web of Science for studies with experiments with stimulus presentations for brief durations. Articles which appeared between 2003 and 2009 in three different journals were taken into account if they contained experiments with stimuli presented for less than 50 milliseconds. The 79 articles that matched these criteria were reviewed for their method of calculating stimulus durations. For those 75 studies where the method was either given or could be inferred, stimulus durations were calculated by the sum of frames (SOF) method. In a second step, we describe the luminance signal properties of the two monitor technologies which were used in the reviewed studies, namely cathode ray tube (CRT) and liquid crystal display (LCD) monitors. We show that SOF is inappropriate for brief stimulus presentations on both of these technologies. In extreme cases, SOF specifications and true stimulus durations are even unrelated. Furthermore, the luminance signals of the two monitor technologies are so fundamentally different that the duration of briefly presented stimuli cannot be calculated by a single method for both technologies. Statistics over stimulus durations given in the reviewed studies are discussed with respect to different duration calculation methods.
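The sum-of-frames calculation under scrutiny is simple to state, which may explain its prevalence; a minimal sketch (the function name and example refresh rates are illustrative, not from the review):

```python
def sof_duration_ms(n_frames, refresh_rate_hz):
    """Sum-of-frames (SOF): nominal duration = number of frames shown
    multiplied by the display's nominal frame period."""
    return n_frames * 1000.0 / refresh_rate_hz

# A one-frame stimulus on a 100 Hz display is specified as 10 ms by SOF,
# although on a CRT each pixel is actually lit only by a brief phosphor
# flash per frame, and on an LCD the pixels' rise and fall times smear
# the luminance signal beyond the nominal frame boundaries.
nominal = sof_duration_ms(1, 100)
```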
CONCLUSIONS/SIGNIFICANCE
The SOF method of duration specification, which clearly dominated the reviewed studies, leads to serious misspecifications, particularly for brief stimulus presentations. We strongly discourage its use for brief stimulus presentations on CRT and LCD monitors.
Topics: Animals; Humans; Psychology; Psychophysics; Time Factors
PubMed: 20927362
DOI: 10.1371/journal.pone.0012792
Proceedings of the National Academy of Sciences, Mar 2016
Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.
Topics: Brain; Humans; Models, Neurological; Nerve Net; Neural Networks, Computer; Pattern Recognition, Visual; Photic Stimulation; Psychophysics; Vision, Ocular; Visual Cortex; Visual Pathways; Visual Perception
PubMed: 26884200
DOI: 10.1073/pnas.1513198113
Journal of Neuroscience Methods, May 2019
BACKGROUND
Precise definition, rendering and manipulation of visual stimuli are essential in neuroscience. Rather than implementing these tasks from scratch, scientists benefit greatly from using reusable software routines from freely available toolboxes. Existing toolboxes work well when the operating system and hardware are painstakingly optimized, but may be less suited to applications that require multi-tasking (for example, closed-loop systems that involve real-time acquisition and processing of signals).
NEW METHOD
We introduce a new cross-platform visual stimulus toolbox called Shady (https://pypi.org/project/Shady), so called because of its heavy reliance on a shader program to perform parallel pixel processing on a computer's graphics processor. It was designed with an emphasis on performance robustness in multi-tasking applications under unforgiving conditions. For optimal timing performance, the CPU drawing management commands are carried out by a compiled binary engine. For configuring stimuli and controlling their changes over time, Shady provides a programmer's interface in Python, a powerful, accessible and widely used high-level programming language.
RESULTS
Our timing benchmark results illustrate that Shady's hybrid compiled/interpreted architecture requires less time to complete drawing operations, exhibits smaller variability in frame-to-frame timing, and hence drops fewer frames, than pure-Python solutions under matched conditions of resource contention. This performance gain comes despite an expansion of functionality (e.g. "noisy-bit" dithering as standard on all pixels and all frames, to enhance effective dynamic range) relative to previous offerings.
CONCLUSIONS
Shady simultaneously advances the functionality and performance available to scientists for rendering visual stimuli and manipulating them in real time.
Topics: Brain Injuries; Child; Eye Movement Measurements; Humans; Neurologic Examination; Neurosciences; Perceptual Disorders; Photic Stimulation; Point-of-Care Testing; Psychophysics; Software Design; Visual Perception
PubMed: 30946876
DOI: 10.1016/j.jneumeth.2019.03.020
Investigative Ophthalmology & Visual Science, May 2024
PURPOSE
This study aimed to explore the underlying mechanisms of the observed visuomotor deficit in amblyopia.
METHODS
Twenty-four amblyopic participants (25.8 ± 3.8 years; 15 males) and 22 normal participants (25.8 ± 2.1 years; 8 males) took part in the study. The participants were instructed to continuously track a randomly moving Gaussian target on a computer screen using a mouse. In experiment 1, the participants performed the tracking task at six different target sizes. In experiments 2 and 3, they were asked to track a target with its contrast adjusted to each individual's threshold. Tracking performance was represented by a kernel function calculated as the cross-correlation between the target and mouse displacements. The peak, latency, and width of the kernel were extracted and compared between the two groups.
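The kernel analysis described above amounts to cross-correlating frame-to-frame displacements of the target and the cursor, then reading off the peak and its lag. A minimal sketch of this idea on simulated data (not the authors' code; the 60 Hz sampling rate and the simulated 200 ms tracking lag are illustrative assumptions):

```python
import numpy as np

def tracking_kernel(target_pos, mouse_pos, fs=60.0, max_lag_s=1.0):
    """Cross-correlate target and mouse frame-to-frame displacements.

    Returns (lags in seconds, kernel): kernel[k] is the correlation between
    the target displacement and the mouse displacement k samples later."""
    vt = np.diff(target_pos)                  # target displacement per frame
    vm = np.diff(mouse_pos)                   # mouse displacement per frame
    vt = (vt - vt.mean()) / vt.std()
    vm = (vm - vm.mean()) / vm.std()
    max_lag = int(max_lag_s * fs)
    lags = np.arange(max_lag + 1)
    kernel = np.array([np.mean(vt[:vt.size - k] * vm[k:]) for k in lags])
    return lags / fs, kernel

# Simulated observer: the cursor follows a random-walk target with a
# 12-frame (200 ms at 60 Hz) lag plus independent motor noise.
rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(6012))
target = walk[12:]                               # displayed target position
mouse = walk[:-12] + rng.standard_normal(6000)   # lagged, noisy copy
lags, kern = tracking_kernel(target, mouse)
latency = lags[np.argmax(kern)]                  # ~0.2 s, the simulated lag
```

The kernel's peak height plays the role of the group-difference measure reported below, and the lag of the peak gives the tracking latency.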
RESULTS
In experiment 1, target size had a significant effect on the kernel peak (F(1.649, 46.170) = 200.958, P = 4.420 × 10⁻²²). At the smallest target size, the peak in the amblyopic group was significantly lower than that in the normal group (0.089 ± 0.023 vs. 0.107 ± 0.020, t(28) = -2.390, P = 0.024) and correlated with the contrast sensitivity function (r = 0.739, P = 0.002) in the amblyopic eyes. In experiments 2 and 3, with equally visible stimuli, there were still differences in the kernel between the two groups (all Ps < 0.05).
CONCLUSIONS
When stimulus visibility was compensated, amblyopic participants still showed significantly poorer tracking performance.
Topics: Humans; Amblyopia; Male; Female; Adult; Young Adult; Visual Acuity; Psychophysics; Motion Perception; Contrast Sensitivity; Eye Movements
PubMed: 38700875
DOI: 10.1167/iovs.65.5.7