Sensors (Basel, Switzerland), Dec 2021
Regular physical exercise is essential for overall health, but it is equally important to reduce the risk of injury caused by incorrect exercise execution. Existing health and fitness applications often neglect accurate full-body motion recognition and focus on a single body part. Furthermore, they often detect only specific errors or provide feedback only after the execution is complete. This gap motivates the automated detection of full-body execution errors in real time to help users correct their motor skills. To address this challenge, we propose a method for movement assessment using a full-body haptic motion capture suit. We train probabilistic movement models on data from 10 inertial sensors to detect exercise execution errors. Additionally, we provide haptic feedback via transcutaneous electrical nerve stimulation as soon as an error occurs, to correct the movements. Results on a dataset collected from 15 subjects show that our approach can detect severe movement execution errors directly during the workout and provide haptic feedback at the respective body locations. These results suggest that a haptic full-body motion capture suit, such as the Teslasuit, is a promising platform for movement assessment and can give users appropriate haptic feedback so that they can improve their movements.
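The abstract does not specify the model class behind its "probabilistic movement models". As a minimal sketch of the general idea only (per-timestep Gaussian statistics over time-aligned reference repetitions with a z-score threshold, which is an assumption, not the authors' method), error flagging over 10 inertial sensors might look like:

```python
import numpy as np

def fit_movement_model(executions):
    """Fit per-timestep Gaussian statistics over time-aligned reference executions.

    executions: array of shape (n_reps, n_frames, n_sensors)
    Returns per-frame mean and standard deviation.
    """
    mean = executions.mean(axis=0)
    std = executions.std(axis=0) + 1e-6  # avoid division by zero variance
    return mean, std

def flag_errors(execution, mean, std, threshold=3.0):
    """Flag (frame, sensor) pairs deviating more than `threshold` sigmas."""
    z = np.abs(execution - mean) / std
    return z > threshold  # boolean mask per frame and sensor

# Toy data: 20 correct repetitions, 50 frames, 10 inertial sensors
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(20, 50, 10))
mean, std = fit_movement_model(reference)

trial = mean.copy()
trial[25, 3] += 10.0  # inject a severe deviation at frame 25, sensor 3
errors = flag_errors(trial, mean, std)
```

In the paper's setup, a per-sensor mask like this could then drive haptic feedback at the corresponding body location as soon as a frame is flagged.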
Topics: Exercise; Feedback; Humans; Motion; Motor Skills; Movement
PubMed: 34960481
DOI: 10.3390/s21248389 -
Vision Research, Jun 2021
Review
When we catch a moving object in mid-flight, our eyes and hands are directed toward the object. Yet, the functional role of eye movements in guiding interceptive hand movements is not yet well understood. This review synthesizes emergent views on the importance of eye movements during manual interception with an emphasis on laboratory studies published since 2015. We discuss the role of eye movements in forming visual predictions about a moving object, and for enhancing the accuracy of interceptive hand movements through feedforward (extraretinal) and feedback (retinal) signals. We conclude by proposing a framework that defines the role of human eye movements for manual interception accuracy as a function of visual certainty and object motion predictability.
Topics: Eye Movements; Hand; Humans; Motion Perception; Movement; Psychomotor Performance; Pursuit, Smooth; Retina; Saccades
PubMed: 33743442
DOI: 10.1016/j.visres.2021.02.007 -
Cortex; a Journal Devoted To the Study..., Apr 2017
We present a novel computational model that describes action perception as an active inferential process that combines motor prediction (the reuse of our own motor system to predict perceived movements) and hypothesis testing (the use of eye movements to disambiguate amongst hypotheses). The system uses a generative model of how (arm and hand) actions are performed to generate hypothesis-specific visual predictions, and directs saccades to the most informative places of the visual scene to test these predictions - and underlying hypotheses. We test the model using eye movement data from a human action observation study. In both the human study and our model, saccades are proactive whenever context affords accurate action prediction; but uncertainty induces a more reactive gaze strategy, via tracking the observed movements. Our model offers a novel perspective on action observation that highlights its active nature based on prediction dynamics and hypothesis testing.
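A toy illustration of the hypothesis-testing loop described above (not the authors' generative model; the two hypotheses, the candidate fixation locations, and the disagreement score are all invented for this sketch): maintain a posterior over action hypotheses, saccade to the location where the hypotheses' visual predictions disagree most, then update beliefs from what is observed there.

```python
import numpy as np

# Two hypotheses about the observed action, each predicting the chance of
# seeing a "grasp" cue at three candidate fixation locations.
predictions = np.array([
    [0.9, 0.5, 0.1],   # hypothesis A: "drink from cup"
    [0.1, 0.5, 0.9],   # hypothesis B: "move cup aside"
])
posterior = np.array([0.5, 0.5])  # uniform prior over hypotheses

def expected_disagreement(posterior, predictions):
    """Score each location by how strongly the hypotheses disagree there,
    weighted by the current posterior (a crude proxy for information gain)."""
    mean_pred = posterior @ predictions
    return posterior @ np.abs(predictions - mean_pred)

scores = expected_disagreement(posterior, predictions)
target = int(np.argmax(scores))  # saccade to the most informative location

# Observe a "grasp" cue at that location and update beliefs (Bayes' rule)
likelihood = predictions[:, target]
posterior = likelihood * posterior
posterior /= posterior.sum()
```

Under this sketch, locations where all hypotheses predict the same thing score zero and are never fixated, which mirrors the paper's point that saccades are directed to the most informative places in the scene.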
Topics: Eye Movements; Humans; Intention; Models, Theoretical; Motion Perception; Movement; Psychomotor Performance; Theory of Mind
PubMed: 28226255
DOI: 10.1016/j.cortex.2017.01.016 -
Sensors (Basel, Switzerland), May 2021
Review
Sensorless and sensor-based upper limb exoskeletons that enhance or support daily motor function are scarce for children. This review presents the different needs in pediatrics and the latest trends in developing an upper limb exoskeleton, and discusses future prospects for improving accessibility. First, the principal diagnoses in pediatrics and their respective challenges are presented. A total of 14 upper limb exoskeletons intended for pediatric use were identified in the literature. The exoskeletons were then classified as sensorless or sensor-based and categorized with respect to the application domain, the motorization solution, the targeted population(s), and the supported movement(s). The relative absence of upper limb exoskeletons in pediatrics is mainly due to the additional complexity required to adapt to children's growth and to answer their specific needs and usage. This review highlights that research should focus on sensor-based exoskeletons, which would benefit the majority of children by allowing easier adjustment to their needs. Sensor-based exoskeletons are often the best solution for children to improve their participation in activities of daily living and to limit cognitive, social, and motor impairments during their development.
Topics: Activities of Daily Living; Child; Exoskeleton Device; Humans; Movement; Pediatrics; Upper Extremity
PubMed: 34065366
DOI: 10.3390/s21103561 -
Proceedings of the National Academy of..., May 2022
Visually active animals coordinate vision and movement to achieve spectacular tasks. An essential prerequisite to guide agile locomotion is to keep gaze level and stable. Since the eyes, head and body can move independently to control gaze, how does the brain effectively coordinate these distinct motor outputs? Furthermore, since the eyes, head, and body have distinct mechanical constraints (e.g., inertia), how does the nervous system adapt its control to these constraints? To address these questions, we studied gaze control in flying fruit flies (Drosophila) using a paradigm which permitted direct measurement of head and body movements. By combining experiments with mathematical modeling, we show that body movements are sensitive to the speed of visual motion whereas head movements are sensitive to its acceleration. This complementary tuning of the head and body permitted flies to stabilize a broader range of visual motion frequencies. We discovered that flies implement proportional-derivative (PD) control, but unlike classical engineering control systems, relay the proportional and derivative signals in parallel to two distinct motor outputs. This scheme, although derived from flies, recapitulated classic primate vision responses thus suggesting convergent mechanisms across phyla. By applying scaling laws, we quantify that animals as diverse as flies, mice, and humans as well as bio-inspired robots can benefit energetically by having a high ratio between head, body, and eye inertias. Our results provide insights into the mechanical constraints that may have shaped the evolution of active vision and present testable neural control hypotheses for visually guided behavior across phyla.
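The control scheme reported above, a PD controller whose proportional and derivative terms are relayed in parallel to two distinct motor outputs, can be sketched for a scalar visual-motion signal (gains, signal shape, and function names are illustrative assumptions, not values from the paper):

```python
import numpy as np

def parallel_pd(error, dt, kp=1.0, kd=0.1):
    """Route the P and D terms of a PD controller to two distinct motor outputs.

    error: visual motion signal over time (retinal slip), shape (n,)
    Returns (body_command, head_command): the body follows the proportional
    term (sensitive to the speed of visual motion), the head follows the
    derivative term (sensitive to its acceleration).
    """
    body = kp * error                     # proportional pathway -> body
    head = kd * np.gradient(error, dt)    # derivative pathway -> head
    return body, head

t = np.linspace(0.0, 1.0, 101)
dt = t[1] - t[0]
slip = np.sin(2 * np.pi * t)              # oscillating visual motion
body_cmd, head_cmd = parallel_pd(slip, dt)
```

Because the derivative of a sinusoid leads it by a quarter cycle, the head command in this sketch leads the body command in phase, which is one way the complementary head/body tuning described in the abstract can widen the range of stabilized motion frequencies.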
Topics: Animals; Eye Movements; Feedback; Head; Head Movements; Motion
PubMed: 35503912
DOI: 10.1073/pnas.2121660119 -
The Journal of Physiology, Feb 2011
Topics: Animals; Head Movements; Humans; Motion Perception; Motor Activity; Movement; Orientation; Space Perception
PubMed: 21486848
DOI: 10.1113/jphysiol.2011.205286 -
Sensors (Basel, Switzerland), Jan 2023
Patients after stroke need to re-learn the functional movements required for independent living throughout the rehabilitation process. In this study, we used a wearable sensory system to monitor the movement of the upper limbs while performing activities of daily living. We implemented time-based and path-based segmentation of movement trajectories and muscle activity to quantify the activities of the unaffected and the affected upper limbs. While time-based segmentation splits the trajectory into quants of equal duration, path-based segmentation isolates completed movements. We analyzed the hand movement path and forearm muscle activity and introduced a bimanual movement parameter, which enables differentiation between unimanual and bimanual activities. The approach was validated in a study that included a healthy subject and seven patients after stroke with different levels of disability. Path-based segmentation provides a more detailed and comprehensive evaluation of upper limb activities, while time-based segmentation is more suitable for real-time assessment and for providing feedback to patients. The bimanual movement parameter effectively differentiates between different levels of upper limb involvement and is a clear indicator of the activity of the affected limb relative to the unaffected limb.
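The two segmentation schemes contrasted in the abstract can be sketched as follows (parameter names, the speed-based rest detector, and the thresholds are assumptions for illustration, not the authors' implementation):

```python
import numpy as np

def time_based_segments(trajectory, fps, quant_s=1.0):
    """Split a 3D trajectory (n_frames, 3) into quants of equal duration."""
    step = int(round(quant_s * fps))
    return [trajectory[i:i + step] for i in range(0, len(trajectory), step)]

def path_based_segments(trajectory, pause_speed=0.01):
    """Isolate completed movements by cutting wherever hand speed drops
    below `pause_speed`, i.e. at the rests between movements."""
    speed = np.linalg.norm(np.diff(trajectory, axis=0), axis=1)
    moving = speed > pause_speed
    segments, start = [], None
    for i, m in enumerate(moving):
        if m and start is None:
            start = i
        elif not m and start is not None:
            segments.append(trajectory[start:i + 1])
            start = None
    if start is not None:
        segments.append(trajectory[start:])
    return segments

# Toy hand path at 30 fps: move, rest, move again
rest = np.zeros((30, 3))
move = np.cumsum(np.full((30, 3), 0.02), axis=0)
traj = np.concatenate([move, move[-1] + rest, move[-1] + move])
time_segs = time_based_segments(traj, fps=30)   # equal-duration quants
path_segs = path_based_segments(traj)           # completed movements
```

On this toy path, time-based segmentation yields three one-second quants regardless of what the hand is doing, while path-based segmentation recovers exactly the two completed movements, which matches the trade-off the abstract describes.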
Topics: Humans; Activities of Daily Living; Stroke Rehabilitation; Upper Extremity; Movement; Stroke
PubMed: 36772329
DOI: 10.3390/s23031289 -
Journal of Neurophysiology, Oct 2018
When a whole body reaching task is performed while standing or adopting challenging postures, it is unclear whether changes in attentional demands or the sensorimotor integration necessary for balance control influence the interaction between visuomotor and postural components of the movement. Is gaze control prioritized by the central nervous system (CNS) to produce coordinated eye movements with the head and whole body regardless of movement context? Considering the coupled nature of visuomotor and whole body postural control during action, this study aimed to understand how changing equilibrium constraints (in the form of different postural configurations) influenced the initiation of eye, head, and arm movements. We quantified the eye-head metrics and segmental kinematics as participants executed either isolated gaze shifts or whole body reaching movements to visual targets. In total, four postural configurations were compared: seated, natural stance, feet together (narrow stance), and balancing on a wooden beam. Contrary to our initial predictions, the lack of distinct changes in eye-head metrics; timing of eye, head, and arm movement initiation; and gaze accuracy, in spite of kinematic differences, suggests that the CNS integrates postural constraints into the control necessary to initiate gaze shifts. This may be achieved by adopting a whole body gaze strategy that allows for the successful completion of both gaze and reaching goals. NEW & NOTEWORTHY Differences in sequence of movement among the eye, head, and arm have been shown across various paradigms during reaching. Here we show that distinct changes in eye characteristics and movement sequence, coupled with stereotyped profiles of head and gaze movement, are not observed when adopting postures requiring changes to balance constraints. This suggests that a whole body gaze strategy is prioritized by the central nervous system, with postural control subservient to gaze stability requirements.
Topics: Adult; Arm; Eye Movements; Female; Head Movements; Humans; Male; Posture; Psychomotor Performance
PubMed: 30020836
DOI: 10.1152/jn.00200.2018 -
Philosophical Transactions of the Royal..., Aug 1997
Review
This paper presents several approaches to the machine perception of motion and discusses the role and levels of knowledge in each. In particular, different techniques of motion understanding are described as focusing on one of three levels: movement, activity, or action. Movements are the most atomic primitives, requiring no contextual or sequence knowledge to be recognized; movement is often addressed using either view-invariant or view-specific geometric techniques. Activity refers to sequences of movements or states, where the only real knowledge required is the statistics of the sequence; much of the recent work in gesture understanding falls within this category of motion perception. Finally, actions are larger-scale events, which typically include interaction with the environment and causal relationships; action understanding straddles the grey division between perception and cognition, computer vision and artificial intelligence. These levels are illustrated with examples drawn mostly from the group's work in understanding motion in video imagery. It is argued that the utility of such a division is that it makes explicit the representational competencies and manipulations necessary for perception.
Topics: Motion Perception; Motor Activity; Movement; Neural Networks, Computer
PubMed: 9304692
DOI: 10.1098/rstb.1997.0108 -
Scientific Reports, Jan 2020
General movements (GMs), a type of spontaneous movement, have been used for the early diagnosis of infant disorders. In clinical practice, GMs are visually assessed by qualified licensees; however, this presents a difficulty in terms of quantitative evaluation. Various measurement systems for the quantitative evaluation of GMs track target markers attached to infants; however, these markers may disturb infants' spontaneous movements. This paper proposes a markerless movement measurement and evaluation system for GMs in infants. The proposed system calculates 25 indices related to GMs, including the magnitude and rhythm of movements, by video analysis, that is, by calculating background subtractions and frame differences. Movement classification is performed based on the clinical definition of GMs by using an artificial neural network with a stochastic structure. This supports the assessment of GMs and early diagnoses of disabilities in infants. In a series of experiments, the proposed system is applied to movement evaluation and classification in full-term infants and low-birth-weight infants. The experimental results confirm that the average agreement between four GMs classified by the proposed system and those identified by a licensee reaches up to 83.1 ± 1.84%. In addition, the classification accuracy of normal and abnormal movements reaches 90.2 ± 0.94%.
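The two video measures named in the abstract, background subtraction and frame differencing, reduce to simple array operations on grayscale frames. A minimal sketch (the threshold, array shapes, and function name are illustrative assumptions; the paper's 25 indices are richer than these two outputs):

```python
import numpy as np

def movement_magnitude(frames, background, diff_thresh=10):
    """Per-frame movement measures from a grayscale video.

    frames: (n_frames, h, w) uint8 array; background: (h, w) uint8 array.
    Returns (fg_area, diff_energy): the number of pixels differing from the
    background per frame, and the mean absolute change between consecutive
    frames.
    """
    frames = frames.astype(np.int16)          # widen before subtraction
    background = background.astype(np.int16)
    fg_area = (np.abs(frames - background) > diff_thresh).sum(axis=(1, 2))
    diff_energy = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))
    return fg_area, diff_energy

# Toy video: static background, a bright 2x2 "limb" visible only in frame 1
bg = np.zeros((8, 8), dtype=np.uint8)
video = np.zeros((3, 8, 8), dtype=np.uint8)
video[1, 2:4, 2:4] = 255
fg_area, diff_energy = movement_magnitude(video, bg)
```

Indices like these, aggregated over time, are the kind of magnitude and rhythm features that could then feed the classifier stage described in the abstract.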
Topics: Biomarkers; Biomedical Engineering; Female; Humans; Infant; Infant, Low Birth Weight; Male; Models, Theoretical; Motor Activity; Movement; Movement Disorders; Neurodevelopmental Disorders
PubMed: 31996716
DOI: 10.1038/s41598-020-57580-z