- Gaze Tracking and Assistive Technology
- Visual Attention and Saliency Detection
- EEG and Brain-Computer Interfaces
- Software Engineering Research
- Usability and User Interface Design
- Robot Manipulation and Learning
- Data Visualization and Analytics
- Tactile and Sensory Interactions
- Intelligent Tutoring Systems and Adaptive Learning
- Muscle Activation and Electromyography Studies
- Visual Perception and Processing Mechanisms
- Advanced Image and Video Retrieval Techniques
- Robotic Mechanisms and Dynamics
- Business Strategy and Innovation
- Cognitive Functions and Memory
- Digital Accessibility for Disabilities
- Global Trade and Competitiveness
- Glaucoma and Retinal Disorders
- Advanced Optical Imaging Technologies
- Architecture and Computational Design
- Video Analysis and Summarization
- Robotics and Sensor-Based Localization
- Mechanical Engineering and Vibrations Research
- Nuclear Issues and Defense
- Online Learning and Analytics
Imperial College London
2018-2021
Brain (Germany)
2019
Peter the Great St. Petersburg Polytechnic University
2015-2016
University of Eastern Finland
2014-2016
Finland University
2015
The methodology of eye tracking has been gradually making its way into various fields of science, assisted by the diminishing cost of the associated technology. In an international collaboration to open up the prospect of eye-movement research for programming educators, we present a case study on program comprehension, together with preliminary analyses and some useful tools.
Assistive robotic systems endeavour to support those with movement disabilities, enabling them to move again and regain functionality. The main issue with these systems is the complexity of their low-level control, and how to translate it into simpler, higher-level commands that are easy and intuitive for a human user to interact with. We have created a multi-modal system, consisting of different sensing, decision-making, and actuating modalities, leading to intuitive, human-in-the-loop assistive robotics. The system takes its cue from...
Existing wheelchair control interfaces, such as sip & puff or screen-based gaze-controlled cursors, are challenging for the severely disabled to navigate safely and independently, as users continuously need to interact with an interface during navigation. This puts a significant cognitive load on users and prevents them from interacting with the environment in other forms. We have combined eye-tracking/gaze-contingent intention decoding with computer-vision context-aware algorithms and autonomous navigation drawn from self-driving...
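A minimal sketch of the shared-control idea this abstract describes: a gaze-decoded user command is blended with an autonomous, context-aware command so the user keeps authority in free space while the autonomy takes over near obstacles. The `Command` type, the risk signal, and the blending rule are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical shared-control blend between a gaze-decoded command and an
# autonomous avoidance command; names and the blending rule are assumptions.
from dataclasses import dataclass

@dataclass
class Command:
    linear: float   # forward velocity (m/s)
    angular: float  # turn rate (rad/s)

def blend(user_cmd: Command, autonomy_cmd: Command, obstacle_risk: float) -> Command:
    """Shift authority from user to autonomy as obstacle risk grows.

    obstacle_risk in [0, 1]: 0 = free space, 1 = imminent collision.
    """
    a = min(max(obstacle_risk, 0.0), 1.0)
    return Command(
        linear=(1 - a) * user_cmd.linear + a * autonomy_cmd.linear,
        angular=(1 - a) * user_cmd.angular + a * autonomy_cmd.angular,
    )

if __name__ == "__main__":
    gaze_cmd = Command(linear=0.8, angular=0.0)   # user looks straight ahead
    avoidance = Command(linear=0.2, angular=0.6)  # planner steers around a chair
    print(blend(gaze_cmd, avoidance, obstacle_risk=0.7))
```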
Visual context plays a crucial role in understanding human visual attention in natural, unconstrained tasks: the objects we look at during everyday activities provide an indicator of our ongoing attention. The collection, interpretation, and study of visual behaviour in unconstrained environments are therefore necessary, yet present many challenges, traditionally requiring painstaking hand-coding. Here we demonstrate a proof-of-concept system that enables real-time annotation of an egocentric video stream from head-mounted eye-tracking glasses. We...
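The core annotation step such a system needs can be sketched in a few lines: for each frame, take the detector's labelled bounding boxes and record which object, if any, contains the current gaze sample. The detector is stubbed here and all names are illustrative, not the authors' code.

```python
# Gaze-driven frame annotation: which detected object does the gaze fall on?
from typing import List, Optional, Tuple

Box = Tuple[float, float, float, float]  # x_min, y_min, x_max, y_max

def fixated_object(gaze_xy: Tuple[float, float],
                   detections: List[Tuple[str, Box]]) -> Optional[str]:
    """Return the label of the detected object the gaze point falls inside."""
    gx, gy = gaze_xy
    for label, (x0, y0, x1, y1) in detections:
        if x0 <= gx <= x1 and y0 <= gy <= y1:
            return label
    return None  # gaze on background / unannotated region

# One simulated frame: detections from a head-mounted camera plus a gaze sample.
frame_detections = [("mug", (120, 200, 180, 260)), ("keyboard", (300, 400, 640, 480))]
print(fixated_object((150, 230), frame_detections))  # -> "mug"
```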
Understanding software engineers' behaviour plays a vital role in the development industry and also provides helpful guidelines for teaching and learning. In this article, we conduct a study of extrafoveal vision and its information processing, a new perspective on source code comprehension. Despite its major importance, it has been largely ignored by previous studies; the available research has focused entirely on foveal processing at the gaze fixation position. In this work, we share the results of a gaze-contingent comprehension...
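A standard way to isolate extrafoveal processing is the gaze-contingent moving-window paradigm: everything beyond a chosen radius around the current fixation is masked so only foveal content stays visible. The sketch below shows that masking step; the radius and mask value are illustrative assumptions, not the study's parameters.

```python
# Gaze-contingent moving window: mask the stimulus outside a foveal radius.
import numpy as np

def moving_window(frame: np.ndarray, gaze_xy, radius_px: int = 60,
                  mask_value: int = 128) -> np.ndarray:
    """Return a copy of `frame` with everything outside the window masked."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    outside = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2 > radius_px ** 2
    out = frame.copy()
    out[outside] = mask_value
    return out

stimulus = np.random.randint(0, 255, (480, 640), dtype=np.uint8)  # stand-in image
masked = moving_window(stimulus, gaze_xy=(320, 240))
print(masked.shape, masked[0, 0])  # corner pixel is masked to 128
```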
Humans process high volumes of visual information to perform everyday tasks. In a reaching task, the brain estimates the distance and position of the object of interest in order to reach for it. With a grasp intention in mind, human eye-movements produce specific, relevant patterns. Our Gaze-Contingent Intention Decoding Engine uses eye-movement data and the gaze-point to indicate the hidden intention. We detect the object of interest using deep convolutional neural networks and estimate its position in physical space using 3D gaze vectors. Then we trigger the possible...
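One ingredient named above, estimating the physical fixation point from 3D gaze vectors, can be sketched as intersecting the two eye rays; since real rays are skew, the usual choice is the midpoint of closest approach. The CNN detection stage is omitted and all numbers are illustrative.

```python
# Binocular 3D gaze: fixation point as midpoint of closest approach of two rays.
import numpy as np

def fixation_point(p_left, d_left, p_right, d_right):
    """Midpoint of closest approach between two gaze rays p + t*d."""
    p1, p2 = np.asarray(p_left, float), np.asarray(p_right, float)
    d1, d2 = np.asarray(d_left, float), np.asarray(d_right, float)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b          # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return (p1 + t1 * d1 + p2 + t2 * d2) / 2

# Eyes 6 cm apart, both converging on a point ~40 cm ahead.
left_eye, right_eye = np.array([-0.03, 0, 0]), np.array([0.03, 0, 0])
target = np.array([0.0, 0.05, 0.4])
print(fixation_point(left_eye, target - left_eye, right_eye, target - right_eye))
```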
Eye-tracking technology and gaze-contingent control in human–computer interaction have become an objective reality. This article reports on a series of eye-tracking experiments in which we concentrated on one aspect of gaze-contingent interaction: its effectiveness compared with mouse-based control in a computer strategy game. We propose a measure for evaluating this effectiveness based on "the time of recognition" of a game unit. In this article, we use it to compare gaze- and mouse-contingent systems and present an analysis of the differences as a function...
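The proposed measure can be operationalised as the delay between a game unit appearing on screen and the player first acquiring it with the pointing modality (gaze or mouse). The event-log format below is an assumption for illustration, not the paper's logging scheme.

```python
# "Time of recognition": unit appearance -> first acquisition by gaze/cursor.
events = [
    # (timestamp_s, event_type, unit_id)
    (10.00, "unit_appeared", "u1"),
    (10.42, "unit_acquired", "u1"),   # gaze/cursor first lands on u1
    (12.00, "unit_appeared", "u2"),
    (12.95, "unit_acquired", "u2"),
]

def times_of_recognition(events):
    appeared, deltas = {}, {}
    for t, kind, uid in events:
        if kind == "unit_appeared":
            appeared[uid] = t
        elif kind == "unit_acquired" and uid in appeared and uid not in deltas:
            deltas[uid] = t - appeared[uid]
    return deltas

print(times_of_recognition(events))  # e.g. {'u1': ~0.42, 'u2': ~0.95}
```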
Natural eye movements during navigation have long been considered to reflect planning processes and to link to the user's future action intention. We investigate here whether natural eye movements during joystick-based navigation of wheelchairs follow identifiable patterns that are predictive of joystick actions. To place gaze in context with driving intentions, we combine our eye tracking with a 3D depth camera system, which allows us to identify the floor as a gaze target and distinguish navigation-related eye movements from other, non-navigation-related movements. We find...
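With a depth camera, each gaze sample can be back-projected to a 3D point and tested against the estimated floor plane to make exactly this floor/non-floor distinction. In the sketch below the plane is assumed to be already fitted (e.g. by RANSAC); the tolerance is an illustrative assumption.

```python
# Classify a 3D gaze point as "floor" by its distance to the floor plane.
import numpy as np

def is_floor_gaze(point_3d, plane_normal, plane_point, tol_m=0.05) -> bool:
    """True if the 3D gaze point lies within `tol_m` of the floor plane."""
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    dist = abs(n @ (np.asarray(point_3d, float) - np.asarray(plane_point, float)))
    return dist < tol_m

floor_normal, floor_origin = [0, 0, 1], [0, 0, 0]  # z-up, floor at z = 0
print(is_floor_gaze([1.2, 0.4, 0.02], floor_normal, floor_origin))  # True
print(is_floor_gaze([0.8, 0.1, 0.90], floor_normal, floor_origin))  # False: obstacle
```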
We have pioneered the Where-You-Look-Is-Where-You-Go approach to controlling mobility platforms by decoding how the user looks at the environment to understand where they want to navigate their device. However, many natural eye-movements are not relevant for action intention decoding; only some are, which poses the challenge known as the Midas Touch Problem. Here, we present a new solution, consisting of 1. deep computer vision to understand what object a user is looking at in their field of view, with 2. an analysis of the object's bounding box...
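As one simple stand-in for the bounding-box analysis (the paper's actual analysis may differ), a decoder can require gaze to stay inside a detected object's box for a minimum duration before it fires, so incidental glances do not trigger navigation. Thresholds, sample rate, and the API below are assumptions for illustration.

```python
# Illustrative Midas-Touch filter: fire only on sustained gaze inside a box.
from collections import deque

class IntentionDecoder:
    def __init__(self, sample_hz=60, min_dwell_s=0.5):
        self.window = deque(maxlen=int(sample_hz * min_dwell_s))

    def update(self, gaze_xy, box):
        """Feed one gaze sample; return True once intention is decoded."""
        x0, y0, x1, y1 = box
        inside = x0 <= gaze_xy[0] <= x1 and y0 <= gaze_xy[1] <= y1
        self.window.append(inside)
        return len(self.window) == self.window.maxlen and all(self.window)

decoder = IntentionDecoder()
door_box = (400, 100, 520, 360)
for _ in range(40):                       # ~0.66 s of gaze on the door
    fired = decoder.update((450, 220), door_box)
print(fired)  # True: sustained gaze on a navigable object
```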
One of the important characteristics of a window- and gaze-contingent tool is the speed of its reaction to pointer or eye movements: the update delay, or so-called latency, of the contingent response. In our video we demonstrate a handy possibility for measuring it in mouse-based software. We present a low-cost Latency Measurement System which can be useful for different studies that include eye-movement tracking tools. In conclusion, we discuss the applicability of this system as a measurement tool.
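The measurement idea reduces to timestamping the moment a pointer event is injected and the moment the contingent window actually updates, then reporting the difference. The "application" below is simulated with a sleep; a real low-cost setup might observe the screen with a photodiode or high-speed camera. All names are illustrative.

```python
# Latency harness: time from injected pointer event to observed redraw.
import time

def measure_latency(inject_event, wait_for_update, trials=20):
    samples = []
    for _ in range(trials):
        t0 = time.perf_counter()
        inject_event()               # e.g. synthetic mouse move
        wait_for_update()            # blocks until the redraw is observed
        samples.append((time.perf_counter() - t0) * 1000)
    samples.sort()
    return {"median_ms": samples[len(samples) // 2],
            "worst_ms": samples[-1]}

# Stand-in application with ~15 ms of processing + refresh delay.
print(measure_latency(lambda: None, lambda: time.sleep(0.015)))
```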
As human life integrates further with machines, there is a greater need for intuitive human-machine interfaces. Gaze has long been studied as a window into the mind, with gaze-control interfaces serving to manipulate a variety of systems, from computers to drones. Present approaches do not rely on natural cues, however, and instead use concepts such as dwell time or cursors to capture a command whilst avoiding the Midas Touch problem. We present a deep learning approach to object manipulation intention decoding based solely...
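A generic stand-in for such a decoder (not the architecture from the paper) is a small recurrent classifier over windows of raw gaze samples; the feature set of x, y, and pupil size is an assumption for illustration.

```python
# Illustrative sequence classifier for gaze-based intention decoding.
import torch
import torch.nn as nn

class GazeIntentNet(nn.Module):
    def __init__(self, n_features=3, hidden=32, n_classes=2):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)  # intent vs. no intent

    def forward(self, x):                # x: (batch, time, features)
        _, (h, _) = self.rnn(x)
        return self.head(h[-1])          # logits from the last hidden state

model = GazeIntentNet()
window = torch.randn(1, 120, 3)          # 2 s of gaze at 60 Hz (fake data)
print(model(window).softmax(-1))         # P(no-intent), P(intent)
```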
This extended abstract presents a gaze-interactive comic, where the page design alters based on the coordinates of the user's gaze. We analyzed the impact of gaze-based interactivity on the impression of the comic and on plot comprehension. We found that subjects can use the interaction for viewing without prior training. Those who used the interactive technique tend to perceive the story in a slightly more optimistic way compared to those who watched the non-interactive version. However, subjects from both groups described the environment and the main character's...
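The mechanic can be sketched as a lookup from gaze coordinates to the panel being read, with the page rendering reacting to that panel. The layout and the "reveal the fixated panel" rule below are illustrative assumptions, not the comic's actual design.

```python
# Gaze-reactive page: fully render the fixated panel, dim the rest.
PANELS = {
    "panel_1": (0, 0, 400, 300),
    "panel_2": (400, 0, 800, 300),
    "panel_3": (0, 300, 800, 600),
}

def panel_under_gaze(gaze_xy):
    gx, gy = gaze_xy
    for name, (x0, y0, x1, y1) in PANELS.items():
        if x0 <= gx < x1 and y0 <= gy < y1:
            return name
    return None

def render_page(gaze_xy):
    active = panel_under_gaze(gaze_xy)
    return {name: ("full" if name == active else "dimmed") for name in PANELS}

print(render_page((550, 120)))  # panel_2 revealed, others dimmed
```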
Incorporating the physical environment is essential for a complete understanding of human behavior in unconstrained everyday tasks. This is especially important in egocentric tasks, where obtaining 3-dimensional information is both limiting and challenging, with current 2D video analysis methods proving insufficient. Here we demonstrate a proof-of-concept system which provides real-time 3D mapping and semantic labeling of the local environment from an RGB-D video stream with gaze-point estimation from head-mounted eye-tracking...
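The core geometric step in such a system is back-projecting the 2D gaze point into 3D using the aligned depth image and pinhole camera intrinsics. The intrinsic values below are illustrative; a real system would further transform the point into the world frame using the camera pose from the 3D mapper.

```python
# Back-project a 2D gaze pixel + depth into a 3D camera-frame point.
import numpy as np

def gaze_to_3d(gaze_px, depth_m, fx, fy, cx, cy):
    """Convert a pixel gaze coordinate and depth value to a 3D point."""
    u, v = gaze_px
    z = depth_m[v, u]                  # depth image aligned to the RGB frame
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

depth = np.full((480, 640), 1.5)       # fake depth: everything at 1.5 m
K = dict(fx=525.0, fy=525.0, cx=319.5, cy=239.5)  # typical RGB-D intrinsics
print(gaze_to_3d((400, 260), depth, **K))          # 3D point of fixation
```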
NordiCHI'14 conference attendees got hands-on experience with a number of great new interactive systems. Among the accepted poster, video, and demo submissions, we selected the following four prototypes to illustrate the high-quality design research displayed during the conference, which was held in Helsinki, Finland, October 26--30, 2014.
Mikael Wiberg, Lily Diaz-Kommonen, Anna Kolehmainen; Poster, Video, and Demo Chairs