- Gaze Tracking and Assistive Technology
- Glaucoma and Retinal Disorders
- Retinal Imaging and Analysis
- Visual Attention and Saliency Detection
- EEG and Brain-Computer Interfaces
- Virtual Reality Applications and Impacts
- Human-Automation Interaction and Safety
- Radiology Practices and Education
- Ocular Surface and Contact Lens
- Tactile and Sensory Interactions
- Online Learning and Analytics
- Domain Adaptation and Few-Shot Learning
- Topic Modeling
- Explainable Artificial Intelligence (XAI)
- Advanced Neural Network Applications
- Robotics and Sensor-Based Localization
- Intelligent Tutoring Systems and Adaptive Learning
- Visual Perception and Processing Mechanisms
- Sleep and Work-Related Fatigue
- Visual and Cognitive Learning Processes
- Human Pose and Action Recognition
- Machine Learning and Data Classification
- Dental Radiography and Imaging
- Natural Language Processing Techniques
- Clinical Reasoning and Diagnostic Skills
- Technical University of Munich (2022-2025)
- Centro Regional de Derechos Humanos y Justicia de Género, Corporación Humanas (2024)
- The Human Diagnosis Project (2024)
- Munich School of Philosophy (2024)
- University of Tübingen (2014-2023)
- TH Bingen University of Applied Sciences (2021-2023)
- Institut für Urheber- und Medienrecht (2023)
- Lund University (2023)
- IMT School for Advanced Studies Lucca (2023)
- Human Computer Interaction (Switzerland) (2020-2021)
Teachers must be able to monitor students’ behavior and identify valid cues in order to draw conclusions about students’ actual engagement in learning activities. Teacher training can support (inexperienced) teachers in developing these skills by using videotaped teaching to highlight which indicators should be considered. However, this supposes that (a) valid indicators of engagement are known and (b) the work with videos is designed as effectively as possible to reduce the effort involved in manual coding procedures and in examining videos. One avenue for...
Large language models represent a significant advancement in the field of AI. The underlying technology is key to further innovations and, despite critical views and even bans within communities and regions, large language models are here to stay. This position paper presents the potential benefits and challenges of educational applications of large language models, from the student and teacher perspectives. We briefly discuss the current state of large language models and their applications. We then highlight how these models can be used to create educational content, improve student engagement and interaction,...
Fast and robust pupil detection is an essential prerequisite for video-based eye-tracking in real-world settings. Several algorithms for image-based pupil detection have been proposed in the past; their applicability, however, is mostly limited to laboratory conditions. In real-world scenarios, automated pupil detection has to face various challenges, such as illumination changes, reflections (on glasses), make-up, non-centered eye recording, and physiological eye characteristics. We propose ElSe, a novel algorithm based on ellipse evaluation of a filtered...
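As a rough illustration of the general edge-filtering and ellipse-fitting idea named in this abstract (not the published ElSe implementation), a minimal OpenCV sketch might look as follows; the thresholds and the candidate score are assumptions.

```python
# Minimal sketch of edge-filtering + ellipse-fitting pupil detection, illustrating
# the general idea behind algorithms such as ElSe. Thresholds and the scoring
# heuristic are illustrative assumptions, not the published ElSe procedure.
import cv2
import numpy as np

def detect_pupil(gray: np.ndarray):
    """Return the best ellipse candidate ((cx, cy), axes, angle) or None.
    Expects an 8-bit grayscale eye image."""
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)        # suppress sensor noise
    edges = cv2.Canny(blurred, 30, 60)                  # edge image
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

    best, best_score = None, -np.inf
    for contour in contours:
        if len(contour) < 5:                            # fitEllipse needs >= 5 points
            continue
        ellipse = cv2.fitEllipse(contour)
        center, axes, angle = ellipse
        if max(axes) == 0:
            continue
        roundness = min(axes) / max(axes)               # prefer circular candidates
        mask = np.zeros_like(gray)
        cv2.ellipse(mask, ellipse, 255, -1)
        darkness = 255 - cv2.mean(gray, mask=mask)[0]   # prefer dark interiors
        score = roundness * darkness
        if score > best_score:
            best, best_score = ellipse, score
    return best
```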
Post-chiasmal visual pathway lesions and glaucomatous optic neuropathy cause binocular visual field defects (VFDs) that may critically interfere with quality of life and driving licensure. The aims of this study were (i) to assess the on-road driving performance of patients suffering from binocular visual field loss using a dual-brake vehicle, and (ii) to investigate related compensatory mechanisms. A driving instructor, blinded to the participants' diagnosis, rated the driving performance (passed/failed) of ten patients with homonymous visual field defects (HP), including four with right-sided (HR) and six with left-sided (HL) defects, glaucoma patients (GP),...
Mobile head-worn eye trackers allow researchers to record eye-movement data as participants freely move around and interact with their surroundings. However, participant behavior may cause the eye tracker to slip on the participant’s head, potentially strongly affecting data quality. To investigate how this eye-tracker slippage affects data quality, we designed experiments in which participants mimic behaviors that can cause a mobile eye tracker to move. Specifically, we investigated data quality when participants speak, make facial expressions, and move the eye tracker....
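Data quality in this context is commonly summarized by accuracy (offset from a known target) and precision (for example, the root mean square of sample-to-sample distances). A minimal sketch of those two standard measures, written here for illustration and not taken from the paper:

```python
# Standard eye-tracking data quality measures: accuracy and RMS sample-to-sample
# precision. The function names and the assumption that gaze is given as angles
# in degrees are illustrative choices.
import numpy as np

def accuracy_deg(gaze: np.ndarray, target: np.ndarray) -> float:
    """Mean angular offset (deg) between gaze samples and a known target.
    `gaze` is an (N, 2) array of gaze angles; `target` is a (2,) array."""
    offsets = np.linalg.norm(gaze - target, axis=1)
    return float(np.mean(offsets))

def rms_s2s_precision_deg(gaze: np.ndarray) -> float:
    """Root mean square of sample-to-sample distances (deg)."""
    diffs = np.diff(gaze, axis=0)
    return float(np.sqrt(np.mean(np.sum(diffs ** 2, axis=1))))
```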
Real-time, accurate, and robust pupil detection is an essential prerequisite for pervasive video-based eye-tracking. However, automated pupil detection in real-world scenarios has proven to be an intricate challenge due to fast illumination changes, occlusion, non-centered and off-axis eye recording, and physiological eye characteristics. In this paper, we propose and evaluate a method based on a novel dual convolutional neural network pipeline. In its first stage, the pipeline performs coarse pupil position identification using subregions...
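A minimal sketch of the coarse-to-fine, two-network idea described above, assuming illustrative layer sizes, an 8x8 coarse grid, and a fixed crop size (none of which are taken from the paper):

```python
# Illustrative coarse-to-fine pupil localization with two small CNNs.
# Layer sizes, the 8x8 grid, and the 32x32 crop are assumptions for the sketch.
import torch
import torch.nn as nn

class CoarseNet(nn.Module):
    """Scores an 8x8 grid of subregions of a downscaled eye image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8),
        )
        self.head = nn.Conv2d(16, 1, 1)          # one score per grid cell

    def forward(self, x):                         # x: (B, 1, H, W)
        return self.head(self.features(x)).flatten(1)   # (B, 64) cell scores

class FineNet(nn.Module):
    """Regresses the pupil center inside a cropped subregion."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, 2)      # (x, y) offset within the crop

    def forward(self, crop):
        return self.head(self.features(crop).flatten(1))

coarse, fine = CoarseNet(), FineNet()
image = torch.rand(1, 1, 96, 128)                 # dummy grayscale eye image
cell = coarse(image).argmax(dim=1)                # most likely subregion index
# A full pipeline would crop around `cell`; here the second stage runs on a dummy crop.
offset = fine(torch.rand(1, 1, 32, 32))
```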
Recent studies analyzing driver behavior report that various factors may influence a driver's take-over readiness when resuming control after an automated driving section. However, there has been little effort made to transfer and integrate these findings into a system which classifies the driver's take-over readiness and derives the expected take-over quality. This study introduces a new advanced driver assistance system to classify take-over readiness in conditionally automated driving scenarios. The proposed system works preemptively, i.e., the driver is warned in advance if a low take-over quality is to be expected....
This paper presents a novel approach to automated recognition of the driver's activity, which is a crucial factor for determining take-over readiness in conditionally autonomous driving scenarios. To this end, an architecture based on head- and eye-tracking data is introduced in this study and several features are analyzed. The proposed approach is evaluated on data recorded during a driving simulator study with 73 subjects performing different secondary tasks while driving in an autonomous setting. It shows promising results towards in-vehicle driver-activity...
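The general recipe of classifying driver activity from aggregated head- and eye-tracking features can be illustrated with a generic sketch; the feature names, labels, and random data below are assumptions, not the study's architecture or feature set.

```python
# Sketch of a feature-based driver-activity classifier trained on per-window
# head/eye-tracking features. Data and labels here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Example per-window features (assumed): mean fixation duration, saccade rate,
# head yaw variance, blink rate, gaze dispersion.
X = rng.normal(size=(500, 5))
# Example secondary-task labels (assumed): 0 = monitoring, 1 = phone, 2 = reading.
y = rng.integers(0, 3, size=500)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```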
We present TEyeD, the world's largest unified public data set of eye images taken with head-mounted devices. TEyeD was acquired with seven different eye trackers. Among them, two trackers were integrated into virtual reality (VR) or augmented reality (AR) devices. The images in TEyeD were obtained from various tasks, including car rides, simulator rides, outdoor sports activities, and daily indoor activities. The data set includes 2D and 3D landmarks, semantic segmentation, 3D eyeball annotation, and the gaze vector and eye movement types for all images. Landmarks and segmentation...
Student engagement is a key component of learning and teaching, resulting in a plethora of automated methods to measure it. Whereas most of the literature explores student engagement analysis using computer-based learning, often in the lab, we focus on classroom instruction in authentic learning environments. We collected audiovisual recordings of secondary school classes over a one and a half month period, acquired continuous engagement labeling per student (N=15) in repeated sessions, and explored computer vision methods to classify engagement from facial videos. We learned deep embeddings for...
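A generic sketch of the frame-embedding-plus-classifier pipeline implied here, using a pretrained ResNet as a stand-in backbone; the backbone choice, the dummy data, and the labels are assumptions, not the embeddings or models trained in the study.

```python
# Generic pipeline: frame-level face embeddings from a pretrained CNN feeding a
# simple engagement classifier. All data below are synthetic placeholders.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.linear_model import LogisticRegression

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()                 # 512-d embeddings instead of class scores
backbone.eval()

with torch.no_grad():
    frames = torch.rand(32, 3, 224, 224)    # dummy cropped face frames
    embeddings = backbone(frames).numpy()   # (32, 512)

labels = np.random.randint(0, 2, size=embeddings.shape[0])   # dummy engaged / not engaged
clf = LogisticRegression(max_iter=1000).fit(embeddings, labels)
```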
In this paper, we use fully convolutional neural networks for the semantic segmentation of eye tracking data. We also use these networks for reconstruction and, in conjunction with a variational auto-encoder, to generate eye movement data. The first improvement of our approach is that no input window is necessary; therefore, data of any size can be processed directly. The second is that the used and generated data are raw eye tracking data (position X, Y and time) without preprocessing. This is achieved by pre-initializing the filters of the first layer and building the input tensor along the z axis. We evaluated...
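A minimal sketch of a fully convolutional network over raw gaze samples, with (x, y, t) as input channels so sequences of any length can be labeled per sample; the layer sizes and the three output classes are assumptions rather than the paper's architecture.

```python
# Fully convolutional 1D network over raw gaze samples (x, y, time as channels).
# Because there are no dense layers over the time axis, any sequence length works.
import torch
import torch.nn as nn

class GazeFCN(nn.Module):
    def __init__(self, n_classes: int = 3):   # e.g., fixation / saccade / pursuit (assumed)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, n_classes, kernel_size=1),    # per-sample class scores
        )

    def forward(self, x):        # x: (batch, 3, sequence_length), any length
        return self.net(x)

model = GazeFCN()
sequence = torch.rand(1, 3, 1000)       # 1000 raw samples: x, y, time channels
per_sample_logits = model(sequence)     # (1, 3, 1000)
```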
Identifying logical errors in complex, incomplete or even contradictory and overall heterogeneous data like students' experimentation protocols is challenging. Recognizing the limitations of current evaluation methods, we investigate the potential of Large Language Models (LLMs) for automatically identifying student errors and streamlining teacher assessments. Our aim is to provide a foundation for productive, personalized feedback. Using a dataset of 65 student protocols, an Artificial Intelligence (AI) system based on GPT-3.5...
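A minimal sketch of how an LLM can be prompted to flag logical errors in a protocol via the OpenAI chat API; the prompt wording, model choice, and overall setup are assumptions, not the AI system evaluated in the paper.

```python
# Prompting an LLM to flag logical errors in a student experimentation protocol.
# The prompt and rubric are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

protocol_text = "..."  # a student's experimentation protocol

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You check science experimentation protocols for logical "
                    "errors and inconsistencies."},
        {"role": "user",
         "content": "List any logical errors in this protocol, one per line:\n"
                    + protocol_text},
    ],
)
print(response.choices[0].message.content)
```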
The integration of Artificial Intelligence (AI), particularly Large Language Model (LLM)-based systems, in education has shown promise in enhancing teaching and learning experiences. However, the advent of Multimodal Large Language Models (MLLMs) like GPT-4 with vision (GPT-4V), capable of processing multimodal data including text, sound, and visual inputs, opens a new era of enriched, personalized, and interactive learning landscapes in education. Grounded in the theory of multimedia learning, this paper explores the transformative role of MLLMs in central...
Smooth pursuit eye movements provide meaningful insights and information on a subject's behavior and health and may, in particular situations, disturb the performance of typical fixation/saccade classification algorithms. Thus, an automatic and efficient algorithm to identify these movements is paramount for eye-tracking research involving dynamic stimuli. In this paper, we propose the Bayesian Decision Theory Identification (I-BDT) algorithm, a novel ternary classification algorithm that is able to reliably separate fixations, saccades, and smooth...
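A simplified illustration of ternary labeling by maximum a posteriori over per-sample eye speed; the Gaussian speed models and uniform priors are assumptions for illustration, and this is not the published I-BDT algorithm.

```python
# Toy maximum-a-posteriori labeling of fixation / pursuit / saccade from eye speed.
# Speed models (deg/s) and uniform priors are illustrative assumptions.
import numpy as np
from scipy.stats import norm

CLASSES = ["fixation", "pursuit", "saccade"]
SPEED_MODELS = {"fixation": (1.0, 2.0), "pursuit": (15.0, 10.0), "saccade": (200.0, 80.0)}
PRIORS = {c: 1.0 / 3.0 for c in CLASSES}

def classify(speeds: np.ndarray) -> list:
    labels = []
    for s in speeds:
        posteriors = {
            c: PRIORS[c] * norm.pdf(s, loc=mu, scale=sigma)
            for c, (mu, sigma) in SPEED_MODELS.items()
        }
        labels.append(max(posteriors, key=posteriors.get))
    return labels

print(classify(np.array([0.5, 12.0, 250.0])))   # fixation, pursuit, saccade
```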
Shared autonomy systems enhance people's abilities to perform activities of daily living using robotic manipulators. Recent systems succeed by first identifying their operators' intentions, typically by analyzing the user's joystick input. To improve this recognition, it is useful to characterize people's behavior while performing such a task. Furthermore, eye gaze is a rich source of information for understanding operator intention. The goal of this paper is to provide novel insights into the dynamics of control behavior and eye gaze in human-robot shared...
The correct identification of the eyelids and their aperture provides essential data to infer a subject's mental state (e.g., vigilance, fatigue, drowsiness) and to validate or reduce the search space of other eye features (pupil, iris). This knowledge can be used not only to improve many applications, such as gaze tracking and iris recognition, but also to derive information about the user (such as the take-over readiness of a driver in the automated driving context). In this paper, we propose a computer-vision-based approach for eyelid aperture estimation....
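One common landmark-based proxy for eyelid aperture is the eye aspect ratio (EAR); a small sketch of that heuristic follows, as a generic illustration rather than the approach proposed in the paper.

```python
# Eye aspect ratio (EAR) from six eye-contour landmarks: a simple proxy for
# eyelid aperture. The example landmark coordinates are made up.
import numpy as np

def eye_aspect_ratio(landmarks: np.ndarray) -> float:
    """`landmarks` is a (6, 2) array ordered as: outer corner, two upper-lid
    points, inner corner, two lower-lid points."""
    p1, p2, p3, p4, p5, p6 = landmarks
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = 2.0 * np.linalg.norm(p1 - p4)
    return float(vertical / horizontal)

open_eye = np.array([[0, 5], [3, 8], [6, 8], [9, 5], [6, 2], [3, 2]], dtype=float)
print(f"EAR (open eye): {eye_aspect_ratio(open_eye):.2f}")  # larger = wider aperture
```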
Eye tracking is increasingly influencing scientific areas such as psychology, cognitive science, and human-computer interaction. Many eye trackers output the gaze location and the pupil center. However, other valuable information can also be extracted from the eyelids, such as the fatigue of a person. We evaluated Generative Adversarial Networks (GANs) for eyelid area segmentation, data generation, and image refinement. While the segmentation GAN performs the desired task, the others serve as supportive networks. The trained...