Sean Anthony Byrne

ORCID: 0009-0004-5685-7318
Research Areas
  • Gaze Tracking and Assistive Technology
  • Retinal Imaging and Analysis
  • Visual Attention and Saliency Detection
  • Glaucoma and Retinal Disorders
  • Advanced Neural Network Applications
  • Industrial Vision Systems and Defect Detection
  • Online Learning and Analytics
  • Image Retrieval and Classification Techniques
  • Medical Imaging and Analysis
  • Retinopathy of Prematurity Studies
  • Ocular Surface and Contact Lens
  • Augmented Reality Applications
  • Online and Blended Learning
  • Retinal and Optic Conditions

IMT School for Advanced Studies Lucca
2022-2025

Politecnico di Milano
2025

Lund University
2023

Technical University of Munich
2023

The advent of foundation models signals a new era in artificial intelligence. The Segment Anything Model (SAM) is the first foundation model for image segmentation. In this study, we evaluate SAM's ability to segment features from eye images recorded in virtual reality setups. The increasing requirement for annotated eye-image datasets presents a significant opportunity for SAM to redefine the landscape of data annotation in gaze estimation. Our investigation centers on SAM's zero-shot learning abilities and the effectiveness of prompts like...

10.1145/3654704 article EN other-oa Proceedings of the ACM on Computer Graphics and Interactive Techniques 2024-05-17
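For context, a minimal sketch of the point-prompt workflow such a study relies on, using the public segment-anything package. The checkpoint filename, prompt coordinates, and stand-in image below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load a pretrained SAM backbone (checkpoint filename is a placeholder).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# eye_frame stands in for an HxWx3 RGB eye image from a VR headset camera.
eye_frame = np.zeros((480, 640, 3), dtype=np.uint8)
predictor.set_image(eye_frame)

# One foreground click on the pupil (label 1 = foreground point).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
pupil_mask = masks[scores.argmax()]  # best-scoring binary mask, HxW
```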

Eye movement data has been extensively utilized by researchers interested in studying decision-making within the strategic setting of economic games. In this paper, we demonstrate that both deep learning and support vector machine classification methods are able to accurately identify participants' decision strategies before they commit to an action while playing. Our approach focuses on creating scanpath images that best capture the dynamics of a participant's gaze behaviour in a way that is meaningful for...

10.1038/s41598-023-31536-5 article EN cc-by Scientific Reports 2023-03-23
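A toy sketch of the scanpath-image idea: fixations drawn as dots whose size encodes duration, connected in temporal order, and saved as an image a CNN or SVM could consume. All coordinates and durations here are made up for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

def scanpath_to_image(x, y, durations_ms, out_path="scanpath.png"):
    """Render a fixation sequence as a 2D scanpath image: lines show
    saccade order, dot area encodes fixation duration."""
    fig, ax = plt.subplots(figsize=(2.24, 2.24), dpi=100)  # ~224x224 px
    ax.plot(x, y, color="gray", linewidth=1)               # saccade path
    ax.scatter(x, y, s=durations_ms * 0.5, alpha=0.6)      # fixations
    ax.set_xlim(0, 1920)
    ax.set_ylim(1080, 0)                                   # screen coords, y down
    ax.axis("off")
    fig.savefig(out_path, bbox_inches="tight", pad_inches=0)
    plt.close(fig)

# Hypothetical fixation sequence (pixels, milliseconds).
scanpath_to_image(np.array([300, 900, 1500]),
                  np.array([200, 540, 800]),
                  np.array([180.0, 240.0, 320.0]))
```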

We explore the transformative potential of SAM 2, a vision foundation model, in advancing gaze estimation. SAM 2 addresses key challenges in gaze estimation by significantly reducing annotation time, simplifying deployment, and enhancing segmentation accuracy. Utilizing its zero-shot capabilities with minimal user input (a single click per video), we tested SAM 2 on over 14 million eye images from a diverse range of datasets, including the EDS challenge datasets and Labelled Pupils in the Wild. This is the first application of SAM 2 to this domain...

10.1145/3729409 article EN Proceedings of the ACM on Computer Graphics and Interactive Techniques 2025-05-26
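A rough sketch of what "a single click per video" can look like with the sam2 package's video predictor; the config and checkpoint names, frame directory, and click location are all placeholders, and the exact interface may differ between releases.

```python
import numpy as np
from sam2.build_sam import build_sam2_video_predictor

# Config/checkpoint names are placeholders for a released SAM 2 model.
predictor = build_sam2_video_predictor("sam2_hiera_l.yaml", "sam2_hiera_large.pt")

# Initialize on a directory of eye-video frames, then add one pupil click
# in the first frame (label 1 = foreground).
state = predictor.init_state(video_path="eye_video_frames/")
predictor.add_new_points_or_box(
    inference_state=state, frame_idx=0, obj_id=1,
    points=np.array([[320, 240]], dtype=np.float32),
    labels=np.array([1], dtype=np.int32),
)

# The single prompt is propagated across all remaining frames.
for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
    pupil_mask = (mask_logits[0] > 0.0).cpu().numpy()  # per-frame binary mask
```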

We present a deep learning method for accurately localizing the center of a single corneal reflection (CR) in an eye image. Unlike previous approaches, we use a convolutional neural network (CNN) that was trained solely using synthetic data. Using only synthetic data has the benefit of completely sidestepping the time-consuming process of manual annotation that is required for supervised training on real eye images. To systematically evaluate the accuracy of our method, we first tested it on images with CRs placed on different backgrounds and...

10.3758/s13428-023-02297-w article EN cc-by Behavior Research Methods 2023-12-19
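A minimal sketch of the overall recipe: a small CNN that regresses the (x, y) center of a CR from an image crop, trained purely on synthetic crops with known centers. The architecture and sizes below are illustrative, not the paper's network.

```python
import torch
import torch.nn as nn

class CRLocalizer(nn.Module):
    """Small CNN that regresses the (x, y) center of a corneal reflection
    from a grayscale eye-image crop. Architecture is illustrative only."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)  # predicted (x, y) in pixel units

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Training uses only synthetic crops with known CR centers, e.g.:
model = CRLocalizer()
crops = torch.rand(8, 1, 64, 64)   # synthetic eye-image crops
centers = torch.rand(8, 2) * 64    # ground-truth CR centers
loss = nn.MSELoss()(model(crops), centers)
loss.backward()
```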

Instructors who teach digital literacy skills are increasingly faced with the challenges that come with larger student populations and online courses. We asked an educator how we could support learning and better assist instructors both in and out of the classroom. To address these challenges, we discuss how behavioral signals collected from eye tracking and mouse movements can be combined to offer predictions of performance. In our preliminary study, participants completed two image masking tasks in Adobe Photoshop based on real...

10.1145/3588015.3589197 article EN 2023-05-24
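One plausible shape for such a pipeline: concatenate summary gaze and mouse features per participant and fit a standard classifier. The feature choices, labels, and random data below are assumptions for illustration, not the study's actual features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-participant features: gaze (fixation count, mean fixation
# duration, saccade amplitude) concatenated with mouse (distance, clicks).
gaze_features = rng.normal(size=(40, 3))
mouse_features = rng.normal(size=(40, 2))
X = np.hstack([gaze_features, mouse_features])
y = rng.integers(0, 2, size=40)  # task performance label: low/high

# A simple classifier over the combined behavioral signals.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```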

Image classification models are becoming a popular method of analysis for scanpath classification. To implement these models, gaze data must first be reconfigured into a 2D image. However, this step receives relatively little attention in the literature, as the focus is mostly placed on model configuration. As standard architectures have become more accessible to the wider eye-tracking community, we highlight the importance of carefully choosing the feature representations within scanpath images, as they may heavily affect...

10.1145/3591130 article EN Proceedings of the ACM on Human-Computer Interaction 2023-05-17
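To make the point concrete, here is one of many possible gaze-to-image feature representations, a duration-weighted fixation heatmap; swapping it for, say, a binary fixation map or time-encoded color channels can change downstream classifier results. The image size and smoothing below are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_to_heatmap(x, y, durations_ms, shape=(224, 224), sigma=5.0):
    """Duration-weighted fixation heatmap: one choice of feature
    representation among several plausible alternatives."""
    img = np.zeros(shape, dtype=np.float32)
    for xi, yi, d in zip(x, y, durations_ms):
        img[int(yi), int(xi)] += d           # accumulate fixation duration
    img = gaussian_filter(img, sigma=sigma)  # spatial smoothing
    return img / (img.max() + 1e-8)          # normalize to [0, 1]

# Hypothetical fixations already scaled to image coordinates.
heat = gaze_to_heatmap(np.array([50, 112, 180]),
                       np.array([60, 112, 150]),
                       np.array([180.0, 240.0, 320.0]))
```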

Deep learning has shown promise for gaze estimation in Virtual Reality (VR) and other head-mounted applications, but such models are hard to train due to a lack of available data. Here we introduce a novel method to train neural networks using synthetic images that model the light distributions captured in a P-CR setup. We tested our method on a dataset of real eye images from a VR setup, achieving 76% accuracy, which is close to the state-of-the-art model that was trained on the dataset itself. The localization error was 1.56 pixels for the CRs and 2.02 pixels for the pupil, on par with...

10.1145/3565066.3608690 article EN 2023-09-22
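A crude stand-in for the idea of synthesizing training images with known ground truth: a pupil and a corneal reflection rendered as Gaussian intensity distributions over a noisy background. The real method models the actual light distributions of a P-CR setup; everything below is a simplified assumption.

```python
import numpy as np

def render_synthetic_eye(shape=(240, 320), pupil_xy=(160, 120),
                         cr_xy=(170, 110), rng=None):
    """Toy synthetic eye image: dark Gaussian pupil plus a small bright
    Gaussian corneal reflection on a noisy background. Labels (the true
    centers) come for free, so no manual annotation is needed."""
    rng = rng or np.random.default_rng(0)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]].astype(np.float32)
    img = 0.5 + 0.05 * rng.standard_normal(shape).astype(np.float32)
    img -= 0.4 * np.exp(-(((xx - pupil_xy[0]) ** 2 + (yy - pupil_xy[1]) ** 2)
                          / (2 * 20.0 ** 2)))   # dark pupil
    img += 0.6 * np.exp(-(((xx - cr_xy[0]) ** 2 + (yy - cr_xy[1]) ** 2)
                          / (2 * 2.0 ** 2)))    # bright CR glint
    return np.clip(img, 0, 1), np.array(pupil_xy), np.array(cr_xy)

image, pupil_label, cr_label = render_synthetic_eye()  # image + ground truth
```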

As tech giants such as Apple and Meta invest heavily in Virtual and Augmented Reality (VR/AR) technologies, often collectively termed Extended Reality (XR) devices, a significant societal concern emerges: the use of eye-tracking technology within these devices. Gaze data holds immense value, revealing insights into user attention, health, and cognitive states. This raises substantial concerns over privacy and fairness, with potential risks of targeted ads, unauthorized surveillance, and data re-purposing. The impact of eye...

10.1109/aixvr59861.2024.00020 article EN 2024-01-17

Deep learning has bolstered gaze estimation techniques, but real-world deployment has been impeded by inadequate training datasets. This problem is exacerbated both by hardware-induced variations in eye images and by inherent biological differences across the recorded participants, leading to feature and pixel-level variance that hinders the generalizability of models trained on specific datasets. While synthetic datasets can be a solution, their creation is time- and resource-intensive. To address this problem, we present...

10.48550/arxiv.2309.06129 preprint EN cc-by-sa arXiv (Cornell University) 2023-01-01

We explore the transformative potential of SAM 2, a vision foundation model, in advancing gaze estimation and eye tracking technologies. By significantly reducing annotation time, lowering technical barriers through its ease of deployment, and enhancing segmentation accuracy, SAM 2 addresses critical challenges faced by researchers and practitioners. Utilizing its zero-shot capabilities with minimal user input (a single click per video), we tested SAM 2 on over 14 million eye images from diverse datasets, including virtual...

10.48550/arxiv.2410.08926 preprint EN arXiv (Cornell University) 2024-10-11

We present a deep learning method for accurately localizing the center of a single corneal reflection (CR) in an eye image. Unlike previous approaches, we use a convolutional neural network (CNN) that was trained solely using simulated data. Using only simulated data has the benefit of completely sidestepping the time-consuming process of manual annotation that is required for supervised training on real eye images. To systematically evaluate the accuracy of our method, we first tested it on images with CRs placed on different backgrounds and...

10.48550/arxiv.2304.05673 preprint EN cc-by-sa arXiv (Cornell University) 2023-01-01

The advent of foundation models signals a new era in artificial intelligence. The Segment Anything Model (SAM) is the first foundation model for image segmentation. In this study, we evaluate SAM's ability to segment features from eye images recorded in virtual reality setups. The increasing requirement for annotated eye-image datasets presents a significant opportunity for SAM to redefine the landscape of data annotation in gaze estimation. Our investigation centers on SAM's zero-shot learning abilities and the effectiveness of prompts like...

10.48550/arxiv.2311.08077 preprint EN cc-by-sa arXiv (Cornell University) 2023-01-01

Eye movement data has been extensively utilized by researchers interested in studying decision-making within the strategic setting of economic games. In this paper, we demonstrate both a deep learning and a traditional machine learning classification method, which are able to accurately identify a given participant's decision strategy before they commit to an action while playing. Our approach focuses on creating scanpath images that best capture the dynamics of gaze behaviour during the game in a way that is meaningful...

10.21203/rs.3.rs-2088288/v1 preprint EN cc-by Research Square (Research Square) 2022-10-31