Arren Glover

ORCID: 0000-0003-4499-4070
Research Areas
  • Advanced Memory and Neural Computing
  • CCD and CMOS Imaging Sensors
  • Ferroelectric and Negative Capacitance Devices
  • Robotics and Sensor-Based Localization
  • Neural Dynamics and Brain Function
  • Neuroscience and Neural Engineering
  • Video Surveillance and Tracking Methods
  • Reinforcement Learning in Robotics
  • Robot Manipulation and Learning
  • Advanced Vision and Imaging
  • Advanced Image and Video Retrieval Techniques
  • Advanced Neural Network Applications
  • Face and Expression Recognition
  • Visual Attention and Saliency Detection
  • Electronic and Structural Properties of Oxides
  • Age of Information Optimization
  • Radiation Effects in Electronics
  • Human Pose and Action Recognition
  • EEG and Brain-Computer Interfaces
  • Multimodal Machine Learning Applications
  • Distributed Systems and Fault Tolerance
  • Inertial Sensor and Navigation
  • Domain Adaptation and Few-Shot Learning
  • Robotic Path Planning Algorithms
  • Embodied and Extended Cognition

Italian Institute of Technology
2016-2025

Weatherford College
2024

Johns Hopkins University
2024

Valeo (Germany)
2022

AGH University of Krakow
2022

University of Genoa
2020

Queensland University of Technology
2011-2016

The University of Queensland
2009-2010

Appearance-based mapping and localisation is especially challenging when the separate processes occur at different times of day. The problem is exacerbated in the outdoors, where the continuously changing sun angle can drastically affect the appearance of a scene. We confront this challenge by fusing the probabilistic local feature based data association method of FAB-MAP with the pose cell filtering and experience mapping of RatSLAM. We evaluate the effectiveness of our amalgamation of methods using five datasets captured throughout the day from a single...

10.1109/robot.2010.5509547 article EN 2010-05-01
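
As an illustration of the fusion described above, the sketch below shows how per-place appearance match probabilities (FAB-MAP-style) might gate a filtered belief over places (RatSLAM-style). The function name, array shapes, and the simple Bayes-filter formulation are our own assumptions for illustration, not the paper's implementation.

    import numpy as np

    def fuse_appearance_and_pose(match_probs, belief, transition):
        """Hypothetical fusion step: a motion (transition) update on the
        place belief, followed by an appearance (measurement) update
        using FAB-MAP-style match probabilities. Illustrative only."""
        predicted = transition @ belief        # odometry/motion update
        posterior = predicted * match_probs    # weight by appearance matches
        return posterior / posterior.sum()     # renormalise the belief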

Appearance-based loop closure techniques, which leverage the high information content of visual images and can be used independently of pose, are now widely used in robotic applications. The current state of the art in the field is Fast Appearance-Based Mapping (FAB-MAP), which has been demonstrated in several seminal mapping experiments. In this paper, we describe OpenFABMAP, a fully open source implementation of the original FAB-MAP algorithm. Beyond the benefits of full user access to the code, OpenFABMAP provides a number...

10.1109/icra.2012.6224843 article EN 2012-05-01
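
OpenFABMAP itself is a C++ library, so rather than guess at its API, here is a language-agnostic sketch of the bag-of-words place comparison underlying appearance-based loop closure. Note that FAB-MAP proper models visual-word co-occurrence with a Chow-Liu tree, which this naive cosine comparison omits; all names here are illustrative.

    import numpy as np

    def bow_histogram(descriptors, vocabulary):
        """Quantise local feature descriptors to visual words and build
        a normalised histogram (brute-force nearest word, illustrative)."""
        dists = np.linalg.norm(descriptors[:, None] - vocabulary[None], axis=2)
        words = dists.argmin(axis=1)
        hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
        return hist / max(hist.sum(), 1.0)

    def place_similarity(hist_a, hist_b):
        """Cosine similarity between two bag-of-words histograms."""
        denom = np.linalg.norm(hist_a) * np.linalg.norm(hist_b)
        return float(hist_a @ hist_b / denom) if denom else 0.0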

The detection of consistent feature points in an image is fundamental for various kinds of computer vision techniques, such as stereo matching, object recognition, target tracking and optical flow computation. This paper presents an event-based approach to the detection of corner points, which benefits from the high temporal resolution, compressed visual information and low latency provided by an asynchronous neuromorphic camera. The proposed method adapts the commonly used Harris corner detector to event data, in which frames are replaced by a stream...

10.1109/iros.2016.7759610 article EN 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2016-10-01
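
A minimal sketch of the idea of running Harris on event data, assuming a binary surface of the most recent events stands in for the grayscale frame; the window size, gradient operator, and the surface representation are illustrative choices, not the paper's exact formulation.

    import numpy as np
    from scipy.ndimage import sobel, uniform_filter

    def harris_on_event_surface(surface, y, x, k=0.04, win=9):
        """Harris response at an event location, computed on a binary
        surface of recent events. The caller must keep (y, x) at least
        win//2 pixels away from the image border."""
        h = win // 2
        patch = surface[y - h:y + h + 1, x - h:x + h + 1].astype(float)
        ix = sobel(patch, axis=1)                 # horizontal gradients
        iy = sobel(patch, axis=0)                 # vertical gradients
        ixx = uniform_filter(ix * ix, size=win)   # structure tensor terms
        iyy = uniform_filter(iy * iy, size=win)
        ixy = uniform_filter(ix * iy, size=win)
        det = ixx * iyy - ixy ** 2
        trace = ixx + iyy
        return (det - k * trace ** 2)[h, h]       # score at the patch centre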

Event cameras are a new technology that can enable low-latency, fast visual sensing in dynamic environments towards faster robotic vision, as they respond only to changes in the scene and have a very high temporal resolution (< 1 μs). Moving targets produce dense spatio-temporal streams of events that do not suffer from the information loss "between frames" that can occur when traditional cameras are used to track fast-moving targets. Event-based tracking algorithms need to be able to follow the target position within the data, while...

10.1109/iros.2017.8206226 article EN 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2017-09-01

The fast temporal dynamics and intrinsic motion segmentation of event-based cameras are beneficial for robotic tasks that require low-latency visual tracking and control, for example a robot catching a ball. When the event-driven iCub humanoid robot grasps an object, its head and torso move, inducing camera motion, and the tracked objects are no longer trivially segmented amongst the mass of background clutter. Current algorithms have mostly considered stationary cameras with clean event-streams and minimal clutter. This paper introduces a novel...

10.1109/iros.2016.7759345 article EN 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2016-10-01

Human Pose Estimation (HPE) is crucial as a building block for tasks that are based on the accurate understanding of human position, pose and movements. Therefore, accuracy and efficiency in this task echo throughout the system, making it important to find efficient methods that run at fast rates for online applications. The state of the art for mainstream sensors has made considerable advances, but event camera HPE is still in its infancy. Event cameras boast high-frequency data capture and a compact structure, with advantages like high dynamic...

10.1109/cvprw59228.2023.00420 article EN 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2023-06-01

Event cameras are an emerging technology in computer vision, offering extremely low latency and bandwidth, as well as a high temporal resolution and dynamic range. Inherent data compression is achieved as pixel output is only produced by contrast changes at the edges of moving objects. However, current trends in state-of-the-art visual algorithms rely on deep learning, with networks designed to process the colour and intensity information contained in dense arrays, which are notoriously computationally heavy. While a combination...

10.1109/iros.2018.8594119 article EN 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2018-10-01
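
One common way to bridge the gap described above is to accumulate events into fixed-size tensors a conventional network can consume. The encoding below (per-polarity event counts over a time window) is one of many, and the field names and window length are assumptions for illustration.

    import numpy as np

    def events_to_frame(events, height, width, window_us=10_000):
        """Accumulate the most recent window of events into a 2-channel
        (positive/negative polarity) count image for a CNN. 'events' is
        a structured array with fields 't' (us), 'x', 'y', 'p' (0/1)."""
        t_end = events['t'][-1]
        recent = events[events['t'] > t_end - window_us]
        frame = np.zeros((2, height, width), dtype=np.float32)
        np.add.at(frame, (recent['p'], recent['y'], recent['x']), 1.0)
        return frame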

In this paper we present a novel, condition-invariant place recognition algorithm inspired by recent discoveries in human visual neuroscience. The algorithm combines intolerant but fast low resolution whole-image matching with highly tolerant, sub-image patch matching processes. The approach does not require prior training and works on single images, alleviating the need for either a velocity signal or an image sequence, differentiating it from current state of the art methods. We conduct an exhaustive set of experiments evaluating...

10.1109/icra.2014.6907678 article EN 2014-05-01
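
The two-stage idea above (fast, intolerant whole-image matching followed by tolerant patch verification) can be sketched as follows; the tiny resolution, normalisation, and scoring are illustrative stand-ins, not the paper's actual parameters.

    import cv2
    import numpy as np

    def whole_image_distance(img_a, img_b, size=(32, 24)):
        """Fast but condition-intolerant first stage: compare heavily
        downsampled, normalised images by mean absolute difference."""
        a = cv2.resize(img_a, size, interpolation=cv2.INTER_AREA).astype(np.float32)
        b = cv2.resize(img_b, size, interpolation=cv2.INTER_AREA).astype(np.float32)
        a = (a - a.mean()) / (a.std() + 1e-6)
        b = (b - b.mean()) / (b.std() + 1e-6)
        return float(np.abs(a - b).mean())

    def best_candidate(query, database):
        """Rank stored places by the fast stage; a tolerant sub-image
        patch matching stage would then verify the top candidates."""
        scores = [whole_image_distance(query, img) for img in database]
        return int(np.argmin(scores)), min(scores)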

Figure-ground organisation is a perceptual grouping mechanism for detecting objects and boundaries, essential for an agent interacting with the environment. Current figure-ground segmentation methods rely on classical computer vision or deep learning, requiring extensive computational resources, especially during training. Inspired by the primate visual system, we developed a bio-inspired perception system for the neuromorphic robot iCub. The model uses a hierarchical, biologically plausible architecture...

10.1038/s41467-025-56904-9 article EN cc-by-nc-nd Nature Communications 2025-02-22

Unlike standard cameras that send intensity images at a constant frame rate, event-driven cameras asynchronously report pixel-level brightness changes, offering low latency and high temporal resolution (both in the order of microseconds). As such, they have great potential for fast, low-power vision algorithms for robots. Visual tracking, for example, is easily achieved even for very fast stimuli, as only moving objects cause brightness changes. However, cameras mounted on a robot are typically non-stationary and the same tracking problem becomes...

10.1109/icar.2017.8023661 article EN 2017 18th International Conference on Advanced Robotics (ICAR) 2017-07-01

Table tennis robots have gained traction over the last years and have become a popular research challenge for control and perception algorithms. Fast and accurate ball detection is crucial for enabling the robotic arm to rally the ball back successfully. So far, most table tennis robots use conventional, frame-based cameras in the perception pipeline. However, these cameras suffer from motion blur if the frame rate is not high enough for fast-moving objects. Event-based cameras, on the other hand, do not have this drawback since pixels report changes in intensity asynchronously...

10.48550/arxiv.2502.00749 preprint EN arXiv (Cornell University) 2025-02-02

In this work, we present a neuromorphic architecture for head pose estimation and scene representation for the humanoid iCub robot. The spiking neuronal network is fully realized in Intel's neuromorphic research chip, Loihi, and precisely integrates the issued motor commands to estimate the iCub's head pose in a path-integration process. The vision system of the iCub is used to correct the drift of the estimation. Positions of objects in front of the robot are memorized using on-chip synaptic plasticity. We present real-time robotic experiments using 2 degrees of freedom (DoF) of the robot's head and show...

10.3389/fnins.2020.00551 article EN cc-by Frontiers in Neuroscience 2020-06-23
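
Stripped of the spiking implementation, the path-integration-with-drift-correction loop described above reduces to something like the following; the gain and interfaces are our assumptions, and the paper realises this as a spiking network on Loihi rather than in plain Python.

    import numpy as np

    def update_head_pose(pose, velocity_cmd, dt, visual_pose=None, gain=0.1):
        """Integrate issued motor (velocity) commands to dead-reckon a
        2-DoF head pose; when a visual estimate is available, pull the
        integrated pose toward it to correct accumulated drift."""
        pose = pose + np.asarray(velocity_cmd) * dt   # path integration
        if visual_pose is not None:
            pose = pose + gain * (np.asarray(visual_pose) - pose)
        return pose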

There have been a number of corner detection methods proposed for event cameras in the last years, since event-driven computer vision has become more accessible. Current state-of-the-art methods have either unsatisfactory accuracy or real-time performance when considered for practical use, for example when a camera is randomly moved in an unconstrained environment. In this paper, we present yet another method to perform corner detection, dubbed look-up event-Harris (luvHarris), that employs the Harris algorithm for high accuracy but manages...

10.1109/tpami.2021.3135635 article EN IEEE Transactions on Pattern Analysis and Machine Intelligence 2021-12-15
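
The key trick described above is splitting per-event work from heavy computation: events cheaply update a surface, a background thread recomputes a full Harris map as often as compute allows, and each event is classified by a look-up into the latest map. The sketch below captures that split; the decay scheme, threshold, and lack of synchronisation are our simplifications of the published method.

    import threading
    import numpy as np
    import cv2

    class LookupHarris:
        """Sketch of the look-up event-Harris split (illustrative only)."""

        def __init__(self, height, width, threshold=0.01):
            self.surface = np.zeros((height, width), np.float32)
            self.harris = np.zeros((height, width), np.float32)
            self.threshold = threshold
            threading.Thread(target=self._refresh, daemon=True).start()

        def on_event(self, x, y):
            # cheap per-event update: decay a local neighbourhood, stamp event
            y0, x0 = max(y - 3, 0), max(x - 3, 0)
            patch = self.surface[y0:y + 4, x0:x + 4]
            np.clip(patch - 0.05, 0.0, None, out=patch)
            self.surface[y, x] = 1.0
            return self.harris[y, x] > self.threshold   # O(1) look-up

        def _refresh(self):
            while True:   # heavy work decoupled from the event stream
                self.harris = cv2.cornerHarris(self.surface.copy(), 2, 3, 0.04)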

The Lingodroids are a pair of mobile robots that evolve a language for places and the relationships between them (based on distance and direction). Each robot in these studies has its own understanding of the layout of the world, based on its unique experiences and exploration of the environment. Despite having different internal representations, the robots are able to develop a common lexicon for places, and can then use simple sentences to explain and understand places, even ones they could not physically experience, such as areas behind closed doors. By learning language,...

10.1109/icra.2011.5980476 article EN 2011-05-01

Event-driven (ED) cameras are an emerging technology that sample the visual signal based on changes in signal magnitude, rather than at a fixed rate over time. The change in paradigm results in a camera that has lower latency, uses less power, has reduced bandwidth, and higher dynamic range. Such cameras offer many potential advantages for on-line, autonomous robots; however the sensor data does not directly integrate with current "image-based" frameworks and software libraries. The iCub robot uses Yet Another Robot Platform (YARP) as...

10.3389/frobt.2017.00073 article EN cc-by Frontiers in Robotics and AI 2018-01-16

Event cameras offer many advantages for dynamic robotics due to their low latency response to motion, high dynamic range, and inherent compression of the visual signal. Many algorithms easily achieve real-time performance when tested on off-line datasets; however, with the increase in camera resolution and applications on fast-moving robots, latency-free operation is no longer guaranteed. The event-rate is not constant, but is proportional to the amount of movement in the scene, or of the camera itself. Recently, works have instead reported a maximum...

10.1109/icra.2018.8460541 article EN 2018-05-01
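
The scaling argument above is easy to make concrete with back-of-the-envelope numbers; every figure below is an illustrative assumption, not a measurement from the paper.

    # If per-event processing costs ~200 ns, the pipeline saturates at
    # 5 Mevents/s. A fast camera motion generating 20 Mevents/s then
    # accumulates a backlog (and hence latency) of 15 Mevents per second
    # unless events are dropped or the processing rate is capped.
    per_event_cost = 200e-9                  # seconds per event (assumed)
    sustainable_rate = 1.0 / per_event_cost  # 5.0e6 events/s
    peak_rate = 20e6                         # events/s during fast motion (assumed)
    backlog_per_second = peak_rate - sustainable_rate
    print(f"{sustainable_rate:.1e} ev/s sustainable, {backlog_per_second:.1e} ev/s backlog")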

To interact with its environment, a robot working in 3D space needs to organise its visual input in terms of objects or their perceptual precursors, proto-objects. Among other cues, depth is a submodality used to direct attention to features and objects. Current depth-based proto-object models have been implemented for standard RGB-D cameras that produce synchronous frames. In contrast, event cameras are neuromorphic sensors that loosely mimic the function of the human retina by asynchronously encoding per-pixel...

10.1038/s41598-022-11723-6 article EN cc-by Scientific Reports 2022-05-10

Autonomous robots can rely on attention mechanisms to explore complex scenes and select salient stimuli relevant for behaviour. Stimulus selection should be fast to efficiently allocate the available (and limited) computational resources and process in detail only a subset of the otherwise overwhelmingly large sensory input. The amount of processing required is a product of the data sampled by the robot's sensors; while a standard RGB camera produces a fixed output for every pixel of the sensor, an event-camera produces output only where there is a contrast change...

10.1109/iros40897.2019.8967943 article EN 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2019-11-01

Intelligent robots need to recognize objects in their environment. This task is conceptually different from typical image classification in computer vision. Robots deal with particular object instances, not classes of objects, which makes these tasks simpler. However, the instances must be recognized reliably under different viewing angles, poses, and lighting conditions. Moreover, for many applications, the capability to learn new objects quickly, e.g., in an interactive session with a user, and to adapt representations if objects change or mistakes are...

10.1145/3546790.3546791 article EN 2022-07-27

Event cameras offer low latency and data compression for visual applications, through event-driven operation, that can be exploited for edge processing in tiny autonomous agents. Robust, accurate and low latency extraction of highly informative features such as corners is key for most visual processing. While several corner detection algorithms have been proposed, state-of-the-art performance is achieved by "luvHarris". However, this algorithm requires a high number of memory accesses per event, making it less-than...

10.1109/icassp48485.2024.10445937 article EN ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2024-03-18

Robots are able to learn how to interact with objects by developing computational models of affordance. This paper presents an approach in which learning and operation occur concurrently, toward achieving lifelong affordance learning. In such a regime the robot must continually learn about new objects, but without a general rule for what an "object" is, the robot must consider everything in the environment to determine their affordances. In this paper, sensorimotor coordination is modeled using a distributed semi-Markov decision process; it is created...

10.1109/tcds.2016.2612721 article EN IEEE Transactions on Cognitive and Developmental Systems 2016-09-22

Vergence control and tracking allow a robot to maintain an accurate estimate of a dynamic object in three dimensions, improving depth estimation at the fixation point. Brain-inspired implementations of vergence are based on models of the complex binocular cells of the visual cortex that are sensitive to disparity. The energy of their activation provides a disparity-related signal that can be reliably used for control. We implemented such a model on the neuromorphic iCub, equipped with a pair of brain-inspired vision sensors. Such sensors provide...

10.1109/humanoids.2016.7803355 article EN 2016-11-01
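
For reference, the binocular energy model of disparity-sensitive complex cells that such implementations build on is commonly written as below, where L_e, L_o and R_e, R_o are even- and odd-phase (quadrature) Gabor responses to the left and right images, and a phase offset Δφ tunes the unit to a preferred disparity; the notation here is ours, not the paper's.

    % Standard binocular energy model (complex cell response):
    E(\Delta\phi) = \left( L_e + R_e(\Delta\phi) \right)^2
                  + \left( L_o + R_o(\Delta\phi) \right)^2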

This paper investigates trajectory prediction for robotics, to improve the interaction of robots with moving targets, such as catching a bouncing ball. Unexpected, highly non-linear trajectories cannot easily be predicted with regression-based fitting procedures, therefore we apply state of the art machine learning, specifically based on Long-Short Term Memory (LSTM) architectures. In addition, fast-moving targets are better sensed using event cameras, which produce an asynchronous output triggered by spatial...

10.1109/aicas48895.2020.9073855 article EN 2020-04-23
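
A minimal sketch of LSTM-based next-position prediction of the kind described above, in PyTorch; the window length, hidden size, and one-step-ahead output head are illustrative assumptions rather than the paper's architecture.

    import torch
    import torch.nn as nn

    class TrajectoryLSTM(nn.Module):
        """Predict the next 2D position from a window of past positions."""

        def __init__(self, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 2)

        def forward(self, xy_window):            # (batch, time, 2)
            out, _ = self.lstm(xy_window)
            return self.head(out[:, -1])         # next (x, y)

    # usage: one-step-ahead prediction on a dummy 20-sample trajectory
    model = TrajectoryLSTM()
    next_xy = model(torch.randn(1, 20, 2))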