- Advanced Vision and Imaging
- Robotics and Sensor-Based Localization
- Advanced Image and Video Retrieval Techniques
- Optical Measurement and Interference Techniques
- Video Surveillance and Tracking Methods
- 3D Surveying and Cultural Heritage
- Robotic Path Planning Algorithms
- Tactile and Sensory Interactions
- Neuroscience and Neural Engineering
- Image and Object Detection Techniques
- Advanced Memory and Neural Computing
- Image Processing Techniques and Applications
- Remote Sensing and LiDAR Applications
- Industrial Vision Systems and Defect Detection
- Human Pose and Action Recognition
- Multimodal Machine Learning Applications
- Robot Manipulation and Learning
- Indoor and Outdoor Localization Technologies
- Image Retrieval and Classification Techniques
- Wind Energy Research and Development
- Image and Video Stabilization
- Infrared Target Detection Methodologies
- Advanced Neural Network Applications
- EEG and Brain-Computer Interfaces
- Computer Graphics and Visualization Techniques
Universidad de Zaragoza
2015-2024
Institute of Engineering
2017-2020
Instituto Tecnológico de Aragón
2018
Polytechnic University of Puerto Rico
1995-1999
Many robotic applications work with visual reference maps, which usually consist of more or less organized sets of images. In these applications there is a compromise between the density of the data stored and the capacity to identify the robot's localization later, when it is not exactly in the same position as in one of the stored views. Here we propose the use of a recently developed feature, SURF, to improve the performance of appearance-based localization methods that perform image retrieval in large data sets. This feature is integrated in a vision-based...
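The retrieval step described above can be sketched with plain numpy: given precomputed descriptors (SURF or any other local feature), each reference image is scored by how many query descriptors pass Lowe's ratio test against it, and the best-scoring image is taken as the localization hypothesis. This is a minimal illustration of the general appearance-based scheme, not the paper's exact pipeline; the function names and the 0.7 ratio are assumptions.

```python
import numpy as np

def match_count(query_desc, ref_desc, ratio=0.7):
    """Count query descriptors whose nearest reference neighbour passes
    Lowe's ratio test (nearest / second-nearest distance < ratio)."""
    # pairwise Euclidean distances, shape (n_query, n_ref)
    d = np.linalg.norm(query_desc[:, None, :] - ref_desc[None, :, :], axis=2)
    d.sort(axis=1)  # per-row ascending distances
    return int(np.sum(d[:, 0] < ratio * d[:, 1]))

def localize(query_desc, reference_maps):
    """Return the index of the reference image with the most matches."""
    votes = [match_count(query_desc, ref) for ref in reference_maps]
    return int(np.argmax(votes))
```

In a real system the descriptors would come from a feature extractor such as OpenCV's SURF implementation, and the brute-force matching would be replaced by an approximate nearest-neighbour index to scale to large image sets.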
The problem of 3D layout recovery in indoor scenes has been a core research topic for over a decade. However, several major challenges remain unsolved. Among the most relevant ones, part of the state-of-the-art methods make implicit or explicit assumptions on the scene, e.g. box-shaped or Manhattan layouts. Also, current methods are computationally expensive and not suitable for real-time applications like robot navigation or AR/VR. In this work we present CFL (Corners for Layout), the first end-to-end model for 360...
Indoor localization systems are undoubtedly of interest in many application fields. Like outdoor systems, they suffer from non-line-of-sight (NLOS) errors, which hinder their robustness and accuracy. Though ad hoc techniques have been developed to deal with this problem, unfortunately most of them are not applicable indoors due to the high variability of the environment (movement of furniture and people, etc.). In this paper, we describe the use of robust regression to detect and reject NLOS measures in a location estimation using...
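The key idea — down-weighting NLOS-corrupted measurements instead of letting them dominate a least-squares fit — can be sketched as iteratively reweighted Gauss-Newton with a Huber weight on the range residuals. This is a generic robust-regression sketch under assumed 2-D trilateration geometry, not the paper's specific estimator; all names and the Huber threshold are illustrative.

```python
import numpy as np

def robust_position(beacons, ranges, n_iter=20, huber_k=1.0):
    """Estimate a 2-D position from beacon range measurements with
    iteratively reweighted Gauss-Newton. Residuals larger than huber_k
    get weight huber_k/|r|, so NLOS-like outliers are down-weighted."""
    x = beacons.mean(axis=0)  # initial guess: beacon centroid
    for _ in range(n_iter):
        diff = x - beacons                      # (n, 2)
        dist = np.linalg.norm(diff, axis=1)     # predicted ranges
        r = dist - ranges                       # range residuals
        J = diff / dist[:, None]                # Jacobian of dist w.r.t. x
        w = np.where(np.abs(r) <= huber_k, 1.0, huber_k / np.abs(r))
        W = np.diag(w)
        dx = np.linalg.solve(J.T @ W @ J, -J.T @ W @ r)
        x = x + dx
    return x
```

Setting `huber_k` very large recovers ordinary (non-robust) least squares, which is a convenient way to compare the two behaviours on the same data.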
Vision-based topological localization and mapping for autonomous robotic systems have received increased research interest in recent years. The need to map larger environments requires models at different levels of abstraction and additional abilities to deal with large amounts of data efficiently. Most successful appearance-based approaches on large datasets typically represent locations using local image features. We study the feasibility of performing these tasks in urban environments using global descriptors instead, taking...
Prosthetic vision is being applied to partially recover the retinal stimulation of visually impaired people. However, the phosphenic images produced by the implants have a very limited information bandwidth due to the poor resolution and the lack of color or contrast. The ability of object recognition and scene understanding in real environments is severely restricted for prosthetic users. Computer vision can play a key role in overcoming these limitations and optimizing the visual information presented in prosthetic vision, improving the amount of information that is presented. We present a new approach to build...
Visual prostheses are designed to restore partial functional vision in patients with total vision loss. Retinal visual prostheses provide limited capabilities as a result of their low resolution, narrow field of view and poor dynamic range. Understanding the influence of these parameters on perception results can guide prosthesis research and design. In this work, we evaluate performance with respect to the spatial resolution of simulated prostheses, measuring accuracy and response time in a search and recognition task. Twenty-four normally sighted participants were asked to find and recognize usual...
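Studies with normally sighted participants like the one above typically rely on simulated prosthetic vision: an input image is reduced to a coarse grid of phosphene brightness levels. A minimal sketch of such a simulation, assuming simple block averaging (real simulators add Gaussian phosphene profiles, dropout and brightness quantization):

```python
import numpy as np

def phosphene_map(image, grid=(10, 10)):
    """Simulate low-resolution prosthetic vision: reduce an intensity
    image to a coarse grid of phosphene brightness levels by averaging
    the pixels that fall inside each grid cell."""
    h, w = image.shape
    gh, gw = grid
    # crop so the image tiles evenly into grid cells
    image = image[: h - h % gh, : w - w % gw]
    cells = image.reshape(gh, image.shape[0] // gh,
                          gw, image.shape[1] // gw)
    return cells.mean(axis=(1, 3))  # one brightness value per phosphene
```

Varying `grid` is exactly the kind of spatial-resolution manipulation whose effect on search and recognition performance such experiments measure.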
In this work we integrate the Spherical Camera Model for catadioptric systems in a Visual-SLAM application. The Spherical Camera Model is a projection model that unifies central catadioptric and conventional cameras. To integrate it into an Extended Kalman Filter-based SLAM we require the linearization of the direct and inverse projections. We have performed an initial experimentation with real omnidirectional sequences, including challenging trajectories. The results confirm that this camera model gives much better orientation accuracy, improving the estimated trajectory.
In this letter, we propose a novel procedure for three-dimensional layout recovery of indoor scenes from single 360° panoramic images. With such images, the whole scene is seen at once, allowing us to recover closed geometries. Our method strategically combines the accuracy provided by geometric reasoning (lines and vanishing points) with the higher level of data abstraction and pattern recognition achieved by deep learning techniques (edge and normal maps). Thus, we extract the structural corners from which we generate...
This paper addresses the robot and landmark localization problem from bearing-only data in three views, simultaneously with the robust association of this data. The algorithm is based on the 1-D trifocal tensor, which relates linearly the observed parameters. The aim of this work is to bring a useful geometric construction from computer vision closer to robotic applications. One contribution is the evaluation of two linear approaches for estimating...
The SLAM (Simultaneous Localization and Mapping) problem is one of the essential challenges for current robotics. Our main objective in this work is to develop a real-time visual SLAM system using monocular omnidirectional vision. Our approach is based on the Extended Kalman Filter (EKF). We use the Spherical Camera Model to obtain geometric information from the images. This model is integrated in the EKF-based SLAM through the linearization of the direct and inverse projections. We introduce a new computation of the descriptor patch for catadioptric cameras, which...
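The direct and inverse projections that the EKF linearizes can be written compactly in the unified (spherical) model: a 3-D point is first normalized onto the unit sphere, then projected from a center displaced by the mirror parameter ξ. A minimal sketch in normalized image coordinates (intrinsics omitted; function names are illustrative):

```python
import numpy as np

def project(X, xi):
    """Unified (spherical) model: project a 3-D point onto the
    normalized image plane of a central catadioptric camera.
    xi is the mirror parameter (xi = 0 recovers the pinhole model)."""
    x, y, z = X
    s = np.sqrt(x * x + y * y + z * z)
    return np.array([x / (z + xi * s), y / (z + xi * s)])

def unproject(m, xi):
    """Inverse projection: recover the unit ray through the
    normalized image point m."""
    mx, my = m
    r2 = mx * mx + my * my
    alpha = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (1.0 + r2)
    return np.array([alpha * mx, alpha * my, alpha - xi])
```

Both functions are smooth in their arguments, so their Jacobians (obtained analytically or by automatic differentiation) are what the EKF measurement update actually consumes.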
In this paper we present a dense visual odometry system for RGB-D cameras performing both photometric and geometric error minimisation to estimate the camera motion between frames. Contrary to most works in the literature, we parametrise the geometric error by the inverse depth instead of the depth, which translates into a better fit of the distribution assumed by the robust cost functions. We also provide a unified evaluation, under the same framework, of the different estimators and ways of computing the scale of the residuals that can be found spread along the related literature. For...
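The core geometric operation in such a system is warping a reference pixel into the current frame given its inverse depth ρ = 1/Z; the photometric residual then compares intensities at the original and warped locations, and the geometric residual compares (inverse) depths. A minimal sketch of the warp under an assumed pinhole intrinsic matrix K (names are illustrative):

```python
import numpy as np

def warp_inverse_depth(u, inv_depth, K, R, t):
    """Warp pixel u from the reference frame into the current frame,
    parametrising the 3-D point by its inverse depth rho = 1/Z.
    Scaling the transformed point by rho avoids ever dividing by the
    depth, which keeps the expression well behaved for distant points."""
    Kinv = np.linalg.inv(K)
    ray = Kinv @ np.array([u[0], u[1], 1.0])  # back-projected ray at Z = 1
    # rho * (R @ (ray / rho) + t) == R @ ray + rho * t, and projection
    # is invariant to this positive scaling:
    p = R @ ray + inv_depth * t
    q = K @ p
    return q[:2] / q[2]  # pixel in the current frame
```

Note the motion enters through `inv_depth * t`: for ρ → 0 (points at infinity) the warp depends only on rotation, which is one reason inverse-depth residuals are better behaved than depth residuals.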