Camillo J. Taylor

ORCID: 0000-0002-9332-5087
Research Areas
  • Robotics and Sensor-Based Localization
  • Advanced Vision and Imaging
  • Robotic Path Planning Algorithms
  • Advanced Image and Video Retrieval Techniques
  • Distributed Control Multi-Agent Systems
  • Modular Robots and Swarm Intelligence
  • Image Processing Techniques and Applications
  • Indoor and Outdoor Localization Technologies
  • Optical measurement and interference techniques
  • Advanced Neural Network Applications
  • Target Tracking and Data Fusion in Sensor Networks
  • 3D Surveying and Cultural Heritage
  • Advanced Image Processing Techniques
  • Video Surveillance and Tracking Methods
  • Distributed and Parallel Computing Systems
  • Robotic Locomotion and Control
  • Autonomous Vehicle Technology and Safety
  • UAV Applications and Optimization
  • Underwater Vehicles and Communication Systems
  • Robot Manipulation and Learning
  • Computer Graphics and Visualization Techniques
  • Robotics and Automated Systems
  • Remote Sensing and LiDAR Applications
  • Smart Agriculture and AI
  • Scientific Computing and Data Management

University of Pennsylvania
2016-2025

Pennsylvania State University
2022

California University of Pennsylvania
2002-2021

Philadelphia University
2001-2017

State University of New York
2005

Global and Regional Asperger Syndrome Partnership
2001

Yale University
1992

We describe a framework for cooperative control of a group of nonholonomic mobile robots that allows us to build complex systems from simple controllers and estimators. The resultant modular approach is attractive because of the potential for reusability. Our approach to composition also guarantees stability and convergence in a wide range of tasks. There are two key features of our approach: 1) a paradigm for switching between simple decentralized controllers that allows for changes in formation; 2) the use of information from a single type of sensor, an omnidirectional camera, for all...

10.1109/tra.2002.803463 article EN IEEE Transactions on Robotics and Automation 2002-10-01
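
As a rough illustration of the composition-and-switching idea described above (not the paper's actual controllers), the following Python sketch switches a follower robot between two hypothetical formation-keeping modes, both driven by the kind of relative-position measurements an omnidirectional camera could supply. All gains, the midpoint heuristic, and the mode-selection rule are invented.

```python
import numpy as np

def go_to_point(pose, goal, k_v=0.5, k_w=1.5):
    """Simple unicycle go-to-goal law: returns (linear velocity, angular velocity)."""
    dx, dy = goal - pose[:2]
    rho = np.hypot(dx, dy)
    alpha = np.arctan2(dy, dx) - pose[2]
    alpha = np.arctan2(np.sin(alpha), np.cos(alpha))   # wrap angle to [-pi, pi]
    return k_v * rho, k_w * alpha

def separation_bearing_control(follower, leader, d_des=1.0, psi_des=np.pi):
    """Hold a desired distance and bearing with respect to a single leader."""
    goal = leader[:2] + d_des * np.array([np.cos(leader[2] + psi_des),
                                          np.sin(leader[2] + psi_des)])
    return go_to_point(follower, goal)

def separation_separation_control(follower, leader_a, leader_b):
    """Hold prescribed separations to two leaders (midpoint heuristic stands in
    for the true two-circle intersection)."""
    goal = 0.5 * (leader_a[:2] + leader_b[:2])
    return go_to_point(follower, goal)

def supervisor(follower, leaders):
    """Switch mode based on how many leaders the (simulated) camera currently sees."""
    if len(leaders) >= 2:
        return separation_separation_control(follower, leaders[0], leaders[1])
    return separation_bearing_control(follower, leaders[0])

follower = np.array([0.0, 0.0, 0.0])                       # x, y, heading
leaders = [np.array([2.0, 1.0, 0.0]), np.array([2.0, -1.0, 0.0])]
print(supervisor(follower, leaders))                        # (v, omega) for this step
```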

In recent years, vision-aided inertial odometry for state estimation has matured significantly. However, we still encounter challenges in terms of improving the computational efficiency and robustness of the underlying algorithms for applications in autonomous flight with micro aerial vehicles, in which it is difficult to use high-quality sensors and powerful processors because of constraints on size and weight. In this letter, we present a filter-based stereo visual inertial odometry that uses the multistate constraint Kalman filter. Previous...

10.1109/lra.2018.2793349 article EN IEEE Robotics and Automation Letters 2018-01-15
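
The letter is built around a multistate constraint Kalman filter update that stacks feature reprojection residuals over past camera poses. The snippet below is only a toy illustration of such a stereo reprojection residual, with made-up intrinsics and baseline; it is not the letter's implementation.

```python
import numpy as np

# Toy stereo reprojection residual of the kind an MSCKF update stacks over
# several past camera poses; the intrinsics and baseline below are invented.
fx = fy = 450.0
cx, cy = 320.0, 240.0
baseline = 0.1  # metres between left and right cameras

def project(p_cam):
    """Pinhole projection of a 3D point expressed in the camera frame."""
    x, y, z = p_cam
    return np.array([fx * x / z + cx, fy * y / z + cy])

def stereo_residual(p_world, R_wc, t_wc, z_left, z_right):
    """Residual between measured and predicted pixels in the left and right cameras."""
    p_left = R_wc.T @ (p_world - t_wc)                 # world -> left camera frame
    p_right = p_left - np.array([baseline, 0.0, 0.0])  # left -> right camera frame
    return np.concatenate([z_left - project(p_left),
                           z_right - project(p_right)])

# One feature seen from one pose: identity rotation, camera at the origin.
p_world = np.array([0.2, -0.1, 2.0])
r = stereo_residual(p_world, np.eye(3), np.zeros(3),
                    z_left=project(p_world),
                    z_right=project(p_world - [baseline, 0.0, 0.0]))
print(r)  # ~zero residual for a perfect, noise-free measurement
```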

This paper describes a fruit counting pipeline based on deep learning that accurately counts fruit in unstructured environments. Obtaining reliable fruit counts is challenging because of variations in appearance due to illumination changes and occlusions from foliage and neighboring fruits. We propose a novel approach that uses deep learning to map input images to total fruit counts. The pipeline utilizes a custom crowdsourcing platform to quickly label large data sets. A blob detector based on a fully convolutional network extracts candidate regions in the images. A counting algorithm...

10.1109/lra.2017.2651944 article EN IEEE Robotics and Automation Letters 2017-01-11
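
A minimal sketch of the candidate-region step described above: threshold a per-pixel fruit score map (here synthetic) and extract connected blobs with SciPy. The crowdsourcing platform and the counting network itself are not reproduced; the blobs simply stand in for the regions they would receive.

```python
import numpy as np
from scipy import ndimage

# Synthetic stand-in for a fully convolutional network's per-pixel fruit score map.
rng = np.random.default_rng(0)
score_map = rng.random((120, 160)) * 0.3
yy, xx = np.mgrid[:120, :160]
for cy, cx in [(30, 40), (60, 100), (90, 70)]:          # three fake "fruits"
    score_map += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 4.0 ** 2))

mask = score_map > 0.5                                   # candidate fruit pixels
labels, num_blobs = ndimage.label(mask)                  # connected-component blobs
sizes = ndimage.sum(mask, labels, index=range(1, num_blobs + 1))
candidates = [i + 1 for i, s in enumerate(sizes) if s >= 5]   # drop tiny blobs

print(f"candidate regions: {len(candidates)}")           # these would feed a count regressor
```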

Homography estimation between multiple aerial images can provide relative pose for collaborative autonomous exploration and monitoring. Usage on a robotic system requires a fast and robust homography estimation algorithm. In this letter, we propose an unsupervised learning algorithm that trains a deep convolutional neural network to estimate planar homographies. We compare the proposed algorithm to traditional feature-based and direct methods, as well as to a corresponding supervised learning algorithm. Our empirical results demonstrate that compared...

10.1109/lra.2018.2809549 article EN IEEE Robotics and Automation Letters 2018-02-26
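
A minimal PyTorch sketch of the unsupervised objective this kind of approach relies on: warp one image with a candidate homography and penalize the photometric difference against the other image. The homography here is hand-picked rather than predicted by a network, and the warping routine is an assumption of how such a loss could be wired up, not the letter's training code.

```python
import torch
import torch.nn.functional as F

def warp_with_homography(img, H):
    """Warp img (1,C,H,W) so that output[y, x] = img[H^{-1}(x, y)] in pixel coords."""
    _, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1).reshape(-1, 3)
    src = (torch.linalg.inv(H) @ pix.T).T                 # back-project target pixels
    src = src[:, :2] / src[:, 2:3]
    grid = torch.empty(h * w, 2)                          # normalise to [-1, 1]
    grid[:, 0] = 2.0 * src[:, 0] / (w - 1) - 1.0
    grid[:, 1] = 2.0 * src[:, 1] / (h - 1) - 1.0
    return F.grid_sample(img, grid.reshape(1, h, w, 2), align_corners=True)

def photometric_loss(img_a, img_b, H_pred):
    """L1 photometric error: how well does H_pred explain img_b from img_a?"""
    return torch.mean(torch.abs(warp_with_homography(img_a, H_pred) - img_b))

img_a = torch.rand(1, 1, 64, 64)
H_true = torch.tensor([[1.0, 0.0, 3.0],     # pure 3-pixel horizontal shift
                       [0.0, 1.0, 0.0],
                       [0.0, 0.0, 1.0]])
img_b = warp_with_homography(img_a, H_true)
print(photometric_loss(img_a, img_b, H_true).item())        # near zero
print(photometric_loss(img_a, img_b, torch.eye(3)).item())  # larger
```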

In this work we propose long wave infrared (LWIR) imagery as a viable supporting modality for semantic segmentation using learning-based techniques. We first address the problem of RGB-thermal camera calibration by proposing a passive calibration target and a procedure that is both portable and easy to use. Second, we present PST900, a dataset of 894 synchronized and calibrated RGB and Thermal image pairs with per-pixel human annotations across four distinct classes from the DARPA Subterranean Challenge. Lastly, we propose a CNN architecture...

10.1109/icra40945.2020.9196831 article EN 2020-05-01

In this paper we propose a vision-based stabilization and output tracking control method for a model helicopter. A novel two-camera method is introduced for estimating the full six-degrees-of-freedom pose of the helicopter. One of these cameras is located on-board the helicopter, and the other camera is located on the ground. Unlike previous work, the two cameras are set to see each other. The pose estimation algorithm is compared in simulation to other methods and is shown to be less sensitive to errors in feature detection. In order to build an autonomous helicopter, two methods of control are studied: one using a series of mode-based,...

10.1177/0278364905053804 article EN The International Journal of Robotics Research 2005-05-01

We describe a framework for coordinating multiple robots in cooperative manipulation tasks in which vision is used for establishing relative position and orientation and for maintaining formation. The two key contributions are a scheme for localizing the robots based on visual imagery that is more robust than decentralized localization, and a set of control algorithms that allow the robots to maintain a prescribed formation (shape and size). This ability allows the robots to "trap" objects in their midst and "flow" them to a desired position. We derive the localization scheme and present experimental...

10.1109/iros.2001.976240 article EN 2002-11-13

In this paper, we present an experimental study of strategies for maintaining end-to-end communication links in tasks such as surveillance, reconnaissance, and target search and identification, where team connectivity is required for situational awareness. Our main contributions are threefold: (a) We present the construction of a radio signal strength map that can be used to plan multi-robot tasks and can also serve as useful perceptual information. We show how a nominal model of an urban environment obtained by aerial...

10.1002/rob.20221 article EN Journal of Field Robotics 2007-12-14
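
In the paper the radio signal strength map is built from measurements; the toy grid below is instead filled in with a standard log-distance path-loss model and invented constants, just to show the kind of map a mission planner could query when checking connectivity constraints.

```python
import numpy as np

# Toy radio signal strength map over a 50 m x 50 m area using a standard
# log-distance path-loss model; transmitter location and constants are invented.
tx = np.array([10.0, 40.0])     # transmitter position, metres
p_tx_dbm = 20.0                 # transmit power, dBm
path_loss_exp = 3.0             # typical urban path-loss exponent
ref_loss_db = 40.0              # loss at the 1 m reference distance

xs = np.arange(0.0, 50.0, 1.0)
ys = np.arange(0.0, 50.0, 1.0)
xx, yy = np.meshgrid(xs, ys)
dist = np.maximum(np.hypot(xx - tx[0], yy - tx[1]), 1.0)
rss_dbm = p_tx_dbm - ref_loss_db - 10.0 * path_loss_exp * np.log10(dist)

# A planner could query the map to check whether a candidate robot position
# still satisfies a minimum signal strength for end-to-end connectivity.
ok = rss_dbm > -80.0
print(f"cells above -80 dBm: {ok.sum()} of {ok.size}")
```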

In this paper, we report on the integration challenges of the various component technologies developed toward the establishment of a framework for deploying an adaptive system of heterogeneous robots for urban surveillance. In our integrated experiment and demonstration, aerial robots generate maps that are used to design navigation controllers and plan missions for the team. A team of ground robots constructs a radio-signal strength map that is used as an aid in planning missions. Multiple robots establish a mobile ad hoc communication network that is aware of the radio-signal strength between...

10.1002/rob.20222 article EN Journal of Field Robotics 2007-11-01

In this paper, we address the estimation, control, navigation, and mapping problems required to achieve autonomous inspection of penstocks and tunnels using aerial vehicles with on-board sensing and computation. Penstocks and tunnels have the shape of a generalized cylinder. They are generally dark and featureless. State estimation is challenging because range sensors do not yield adequate information and cameras do not work in the dark. We show that the six degrees of freedom (DOF) pose and velocity can be estimated by fusing information from an inertial measurement...

10.1109/lra.2017.2699790 article EN IEEE Robotics and Automation Letters 2017-04-28

In this paper we propose a convolutional neural network that is designed to upsample a series of sparse range measurements based on the contextual cues gleaned from a high resolution intensity image. Our approach draws inspiration from related work on super-resolution and in-painting. We propose a novel architecture that seeks to pull contextual cues separately from the intensity image and the depth features and then fuse them later in the network. We argue that this approach effectively exploits the relationship between the two modalities and produces accurate results while respecting salient...

10.1109/itsc.2019.8917294 article EN 2019-10-01
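
A minimal PyTorch sketch of the late-fusion idea argued for above: encode the intensity image and the sparse depth in separate branches and fuse the features deeper in the network. Layer sizes and structure are invented and are not the paper's architecture.

```python
import torch
import torch.nn as nn

class LateFusionDepthNet(nn.Module):
    """Toy two-branch network: RGB and sparse depth are encoded separately,
    concatenated, then decoded into a dense depth map. Channel counts are arbitrary."""
    def __init__(self):
        super().__init__()
        self.rgb_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.depth_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.fusion = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, rgb, sparse_depth):
        feats = torch.cat([self.rgb_branch(rgb),
                           self.depth_branch(sparse_depth)], dim=1)
        return self.fusion(feats)

net = LateFusionDepthNet()
rgb = torch.rand(1, 3, 64, 64)
sparse = torch.zeros(1, 1, 64, 64)
sparse[..., ::8, ::8] = torch.rand(1, 1, 8, 8)     # sparse LiDAR-like samples
print(net(rgb, sparse).shape)                      # torch.Size([1, 1, 64, 64])
```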

We present a novel fruit counting pipeline that combines deep segmentation, frame-to-frame tracking, and 3D localization to accurately count visible fruits across a sequence of images. Our pipeline works on image streams from a monocular camera, both in natural light as well as with controlled illumination at night. We first train a Fully Convolutional Network (FCN) to segment video frame images into fruit and non-fruit pixels. We then track fruits across frames using the Hungarian Algorithm, where the objective cost is determined by a Kalman Filter corrected...

10.1109/iros.2018.8594239 article EN 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2018-10-01
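
The tracking step associates fruit detections across frames with the Hungarian Algorithm. Below is a minimal sketch using scipy.optimize.linear_sum_assignment with plain Euclidean-distance costs; the paper derives its cost from a Kalman Filter prediction, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Fruit centroids detected in two consecutive frames (pixel coordinates, made up).
prev = np.array([[50.0, 80.0], [120.0, 40.0], [200.0, 150.0]])
curr = np.array([[54.0, 83.0], [198.0, 152.0], [121.0, 38.0], [300.0, 60.0]])

# Pairwise Euclidean distances; the paper's cost comes from a Kalman Filter
# prediction rather than the raw previous positions used here.
cost = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=-1)

rows, cols = linear_sum_assignment(cost)        # optimal one-to-one assignment
for r, c in zip(rows, cols):
    if cost[r, c] < 20.0:                       # gate obviously bad matches
        print(f"track {r} -> detection {c} (distance {cost[r, c]:.1f} px)")

unmatched = set(range(len(curr))) - set(cols)   # these would spawn new tracks
print("new detections:", sorted(unmatched))
```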

One of the most challenging tasks for a flying robot is to autonomously navigate between target locations quickly and reliably while avoiding obstacles in its path, with little to no a priori knowledge of the operating environment. This challenge is addressed in the present paper. We describe the system design and software architecture of our proposed solution and showcase how all the distinct components can be integrated to enable smooth robot operation. We provide critical insight on hardware and software component selection and development and present results...

10.1002/rob.21774 article EN Journal of Field Robotics 2017-12-15

Robotic exploration of underground environments is a particularly challenging problem due to communication, endurance, and traversability constraints, which necessitate high degrees of autonomy and agility. These challenges are further exacerbated by the need to minimize human intervention for practical applications. While legged robots have the ability to traverse extremely challenging terrain, they also engender new challenges for planning, estimation, and control. In this work, we describe a fully autonomous system for multi-robot mine...

10.1109/lra.2020.2972872 article EN IEEE Robotics and Automation Letters 2020-02-10

Semantic maps represent the environment using a set of semantically meaningful objects. This representation is storage-efficient, less ambiguous, and more informative, thus facilitating large-scale autonomy and the acquisition of actionable information in highly unstructured, GPS-denied environments. In this letter, we propose an integrated system that can perform large-scale autonomous flights and real-time semantic mapping in challenging under-canopy environments. We detect and model tree trunks and ground planes from LiDAR data, which are...

10.1109/lra.2022.3154047 article EN IEEE Robotics and Automation Letters 2022-02-24
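
The letter models tree trunks detected in LiDAR; as a small illustration of one way such a cross-section could be modeled (not the paper's method), the sketch below fits a circle to a synthetic trunk slice with an algebraic least-squares (Kasa) fit.

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit to 2D points: returns (cx, cy, r)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r

# Synthetic trunk cross-section: half a circle of radius 0.25 m seen from one side,
# with slight range noise, roughly what a LiDAR slice through a trunk looks like.
rng = np.random.default_rng(1)
theta = rng.uniform(-np.pi / 2, np.pi / 2, 80)
pts = np.column_stack([3.0 + 0.25 * np.cos(theta), 1.0 + 0.25 * np.sin(theta)])
pts += rng.normal(scale=0.005, size=pts.shape)

print(fit_circle(pts))   # close to (3.0, 1.0, 0.25)
```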

With the increasing speeds of modern microprocessors, it has become ever more common for computer-vision algorithms to find application in real-time control tasks. In this paper, we present an analysis of the problem of steering an autonomous vehicle along a highway based on the images obtained from a CCD camera mounted on the vehicle. We explore the effects of changing various important system parameters like the vehicle velocity, the look-ahead range of the vision sensor, and the processing delay associated with the perception and control systems. We also present results...

10.1177/027836499901800502 article EN The International Journal of Robotics Research 1999-05-01
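
A toy simulation of the interaction the paper analyzes: a lane-keeping law that steers from the lateral offset measured at a look-ahead distance while acting on delayed measurements. The speed, gain, look-ahead, and delay values are invented, and the vehicle is a simple kinematic bicycle model rather than the paper's system.

```python
import numpy as np
from collections import deque

# Toy lane-keeping loop: steer from the lateral offset measured at a look-ahead
# distance, acting on delayed measurements. All values below are invented.
v = 20.0            # vehicle speed, m/s
lookahead = 15.0    # look-ahead range of the vision sensor, m
wheelbase = 2.7     # m
dt = 0.05           # control period, s
delay_steps = 3     # perception/processing delay, in control periods
k = 0.05            # steering gain on the look-ahead offset

y, psi = 1.0, 0.0                                 # lateral offset (m), heading error (rad)
pending = deque([y] * delay_steps, maxlen=delay_steps)

for step in range(200):
    measured = y + lookahead * np.sin(psi)        # offset the camera would report
    pending.append(measured)
    steer = np.clip(-k * pending[0], -0.5, 0.5)   # act on the oldest (delayed) measurement
    psi += v / wheelbase * np.tan(steer) * dt     # kinematic bicycle model
    y += v * np.sin(psi) * dt
    if step % 50 == 0:
        print(f"t={step * dt:4.1f} s  lateral offset = {y:+.3f} m")
```

Raising the gain, the speed, or the delay in this sketch quickly produces oscillation, which is the kind of parameter trade-off the paper studies analytically.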

We describe a new system for estimating the road shape ahead of a vehicle for the purpose of driver assistance. The method utilises a single on-board colour camera, together with inertial and velocity information, to estimate both the position of the host car with respect to the lane it is following and also the width and curvature of the lane at distances of up to 80 metres. The system's image processing extracts a variety of different styles of lane markings from the imagery and is able to compensate for a range of lighting conditions. Road parameters are estimated using a particle filter. The system, which...

10.1109/iccv.2001.937519 article EN 2002-11-13
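
A schematic particle filter in the spirit of the road-shape estimator described above, reduced to a two-parameter state (lateral offset and curvature) updated with noisy marking observations at two look-ahead distances. The real system's state, motion, and measurement models are richer; every constant here is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Schematic particle filter over a two-parameter road state: lateral offset (m)
# and curvature (1/m). All constants and models below are invented stand-ins.
true_state = np.array([0.8, 0.002])
lookaheads = np.array([10.0, 40.0])         # distances at which markings are observed
sigma = 0.2                                 # assumed marking measurement noise, m
n = 500

def marking_offset(state, dist):
    """Lateral position of the lane marking at the given look-ahead distance(s)."""
    offset, curvature = state[..., 0], state[..., 1]
    return offset + 0.5 * curvature * dist ** 2

particles = np.column_stack([rng.normal(0.0, 0.5, n), rng.normal(0.0, 0.002, n)])
weights = np.full(n, 1.0 / n)

for _ in range(20):
    z = marking_offset(true_state, lookaheads) + rng.normal(0.0, sigma, 2)
    pred = marking_offset(particles[:, None, :], lookaheads)        # (n, 2)
    weights *= np.exp(-0.5 * np.sum(((z - pred) / sigma) ** 2, axis=1))
    weights /= weights.sum()
    idx = rng.choice(n, size=n, p=weights)                          # resample
    particles = particles[idx] + rng.normal(0.0, [0.02, 0.0002], (n, 2))  # jitter
    weights = np.full(n, 1.0 / n)

print("estimate:", particles.mean(axis=0), " truth:", true_state)
```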

In this paper, we present an approach to the problem of actively controlling the configuration of a team of mobile agents equipped with cameras so as to optimize the quality of the estimates derived from their measurements. The issue of optimizing the robots' configuration is particularly important in the context of teams equipped with vision sensors, since most estimation schemes of interest will involve some form of triangulation. We provide a theoretical framework for tackling the sensor planning problem and a practical computational strategy inspired by work on...

10.1177/0278364903022001002 article EN The International Journal of Robotics Research 2003-01-01
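
The reason configuration matters is that triangulation quality depends strongly on viewing geometry. The small numerical example below uses a linearized bearing-only model with made-up noise and ranges to show how the worst-axis uncertainty of a triangulated point varies with the angular separation between two camera-equipped robots; it is an illustration, not the paper's optimization.

```python
import numpy as np

def triangulation_covariance(cam_a, cam_b, target, sigma_bearing=np.deg2rad(1.0)):
    """Linearized covariance of a 2D point triangulated from two bearing-only
    sensors with independent bearing noise. Simple model for illustration only."""
    H = []
    for cam in (cam_a, cam_b):
        dx, dy = target - cam
        r2 = dx ** 2 + dy ** 2
        H.append([-dy / r2, dx / r2])        # Jacobian of the bearing wrt the target
    H = np.array(H)
    R_inv = np.eye(2) / sigma_bearing ** 2
    return np.linalg.inv(H.T @ R_inv @ H)    # covariance of the point estimate

target = np.array([0.0, 10.0])
for angle_deg in (10, 45, 90, 170):
    # Place two cameras 10 m from the target, separated by angle_deg as seen from it.
    a = np.deg2rad(90 - angle_deg / 2)
    b = np.deg2rad(90 + angle_deg / 2)
    cam_a = target - 10.0 * np.array([np.cos(a), np.sin(a)])
    cam_b = target - 10.0 * np.array([np.cos(b), np.sin(b)])
    cov = triangulation_covariance(cam_a, cam_b, target)
    print(f"{angle_deg:3d} deg separation: worst-axis std "
          f"{np.sqrt(np.linalg.eigvalsh(cov).max()):.3f} m")
```

With these numbers the uncertainty is smallest near a 90 degree separation and grows sharply as the viewing rays become nearly parallel or nearly opposite, which is exactly why the team's configuration is worth optimizing.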

In this paper, a vision-based stabilization and output tracking control method for a four-rotor helicopter has been proposed. A novel two-camera method is described for estimating the full 6 DOF pose of the helicopter. This two-camera system consists of a pan-tilt ground camera and an onboard camera. The pose estimation algorithm is compared in simulation to other methods (such as the four-point method and the stereo method) and is shown to be less sensitive to feature detection errors on the image plane. The proposed non-linear control techniques have been implemented on a remote...

10.1109/robot.2003.1242264 article EN 2004-03-22

In this work, we present an end-to-end heterogeneous multi-robot system framework where ground robots are able to localize, plan, and navigate in a semantic map created in real time by a high-altitude quadrotor. The ground robots choose and deconflict their targets independently, without any external intervention. Moreover, they perform cross-view localization by matching their local maps with the overhead map using semantics. The communication backbone is opportunistic and distributed, allowing the entire system to operate with no infrastructure aside...

10.1109/lra.2022.3191165 article EN IEEE Robotics and Automation Letters 2022-07-15

We present M3ED, the first multi-sensor event camera dataset focused on high-speed dynamic motions in robotics applications. M3ED provides high-quality synchronized and labeled data from multiple platforms, including ground vehicles, legged robots, and aerial robots, operating in challenging conditions such as driving along off-road trails, navigating through dense forests, and performing aggressive flight maneuvers. Our dataset also covers demanding operational scenarios for event cameras, such as scenes with high egomotion...

10.1109/cvprw59228.2023.00419 article EN 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2023-06-01

This paper considers the problem of vision-based control of a nonholonomic mobile robot. We describe the design and implementation of real-time estimation and control algorithms on a car-like robot platform using a single omni-directional camera as the sensor, without explicit use of odometry. We provide experimental results for each of these algorithms. The algorithms are packaged as modes that can be combined hierarchically to perform higher level tasks involving multiple robots.

10.1109/robot.2001.932858 article EN 2002-11-13