Tobias Fischer

ORCID: 0000-0003-2183-017X
Research Areas
  • Robotics and Sensor-Based Localization
  • Advanced Image and Video Retrieval Techniques
  • Advanced Memory and Neural Computing
  • Video Surveillance and Tracking Methods
  • Indoor and Outdoor Localization Technologies
  • Advanced Neural Network Applications
  • Modular Robots and Swarm Intelligence
  • Gaze Tracking and Assistive Technology
  • Human Pose and Action Recognition
  • Multimodal Machine Learning Applications
  • Robotics and Automated Systems
  • Advanced Vision and Imaging
  • Remote Sensing and LiDAR Applications
  • Underwater Vehicles and Communication Systems
  • Coral and Marine Ecosystems Studies
  • Ferroelectric and Negative Capacitance Devices
  • Visual Attention and Saliency Detection
  • Spatial Cognition and Navigation
  • Robot Manipulation and Learning
  • Social Robot Interaction and HRI
  • Robotic Path Planning Algorithms
  • 3D Surveying and Cultural Heritage
  • Industrial Vision Systems and Defect Detection
  • Organic Light-Emitting Diodes Research
  • Advanced biosensing and bioanalysis techniques

Queensland University of Technology
2020-2025

Australian Centre for Robotic Vision
2023-2024

ETH Zurich
2024

Universität Koblenz
2023

Koblenz University of Applied Sciences
2023

University of New Mexico
2019-2022

Institute of Electrical and Electronics Engineers
2022

Gorgias Press (United States)
2022

Stevens Institute of Technology
2022

University of Bonn
2022

We propose a new tracking framework with an attentional mechanism that chooses a subset of the associated correlation filters for increased robustness and computational efficiency. The subset is adaptively selected by a deep attentional network according to the dynamic properties of the target. Our contributions are manifold, summarised as follows: (i) Introducing the Attentional Correlation Filter Network, which allows adaptive tracking of dynamic targets. (ii) Utilising an attentional network that shifts attention to the best candidate modules, as well as predicting the estimated accuracy...

10.1109/cvpr.2017.513 article EN 2017-07-01
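
To make the module-selection idea above concrete, here is a minimal Python sketch; the toy correlation filter, the linear scorer standing in for the deep attentional network, and all names (correlate, track_frame) are illustrative assumptions rather than the paper's implementation.

import numpy as np

# Illustrative only: an "attention" scorer picks a subset of correlation
# filter modules to run on the current frame, trading accuracy for speed.

def correlate(filt, patch):
    # toy correlation filter response via cross-correlation in the Fourier domain
    return np.real(np.fft.ifft2(np.fft.fft2(patch) * np.conj(np.fft.fft2(filt))))

def track_frame(patch, filters, module_state, attention_w, k=3):
    # module_state: (num_modules, d) summary of each module's recent behaviour
    # attention_w:  (d,) linear scorer standing in for the deep attentional network
    scores = module_state @ attention_w
    active = np.argsort(scores)[-k:]                   # attend to the k best modules
    responses = {i: correlate(filters[i], patch) for i in active}
    best = max(responses, key=lambda i: responses[i].max())
    return best, responses[best]                       # chosen module and its response map

# toy usage with random data
rng = np.random.default_rng(0)
patch = rng.standard_normal((32, 32))
filters = [rng.standard_normal((32, 32)) for _ in range(10)]
state = rng.standard_normal((10, 8))
w = rng.standard_normal(8)
best_module, response = track_frame(patch, filters, state, w)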

Visual Place Recognition is a challenging task for robotics and autonomous systems, which must deal with the twin problems of appearance and viewpoint change in an always changing world. This paper introduces Patch-NetVLAD, which provides a novel formulation for combining the advantages of both local and global descriptor methods by deriving patch-level features from NetVLAD residuals. Unlike the fixed spatial neighborhood regime of existing local keypoint features, our method enables aggregation and matching of deep-learned local features defined over...

10.1109/cvpr46437.2021.01392 article EN 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021-06-01
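
The patch-level aggregation and matching described above can be roughly illustrated as follows; random arrays stand in for NetVLAD residual features, average pooling stands in for the actual patch descriptor construction, and mutual nearest-neighbour scoring is a simplification, not the released Patch-NetVLAD pipeline.

import numpy as np

def patch_descriptors(dense_feats, patch=5, stride=2):
    # dense_feats: (H, W, D) per-cell features; average-pool square patches
    # into unit-norm patch descriptors.
    H, W, D = dense_feats.shape
    descs = []
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            d = dense_feats[y:y + patch, x:x + patch].reshape(-1, D).mean(axis=0)
            descs.append(d / (np.linalg.norm(d) + 1e-12))
    return np.array(descs)

def mutual_nn_score(query, reference):
    # sum of similarities over mutually nearest patch pairs; higher = more
    # likely the two images show the same place.
    sims = query @ reference.T
    q2r = sims.argmax(axis=1)
    r2q = sims.argmax(axis=0)
    mutual = [q for q in range(len(query)) if r2q[q2r[q]] == q]
    return float(sum(sims[q, q2r[q]] for q in mutual))

rng = np.random.default_rng(1)
score = mutual_nn_score(patch_descriptors(rng.standard_normal((20, 20, 64))),
                        patch_descriptors(rng.standard_normal((20, 20, 64))))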

We propose a new context-aware correlation filter based tracking framework to achieve both high computational speed and state-of-the-art performance among real-time trackers. The major contribution to the high computational speed lies in the proposed deep feature compression, which is achieved by a scheme utilizing multiple expert auto-encoders; a context in our framework refers to the coarse category of the target according to its appearance patterns. In the pre-training phase, one auto-encoder is trained per category. The best auto-encoder is selected for a given target, and only this...

10.1109/cvpr.2018.00057 preprint EN 2018-06-01
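
A toy sketch of the expert-selection step, under the assumption that one auto-encoder per coarse appearance category has already been pre-trained; linear encoders stand in for the deep auto-encoders, and the expert with the lowest reconstruction error on the target patch is chosen to compress features during tracking.

import numpy as np

class LinearAutoEncoder:
    # stand-in for a pre-trained expert auto-encoder (one per appearance category)
    def __init__(self, enc, dec):
        self.enc, self.dec = enc, dec
    def encode(self, x):
        return x @ self.enc
    def reconstruct(self, x):
        return self.encode(x) @ self.dec

def select_expert(experts, target_features):
    # pick the expert whose reconstruction error on the target is lowest
    errors = [np.linalg.norm(target_features - e.reconstruct(target_features))
              for e in experts]
    return experts[int(np.argmin(errors))]

rng = np.random.default_rng(2)
experts = [LinearAutoEncoder(rng.standard_normal((256, 32)),
                             rng.standard_normal((32, 256))) for _ in range(4)]
target = rng.standard_normal(256)
expert = select_expert(experts, target)
compressed = expert.encode(target)   # compressed features used by the tracker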

Visual Place Recognition (VPR) is often characterized as being able to recognize the same place despite significant changes in appearance and viewpoint. VPR is a key component of Spatial Artificial Intelligence, enabling robotic platforms and intelligent augmentation platforms such as augmented reality devices to perceive and understand the physical world. In this paper, we observe that there are three "drivers" that impose requirements on spatially intelligent agents and thus VPR systems: 1) the particular agent, including its sensors and computational...

10.24963/ijcai.2021/603 preprint EN 2021-08-01

The ability to recognize, localize and track dynamic objects in a scene is fundamental to many real-world applications, such as self-driving and robotic systems. Yet, traditional multiple object tracking (MOT) benchmarks rely only on a few object categories that hardly represent the multitude of possible objects encountered in the real world. This leaves contemporary MOT methods limited to a small set of pre-defined categories. In this paper, we address this limitation by tackling a novel task, open-vocabulary MOT, that aims to evaluate...

10.1109/cvpr52729.2023.00539 article EN 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2023-06-01

This paper introduces a cognitive architecture for a humanoid robot to engage in proactive, mixed-initiative exploration and manipulation of its environment, where the initiative can originate from both the human and the robot. The framework, based on a biologically-grounded theory of the brain and mind, integrates a reactive interaction engine, a number of state-of-the-art perceptual and motor learning algorithms, as well as planning abilities and an autobiographical memory. The architecture as a whole drives the robot's behavior to solve the symbol grounding problem,...

10.1109/tcds.2017.2754143 article EN IEEE Transactions on Cognitive and Developmental Systems 2017-09-18

Localization is an essential capability for mobile robots, enabling them to build a comprehensive representation of their environment and interact with it effectively toward a goal. A rapidly growing field of research in this area is visual place recognition (VPR), the ability to recognize previously seen places in the world based solely on images.

10.1109/mra.2023.3310859 article EN cc-by IEEE Robotics & Automation Magazine 2023-09-22
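
In its simplest form, the VPR problem described above reduces to nearest-neighbour search over image descriptors; the sketch below assumes some global descriptor function is available and uses a random projection of raw pixels purely as a placeholder for it.

import numpy as np

rng = np.random.default_rng(3)
proj = rng.standard_normal((64, 32 * 32))           # placeholder global descriptor

def describe(image):
    d = proj @ image.ravel()
    return d / np.linalg.norm(d)

# build a map: one descriptor per previously seen place
reference_images = [rng.random((32, 32)) for _ in range(100)]
reference_descs = np.stack([describe(im) for im in reference_images])

def recognise_place(query_image):
    q = describe(query_image)
    sims = reference_descs @ q                       # cosine similarity (unit-norm vectors)
    return int(np.argmax(sims)), float(sims.max())   # best matching place and its score

place_id, confidence = recognise_place(reference_images[42])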

Synthesis of various derivatives of 2-(2-thienyl)pyridine via substituted 3-thienyl-1,2,4-triazines is reported. The final step of the synthesis is a transformation of the triazine ring to a pyridine ring in an aza-Diels-Alder-type reaction. The resulting 5-aryl-2-(2-thienyl)pyridines (HL1-HL4) and 5-aryl-2-(2-thienyl)cyclopenteno[c]pyridines (HL5-HL8) (with aryl = phenyl, 4-methoxyphenyl, 2-naphtyl, 2-thienyl) were used as cyclometallating ligands to prepare a series of eight luminescent platinum complexes of the type [Pt(L)(acac)]...

10.1021/ic802401j article EN Inorganic Chemistry 2009-04-01

We address the problem of out-of-distribution (OOD) detection for the task of object detection. We show that residual convolutional layers with batch normalisation produce Sensitivity-Aware FEatures (SAFE) that are consistently powerful for distinguishing in-distribution from out-of-distribution detections. We extract SAFE vectors for every detected object and train a multilayer perceptron on the surrogate task of distinguishing adversarially perturbed from clean examples. This circumvents the need for realistic OOD training data, computationally expensive generative models,...

10.1109/iccv51070.2023.02154 article EN 2023 IEEE/CVF International Conference on Computer Vision (ICCV) 2023-10-01
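
The surrogate training idea can be sketched as follows, assuming SAFE-style feature vectors have already been extracted per detection; noisy copies crudely stand in for adversarially perturbed examples, and the MLP's "perturbed" probability is used as an OOD score. This is an illustration under assumptions, not the paper's code.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
clean = rng.standard_normal((2000, 128))                      # SAFE vectors from clean detections
perturbed = clean + 0.5 * rng.standard_normal(clean.shape)    # crude stand-in for adversarial perturbation

X = np.vstack([clean, perturbed])
y = np.concatenate([np.zeros(len(clean)), np.ones(len(perturbed))])

# surrogate task: distinguish perturbed from clean feature vectors
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300).fit(X, y)

def ood_score(safe_vector):
    # probability of the "perturbed" class serves as the OOD score at test time
    return float(mlp.predict_proba(safe_vector.reshape(1, -1))[0, 1])

score = ood_score(rng.standard_normal(128))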

In recent years gaze estimation methods have made substantial progress, driven by numerous application areas including human-robot interaction, visual attention estimation and foveated rendering for virtual reality headsets. However, many methods typically assume that the subject's eyes are open; for closed eyes, these methods provide irregular estimates. Here, we address this assumption by first introducing a new open-sourced dataset with annotations of eye-openness for more than 200,000 eye images, including more than 10,000 images where the eyes are closed...

10.1109/iccvw.2019.00147 article EN 2019-10-01

Fully autonomous mobile robots have a multitude of potential applications, but guaranteeing robust navigation performance remains an open research problem. For many tasks such as repeated infrastructure inspection, item delivery, or inventory transport, a route repeating capability can be sufficient and offers practical advantages over a full navigation stack. Previous teach and repeat research has achieved high performance in difficult conditions predominantly by using sophisticated, expensive sensors, and has often had high computational...

10.1109/iros51168.2021.9636334 article EN 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2021-09-27

Spiking neural networks (SNNs) offer both compelling potential advantages, including energy efficiency and low latencies, and challenges such as the non-differentiable nature of event spikes. Much of the initial research in this area has converted deep neural networks to equivalent SNNs, but this conversion approach potentially negates some of the advantages of SNN-based approaches developed from scratch. One promising area for high-performance SNNs is template matching and image recognition. This paper introduces the first SNN for the Visual Place Recognition (VPR) task...

10.1109/lra.2022.3149030 article EN IEEE Robotics and Automation Letters 2022-02-07

We introduce powerful ideas from Hyperdimensional Computing into the challenging field of Out-of-Distribution (OOD) detection. In contrast to most existing works that perform OOD detection based on only a single layer of a neural network, we use similarity-preserving semi-orthogonal projection matrices to project the feature maps of multiple layers into a common vector space. By repeatedly applying the bundling operation ⊕, we create expressive class-specific descriptor vectors for all in-distribution classes. At test...

10.1109/wacv56688.2023.00267 article EN 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2023-01-01
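
A condensed illustration of the hyperdimensional ingredients mentioned above: random semi-orthogonal projections (obtained via QR decomposition) map per-layer features into a common space, bundling ⊕ is realised here as vector addition, and test samples are scored by cosine similarity to the closest class descriptor. This is a sketch under assumptions, not the paper's implementation.

import numpy as np

rng = np.random.default_rng(5)
layer_dims, common_dim = [256, 512, 1024], 2048

def semi_orthogonal(d_in, d_out):
    # columns of q are orthonormal, so projection approximately preserves similarity
    q, _ = np.linalg.qr(rng.standard_normal((d_out, d_in)))
    return q                                            # shape (d_out, d_in)

projections = [semi_orthogonal(d, common_dim) for d in layer_dims]

def encode(sample_layers):
    # sample_layers: per-layer feature vectors of one sample;
    # bundling (⊕) realised as element-wise addition of the projected vectors
    v = sum(P @ f for P, f in zip(projections, sample_layers))
    return v / np.linalg.norm(v)

def class_descriptor(training_samples):
    # class descriptor = bundle of all training samples of that class
    v = sum(encode(s) for s in training_samples)
    return v / np.linalg.norm(v)

train = {c: [[rng.standard_normal(d) for d in layer_dims] for _ in range(50)]
         for c in range(3)}
descriptors = np.stack([class_descriptor(train[c]) for c in sorted(train)])

def in_distribution_score(sample_layers):
    # cosine similarity to the closest class descriptor; low values flag OOD
    return float((descriptors @ encode(sample_layers)).max())

score = in_distribution_score([rng.standard_normal(d) for d in layer_dims])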

Dense 3D reconstruction and ego-motion estimation are key challenges in autonomous driving and robotics. Compared to the complex, multi-modal systems deployed today, multi-camera systems provide a simpler, low-cost alternative. However, camera-based reconstruction of complex dynamic scenes has proven extremely difficult, as existing solutions often produce incomplete or incoherent results. We propose R3D3, a multi-camera system for dense 3D reconstruction and ego-motion estimation. Our approach iterates between geometric estimation that exploits spatial-temporal information...

10.1109/iccv51070.2023.00298 article EN 2023 IEEE/CVF International Conference on Computer Vision (ICCV) 2023-10-01

Accurate detection and tracking of surrounding objects is essential to enable self-driving vehicles. While Light Detection and Ranging (LiDAR) sensors have set the benchmark for high performance, the appeal of camera-only solutions lies in their cost-effectiveness. Notably, despite the prevalent use of Radio Detection and Ranging (RADAR) sensors in automotive systems, their potential for 3D detection and tracking has been largely disregarded due to data sparsity and measurement noise. As a recent development, the combination of RADARs and cameras is emerging as a promising solution. This paper...

10.48550/arxiv.2403.15313 preprint EN arXiv (Cornell University) 2024-03-22

Perspective taking enables humans to imagine the world from another viewpoint. This allows reasoning about the state of other agents, which in turn is used to more accurately predict their behavior. In this paper, we equip an iCub humanoid robot with the ability to perform visuospatial perspective taking (PT) using a single depth camera mounted above the robot. Our approach has the distinct benefit that it can be used in unconstrained environments, as opposed to previous works which employ marker-based motion capture systems. Prior and...

10.1109/icra.2016.7487504 article EN 2016-05-01

The preparation of aminated monolayers with a controlled density of functional groups on silica surfaces through a simple vapor deposition process employing different ratios of two suitable monoalkoxysilanes, (3-aminopropyl)diisopropylethoxysilane (APDIPES) and (3-cyanopropyl)dimethylmethoxysilane (CPDMMS), and advances in the reliable quantification of such tailored surfaces are presented here. A one-step codeposition was carried out with binary silane mixtures, rendering possible control over a wide range of amine densities in a single...

10.1021/ac503850f article EN Analytical Chemistry 2015-01-26

Robot systems that interact with humans over extended periods of time will benefit from storing and recalling large amounts of accumulated sensorimotor and interaction data. We provide a principled framework for the cumulative organisation of streaming autobiographical data so that data can be continuously processed and augmented as the processing and reasoning abilities of the agent develop and further interactions take place. As an example, we show how a kinematic structure learning algorithm reasons a-posteriori about the skeleton of a human...

10.1109/tamd.2015.2507439 article EN IEEE Transactions on Cognitive and Developmental Systems 2015-12-10