- CCD and CMOS Imaging Sensors
- Advanced Memory and Neural Computing
- Advanced Vision and Imaging
- Video Surveillance and Tracking Methods
- Robotics and Sensor-Based Localization
- Advanced Image and Video Retrieval Techniques
- Advanced Data Compression Techniques
- Remote Sensing and LiDAR Applications
- Neuroscience and Neural Engineering
- Infrared Target Detection Methodologies
- Image and Signal Denoising Methods
- Advanced Neural Network Applications
- Context-Aware Activity Recognition Systems
- Human Pose and Action Recognition
- Domain Adaptation and Few-Shot Learning
- Advanced Image Fusion Techniques
- Neural dynamics and brain function
- 3D Surveying and Cultural Heritage
- Anomaly Detection Techniques and Applications
- Multimodal Machine Learning Applications
- Image Processing Techniques and Applications
- Industrial Vision Systems and Defect Detection
- Advanced Optical Sensing Technologies
- Data Stream Mining Techniques
- Modular Robots and Swarm Intelligence
NORCE Norwegian Research Centre
2022-2025
Institut de Recherche en Génie Civil et Mécanique
2018
Teknova
2018
Austrian Institute of Technology
2007-2015
Dynamic Systems (United States)
2015
Austrian Research Centre for Forests
2007
TU Wien
2000-2006
Seibersdorf Laboratories (Austria)
2006
University of Applied Sciences Technikum Wien
2004
Abstract In an era defined by the relentless influx of data from diverse sources, the ability to harness and extract valuable insights from streaming data has become paramount. The rapidly evolving realm of online learning comprises techniques tailored specifically to the unique challenges posed by streaming data. As the digital world continues to generate vast torrents of real-time data, understanding and effectively utilizing these approaches is pivotal to staying ahead in various domains. One of the primary goals is to continuously update the model with the most recent...
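A minimal sketch of the online-learning idea described above, assuming a simple linear classifier updated one streaming sample at a time with stochastic gradient descent; the feature dimension, learning rate, and synthetic stream are illustrative choices, not taken from the paper:

```python
import numpy as np

class OnlineLinearClassifier:
    """Logistic model updated incrementally, one streaming sample at a time."""

    def __init__(self, n_features, lr=0.01):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        return 1.0 / (1.0 + np.exp(-(self.w @ x + self.b)))

    def partial_fit(self, x, y):
        """Single SGD step on one (x, y) pair drawn from the stream."""
        p = self.predict_proba(x)
        grad = p - y                      # gradient of the log-loss w.r.t. the logit
        self.w -= self.lr * grad * x
        self.b -= self.lr * grad

# Usage: consume the stream one sample at a time, always keeping the model current.
model = OnlineLinearClassifier(n_features=4)
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.normal(size=4)
    y = float(x[0] + 0.5 * x[1] > 0)      # synthetic label for illustration
    model.partial_fit(x, y)
```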
This paper presents an embedded vision system for object tracking applications based on a 128×128 pixel CMOS temporal contrast sensor. The imager asynchronously responds to relative illumination intensity changes in the visual scene, exhibiting a usable dynamic range of 120 dB and a latency under 100 μs. The information is encoded in the form of address-event representation (AER) data. An algorithm processing the AER data stream with 1 millisecond timestamp resolution is presented. As a real-world application example,...
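A minimal sketch of how address-event representation (AER) data might be handled, assuming a simple (x, y, polarity, timestamp) tuple per event and grouping events into 1 ms bins; the field names and units are illustrative assumptions, not the sensor's actual protocol:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class AddressEvent:
    x: int          # pixel column (0..127 for a 128x128 imager)
    y: int          # pixel row
    polarity: int   # +1 brighter, -1 darker
    t_us: int       # timestamp in microseconds

def bin_events(events, bin_us=1000):
    """Group an AER stream into 1 ms time slices for frame-free processing."""
    bins = defaultdict(list)
    for ev in events:
        bins[ev.t_us // bin_us].append(ev)
    return bins

# Usage with a few synthetic events
stream = [AddressEvent(10, 20, +1, 120), AddressEvent(11, 20, -1, 950),
          AddressEvent(64, 64, +1, 1700)]
slices = bin_events(stream)   # {0: [two events], 1: [one event]}
```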
This work presents an embedded optical sensory system for traffic monitoring and vehicle speed estimation based on a neuromorphic "silicon-retina" image sensor and the algorithm developed for processing the asynchronous output data delivered by this sensor. The main purpose of these efforts is to provide a flexible, compact, low-power and low-cost system capable of determining the velocity of vehicles passing simultaneously in multiple lanes. The proposed system exploits the unique characteristics of the sensor with focal-plane analog preprocessing....
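A toy illustration of the speed-estimation principle, assuming a vehicle is detected when it crosses two virtual lines a known distance apart; the line spacing and timestamps are placeholders, not measurements from the paper:

```python
def estimate_speed_kmh(t_line1_s, t_line2_s, line_gap_m):
    """Speed from the travel time between two detection lines a known distance apart."""
    dt = t_line2_s - t_line1_s
    if dt <= 0:
        raise ValueError("second crossing must occur after the first")
    return (line_gap_m / dt) * 3.6   # m/s -> km/h

# Example: 0.36 s to cover 5 m between the lines -> 50 km/h
print(estimate_speed_kmh(10.00, 10.36, 5.0))
```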
Although motion analysis has been extensively investigated in the literature and a wide variety of tracking algorithms have been proposed, the problem of tracking objects using a Dynamic Vision Sensor (DVS) requires a slightly different approach. These sensors are biologically inspired vision systems that asynchronously generate events upon relative light intensity changes. Unlike conventional systems, the output of such a sensor is not an image (frame) but an address-event stream. Therefore, event-driven methods are most appropriate for DVS data processing. In this...
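A minimal sketch of an event-driven tracker in the spirit described above: each incoming event is assigned to the nearest existing cluster or spawns a new one, and the cluster centre is updated incrementally. The radius threshold and the update rule are illustrative assumptions, not the paper's algorithm:

```python
import math

class EventClusterTracker:
    """Assign each DVS event to the nearest cluster centre; spawn a new cluster if too far."""

    def __init__(self, radius=10.0, alpha=0.1):
        self.radius = radius      # max distance (pixels) to join an existing cluster
        self.alpha = alpha        # learning rate for the centre update
        self.clusters = []        # list of [cx, cy, event_count]

    def update(self, x, y):
        best, best_d = None, float("inf")
        for c in self.clusters:
            d = math.hypot(x - c[0], y - c[1])
            if d < best_d:
                best, best_d = c, d
        if best is not None and best_d <= self.radius:
            best[0] += self.alpha * (x - best[0])   # pull centre toward the event
            best[1] += self.alpha * (y - best[1])
            best[2] += 1
        else:
            self.clusters.append([float(x), float(y), 1])

tracker = EventClusterTracker()
for x, y in [(50, 50), (52, 49), (120, 30), (51, 51)]:
    tracker.update(x, y)
print(len(tracker.clusters))   # 2 clusters: one near (51, 50), one at (120, 30)
```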
Biologically inspired dynamic vision sensors, introduced in 2002, asynchronously detect significant relative light intensity changes in a scene and output them in the form of an Address-Event Representation. These sensors capture dynamical discontinuities on-chip, yielding a reduced data volume compared to that of full images. Therefore, they support the detection, segmentation and tracking of objects moving in space by exploiting the generated events, which arise as a reaction to changes in the scene dynamics. Object tracking has previously...
We introduce an innovative methodology for the identification of vehicular collisions within Internet of Vehicles (IoV) applications. This approach combines a knowledge base system with deep learning model selection in an ensemble setting. It is designed to provide a general near-crash detection capability without relying on domain-specific knowledge, enabling the development of generic models. Our proposed method employs a novel approach wherein multiple models are individually trained for each image. Subsequently,...
This paper presents an adaptive cooperative approach towards 3D reconstruction tailored for a bio-inspired depth camera: the stereo dynamic vision sensor (DVS). A DVS consists of self-spiking pixels that asynchronously generate events upon relative light intensity changes. These sensors have the advantage of simultaneously allowing a high temporal resolution (better than 10 μs) and a wide dynamic range (>120 dB) at a sparse data representation, which is not possible with frame-based cameras. In order to exploit...
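For context, a short sketch of the standard stereo triangulation step that any stereo DVS reconstruction ultimately relies on: once two events are matched across the sensors, depth follows from the disparity, the baseline, and the focal length. The numbers below are placeholders, not the paper's calibration:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard pinhole stereo relation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("matched events must have positive disparity")
    return focal_px * baseline_m / disparity_px

# Example: 8 px disparity, 500 px focal length, 10 cm baseline -> 6.25 m
print(depth_from_disparity(8, 500, 0.10))
```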
This paper presents a stereo matching approach for a novel multi-perspective panoramic vision system, making use of asynchronous and non-simultaneous imaging towards real-time 3D 360° vision. The method is designed for events representing the scene's visual contrast as a sparse code, allowing the reconstruction of high-resolution views. We propose a cost measure for matching, which assesses similarity based on event distributions. Thus, robustness to variations in event occurrences is increased. An evaluation of the proposed...
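A minimal sketch of an event-distribution-based matching cost in the spirit described above, comparing normalized event-count histograms of two candidate windows; the histogram layout and the L1 distance are illustrative choices, not the paper's exact cost measure:

```python
import numpy as np

def event_histogram(events_xy, window, bins=4):
    """Normalized 2D histogram of event positions inside a (x0, y0, x1, y1) window."""
    x0, y0, x1, y1 = window
    pts = [(x, y) for x, y in events_xy if x0 <= x < x1 and y0 <= y < y1]
    hist, _, _ = np.histogram2d(
        [p[0] for p in pts], [p[1] for p in pts],
        bins=bins, range=[[x0, x1], [y0, y1]])
    total = hist.sum()
    return hist / total if total > 0 else hist

def matching_cost(events_left, win_left, events_right, win_right):
    """L1 distance between the two event distributions; lower means more similar."""
    h_l = event_histogram(events_left, win_left)
    h_r = event_histogram(events_right, win_right)
    return float(np.abs(h_l - h_r).sum())
```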
Abstract This research presents a novel ensemble fuzzy deep learning approach for brain Magnetic Resonance Imaging (MRI) analysis, aiming to improve the segmentation of tissues and abnormalities. The method integrates multiple components, including diverse architectures enhanced with volumetric pooling, a model fusion strategy, and an attention mechanism that focuses on the most relevant regions of the input data. The process begins by collecting medical data using sensors to acquire MRI images. These are then used to train...
This paper presents a recently developed dynamic stereo vision sensor system and its application to fall detection towards the safety of the elderly at home. The system consists of (1) two optical detector chips with 304×240 event-driven pixels, which are only sensitive to relative light intensity changes, (2) an FPGA interfacing the detectors and performing early data processing, stereo matching and depth map reconstruction, (3) a digital signal processor interpreting the data in real-time for fall recognition, and (4) a wireless communication module...
This paper introduces a novel concept for Home-based Monitoring (HM) that enables robust analysis and understanding of activities towards improved caring and safety. Spatio-Temporal Visual Learning for HM (STVL-HM) is a method that learns from sensor data jointly represented in space and time in order to robustify the process. We propose a hybrid model based on a Convolutional Neural Network (CNN) and Transformers. The CNN first extracts visual spatial features from the various data. The learned features are then fed into a transformer, which captures...
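A minimal PyTorch-style sketch of the hybrid idea described above: a small CNN extracts spatial features, the spatial grid is flattened into tokens, and a transformer encoder models the relations among them. All layer sizes and the classification head are illustrative assumptions, not the STVL-HM architecture:

```python
import torch
import torch.nn as nn

class CnnTransformerHybrid(nn.Module):
    """Toy CNN backbone followed by a transformer encoder over the spatial tokens."""

    def __init__(self, in_ch=3, d_model=64, n_heads=4, n_layers=2, n_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, d_model, 3, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                      # x: (B, C, H, W)
        f = self.backbone(x)                   # (B, d_model, H/4, W/4)
        tokens = f.flatten(2).transpose(1, 2)  # (B, H*W/16, d_model)
        enc = self.encoder(tokens)             # token-wise context via self-attention
        return self.head(enc.mean(dim=1))      # pool tokens and classify

model = CnnTransformerHybrid()
logits = model(torch.randn(2, 3, 64, 64))      # -> shape (2, 10)
```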
Precision in building delineation plays a pivotal role in population data analysis, city management, policy making, and disaster management. Leveraging computer vision technologies, particularly deep learning models for semantic segmentation, has proven instrumental in achieving accurate automatic segmentation in remote sensing applications. However, current state-of-the-art (SOTA) techniques are not optimized for precisely extracting building footprints and, specifically, the boundaries of the building. This...
We present a novel approach for addressing computer vision tasks in intelligent transportation systems, with a strong focus on data security during training through federated learning. Our method leverages visual transformers, training multiple models for each image. By calculating and storing image features as well as loss values, we propose a Shapley value model selection scheme based on performance consistency to select the most appropriate model for testing. To enhance security, we introduce a federated learning strategy, where users are grouped...
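A minimal sketch of Monte Carlo Shapley value estimation for scoring the contribution of individual models in an ensemble, as a stand-in for the selection idea described above; the toy value function and sample count are illustrative assumptions, not the paper's scheme:

```python
import random

def shapley_values(models, value_fn, n_samples=200, seed=0):
    """Monte Carlo estimate of each model's Shapley value under value_fn(subset)."""
    rng = random.Random(seed)
    contrib = {m: 0.0 for m in models}
    for _ in range(n_samples):
        order = models[:]
        rng.shuffle(order)
        subset, prev = [], value_fn([])
        for m in order:
            subset.append(m)
            cur = value_fn(subset)
            contrib[m] += cur - prev       # marginal contribution of m in this order
            prev = cur
    return {m: v / n_samples for m, v in contrib.items()}

# Toy value function: a subset is worth the best single-model accuracy it contains.
accuracy = {"model_a": 0.81, "model_b": 0.74, "model_c": 0.88}
value = lambda subset: max((accuracy[m] for m in subset), default=0.0)
print(shapley_values(list(accuracy), value))   # model_c gets the largest share
```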
This paper presents a neuromorphic dual-line vision sensor and signal-processing concepts for object recognition and classification. The system performs ultrahigh-speed machine vision with a compact, low-cost embedded-processing architecture. The main innovation of this work includes efficient edge extraction of moving objects on the pixel level and a novel concept for real-time embedded processing based on address-event data. The proposed system exploits the very high temporal resolution and sparse visual-information representation of the event-based...
This paper presents a novel 360° High-Dynamic Range (HDR) camera for real-time 3D panoramic computer vision. The camera consists of (1) a pair of bio-inspired dynamic vision line sensors (1024 pixels each) asynchronously generating events at high temporal resolution with on-chip time stamping (1 μs resolution), a high dynamic range, and a sparse visual coding of information, (2) a high-speed mechanical device rotating at up to 10 revolutions per second (rps) on which the sensor is mounted, and (3) a processing unit configuration...
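A small sketch of how an event timestamp from the rotating line sensor maps to an azimuth angle of the panorama, assuming a constant rotation rate; the rotation speed and reference time below are placeholder values:

```python
def azimuth_deg(t_us, t_ref_us, rps):
    """Azimuth of an event, given its timestamp and a constant rotation rate."""
    revolutions = (t_us - t_ref_us) * 1e-6 * rps
    return (revolutions * 360.0) % 360.0

# Example: at 10 rps, an event 25 ms after the reference sits at 90 degrees
print(azimuth_deg(25_000, 0, 10))
```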
This paper proposes a method for clustering asynchronous events generated upon scene activities by a dynamic 3D vision system. The inherent detection of moving objects offered by the stereo system, comprising a pair of dynamic vision sensors, allows event-based processing in real-time and a sparse representation of objects. The method exploits the sparse spatio-temporal output of the sensor for the separation between objects and makes use of density and distance metrics on the scene dynamics (changes in the scene). It has been evaluated on persons moving across the sensor's field of view. Tests in real scenarios with more than 100...
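A minimal sketch of density-based clustering of spatio-temporal events in the spirit described above, using DBSCAN on (x, y, scaled t) points; the time scaling, eps, and min_samples values are illustrative assumptions, not the paper's parameters:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_events(events_xyt, time_scale=0.01, eps=5.0, min_samples=5):
    """Density-based clustering of (x, y, t_us) events into moving-object candidates."""
    pts = np.asarray(events_xyt, dtype=float)
    pts[:, 2] *= time_scale                      # bring time into pixel-comparable units
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    return labels                                # -1 marks noise events

# Usage with synthetic events: two bursts at different image locations
rng = np.random.default_rng(1)
burst_a = np.column_stack([rng.normal(40, 1, 50), rng.normal(40, 1, 50), rng.uniform(0, 100, 50)])
burst_b = np.column_stack([rng.normal(90, 1, 50), rng.normal(20, 1, 50), rng.uniform(0, 100, 50)])
labels = cluster_events(np.vstack([burst_a, burst_b]))
print(set(labels))    # typically two clusters: {0, 1}
```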