Ahmed Nabil Belbachir

ORCID: 0000-0001-9233-3723
Research Areas
  • CCD and CMOS Imaging Sensors
  • Advanced Memory and Neural Computing
  • Advanced Vision and Imaging
  • Video Surveillance and Tracking Methods
  • Robotics and Sensor-Based Localization
  • Advanced Image and Video Retrieval Techniques
  • Advanced Data Compression Techniques
  • Remote Sensing and LiDAR Applications
  • Neuroscience and Neural Engineering
  • Infrared Target Detection Methodologies
  • Image and Signal Denoising Methods
  • Advanced Neural Network Applications
  • Context-Aware Activity Recognition Systems
  • Human Pose and Action Recognition
  • Domain Adaptation and Few-Shot Learning
  • Advanced Image Fusion Techniques
  • Neural dynamics and brain function
  • 3D Surveying and Cultural Heritage
  • Anomaly Detection Techniques and Applications
  • Multimodal Machine Learning Applications
  • Image Processing Techniques and Applications
  • Industrial Vision Systems and Defect Detection
  • Advanced Optical Sensing Technologies
  • Data Stream Mining Techniques
  • Modular Robots and Swarm Intelligence

NORCE Norwegian Research Centre
2022-2025

Institut de Recherche en Génie Civil et Mécanique
2018

Teknova
2018

Austrian Institute of Technology
2007-2015

Dynamic Systems (United States)
2015

Austrian Research Centre for Forests
2007

TU Wien
2000-2006

Seibersdorf Laboratories (Austria)
2006

University of Applied Sciences Technikum Wien
2004

Abstract In an era defined by the relentless influx of data from diverse sources, the ability to harness and extract valuable insights from streaming data has become paramount. The rapidly evolving realm of online learning comprises techniques tailored specifically for the unique challenges posed by streaming data. As the digital world continues to generate vast torrents of real-time data, understanding and effectively utilizing these approaches are pivotal for staying ahead in various domains. One of the primary goals is to continuously update the model with the most recent...

10.1007/s10115-025-02351-3 article EN cc-by Knowledge and Information Systems 2025-02-08
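The core idea named in this abstract, continuously updating a model as new data arrives, can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy example (scikit-learn's SGDClassifier on a simulated drifting stream), not the survey's own method:

```python
# Minimal sketch (not the paper's method): incrementally updating a classifier
# on a simulated, slowly drifting data stream using test-then-train evaluation.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()            # linear model that supports partial_fit
classes = np.array([0, 1])

def stream(n_batches=50, batch_size=32, n_features=10):
    """Simulate a non-stationary stream: the decision boundary drifts over time."""
    for t in range(n_batches):
        X = rng.normal(size=(batch_size, n_features))
        w = np.ones(n_features) + 0.05 * t          # slowly drifting concept
        y = (X @ w + rng.normal(scale=0.5, size=batch_size) > 0).astype(int)
        yield X, y

for t, (X, y) in enumerate(stream()):
    if t > 0 and t % 10 == 0:                       # evaluate before training on the batch
        print(f"batch {t:3d}  prequential accuracy: {model.score(X, y):.2f}")
    model.partial_fit(X, y, classes=classes)        # update with the most recent data only
```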

This paper presents an embedded vision system for object tracking applications based on a 128×128 pixel CMOS temporal contrast sensor. The imager asynchronously responds to relative illumination intensity changes in the visual scene, exhibiting a usable dynamic range of 120 dB and a latency under 100 µs. The information is encoded in the form of address-event representation (AER) data. An algorithm that processes the AER data stream with 1 millisecond timestamp resolution is presented. As a real-world application example,...

10.1109/dspws.2006.265448 article EN 2006-09-01
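To make the AER data model concrete, here is a small sketch that buckets address-events into 1 ms time slices, the timestamp resolution mentioned in the abstract. The event layout (x, y, polarity, timestamp in microseconds) is an assumption, not the chip's actual encoding:

```python
# Minimal sketch, not the paper's algorithm: group address-event (AER) tuples
# into fixed 1 ms time slices. The Event field layout is an assumed example.
from collections import defaultdict
from typing import Iterable, NamedTuple

class Event(NamedTuple):
    x: int          # pixel column (0..127 for a 128x128 sensor)
    y: int          # pixel row
    polarity: int   # +1 brighter, -1 darker
    t_us: int       # timestamp in microseconds

def slice_events(events: Iterable[Event], slice_us: int = 1000):
    """Bucket events into fixed time slices (default 1 ms = 1000 us)."""
    slices = defaultdict(list)
    for ev in events:
        slices[ev.t_us // slice_us].append(ev)
    return slices

# Toy stream: two events in the first millisecond, one in the third.
events = [Event(3, 7, +1, 120), Event(4, 7, -1, 850), Event(90, 10, +1, 2400)]
for k, evs in sorted(slice_events(events).items()):
    print(f"slice {k} ({k} ms): {len(evs)} events")
```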

This work presents an embedded optical sensory system for traffic monitoring and vehicle speed estimation based on a neuromorphic "silicon-retina" image sensor and the algorithm developed for processing the asynchronous output data delivered by this sensor. The main purpose of these efforts is to provide a flexible, compact, low-power and low-cost system capable of determining the velocity of vehicles passing simultaneously on multiple lanes. The proposed system exploits the unique characteristics of the sensor with focal-plane analog preprocessing....

10.1109/itsc.2006.1706816 article EN 2006-01-01
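As a rough illustration of speed estimation from sensor activity, a common scheme (assumed here for illustration, not necessarily the paper's algorithm) measures the time a vehicle takes to cross two detection lines a known distance apart:

```python
# Illustrative sketch only: estimating vehicle speed from the time difference
# between activity on two virtual detection lines a known distance apart.
# This generic two-line scheme is an assumption, not the paper's method.

def estimate_speed_kmh(t_line1_s: float, t_line2_s: float, gap_m: float) -> float:
    """Speed = distance between lines / crossing-time difference, in km/h."""
    dt = t_line2_s - t_line1_s
    if dt <= 0:
        raise ValueError("second line must be crossed after the first")
    return (gap_m / dt) * 3.6   # m/s -> km/h

# A vehicle triggers line 1 at t = 0.000 s and line 2 (6 m further) at t = 0.216 s.
print(f"{estimate_speed_kmh(0.000, 0.216, 6.0):.1f} km/h")   # ~100 km/h
```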

Although motion analysis has been extensively investigated in the literature and a wide variety of tracking algorithms have been proposed, the problem of tracking objects using a Dynamic Vision Sensor (DVS) requires a slightly different approach. These sensors are biologically inspired vision systems that asynchronously generate events upon relative light intensity changes. Unlike conventional vision systems, the output of such a sensor is not an image (frame) but an address-event stream. Therefore, event-based methods are most appropriate for DVS data processing. In this...

10.1109/cvprw.2012.6238892 article EN IEEE Computer Society Conference on Computer Vision and Pattern Recognition workshops 2012-06-01
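One way to picture frame-free, event-driven tracking is a tracker whose centre is nudged by every nearby event. The sketch below is a generic illustration under that assumption, not the paper's algorithm:

```python
# Minimal sketch of event-driven tracking (not the paper's algorithm): each
# incoming DVS event shifts the nearest tracker centre, so trackers follow
# moving objects without ever building a frame.
import math

class EventTracker:
    def __init__(self, x: float, y: float, radius: float = 10.0, alpha: float = 0.1):
        self.x, self.y = x, y
        self.radius = radius    # events farther than this are ignored
        self.alpha = alpha      # update rate per event

    def update(self, ex: float, ey: float) -> bool:
        """Shift the centre towards the event if it falls inside the radius."""
        if math.hypot(ex - self.x, ey - self.y) > self.radius:
            return False
        self.x += self.alpha * (ex - self.x)
        self.y += self.alpha * (ey - self.y)
        return True

# Toy event stream: an object drifting from (20, 20) towards (30, 25).
tracker = EventTracker(20, 20)
for i in range(50):
    tracker.update(20 + 0.2 * i, 20 + 0.1 * i)
print(f"tracker centre: ({tracker.x:.1f}, {tracker.y:.1f})")
```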

Biologically-inspired dynamic vision sensors were introduced in 2002; they asynchronously detect significant relative light intensity changes in a scene and output them in the form of an Address-Event Representation. These sensors capture dynamical discontinuities on-chip for a reduced data volume compared to that of images. Therefore, they support detection, segmentation and tracking of moving objects in space by exploiting the generated events, produced as a reaction to changes in the resulting scene dynamics. Object tracking has previously...

10.1109/iscas.2010.5537289 article EN 2010-05-01

We introduce an innovative methodology for the identification of vehicular collisions within Internet of Vehicles (IoV) applications. This approach combines a knowledge base system with deep learning model selection in an ensemble setting. It is designed to provide a general near-crash detection capability without relying on domain-specific knowledge, enabling the development of generic models. Our proposed method employs a novel approach, wherein multiple models are individually trained on each image. Subsequently,...

10.1016/j.engappai.2024.108350 article EN cc-by-nc-nd Engineering Applications of Artificial Intelligence 2024-04-05
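The per-image model-selection idea in this abstract can be sketched generically. The selection rule below (lowest stored loss on the nearest training feature) is an assumption for illustration; the paper's knowledge-base criterion is not reproduced here:

```python
# Illustrative sketch only: per-sample model selection from an ensemble using
# stored image features and loss values. Nearest-neighbour-by-feature is an
# assumed stand-in for the paper's knowledge-base rule.
import numpy as np

rng = np.random.default_rng(1)

n_train, n_models, dim = 200, 3, 16
train_feats = rng.normal(size=(n_train, dim))         # stored image features
train_losses = rng.uniform(size=(n_train, n_models))  # stored per-model losses

def select_model(test_feat: np.ndarray) -> int:
    """Pick the model with the lowest recorded loss on the most similar training image."""
    dists = np.linalg.norm(train_feats - test_feat, axis=1)
    nearest = int(np.argmin(dists))
    return int(np.argmin(train_losses[nearest]))

test_feat = rng.normal(size=dim)
print("selected model index:", select_model(test_feat))
```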

This paper presents an adaptive cooperative approach towards 3D reconstruction tailored for a bio-inspired depth camera: the stereo dynamic vision sensor (DVS). A DVS consists of self-spiking pixels that asynchronously generate events upon relative light intensity changes. These sensors have the advantage of simultaneously allowing high temporal resolution (better than 10 μs) and a wide dynamic range (>120 dB) at a sparse data representation, which is not possible with frame-based cameras. In order to exploit...

10.1109/iccvw.2013.13 article EN IEEE International Conference on Computer Vision Workshops 2013-12-01

This paper presents a stereo matching approach for a novel multi-perspective panoramic vision system, making use of asynchronous and non-simultaneous imaging towards real-time 3D 360° vision. The method is designed for events representing the scene's visual contrast as a sparse code, allowing the reconstruction of high-resolution views. We propose a cost measure for matching, which assesses similarity based on event distributions. Thus, robustness to variations in event occurrences is increased. An evaluation of the proposed...

10.1109/cvpr.2015.7298644 article EN 2015-06-01
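To illustrate a cost measure based on event distributions, the sketch below compares local event-count histograms between the left and right views with a histogram-intersection cost. The formulation is assumed for illustration and is not the paper's exact measure:

```python
# Minimal sketch (assumed formulation): a matching cost that compares local
# event-count distributions between two views, in the spirit of "similarity
# based on event distributions". Not the paper's exact cost measure.
import numpy as np

def event_histogram(events_xy: np.ndarray, center: tuple, size: int = 9, bins: int = 3) -> np.ndarray:
    """Normalised histogram of event positions inside a (size x size) window around `center`."""
    cx, cy = center
    half = size // 2
    mask = (np.abs(events_xy[:, 0] - cx) <= half) & (np.abs(events_xy[:, 1] - cy) <= half)
    local = events_xy[mask]
    hist, _, _ = np.histogram2d(local[:, 0], local[:, 1], bins=bins,
                                range=[[cx - half, cx + half], [cy - half, cy + half]])
    total = hist.sum()
    return hist.flatten() / total if total > 0 else hist.flatten()

def matching_cost(hist_left: np.ndarray, hist_right: np.ndarray) -> float:
    """Lower is better: 1 - histogram intersection of the two distributions."""
    return 1.0 - float(np.minimum(hist_left, hist_right).sum())

rng = np.random.default_rng(2)
left = rng.integers(0, 100, size=(500, 2))
right = left + np.array([5, 0])           # toy: right view shifted by a 5-pixel disparity
h_l = event_histogram(left, (50, 50))
h_r = event_histogram(right, (55, 50))    # candidate match at disparity 5
print(f"cost at the correct disparity: {matching_cost(h_l, h_r):.3f}")
```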

Abstract This research presents a novel ensemble fuzzy deep learning approach for brain Magnetic Resonance Imaging (MRI) analysis, aiming to improve the segmentation of tissues and abnormalities. The method integrates multiple components, including diverse architectures enhanced with volumetric pooling, a model fusion strategy, and an attention mechanism to focus on the most relevant regions of the input data. The process begins by collecting medical data using sensors to acquire MRI images. These are then used to train...

10.1038/s41598-025-90572-5 article EN cc-by Scientific Reports 2025-02-19
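As a loose illustration of fusing an ensemble of segmentation outputs with soft (fuzzy) weights, the sketch below uses per-pixel confidence-derived memberships. This is an assumed toy formulation, not the paper's fuzzy ensemble or attention design:

```python
# Illustrative sketch only: fusing per-model segmentation probability maps with
# fuzzy (soft) weights derived from each model's per-pixel confidence.
import numpy as np

rng = np.random.default_rng(3)

def fuzzy_fuse(prob_maps: np.ndarray, sharpness: float = 2.0) -> np.ndarray:
    """prob_maps: (n_models, H, W) foreground probabilities in [0, 1].
    Each model gets a per-pixel fuzzy weight that grows with its confidence
    (distance of its probability from 0.5); maps are fused as a weighted mean."""
    confidence = np.abs(prob_maps - 0.5) * 2.0                  # 0 = unsure, 1 = certain
    weights = confidence ** sharpness + 1e-8                    # soft membership per pixel
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * prob_maps).sum(axis=0)

# Toy ensemble: three 4x4 probability maps for one slice region.
maps = rng.uniform(size=(3, 4, 4))
segmentation = (fuzzy_fuse(maps) > 0.5).astype(np.uint8)
print(segmentation)
```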

This paper presents a recently developed dynamic stereo vision sensor system and its application for fall detection towards the safety of the elderly at home. The system consists of (1) two optical detector chips with 304×240 event-driven pixels which are only sensitive to relative light intensity changes, (2) an FPGA interfacing the detectors and performing early data processing, stereo matching and depth map reconstruction, (3) a digital signal processor interpreting the data in real-time for fall recognition, and (4) a wireless communication module...

10.1109/iscas.2012.6272141 article EN IEEE International Symposium on Circuits and Systems 2012-05-01

This paper introduces a novel concept for Home-based Monitoring (HM) that enables robust analysis and understanding of activities towards improved caring safety. Spatio-Temporal Visual Learning for HM (STVL-HM) is a method that learns from sensor data jointly represented in space and time in order to robustify the monitoring process. We propose a hybrid model based on a Convolutional Neural Network (CNN) and Transformers. The CNN first extracts visual spatial features from the various data. The learned features are then fed into a transformer, which captures...

10.1016/j.inffus.2023.101984 article EN cc-by Information Fusion 2023-08-26
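The CNN-then-transformer pipeline described above can be sketched in a few lines of PyTorch. The architecture below (layer sizes, pooling, classification head) is an assumed toy configuration, not the published STVL-HM model:

```python
# Minimal sketch (assumed architecture, not the published STVL-HM model): a CNN
# extracts per-frame spatial features; a transformer encoder then models the
# temporal relations across the frame sequence.
import torch
import torch.nn as nn

class CnnTransformer(nn.Module):
    def __init__(self, d_model: int = 64, n_classes: int = 5):
        super().__init__()
        # Small CNN: one feature vector per frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, d_model, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Transformer over the sequence of per-frame features.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, time, channels, height, width)
        b, t, c, h, w = video.shape
        feats = self.cnn(video.reshape(b * t, c, h, w)).reshape(b, t, -1)  # (b, t, d_model)
        feats = self.temporal(feats)                                       # temporal context
        return self.head(feats.mean(dim=1))                                # clip-level logits

logits = CnnTransformer()(torch.randn(2, 8, 3, 64, 64))
print(logits.shape)   # torch.Size([2, 5])
```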

Precision in building delineation plays a pivotal role in population data analysis, city management, policy making, and disaster management. Leveraging computer vision technologies, particularly deep learning models for semantic segmentation, has proven instrumental in achieving accurate automatic segmentation in remote sensing applications. However, current state-of-the-art (SOTA) techniques are not optimized for precisely extracting building footprints and, specifically, the boundaries of the building. This...

10.1109/access.2024.3391416 article EN cc-by-nc-nd IEEE Access 2024-01-01

We present a novel approach for addressing computer vision tasks in intelligent transportation systems, with a strong focus on data security during training through federated learning. Our method leverages visual transformers, training multiple models on each image. By calculating and storing image features as well as loss values, we propose a Shapley value model based on performance consistency to select the most appropriate model for testing. To enhance security, we introduce a federated learning strategy, where users are grouped...

10.1109/tits.2024.3520487 article EN IEEE Transactions on Intelligent Transportation Systems 2025-01-01
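To make the Shapley-value selection idea tangible, the sketch below estimates each model's marginal contribution to ensemble accuracy with a Monte Carlo Shapley estimator and keeps the top-scoring model. This generic estimator is an assumption, not the paper's exact selection rule:

```python
# Illustrative sketch only: Monte Carlo Shapley values scoring each model's
# marginal contribution to ensemble accuracy, then selecting the best model.
import numpy as np

rng = np.random.default_rng(4)

n_models, n_samples = 4, 300
labels = rng.integers(0, 2, size=n_samples)
# Per-model predictions with different error rates (toy stand-in for stored outputs).
preds = np.stack([np.where(rng.uniform(size=n_samples) < err, 1 - labels, labels)
                  for err in (0.10, 0.25, 0.40, 0.45)])

def ensemble_accuracy(member_ids):
    if not member_ids:
        return 0.5                      # empty coalition: chance level
    vote = (preds[list(member_ids)].mean(axis=0) >= 0.5).astype(int)
    return float((vote == labels).mean())

def shapley_values(n_permutations: int = 200) -> np.ndarray:
    values = np.zeros(n_models)
    for _ in range(n_permutations):
        order = rng.permutation(n_models)
        coalition, prev = [], ensemble_accuracy([])
        for m in order:
            coalition.append(m)
            curr = ensemble_accuracy(coalition)
            values[m] += curr - prev    # marginal contribution of model m
            prev = curr
    return values / n_permutations

phi = shapley_values()
print("Shapley values:", np.round(phi, 3), "-> selected model:", int(np.argmax(phi)))
```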

10.1109/wacv61041.2025.00648 article EN IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2025-02-26

This paper presents a neuromorphic dual-line vision sensor and signal-processing concepts for object recognition and classification. The system performs ultrahigh-speed machine vision with a compact, low-cost embedded-processing architecture. The main innovation of this work includes efficient edge extraction of moving objects by the sensor on the pixel level and a novel concept for real-time embedded processing based on address-event data. The proposed system exploits the very high temporal resolution and sparse visual-information representation of the event-based...

10.1109/tie.2010.2095390 article EN IEEE Transactions on Industrial Electronics 2010-12-09

This paper presents a novel 360° High-Dynamic Range (HDR) camera for real-time 3D panoramic computer vision. The camera consists of (1) a pair of bio-inspired dynamic vision line sensors (1024 pixels each) asynchronously generating events at high temporal resolution with on-chip time stamping (1 μs resolution), a high dynamic range and sparse visual coding of information, (2) a high-speed mechanical device rotating at up to 10 revolutions per second (rps) on which the sensor pair is mounted, and (3) a processing unit configuration...

10.1109/cvprw.2014.69 article EN 2014-06-01

This paper proposes a method for clustering asynchronous events generated upon scene activities by a dynamic 3D vision system. The inherent detection of moving objects offered by the stereo system, comprising a pair of dynamic vision sensors, allows event-based processing in real-time and a sparse representation of the objects. The method exploits the sparse spatio-temporal output of the sensor and the separation between objects, and makes use of density and distance metrics of the scene dynamics (changes in the scene). It has been evaluated on persons moving across the sensor field of view. Tests in real scenarios with more than 100...

10.1109/cvprw.2010.5543810 article EN IEEE Computer Society Conference on Computer Vision and Pattern Recognition workshops 2010-06-01
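One generic way to realise density-based clustering of events, echoing the density and distance metrics mentioned in the abstract, is DBSCAN over (x, y, t) with the time axis rescaled so spatial and temporal proximity are comparable. The sketch below is an assumed approach, not the paper's exact metric:

```python
# Minimal sketch (assumed approach, not the paper's method): density-based
# clustering of events in (x, y, t) space, with time scaled so that spatial
# and temporal distances are comparable.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(5)

def make_object(cx, cy, t0, n=150):
    """Events scattered around a moving object: columns are (x, y, t_us)."""
    t = t0 + np.sort(rng.uniform(0, 20_000, n))                # 20 ms burst
    x = cx + 0.001 * (t - t0) + rng.normal(scale=2.0, size=n)  # drifts in x
    y = cy + rng.normal(scale=2.0, size=n)
    return np.column_stack([x, y, t])

events = np.vstack([make_object(30, 40, 0), make_object(90, 60, 5_000)])

# Scale time (microseconds) so ~5 ms counts like ~5 pixels in the distance metric.
scaled = events.copy()
scaled[:, 2] *= 1e-3
labels = DBSCAN(eps=5.0, min_samples=10).fit_predict(scaled)
clusters = sorted(int(l) for l in set(labels) if l != -1)
print("clusters found:", clusters, " noise events:", int((labels == -1).sum()))
```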

10.1007/s00502-010-0747-9 article EN e+i Elektrotechnik und Informationstechnik 2010-08-01