- Robotics and Sensor-Based Localization
- Advanced Vision and Imaging
- Network Time Synchronization Technologies
- Advanced Optical Sensing Technologies
- Remote Sensing and LiDAR Applications
- Video Surveillance and Tracking Methods
- Advanced Image and Video Retrieval Techniques
- 3D Surveying and Cultural Heritage
- Time Series Analysis and Forecasting
- Industrial Vision Systems and Defect Detection
- Advanced Optical Imaging Technologies
- Image and Video Stabilization
- Image Processing Techniques and Applications
- Age of Information Optimization
- Optimization and Search Problems
- Optical Systems and Laser Technology
- Geophysics and Sensor Technology
- Distributed Sensor Networks and Detection Algorithms
- Video Coding and Compression Technologies
- Mobile Agent-Based Network Management
- Cell Image Analysis Techniques
- Optical Measurement and Interference Techniques
- Target Tracking and Data Fusion in Sensor Networks
- Robotic Path Planning Algorithms
- Digital Holography and Microscopy
Skolkovo Institute of Science and Technology
2020-2024
St Petersburg University
2019
Data fusion algorithms that employ LiDAR measurements, such as Visual-LiDAR, LiDAR-Inertial, or Multiple Odometry and simultaneous localization and mapping (SLAM), rely on precise timestamping schemes that grant synchronicity to data from other sensors. Poor synchronization performance, due to an incorrect procedure, may negatively affect the algorithms' state estimation results. To provide highly accurate synchronization between sensors, we introduce an open-source hardware-software sensors time synchronization system that exploits a...
Sensor networks require a high degree of synchronization in order to produce stream data useful for further purposes. Examples of time misalignment manifest as undesired artifacts when doing multi-camera bundle adjustment or global positioning system (GPS) geo-localization mapping. Network Time Protocol (NTP) clock variants can provide accurate results, though they present variance conditioned by the environment and channel load. We propose a new precise software technique over a network of rigidly...
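The clock-offset estimate that NTP-style protocols derive from a request/response exchange can be sketched with the classic four-timestamp formula. This is a minimal illustration of the baseline the abstract compares against, not the technique the paper proposes; all names are ours:

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Classic NTP-style estimate: the client sends at t1, the server
    receives at t2 and replies at t3, and the client receives at t4.
    Assumes server clock = client clock + offset and symmetric paths."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Synthetic exchange: true offset 0.5 s, one-way delay 0.1 s each way
offset, delay = ntp_offset_delay(10.0, 10.6, 10.7, 10.3)
# offset ≈ 0.5, delay ≈ 0.2
```

The environment- and load-dependent variance mentioned in the abstract enters through asymmetric path delays, which this formula cannot cancel.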
We present a dataset of 1000 video sequences of human portraits recorded in real and uncontrolled conditions using a handheld smartphone accompanied by an external high-quality depth camera. The collected dataset contains 200 people captured in different poses and locations; its main purpose is to bridge the gap between raw measurements obtained from sensors and downstream applications, such as state estimation, 3D reconstruction, view synthesis, etc. The sensors employed for data collection are the smartphone's camera, Inertial...
This article describes our ongoing research on real-time digital video stabilization using MEMS sensors. The authors propose to use the described method for stabilizing video that is transmitted to a mobile robot operator who controls the vehicle remotely, as well as for increasing the precision of video-based navigation of subminiature autonomous models. We present the general mathematical models needed to implement a stabilization module based on MEMS sensor readings. These include a camera motion model, a frame transformation model and a rolling-shutter...
This article describes our ongoing research on auto-calibration and synchronization of a camera and MEMS sensors. The approach is applicable to any system that consists of a camera and MEMS sensors, such as a gyroscope. The main task is to find parameters such as the focal length and the time offset between sensor timestamps and frame timestamps, which is caused by processing and encoding. This makes it possible to scale computer vision algorithms (video stabilization, 3D reconstruction, video compression, augmented reality) that use frames and sensor data to a wider range...
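A gyroscope-to-frame time offset of the kind described above is commonly recovered by correlating the angular rate measured by the gyroscope against the rate implied by frame-to-frame image motion. A brute-force sketch under that assumption, with both signals already resampled to a common rate (names are ours, not the paper's API):

```python
def best_time_offset(cam, gyro, max_shift):
    """Search for the integer-sample shift of the gyro signal that
    maximizes its correlation with the camera-derived rate signal."""
    def corr(shift):
        return sum(cam[i] * gyro[i + shift]
                   for i in range(len(cam))
                   if 0 <= i + shift < len(gyro))
    return max(range(-max_shift, max_shift + 1), key=corr)

# Synthetic check: the gyro trace lags the camera trace by 3 samples
cam  = [0, 0, 1, 2, 1, 0, 0, 0, 0, 0]
gyro = [0, 0, 0, 0, 0, 1, 2, 1, 0, 0]
shift = best_time_offset(cam, gyro, 5)   # → 3
```

Multiplying the winning shift by the sample period gives the time offset; a subsample refinement (e.g., parabolic interpolation of the correlation peak) is typically applied on top.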
Recently, progress in acquisition equipment such as LiDAR sensors has enabled sensing increasingly spacious outdoor 3D environments. Making sense of such acquisitions requires fine-grained scene understanding, such as constructing instance-based segmentations. Commonly, a neural network is trained for this task; however, this requires access to a large, densely annotated dataset, which is widely known to be challenging to obtain. To address this issue, in this work we propose to predict instance segmentations for 3D scenes in an unsupervised way, without...
Recently, significant progress has been achieved in sensing real large-scale outdoor 3D environments, particularly by using modern acquisition equipment such as LiDAR sensors. Unfortunately, they are fundamentally limited in their ability to produce dense, complete scenes. To address this issue, recent learning-based methods integrate neural implicit representations and optimizable feature grids to approximate surfaces of 3D scenes. However, naively fitting samples along raw rays leads to noisy mapping results...
Aerial imagery and its direct application to visual localization is an essential problem for many Robotics and Computer Vision tasks. While Global Navigation Satellite Systems (GNSS) are the standard default solution for solving the aerial localization problem, they are subject to a number of limitations, such as signal instability or unreliability, that make this option not so desirable. Consequently, visual geolocalization is emerging as a viable alternative. However, adapting it to the Visual Place Recognition (VPR) task presents significant...
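At its core, the VPR task mentioned here reduces to nearest-neighbour search over global image descriptors. A toy sketch using cosine similarity over hand-made 3-D descriptors (real systems use learned high-dimensional embeddings; names and values are illustrative only):

```python
def retrieve_place(query, database):
    """Return the database entry whose descriptor is most similar
    to the query descriptor under cosine similarity."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / ((sum(x * x for x in a) ** 0.5) *
                      (sum(x * x for x in b) ** 0.5))
    return max(database, key=lambda name: cos(query, database[name]))

# Hypothetical 3-D descriptors standing in for learned embeddings
db = {"hangar": [1.0, 0.0, 0.0], "runway": [0.0, 1.0, 0.0]}
best = retrieve_place([0.9, 0.1, 0.0], db)   # → "hangar"
```

The aerial-imagery difficulty the abstract alludes to is that descriptors must stay stable under the drastic viewpoint and scale change between ground-level queries and overhead references.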
This paper addresses the problem of building an affordable, easy-to-setup, synchronized multi-view camera system, which is in demand for many Computer Vision and Robotics applications in high-dynamic environments. In our work, we propose a solution to this problem: a publicly available Android application for video recording on multiple smartphones with sub-millisecond accuracy. We present a generalized mathematical model of timestamping and prove its applicability on 47 different physical devices. Also, we estimate the time drift...
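Timestamping models of this kind are often linear clock models, device_time = skew * reference_time + offset, fitted by least squares over paired timestamps. A minimal sketch under that assumption (the paper's actual model may differ; names are ours):

```python
def fit_clock_model(ref_ts, dev_ts):
    """Least-squares fit of dev = skew * ref + offset from paired timestamps."""
    n = len(ref_ts)
    mr, md = sum(ref_ts) / n, sum(dev_ts) / n
    cov = sum((r - mr) * (d - md) for r, d in zip(ref_ts, dev_ts))
    var = sum((r - mr) ** 2 for r in ref_ts)
    skew = cov / var
    return skew, md - skew * mr

# Synthetic device clock: 100 ms offset and 200 ppm skew
ref = [0.0, 1.0, 2.0, 3.0]
dev = [0.1000, 1.1002, 2.1004, 3.1006]
skew, offset = fit_clock_model(ref, dev)   # skew ≈ 1.0002, offset ≈ 0.1
```

Time drift accumulated over a recording is then (skew - 1) times its duration, which is why sub-millisecond accuracy requires estimating skew, not just offset.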
The results of the development of a cross-platform software framework, designed to create and test multi-agent control and navigation algorithms in a highly dynamic environment with a centralized control system based on the RoboCup SSL competition, are presented. The framework supports programming of control signal calculation using MATLAB [1] and the technical vision system called "SSL Vision" [2]. To support various models and types of robots, a universal network interface has been implemented. The paper provides an overview of existing...
This paper introduces a novel approach for creating a visual place recognition (VPR) database for localization in indoor environments from RGBD scanning sequences. The proposed method formulates the problem as a minimization challenge by utilizing a dominating set algorithm applied to a graph constructed from spatial information, referred to as the "DominatingSet" algorithm. Experimental results on various datasets, including 7-scenes, BundleFusion, RISEdb, and specifically recorded sequences in a highly repetitive office...
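A dominating set of a graph is a node subset such that every node is either in the set or adjacent to a member; minimizing its size selects the fewest frames that still "cover" all poses. The standard greedy approximation can be sketched as follows (a generic sketch, not necessarily the paper's exact procedure):

```python
def greedy_dominating_set(adj):
    """Greedy minimum-dominating-set approximation: repeatedly pick the
    node covering the most not-yet-dominated nodes (itself + neighbours)."""
    uncovered = set(adj)
    chosen = []
    while uncovered:
        best = max(adj, key=lambda v: len(({v} | set(adj[v])) & uncovered))
        chosen.append(best)
        uncovered -= {best} | set(adj[best])
    return chosen

# Star graph: the centre alone dominates every node
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
ds = greedy_dominating_set(adj)   # → [0]
```

In the VPR setting, nodes would be frames and edges would connect frames whose camera poses fall within some spatial threshold, so the chosen set forms a compact database.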
Introduction: The international RoboCup soccer tournaments for robots have been held since the mid-1990s and have attracted a lot of attention from universities all over the world. Their goal is to develop scientific and engineering research in the areas of robotics, artificial intelligence, computer vision, navigation, group multi-agent interaction and mechatronic devices. Purpose: To create a new-generation bench which assists in developing, training and evaluating robots, as well as developing basic algorithms for managing an...
Nowadays, smartphones can produce a synchronized (synced) stream of high-quality data, including RGB images, inertial measurements, and other data. Therefore, they are becoming appealing sensor systems in the robotics community. Unfortunately, there is still a need for external supporting sensing hardware, such as a depth camera precisely synced with the smartphone's sensors. In this paper, we propose a hardware-software recording system that presents a heterogeneous structure and contains visual, depth, data...