François Rameau

ORCID: 0000-0001-5031-7653
Research Areas
  • Advanced Vision and Imaging
  • Robotics and Sensor-Based Localization
  • Video Surveillance and Tracking Methods
  • Advanced Image and Video Retrieval Techniques
  • Image Processing Techniques and Applications
  • Optical measurement and interference techniques
  • Advanced Image Processing Techniques
  • Advanced Neural Network Applications
  • Domain Adaptation and Few-Shot Learning
  • 3D Surveying and Cultural Heritage
  • 3D Shape Modeling and Analysis
  • Multimodal Machine Learning Applications
  • Visual Attention and Saliency Detection
  • Autonomous Vehicle Technology and Safety
  • Infrared Target Detection Methodologies
  • Image Enhancement Techniques
  • Image and Video Stabilization
  • Image and Signal Denoising Methods
  • Remote Sensing and LiDAR Applications
  • Adversarial Robustness in Machine Learning
  • Image Processing and 3D Reconstruction
  • Indoor and Outdoor Localization Technologies
  • Advanced Neuroimaging Techniques and Applications
  • Image and Object Detection Techniques
  • Advanced Measurement and Detection Methods

SUNY Korea
2023-2025

Korea Advanced Institute of Science and Technology
2015-2022

Kootenay Association for Science & Technology
2017-2021

Université de Bourgogne
2011-2015

Centre National de la Recherche Scientifique
2011-2015

Laboratoire d’Électronique, Informatique et Image
2012-2014

Convolutional neural network-based approaches have achieved remarkable progress in semantic segmentation. However, these approaches heavily rely on annotated data, which are labor intensive to obtain. To cope with this limitation, data automatically generated from graphic engines are used to train segmentation models. However, models trained on synthetic data are difficult to transfer to real images. To tackle this issue, previous works considered directly adapting models from the source data to the unlabeled target data (to reduce the inter-domain gap). Nonetheless, these techniques do not...

10.1109/cvpr42600.2020.00382 article EN 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2020-06-01

We propose a novel video object segmentation algorithm based on pixel-level matching using Convolutional Neural Networks (CNN). Our network aims to distinguish the target area from the background on the basis of the pixel-level similarity between two object units. The proposed network represents a target object using features from different depth layers in order to take advantage of both spatial details and category-level semantic information. Furthermore, we propose a feature compression technique that drastically reduces the memory requirements while maintaining the capability...

10.1109/iccv.2017.238 article EN 2017-10-01

ResNet or DenseNet? Nowadays, most deep learning based approaches are implemented with seminal backbone networks, among which the two arguably most famous ones are ResNet and DenseNet. Despite their competitive performance and overwhelming popularity, inherent drawbacks exist for both of them. For ResNet, the identity shortcut that stabilizes training might limit its representation capacity, while DenseNet mitigates this with multi-layer feature concatenation. However, the dense concatenation causes a new problem of requiring high GPU...

10.1109/wacv48630.2021.00359 article EN 2021-01-01
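The contrast between the two connectivity patterns discussed above can be sketched in plain Python, with toy lists standing in for feature maps and an arbitrary `transform` playing the role of a layer. This is purely illustrative and is not the architecture proposed in the paper:

```python
def residual_block(x, f):
    """Residual connectivity (ResNet-style): the input is added
    element-wise to the transformed features, so the feature width
    is preserved and gradients flow through the identity path."""
    fx = f(x)
    return [a + b for a, b in zip(x, fx)]

def dense_block(x, f):
    """Dense connectivity (DenseNet-style): the input is concatenated
    with the transformed features, so the feature width grows with
    every block (the source of the high memory cost noted above)."""
    return x + f(x)  # list concatenation, not element-wise addition

# Hypothetical 3-channel feature vector and a toy "layer".
feats = [1.0, 2.0, 3.0]
transform = lambda v: [0.5 * a for a in v]

print(residual_block(feats, transform))   # [1.5, 3.0, 4.5] -- same width
print(len(dense_block(feats, transform))) # 6 -- width doubles
```

The widening output of `dense_block` makes concrete why stacking dense blocks inflates GPU memory, while the residual form keeps a fixed width at the cost of a possibly more constrained representation.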

Calibration of wide field-of-view cameras is a fundamental step for numerous visual media production applications, such as 3D reconstruction, image undistortion, augmented reality and camera motion estimation. However, existing calibration methods require multiple images of a calibration pattern (typically a checkerboard), assume the presence of lines, require manual interaction and/or need an image sequence. In contrast, we present a novel fully automatic deep learning-based approach that overcomes all of these limitations and works...

10.1145/3278471.3278479 article EN 2018-11-27

One of the most hazardous driving scenarios is overtaking a slower vehicle; indeed, in this case the front vehicle (being overtaken) can occlude an important part of the field of view of the rear vehicle's driver. This lack of visibility is a probable cause of accidents in this context. Recent research works tend to prove that augmented reality applied to assisted driving can significantly reduce the risk of accidents. In this paper, we present a real-time marker-less system to see through cars. For this purpose, two cars are equipped with cameras and an appropriate...

10.1109/tvcg.2016.2593768 article EN IEEE Transactions on Visualization and Computer Graphics 2016-07-27

To reconstruct a 3D scene from a set of calibrated views, traditional multi-view stereo techniques rely on two distinct stages: local depth map computation and global fusion. Recent studies concentrate on deep neural architectures for depth estimation combined with a conventional fusion method, or on direct 3D reconstruction networks regressing a Truncated Signed Distance Function (TSDF). In this paper, we advocate that replicating the traditional two-stage framework with deep neural networks improves both the interpretability and the accuracy of the results. As...

10.1109/iccv48922.2021.01578 article EN 2021 IEEE/CVF International Conference on Computer Vision (ICCV) 2021-10-01

The Segment Anything Model (SAM) developed by Meta AI Research has recently attracted significant attention. Trained on a large segmentation dataset of over 1 billion masks, SAM is capable of segmenting any object in a certain image. In the original work, the authors turned to zero-shot transfer tasks (like edge detection) for evaluating the performance of SAM. Recently, numerous works have attempted to investigate its performance in various scenarios to recognize and segment objects. Moreover, numerous projects have emerged to show its versatility as...

10.48550/arxiv.2306.06211 preprint EN cc-by-nc-sa arXiv (Cornell University) 2023-01-01

Generative AI (AIGC, a.k.a. AI-generated content) has made remarkable progress in the past few years, among which text-guided content generation is the most practical one since it enables the interaction between human instruction and AIGC. Due to the development of text-to-image as well as 3D modeling technologies (like NeRF), text-to-3D has become a newly emerging yet highly active research field. Our work conducts the first comprehensive survey on text-to-3D to help readers interested in this direction quickly catch up with its fast...

10.48550/arxiv.2305.06131 preprint EN cc-by arXiv (Cornell University) 2023-01-01

Rotating and zooming cameras, also called PTZ (Pan-Tilt-Zoom) cameras, are widely used in modern surveillance systems. While their ability to zoom allows acquiring detailed images of the scene, it makes their calibration more challenging since any zooming action results in a modification of the intrinsic parameters. Therefore, the calibration of such a camera has to be computed online; this process is called self-calibration. In this paper, given an image pair captured by a PTZ camera, we propose a deep learning based approach to automatically estimate the focal length...

10.1109/wacv45572.2020.9093629 article EN 2020-03-01

For scalable autonomous driving, a robust map-based localization system, independent of GPS, is fundamental. To achieve such localization, online high-definition (HD) map construction plays a significant role in the accurate estimation of the pose. Although recent advancements in HD map construction have predominantly investigated vectorized representations due to their effectiveness, they suffer from computational cost and a fixed parametric model, which limit scalability. To alleviate these limitations, we propose a novel...

10.1109/tits.2024.3518537 article EN IEEE Transactions on Intelligent Transportation Systems 2025-01-01

This paper presents a robust approach for road marking detection and recognition from images captured by an embedded camera mounted on a car. Our method is designed to cope with illumination changes, shadows, and harsh meteorological conditions. Furthermore, the algorithm can effectively group complex multi-symbol shapes into an individual road marking. For this purpose, the proposed technique relies on MSER features to obtain candidate regions, which are further merged using density-based clustering. Finally, these...

10.1109/wacv.2017.90 article EN 2017-03-01
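The density-based merging step mentioned above can be illustrated with a minimal DBSCAN-style grouping of candidate-region centroids. This is a stdlib sketch under assumed toy coordinates; the paper's actual features, distances, and parameters are not reproduced here:

```python
from math import dist

def dbscan(points, eps, min_pts):
    """Minimal density-based clustering (DBSCAN-style): points within
    eps of a core point join its cluster; sparse points get label -1
    (noise). Returns one integer label per input point."""
    labels = [None] * len(points)
    cluster = -1
    for i, p in enumerate(points):
        if labels[i] is not None:
            continue
        neigh = [j for j, q in enumerate(points) if dist(p, q) <= eps]
        if len(neigh) < min_pts:
            labels[i] = -1  # provisionally noise (may become a border point)
            continue
        cluster += 1
        labels[i] = cluster
        seeds = [j for j in neigh if j != i]
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point: absorb, do not expand
                continue
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = [k for k, q in enumerate(points) if dist(points[j], q) <= eps]
            if len(jn) >= min_pts:
                seeds.extend(jn)  # core point: keep expanding the cluster
    return labels

# Hypothetical centroids of MSER candidate regions (pixel coordinates):
# two multi-symbol markings, plus one isolated false positive.
centroids = [(10, 10), (12, 11), (11, 13), (80, 80), (82, 79), (83, 82), (200, 5)]
print(dbscan(centroids, eps=6.0, min_pts=2))  # [0, 0, 0, 1, 1, 1, -1]
```

Grouping by density rather than by a fixed grid is what lets fragmented symbols of one marking merge while a lone spurious detection stays unclustered.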

In this paper, we propose a noise-aware exposure control algorithm for robust robot vision. Our method aims to capture best-exposed images, which can boost the performance of various computer vision and robotics tasks. For this purpose, we carefully design an image quality metric that captures complementary quality attributes and ensures light-weight computation. Specifically, our metric consists of a combination of image gradient, entropy, and noise metrics. The synergy of these measures allows the preservation of sharp edges and rich texture in...

10.1109/iros40897.2019.8968590 article EN 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2019-11-01
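As a rough illustration of such a combined metric, the sketch below scores candidate exposures by gradient magnitude plus intensity entropy and keeps the best one. The published metric's noise term and weighting are omitted, and the equal weighting used here is an assumption:

```python
from math import log2

def quality(img):
    """Toy image-quality score on an 8-bit grayscale image (list of rows):
    mean forward-difference gradient (rewards sharp edges) plus intensity
    entropy (rewards rich tonal content). Not the published metric."""
    h, w = len(img), len(img[0])
    grad = sum(abs(img[y][x + 1] - img[y][x]) + abs(img[y + 1][x] - img[y][x])
               for y in range(h - 1) for x in range(w - 1))
    hist = [0] * 256
    for row in img:
        for v in row:
            hist[v] += 1
    n = h * w
    entropy = -sum((c / n) * log2(c / n) for c in hist if c)
    return grad / n + entropy  # equal weighting is an assumption

# Three hypothetical exposures of the same tiny scene.
under = [[5, 6], [5, 6]]          # crushed shadows: low gradient, low entropy
well  = [[60, 190], [80, 170]]    # edges and tonal variety preserved
over  = [[250, 255], [252, 255]]  # clipped highlights
best = max([under, well, over], key=quality)
print(best is well)  # the well-exposed frame scores highest
```

An exposure controller would evaluate this score over a small bracket of candidate exposure settings and steer the camera toward the maximizer.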

Convolutional neural network-based approaches have achieved remarkable progress in semantic segmentation. However, these approaches heavily rely on annotated data, which are labor intensive to obtain. To cope with this limitation, data automatically generated from graphic engines are used to train segmentation models. However, models trained on synthetic data are difficult to transfer to real images. To tackle this issue, previous works considered directly adapting models from the source data to the unlabeled target data (to reduce the inter-domain gap). Nonetheless, these techniques do not...

10.48550/arxiv.2004.07703 preprint EN other-oa arXiv (Cornell University) 2020-01-01

Inferring traffic objects such as lane information is of foremost importance for the deployment of autonomous driving. Previous approaches focus on offline construction of the HD map inferred with GPS localization, which is insufficient for globally scalable autonomous driving. To alleviate these issues, we propose an online learning framework that detects HD map elements from onboard sensor observations. We represent the map elements as a graph; InstaGraM, our instance-level graph modeling, brings accurate and fast end-to-end vectorized HD map learning. Along with this strategy,...

10.48550/arxiv.2301.04470 preprint EN other-oa arXiv (Cornell University) 2023-01-01

In most computer vision applications, motion blur is regarded as an undesirable artifact. However, it has been shown that motion blur in an image may have practical interest for fundamental vision problems. In this work, we propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner. We design our network with transformer networks to learn globally and locally varying motions from the encoded features of the input, and decode left and right frame features without explicit supervision. A flow estimator is then used on the...

10.1609/aaai.v35i2.16172 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2021-05-18

Estimating the motion of the camera together with the 3D structure of the scene from a monocular vision system is a complex task that often relies on the so-called rigidity assumption. When observing a dynamic environment, this assumption is violated, which leads to an ambiguity between the ego-motion of the camera and the motion of the objects. To solve this problem, we present a self-supervised learning framework for 3D object motion field estimation from monocular videos. Our contributions are two-fold. First, we propose a two-stage projection pipeline to explicitly disentangle the motions...

10.1109/iccv48922.2021.00482 article EN 2021 IEEE/CVF International Conference on Computer Vision (ICCV) 2021-10-01

Drivable region detection is challenging since various types of road, occlusions, or poor illumination conditions have to be considered in an outdoor environment, particularly at night. In the past decade, many efforts have been made to solve these problems; however, most existing methods are designed for visible-light cameras, which are inherently inefficient under low-light conditions. In this paper, we present a drivable region detection algorithm for thermal-infrared cameras in order to overcome the aforementioned problems. The novelty...

10.1109/ivs.2016.7535507 article EN 2016 IEEE Intelligent Vehicles Symposium (IV) 2016-06-01

Abrupt motion of the camera or of objects in a scene results in a blurry video, and therefore recovering a high-quality video requires two types of enhancements: visual enhancement and temporal upsampling. A broad range of research has attempted to recover clean frames from blurred image sequences or to temporally upsample frames by interpolation, yet there are very limited studies handling both problems jointly. In this work, we present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end...

10.1609/aaai.v35i2.16173 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2021-05-18

We present a new linear method for RGB-D based simultaneous localization and mapping (SLAM). Compared to existing techniques relying on the Manhattan world assumption defined by three orthogonal directions, our approach is designed for the more general scenario of the Atlanta world. It consists of a vertical direction and a set of horizontal directions, and thus can represent a wider range of scenes. Our method leverages the structural regularity to decouple the non-linearity of camera pose estimation. This allows us to separately estimate the rotation...

10.1109/icra40945.2020.9196561 article EN 2020-05-01
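To illustrate why decoupling rotation from translation helps, note that once the rotation R is known, corresponding 3D points satisfy q ≈ R p + t, so the translation is linear with closed form t = mean(q − R p). The helper below is a hypothetical sketch of this step only, not the paper's full solver:

```python
def translation_given_rotation(R, src, dst):
    """Given a known 3x3 rotation R (nested lists) and corresponding
    3D points q ~= R p + t, recover t in closed form as the mean of
    the residuals q - R p. Hypothetical helper for illustration."""
    def rot(p):
        # Apply R to a 3-vector.
        return [sum(R[i][k] * p[k] for k in range(3)) for i in range(3)]
    n = len(src)
    t = [0.0, 0.0, 0.0]
    for p, q in zip(src, dst):
        rp = rot(p)
        for i in range(3):
            t[i] += (q[i] - rp[i]) / n
    return t

# Synthetic example: 90-degree yaw about z plus translation (1, 2, 3).
R = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
pts = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [2.0, 2.0, 1.0]]
obs = [[-p[1] + 1, p[0] + 2, p[2] + 3] for p in pts]
print(translation_given_rotation(R, pts, obs))  # ~[1.0, 2.0, 3.0]
```

With noisy correspondences the same estimate is the least-squares translation for a fixed rotation, which is the sense in which decoupling removes the non-linearity from the pose problem.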