Xiang Ruan

ORCID: 0000-0003-4500-7516
Research Areas
  • Advanced Image and Video Retrieval Techniques
  • Image Retrieval and Classification Techniques
  • Visual Attention and Saliency Detection
  • Advanced Neural Network Applications
  • Multimodal Machine Learning Applications
  • Video Surveillance and Tracking Methods
  • Medical Image Segmentation Techniques
  • Human Pose and Action Recognition
  • Olfactory and Sensory Function Studies
  • Video Analysis and Summarization
  • Advanced Algorithms and Applications
  • Face Recognition and Perception
  • Robotics and Sensor-Based Localization
  • Remote-Sensing Image Classification
  • Radiomics and Machine Learning in Medical Imaging
  • Advanced Image Fusion Techniques
  • Handwritten Text Recognition Techniques
  • Probabilistic and Robust Engineering Design
  • Infrared Target Detection Methodologies
  • AI in cancer detection
  • COVID-19 diagnosis using AI
  • Advanced Vision and Imaging
  • Image and Object Detection Techniques
  • Face and Expression Recognition
  • Adversarial Robustness in Machine Learning

Hebei University
2024

Anhui Medical University
2023-2024

Sekisui Chemical (Japan)
2016-2023

Second Military Medical University
2022-2023

Henan University
2023

Anhui Jianzhu University
2022-2023

Henan Provincial People's Hospital
2023

Fuzhou University
2023

Changhai Hospital
2023

Eastern Hepatobiliary Surgery Hospital
2022

Most existing bottom-up methods measure the foreground saliency of a pixel or region based on its contrast within the local context or the entire image, whereas few methods focus on segmenting out background regions and thereby salient objects. Instead of considering only the contrast between salient objects and their surrounding regions, we consider both the foreground and background cues in a different way. We rank the similarity of image elements (pixels or regions) with foreground or background cues via graph-based manifold ranking. The saliency of the image elements is defined based on their relevance to the given seeds or queries. We represent the image as a close-loop graph...

10.1109/cvpr.2013.407 article EN 2013 IEEE Conference on Computer Vision and Pattern Recognition 2013-06-01
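The ranking scheme described above has a closed-form solution, f* = (I - alpha*S)^(-1) y, where S is the symmetrically normalized affinity matrix of the graph and y marks the query seeds. The sketch below is a minimal NumPy illustration of that single ranking step, not the paper's full pipeline (which builds a close-loop graph over superpixels and runs two ranking stages); the function and parameter names are my own.

```python
import numpy as np

def manifold_ranking(W, seeds, alpha=0.99):
    """Rank all graph nodes by relevance to the seed/query nodes.

    W     : (n, n) symmetric non-negative affinity matrix
    seeds : indices of query nodes
    alpha : trade-off between the smoothness and fitting terms
    """
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt          # normalized affinity
    y = np.zeros(W.shape[0])
    y[list(seeds)] = 1.0                     # indicator vector of queries
    # closed-form optimum of the manifold ranking energy
    f = np.linalg.solve(np.eye(len(y)) - alpha * S, y)
    return f
```

In a saliency setting, boundary superpixels would act as background queries, and 1 minus the (normalized) ranking score would give a foreground map; a second stage could then re-rank with thresholded foreground seeds.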

Deep Neural Networks (DNNs) have substantially improved the state-of-the-art in salient object detection. However, training DNNs requires costly pixel-level annotations. In this paper, we leverage the observation that image-level tags provide important cues of foreground salient objects, and develop a weakly supervised learning method for saliency detection using image-level tags only. The Foreground Inference Network (FIN) is introduced for this challenging task. In the first stage of our method, FIN is jointly trained with a fully...

10.1109/cvpr.2017.404 article EN 2017-07-01

Fully convolutional neural networks (FCNs) have shown outstanding performance in many dense labeling problems. One key pillar of these successes is mining relevant information from features in different layers. However, how to better aggregate multi-level feature maps for salient object detection is underexplored. In this work, we present Amulet, a generic aggregating multi-level convolutional feature framework for salient object detection. Our framework first integrates multi-level feature maps into multiple resolutions, which simultaneously incorporate coarse semantics and fine details. Then...

10.1109/iccv.2017.31 article EN 2017-10-01
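The core idea, resizing multi-level feature maps to a shared resolution before combining them so coarse semantics and fine details coexist, can be sketched as follows. This is a toy NumPy illustration with my own naming and nearest-neighbour upsampling; it is not Amulet's actual architecture, which uses learned resolution-based feature combination and prediction modules.

```python
import numpy as np

def upsample(fm, factor):
    # nearest-neighbour upsampling of an (H, W, C) feature map
    return fm.repeat(factor, axis=0).repeat(factor, axis=1)

def aggregate(feature_maps):
    """Bring multi-level maps to the finest resolution and concatenate channels.

    feature_maps: list of (H_i, W_i, C_i) arrays from fine (shallow)
    to coarse (deep); spatial sizes are assumed to divide the finest size.
    """
    H = max(f.shape[0] for f in feature_maps)
    resized = [upsample(f, H // f.shape[0]) for f in feature_maps]
    # channel-wise concatenation at the common resolution
    return np.concatenate(resized, axis=-1)
```

A real network would follow this aggregation with convolutions that learn to weight the levels rather than treating all channels equally.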

In this paper, we propose a visual saliency detection algorithm from the perspective of reconstruction errors. The image boundaries are first extracted via superpixels as likely cues for background templates, from which dense and sparse appearance models are constructed. For each image region, we first compute dense and sparse reconstruction errors. Second, the reconstruction errors are propagated based on the contexts obtained from K-means clustering. Third, pixel-level saliency is computed by an integration of multi-scale reconstruction errors and refined by an object-biased Gaussian model. We apply the Bayes formula to integrate...

10.1109/iccv.2013.370 article EN 2013-12-01
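The dense-reconstruction half of this idea can be sketched as follows: boundary superpixels serve as background templates, a PCA subspace is fit to them, and each region's saliency cue is its reconstruction residual (a background-like region reconstructs well, a salient one poorly). This is a simplified sketch with hypothetical names; the paper additionally computes a sparse reconstruction error, propagates errors over K-means contexts, and fuses multiple scales.

```python
import numpy as np

def dense_reconstruction_error(features, bg_idx, n_components=2):
    """Reconstruction error of each segment w.r.t. a background PCA subspace.

    features : (n_segments, d) appearance vectors (e.g. mean color per superpixel)
    bg_idx   : indices of boundary segments used as background templates
    """
    B = features[bg_idx]
    mu = B.mean(axis=0)
    # principal directions of the background templates
    _, _, Vt = np.linalg.svd(B - mu, full_matrices=False)
    U = Vt[:n_components].T                   # (d, k) background basis
    centered = features - mu
    recon = centered @ U @ U.T + mu           # project onto background subspace
    # squared residual: large for regions the background model cannot explain
    return np.linalg.norm(features - recon, axis=1) ** 2
```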

This paper presents a saliency detection algorithm by integrating both local estimation and global search. In the local estimation stage, we detect local saliency using a deep neural network (DNN-L) which learns local patch features to determine the saliency value of each pixel. The estimated local saliency maps are further refined by exploring high-level object concepts. In the global search stage, the local saliency map together with global contrast and geometric information is used to describe a set of object candidate regions. Another deep neural network (DNN-G) is trained to predict the saliency score of each object region based on the global features. The final saliency map is generated by a weighted sum...

10.1109/cvpr.2015.7298938 article EN 2015-06-01

Effective integration of contextual information is crucial for salient object detection. To achieve this, most existing methods based on the 'skip' architecture mainly focus on how to integrate the hierarchical features of Convolutional Neural Networks (CNNs). They simply apply concatenation or element-wise operations to incorporate high-level semantic cues and low-level detailed information. However, this can degrade the quality of predictions because cluttered and noisy information can also be passed through. To address this problem, we...

10.1109/cvpr.2018.00330 article EN 2018-06-01

We propose a bootstrap learning algorithm for salient object detection in which both weak and strong models are exploited. First, a weak saliency map is constructed based on image priors to generate training samples for a strong model. Second, a strong classifier learned directly from the input image is used to detect salient pixels. Results from multiscale saliency maps are integrated to further improve the detection performance. Extensive experiments on six benchmark datasets demonstrate that the proposed algorithm performs favorably against state-of-the-art methods. Furthermore, we show...

10.1109/cvpr.2015.7298798 article EN 2015-06-01

Deep networks have been proved to encode high-level features with semantic meaning and deliver superior performance in salient object detection. In this paper, we take one step further by developing a new saliency detection method based on recurrent fully convolutional networks (RFCNs). Compared with existing deep network based methods, the proposed network is able to incorporate saliency prior knowledge for more accurate inference. In addition, the recurrent architecture enables our method to automatically learn to refine the saliency map by iteratively correcting its...

10.1109/tpami.2018.2846598 article EN IEEE Transactions on Pattern Analysis and Machine Intelligence 2018-06-12

With the popularity of multi-modal sensors, visible-thermal (RGB-T) object tracking aims to achieve robust performance and wider application scenarios with the guidance of objects' temperature information. However, the lack of paired training samples is the main bottleneck for unlocking the power of RGB-T tracking. Since it is laborious to collect high-quality RGB-T sequences, recent benchmarks only provide test sequences. In this paper, we construct a large-scale benchmark with high diversity for visible-thermal UAV tracking (VTUAV), including 500 sequences with 1.7...

10.1109/cvpr52688.2022.00868 article EN 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2022-06-01

Person re-identification addresses the problem of matching people across disjoint camera views, and extensive efforts have been made to seek either robust feature representations or discriminative matching metrics. However, most existing approaches focus on learning a fixed distance metric for all instance pairs, while ignoring the individuality of each person. In this paper, we formulate person re-identification as an imbalanced classification problem and learn a classifier specifically for each pedestrian such that the matching model is highly tuned...

10.1109/cvpr.2016.143 article EN 2016-06-01

We propose a salient object detection algorithm via multi-scale analysis on superpixels. First, multi-scale segmentations of an input image are computed and represented by superpixels. In contrast to prior work, we utilize various Gaussian smoothing parameters to generate coarse or fine results, thereby facilitating the analysis of salient regions. At each scale, three essential cues from local contrast, integrity and center bias are considered within a Bayesian framework. Next, we compute saliency maps by weighted summation and normalization. The final saliency map...

10.1109/lsp.2014.2323407 article EN IEEE Signal Processing Letters 2014-05-13
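The final fusion step described above, a weighted summation of per-scale saliency maps followed by normalization, can be sketched as follows. This is a minimal sketch assuming the per-scale maps have already been computed; uniform weights are used by default, and the names are my own.

```python
import numpy as np

def fuse_multiscale(maps, weights=None):
    """Combine per-scale saliency maps into one normalized map.

    maps    : list of (H, W) saliency maps, one per scale
    weights : optional per-scale weights (defaults to uniform)
    """
    maps = np.stack(maps)                     # (n_scales, H, W)
    if weights is None:
        weights = np.ones(len(maps)) / len(maps)
    fused = np.tensordot(weights, maps, axes=1)   # weighted summation
    # min-max normalization to [0, 1]
    lo, hi = fused.min(), fused.max()
    return (fused - lo) / (hi - lo + 1e-12)
```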

Object proposals are a series of candidate segments containing the objects of interest, which are taken as a preprocessing step and widely applied in various vision tasks. However, most existing saliency detection approaches only utilize object proposals to compute the location prior. In this paper, we naturally take object proposals as bags of instances in multiple instance learning (MIL), where the superpixels contained in the proposals are the instances, and formulate the saliency detection problem as an MIL task (i.e., predicting the labels of bags and instances within a classifier framework). This method allows some flexibility in finding...

10.1109/tip.2017.2669878 article EN IEEE Transactions on Image Processing 2017-02-15
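The MIL formulation rests on the standard assumption that a bag (object proposal) is positive if and only if at least one of its instances (superpixels) is salient. A minimal sketch of the bag/instance bookkeeping, with hypothetical names; the paper trains MIL classifiers on top of this structure rather than using raw scores:

```python
import numpy as np

def bag_scores(instance_scores, bags):
    """Standard MIL assumption: a bag is as salient as its most salient instance.

    instance_scores : (n_superpixels,) saliency scores
    bags            : list of index arrays, the superpixels in each proposal
    """
    return np.array([instance_scores[b].max() for b in bags])

def instance_labels_from_bags(bag_labels, bags, n_instances):
    """Propagate bag-level labels down to instances: an instance is marked
    positive if it belongs to at least one positive bag (a crude but common
    MIL relaxation used to bootstrap instance classifiers)."""
    labels = np.zeros(n_instances)
    for y, b in zip(bag_labels, bags):
        if y > 0:
            labels[b] = 1.0
    return labels
```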

Most existing bottom-up algorithms measure the foreground saliency of a pixel or region based on its contrast within the local context or the entire image, whereas few methods focus on segmenting out background regions and thereby salient objects. Instead of only considering the contrast between salient objects and their surrounding regions, we consider both the foreground and background cues in this work. We rank the similarity of image elements with foreground or background cues via graph-based manifold ranking. The saliency of the image elements is defined based on their relevance to the given seeds or queries. We represent an image as a multi-scale graph...

10.1109/tpami.2016.2609426 article EN publisher-specific-oa IEEE Transactions on Pattern Analysis and Machine Intelligence 2016-09-14

In this paper, we propose a visual saliency detection algorithm from the perspective of reconstruction error. The image boundaries are first extracted via superpixels as likely cues for background templates, from which dense and sparse appearance models are constructed. First, we compute dense and sparse reconstruction errors on the background templates for each image region. Second, the reconstruction errors are propagated based on the contexts obtained from K-means clustering. Third, the pixel-level reconstruction error is computed by an integration of multi-scale reconstruction errors. Both pixel-level errors are then weighted by image compactness, which could...

10.1109/tip.2016.2524198 article EN IEEE Transactions on Image Processing 2016-02-02

Fully convolutional neural networks (FCNs) have shown outstanding performance in many dense labeling problems. One key pillar of these successes is mining relevant information from features in different layers. However, how to better aggregate multi-level feature maps for salient object detection is underexplored. In this work, we present Amulet, a generic aggregating multi-level convolutional feature framework for salient object detection. Our framework first integrates multi-level feature maps into multiple resolutions, which simultaneously incorporate coarse semantics and fine details. Then...

10.48550/arxiv.1708.02001 preprint EN other-oa arXiv (Cornell University) 2017-01-01

Existing CNN-based RGB-D salient object detection (SOD) networks are all required to be pretrained on ImageNet to learn hierarchical features, which helps provide a good initialization. However, the collection and annotation of large-scale datasets are time-consuming and expensive. In this paper, we utilize self-supervised representation learning (SSL) to design two pretext tasks: cross-modal auto-encoder and depth-contour estimation. Our pretext tasks require only a few unlabeled RGB-D datasets to perform pretraining, which makes the network...

10.1609/aaai.v36i3.20257 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2022-06-28