Xiaohan Wang

ORCID: 0000-0003-1640-3691
Research Areas
  • Advanced Neural Network Applications
  • Video Surveillance and Tracking Methods
  • Optical measurement and interference techniques
  • Advanced Data Compression Techniques
  • Physical Unclonable Functions (PUFs) and Hardware Security
  • 3D Surveying and Cultural Heritage
  • Visual Attention and Saliency Detection
  • Advanced Vision and Imaging
  • Integrated Circuits and Semiconductor Failure Analysis
  • Robotics and Sensor-Based Localization
  • Image Enhancement Techniques
  • Industrial Vision Systems and Defect Detection
  • Remote Sensing and LiDAR Applications
  • Japanese History and Culture
  • Autonomous Vehicle Technology and Safety
  • Advanced Optical Sensing Technologies
  • VLSI and Analog Circuit Testing
  • Traffic Prediction and Management Techniques
  • Domain Adaptation and Few-Shot Learning
  • Remote Sensing and Land Use
  • Neuroscience and Neural Engineering
  • Waste Management and Recycling
  • Adversarial Robustness in Machine Learning
  • Image and Signal Denoising Methods
  • Composting and Vermicomposting Techniques

Shenyang Ligong University
2024

University of Science and Technology Beijing
2022-2023

Northwestern Polytechnical University
2022

Shijiazhuang University
2021

PLA Army Engineering University
2021

University of Wisconsin–Madison
2019

Peking University
2016

McMaster University
2007

Aiming at the existing problems of unmanned aerial vehicle (UAV) photography for detecting riders' helmet wearing, a novel remote sensing detection paradigm is proposed by combining super-resolution reconstruction, residual transformer-spatial attention, and the you only look once version 5 (YOLOv5) image classifier. Due to the small target size, significant size changes, and strong motion blur in UAV images, models for riders have weak generalization ability and low accuracy. First, a ladder-type multi-attention...

10.3390/drones6120415 article EN cc-by Drones 2022-12-15
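
The paper's exact residual transformer-spatial attention design is not reproduced in this truncated abstract, so the following is only a minimal PyTorch sketch of a spatial-attention block with a residual connection to illustrate the general mechanism; the kernel size, pooling choices, and feature-map shape are assumptions, not the paper's architecture.

```python
# Illustrative sketch of a spatial-attention block with a residual connection,
# loosely in the spirit of the "residual ... spatial attention" named in the
# abstract. Layer choices and shapes are assumptions, not the paper's design.
import torch
import torch.nn as nn

class ResidualSpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        # Predict a per-pixel attention weight from channel-pooled statistics.
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_pool = x.mean(dim=1, keepdim=True)            # (B, 1, H, W)
        max_pool, _ = x.max(dim=1, keepdim=True)          # (B, 1, H, W)
        attn = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x + x * attn                               # residual connection

if __name__ == "__main__":
    block = ResidualSpatialAttention()
    feats = torch.randn(2, 64, 40, 40)                    # e.g. a detector feature map
    print(block(feats).shape)                             # torch.Size([2, 64, 40, 40])
```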

DropConnect is a recently introduced algorithm to prevent the co-adaptation of feature detectors. Compared with Dropout, it gains state-of-the-art results on several image recognition benchmarks. Motivated by the success of DropConnect, we extend this algorithm with the ability of sparse selection. In the original algorithm, the dropping masks of the weights are generated using Bernoulli gating variables that are independent of the weights and activations. We introduce a new strategy to generate the masks depending on the outputs of the previous layer. Using this method, neurons which are promising...

10.1049/cje.2016.01.023 article EN Chinese Journal of Electronics 2016-01-01
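
Since the abstract is truncated before the formulation, the block below is only a hedged NumPy sketch of the general idea: DropConnect-style Bernoulli masks on the weights whose keep-probabilities depend on the previous layer's outputs instead of being a fixed constant. The specific mapping from activation magnitude to keep-probability is an assumption for illustration.

```python
# Illustrative sketch only (not the paper's exact rule): mask each weight with a
# Bernoulli gate whose keep-probability depends on the previous layer's output.
import numpy as np

rng = np.random.default_rng(0)

def activation_dependent_dropconnect(W, x, base_keep=0.5):
    """W: (out, in) weight matrix; x: (in,) outputs of the previous layer."""
    # Map activation magnitudes to per-input keep probabilities in [base_keep, 1]
    # (assumed mapping: connections fed by stronger outputs are kept more often).
    strength = np.abs(x) / (np.abs(x).max() + 1e-8)
    keep_prob = base_keep + (1.0 - base_keep) * strength
    mask = rng.random(W.shape) < keep_prob[None, :]        # Bernoulli gate per connection
    return (W * mask) @ x / keep_prob.mean()               # rough inverted-dropout rescaling

W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)
print(activation_dependent_dropconnect(W, x))
```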

As hardware costs decrease, more and more cameras have been deployed and used to monitor traffic. Widely deployed cameras enable a wide range of computer vision-based applications for traffic analytics. In this work, we propose an intelligent system which can perform traffic analysis at road intersections. Our system leverages existing monitoring cameras and applies computer vision techniques to provide detailed analysis results. To achieve these goals, we develop a deep learning object detection model based on the Single Shot MultiBox Detector (SSD). The approach is able to detect...

10.1109/ictis.2019.8883683 article EN 2019-07-01
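
The authors' SSD-based traffic detector is not available from this abstract, so the sketch below uses torchvision's off-the-shelf pretrained SSD300-VGG16 as a stand-in to show how such a detector is applied to a single monitoring frame; the file name "frame.jpg" and the 0.5 confidence threshold are placeholder assumptions.

```python
# Minimal sketch, assuming torchvision's pretrained SSD300-VGG16 as a stand-in
# for the paper's traffic detector.
import torch
from torchvision.io import read_image
from torchvision.models.detection import ssd300_vgg16, SSD300_VGG16_Weights

weights = SSD300_VGG16_Weights.DEFAULT
model = ssd300_vgg16(weights=weights).eval()
preprocess = weights.transforms()

frame = read_image("frame.jpg")                    # (3, H, W) uint8 traffic frame (placeholder path)
with torch.no_grad():
    detections = model([preprocess(frame)])[0]

for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.5:                                # keep confident vehicles / pedestrians
        print(weights.meta["categories"][label], box.tolist(), float(score))
```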

As an important research topic in the field of computer vision, object detection has been successfully applied in several fields. YOLO is one of the most popular frameworks for object detection, but the traditional method lacks processing of anchor points and recognition features. In addition, most methods seldom consider complex environments, especially underwater images with high turbidity. Therefore, a YOLO-based method is proposed. An improved structure without anchors is introduced, where features are separated to reduce mutual interference...

10.1109/icipmc55686.2022.00012 article EN 2022-05-01

In current 3-D object detection tasks, most algorithms are based on pure point clouds. Although LiDAR can provide target location information and contours for detection, the point cloud is sparse, especially for long-distance objects. Besides, camera sensors provide more detailed color, texture information, and so on. However, if both point cloud and image data are used at the same time, the problems of large model capacity and overfitting will occur. Different modes also produce different gradients in the subnetworks, and the entire network will be difficult to...

10.1109/jsen.2023.3240295 article EN IEEE Sensors Journal 2023-02-01

Linear predictors for lossless data compression should ideally minimize the entropy of the prediction errors, but in current practice least-square type predictors are used instead. In this paper, we formulate and solve the linear minimum-entropy predictor design problem as one of convex or quasiconvex programming. The proposed algorithms are derived from the well-known fact that the prediction errors of most signals obey a generalized Gaussian distribution. Empirical results and analysis are presented to demonstrate the superior performance over...

10.1109/mmsp.2007.4412852 article EN 2007-01-01
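
Under a generalized Gaussian error model with fixed shape parameter p >= 1, minimizing the error entropy reduces to minimizing the L_p norm of the residuals, which is a convex problem, rather than the usual least-squares L_2 fit. The sketch below solves such an L_p predictor design with SciPy on a synthetic AR(2) signal; the signal, the order, and p = 1.2 are illustrative assumptions, not the paper's algorithm or data.

```python
# Sketch of an entropy-motivated linear predictor: for a fixed generalized Gaussian
# shape p, minimize the L_p norm of prediction residuals instead of least squares.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, order, p = 2000, 2, 1.2

# Synthetic AR(2) test signal with heavier-tailed innovations (assumption).
x = np.zeros(n)
for t in range(2, n):
    x[t] = 1.5 * x[t - 1] - 0.7 * x[t - 2] + rng.laplace(scale=0.1)

# Predict x[t] from the previous `order` samples.
X = np.column_stack([x[order - k - 1:n - k - 1] for k in range(order)])
y = x[order:]

def lp_cost(a):
    return np.sum(np.abs(y - X @ a) ** p)

a_ls = np.linalg.lstsq(X, y, rcond=None)[0]                # least-squares baseline
a_lp = minimize(lp_cost, a_ls, method="Nelder-Mead").x     # L_p (entropy-motivated) design

print("least-squares coefficients:", a_ls)
print("L_p coefficients:         ", a_lp)
```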

This paper delves into the challenges of achieving scalable and effective multi-object modeling for semi-supervised Video Object Segmentation (VOS). Previous VOS methods decode features with a single positive object, limiting the learning of representation as they must match and segment each target separately under multi-object scenarios. Additionally, earlier techniques catered to specific application objectives and lacked the flexibility to fulfill different speed-accuracy requirements. To address these problems, we present...

10.48550/arxiv.2203.11442 preprint EN other-oa arXiv (Cornell University) 2022-01-01

To assist the implementation of fine 3D terrain reconstruction scenes in remote sensing applications, an automatic joint calibration method between light detection and ranging (LiDAR) and a visible camera, based on edge point refinement and virtual mask matching, is proposed in this paper. The method is used to solve the problems of inaccurate estimation for LiDAR with different horizontal angle resolutions and low efficiency. First, we design a novel calibration target, adding four hollow rectangles for fully locating the target and increasing the number...

10.3390/rs14246385 article EN cc-by Remote Sensing 2022-12-17
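
Both of the calibration works above ultimately estimate a rigid transform (R, t) that, together with the camera intrinsics K, projects LiDAR points into the image so that LiDAR edge points can be aligned with image features. The sketch below shows only this basic projection step, p_img ~ K(R p_lidar + t); the intrinsic and extrinsic values are placeholder assumptions, not calibration results from the papers.

```python
# Minimal sketch of LiDAR-to-camera projection used when evaluating a calibration.
# K, R, t below are placeholders, not estimated parameters from the paper.
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])              # pinhole intrinsics (fx, fy, cx, cy)
R = np.eye(3)                                # extrinsic rotation, LiDAR -> camera
t = np.array([0.1, -0.05, 0.2])              # extrinsic translation (meters)

def project_lidar_to_image(points_lidar):
    """points_lidar: (N, 3) LiDAR points with positive depth; returns (N, 2) pixels."""
    points_cam = points_lidar @ R.T + t      # rigid transform into the camera frame
    uv = points_cam @ K.T                    # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]            # perspective divide

pts = np.array([[2.0, 0.5, 10.0], [1.0, -0.3, 5.0]])
print(project_lidar_to_image(pts))
```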

Calibrating the extrinsic parameters between a LiDAR and a visible light camera is a challenging task, because accurate 3D feature points are not easy to extract from sparse point clouds. In this paper, we propose a calibration method for estimating the extrinsic parameters of a LiDAR-camera system by considering background point clouds. In this method, an additional background behind the object is introduced to refine the edge points. We process each scanning line by back-projection in turn to get the edge points in the refining scheme, thus leading to results robust to sensor noise...

10.22541/au.165451875.53371159/v1 preprint EN Authorea (Authorea) 2022-06-06

We propose PR-RRN, a novel neural-network-based method for Non-rigid Structure-from-Motion (NRSfM). PR-RRN consists of Residual-Recursive Networks (RRN) and two extra regularization losses. RRN is designed to effectively recover the 3D shape and camera from 2D keypoints with a residual-recursive structure. As NRSfM is a highly under-constrained problem, we propose new pairwise regularizations to further regularize the reconstruction. The Rigidity-based Pairwise Contrastive Loss regularizes the representation by encouraging higher...

10.48550/arxiv.2108.07506 preprint EN cc-by arXiv (Cornell University) 2021-01-01
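
The Rigidity-based Pairwise Contrastive Loss is only partially described in this truncated abstract, so the block below is a generic pairwise contrastive loss in PyTorch that illustrates the mechanism of pulling "similar" pairs together and pushing other pairs at least a margin apart; how pairs would be labeled from rigidity, and the margin value, are assumptions rather than the paper's definition.

```python
# Generic pairwise contrastive loss sketch (not PR-RRN's exact formulation).
import torch
import torch.nn.functional as F

def pairwise_contrastive_loss(z1, z2, similar, margin=1.0):
    """z1, z2: (B, D) paired embeddings; similar: (B,) float labels in {0, 1}."""
    d = F.pairwise_distance(z1, z2)                      # Euclidean distance per pair
    pull = similar * d.pow(2)                            # similar pairs: pull together
    push = (1 - similar) * F.relu(margin - d).pow(2)     # dissimilar pairs: push apart
    return (pull + push).mean()

z1, z2 = torch.randn(8, 32), torch.randn(8, 32)
similar = torch.randint(0, 2, (8,)).float()              # assumed rigidity-derived labels
print(pairwise_contrastive_loss(z1, z2, similar))
```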