Lina Liu

ORCID: 0000-0001-6483-1557
Research Areas
  • Advanced Vision and Imaging
  • Image Processing Techniques and Applications
  • Human Pose and Action Recognition
  • Gait Recognition and Analysis
  • Optical measurement and interference techniques
  • Video Surveillance and Tracking Methods
  • Anomaly Detection Techniques and Applications
  • Advanced Image Processing Techniques
  • Tribology and Wear Analysis
  • Carbon Nanotubes in Composites
  • Image Enhancement Techniques
  • Polymer Nanocomposites and Properties
  • Robotics and Sensor-Based Localization
  • Nanoparticle-Based Drug Delivery
  • Hand Gesture Recognition Systems
  • Lubricants and Their Additives
  • Dendrimers and Hyperbranched Polymers
  • Advanced Image Fusion Techniques
  • Remote Sensing and LiDAR Applications
  • Advanced Image and Video Retrieval Techniques
  • Flood Risk Assessment and Management
  • Educational Technology and Pedagogy
  • Biofuel production and bioconversion
  • Advanced Cellulose Research Studies
  • Receptor Mechanisms and Signaling

Shandong University of Technology
2016-2024

China Mobile (China)
2024

Baidu (China)
2021-2023

Zhejiang University
2007-2023

State Key Laboratory of Industrial Control Technology
2023

Northeast Forestry University
2022

Shijiazhuang Tiedao University
2022

Soochow University
2021

University of Alberta
2020

Zhejiang A & F University
2019

Self-supervised learning shows great potential in monocular depth estimation, using image sequences as the only source of supervision. Although people have tried to use high-resolution images, the accuracy of depth prediction has not been significantly improved. In this work, we find that the core reason comes from inaccurate estimation in large gradient regions, making the bilinear interpolation error gradually disappear as the resolution increases. To obtain more accurate estimation, it is necessary to obtain features with both spatial and semantic information....

10.1609/aaai.v35i3.16329 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2021-05-18
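
A minimal NumPy sketch (not the paper's code) of the claim above: when a coarse depth map is bilinearly upsampled, the error concentrates in large-gradient regions such as depth edges, while smooth regions are nearly unaffected.

```python
# Minimal NumPy sketch: bilinear upsampling of a low-resolution depth map
# produces errors concentrated at large-gradient regions (depth edges).
import numpy as np


def bilinear_resize(img, out_h, out_w):
    """Resize a 2-D array with bilinear interpolation (align_corners=True style)."""
    in_h, in_w = img.shape
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, in_h - 1), np.minimum(x0 + 1, in_w - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy


# Ground-truth depth: flat background at 10 m with a foreground object at 2 m.
gt = np.full((64, 64), 10.0)
gt[:, 32:] = 2.0

# Simulate a coarse prediction, then upsample it back to full resolution.
coarse = bilinear_resize(gt, 16, 16)
pred = bilinear_resize(coarse, 64, 64)

err = np.abs(pred - gt)
edge = np.abs(np.gradient(gt, axis=1)) > 0.5  # large-gradient (edge) pixels
print("mean error at depth edges   :", err[edge].mean())
print("mean error in smooth regions:", err[~edge].mean())
```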

Depth completion aims to recover a dense depth map from a sparse one, with the corresponding color image as input. Recent approaches mainly formulate depth completion as a one-stage end-to-end learning task, which outputs dense depth maps directly. However, the feature extraction and supervision in one-stage frameworks are insufficient, limiting the performance of these approaches. To address this problem, we propose a novel residual learning framework, which formulates depth completion as a two-stage learning task, i.e., a sparse-to-coarse stage and a coarse-to-fine stage. First, a coarse dense depth map is obtained by a simple...

10.1609/aaai.v35i3.16311 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2021-05-18
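
An illustrative PyTorch sketch of the two-stage idea described above (module and layer choices are assumptions, not the paper's architecture): a sparse-to-coarse network predicts a coarse dense depth from the sparse input alone, and a coarse-to-fine network refines it with the color image by learning a residual.

```python
# Illustrative two-stage residual depth-completion pipeline:
# sparse-to-coarse, then coarse-to-fine with RGB guidance.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))


class SparseToCoarse(nn.Module):
    """Stage 1: predict a coarse dense depth map from the sparse depth alone."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, sparse_depth):
        return self.net(sparse_depth)


class CoarseToFine(nn.Module):
    """Stage 2: refine the coarse depth with RGB guidance by predicting a residual."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(4, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, coarse_depth, rgb):
        residual = self.net(torch.cat([coarse_depth, rgb], dim=1))
        return coarse_depth + residual  # residual learning on top of the coarse map


sparse = torch.zeros(1, 1, 64, 64)
sparse[:, :, ::8, ::8] = torch.rand(1, 1, 8, 8) * 10  # ~1.5% valid measurements
rgb = torch.rand(1, 3, 64, 64)

coarse = SparseToCoarse()(sparse)
dense = CoarseToFine()(coarse, rgb)
print(dense.shape)  # torch.Size([1, 1, 64, 64])
```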

10.1016/j.compmedimag.2020.101765 article EN Computerized Medical Imaging and Graphics 2020-07-21

Remarkable results have been achieved by DCNN-based self-supervised depth estimation approaches. However, most of these approaches can only handle either day-time or night-time images, and their performance degrades on all-day images due to the large domain shift and the variation of illumination between day and night images. To relieve these limitations, we propose a domain-separated network. Specifically, to relieve the negative influence of disturbing terms (illumination, etc.), we partition the information of day and night image pairs into two...

10.1109/iccv48922.2021.01250 article EN 2021 IEEE/CVF International Conference on Computer Vision (ICCV) 2021-10-01
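
A hedged PyTorch sketch of the domain-separation idea: each image is encoded into an illumination-invariant feature plus a "private" illumination feature, and the two sub-spaces are kept complementary with an orthogonality penalty. Module names and losses are illustrative assumptions, not the paper's implementation.

```python
# Two-branch encoder with an orthogonality constraint between the
# illumination-invariant and illumination ("private") feature sub-spaces.
import torch
import torch.nn as nn


class TwoBranchEncoder(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.invariant = nn.Sequential(nn.Conv2d(3, dim, 3, padding=1), nn.ReLU(),
                                       nn.Conv2d(dim, dim, 3, padding=1))
        self.private = nn.Sequential(nn.Conv2d(3, dim, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(dim, dim, 3, padding=1))

    def forward(self, x):
        return self.invariant(x), self.private(x)


def orthogonality_loss(feat_a, feat_b):
    """Encourage the two feature sub-spaces to carry complementary information."""
    a = feat_a.flatten(1)
    b = feat_b.flatten(1)
    a = a / (a.norm(dim=1, keepdim=True) + 1e-8)
    b = b / (b.norm(dim=1, keepdim=True) + 1e-8)
    return (a * b).sum(dim=1).pow(2).mean()


encoder = TwoBranchEncoder()
day = torch.rand(2, 3, 64, 64)
night = torch.rand(2, 3, 64, 64) * 0.2  # darker images

day_inv, day_priv = encoder(day)
night_inv, night_priv = encoder(night)

# The invariant features would feed a shared depth decoder; the private
# features absorb illumination so depth is less disturbed by day/night shift.
loss_ortho = orthogonality_loss(day_inv, day_priv) + orthogonality_loss(night_inv, night_priv)
print(float(loss_ortho))
```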

This paper addresses the guided depth completion task, in which the goal is to predict a dense depth map given a guidance RGB image and sparse depth measurements. Recent advances on this problem nurture hopes that one day we can acquire accurate and dense depth at very low cost. A major challenge is to effectively make use of extremely sparse measurements, e.g., measurements covering less than 1% of the image pixels. In this paper, we propose a fully differentiable model that avoids convolving on sparse tensors by jointly learning interpolation and refinement. More specifically,...

10.1109/tip.2021.3055629 article EN IEEE Transactions on Image Processing 2021-01-01
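
A hedged PyTorch sketch of the "interpolate then refine" pattern for very sparse guided depth completion. The interpolation below is a simple differentiable inverse-distance weighting; the paper's learned interpolation and refinement networks are not reproduced here.

```python
# Densify ~1% sparse measurements with differentiable inverse-distance
# weighting, then refine the coarse map with an RGB-conditioned residual CNN.
import torch
import torch.nn as nn


def idw_interpolate(sparse_depth, valid_mask, power=2.0):
    """Densify sparse depth by inverse-distance weighting over valid pixels."""
    b, _, h, w = sparse_depth.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    coords = torch.stack([ys, xs], dim=-1).float().view(-1, 2)   # (HW, 2)
    dense = torch.zeros_like(sparse_depth)
    for i in range(b):
        keep = valid_mask[i, 0].view(-1) > 0
        pts = coords[keep]                                       # (N, 2)
        vals = sparse_depth[i, 0].view(-1)[keep]                 # (N,)
        dist = torch.cdist(coords, pts) + 1e-6                   # (HW, N)
        weight = 1.0 / dist.pow(power)
        weight = weight / weight.sum(dim=1, keepdim=True)
        dense[i, 0] = (weight @ vals).view(h, w)
    return dense


refine = nn.Sequential(nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
                       nn.Conv2d(32, 1, 3, padding=1))

mask = (torch.rand(1, 1, 48, 48) < 0.01).float()                 # ~1% measured pixels
sparse = mask * torch.rand(1, 1, 48, 48) * 10.0
rgb = torch.rand(1, 3, 48, 48)

coarse = idw_interpolate(sparse, mask)
refined = coarse + refine(torch.cat([coarse, rgb], dim=1))       # residual refinement
print(refined.shape)  # torch.Size([1, 1, 48, 48])
```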

Remarkable progress has been achieved by current depth completion approaches, which produce dense depth maps from sparse depth maps and corresponding color images. However, the performance of these approaches is limited by insufficient feature extraction and fusion. In this work, we propose an efficient multi-modal fusion based framework (MFF-Net), which can efficiently extract and fuse features of different modalities in both the encoding and decoding processes, so that more details and better performance can be obtained. Specifically,...

10.1109/lra.2023.3234776 article EN IEEE Robotics and Automation Letters 2023-01-06
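
An illustrative PyTorch sketch of a generic multi-modal fusion block: RGB and depth feature maps are fused with a learned channel-attention gate. This is a common fusion pattern used as an assumption here, not the MFF-Net block from the paper.

```python
# Channel-attention fusion of RGB features and depth features.
import torch
import torch.nn as nn


class FusionBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels, 1),
            nn.Sigmoid(),
        )
        self.merge = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, rgb_feat, depth_feat):
        both = torch.cat([rgb_feat, depth_feat], dim=1)
        attn = self.gate(both)                       # per-channel weights in [0, 1]
        fused = self.merge(both)
        return attn * rgb_feat + (1 - attn) * depth_feat + fused


block = FusionBlock(32)
rgb_feat = torch.rand(1, 32, 40, 40)
depth_feat = torch.rand(1, 32, 40, 40)
print(block(rgb_feat, depth_feat).shape)  # torch.Size([1, 32, 40, 40])
```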

High-performance natural plant fiber reinforced polymer biocomposites with excellent friction and wear properties hold significant practical value in industry. Unfortunately, it remains a major challenge to engineer the interfacial interactions between fiber and matrix, which determine the mechanical properties of the final composites. Herein, we decorate the coir fiber surface by depositing polyethylenimine (PEI) and graphene nanosheets and then prepare polypropylene/coir fiber biocomposites. As compared with unmodified fibers, the decorated fibers can...

10.1021/acssuschemeng.9b04381 article EN ACS Sustainable Chemistry & Engineering 2019-10-28

This paper focuses on the mask utilization of video object segmentation (VOS). The masks here mean the reference masks in the memory bank, i.e., several chosen high-quality predicted masks, which are usually used together with the reference frames. The masks depict the edge and contour features of the target and indicate its boundary against the background, while the frames contain the raw RGB information of the whole image. It is obvious that masks could play a significant role in VOS, but this has not been well explored yet. To tackle this, we propose to investigate the advantages of both...

10.1109/tip.2022.3208409 article EN IEEE Transactions on Image Processing 2022-01-01
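
A hedged PyTorch sketch of a memory read for VOS: the query frame attends over memory keys, and the retrieved values carry both frame (RGB) features and mask features, so boundary cues from stored masks are used alongside appearance. Shapes and names are assumptions, not the paper's model.

```python
# Query-key attention read over a memory bank holding frame and mask features.
import torch
import torch.nn.functional as F

B, C, T, HW = 1, 64, 3, 24 * 24              # memory holds T past frames

query_key   = torch.rand(B, C, HW)           # from the current frame
memory_key  = torch.rand(B, C, T * HW)       # from stored frames
frame_value = torch.rand(B, C, T * HW)       # RGB-derived features
mask_value  = torch.rand(B, C, T * HW)       # features of stored high-quality masks

# Affinity between every query location and every memory location.
affinity = torch.einsum("bcq,bcm->bqm", query_key, memory_key) / (C ** 0.5)
weights = F.softmax(affinity, dim=-1)        # (B, HW, T*HW)

# Read both kinds of memory with the same attention weights.
read_frame = torch.einsum("bqm,bcm->bcq", weights, frame_value)
read_mask  = torch.einsum("bqm,bcm->bcq", weights, mask_value)

# A decoder (not shown) would consume the concatenated readout.
readout = torch.cat([read_frame, read_mask], dim=1)
print(readout.shape)                         # torch.Size([1, 128, 576])
```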

Neural Radiance Fields (NeRFs) have demonstrated impressive performance in vision and graphics tasks, such as novel view synthesis and immersive reality. However, the shape-radiance ambiguity of radiance fields remains a challenge, especially in the sparse-viewpoint setting. Recent work resorts to integrating depth priors into outdoor NeRF training to alleviate the issue. However, the criteria for selecting depth priors and the relative merits of different priors have not been thoroughly investigated. Moreover, how different approaches should use the depth priors is also an unexplored problem....

10.1145/3581783.3612306 article EN 2023-10-26

To overcome the side effects and drug resistance in cancer chemotherapy, oxaliplatin (OXA) was encapsulated into chitosan-based polymeric micelles with a glycolipid-like structure, which were formed by stearic acid-grafted chitosan oligosaccharide (CSO-SA). CSO-SA with a 6.89% amino substitution degree was synthesized in this paper. The critical micelle concentration was about 0.12 mg/mL. A CSO-SA solution of 1.0 mg/mL had a 34.8 nm number-average diameter and a +50.8 mV surface potential in aqueous medium. A thin-film dispersion method mediated by lecithin...

10.3109/1061186x.2010.499465 article EN Journal of drug targeting 2010-09-20

Person reidentification usually refers to matching people in different camera views in nonoverlapping multicamera networks. Many existing methods learn a similarity measure by projecting the raw features to a latent subspace so as to make the same target's distance smaller than different targets' distances. However, targets captured in different views should hold intrinsic attributes as well as view-specific attributes. Projecting all the data into a single subspace would cause the loss of such information and comparably poor discriminability. To address this problem, in this paper, a method based on...

10.1109/tnnls.2016.2602855 article EN IEEE Transactions on Neural Networks and Learning Systems 2017-04-12
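
A hedged PyTorch sketch of subspace metric learning for re-ID: features from two camera views are projected into a latent space and trained so that same-identity pairs are closer than different-identity pairs by a margin. Using one projection per view is an illustrative choice echoing the abstract's point about view-specific attributes, not the paper's algorithm.

```python
# Per-view linear projections trained with a triplet margin objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, latent_dim = 128, 32
proj_a = nn.Linear(feat_dim, latent_dim, bias=False)   # projection for camera A
proj_b = nn.Linear(feat_dim, latent_dim, bias=False)   # projection for camera B
optim = torch.optim.SGD(list(proj_a.parameters()) + list(proj_b.parameters()), lr=0.01)

# Toy batch: anchor from camera A, positive (same id) and negative from camera B.
anchor   = torch.rand(16, feat_dim)
positive = anchor + 0.05 * torch.rand(16, feat_dim)    # same person, slight view change
negative = torch.rand(16, feat_dim)                    # different people

for _ in range(100):
    a, p, n = proj_a(anchor), proj_b(positive), proj_b(negative)
    loss = F.triplet_margin_loss(a, p, n, margin=1.0)  # pull same ids together, push others apart
    optim.zero_grad()
    loss.backward()
    optim.step()

print("final triplet loss:", float(loss))
```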

The 3D reconstruction of forests provides a strong basis for the scientific regulation of tree growth and the fine survey of forest resources. Depth estimation is the key to reconstructing an inter-forest scene, and it directly determines the effect of digital stereo reproduction. In order to solve the problem that existing stereo matching methods lack the ability to use environmental information to find consistency in ill-posed regions, resulting in poor performance in regions with weak texture, occlusion, and other inconspicuous features, LANet, a network based on...

10.3389/fpls.2022.978564 article EN cc-by Frontiers in Plant Science 2022-09-02

Through the functionalization of multiwalled carbon nanotubes (MWCNTs) by o,o′‐diallylbisphenol A (DBA), the interface between MWCNTs and bismaleimide (BMI) was improved, as detected by scanning electron microscopy (SEM) and dynamic mechanical analysis (DMA). The improved interface was considered to be the main reason for the greatly increased microhardness value and greatly improved microtribological properties of the MWCNTs/BMI composites. Besides, the wear mechanism of the composite was also believed to be related to the interfacial situation. The rough...

10.1002/pat.1345 article EN Polymers for Advanced Technologies 2008-12-15

Original multiwalled carbon nanotubes (O‐MWCNTs) and amino-functionalized, ethylenediamine‐treated ones (MWCNTs‐EDA) were mixed with bismaleimide (BMI) resin to prepare O‐MWCNT/BMI and MWCNT‐EDA/BMI composites, respectively. Raman spectroscopy, thermogravimetric analysis, and infrared spectroscopy were used to investigate the influence of amino-functionalization on the nanotube (MWCNT) framework. Dynamic mechanical analysis, scanning electron microscopy images of the fractured surface, and field emission scanning electron microscopy images of the worn surface were used to determine...

10.1002/app.30156 article EN Journal of Applied Polymer Science 2009-05-08

Recently, massive deep learning-based image dehazing methods have sprung up. These methods can effectively remove most of the haze and obtain far better results than traditional methods. With the removal of haze, however, edge details are also lost, which is usually more noticeable in the gradient space. This paper proposes a gradient-guided dual-branch network (GGDB-Net) for image dehazing. Specifically, we explore the gradient map of the hazy image to guide our model to focus on such regions during restoration. We implement two parallel branches with...

10.1142/s0218126622502905 article EN Journal of Circuits Systems and Computers 2022-06-09
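
A hedged PyTorch sketch of gradient guidance for dehazing: a Sobel gradient map of the hazy input highlights edge regions and is used to weight one branch's output, so restoration emphasizes detail-rich areas. This is a generic guidance pattern, not the GGDB-Net architecture.

```python
# Sobel gradient map of the hazy image used to gate a detail branch.
import torch
import torch.nn as nn
import torch.nn.functional as F


def gradient_map(img):
    """Per-pixel gradient magnitude of a grayscale version of the image."""
    gray = img.mean(dim=1, keepdim=True)
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)


content_branch = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                               nn.Conv2d(16, 3, 3, padding=1))
detail_branch  = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                               nn.Conv2d(16, 3, 3, padding=1))

hazy = torch.rand(1, 3, 64, 64)
grad = gradient_map(hazy)
grad = grad / (grad.amax(dim=(2, 3), keepdim=True) + 1e-8)   # normalize to [0, 1]

# Detail branch output is emphasized where gradients are large (edges);
# the sum of the two branches forms the dehazed estimate.
dehazed = content_branch(hazy) + grad * detail_branch(hazy)
print(dehazed.shape)  # torch.Size([1, 3, 64, 64])
```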

The great potential of unsupervised monocular depth estimation has been demonstrated by many works, thanks to its low annotation cost and accuracy comparable to supervised methods. To further improve performance, recent works mainly focus on designing more complex network structures and exploiting extra information, e.g., semantic segmentation. These methods optimize the models through the reconstruction relationship between the target and reference images to varying degrees. However, previous works prove that this image...

10.1109/icra48891.2023.10160534 article EN 2023-05-29
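
A hedged PyTorch sketch of the photometric reconstruction loss commonly used in this line of work: a weighted mix of SSIM and L1 between the target image and a reference image reconstructed (warped) with the predicted depth and pose. The warping is omitted; `reconstructed` stands in for the synthesized view, and the 0.85/0.15 weights follow common practice rather than this particular paper.

```python
# SSIM + L1 photometric reconstruction loss between target and synthesized view.
import torch
import torch.nn.functional as F


def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM dissimilarity over 3x3 local windows."""
    mu_x = F.avg_pool2d(x, 3, 1, 1)
    mu_y = F.avg_pool2d(y, 3, 1, 1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return ((1 - num / den) / 2).clamp(0, 1)


def photometric_loss(target, reconstructed, alpha=0.85):
    l1 = (target - reconstructed).abs()
    return (alpha * ssim(target, reconstructed) + (1 - alpha) * l1).mean()


target = torch.rand(1, 3, 64, 64)
reconstructed = target + 0.05 * torch.randn(1, 3, 64, 64)  # stand-in for the warped reference
print(float(photometric_loss(target, reconstructed)))
```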