Xingjian Li

ORCID: 0000-0001-8073-7552
Research Areas
  • Domain Adaptation and Few-Shot Learning
  • Advanced Neural Network Applications
  • Multimodal Machine Learning Applications
  • Machine Learning and ELM
  • Adversarial Robustness in Machine Learning
  • Machine Learning and Data Classification
  • Neural Networks and Applications
  • Natural Language Processing Techniques
  • Topic Modeling
  • COVID-19 diagnosis using AI
  • Constraint Satisfaction and Optimization
  • Machine Learning and Algorithms
  • Data Management and Algorithms
  • Explainable Artificial Intelligence (XAI)
  • Privacy-Preserving Technologies in Data
  • Human Pose and Action Recognition
  • Parallel Computing and Optimization Techniques
  • Advanced Data Storage Technologies
  • Distributed and Parallel Computing Systems
  • Cryptography and Data Security
  • Advanced Control Systems Optimization
  • Advanced Electron Microscopy Techniques and Applications
  • Smart Agriculture and AI
  • Cloud Computing and Resource Management
  • Rough Sets and Fuzzy Logic

University of Science and Technology of China
2025

Baidu (China)
2019-2024

North Carolina State University
2024

Carnegie Mellon University
2022-2024

Emory University
2020-2024

Linyi University
2024

Sichuan University
2022-2024

State Key Laboratory of Oral Diseases
2022-2024

University of Macau
2021-2023

Peking University
2023

Abstract: Bone formation and deposition are initiated by sensory nerve infiltration in adaptive bone remodeling. Here, we focused on the role of Semaphorin 3A (Sema3A), expressed by nerves, in mechanical load-induced nerve withdrawal, using an orthodontic tooth movement (OTM) model. Firstly, Sema3A was activated after the 3rd day of OTM, coinciding with a decrease in nerves and an increase in pain threshold. Sema3A, rather than nerve growth factor (NGF), was highly expressed in both trigeminal ganglion axons and the periodontal ligament following OTM. Moreover,...

10.1038/s41368-023-00269-6 article EN cc-by International Journal of Oral Science 2024-01-19

While recent studies on semi-supervised learning have shown remarkable progress in leveraging both labeled and unlabeled data, most of them presume a basic setting in which the model is randomly initialized. In this work, we consider semi-supervised learning and transfer learning jointly, leading to a more practical and competitive paradigm that can utilize powerful pre-trained models from a source domain as well as labeled/unlabeled data in a target domain. To better exploit the value of both pre-trained weights and unlabeled target examples, we introduce adaptive consistency regularization, which consists of...

10.1109/cvpr46437.2021.00685 article EN 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021-06-01
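The consistency idea sketched in the abstract above can be illustrated with a toy numpy version. The confidence threshold, the KL form, and the masking rule here are assumptions of this sketch, not the paper's exact losses:

```python
import numpy as np

def softmax(z):
    # numerically stable row-wise softmax
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def adaptive_consistency_loss(src_logits, tgt_logits, tau=0.8):
    """Consistency between a frozen pre-trained source model and the target
    model on unlabeled examples, weighted by source-model confidence.
    A hypothetical simplification of the adaptive consistency idea."""
    p_src = softmax(src_logits)
    p_tgt = softmax(tgt_logits)
    conf = p_src.max(axis=1)                  # source-model confidence
    mask = (conf >= tau).astype(float)        # keep only confident examples
    kl = (p_src * (np.log(p_src + 1e-12) - np.log(p_tgt + 1e-12))).sum(axis=1)
    return (mask * kl).sum() / max(mask.sum(), 1.0)
```

When the target model agrees with the source model, the penalty vanishes; low-confidence source predictions are masked out so they cannot propagate noise.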

Transfer learning through fine-tuning a pre-trained neural network with an extremely large dataset, such as ImageNet, can significantly accelerate training, while the accuracy is frequently bottlenecked by the limited dataset size of the new target task. To solve the problem, some regularization methods, constraining the outer layer weights of the target network using the starting point as references (SPAR), have been studied. In this paper, we propose a novel regularized transfer learning framework DELTA, namely DEep Learning Transfer using Feature Map...

10.48550/arxiv.1901.09229 preprint EN other-oa arXiv (Cornell University) 2019-01-01
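The core quantity in a feature-map-based regularizer like the one described above can be sketched as an (optionally channel-weighted) L2 distance between the pre-trained and fine-tuned feature maps. The per-channel weights here stand in for an attention mechanism and are assumed inputs, not the paper's implementation:

```python
import numpy as np

def feature_map_regularizer(fm_pretrained, fm_finetuned, weights=None):
    """Weighted L2 distance between the feature maps of the pre-trained
    and the fine-tuned network for one input image.
    fm_*: arrays of shape (channels, height, width)."""
    diff = ((fm_finetuned - fm_pretrained) ** 2).sum(axis=(1, 2))
    if weights is None:
        weights = np.ones(diff.shape[0])   # uniform channel importance
    return float((weights * diff).sum())
```

Adding this term to the task loss pulls the fine-tuned network's intermediate representations back toward the starting point, channel by channel.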

Uncertainty estimation for unlabeled data is crucial to active learning. With a deep neural network employed as the backbone model, the data selection process is highly challenging due to the potential over-confidence of model inference. Existing methods resort to special learning fashions (e.g., adversarial) or auxiliary models to address this challenge. This tends to result in complex and inefficient pipelines, which would render the methods impractical. In this work, we propose a novel algorithm that leverages noise stability to estimate...

10.1609/aaai.v38i12.29270 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2024-03-24
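The noise-stability idea above can be sketched in a few lines: perturb the model parameters with small Gaussian noise and treat the resulting output drift as an uncertainty score. The toy linear "model" and the averaging scheme are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def noise_stability_uncertainty(forward, x, param, sigma=0.01, n_trials=8, seed=0):
    """Score an input x by how much the model output moves when the
    parameters are perturbed with small Gaussian noise."""
    rng = np.random.default_rng(seed)
    base = forward(param, x)
    drift = 0.0
    for _ in range(n_trials):
        noisy = param + sigma * rng.standard_normal(param.shape)
        drift += np.linalg.norm(forward(noisy, x) - base)
    return drift / n_trials

# toy linear model standing in for a deep network
linear = lambda W, x: W @ x
```

Inputs whose predictions are fragile under parameter noise receive higher scores and would be prioritized for annotation.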

Temporal relational modeling in video is essential for human action understanding, such as action recognition and action segmentation. Although Graph Convolution Networks (GCNs) have shown promising advantages in relation reasoning on many tasks, it is still a challenge to apply graph convolution networks to long video sequences effectively. The main reason is that the large number of nodes (i.e., frames) makes it hard for GCNs to capture and model temporal relations in videos. To tackle this problem, in this paper, we introduce an effective GCN...

10.1609/aaai.v35i4.16377 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2021-05-18
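A single temporal graph-convolution step of the kind the abstract describes can be sketched as follows, with frames as nodes and a banded adjacency connecting each frame to its temporal neighbors. The windowed adjacency and the omitted learnable weight matrix are simplifying assumptions:

```python
import numpy as np

def temporal_graph_conv(frame_feats, window=2):
    """One graph-convolution step over per-frame features: each frame
    aggregates (a normalized average of) features from frames at most
    `window` steps away. frame_feats: (num_frames, feat_dim)."""
    t = frame_feats.shape[0]
    adj = np.zeros((t, t))
    for i in range(t):
        lo, hi = max(0, i - window), min(t, i + window + 1)
        adj[i, lo:hi] = 1.0                  # connect temporal neighbors (incl. self)
    deg = adj.sum(axis=1, keepdims=True)
    return (adj / deg) @ frame_feats         # degree-normalized aggregation
```

Stacking such steps with growing windows (or dilated adjacencies) is one way to extend the receptive field over long sequences without densely connecting all frames.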

We propose a neural network (NN) approach that yields approximate solutions for high-dimensional optimal control (OC) problems and demonstrate its effectiveness using examples from multiagent path finding. Our approach yields controls in feedback form, where the policy function is given by an NN. In particular, we fuse the Hamilton–Jacobi–Bellman (HJB) and Pontryagin maximum principle (PMP) approaches by parameterizing the value function with an NN, which enables us to obtain approximately optimal controls in real time without having to solve an optimization problem. Once...

10.1109/tcst.2022.3172872 article EN IEEE Transactions on Control Systems Technology 2022-06-01
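In outline, the HJB–PMP fusion the abstract describes can be written as follows. This is a generic finite-horizon sketch: $L$, $f$, and $G$ denote a running cost, dynamics, and terminal cost, which are notational assumptions of this sketch rather than the paper's own formulation.

```latex
% HJB equation satisfied by the value function V:
-\partial_t V(x,t) \;=\; \min_{u}\left\{\, L(x,u) + \nabla_x V(x,t)^{\top} f(x,u) \,\right\},
\qquad V(x,T) = G(x).

% Parameterizing V \approx V_\theta with an NN, the PMP condition
% gives the feedback policy directly from the gradient of V_\theta:
u^{*}(x,t) \;=\; \operatorname*{arg\,min}_{u}\left\{\, L(x,u) + \nabla_x V_\theta(x,t)^{\top} f(x,u) \,\right\}.
```

Because the minimization over $u$ uses only the current state and $\nabla_x V_\theta$, the control can be evaluated in real time once $V_\theta$ is trained.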

Transformers have achieved state-of-the-art performance in numerous tasks. In this paper, we propose a continuous-time formulation of transformers. Specifically, we consider a dynamical system whose governing equation is parametrized by transformer blocks. We leverage optimal transport theory to regularize the training problem, which enhances stability in training and improves the generalization of the resulting model. Moreover, we demonstrate that the regularization is necessary as it promotes uniqueness and regularity of solutions....

10.48550/arxiv.2501.18793 preprint EN arXiv (Cornell University) 2025-01-30

Liquid crystal elastomers with near-ambient temperature-responsiveness (NAT-LCEs) have been extensively studied for building biocompatible, low-power-consumption devices and robotics. However, conventional manufacturing methods face limitations in programmability (e.g., molding) or suffer from low nematic order (e.g., DIW printing). Here, a hybrid cooling strategy is proposed for programmable three-dimensional (3D) printing of NAT-LCEs with enhanced nematic order, intricate shape forming, and morphing capability. By integrating...

10.1021/acsnano.4c15521 article EN ACS Nano 2025-02-13

Hang Hua, Xingjian Li, Dejing Dou, Chengzhong Xu, Jiebo Luo. Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2021.

10.18653/v1/2021.naacl-main.258 article EN cc-by Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies 2021-01-01

Central to active learning (AL) is what data should be selected for annotation. Existing works attempt to select highly uncertain or informative data. Nevertheless, it remains unclear how the selected data impacts the test performance of the task model used in AL. In this work, we explore such an impact by theoretically proving that selecting unlabeled data of higher gradient norm leads to a lower upper bound of the test loss, resulting in better test performance. However, due to the lack of label information, directly computing the gradient norm is infeasible. To address the challenge,...

10.1609/aaai.v36i8.20834 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2022-06-28
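One common way to approximate a label-dependent gradient norm without labels, in the spirit of the abstract above, is to take the expectation over pseudo-labels drawn from the model's own predictive distribution. For a softmax classifier, the loss gradient w.r.t. the logits for label y is p − e_y, so the expectation has a closed form. This is a sketch, not the paper's exact estimator:

```python
import numpy as np

def expected_gradient_norm(logits):
    """Expected norm of the cross-entropy gradient w.r.t. the logits,
    with the expectation over pseudo-labels sampled from softmax(logits)."""
    z = logits - logits.max()           # stable softmax
    p = np.exp(z) / np.exp(z).sum()
    k = len(p)
    # gradient for true label y is (p - one_hot(y)); weight by p[y]
    norms = [np.linalg.norm(p - np.eye(k)[y]) for y in range(k)]
    return float(sum(p[y] * norms[y] for y in range(k)))
```

Confident predictions yield near-zero expected gradient norm, while uncertain (near-uniform) predictions yield large scores, so ranking by this quantity prioritizes high-gradient examples.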

Application-level protocol specifications are helpful for network security management, including intrusion detection and prevention, which rely on monitoring technologies such as deep packet inspection. Moreover, detailed knowledge of protocol specifications is also an effective way of detecting malicious code. However, current methods for obtaining the message formats of unknown proprietary protocols (i.e., with no publicly available specification), especially binary protocols, rely on highly manual operations, such as reverse engineering, which is time-consuming...

10.1109/pdcat.2011.25 article EN 2011-10-01

The advent of large-scale pretrained language models (PLMs) has contributed greatly to the progress in natural language processing (NLP). Despite its recent success and wide adoption, fine-tuning a PLM often suffers from overfitting, which leads to poor generalizability due to the extremely high complexity of the model and the limited training samples from downstream tasks. To address this problem, we propose a novel and effective fine-tuning framework, named layerwise noise stability regularization (LNSR). Specifically, our method perturbs the input...

10.1109/tnnls.2023.3330926 article EN IEEE Transactions on Neural Networks and Learning Systems 2023-01-01
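The layerwise noise-stability idea above can be sketched as injecting Gaussian noise into an intermediate representation and penalizing how much the output changes. The toy `head` callable below stands in for the layers above the injection point and is an assumption of this sketch:

```python
import numpy as np

def layerwise_noise_stability_penalty(hidden, head, sigma=0.05, n=4, seed=0):
    """Inject Gaussian noise into a hidden representation and penalize the
    mean squared change of the output produced by the layers above it."""
    rng = np.random.default_rng(seed)
    clean = head(hidden)
    penalty = 0.0
    for _ in range(n):
        noisy = hidden + sigma * rng.standard_normal(hidden.shape)
        penalty += ((head(noisy) - clean) ** 2).sum()
    return penalty / n
```

Adding this penalty to the fine-tuning loss encourages the upper layers to be locally flat around the representations they actually see, which is one way to damp overfitting.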

Over the past decades, we have witnessed huge advances in FPGA technologies. The topic of floating-point accelerators on FPGAs has gained renewed interest due to increased device size and the emergence of fast hardware floating-point libraries. The popularity of the FFT makes it easier to justify spending lots of effort doing detailed optimization. However, the ever-increasing data size in some compelling application domains remains beyond the capability of existing accelerators, and the demand for more performance keeps this an active research topic. In this paper, leveraging...

10.1109/fpt.2011.6132672 article EN 2011-12-01

Regularization that incorporates the linear combination of empirical loss and explicit regularization terms as the loss function has been frequently used for many machine learning tasks. The regularization term is designed in different types, depending on its applications. While regularized learning often boosts performance with higher accuracy and faster convergence, the regularization would sometimes hurt the empirical loss minimization and lead to poor performance. To deal with such issues in this work, we propose a novel strategy, namely Gradients Orthogonal D...

10.1145/3530836 article EN ACM Transactions on Knowledge Discovery from Data 2022-04-18
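The tension between empirical-loss minimization and the regularizer that the abstract describes can be illustrated with an orthogonal decomposition of the regularizer gradient. The drop-the-conflicting-parallel-part rule below is a sketch of the general idea, not the paper's exact update:

```python
import numpy as np

def orthogonalized_update(g_emp, g_reg):
    """Decompose the regularizer gradient into components parallel and
    orthogonal to the empirical-loss gradient, and drop the parallel part
    when it opposes empirical-loss minimization."""
    denom = g_emp @ g_emp
    if denom == 0:
        return g_emp + g_reg
    coef = (g_reg @ g_emp) / denom
    g_par = coef * g_emp              # component along the empirical gradient
    g_orth = g_reg - g_par            # component orthogonal to it
    if coef < 0:                      # parallel part fights the loss: discard
        g_par = np.zeros_like(g_par)
    return g_emp + g_par + g_orth
```

The resulting update direction always has a nonnegative inner product with the empirical-loss gradient, so the regularizer can shape the solution without undoing loss minimization.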

Transfer learning through fine-tuning a pre-trained neural network with an extremely large dataset, such as ImageNet, can significantly improve and accelerate training while the accuracy is frequently bottlenecked by the limited dataset size of the new target task. To solve the problem, some regularization methods, constraining the outer layer weights of the target network using the starting point as references (SPAR), have been studied. In this article, we propose a novel regularized transfer learning framework DELTA, namely DE...

10.1145/3473912 article EN ACM Transactions on Knowledge Discovery from Data 2021-10-22