- Domain Adaptation and Few-Shot Learning
- Advanced Neural Network Applications
- Multimodal Machine Learning Applications
- Machine Learning and ELM
- Adversarial Robustness in Machine Learning
- Machine Learning and Data Classification
- Neural Networks and Applications
- Natural Language Processing Techniques
- Topic Modeling
- COVID-19 diagnosis using AI
- Constraint Satisfaction and Optimization
- Machine Learning and Algorithms
- Data Management and Algorithms
- Explainable Artificial Intelligence (XAI)
- Privacy-Preserving Technologies in Data
- Human Pose and Action Recognition
- Parallel Computing and Optimization Techniques
- Advanced Data Storage Technologies
- Distributed and Parallel Computing Systems
- Cryptography and Data Security
- Advanced Control Systems Optimization
- Advanced Electron Microscopy Techniques and Applications
- Smart Agriculture and AI
- Cloud Computing and Resource Management
- Rough Sets and Fuzzy Logic
University of Science and Technology of China
2025
Baidu (China)
2019-2024
North Carolina State University
2024
Carnegie Mellon University
2022-2024
Emory University
2020-2024
Linyi University
2024
Sichuan University
2022-2024
State Key Laboratory of Oral Diseases
2022-2024
University of Macau
2021-2023
Peking University
2023
Abstract: Bone formation and deposition are initiated by sensory nerve infiltration in adaptive bone remodeling. Here, we focused on the role of Semaphorin 3A (Sema3A), expressed by sensory nerves, in mechanical load-induced nerve withdrawal, using an orthodontic tooth movement (OTM) model. Firstly, Sema3A was activated after the 3rd day of OTM, coinciding with a decrease in sensory nerves and an increase in the pain threshold. Sema3A, rather than nerve growth factor (NGF), was highly expressed in both trigeminal ganglion axons and the periodontal ligament following OTM. Moreover,...
While recent studies on semi-supervised learning have shown remarkable progress in leveraging both labeled and unlabeled data, most of them presume a basic setting in which the model is randomly initialized. In this work, we consider semi-supervised learning and transfer learning jointly, leading to a more practical and competitive paradigm that can utilize both powerful pre-trained models from a source domain as well as labeled/unlabeled data in the target domain. To better exploit the value of both pre-trained weights and unlabeled target examples, we introduce adaptive consistency regularization, which consists...
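The consistency idea above can be sketched generically — the function name, the confidence `threshold`, and the gating rule here are illustrative assumptions, not the paper's actual components:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(source_logits, target_logits, threshold=0.8):
    """Adaptive consistency sketch: penalize divergence between the
    pre-trained (source) model and the fine-tuned model on unlabeled
    examples, but only where the source model is confident (the gate
    makes the regularization 'adaptive')."""
    p_src = softmax(source_logits)
    p_tgt = softmax(target_logits)
    confident = p_src.max(axis=-1) >= threshold      # adaptive mask
    kl = (p_src * (np.log(p_src + 1e-12) - np.log(p_tgt + 1e-12))).sum(axis=-1)
    if not confident.any():
        return 0.0
    return float(kl[confident].mean())
```

When both models agree on a confident example the penalty vanishes; disagreement on confident examples is penalized by the KL term.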
Transfer learning through fine-tuning a pre-trained neural network with an extremely large dataset, such as ImageNet, can significantly accelerate training, while the accuracy is frequently bottlenecked by the limited dataset size of the new target task. To solve the problem, some regularization methods, constraining the outer layer weights of the network using the starting point as references (SPAR), have been studied. In this paper, we propose a novel regularized transfer learning framework DELTA, namely DEep Learning Transfer using Feature Map...
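A starting-point-as-reference penalty of the kind SPAR denotes can be sketched as follows; this is an L2-SP-style illustration of the baseline idea, not DELTA's feature-map regularizer:

```python
import numpy as np

def l2_sp_penalty(weights, start_point, alpha=0.01):
    """SPAR-style penalty sketch: instead of shrinking weights toward
    zero, shrink them toward the pre-trained starting point, which
    discourages the fine-tuned model from drifting far from the
    source-domain solution."""
    return alpha * sum(
        float(np.sum((w - w0) ** 2)) for w, w0 in zip(weights, start_point)
    )
```

The penalty is zero exactly at the starting point and grows quadratically with the deviation of each weight tensor.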
Uncertainty estimation for unlabeled data is crucial to active learning. With a deep neural network employed as the backbone model, the data selection process is highly challenging due to the potential over-confidence of the model inference. Existing methods resort to special learning fashions (e.g. adversarial) or auxiliary models to address this challenge. This tends to result in complex and inefficient pipelines, which would render the methods impractical. In this work, we propose a novel algorithm that leverages noise stability to estimate...
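A minimal sketch of uncertainty via noise stability, with a toy softmax classifier standing in for the deep backbone (all names and the scoring rule here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(x, w):
    """Toy softmax classifier standing in for the backbone model."""
    z = x @ w
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def noise_stability_score(x, w, sigma=0.01, n_trials=20):
    """Noise-stability sketch: perturb the parameters with small Gaussian
    noise and measure how far the prediction moves on average. A larger
    drift means the prediction is less stable, i.e. more uncertain."""
    base = predict(x, w)
    drift = 0.0
    for _ in range(n_trials):
        w_noisy = w + sigma * rng.standard_normal(w.shape)
        drift += np.abs(predict(x, w_noisy) - base).sum()
    return drift / n_trials
```

Examples whose scores are largest under this measure would be the candidates selected for labeling; no auxiliary model or adversarial training loop is needed.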
Temporal relational modeling in video is essential for human action understanding, such as action recognition and action segmentation. Although Graph Convolution Networks (GCNs) have shown promising advantages in relation reasoning on many tasks, it is still a challenge to apply graph convolution networks to long video sequences effectively. The main reason is that the large number of nodes (i.e., frames) makes it hard for GCNs to capture and model temporal relations in videos. To tackle this problem, in this paper, we introduce an effective GCN...
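A single graph-convolution step over a video's frame graph can be sketched with a row-normalized temporal adjacency; this is a generic illustration of frames-as-nodes aggregation, not the paper's architecture:

```python
import numpy as np

def temporal_graph_conv(frames, window=1):
    """One graph-convolution step over a frame graph: each frame (node)
    aggregates features from frames within `window` steps of it, using a
    row-normalized adjacency, i.e. X' = D^-1 A X."""
    t, d = frames.shape
    adj = np.zeros((t, t))
    for i in range(t):
        lo, hi = max(0, i - window), min(t, i + window + 1)
        adj[i, lo:hi] = 1.0                       # temporal neighbors (incl. self)
    adj /= adj.sum(axis=1, keepdims=True)         # row-normalize: mean aggregation
    return adj @ frames
```

The quadratic-in-`t` adjacency is exactly why long sequences are hard for plain GCNs: the node count grows with every frame.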
We propose a neural network (NN) approach that yields approximate solutions for high-dimensional optimal control (OC) problems and demonstrate its effectiveness using examples from multiagent path finding. Our approach yields controls in a feedback form, where the policy function is given by an NN. In particular, we fuse the Hamilton–Jacobi–Bellman (HJB) and Pontryagin maximum principle (PMP) approaches by parameterizing the value function with an NN, which enables us to obtain approximately optimal controls in real time without having to solve an optimization problem. Once...
Transformers have achieved state-of-the-art performance in numerous tasks. In this paper, we propose a continuous-time formulation of transformers. Specifically, we consider a dynamical system whose governing equation is parametrized by transformer blocks. We leverage optimal transport theory to regularize the training problem, which enhances stability in training and improves the generalization of the resulting model. Moreover, we demonstrate that the regularization is necessary as it promotes uniqueness and regularity of solutions...
Liquid crystal elastomers with near-ambient temperature-responsiveness (NAT-LCEs) have been extensively studied for building biocompatible, low-power-consumption devices and robotics. However, conventional manufacturing methods face limitations in programmability (e.g., molding) or low nematic order (e.g., DIW printing). Here, a hybrid cooling strategy is proposed for programmable three-dimensional (3D) printing of NAT-LCEs with enhanced nematic order, intricate shape forming, and morphing capability. By integrating...
Hang Hua, Xingjian Li, Dejing Dou, Chengzhong Xu, Jiebo Luo. Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2021.
Central to active learning (AL) is what data should be selected for annotation. Existing works attempt to select highly uncertain or informative data. Nevertheless, it remains unclear how the selected data impacts the test performance of the task model used in AL. In this work, we explore such an impact by theoretically proving that selecting unlabeled data of higher gradient norm leads to a lower upper bound of the test loss, resulting in better test performance. However, due to the lack of label information, directly computing the gradient norm is infeasible. To address this challenge,...
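A label-free gradient-norm proxy can be sketched for a softmax classifier by taking the expectation of the logit-gradient norm under the model's own predictive distribution; this is one common workaround for the missing labels, and the paper's exact scheme may differ:

```python
import numpy as np

def expected_grad_norm(probs):
    """For cross-entropy with a softmax head, the gradient w.r.t. the
    logits under label y is (p - e_y). Lacking labels, take the
    expectation of its norm under the model's own distribution p."""
    norms = []
    for y in range(probs.shape[-1]):
        g = probs.copy()
        g[y] -= 1.0                      # logit gradient if the label were y
        norms.append(np.linalg.norm(g))
    return float(np.dot(probs, norms))   # expectation over candidate labels

def select_for_annotation(batch_probs, k):
    """Pick the k unlabeled examples with the largest expected gradient norm."""
    scores = [expected_grad_norm(p) for p in batch_probs]
    return sorted(np.argsort(scores)[-k:].tolist())
```

Uncertain (near-uniform) predictions get larger expected gradient norms than confident ones, so this proxy prefers the examples whose labels would move the model most.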
Application-level protocol specifications are helpful for network security management, including intrusion detection and prevention, which rely on monitoring technologies such as deep packet inspection. Moreover, detailed knowledge of protocol specifications is also an effective way of detecting malicious code. However, current methods of obtaining unknown proprietary message formats (i.e., with no publicly available specification), especially for binary protocols, rely highly on manual operations, and such reverse engineering is time-consuming...
The advent of large-scale pretrained language models (PLMs) has contributed greatly to the progress in natural language processing (NLP). Despite its recent success and wide adoption, fine-tuning a PLM often suffers from overfitting, which leads to poor generalizability due to the extremely high complexity of the model and the limited training samples from downstream tasks. To address this problem, we propose a novel and effective fine-tuning framework, named layerwise noise stability regularization (LNSR). Specifically, our method perturbs the input...
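The noise-injection-and-stability idea can be sketched on a toy feed-forward stack; the layer, names, and penalty form here are illustrative stand-ins, not the actual LNSR objective for PLMs:

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(x, w):
    """One toy feed-forward layer (tanh) standing in for a PLM layer."""
    return np.tanh(x @ w)

def noise_stability_penalty(x, weights, sigma=0.1):
    """Layerwise noise-stability sketch: inject Gaussian noise at each
    layer's input and penalize the squared change it causes in that
    layer's output, encouraging locally flat, noise-robust
    representations at every depth."""
    penalty, h = 0.0, x
    for w in weights:
        noise = sigma * rng.standard_normal(h.shape)
        penalty += float(np.sum((layer(h + noise, w) - layer(h, w)) ** 2))
        h = layer(h, w)
    return penalty
```

The penalty is added to the task loss during fine-tuning; a network whose layer outputs barely move under small input noise pays almost nothing.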
Over the past decades, we have witnessed huge advances in FPGA technologies. The topic of floating-point FFT accelerators on FPGAs has gained renewed interest due to increased device sizes and the emergence of fast hardware floating-point libraries. The popularity of FFT makes it easier to justify spending substantial effort on detailed optimization. However, the ever-increasing data sizes of some compelling application domains remain beyond the capability of existing accelerators, and the demand for more performance keeps this an active research topic. In this paper, leveraging...
Regularization that incorporates a linear combination of the empirical loss and explicit regularization terms as the loss function has been frequently used for many machine learning tasks. The regularization term is designed in different types, depending on the application. While regularized loss functions often boost performance with higher accuracy and faster convergence, they can sometimes hurt the minimization of the empirical loss and lead to poor performance. To deal with such issues, in this work we propose a novel strategy, namely Gradients Orthogonal D...
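A generic gradient-surgery sketch in the spirit of orthogonal decomposition is shown below: when the regularizer's gradient opposes the empirical-loss gradient, drop its conflicting (parallel) component and keep only the orthogonal part. This is an illustration of the idea, not the paper's exact algorithm:

```python
import numpy as np

def orthogonal_combine(g_task, g_reg):
    """If the regularizer gradient has negative inner product with the
    empirical-loss gradient, project out the component along g_task so
    the regularizer can no longer undo loss minimization; otherwise
    combine the gradients unchanged."""
    dot = float(g_task @ g_reg)
    if dot < 0.0:
        g_reg = g_reg - (dot / float(g_task @ g_task)) * g_task
    return g_task + g_reg
```

After the projection, the adjusted regularizer gradient is exactly orthogonal to the task gradient, so the combined update never increases the empirical loss to first order.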
Transfer learning through fine-tuning a pre-trained neural network with an extremely large dataset, such as ImageNet, can significantly improve and accelerate training, while the accuracy is frequently bottlenecked by the limited dataset size of the new target task. To solve the problem, some regularization methods, constraining the outer layer weights of the network using the starting point as references (SPAR), have been studied. In this article, we propose a novel regularized transfer learning framework DELTA, namely DE...