- Advanced Neural Network Applications
- Cloud Computing and Resource Management
- Brain Tumor Detection and Classification
- Parallel Computing and Optimization Techniques
- Machine Learning and Data Classification
- Domain Adaptation and Few-Shot Learning
- Machine Learning and ELM
- Plant Physiology and Cultivation Studies
- Stochastic Gradient Optimization Techniques
- Advanced Data Storage Technologies
- Plant Nutrient Uptake and Metabolism
- Advanced Memory and Neural Computing
- IoT and Edge/Fog Computing
- Distributed and Parallel Computing Systems
- Pasture and Agricultural Systems
- Neural Networks and Applications
- CCD and CMOS Imaging Sensors
- Robotics and Automated Systems
- Algorithms and Data Compression
- Advanced Algorithms and Applications
- Age of Information Optimization
- Optimization and Search Problems
- Advanced Sensor and Control Systems
- Powdery Mildew Fungal Diseases
- Growth and Nutrition in Plants
Northwest A&F University
2021-2025
Hebei Normal University of Science and Technology
2025
Xi'an Jiaotong University
2019-2023
Northwest Institute of Mechanical and Electrical Engineering
2023
Corn stover is rich in lignocellulose, which results in low digestibility. Steam explosion is a hydrothermal pretreatment widely used to improve the digestibility of plant-based materials by inducing cell wall disruption through the rapid release of pressure. However, the impact of steam explosion-treated corn stover on the growth performance of the sheep consuming it remains unclear. This study aimed to evaluate the effects of steam explosion and Lactobacillus buchneri inoculation on the nutritional value and rumen microbiota of corn stover. The stover was prepared with or without...
Due to the increase in computing power, it is possible to improve the feature extraction and data fitting capabilities of DNNs by increasing their depth and model complexity. However, large and complex models greatly increase the training overhead of a DNN, so accelerating the training process becomes a key task. The Tianhe-3 supercomputer, whose peak speed is designed to target the exascale (E-class), provides huge computing power and thus a potential opportunity for DNN training. We implement and extend LeNet, AlexNet, VGG, and ResNet on single MT-2000+ and FT-2000+ compute nodes, as well as extended multi-node...
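Multi-node DNN training of the kind described above typically rests on synchronous data parallelism: each node computes gradients on its shard of a batch, and the shard gradients are averaged before one shared update. A minimal single-process sketch of that averaging step, on a toy linear model rather than the networks named in the abstract:

```python
import numpy as np

def data_parallel_step(weights, inputs, targets, lr=0.1, n_workers=4):
    """One synchronous data-parallel SGD step for a linear least-squares
    model: each simulated worker computes a gradient on its shard of the
    batch, then the shard gradients are averaged (the all-reduce) and
    applied as a single update."""
    x_shards = np.array_split(inputs, n_workers)
    y_shards = np.array_split(targets, n_workers)
    grads = [2.0 * x.T @ (x @ weights - y) / len(x)
             for x, y in zip(x_shards, y_shards)]
    return weights - lr * np.mean(grads, axis=0)

# Toy regression problem; all values are illustrative.
rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0])
X = rng.normal(size=(64, 2))
y = X @ true_w
w = np.zeros(2)
for _ in range(200):
    w = data_parallel_step(w, X, y)
```

With equal-sized shards, the averaged shard gradients equal the full-batch gradient, which is why synchronous data parallelism reproduces single-node SGD while splitting the compute.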
Fluid mechanical simulation is a typical high-performance computing problem. Due to the development of high-precision parallel algorithms, traditional platforms are unable to satisfy the requirements of large-scale algorithms. The Sunway TaihuLight supercomputer, which uses the SW26010 processor as its compute node, provides powerful performance for this purpose. In this paper, a hierarchical parallel fluid machinery simulation (swHPFM) framework and algorithm are proposed. Using the proposed framework and algorithm, engineers can exploit the parallelism of existing...
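Hierarchical parallel solvers of this kind usually decompose the grid across nodes and exchange halo (ghost) values between subdomains each iteration. This is not the swHPFM algorithm itself, only a toy analogue: a 1D Laplace problem relaxed by Jacobi iteration over domain-decomposed subgrids with explicit halo exchange.

```python
import numpy as np

def solve_laplace(n=32, n_domains=4, iters=5000):
    """1D Laplace problem (u'' = 0, u(0)=0, u(1)=1) solved by Jacobi
    iteration over domain-decomposed subgrids with explicit halo
    exchange -- a toy stand-in for the inter-node level of a
    hierarchical parallel framework."""
    u = np.zeros(n + 2)      # n interior points plus the two boundaries
    u[-1] = 1.0              # right boundary condition u(1) = 1
    chunks = np.array_split(np.arange(1, n + 1), n_domains)
    for _ in range(iters):
        new = u.copy()
        for idx in chunks:   # each chunk plays the role of one "node"
            left, right = u[idx[0] - 1], u[idx[-1] + 1]  # halo exchange
            ext = np.concatenate(([left], u[idx], [right]))
            new[idx] = 0.5 * (ext[:-2] + ext[2:])        # Jacobi update
        u = new
    return u
```

The converged solution is the straight line between the boundary values; in a real code the per-chunk loop runs on separate nodes and the halo reads become messages.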
University English is one of the basic core courses for all university students. The course involves many difficult concepts and grammatical structures, so the starting point of most teaching programs is to determine whether instruction is needed and to specify what that instruction should accomplish. The analysis of students' needs is therefore of vital importance to the success of teaching. Traditional teaching does not take individual learning abilities and feedback into account, which can cause students to fail to grasp enough key knowledge and then lose their interest in English....
Hardware-aware automated quantization promises to unlock an entirely new algorithm-hardware co-design paradigm for efficiently accelerating deep neural network (DNN) inference by incorporating the hardware cost into the reinforcement learning (RL)-based strategy search process. Existing works usually design the algorithm targeting one accelerator, with a device-specific performance model or pre-collected data. However, determining the hardware cost is non-trivial for algorithm experts due to their lack of cross-disciplinary knowledge...
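The abstract's RL search is beyond a short snippet, but the core idea — letting a hardware cost model steer per-layer bitwidth selection under an accuracy budget — can be sketched with a greedy stand-in. All layer names, sensitivities, and the bit-count cost model below are hypothetical, not taken from the paper.

```python
# Hypothetical per-layer accuracy loss (%) at each candidate bitwidth.
SENSITIVITY = {
    "conv1": {8: 0.0, 4: 0.2,  2: 1.5},
    "conv2": {8: 0.0, 4: 0.1,  2: 0.4},
    "fc":    {8: 0.0, 4: 0.05, 2: 0.1},
}
PARAMS = {"conv1": 10_000, "conv2": 50_000, "fc": 100_000}

def hardware_cost(bitwidths):
    """Toy hardware cost: total weight bits, a stand-in for the
    latency/energy lookup a device-specific performance model gives."""
    return sum(PARAMS[layer] * b for layer, b in bitwidths.items())

def greedy_search(budget_pct=0.5):
    """Start at 8 bits everywhere; repeatedly lower the bitwidth that
    costs the least accuracy (breaking ties toward the largest layer),
    while total accuracy loss stays within budget_pct."""
    bits = {layer: 8 for layer in SENSITIVITY}
    loss = lambda b: sum(SENSITIVITY[l][bw] for l, bw in b.items())
    while True:
        candidates = []
        for layer, bw in bits.items():
            lower = {8: 4, 4: 2}.get(bw)
            if lower is None:
                continue
            trial = dict(bits, **{layer: lower})
            if loss(trial) <= budget_pct:
                candidates.append((loss(trial), -PARAMS[layer], layer, lower))
        if not candidates:
            return bits
        _, _, layer, new_bw = min(candidates)
        bits[layer] = new_bw
```

An RL-based search replaces this greedy rule with a learned policy, but the hardware cost model enters the loop in the same place.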
Automated quantization has emerged as an entirely new design paradigm that automates the optimal configuration of bitwidths for deep neural networks (DNNs), making DNNs more memory-efficient and faster to execute on hardware with limited resources. Reinforcement learning (RL) and differentiable neural architecture search (DNAS) are the two main solution paths that have shown their superiority. Yet, there are countless methods with various implementations within each path. It has been hard to comprehend their differences and make a...
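Underlying every bitwidth-search method is the quantizer itself. A minimal sketch of uniform symmetric weight quantization, showing the accuracy-vs-bitwidth trade-off these searches navigate (the setup is illustrative, not any specific method from the survey):

```python
import numpy as np

def quantize(w, bits):
    """Uniform symmetric quantization of a weight tensor to `bits` bits:
    scale so the largest magnitude maps to the top integer level, round
    to integers, then dequantize (simulated quantization)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale

# Quantization error shrinks as the bitwidth grows.
rng = np.random.default_rng(0)
w = rng.normal(size=1000)
err = {b: float(np.mean((w - quantize(w, b)) ** 2)) for b in (2, 4, 8)}
```

RL and DNAS approaches differ in how they pick `bits` per layer, but both evaluate configurations through a quantizer of roughly this shape.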
Distributed deep learning has emerged as the principal training paradigm in recent years. However, significant communication overhead often leads to a severe degradation of the performance of data parallelism, which limits the scalability of distributed training. To address this problem, this paper introduces a novel MSDU (Multi-Step Delayed Update) method. MSDU mitigates the negative impact of communication on efficiency by introducing a delay into the parameter aggregation process, allowing for overlap between computation and communication....
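The overlap idea can be illustrated without the distributed machinery: apply each gradient only some steps after it was computed, as if its all-reduce were still in flight while local computation continues. This is a single-process sketch of delayed updates in general, not the paper's exact MSDU algorithm.

```python
import numpy as np

def delayed_update(w, X, y, lr=0.05, delay=2, steps=300):
    """SGD where each gradient is applied only `delay` steps after it is
    computed, imitating an aggregation whose communication overlaps with
    the next steps of local computation."""
    in_flight = []  # gradients whose (simulated) all-reduce is pending
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(X)
        in_flight.append(grad)
        if len(in_flight) > delay:
            w = w - lr * in_flight.pop(0)  # apply the stale gradient
    return w

# Toy regression problem; with a small delay and step size the stale
# gradients still converge to the true solution.
rng = np.random.default_rng(1)
true_w = np.array([0.5, -1.0, 2.0])
X = rng.normal(size=(128, 3))
y = X @ true_w
w = delayed_update(np.zeros(3), X, y)
```

The trade-off the abstract points at is visible here: staleness hides communication latency but constrains the usable step size.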
Deep neural network models perform very well in the field of artificial intelligence, but their success is sensitive to hyperparameters, and the learning rate schedule is one of the most important; searching for it is often time-consuming and computationally resource-intensive. In this paper, we propose a Population Automatic Neural Distributed Algorithm (PANDA) based on population joint optimization, which uses distributed data-parallel deep training to implement a dynamic optimization strategy, with almost no...
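Population-based joint optimization of the learning rate can be sketched in the style of population-based training: each member trains with its own learning rate, and every round the worse half copies the best member's weights and perturbs its learning rate. This is a generic toy stand-in on a linear model, not the PANDA algorithm itself; every constant below is illustrative.

```python
import numpy as np

def population_lr_search(rounds=10, inner_steps=20):
    """Toy population-based search over the learning rate on a linear
    least-squares problem: train each member, then let the worse half
    exploit (copy the best weights) and explore (perturb its rate)."""
    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -3.0])
    X = rng.normal(size=(64, 2))
    y = X @ true_w
    loss = lambda w: float(np.mean((X @ w - y) ** 2))

    pop = [{"w": np.zeros(2), "lr": lr} for lr in (0.001, 0.01, 0.05, 0.2)]
    for _ in range(rounds):
        for m in pop:                        # parallel in a real system
            for _ in range(inner_steps):
                grad = 2.0 * X.T @ (X @ m["w"] - y) / len(X)
                m["w"] = m["w"] - m["lr"] * grad
        pop.sort(key=lambda m: loss(m["w"]))
        best = pop[0]
        for m in pop[len(pop) // 2:]:        # exploit: adopt best weights
            m["w"] = best["w"].copy()
            m["lr"] = best["lr"] * rng.choice([0.8, 1.2])  # explore
    return pop[0]["lr"], loss(pop[0]["w"])
```

The dynamic schedule emerges from the explore step: the surviving learning rate changes over rounds without any separate search run, which is what makes the population approach cheap relative to grid search.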