- Advanced Neural Network Applications
- Domain Adaptation and Few-Shot Learning
- Advanced Image and Video Retrieval Techniques
- Machine Learning and ELM
- Rough Sets and Fuzzy Logic
- Sparse and Compressive Sensing Techniques
- Advanced Computational Techniques and Applications
- Mobile Agent-Based Network Management
- Network Security and Intrusion Detection
- Reliability and Maintenance Optimization
- Video Surveillance and Tracking Methods
- Machine Learning and Data Classification
- Machine Fault Diagnosis Techniques
- Human Pose and Action Recognition
- Multi-Agent Systems and Negotiation
- Financial Distress and Bankruptcy Prediction
- Data Mining Algorithms and Applications
- Cancer-Related Molecular Mechanisms Research
- Risk and Safety Analysis
- Advanced Algorithms and Applications
- Antenna Design and Optimization
- Efficiency Analysis Using DEA
- Engineering Diagnostics and Reliability
- Privacy-Preserving Technologies in Data
- Advanced Data Compression Techniques
The Affiliated Yongchuan Hospital of Chongqing Medical University
2024
Chongqing Medical University
2024
Tianjin Normal University
2024
Shanghai Jiao Tong University
2014-2023
Nanjing University of Posts and Telecommunications
2021-2022
Huawei Technologies (China)
2020
China Mobile (China)
2020
Hunan University of Technology and Business
2019
Hunan University of Technology
2005-2019
Hengyang Normal University
2015-2019
Differentiable architecture search (DARTS) provided a fast solution for finding effective network architectures, but suffered from large memory and computing overheads in jointly training a super-network and searching for an optimal architecture. In this paper, we present a novel approach, namely, Partially-Connected DARTS, by sampling a small part of the super-network to reduce the redundancy in exploring the network space, thereby performing a more efficient search without compromising performance. In particular, we perform operation search in a subset of channels...
Network quantization is an effective solution to compress deep neural networks for practical usage. Existing network quantization methods cannot sufficiently exploit the depth information to generate low-bit compressed networks. In this paper, we propose two novel network quantization approaches: single-level network quantization (SLQ) for high-bit quantization and multi-level network quantization (MLQ) for extremely low-bit quantization (ternary). We are the first to consider network quantization from both the width and depth levels. At the width level, parameters are divided into two parts: one for quantization and the other for re-training to eliminate the quantization loss. SLQ leverages the distribution of the parameters to improve...
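Since the abstract is cut off, here is a minimal sketch of the width-level partition idea it describes: quantize one group of a layer's parameters and freeze them, leaving the other group full-precision for re-training. The uniform codebook, the error-based split, and `frac` are illustrative assumptions, not the paper's exact SLQ procedure.

```python
import numpy as np

def partition_and_quantize(weights, frac=0.5, num_levels=16):
    """Quantize the `frac` of weights with the smallest quantization error
    and freeze them; leave the rest full-precision for re-training."""
    lo, hi = weights.min(), weights.max()
    codebook = np.linspace(lo, hi, num_levels)       # toy uniform codebook
    idx = np.abs(weights[..., None] - codebook).argmin(axis=-1)
    snapped = codebook[idx]                          # nearest codeword
    err = np.abs(snapped - weights)
    frozen = err <= np.quantile(err, frac)           # quantize these first
    return np.where(frozen, snapped, weights), frozen
```

Per the abstract's partition-and-retrain scheme, this step would be iterated: after re-training the unfrozen weights, another group is quantized, until the whole layer is low-bit.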
To enable DNNs on edge devices like mobile phones, low-rank approximation has been widely adopted because of its solid theoretical rationale and efficient implementations. Several previous works attempted to directly approximate a pre-trained model by low-rank decomposition; however, small approximation errors in the parameters can ripple over a large prediction loss. As a result, performance usually drops significantly, and a sophisticated effort at fine-tuning is required to recover accuracy. Apparently, it is not optimal to separate low-rank approximation from...
Differentiable architecture search (DARTS) enables effective neural architecture search (NAS) using gradient descent, but suffers from high memory and computational costs. In this paper, we propose a novel approach, namely Partially-Connected DARTS (PC-DARTS), to achieve efficient and stable searching by reducing the channel and spatial redundancies of the super-network. At the channel level, partial channel connection is presented to randomly sample a small subset of channels for operation selection, which accelerates the search process and suppresses over-fitting. Side operation is introduced...
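As a concrete illustration of the partial channel connection described above, here is a hedged PyTorch sketch: only 1/K of the channels pass through the weighted candidate operations while the rest bypass unchanged, followed by a channel shuffle. The tiny candidate set and K=4 are assumptions, and the paper's edge normalization is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialMixedOp(nn.Module):
    """Partial channel connection: route C/K channels through the mixed
    operation, bypass the rest, then shuffle channels. Assumes `channels`
    is divisible by `k`."""
    def __init__(self, channels, k=4):
        super().__init__()
        self.k = k
        c_part = channels // k
        # A tiny candidate set standing in for the DARTS search space.
        self.ops = nn.ModuleList([
            nn.Conv2d(c_part, c_part, 3, padding=1, bias=False),
            nn.Conv2d(c_part, c_part, 5, padding=2, bias=False),
            nn.Identity(),
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # arch params

    def forward(self, x):
        c_part = x.size(1) // self.k
        x_op, x_skip = x[:, :c_part], x[:, c_part:]
        weights = F.softmax(self.alpha, dim=0)
        mixed = sum(wi * op(x_op) for wi, op in zip(weights, self.ops))
        out = torch.cat([mixed, x_skip], dim=1)
        # Channel shuffle so different channels are sampled across steps.
        n, c, h, w = out.shape
        return (out.view(n, self.k, c // self.k, h, w)
                   .transpose(1, 2).reshape(n, c, h, w))
```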
Batch normalization (BN) is a fundamental unit in modern deep neural networks. However, BN and its variants focus on the normalization statistics but neglect the recovery step, which uses a linear transformation to improve the capacity of fitting complex data distributions. In this paper, we demonstrate that the recovery step can be improved by aggregating the neighborhood of each neuron rather than just considering a single neuron. Specifically, we propose a simple yet effective method named batch normalization with enhanced linear transformation (BNET) to embed spatial contextual...
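The recovery step described above can be made concrete with a short PyTorch sketch: BN computes statistics only, and the per-channel affine transform is replaced by a depthwise convolution, so each neuron is recovered from its spatial neighborhood. The 3x3 kernel size is an illustrative assumption.

```python
import torch.nn as nn

class BNET2d(nn.Module):
    """BN with an enhanced linear transformation: normalization statistics
    as usual, but recovery via a depthwise k x k convolution instead of a
    per-channel scale and shift."""
    def __init__(self, channels, k=3):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels, affine=False)  # statistics only
        self.recover = nn.Conv2d(channels, channels, k, padding=k // 2,
                                 groups=channels, bias=True)  # depthwise

    def forward(self, x):
        return self.recover(self.bn(x))
```

Used this way, the module is a drop-in replacement for `nn.BatchNorm2d` in existing architectures.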
White matter (WM) lesions can be classified into contrast-enhancing lesions (CELs), iron rim lesions (IRLs), and non-iron rim lesions (NIRLs) based on their different pathological mechanisms in relapsing-remitting multiple sclerosis (RRMS). The application of radiomics models established from T2-FLAIR images to classify WM lesions in RRMS is limited, especially for 3-class classification among CELs, IRLs, and NIRLs.
We introduce Reward-Guided Speculative Decoding (RSD), a novel framework aimed at improving the efficiency of inference in large language models (LLMs). RSD synergistically combines a lightweight draft model with a more powerful target model, incorporating a controlled bias to prioritize high-reward outputs, in contrast to existing speculative decoding methods that enforce strict unbiasedness. RSD employs a process reward model to evaluate intermediate decoding steps and dynamically decide whether to invoke the target model, optimizing the trade-off...
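A minimal control loop conveys the mechanism sketched in this abstract: a cheap draft proposes a step, a process reward model scores it, and the expensive target model is invoked only when the reward falls below a threshold. The function names and the hard threshold `tau` are placeholders, one simple instance of an acceptance rule rather than RSD's exact criterion.

```python
def rsd_generate(draft_step, target_step, reward, prompt, max_steps=32, tau=0.7):
    """Reward-guided speculative loop. `draft_step`/`target_step` map a
    partial sequence to a candidate next step; `reward` is a process reward
    model scoring a partial sequence. All three are placeholders."""
    seq = prompt
    for _ in range(max_steps):
        cand = draft_step(seq)                # cheap proposal
        if reward(seq + cand) >= tau:         # high reward: keep draft step
            step = cand
        else:                                 # low reward: pay for target
            step = target_step(seq)
        seq += step
        if step.endswith("<eos>"):
            break
    return seq
```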
Aiming at the problem that the traditional collaborative filtering recommendation algorithm does not fully consider the influence of the correlation between projects on recommendation accuracy, this paper introduces a project attribute fuzzy matrix, measures project relevance through a clustering method, and classifies all the attributes. Then, the attribute weight is introduced into the user similarity calculation, so that the nearest neighbor search is more accurate. In the prediction scoring section, considering the change of user interest with time, it is proposed to use a time...
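The time-weighting idea mentioned at the end can be sketched as follows: an exponential decay discounts older ratings inside a standard neighborhood prediction. All names (`ratings`, `sim`, `half_life`) are illustrative, and the paper's attribute-weighted similarity is not reproduced here.

```python
import math

def predict_rating(user, item, ratings, sim, now, half_life=90.0):
    """Neighborhood prediction with exponential time decay.
    `ratings` maps (user, item) -> (score, timestamp); `sim(u, v)` is a
    precomputed user-user similarity. Older ratings count for less."""
    lam = math.log(2) / half_life
    num = den = 0.0
    for (u, i), (score, t) in ratings.items():
        if i != item or u == user:
            continue
        w = sim(user, u) * math.exp(-lam * (now - t))  # decayed weight
        num += w * score
        den += abs(w)
    return num / den if den else None
```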
To accelerate DNN inference, low-rank approximation has been widely adopted because of its solid theoretical rationale and efficient implementations. Several previous works attempted to directly approximate a pre-trained model by low-rank decomposition; however, small approximation errors in the parameters can ripple over a large prediction loss. Apparently, it is not optimal to separate low-rank approximation from training. Unlike previous works, this paper integrates low-rank approximation and regularization into the training process. We propose Trained Rank Pruning (TRP)...
Differentiable neural architecture search methods have become popular in recent years, mainly due to their low search costs and flexibility in designing the search space. However, these methods suffer from difficulty in optimizing the network, so that the searched network is often unfriendly to hardware. This paper deals with this problem by adding a differentiable latency loss term into the optimization, so that the search process can trade off between accuracy and latency with a balancing coefficient. The core of latency prediction is to encode each architecture and feed it into a multi-layer regressor, with the training data...
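To make the latency term concrete, here is a hedged sketch: an architecture encoding is fed to a small MLP regressor (which would be pre-trained on measured latencies), and its output is added to the task loss with a balancing coefficient. The encoding dimension and MLP shape are assumptions.

```python
import torch.nn as nn

# Differentiable latency head: maps a fixed-length architecture encoding
# (e.g. concatenated softmaxed operation weights) to a latency estimate.
ENC_DIM = 64  # assumed encoding size
latency_head = nn.Sequential(
    nn.Linear(ENC_DIM, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

def search_loss(task_loss, arch_encoding, balance=0.1):
    """Total objective: accuracy term plus a differentiable latency term,
    traded off by a balancing coefficient as described above."""
    latency = latency_head(arch_encoding).squeeze(-1)
    return task_loss + balance * latency.mean()
```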
Neural architecture search (NAS) has attracted increasing attention in both academia and industry. In the early age, researchers mostly applied individual search methods, which sample and evaluate candidate architectures separately and thus incur heavy computational overheads. To alleviate the burden, weight-sharing methods were proposed, in which exponentially many architectures share weights in the same super-network and the costly training procedure is performed only once. These methods, though being much faster, often suffer from the issue of instability. This...
This paper introduces a filter-level pruning method based on similar feature extraction for compressing and accelerating convolutional neural networks with the k-means++ algorithm. In contrast to other methods, the proposed method analyzes the similarities in recognizing features among filters rather than evaluating the importance of filters to prune redundant ones. This strategy is more reasonable and effective. Furthermore, our method does not result in an unstructured network. As a result, it does not need extra sparse representation and could...
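A short sketch of the clustering idea, under the assumption that "similar filters" are found by flattening each filter and running k-means++ over them: one representative per cluster is kept and the rest are pruned. Deriving the cluster count from a keep ratio is an illustrative choice, not necessarily the paper's.

```python
import numpy as np
from sklearn.cluster import KMeans

def similar_filter_prune(conv_weight, keep_ratio=0.5):
    """Cluster a layer's filters with k-means++ and keep one representative
    per cluster, merging similar filters instead of importance-ranking them.
    `conv_weight` has shape (out_channels, in_channels, kH, kW)."""
    n = conv_weight.shape[0]
    k = max(1, int(n * keep_ratio))
    flat = conv_weight.reshape(n, -1)
    km = KMeans(n_clusters=k, init="k-means++", n_init=10).fit(flat)
    keep = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        # Retain the filter closest to its cluster center.
        d = np.linalg.norm(flat[members] - km.cluster_centers_[c], axis=1)
        keep.append(members[d.argmin()])
    return np.sort(np.array(keep))  # indices of filters to retain
```

Because whole filters are removed, the pruned layer stays dense and structured, which is what lets the method avoid sparse representations.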
Network quantization offers an effective solution to deep neural network compression for practical usage. Existing methods cannot theoretically guarantee convergence. This paper proposes a novel iterative framework for network quantization with arbitrary bit-widths. We present two Lipschitz-constraint-based strategies, namely width-level quantization (WLQ) and multi-level quantization (MLQ), for high-bit and extremely low-bit (ternary) quantization, respectively. In WLQ, a partition is developed to divide the parameters in each layer into two groups: one...
The performance of Deep Neural Networks (DNNs) keeps elevating in recent years with increasing network depth and width. To enable DNNs on edge devices like mobile phones, researchers have proposed several compression methods including pruning, quantization, and factorization. Among the factorization-based approaches, low-rank approximation has been widely adopted because of its solid theoretical rationale and efficient implementations. Several previous works attempted to directly approximate a pre-trained...
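Integrating low-rank structure into training, as the TRP-style abstracts above propose, can be illustrated with a periodic SVD projection: every few iterations each weight matrix is snapped to the smallest rank retaining most of its spectral energy. This is only a sketch of the idea; the paper combines it with nuclear-norm-style regularization, and the energy threshold here is an assumption.

```python
import torch

@torch.no_grad()
def low_rank_project(weight, energy=0.98):
    """Project a 2-D weight matrix onto a low-rank approximation via SVD,
    keeping the smallest rank whose singular values retain `energy` of the
    total spectral mass."""
    u, s, vh = torch.linalg.svd(weight, full_matrices=False)
    cum = torch.cumsum(s, dim=0) / s.sum()
    rank = int((cum < energy).sum()) + 1
    return u[:, :rank] @ torch.diag(s[:rank]) @ vh[:rank]

# Inside the training loop, e.g. every few iterations:
#     layer.weight.copy_(low_rank_project(layer.weight))
```

Applying the projection during training lets the network adapt to the low-rank constraint instead of absorbing the approximation error only at fine-tuning time.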
Neural architecture search has attracted wide attention in both academia and industry. To accelerate it, researchers proposed weight-sharing methods, which first train a super-network to reuse computation among different operators, from which exponentially many sub-networks can be sampled and efficiently evaluated. These methods enjoy great advantages in terms of computational costs, but the sampled sub-networks are not guaranteed to be estimated precisely unless an individual training process is taken. This paper owes such inaccuracy...
Purpose: Available information for evaluating the possibility of hospitality firm failure in emerging countries is often deficient. Oversampling can compensate for this but can also yield mixed samples, which limit prediction models' effectiveness. This research aims to provide a feasible approach to handle possible mixed samples caused by oversampling. Design/methodology/approach: This paper uses mixed sample modelling (MSM) when modelling on the enlarged sample of firms. The mixed samples are filtered out with an index through controlling the noisy parameter and outlier...
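One generic way to realize "filtering mixed samples after oversampling", sketched below under stated assumptions: the minority class is enlarged by interpolation, and synthetic points whose nearest neighbors are dominated by the other class are rejected. This is a stand-in illustration, not the paper's MSM index or parameter choices.

```python
import numpy as np

def oversample_then_filter(X, y, minority=1, k=5, purity=0.6, seed=0):
    """Balance the classes by interpolating minority samples, then drop
    synthetic points whose k-neighborhood is mixed with the other class.
    Assumes the minority class is non-empty."""
    rng = np.random.default_rng(seed)
    Xm = X[y == minority]
    n_new = max(0, int((y != minority).sum() - (y == minority).sum()))
    i, j = rng.integers(0, len(Xm), size=(2, n_new))
    t = rng.random((n_new, 1))
    synth = Xm[i] + t * (Xm[j] - Xm[i])           # SMOTE-style interpolation
    kept = []
    for s in synth:
        nbrs = y[np.argsort(np.linalg.norm(X - s, axis=1))[:k]]
        if (nbrs == minority).mean() >= purity:   # reject "mixed" samples
            kept.append(s)
    kept = np.asarray(kept).reshape(-1, X.shape[1])
    return (np.vstack([X, kept]),
            np.concatenate([y, np.full(len(kept), minority)]))
```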
This paper analyzes the reasons for the formation of security problems in mobile agent systems, and compares the security mechanisms and technologies of existing mobile agent systems from the perspective of blocking attacks. On this basis, host protection technology is selected, and a method to enhance the security of mobile agents (referred to as the IEOP method) is proposed. The method first encrypts the agent code using an encryption function, then encapsulates the encrypted code with the improved EOP protocol IEOP, and then traces the suspicious execution result. Experiments show that this method can block most malicious...
It is crucial to reduce the cost of deep convolutional neural networks while preserving their accuracy. Existing methods adaptively prune DNNs in a layer-wise or channel-wise manner based on the input image. In this paper, we develop a novel dynamic network, namely Dynamic-Stride-Net, to improve the residual network with adaptive-stride convolution operations. Dynamic-Stride-Net leverages a gating network to select the strides of convolution blocks based on the outputs of the previous layer. To optimize the selection of strides, the gating network is trained by reinforcement learning. The...
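A hedged sketch of the gating idea: a tiny policy head reads pooled features from the previous layer and picks a convolution stride for the block. The candidate strides and gate architecture are assumptions, and the REINFORCE-style training of the discrete choice is omitted.

```python
import torch.nn as nn
import torch.nn.functional as F

class StrideGate(nn.Module):
    """Gating module that selects a convolution stride per input from the
    previous layer's features. At training time the categorical choice
    would be optimized with reinforcement learning (not shown here)."""
    def __init__(self, channels, strides=(1, 2)):
        super().__init__()
        self.strides = strides
        self.policy = nn.Linear(channels, len(strides))  # stride logits
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)

    def forward(self, x):
        ctx = F.adaptive_avg_pool2d(x, 1).flatten(1)  # summarize input
        action = self.policy(ctx).argmax(dim=1)       # greedy action
        # Toy simplification: apply the first sample's stride to the batch
        # (per-sample strides would break uniform batching).
        s = self.strides[int(action[0])]
        return F.conv2d(x, self.conv.weight, stride=s, padding=1)
```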