- Advanced Neural Network Applications
- Adversarial Robustness in Machine Learning
- Neural Networks and Applications
- Domain Adaptation and Few-Shot Learning
- Terahertz Technology and Applications
- Wireless Signal Modulation Classification
- Soft Robotics and Applications
- Software Testing and Debugging Techniques
- Radar Systems and Signal Processing
- Machine Learning and Data Classification
- Image Processing and 3D Reconstruction
- Modular Robots and Swarm Intelligence
- Robot Manipulation and Learning
- Music Technology and Sound Studies
- Speech and Audio Processing
- Machine Learning in Materials Science
- Data Management and Algorithms
- Brain Tumor Detection and Classification
- Natural Language Processing Techniques
- Multimodal Machine Learning Applications
- Topic Modeling
- Music and Audio Processing
- Spider Taxonomy and Behavior Studies
- Human Pose and Action Recognition
- Advanced Image and Video Retrieval Techniques
Zhejiang University of Technology
2021-2024
Zhejiang University
2024
Southwest University
2023
Deep learning methods achieve great success in many areas due to their powerful feature extraction capabilities and end-to-end training mechanism, and recently they have also been introduced for radio signal modulation classification. In this paper, we propose a novel deep framework called SigNet, where a signal-to-matrix (S2M) operator is adopted to convert the original signal into a square matrix first, which is co-trained with a follow-up CNN architecture. This model is further accelerated by integrating 1D convolution...
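The snippet does not spell out how the S2M operator is defined; as a minimal illustrative sketch (the padding scheme and function name are assumptions, not the paper's actual operator), a 1D signal can be zero-padded to the next perfect-square length and reshaped:

```python
import numpy as np

def signal_to_matrix(signal: np.ndarray) -> np.ndarray:
    """Hypothetical S2M operator: zero-pad a 1D signal to the next
    perfect-square length and reshape it into a square matrix, so a
    follow-up 2D CNN can consume it like an image."""
    n = len(signal)
    side = int(np.ceil(np.sqrt(n)))
    padded = np.zeros(side * side, dtype=signal.dtype)
    padded[:n] = signal
    return padded.reshape(side, side)

# A length-6 signal becomes a 3x3 matrix with three zero-padded entries.
m = signal_to_matrix(np.arange(6.0))
print(m.shape)  # (3, 3)
```

Any such reshaping lets standard image-style CNN kernels see local structure across signal segments, which is presumably the motivation for co-training S2M with the CNN.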
Binary Neural Network (BNN) converts full-precision weights and activations into their extreme 1-bit counterparts, making it particularly suitable for deployment on lightweight mobile devices. While binary neural networks are typically formulated as a constrained optimization problem and optimized in the binarized space, general networks are optimized in an unconstrained continuous space. This paper introduces the Hyperbolic Binary Neural Network (HBNN), which leverages the framework of hyperbolic geometry to optimize the constrained problem. Specifically, we transform...
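The hyperbolic transformation itself is cut off in the snippet, but the constrained binarized space it refers to is the standard 1-bit space; a minimal sketch of that baseline binarization (the zero-to-+1 convention is a common assumption, not taken from this paper):

```python
import numpy as np

def binarize(w: np.ndarray) -> np.ndarray:
    """Standard BNN binarization: map each full-precision weight to its
    sign in {-1, +1} (zeros mapped to +1 by convention). HBNN's
    contribution is how the full-precision weights feeding this step
    are optimized, not the sign map itself."""
    return np.where(w >= 0, 1.0, -1.0)

w = np.array([0.3, -1.2, 0.0, 2.5])
b = binarize(w)
print(b)  # [ 1. -1.  1.  1.]
```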
Deep learning technology has found a promising application in lightweight model design, for which pruning is an effective means of achieving a large reduction in both parameters and floating-point operations (FLOPs). The existing neural network pruning methods mostly start from the consideration of parameter importance and design parameter evaluation metrics to perform pruning iteratively. These methods were not studied from the perspective of network topology, so they might be effective but not efficient, and they require completely different pruning for different datasets. In this article, we study...
Structured pruning methods are developed to bridge the gap between the massive scale of neural networks and limited hardware resources. Most current structured pruning methods rely on training datasets to fine-tune the compressed model, resulting in high computational burdens and being inapplicable for scenarios with stringent privacy and security requirements. As an alternative, some data-free methods have been proposed; however, these often require handcrafted parameter tuning and can only achieve inflexible reconstruction. In this...
Binary Neural Networks~(BNNs) have been proven to be highly effective for deploying deep neural networks on mobile and embedded platforms. Most existing works focus on minimizing quantization errors, improving representation ability, or designing gradient approximations to alleviate the gradient mismatch in BNNs, while leaving the weight sign flipping, a critical factor for achieving powerful BNNs, untouched. In this paper, we investigate the efficiency of weight sign updates in BNNs. We observe that, for vanilla BNNs, over 50\% of the weights remain...
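Since a BNN's deployed behavior changes only when a weight's sign flips, the efficiency of sign updates can be measured directly; a small sketch of that measurement (the function name and example values are illustrative, not from the paper):

```python
import numpy as np

def sign_flip_ratio(w_old: np.ndarray, w_new: np.ndarray) -> float:
    """Fraction of weights whose binarized sign changed after an update.
    In a BNN, only these flips alter the 1-bit network that actually
    runs at inference; updates that keep the sign are 'wasted'."""
    return float(np.mean(np.sign(w_old) != np.sign(w_new)))

w_old = np.array([0.5, -0.2,  0.1, -0.9])
w_new = np.array([0.4,  0.3, -0.1, -1.1])   # two of four signs flipped
r = sign_flip_ratio(w_old, w_new)
print(r)  # 0.5
```

Tracking this ratio across training steps is one way to quantify the observation that many latent weights never change sign.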
Binary neural network (BNN) converts full-precision weights and activations into their extreme 1-bit counterparts, making it particularly suitable for deployment on lightweight mobile devices. While BNNs are typically formulated as a constrained optimization problem and optimized in the binarized space, general networks are optimized in an unconstrained continuous space. This article introduces the hyperbolic BNN (HBNN), which leverages the framework of hyperbolic geometry to optimize the constrained problem. Specifically, we transform the constrained space into an unconstrained one...
The study of sparsity in Convolutional Neural Networks (CNNs) has become widespread to compress and accelerate models in environments with limited resources. By constraining N consecutive weights along the output channel to be group-wise non-zero, the recent 1$\times$N network sparsity has received tremendous popularity for its three outstanding advantages: 1) A large amount of storage space saving via a \emph{Block Sparse Row} matrix. 2) Excellent performance at high sparsity. 3) Significant speedups on CPUs with Advanced...
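A minimal sketch of how a 1$\times$N mask can be formed (the L1 block-scoring rule and function name are illustrative assumptions; the paper's actual selection criterion may differ): blocks of N consecutive output channels at each input position are kept or zeroed as a unit, which is what makes the result storable in Block Sparse Row form.

```python
import numpy as np

def one_by_n_mask(w: np.ndarray, n: int, keep_ratio: float) -> np.ndarray:
    """Illustrative 1xN sparsity mask: group N consecutive output
    channels (rows) per input index (column) into blocks, score each
    block by L1 norm, and keep whole blocks, never individual weights."""
    out_c, in_c = w.shape
    blocks = w.reshape(out_c // n, n, in_c)      # (num_groups, N, in_c)
    scores = np.abs(blocks).sum(axis=1)          # one score per 1xN block
    k = int(round(keep_ratio * scores.size))
    thresh = np.sort(scores, axis=None)[::-1][k - 1]
    keep = (scores >= thresh)[:, None, :]        # broadcast over the block
    return np.broadcast_to(keep, blocks.shape).reshape(out_c, in_c)

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 4))
mask = one_by_n_mask(w, n=4, keep_ratio=0.5)
print(mask.mean())  # 0.5: half the 1x4 blocks survive
```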
N:M sparsity has received increasing attention due to its remarkable performance and latency trade-off compared with structured and unstructured sparsity. However, existing N:M sparsity methods do not differentiate the relative importance of weights among blocks and leave important weights underappreciated. Besides, they directly apply N:M sparsity to the whole network, which will cause severe information loss. Thus, they are still sub-optimal. In this paper, we propose an efficient and effective Multi-Axis Query methodology, dubbed MaxQ,...
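For context, the plain N:M baseline that MaxQ improves on can be sketched in a few lines (this is the standard magnitude-based scheme, not the MaxQ method; the function name is an assumption): in every group of M consecutive weights, only the N with the largest magnitude survive.

```python
import numpy as np

def n_m_prune(w: np.ndarray, n: int = 2, m: int = 4) -> np.ndarray:
    """Plain magnitude-based N:M sparsity: within each group of M
    consecutive weights, zero the (M - N) smallest-magnitude entries.
    Note this treats every group identically, which is exactly the
    uniformity the abstract criticizes."""
    flat = w.reshape(-1, m)
    drop = np.argsort(np.abs(flat), axis=1)[:, : m - n]
    out = flat.copy()
    np.put_along_axis(out, drop, 0.0, axis=1)
    return out.reshape(w.shape)

w = np.array([[0.1, -0.8, 0.05, 0.6],
              [1.2, -0.3, 0.7, -0.2]])
p = n_m_prune(w)
print(p)  # each row of 4 keeps its 2 largest-magnitude entries
```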
Soft filter pruning~(SFP) has emerged as an effective pruning technique that allows pruned filters to update and gives them the opportunity to regrow to the network. However, this strategy applies pruning and training in an alternating manner, which inevitably causes inconsistent representations between the reconstructed network~(R-NN) at training and the pruned network~(P-NN) at inference, resulting in performance degradation. In this paper, we propose to mitigate this gap by learning consistent representation for soft filter pruning, dubbed CR-SFP. Specifically, at each training step,...
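The SFP mechanism being referenced can be sketched simply (the L2-norm criterion follows the usual SFP formulation; the function name is an assumption): low-norm filters are zeroed rather than removed, so they keep receiving gradients and may regrow, which is also the source of the R-NN/P-NN inconsistency that CR-SFP targets.

```python
import numpy as np

def soft_filter_prune(filters: np.ndarray, prune_ratio: float) -> np.ndarray:
    """One SFP step: zero the lowest-L2-norm conv filters instead of
    removing them. The network keeps its full shape (the R-NN), and a
    zeroed filter can regrow in later epochs; hard-removing the zeroed
    filters afterwards yields the P-NN used at inference."""
    norms = np.linalg.norm(filters.reshape(len(filters), -1), axis=1)
    num_prune = int(prune_ratio * len(filters))
    out = filters.copy()
    out[np.argsort(norms)[:num_prune]] = 0.0
    return out

rng = np.random.default_rng(1)
f = rng.standard_normal((8, 3, 3, 3))        # 8 conv filters
g = soft_filter_prune(f, prune_ratio=0.25)
num_zeroed = int((np.abs(g).reshape(8, -1).sum(axis=1) == 0).sum())
print(num_zeroed)  # 2
```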
Lightweight model design has become an important direction in the application of deep learning technology, and pruning is an effective means to achieve a large reduction in parameters and FLOPs. The existing neural network pruning methods mostly start from the importance of parameters and design parameter evaluation metrics to perform pruning iteratively. These methods are not studied from the perspective of network topology, may be effective but not efficient, and require completely different pruning for different datasets. In this paper, we study the graph structure of the neural network and propose a regular graph based...
Deep Neural Networks (DNN) are known to be vulnerable to adversarial samples, the detection of which is crucial for the wide application of these DNN models. Recently, a number of deep testing methods in software engineering were proposed to find the vulnerability of DNN systems, and one of them, i.e., Model Mutation Testing (MMT), was used to successfully detect various adversarial samples generated by different kinds of attacks. However, the mutated models in MMT are always huge in number (e.g., over 100 models), lack diversity, and can be easily circumvented...