Jingyang Xiang

ORCID: 0000-0001-5350-1528
Research Areas
  • Advanced Neural Network Applications
  • Adversarial Robustness in Machine Learning
  • Neural Networks and Applications
  • Domain Adaptation and Few-Shot Learning
  • Terahertz Technology and Applications
  • Wireless Signal Modulation Classification
  • Soft Robotics and Applications
  • Software Testing and Debugging Techniques
  • Radar Systems and Signal Processing
  • Machine Learning and Data Classification
  • Image Processing and 3D Reconstruction
  • Modular Robots and Swarm Intelligence
  • Robot Manipulation and Learning
  • Music Technology and Sound Studies
  • Speech and Audio Processing
  • Machine Learning in Materials Science
  • Data Management and Algorithms
  • Brain Tumor Detection and Classification
  • Natural Language Processing Techniques
  • Multimodal Machine Learning Applications
  • Topic Modeling
  • Music and Audio Processing
  • Spider Taxonomy and Behavior Studies
  • Human Pose and Action Recognition
  • Advanced Image and Video Retrieval Techniques

Zhejiang University of Technology
2021-2024

Zhejiang University
2024

Southwest University
2023

Deep learning methods have achieved great success in many areas due to their powerful feature extraction capabilities and end-to-end training mechanism, and they have recently been introduced for radio signal modulation classification. In this paper, we propose a novel deep framework called SigNet, where a signal-to-matrix (S2M) operator is adopted to first convert the original signal into a square matrix, which is then co-trained with a follow-up CNN architecture. The model is further accelerated by integrating 1D convolution...

10.1109/tccn.2021.3120997 article EN IEEE Transactions on Cognitive Communications and Networking 2021-10-20
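The S2M step described in the abstract can be illustrated with a minimal numpy sketch: pad a 1-D signal to the next perfect-square length and fold it into a matrix that a 2-D CNN can consume. The function name, zero-padding scheme, and row-major fill are assumptions for illustration, not the paper's exact operator:

```python
import numpy as np

def signal_to_matrix(signal, pad_value=0.0):
    """Illustrative signal-to-matrix (S2M) style operator: zero-pad a 1-D
    signal to the next perfect square and reshape it into a square matrix."""
    n = int(np.ceil(np.sqrt(len(signal))))
    padded = np.full(n * n, pad_value, dtype=float)
    padded[:len(signal)] = signal
    return padded.reshape(n, n)

sig = np.arange(10, dtype=float)   # length 10 is padded up to 16
mat = signal_to_matrix(sig)
print(mat.shape)                   # (4, 4)
```

The resulting matrix can then be fed to an ordinary 2-D convolutional stack.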

Binary Neural Network (BNN) converts full-precision weights and activations into their extreme 1-bit counterparts, making it particularly suitable for deployment on lightweight mobile devices. While binary neural networks are typically formulated as a constrained optimization problem and optimized in the binarized space, general neural networks are optimized in an unconstrained continuous space. This paper introduces the Hyperbolic Binary Neural Network (HBNN), which leverages the framework of hyperbolic geometry to optimize the constrained problem. Specifically, we transform...

10.48550/arxiv.2501.03471 preprint EN arXiv (Cornell University) 2025-01-06

Deep learning technology has found promising application in lightweight model design, for which pruning is an effective means of achieving a large reduction in both parameters and floating-point operations (FLOPs). Existing neural network pruning methods mostly start from the consideration of parameter importance and design parameter evaluation metrics to perform pruning iteratively. These methods were not studied from the perspective of network topology, so they may be effective but not efficient, and they require completely different analyses for different datasets. In this article, we study...

10.1109/tnnls.2023.3280899 article EN IEEE Transactions on Neural Networks and Learning Systems 2023-06-13

Structured pruning methods have been developed to bridge the gap between the massive scale of neural networks and limited hardware resources. Most current structured pruning methods rely on training datasets to fine-tune the compressed model, resulting in high computational burdens and making them inapplicable for scenarios with stringent requirements on privacy and security. As an alternative, some data-free methods have been proposed; however, these methods often require handcrafted parameter tuning and can only achieve inflexible reconstruction. In this...

10.48550/arxiv.2403.08204 preprint EN arXiv (Cornell University) 2024-03-12

Binary Neural Networks (BNNs) have been proven to be highly effective for deploying deep neural networks on mobile and embedded platforms. Most existing works focus on minimizing quantization errors, improving representation ability, or designing gradient approximations to alleviate the gradient mismatch in BNNs, while leaving weight sign flipping, a critical factor for achieving powerful BNNs, untouched. In this paper, we investigate the efficiency of weight sign updates in BNNs. We observe that, in vanilla BNNs, over 50% of the weights remain...

10.48550/arxiv.2407.05257 preprint EN arXiv (Cornell University) 2024-07-07
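The sign-flip phenomenon this abstract studies can be made concrete with a small sketch: binarize weights with the sign function before and after a gradient step, then count how many binary values changed. `binarize` and `flip_ratio` are illustrative helpers, not the paper's code:

```python
import numpy as np

def binarize(w):
    """Map full-precision weights to {-1, +1} via the sign function
    (sign(0) mapped to +1), as commonly done in BNN forward passes."""
    return np.where(w >= 0, 1.0, -1.0)

def flip_ratio(w_before, w_after):
    """Fraction of binary weights whose sign flipped after an update,
    i.e. the quantity whose efficiency the paper investigates."""
    return float(np.mean(binarize(w_before) != binarize(w_after)))

w0 = np.array([0.3, -0.2, 0.05, -0.9])
w1 = w0 - 0.1 * np.array([4.0, -3.0, 0.2, 0.1])  # one SGD-like step
print(flip_ratio(w0, w1))                        # → 0.5 (two of four signs flip)
```

Weights with large latent magnitude (like the last entry) rarely flip, which is why many binary weights can stall during vanilla BNN training.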

10.1109/cvpr52733.2024.01500 article EN 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2024-06-16

Binary neural network (BNN) converts full-precision weights and activations into their extreme 1-bit counterparts, making it particularly suitable for deployment on lightweight mobile devices. While BNNs are typically formulated as a constrained optimization problem and optimized in the binarized space, general networks are optimized in an unconstrained continuous space. This article introduces the hyperbolic BNN (HBNN), which leverages the framework of hyperbolic geometry to optimize the constrained problem. Specifically, we transform the constrained problem in hyperbolic space into an unconstrained one...

10.1109/tnnls.2024.3485115 article EN IEEE Transactions on Neural Networks and Learning Systems 2024-01-01

10.18653/v1/2024.emnlp-main.775 article EN Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing 2024-01-01

The study of sparsity in Convolutional Neural Networks (CNNs) has become widespread as a way to compress and accelerate models for environments with limited resources. By constraining N consecutive weights along the output channel to be group-wise non-zero, the recent 1×N network sparsity pattern has received tremendous popularity for its three outstanding advantages: 1) a large amount of storage space saving via a Block Sparse Row matrix; 2) excellent performance at high sparsity; 3) significant speedups on CPUs with Advanced...

10.48550/arxiv.2310.06218 preprint EN other-oa arXiv (Cornell University) 2023-01-01
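The 1×N pattern itself can be sketched in a few lines: partition each row of a (C_out, C_in) weight matrix into consecutive blocks of N weights, rank blocks by L1 norm, and zero the weakest fraction, so pruning removes whole 1×N blocks rather than individual weights. This is a hedged illustration of the sparsity pattern only, with an assumed magnitude-based block selection, not the paper's training method:

```python
import numpy as np

def one_by_n_prune(w, n, sparsity):
    """Zero the lowest-L1-norm 1xN blocks of a (C_out, C_in) weight matrix,
    keeping N consecutive weights group-wise non-zero elsewhere."""
    c_out, c_in = w.shape
    assert c_in % n == 0
    block_norms = np.abs(w).reshape(c_out, c_in // n, n).sum(axis=2)
    k = int(block_norms.size * sparsity)          # number of blocks to prune
    mask = np.ones(block_norms.size)
    mask[np.argsort(block_norms.ravel())[:k]] = 0.0
    mask = mask.reshape(block_norms.shape)
    return w * np.repeat(mask, n, axis=1)         # expand block mask to weights

w = np.array([[1.0, 1.0, 0.1, 0.1],
              [5.0, 5.0, 0.2, 0.2]])
print(one_by_n_prune(w, n=2, sparsity=0.5))      # the two weak blocks are zeroed
```

Because surviving weights come in aligned runs of N, the matrix compresses well into Block Sparse Row storage and vectorizes well on CPUs.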

N:M sparsity has received increasing attention due to its remarkable performance and latency trade-off compared with structured and unstructured sparsity. However, existing N:M sparsity methods do not differentiate the relative importance of weights among blocks and leave important weights underappreciated. Besides, they directly apply N:M sparsity to the whole network, which will cause severe information loss. Thus they are still sub-optimal. In this paper, we propose an efficient and effective Multi-Axis Query methodology, dubbed MaxQ,...

10.48550/arxiv.2312.07061 preprint EN cc-by arXiv (Cornell University) 2023-01-01
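The underlying N:M constraint (keep the N largest-magnitude weights in every group of M consecutive weights) can be sketched in numpy; this illustrates the sparsity pattern MaxQ operates on, not the MaxQ method itself:

```python
import numpy as np

def n_m_sparsify(w, n=2, m=4):
    """Within every group of m consecutive weights, keep only the n with the
    largest magnitude and zero out the rest (plain magnitude-based N:M)."""
    flat = w.reshape(-1, m)
    drop = np.argsort(np.abs(flat), axis=1)[:, :m - n]  # m-n smallest per group
    mask = np.ones_like(flat)
    np.put_along_axis(mask, drop, 0.0, axis=1)
    return (flat * mask).reshape(w.shape)

w = np.array([0.9, -0.1, 0.4, 0.05, -0.7, 0.2, 0.02, 0.8])
print(n_m_sparsify(w))   # two weights survive in each group of four
```

2:4 is the variant with direct hardware support on recent GPUs, which is where the favorable latency trade-off comes from.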

Soft filter pruning (SFP) has emerged as an effective pruning technique that allows pruned filters to update and gives them the opportunity to regrow to the network. However, this training strategy applies pruning and training in an alternating manner, which inevitably causes inconsistent representations between the reconstructed network (R-NN) at training and the pruned network (P-NN) at inference, resulting in performance degradation. In this paper, we propose to mitigate this gap by learning consistent representations for soft filter pruning, dubbed CR-SFP. Specifically, at each training step,...

10.48550/arxiv.2312.11555 preprint EN other-oa arXiv (Cornell University) 2023-01-01
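The SFP baseline the paper builds on can be sketched simply: periodically zero the filters with the smallest L2 norm but keep them in the parameter tensor, so later gradient updates can regrow them. A minimal illustration (function name and the L2-norm criterion are assumptions for this sketch):

```python
import numpy as np

def soft_filter_prune(filters, prune_ratio):
    """Zero the filters with the smallest L2 norm, but keep them in the
    tensor so subsequent gradient updates can regrow them (soft pruning)."""
    norms = np.linalg.norm(filters.reshape(filters.shape[0], -1), axis=1)
    k = int(len(norms) * prune_ratio)
    pruned = filters.copy()
    pruned[np.argsort(norms)[:k]] = 0.0
    return pruned

# 4 filters; the two with the smallest norms are softly zeroed
f = np.array([[3.0, 4.0], [0.1, 0.1], [1.0, 0.0], [5.0, 12.0]])
print(soft_filter_prune(f, prune_ratio=0.5))
```

The gap CR-SFP targets arises because the network trained with these temporarily zeroed filters differs from the hard-pruned network used at inference.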

Deep learning methods have achieved great success in many areas due to their powerful feature extraction capabilities and end-to-end training mechanism, and they have recently been introduced for radio signal modulation classification. In this paper, we propose a novel deep framework called SigNet, where a signal-to-matrix (S2M) operator is adopted to first convert the original signal into a square matrix, which is then co-trained with a follow-up CNN architecture. The model is further accelerated by integrating 1D convolution...

10.48550/arxiv.2011.03525 preprint EN other-oa arXiv (Cornell University) 2020-01-01

Lightweight model design has become an important direction in the application of deep learning technology, and pruning is an effective means of achieving a large reduction in parameters and FLOPs. Existing neural network pruning methods mostly start from the importance of parameters and design parameter evaluation metrics to perform pruning iteratively. These methods are not studied from the perspective of network topology, so they may be effective but not efficient, and they require completely different analyses for different datasets. In this paper, we study the graph structure of the neural network and propose a regular graph based...

10.48550/arxiv.2110.15192 preprint EN other-oa arXiv (Cornell University) 2021-01-01

Deep Neural Networks (DNNs) are known to be vulnerable to adversarial samples, the detection of which is crucial for the wide application of these DNN models. Recently, a number of deep testing methods in software engineering were proposed to find the vulnerability of deep learning systems, and one of them, i.e., Model Mutation Testing (MMT), was successfully used to detect various adversarial samples generated by different kinds of attacks. However, the mutated models in MMT are always huge in number (e.g., over 100 models), lack diversity, and can easily be circumvented...

10.1109/ase51524.2021.9678732 article EN 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE) 2021-11-01
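The core MMT idea can be sketched on a toy model: perturb the model's weights with random noise to create mutants, and measure how often an input's predicted label changes across them. Adversarial inputs sit near decision boundaries, so their labels tend to flip under mutation far more often than those of clean inputs. Everything below (the linear classifier, the Gaussian mutation operator, the thresholds) is an assumed minimal setup, not the paper's experimental configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(weights, x):
    """Tiny linear classifier: class = argmax of W @ x."""
    return int(np.argmax(weights @ x))

def mutation_score(weights, x, n_mutants=50, sigma=0.1):
    """Fraction of Gaussian weight mutants that change the predicted label,
    used here as a proxy for an input's sensitivity to model mutation."""
    base = predict(weights, x)
    flips = sum(
        predict(weights + sigma * rng.standard_normal(weights.shape), x) != base
        for _ in range(n_mutants)
    )
    return flips / n_mutants

W = np.array([[2.0, 0.0], [0.0, 2.0]])
clean = np.array([1.0, 0.0])        # far from the decision boundary
borderline = np.array([0.51, 0.5])  # near the boundary, adversarial-like
print(mutation_score(W, clean), mutation_score(W, borderline))
```

A detector would flag inputs whose mutation score exceeds a calibrated threshold; the paper's contribution concerns making the mutant set small and diverse enough for this to be practical.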