Model-driven deep unrolling: Towards interpretable deep learning against noise attacks for intelligent fault diagnosis
Subjects:
Deep Learning; Neural Networks, Computer; Algorithms; 0202 electrical engineering, electronic engineering, information engineering; 02 engineering and technology
DOI:
10.1016/j.isatra.2022.02.027
Publication Date:
2022-02-22
AUTHORS (7)
ABSTRACT
Intelligent fault diagnosis (IFD) has made tremendous progress over the past decades, owing in large part to deep learning (DL)-based methods. However, the "black box" nature of DL-based methods still seriously hinders their wide application in industry, especially in aero-engine IFD, and how to interpret the learned features remains a challenging problem. Furthermore, IFD based on vibration signals is often affected by heavy noise, leading to a sharp drop in accuracy. To address these two problems, we develop a model-driven deep unrolling method to achieve ante-hoc interpretability. Its core idea is to unroll the optimization algorithm of a predefined model into a neural network, which is naturally interpretable and robust to noise attacks. Motivated by the recent multi-layer sparse coding (ML-SC) model, we propose to solve a general sparse coding (GSC) problem across different layers and derive the corresponding layered GSC (LGSC) algorithm. Following the idea of deep unrolling, the proposed algorithm is unfolded into LGSC-Net, whose relationship with the convolutional neural network (CNN) is also discussed in depth. The effectiveness of the proposed model is verified on an aero-engine bevel gear fault experiment and a helical gear fault experiment under three kinds of adversarial noise attacks. The interpretability is also discussed from the perspective of the core of model-driven deep unrolling and its inductive reconstruction property.
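To make the core idea of model-driven deep unrolling concrete, below is a minimal PyTorch sketch that unfolds the classic ISTA iteration for a single-layer sparse coding model into a fixed-depth network, in the spirit of LISTA (Gregor & LeCun, 2010). This is not the authors' LGSC-Net: the layer count, dictionary size, and initialization are illustrative assumptions, and the paper's layered GSC formulation spans multiple layers rather than the single layer shown here.

import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    """Unrolls T ISTA steps for min_z 0.5*||x - D z||_2^2 + lam*||z||_1."""

    def __init__(self, dict_init: torch.Tensor, n_layers: int = 5, lam: float = 0.1):
        super().__init__()
        D = dict_init                                  # (signal_dim, code_dim)
        L = torch.linalg.matrix_norm(D, 2) ** 2        # Lipschitz const of the gradient
        # One ISTA step z <- soft(z + (1/L) D^T (x - D z), lam/L) is rewritten
        # as z <- soft(W_e x + S z, theta), with every quantity learnable:
        self.W_e = nn.Parameter(D.t() / L)                              # encoder
        self.S = nn.Parameter(torch.eye(D.shape[1]) - (D.t() @ D) / L)  # lateral term
        self.theta = nn.Parameter(torch.full((n_layers,), lam / L.item()))
        self.n_layers = n_layers

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, signal_dim) noisy signals -> z: (batch, code_dim) sparse codes
        b = x @ self.W_e.t()
        z = torch.zeros_like(b)
        for t in range(self.n_layers):                 # one layer = one ISTA step
            c = b + z @ self.S.t()
            # soft-thresholding: proximal operator of the l1 sparsity penalty
            z = torch.sign(c) * torch.relu(torch.abs(c) - self.theta[t])
        return z

if __name__ == "__main__":
    D0 = torch.randn(64, 128)          # random initial dictionary (assumption)
    net = UnrolledISTA(D0, n_layers=5)
    z = net(torch.randn(8, 64))        # 8 signals of length 64 -> sparse codes
    print(z.shape)                     # torch.Size([8, 128])

Because each layer is one step of a known optimization algorithm, the learned weights keep their model-level meaning (dictionary, step size, threshold), which is what makes an unrolled network interpretable ante hoc and amenable to analysis under noise, as the abstract argues for LGSC-Net.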