Improved AutoEncoder with LSTM module and KL divergence

Autoencoder · Kullback–Leibler divergence
DOI: 10.48550/arxiv.2404.19247
Publication Date: 2024-04-30
ABSTRACT
The task of anomaly detection is to separate anomalous data from normal data in a dataset. Models such as the deep convolutional autoencoder (CAE) network and the deep support vector data description (SVDD) model have been universally employed and have demonstrated significant success in detecting anomalies. However, the over-reconstruction ability of the CAE network for anomalous data can easily lead to a high false negative rate in detecting anomalous data. On the other hand, the deep SVDD model has the drawback of feature collapse, which leads to a decrease in detection accuracy for anomalies. To address these problems, we propose the Improved AutoEncoder with LSTM module and Kullback-Leibler divergence (IAE-LSTM-KL) model in this paper. An LSTM network is added after the encoder to memorize feature representations of normal data. In the meanwhile, the phenomenon of feature collapse can also be mitigated by penalizing the featured input to the SVDD module via KL divergence. The efficacy of the IAE-LSTM-KL model is validated through experiments on both synthetic and real-world datasets. Experimental results show that the IAE-LSTM-KL model yields higher detection accuracy for anomalies. In addition, it is also found that the IAE-LSTM-KL model demonstrates enhanced robustness to contaminated outliers in the dataset.
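
Since the abstract outlines a concrete architecture (a convolutional autoencoder with an LSTM module inserted after the encoder, plus an SVDD-style objective and a KL penalty on the latent features), a minimal sketch may help make the pieces concrete. The sketch below assumes PyTorch and 1x28x28 inputs; the layer sizes, the length-1 sequence fed to the LSTM, the fixed SVDD center, the loss weights, and the uniform target of the KL term are all illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IAELSTMKL(nn.Module):
    """Sketch of the IAE-LSTM-KL idea: a convolutional autoencoder whose
    latent code passes through an LSTM before decoding, trained with a
    reconstruction term, an SVDD-style distance term, and a KL penalty."""

    def __init__(self, latent_dim=64):
        super().__init__()
        # Convolutional encoder (assumes 1x28x28 inputs)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # -> 16x14x14
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # -> 32x7x7
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, latent_dim),
        )
        # LSTM module added after the encoder to memorize feature
        # representations of normal data
        self.lstm = nn.LSTM(latent_dim, latent_dim, batch_first=True)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 7 * 7), nn.ReLU(),
            nn.Unflatten(1, (32, 7, 7)),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(),        # -> 16x14x14
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1,
                               output_padding=1), nn.Sigmoid(),     # -> 1x28x28
        )

    def forward(self, x):
        z = self.encoder(x)
        # Treat each latent code as a length-1 sequence for the LSTM
        z_mem, _ = self.lstm(z.unsqueeze(1))
        z_mem = z_mem.squeeze(1)
        return z_mem, self.decoder(z_mem)

def iae_lstm_kl_loss(x, x_rec, z, center, lam_svdd=1.0, lam_kl=0.1):
    """Composite loss: reconstruction error, SVDD distance of the latent
    code to a fixed center, and KL(softmax(z) || uniform) to discourage
    feature collapse. The KL target used here is an assumption."""
    rec = F.mse_loss(x_rec, x)
    svdd = ((z - center) ** 2).sum(dim=1).mean()
    p = F.softmax(z, dim=1)
    uniform = torch.full_like(p, 1.0 / p.size(1))
    # F.kl_div(log_q, p) computes KL(p || q)
    kl = F.kl_div(uniform.log(), p, reduction="batchmean")
    return rec + lam_svdd * svdd + lam_kl * kl

# Illustrative usage with a dummy batch
model = IAELSTMKL()
x = torch.rand(8, 1, 28, 28)
z, x_rec = model(x)
center = torch.zeros(z.size(1))  # SVDD center; often set from initial codes
loss = iae_lstm_kl_loss(x, x_rec, z, center)
```

Treating each latent code as a one-step sequence is the simplest way to wire an LSTM after a per-image encoder; the paper's actual sequencing of features, KL target, and training schedule may differ.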