A comprehensive study of auto-encoders for anomaly detection: Efficiency and trade-offs
Keywords: MNIST database; Autoencoder; Generative model; Representation; Feature Learning; Hyperparameter
DOI: 10.1016/j.mlwa.2024.100572
Publication Date: 2024-07-10
AUTHORS (2)
ABSTRACT
Unsupervised anomaly detection (UAD) is a diverse research area explored across various application domains. Over time, numerous techniques, including clustering-, generative-, and variational-inference-based methods, have been developed to address specific drawbacks and advance the state of the art. Deep generative models have recently played a significant role in identifying unique challenges and devising advanced approaches. Auto-encoders (AEs) represent one such powerful technique, combining probabilistic modeling with deep architectures. An auto-encoder aims to learn the underlying data distribution and generate meaningful sample data. The adoption of this generative concept has produced extensive variations in design, particularly for unsupervised representation learning. This study systematically reviews 11 auto-encoder architectures, categorized into three groups, and compares their reconstruction ability, generation quality, latent-space visualization, and accuracy in classifying anomalous samples on the Fashion-MNIST (FMNIST) and MNIST datasets. Additionally, we closely examine their reproducibility under different training parameters. We conducted experiments using similar model setups and hyperparameters, and compared the results and improvements for each auto-encoder. We conclude by analyzing the experimental results, which reveal efficiency trade-offs among auto-encoders and provide valuable insights into their performance and applicability.
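The reconstruction-based detection scheme the abstract describes can be illustrated with a minimal sketch (not the paper's code): an auto-encoder is trained only on normal data, and a sample is flagged as anomalous when its reconstruction error exceeds a threshold derived from the training set. The toy data below is a hypothetical stand-in for MNIST/FMNIST, and the linear tiny auto-encoder is an illustrative simplification of the deep architectures studied in the paper.

```python
import numpy as np

# Illustrative sketch: reconstruction-based anomaly detection with a minimal
# linear auto-encoder trained by gradient descent. Toy data (an assumption,
# standing in for MNIST/FMNIST): "normal" samples lie on a 2-D subspace of a
# 10-D space; anomalies are isotropic noise off that subspace.
rng = np.random.default_rng(0)
d, k, n = 10, 2, 500
basis = rng.normal(size=(k, d))
X_normal = rng.normal(size=(n, k)) @ basis   # rank-2 "normal" data
X_anom = 5.0 * rng.normal(size=(20, d))      # off-subspace anomalies

# Encoder W1 (d -> k) and decoder W2 (k -> d), untied weights.
W1 = 0.1 * rng.normal(size=(d, k))
W2 = 0.1 * rng.normal(size=(k, d))
lr = 0.01
for _ in range(2000):
    Z = X_normal @ W1                 # encode into the latent space
    E = Z @ W2 - X_normal             # reconstruction residual
    # Gradients of the mean squared reconstruction loss
    gW2 = 2.0 * Z.T @ E / n
    gW1 = 2.0 * X_normal.T @ E @ W2.T / n
    W1 -= lr * gW1
    W2 -= lr * gW2

def recon_error(X):
    """Per-sample squared reconstruction error."""
    R = X @ W1 @ W2
    return ((R - X) ** 2).sum(axis=1)

# Flag a sample as anomalous when its reconstruction error exceeds a
# threshold chosen from the normal (training) data, e.g. the 99th percentile.
threshold = np.percentile(recon_error(X_normal), 99)
flags = recon_error(X_anom) > threshold
print(f"anomalies flagged: {flags.sum()} / {len(flags)}")
```

Because the auto-encoder only ever sees normal data, it learns to reconstruct the normal subspace well; off-subspace anomalies reconstruct poorly, which is the property the paper's comparison of reconstruction ability across architectures builds on.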