CDN-MEDAL: Two-stage Density and Difference Approximation Framework for Motion Analysis

DOI: 10.48550/arxiv.2106.03776 Publication Date: 2021-01-01
ABSTRACT
Background modeling and subtraction is a promising research area with a variety of applications in video surveillance. Recent years have witnessed a proliferation of effective learning-based deep neural networks in this area. However, these techniques provide only limited descriptions of scenes' properties while requiring heavy computations, as their single-valued mapping functions are learned to approximate the temporal conditional averages of observed target backgrounds and foregrounds. On the other hand, statistical learning in imagery domains has been a prevalent approach with high adaptation to dynamic context transformation, notably using Gaussian Mixture Models (GMM) with their generalization capabilities. By leveraging both, we propose a novel method called CDN-MEDAL-net for background modeling and subtraction with two convolutional networks. The first architecture, CDN-GM, is grounded on an unsupervised GMM strategy to describe salient background features. The second one, MEDAL-net, implements a light-weight pipeline for online foreground subtraction. Our two-stage architecture is small, yet it achieves very rapid convergence to representations of intricate motion patterns. Our experiments show that the proposed method is not only capable of effectively extracting the regions of moving objects in unseen cases, but is also efficient.
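For context, the statistical component that CDN-GM approximates is the classical per-pixel Gaussian mixture background model. The following sketch uses OpenCV's stock MOG2 subtractor to illustrate that GMM baseline only; it is not the paper's two-stage network, and the video path and parameter values are placeholders chosen for illustration.

# Illustrative sketch: classical per-pixel GMM background subtraction,
# the statistical baseline that CDN-GM learns to approximate.
# Assumes OpenCV is installed; "input.mp4" is a placeholder path.
import cv2

# MOG2 maintains a per-pixel Gaussian mixture and marks pixels whose intensity
# is unlikely under the learned background components as foreground.
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=200,        # number of recent frames used to fit the mixture
    varThreshold=16,    # squared Mahalanobis-distance threshold for foreground
    detectShadows=False,
)

capture = cv2.VideoCapture("input.mp4")  # placeholder input video
while True:
    ok, frame = capture.read()
    if not ok:
        break
    # Each call updates the mixture online and returns a binary foreground mask.
    foreground_mask = subtractor.apply(frame)
    cv2.imshow("foreground", foreground_mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()

In CDN-MEDAL-net, this per-pixel mixture estimation is replaced by the CDN-GM convolutional network, and the thresholding step is replaced by MEDAL-net, which learns the background-frame difference mapping end to end.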