Dynamically adaptive adjustment loss function biased towards few‐class learning

DOI: 10.1049/ipr2.12661 Publication Date: 2022-10-17T10:40:36Z
ABSTRACT
Convolutional neural networks have been widely used in the field of computer vision, where they effectively solve practical problems. However, a loss function with fixed parameters can reduce training efficiency and even lead to poor prediction accuracy. In particular, when the data are class-imbalanced, the final result tends to favour the large class. In detection and recognition problems, the large class dominates owing to its quantitative advantage, and the features of the few class cannot be fully learned. In order to learn the few class, batch nuclear-norm maximization is introduced into deep networks, and a dynamically adaptive composite loss mechanism is established to increase the diversity of the network output and thus improve prediction accuracy. The proposed loss is applied to crowd counting and verified on the ShanghaiTech and UCF_CC_50 datasets. Experimental results show that it improves the convergence speed of the networks.
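
As a rough illustration of the idea, the sketch below adds a batch nuclear-norm maximization term to an ordinary task loss in PyTorch. The base loss, the fixed weight lam, and the function name bnm_composite_loss are placeholders for this sketch; they are not the paper's dynamically adaptive composite scheme, which adjusts the weighting during training.

import torch

def bnm_composite_loss(base_loss, logits, lam=0.1):
    # Batch nuclear-norm maximization: the nuclear norm of the batch
    # prediction matrix (batch_size x num_classes) is large when the
    # predictions are both confident and spread across classes, which
    # counteracts the bias towards the large class.
    probs = torch.softmax(logits, dim=1)                # (B, C) prediction matrix
    nuc = torch.linalg.matrix_norm(probs, ord='nuc')    # nuclear norm over the batch
    # Maximizing the nuclear norm is done by subtracting it from the loss,
    # normalized by the batch size so the term is scale-stable.
    return base_loss - lam * nuc / probs.shape[0]

In use, base_loss would be the original task loss (e.g. cross-entropy for classification or a density-map regression loss for crowd counting), and the combined value is back-propagated as usual.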