High-Dimensional Separability for One- and Few-Shot Learning
DOI:
10.20944/preprints202106.0718.v1
Publication Date:
2021-06-30
AUTHORS (5)
ABSTRACT
This work is driven by a practical question: corrections of Artificial Intelligence (AI) errors. Systematic re-training of a large AI system is hardly possible. To solve this problem, special external devices, correctors, are developed. They should provide a quick and non-iterative system fix without modification of the legacy AI system. A common universal part of the corrector is a classifier that should separate undesired and erroneous behavior from normal operation. Training of such classifiers is a grand challenge at the heart of one- and few-shot learning methods. The effectiveness of one- and few-shot methods is based on either significant dimensionality reduction or the blessing of dimensionality effects. The stochastic separability phenomenon allows one- and few-shot error correction: in high-dimensional datasets, under broad assumptions, each point can be separated from the rest of the set by a simple and robust linear discriminant. A hierarchical structure of the data universe is introduced, in which each data cluster has a granular internal structure, and so on. New stochastic separation theorems for distributions with fine-grained structure are formulated and proved. Separation theorems in infinite-dimensional limits are proven under assumptions of compact embedding of patterns into the data space. New multi-corrector systems are presented and illustrated with examples of predicting errors and learning new classes of objects by a deep convolutional neural network.
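The separability claim at the core of the abstract is easy to probe numerically. The following is a minimal sketch, not the authors' code: it draws an i.i.d. sample from the uniform distribution on a high-dimensional cube and checks how often a single point can be cut off from the rest of the set by a simple Fisher-type linear discriminant. The dimension d, sample size n, and margin alpha are illustrative choices, not values from the paper.

```python
# Minimal illustration of stochastic separability (illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
d, n = 200, 1000                          # high dimension, sample size (assumed values)
X = rng.uniform(-1.0, 1.0, size=(n, d))   # i.i.d. sample from the cube [-1, 1]^d

def separable(X, i, alpha=0.9):
    """Check whether X[i] is separated from all other points by the hyperplane
    with normal w = x - c through the point c + alpha*(x - c), where c is the
    centroid of the remaining points (a simple Fisher-type discriminant)."""
    x = X[i]
    rest = np.delete(X, i, axis=0)
    c = rest.mean(axis=0)                 # centroid of the rest of the set
    w = x - c                             # discriminant direction
    # x is separated with margin alpha if every other point projects below
    # alpha times the projection of x itself (both centered at c)
    return np.all((rest - c) @ w < alpha * ((x - c) @ w))

frac = np.mean([separable(X, i) for i in range(n)])
print(f"fraction of points separable from the rest: {frac:.3f}")
```

With settings like these, the printed fraction is typically very close to 1, in line with the stochastic separation theorems described above; shrinking d or enlarging n degrades it.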