CLEVA-Compass: A Continual Learning EValuation Assessment Compass to Promote Research Transparency and Comparability

DOI: 10.48550/arxiv.2110.03331 Publication Date: 2021-01-01
ABSTRACT
What is the state of the art in continual machine learning? Although this is a natural question for predominant static benchmarks, the notion of training systems in a lifelong manner entails a plethora of additional challenges with respect to set-up and evaluation. The latter have recently sparked a growing amount of critiques on prominent algorithm-centric perspectives and evaluation protocols being too narrow, resulting in several attempts at constructing guidelines in favor of specific desiderata or arguing against the validity of prevalent assumptions. In this work, we depart from this mindset and argue that the goal of a precise formulation is an ill-posed one, as diverse applications may always warrant distinct scenarios. Instead, we introduce the Continual Learning EValuation Assessment Compass: the CLEVA-Compass. The compass provides a visual means to both identify how approaches are practically reported in works and to simultaneously contextualize them in the broader literature landscape. In addition to promoting compact specification in the spirit of recent replication trends, it thus provides an intuitive chart to understand the priorities of individual systems, where they resemble each other, and what elements are missing towards a fair comparison.
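The reporting idea behind such a compass can be sketched as a simple data structure: each method records a level per evaluation dimension, and comparing two records reveals the elements missing towards a fair comparison. The dimension names and the helper below are illustrative assumptions, not the paper's actual compass axes.

```python
# Hypothetical compass dimensions (placeholder names, not the paper's axes).
# Each method maps a dimension to a reporting level: 0 = not reported,
# higher values = reported in more detail.
COMPASS_DIMENSIONS = [
    "task_order",
    "data_per_time_step",
    "compute_budget",
    "memory_usage",
]

def missing_for_comparison(report_a, report_b):
    """Return dimensions reported by only one of the two methods,
    i.e. the elements missing towards a fair comparison."""
    reported_a = {d for d, level in report_a.items() if level > 0}
    reported_b = {d for d, level in report_b.items() if level > 0}
    return sorted(reported_a ^ reported_b)  # symmetric difference

# Two hypothetical method reports over the same dimensions.
method_a = {"task_order": 1, "data_per_time_step": 1,
            "compute_budget": 0, "memory_usage": 1}
method_b = {"task_order": 1, "data_per_time_step": 0,
            "compute_budget": 0, "memory_usage": 1}

print(missing_for_comparison(method_a, method_b))  # ['data_per_time_step']
```

In the paper, such records are rendered as a star/compass plot; the set-difference view above captures only the comparability aspect, not the visualization.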