SUBPLEX: Towards a Better Understanding of Black Box Model Explanations at the Subpopulation Level
DOI:
10.48550/arXiv.2007.10609
Publication Date:
2020-01-01
AUTHORS (8)
ABSTRACT
Understanding the interpretation of machine learning (ML) models has been of paramount importance when making decisions with societal impacts, such as transport control, financial activities, and medical diagnosis. While current model interpretation methodologies focus on using locally linear functions to approximate the models or on creating self-explanatory models that give explanations for each input instance, they do not focus on interpretation at the subpopulation level, that is, understanding model interpretations across different subsets and aggregations of a dataset. To address the challenges of providing explanations of an ML model across a whole dataset, we propose SUBPLEX, a visual analytics system that helps users understand black-box model explanations with subpopulation visual analysis. SUBPLEX is designed through an iterative design process with machine learning researchers to address three usage scenarios of real-life machine learning tasks: model debugging, feature selection, and bias detection. The system applies subpopulation analysis to ML model explanations and uses interactive visualization to explore the explanations of a dataset at different levels of granularity. Based on the system, we conduct a user evaluation to assess how interpreting a model at the subpopulation level influences the sense-making process from the user's perspective. Our results suggest that by providing model explanations for different groups of data, SUBPLEX encourages users to generate more ingenious ideas that enrich the interpretations. It also helps users achieve a tight integration between the programming workflow and the visual analytics workflow. Last but not least, we summarize the considerations observed in applying visualization to machine learning interpretations.
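The core idea of subpopulation-level explanation can be illustrated with a minimal sketch: compute a local attribution vector per instance, cluster those vectors, and summarize each cluster (subpopulation) by its mean attribution. The snippet below is an illustrative approximation under simplified assumptions, not the SUBPLEX implementation; the dataset, the linear model, and the coefficient-times-value attribution are stand-ins for a black-box model and a LIME/SHAP-style explainer.

```python
# Hedged sketch of subpopulation-level explanation analysis:
# per-instance attribution vectors are clustered so that each cluster
# (subpopulation) can be summarized by its mean attribution.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Fit a simple model; its attributions stand in for any black-box explainer.
data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
y = data.target
model = LogisticRegression(max_iter=1000).fit(X, y)

# Local attribution for a linear model: coefficient * feature value per instance.
# (For a true black box one would plug in LIME/SHAP-style attributions instead.)
local_attributions = X * model.coef_[0]          # shape: (n_samples, n_features)

# Subpopulation analysis: cluster instances by their explanation vectors.
k = 4
clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(local_attributions)

# Aggregate explanations per subpopulation: mean attribution within each cluster.
for c in range(k):
    mask = clusters == c
    mean_attr = local_attributions[mask].mean(axis=0)
    top = np.argsort(np.abs(mean_attr))[::-1][:3]
    print(f"Subpopulation {c} ({mask.sum()} instances): "
          + ", ".join(f"{data.feature_names[i]}={mean_attr[i]:+.2f}" for i in top))
```

Clustering in explanation space rather than feature space is what makes the grouping explanation-driven: instances end up in the same subpopulation because the model relies on them for similar reasons, which is the kind of aggregation the paper explores interactively.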