Learning Accurate and Interpretable Decision Rule Sets from Neural Networks

Decision rule · Simplicity · Rule-based system · Representation learning · Learning rule
DOI: 10.1609/aaai.v35i5.16555 Publication Date: 2022-09-08T18:32:32Z
ABSTRACT
This paper proposes a new paradigm for learning a set of independent logical rules in disjunctive normal form as an interpretable model for classification. We consider the problem of learning an interpretable decision rule set as training a neural network with a specific, yet very simple, two-layer architecture. Each neuron in the first layer directly maps to an interpretable if-then rule after training, and the output neuron in the second layer directly maps to a disjunction of the first-layer rules to form the decision rule set. Our representation of the neurons in the first layer enables us to encode both the positive and the negative associations of features in a rule. State-of-the-art neural net training approaches can be leveraged for learning highly accurate classification models. Moreover, we propose a sparsity-based regularization approach to balance between classification accuracy and the simplicity of the derived rules. Our experimental results show that our method can generate more accurate decision rule sets than other state-of-the-art rule-learning algorithms, with better accuracy-simplicity trade-offs. Further, when compared to uninterpretable black-box machine learning approaches such as random forests and full-precision deep neural networks, our approach can easily find interpretable decision rule sets that have comparable predictive performance.
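
To make the described two-layer architecture concrete, the following is a minimal PyTorch sketch under stated assumptions: the class RuleNet, the sigmoid/product surrogates for the AND and OR operations, and the L1-style sparsity_penalty are illustrative choices, not the authors' released implementation; the paper's actual encoding of positive and negative feature associations and its regularizer may differ.

import torch
import torch.nn as nn


class RuleNet(nn.Module):
    """Sketch of a two-layer rule network: each first-layer neuron is meant
    to be read off as an if-then rule after training, and the output layer
    acts as a disjunction (OR) over those rules."""

    def __init__(self, num_features: int, num_rules: int):
        super().__init__()
        # One row per candidate rule. After training, the weights would be
        # discretized so that each feature is required, negated, or ignored
        # by the rule (this sketch keeps them continuous).
        self.rules = nn.Linear(num_features, num_rules)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features), binarized features in {0, 1}.
        # Sigmoid gives a soft "does rule k fire?" score in (0, 1);
        # it is a differentiable stand-in for an exact conjunction.
        rule_scores = torch.sigmoid(self.rules(x))          # (batch, num_rules)
        # Soft OR across rules: score that at least one rule fires,
        # computed as 1 - prod_k (1 - score_k).
        return 1.0 - torch.prod(1.0 - rule_scores, dim=1)   # (batch,)


def sparsity_penalty(model: RuleNet, lam: float = 1e-3) -> torch.Tensor:
    # L1-style penalty pushing rule weights toward zero, i.e. fewer
    # literals per rule and therefore simpler derived rules.
    return lam * model.rules.weight.abs().sum()

A training loop would minimize a binary cross-entropy loss on the soft OR output plus sparsity_penalty(model); the penalty weight lam trades accuracy against rule simplicity, mirroring the accuracy-simplicity trade-off described in the abstract.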