Published January 1, 2022 | Version v1
Conference paper · Open Access

Integration of Local and Global Features Explanation with Global Rules Extraction and Generation Tools

  • 1. University of Applied Sciences Western Switzerland (HES-SO), Delémont, Switzerland

Description

Widely used in a growing number of domains, Deep Learning predictors achieve remarkable results. However, the opacity of their inner mechanisms has raised concerns about trust and about their employability in sensitive domains. Several approaches fostering model interpretability and explainability have been developed in the last decade. This paper combines approaches for local feature explanation (Contextual Importance and Utility - CIU) and global feature explanation (Explainable Layers) with a rule extraction system, namely ECLAIRE. The proposed pipeline has been tested in four scenarios employing a breast cancer diagnosis dataset. The results show improvements such as the production of more human-interpretable rules and better adherence of the produced rules to the original model.
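The pipeline the abstract describes (explain the model's features globally, keep the most relevant ones, then extract rules that mimic the model) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: permutation importance stands in for the paper's CIU/Explainable Layers explanation steps, and a shallow surrogate decision tree stands in for the ECLAIRE rule extractor.

```python
# Hypothetical sketch of the explain-then-extract pipeline on the
# breast cancer dataset; the real paper uses CIU, Explainable Layers,
# and ECLAIRE, for which stand-ins are used here.
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Opaque predictor to be explained (a small neural network).
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
)
model.fit(X_tr, y_tr)

# Global feature explanation (stand-in): rank features by permutation
# importance and keep only the top ones.
imp = permutation_importance(model, X_te, y_te, n_repeats=5, random_state=0)
top = imp.importances_mean.argsort()[::-1][:5]
X_tr_top, X_te_top = X_tr.iloc[:, top], X_te.iloc[:, top]

# Rule extraction (stand-in for ECLAIRE): fit a shallow surrogate tree
# on the model's own predictions so its rules approximate the model.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_tr_top, model.predict(X_tr))

# Fidelity: how often the extracted rules agree with the original model
# on held-out data.
fidelity = (surrogate.predict(X_te_top) == model.predict(X_te)).mean()
print(f"fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=list(X_tr_top.columns)))
```

Restricting the surrogate to the top-ranked features keeps the extracted rules short and human-readable, while fidelity measures how faithfully they reproduce the original model's behaviour — the two improvements the abstract reports.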

Files

bib-6c7fc555-9ab6-447d-9945-55ce5ecf594b.txt (228 Bytes, md5:7a1d7ea213dfe127214c2d96fb5a9c80)