
[Paper] Towards Black-Box Explainability with Gaussian Discriminant Knowledge Distillation

SAIAD 2021

In this paper, we propose a method for post-hoc explainability of black-box models. The key component of the semantic and quantitative local explanation is a knowledge distillation (KD) process that mimics the teacher's behavior by means of an explainable generative model. To this end, we introduce a Concept Probability Density Encoder (CPDE) in conjunction with a Gaussian Discriminant Decoder (GDD) to describe the contribution of high-level concepts (e.g., object parts, color, shape). We argue that our objective function encourages both an explanation, given by a set of likelihood ratios, and a measure of how far the explainer deviates from the training data distribution of the concepts. The method can leverage any pre-trained concept classifier that admits concept scores (e.g., logits) or probabilities. We demonstrate the effectiveness of the proposed method in the context of object detection using the DensePose dataset.
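The abstract does not come with a reference implementation; the following Python sketch only illustrates the decoding and explanation step under simplifying assumptions (class-conditional Gaussians with diagonal covariance over concept scores). All names (GaussianDiscriminantDecoder, scores, labels, etc.) are hypothetical, and the knowledge-distillation training against the teacher is omitted.

# Hypothetical sketch of the Gaussian discriminant decoding idea; not the
# authors' implementation. Concept scores are assumed to come from any
# pre-trained concept classifier (logits or probabilities), as in the abstract.
import numpy as np

class GaussianDiscriminantDecoder:
    """Fits one Gaussian per class over concept-score vectors and explains
    a prediction via per-concept log-likelihood ratios."""

    def fit(self, scores, labels):
        # scores: (N, C) concept scores; labels: (N,) teacher predictions.
        self.classes = np.unique(labels)
        self.mu = {k: scores[labels == k].mean(axis=0) for k in self.classes}
        # Diagonal covariance keeps the explanation separable per concept.
        self.var = {k: scores[labels == k].var(axis=0) + 1e-6
                    for k in self.classes}
        return self

    def log_likelihood(self, x, k):
        # Per-concept Gaussian log-density under class k.
        return -0.5 * (np.log(2 * np.pi * self.var[k])
                       + (x - self.mu[k]) ** 2 / self.var[k])

    def explain(self, x, k, k_ref):
        # Set of log-likelihood ratios: positive entries mark concepts
        # whose scores favour class k over the reference class k_ref.
        return self.log_likelihood(x, k) - self.log_likelihood(x, k_ref)

    def deviation(self, x, k):
        # Mahalanobis-style distance of x from class k's training
        # distribution of concepts; large values signal that the
        # explainer is extrapolating beyond the data it was fitted on.
        return np.sqrt(((x - self.mu[k]) ** 2 / self.var[k]).sum())

In this reading, explain yields the set of likelihood ratios mentioned in the abstract, while deviation corresponds to the measure of how far the explainer deviates from the training data distribution of the concepts.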

https://www.sites.google.com/view/saiad2021/home

Related Work

[Paper] Multivariate Confidence Calibration for Object Detection

Unbiased confidence estimates of neural networks are crucial, especially for safety-critical applications. Many methods have been developed to calibrate biased confidence estimates. Though there is a variety of methods for classification, the field of object […]


[Paper] User-driven development (UDD): Ansätze und Methoden zur erfolgreichen Umsetzung neuer Mobilitätskonzepte

Approaches and methods for the successful implementation of new mobility concepts. Autonomous vehicles and air taxis that can be requested on demand via smartphone and billed through an app, transport drones that deliver goods, etc. – with such images, the participating […]


[Paper] Bayesian Confidence Calibration for Epistemic Uncertainty Modelling

Modern neural networks have been found to be miscalibrated in terms of confidence calibration, i.e., their predicted confidence scores do not reflect the observed accuracy or precision. Recent work has introduced methods for post-hoc confidence calibration […]
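As a minimal, generic illustration of what post-hoc confidence calibration means (the cited paper proposes a Bayesian treatment, which is not shown here), temperature scaling fits a single scalar on held-out logits so that softmax confidences better match the observed accuracy; all names and data below are hypothetical.

# Generic post-hoc calibration sketch (temperature scaling), for
# illustration only; not the Bayesian method proposed in the paper.
import numpy as np
from scipy.optimize import minimize_scalar

def nll(temperature, logits, labels):
    # Negative log-likelihood of the temperature-scaled softmax.
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def fit_temperature(logits, labels):
    # Fit T > 0 on held-out validation data: T > 1 softens over-confident
    # predictions, T < 1 sharpens under-confident ones.
    result = minimize_scalar(nll, bounds=(0.05, 20.0),
                             args=(logits, labels), method="bounded")
    return result.x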
