Publications

Causally Reliable Concept Bottleneck Models

Giovanni De Felice*, Arianna Casanova*, Francesco De Santis*, Silvia Santini, Johannes Schneider, Pietro Barbiero, Alberto Termine

Tags: explainability
Venue: arXiv

Concept-based models are an emerging paradigm in deep learning that constrains the inference process to operate through human-interpretable variables, facilitating explainability and human interaction. However, like popular opaque neural models, these architectures fail to account for the true causal mechanisms underlying the target phenomena represented in the data. This hampers their ability to support causal reasoning tasks, limits out-of-distribution generalization, and hinders the implementation of fairness constraints. To overcome these issues, we propose Causally reliable Concept Bottleneck Models (C²BMs), a class of concept-based architectures that enforce reasoning through a bottleneck of concepts structured according to a model of the real-world causal mechanisms. We also introduce a pipeline to automatically learn this structure from observational data and unstructured background knowledge (e.g., scientific literature). Experimental evidence suggests that C²BMs are more interpretable, more causally reliable, and more responsive to interventions than standard opaque and concept-based models, while maintaining comparable accuracy.
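To make the idea of a causally structured bottleneck concrete, below is a minimal sketch of a concept bottleneck whose concepts exchange information only along the edges of a given causal graph and which supports do-style concept interventions. This is an illustrative reconstruction under assumed design choices (PyTorch, a fixed binary adjacency matrix, sigmoid concept activations, a fixed number of propagation steps); all class and parameter names are hypothetical, and it is not the authors' implementation as described in the paper.

```python
# Minimal sketch of a causally structured concept bottleneck.
# Illustrative only; names and design choices are assumptions, not the paper's code.
import torch
import torch.nn as nn


class CausalConceptBottleneck(nn.Module):
    def __init__(self, n_features: int, n_concepts: int, adjacency: torch.Tensor):
        super().__init__()
        # adjacency[i, j] = 1 iff concept i is a causal parent of concept j
        # (assumed given, e.g. learned from data plus background knowledge).
        self.register_buffer("adjacency", adjacency.float())
        self.n_concepts = n_concepts
        self.encoder = nn.Linear(n_features, n_concepts)                 # x -> exogenous concept evidence
        self.mechanism = nn.Linear(n_concepts, n_concepts, bias=False)   # parent concepts -> child concepts
        self.head = nn.Linear(n_concepts, 1)                             # concepts -> target

    @staticmethod
    def _clamp(c, interventions):
        # do-style intervention: overwrite the chosen concepts with fixed values.
        if interventions:
            c = c.clone()
            for idx, value in interventions.items():
                c[:, idx] = value
        return c

    def forward(self, x, interventions=None):
        z = self.encoder(x)                                  # exogenous evidence per concept
        c = torch.sigmoid(z)
        # Zero out every weight that does not follow a parent -> child edge.
        masked = self.mechanism.weight * self.adjacency.T
        for _ in range(self.n_concepts):                     # enough steps to cover any DAG depth
            c = self._clamp(c, interventions)
            c = torch.sigmoid(z + c @ masked.T)              # combine evidence with causal parents
        c = self._clamp(c, interventions)                    # keep intervened concepts clamped
        return c, self.head(c)


# Usage: three concepts in a chain c0 -> c1 -> c2.
adj = torch.tensor([[0., 1., 0.],
                    [0., 0., 1.],
                    [0., 0., 0.]])
model = CausalConceptBottleneck(n_features=8, n_concepts=3, adjacency=adj)
x = torch.randn(4, 8)
concepts, y = model(x)                                       # plain forward pass
concepts_do, y_do = model(x, interventions={0: 1.0})         # do(c0 = 1) propagates to c1, c2
```

Because information flows only along the masked parent-to-child weights, clamping a concept changes its descendants and the target prediction, which is the kind of responsiveness to interventions the abstract contrasts with standard opaque and concept-based models.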
