Publications

Climbing the Ladder of Interpretability with Counterfactual Concept Bottleneck Models

Submitted to IJCAI 2024, 2023

In this paper, we introduce CounterFactual Concept Bottleneck Models (CF-CBMs), a class of models designed to efficiently address three fundamental questions all at once, without the need to run post-hoc searches: predicting class labels to solve a given classification task (the “What?”), explaining task predictions (the “Why?”), and imagining alternative scenarios that could lead to different predictions (the “What if?”).

Recommended citation: Gabriele Dominici, Pietro Barbiero, Francesco Giannini, Martin Gjoreski, Marc Langheinrich, & Giuseppe Marra. (2024). Climbing the Ladder of Interpretability with Counterfactual Concept Bottleneck Models.

SHARCS - Shared Concept Space for Explainable Multimodal Learning

Accepted to the NeurIPS 2023 Workshop on Unifying Representations (UniReps), 2023

In this paper, we introduce SHARCS (SHARed Concept Space) – a novel concept-based approach for explainable multimodal learning. SHARCS learns and maps interpretable concepts from heterogeneous modalities into a single unified concept manifold, yielding an intuitive projection in which semantically similar cross-modal concepts lie close together.

Recommended citation: Gabriele Dominici, Pietro Barbiero, Lucie Charlotte Magister, Pietro Liò, & Nikola Simidjievski. (2023). SHARCS: Shared Concept Space for Explainable Multimodal Learning. https://arxiv.org/abs/2307.00316?context=cs.AI