Ran Gilad-Bachrach

Talk: Explainable AI: Theory and Algorithms

As Artificial Intelligence (AI) advances across domains, its inherent limitations and risks, such as bias, are becoming more apparent. To address these challenges, Explainable AI (xAI) has emerged as a key approach. This talk presents our efforts to build a theoretical foundation for xAI and develop algorithms to tackle its challenges.

We begin with a brief introduction to xAI, demonstrating its value through explainable algorithms for graph learning. We then take a theoretical view to establish a solid mathematical foundation for explainability. To do so, we focus on 'feature importance' as a method of explanation, particularly in the data-global setting. In this setting, a model serves as a proxy for understanding natural phenomena. Using an axiomatic approach, we show that a unique definition of feature importance arises in this context: the Marginal Contribution Feature Importance (MCI).
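For readers unfamiliar with MCI, a minimal brute-force sketch of the underlying idea follows. It assumes the standard formulation in which a feature's importance is the largest marginal gain it adds to any subset of the remaining features, measured by an evaluation function that scores feature subsets (for example, the best achievable predictive performance using only those features). The evaluation function `toy_nu` and the helper names below are illustrative assumptions, not the algorithm presented in the talk.

from itertools import combinations
from typing import Callable, FrozenSet, Hashable, Sequence


def mci(
    feature: Hashable,
    all_features: Sequence[Hashable],
    nu: Callable[[FrozenSet[Hashable]], float],
) -> float:
    """Brute-force marginal-contribution importance of `feature`.

    `nu` maps a feature subset to a score. MCI is the largest gain the
    feature adds to any subset of the remaining features.
    """
    rest = [f for f in all_features if f != feature]
    best = float("-inf")
    for k in range(len(rest) + 1):
        for subset in combinations(rest, k):
            s = frozenset(subset)
            best = max(best, nu(s | {feature}) - nu(s))
    return best


# Illustrative (assumed) evaluation function: features "a" and "b" are
# fully redundant with each other, yet MCI still credits each with its
# standalone value, while the uninformative feature "c" gets zero.
def toy_nu(s: FrozenSet[str]) -> float:
    return 1.0 if ("a" in s or "b" in s) else 0.0


print(mci("a", ["a", "b", "c"], toy_nu))  # 1.0
print(mci("c", ["a", "b", "c"], toy_nu))  # 0.0

Note that this exhaustive enumeration is exponential in the number of features; it is meant only to make the definition concrete, not to suggest how MCI is computed in practice.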

We then extend this definition beyond the data-global setting, highlighting inconsistencies in how explanations behave across different contexts. The talk concludes by examining the motivations behind seeking explanations, drawing parallels with the legal domain to understand the practice of reason-giving. This analysis assesses whether xAI meets these motivations—spoiler: it falls short.

Workshop Home Page

Return to the TAU-UIUC Workshop Home Page
