Han Zhao
Talk: Neural Probabilistic Circuits: Towards Compositional and Interpretable Predictions through Logical Reasoning
End-to-end deep neural networks have achieved remarkable success across various domains but are often criticized for their lack of interpretability. While post-hoc explanation methods attempt to address this issue, they often fail to faithfully represent the black-box models they explain, resulting in misleading or incomplete explanations. To overcome these challenges, we propose an inherently transparent model architecture called Neural Probabilistic Circuits (NPCs), which enable compositional and interpretable predictions through logical reasoning. In particular, an NPC consists of two modules: an attribute recognition model, which predicts probabilities for various attributes, and a task predictor built on a probabilistic circuit, which performs logical reasoning over the recognized attributes to make class predictions. In this talk, I will introduce the key components and training procedures of NPCs, as well as their compositional generalization capabilities and support for counterfactual explanations.
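To give a flavor of the two-module design, here is a minimal sketch of an NPC-style prediction step. All names, the toy attribute distributions, and the tiny rule set are illustrative assumptions, not the actual NPC implementation or API; the attribute probabilities would come from a neural recognition model, and the "circuit" here is reduced to a product over rule-selected attribute marginals.

```python
# Minimal illustrative sketch of an NPC-style prediction step.
# Hypothetical names and values; not the authors' actual implementation.

# Output of the attribute recognition module: per-attribute probability
# vectors (in the real model these would be produced by a neural network).
attr_probs = {
    "color": {"red": 0.8, "green": 0.2},
    "shape": {"round": 0.9, "long": 0.1},
}

# Logical rules mapping attribute assignments to classes, encoded as
# conjunctions over attribute values.
rules = {
    "apple":  [("color", "red"), ("shape", "round")],
    "banana": [("color", "green"), ("shape", "long")],
}

def class_score(cls):
    # A product node over the attribute marginals selected by the rule:
    # score(class) = prod_i P(attribute_i = required value), assuming
    # attributes are modeled as independent given the input.
    score = 1.0
    for attr, value in rules[cls]:
        score *= attr_probs[attr][value]
    return score

scores = {cls: class_score(cls) for cls in rules}
total = sum(scores.values())
posterior = {cls: s / total for cls, s in scores.items()}
print(posterior)
```

Because the prediction factors through explicit attribute probabilities and logical rules, one can read off which attribute values drove a class decision, which is the kind of transparency the abstract describes.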
BIO:
Dr. Han Zhao is an Assistant Professor of Computer Science at the University of Illinois Urbana-Champaign (UIUC). He is also an Amazon Scholar at Amazon AI. Dr. Zhao earned his Ph.D. in machine learning from Carnegie Mellon University. His research centers on trustworthy machine learning, with a focus on algorithmic fairness, robust generalization, and model interpretability. He has been named a Kavli Fellow of the National Academy of Sciences and has been selected for the AAAI New Faculty Highlights program. His research has been recognized with an NSF CAREER Award, a Google Research Scholar Award, an Amazon Research Award, and a Meta Research Award.