Call for Papers
Workshop on
Explainable Computer Vision: Where are We and Where are We Going?
at ECCV 2024 in Milan
Paper deadline: July 24, 2024
More details: https://excv-workshop.github.io
We invite papers covering all topics within XAI for computer vision, including but not limited to:
- Attribution maps
- Evaluating XAI methods
- Intrinsically explainable models
- Language as an explanation for vision models
- Counterfactual explanations
- Causality in XAI for vision models
- Mechanistic interpretability
- XAI beyond classification (e.g., segmentation or other disciplines of computer vision)
- Concept discovery
We also have a Non-Proceedings / Nectar Track to highlight already
published works.
About: Deep neural networks (DNNs) are an essential component in the
field of computer vision and achieve state-of-the-art results in
almost all of its sub-disciplines. While DNNs excel at predictive
performance, they are often too complex for humans to understand, which
is why they are commonly referred to as "black-box models". This is of
particular concern when DNNs are applied in
safety-critical domains such as autonomous driving or medical
applications. With this problem in mind, explainable artificial
intelligence (XAI) aims to gain a better understanding of DNNs,
ultimately leading to more robust, fair, and interpretable models. To
this end, a variety of different approaches such as attribution maps,
intrinsically explainable models, and mechanistic interpretability
methods have been developed. While this important field of research is
gaining more and more traction, there is also justified criticism of
the way in which the research is conducted. For example, the term
"explainability" in itself is not properly defined and is highly
dependent on the end user and the task, leading to ill-defined
research questions and no standardized evaluation practices. The goals
of this workshop are thus two-fold:
1. Discussion and dissemination of ideas at the cutting-edge of XAI
research (Where are we?)
2. A critical introspection on the challenges faced by the community
and the way to go forward (Where are we going?)