*****************************Call for papers******************************
4th ICPR Workshop on Explainable and Ethical AI
https://xaie4.sciencesconf.org/
Contact: romain.bourqui@u-bordeaux.fr
****************************************************************************
The fourth edition of the XAI-E workshop follows three successful editions:
at ICPR'2020 (https://edl-ai-icpr.labri.fr/),
ICPR'2022 (https://xaie-icpr.labri.fr/),
and ICPR'2024 (https://xaie.sciencesconf.org/).
The workshop will be held on August 21, 2026, in Lyon, France, jointly with
the ICPR'2026 conference (https://icpr2026.org/).
** The topics covered by the workshop are:
- Naturally explainable AI methods
- Post-hoc explanation methods for deep neural networks and Transformers
- Technical issues in AI ethics including automated audits, detection of bias,
ability to control AI systems to prevent harm and others
- Methods to improve AI explainability in general, including algorithms and
evaluation methods
- User interface and visualization for achieving more explainable and ethical AI
- Real world applications and case studies
Methodological topics in explainability concern the creation of explanations,
their representation, and the quantification of their confidence; those in
AI ethics include automated audits, detection of bias in data and models,
the ability to control AI systems to prevent harm, and other methods to
improve AI explainability in general and trust in AI.
We are witnessing the emergence of an “AI economy and society” where
AI technologies are increasingly impacting many aspects of business
as well as everyday life. We read with great interest about recent
advances in AI medical diagnostic systems, self-driving cars, and
the ability of AI technology to automate many aspects of business
decisions such as loan approvals, hiring, policing, etc. However, as
evidenced by recent experience, AI systems may produce errors, can
exhibit overt or subtle bias, may be sensitive to noise in the data,
and often lack technical and judicial transparency and explainability.
These shortcomings have been documented not only in the scientific
literature but also, importantly, in the general press (accidents with
self-driving cars; biases in AI-based policing, hiring, and loan systems;
biases in face recognition systems for people of color; seemingly correct
medical diagnoses later found to have been made for the wrong reasons, etc.).
These shortcomings raise many ethical and policy concerns not only in
technical and academic communities, but also among policymakers and the
general public, and will inevitably impede wider adoption of AI in society.
The problems related to Ethical AI are complex and broad and encompass
not only technical issues but also legal, political and ethical ones.
One of the key components of ethical AI systems is explainability, or
transparency, but other issues, such as detecting bias, the ability to
control outcomes, and the ability to objectively audit AI systems for ethics,
are also critical for successful applications and adoption of AI in society.
Consequently, explainable and ethical AI are highly topical both in the
technical community and in the business, legal, and philosophy communities.
Many workshops in this field are held at top conferences, and we believe
ICPR should address this topic broadly, with a focus on its technical
aspects. Our workshop addresses the technical aspects of explainable and
ethical AI in general, together with related applications and case studies,
with the aim of tackling these important problems from a broad technical
perspective.
** Organizing committee:
Marco Angelini, Univ. Rome 3, Italy
Jenny Benois-Pineau, Univ. Bordeaux, France
Romain Bourqui, Univ. Bordeaux, France
Romain Giot, Univ. Bordeaux, France
Sebastian Lapuschkin, Fraunhofer Institute for Telecommunications,
Heinrich Hertz Institute, Germany
** Important dates:
May 1, 2026: Paper submission deadline
June 10, 2026: Notification to authors
June 20, 2026: Camera-ready versions
August 21, 2026: Workshop
The workshop papers will be published in the Springer proceedings of ICPR'2026.
Contact: romain.bourqui@u-bordeaux.fr