3rd Workshop on Explainable and Ethical AI Call for Papers

************Call for papers******************
3rd Workshop on Explainable and Ethical AI jointly with ICPR'2024
https://xaie.sciencesconf.org/
*******************************************************************
The third edition of WS XAI-E follows two successful editions at
ICPR'2020 (https://edl-ai-icpr.labri.fr/)
and ICPR'2022
(https://xaie-icpr.labri.fr/).
The WS will be held on December 1st, 2024 in Kolkata, India,
jointly with the ICPR'2024 conference
(https://icpr2024.org/).

**The topics covered by the workshop are: 
	- Naturally explainable AI methods, 
	- Post-Hoc Explanation methods of Deep Neural Networks, including transformers and Generative AI, 
	- Evaluation metrics for Explanation methods,
	- Hybrid XAI, 
	- XAI in generative AI,
	- Visualization of Explanations and user interfaces,
	- Image-to-text explanations,
	- Concept-based explanations,
	- Use of explanation methods for Deep NN models in training and generalization,
	- Ethical considerations when using pattern recognition models,
	- Real-World Applications of XAI methods.

Methodology in explainability concerns the creation of explanations,
their representation, and the quantification of their confidence,
while methodology in AI ethics includes automated audits, detection of
bias in data and models, the ability to control AI systems to prevent
harm, and other methods to improve AI explainability in general and
trust in AI.

We are witnessing the emergence of an "AI economy and society"
where AI technologies are increasingly impacting many aspects of
business as well as of everyday life. We read with great interest
about recent advances in AI medical diagnostic systems, self-driving
cars, and the ability of AI technology to automate many aspects of
business decisions such as loan approvals, hiring, and policing. In
recent years, generative AI has emerged as a major topic, promising
great benefits but also raising well-founded fears of significant
disruption to all aspects of society; its problems, such as
"hallucinations" and bias, are also well known. However, as recent
experience shows, AI systems may produce errors, can exhibit overt or
subtle bias, may be sensitive to noise in the data, and often lack
technical and judicial transparency and explainability. These
shortcomings have been reported not only in the scientific press but
also, importantly, in the general press (accidents with self-driving
cars, biases in AI-based policing, hiring and loan systems, biases in
face recognition, seemingly correct medical diagnoses later found to
have been made for the wrong reasons, etc.). They are raising many
ethical and policy concerns not only in the technological and research
communities, but also among policymakers and the general public, and
will inevitably impede wider adoption of AI in society.
 
The problems related to Ethical AI are complex and broad. They
encompass not only technical issues but also legal, political and
ethical ones. One of the key components of Ethical AI systems is
explainability or transparency, but other issues, such as detecting
bias, the ability to control outcomes, and the ability to objectively
audit AI systems for ethics, are also critical for the successful
application and adoption of AI in society. Consequently, explainable
and Ethical AI are urgent and popular topics in the IT, business,
legal, and philosophy communities, and many workshops in this field
are held at top conferences.

The third workshop on explainable AI at ICPR aims to address
methodological aspects of explainable and ethical AI in general and to
include related applications and case studies, in order to tackle
these important problems from a broad research perspective.

** Organizing committee: 
Prof. J. Benois-Pineau, University of Bordeaux, jenny.benois-pineau@u-bordeaux.fr
Dr. R. Bourqui, University of Bordeaux, romain.bourqui@u-bordeaux.fr
Dr. R. Giot, University of Bordeaux, romain.giot@u-bordeaux.fr
Prof. D. Petkovic, CS Department, San Francisco State University, petkovic@sfsu.edu

**Important dates: 
	- July 14, 2024: Paper submission
	- September 20, 2024: Notification to authors
	- September 27, 2024: Camera-ready versions


The WS papers will be published in the proceedings of ICPR’2024.
 
Romain Giot, Jenny Benois-Pineau, Romain Bourqui, Dragutin Petkovic 
WS organizers 

Jenny Benois-Pineau, PhD, HDR, 
Professor of Computer Science, 
Chair of International relations
School of Sciences and Technologies
University of Bordeaux 
351, crs de la Libération
33405 Talence
France
tel.: +33 (0) 5 40 00 84 24