ICCV 2023 Workshop on DeepFake Analysis and Detection Call for Papers

Organized in conjunction with ICCV 2023

--- Submission deadline: July 17th, AoE ---

Machine-generated images are becoming increasingly common in the
digital world, thanks to the spread of Deep Learning models that can
generate visual data, such as Generative Adversarial Networks and
Diffusion Models. While image generation tools can be employed for
lawful goals (e.g., to assist content creators, generate simulated
datasets, or enable multi-modal interactive applications), there is a
growing concern that they might also be used for illegal and malicious
purposes, such as the forgery of natural images or the generation of
images in support of fake news, misogyny, or revenge porn. While
images generated in the past few years contained artefacts that made
them easily recognizable, today's results are far harder to
distinguish from real images from a purely perceptual point of view.
In this context, assessing the authenticity of images becomes a
fundamental goal for security and for guaranteeing a degree of
trustworthiness of AI algorithms. There is therefore a growing need to
develop automated methods that can assess the authenticity of images
(and, more generally, multimodal content) and that can keep pace with
the constant evolution of generative models, which become more
realistic over time.

The first Workshop and Challenge on DeepFake Analysis and Detection
(DFAD) focuses on the development of benchmarks and tools for fake
data understanding and detection, with the final goal of protecting
against visual disinformation and the misuse of generated images and
text, and of monitoring the progress of existing and proposed
detection solutions. It fosters the submission of works that identify
novel ways of understanding and detecting fake data, especially
through new machine learning approaches capable of combining syntactic
and perceptual analysis.

In parallel to soliciting the submission of relevant scientific works,
the Workshop will host a competition on deepfake detection. This is
organised with the support of the ELSA project - the European
Lighthouse on Secure and Safe AI, which builds on and extends the
existing, internationally recognized ELLIS (European Laboratory for
Learning and Intelligent Systems) network of excellence. The objective
of the challenge is to monitor and evaluate the development of
algorithms for deepfake detection, in terms of efficacy and
explainability. The challenge will be launched in June 2023. Submitted
papers do not need to be linked with the challenge.

The workshop calls for submissions addressing, but not limited to, the
following topics:
- Approaches for fake image detection, relying on low-level 
      hand-crafted features as well as on learnable and semantic approaches
- Partially-altered fake image detection
- GAN and Diffusion-based techniques with safety reassurance for 
      image and video synthesis and generation
- Video Deepfake detection and multimodal approaches to deepfake detection
- Approaches for detecting generated text and fake news, also based 
      on multimodal analysis
- Approaches and techniques for explainable deepfake detection
- Evaluation metrics for deepfake generation and detection systems

We invite the submission of full and short papers describing work in
the domains suggested above or in closely related areas.

Accepted submissions will be presented either as orals or as posters
at the workshop, and published in the ICCV 2023 Workshops proceedings.

Important dates:
- Paper submission deadline: July 17th, AoE
- Decision to authors: August 4th, AoE
- Camera-ready papers due: August 11th
- Workshop date: October 2nd or 3rd (TBD)

Organizers:
- Lorenzo Baraldi, University of Modena and Reggio Emilia
- Alessandro Nicolosi, Leonardo SpA
- Dmitry Kangin, Lancaster University
- Tamar Glaser, Meta
- Plamen Angelov, Lancaster University
- Rita Cucchiara, University of Modena and Reggio Emilia

For further information, please see the workshop website.