CVPR 2024 Workshop and Challenge on DeepFake Analysis and Detection Call for Papers


********************************
CVPR 2024 Workshop and Challenge on DeepFake Analysis and Detection
https://www.dfad.unimore.it/
********************************

Organized in conjunction with CVPR 2024


=== SUBMISSIONS ARE OPEN!!! ===
--- Submission deadline (extended): March 22, 2024 AoE ---


Apologies for multiple postings
Please distribute this call to interested parties
                                                                     

AIMS AND SCOPE
===============
Machine-generated images are becoming increasingly common in the
digital world, thanks to the spread of Deep Learning models that can
generate visual data, such as Generative Adversarial Networks and
Diffusion Models. While image generation tools can be employed for
lawful goals (e.g., to assist content creators, generate simulated
datasets, or enable multi-modal interactive applications), there is a
growing concern that they might also be used for illegal and malicious
purposes, such as the forgery of natural images and the generation of
images in support of fake news, misogyny, or revenge porn. While
images generated in the past few years contained artefacts that made
them easy to recognize, today's results are far harder to distinguish
from real images from a purely perceptual point of view. In this
context, assessing the authenticity of images becomes a fundamental
goal for security and for guaranteeing a degree of trustworthiness of
AI algorithms. There is a growing need, therefore, for automated
methods that can assess the authenticity of images (and, more
generally, multimodal content) and that can keep pace with the
constant evolution of generative models, which become more realistic
over time.

The second Workshop and Challenge on DeepFake Analysis and Detection
(DFAD) focuses on the development of benchmarks and tools for fake
data understanding and detection, with the final goals of protecting
against visual disinformation and the misuse of generated images and
text, and of monitoring the progress of existing and proposed
detection solutions. Moreover, given the growing number of generative
models, detectors of generated content should generalize to content
produced by models unseen during the training phase. The workshop
fosters the submission of works that identify novel ways of
understanding and detecting fake data, especially through new machine
learning approaches capable of combining syntactic and perceptual
analysis.

TOPICS
======
The workshop calls for submissions addressing, but not limited to, the
following topics:

- Approaches for fake image detection, relying on either low-level,
hand-crafted features or learnable and semantic representations
- Partially-altered fake image detection
- Generalisation across attacks and generation methods
- GAN and Diffusion-based techniques with safety reassurance for image
and video synthesis and generation
- Video Deepfake detection and multimodal approaches to deepfake detection
- Approaches for detecting generated text and fake news, also based on
multimodal analysis
- Approaches and techniques for explainable deepfake detection
- Evaluation metrics for deepfake generation and detection systems

SUBMISSION GUIDELINES
======================
We invite participants to submit their work to the workshop as full
papers. Submitted papers can (but do not need to) be linked with the
challenge.

Accepted submissions will be presented either as orals or posters at
the workshop and published in the CVPR 2024 Workshops proceedings.


IMPORTANT DATES
================
- Paper Submission Deadline (extended): March 22, 2024 AoE
- Decision to Authors: March 15, 2024 AoE
- Camera ready papers due: Apr 1, 2024 AoE
- Final Workshop program date: May 15, 2024 AoE

ORGANIZERS
===========
- Lorenzo Baraldi, UNIMORE
- Alessandro Nicolosi, Leonardo SpA
- Dmitry Kangin, Lancaster University
- Tamar Glaser, Meta AI
- Plamen Angelov, Lancaster University
- Tal Hassner, Meta AI

CONTACTS
=========
For further information, please see the workshop website at 
https://www.dfad.unimore.it/