CALL FOR PAPERS

WORKSHOP ON LEARNING WITH FEW OR WITHOUT ANNOTATED FACE, BODY AND GESTURE DATA
https://sites.google.com/view/lfa-fg2024/


SCOPE

For more than a decade, Deep Learning has been successfully employed
for vision-based face, body and gesture analysis, at both static and
dynamic granularities. This success is largely due to the development
of effective deep architectures and the release of large-scale
datasets.

However, one of the main limitations of Deep Learning is that it
requires large-scale annotated datasets to train effective
models. Gathering such face, body or gesture data and annotating it
can be time-consuming and laborious. This is particularly true in
areas that require domain experts, such as the medical field, where
crowdsourcing may not be a suitable alternative.

In addition, currently available face and gesture datasets cover a
limited set of categories, which makes adapting trained models to
novel categories far from straightforward. Finally, while most
available datasets focus on classification problems with discrete
labels, many scenarios require continuous annotations, which
significantly complicates the annotation process.

The goal of this second edition of the workshop is to explore
approaches that overcome these limitations: learning from few
annotated samples, transferring knowledge from similar domains or
problems, and leveraging community efforts to gather novel
large-scale annotated datasets.



TOPICS

We encourage researchers and industry practitioners to submit
contributions under one of the following topics of interest, and also
welcome any novel relevant research in the field:
- Data augmentation methods for face, body and gesture
- Generative models for face, body and gesture
- Zero-shot / few-shot learning for face, body and gesture
- Leveraging large language models for face, body and gesture
- Self-supervised learning for face, body and gesture
- Weakly supervised learning for face, body and gesture
- Semi-supervised learning for face, body and gesture
- Transfer learning for face, body and gesture
- Adaptive/continual learning for face, body and gesture
- New annotated face, body and gesture benchmarks



IMPORTANT DATES

Paper Submission: March 17th, 2024
Authors Notification: April 15th, 2024
Camera Ready: April 22nd, 2024



ORGANIZING COMMITTEE

Dr. Maxime Devanne, Université de Haute-Alsace, maxime.devanne@uha.fr
Prof. Mohamed Daoudi, IMT Nord Europe, mohamed.daoudi@imt-nord-europe.fr
Prof. Stefano Berretti, Università di Firenze, stefano.berretti@unifi.it
Dr. Jonathan Weber, Université de Haute-Alsace, jonathan.weber@uha.fr
Prof. Germain Forestier, Université de Haute-Alsace, germain.forestier@uha.fr