Call for Papers
WORKSHOP ON LEARNING WITH FEW OR NO ANNOTATED FACE, BODY, AND GESTURE DATA
https://sites.google.com/view/lfa-fg2026/home
AIM AND SCOPE
For more than a decade, deep learning has been successfully employed
for vision-based face, body, and gesture analysis, at both static and
dynamic granularities. This success is largely due to the development
of effective deep architectures and the release of sizeable annotated
datasets.
However, one of the main limitations of deep learning is that it
requires large-scale annotated datasets to train effective
models. Gathering such face, body, or gesture data and annotating it
can be very time-consuming and laborious. This is particularly true in
areas where domain experts are required, such as the medical
domain. In such cases, crowdsourcing may not be a suitable
alternative, notably due to privacy concerns and regulations.
In addition, currently available face and/or gesture datasets cover a
limited set of categories, which makes adapting trained models to
novel categories far from straightforward. Finally, while most
available datasets focus on classification problems with discretized
labels, many scenarios require continuous annotations, which
significantly complicates the annotation process.
The goal of this 4th edition of the workshop is to explore approaches
to overcome such limitations by investigating ways to learn from few
annotated data, to transfer knowledge from similar domains or
problems, to generate synthetic data, or to benefit from the community
to gather novel large-scale annotated datasets.
TOPICS
We encourage researchers and industry practitioners to submit their
contributions under one of the following topics of interest, but we
also welcome any novel relevant research in the field:
- Data augmentation methods for face, body and gesture
- Generative models and synthetic face, body and gesture data
- Zero-shot / few-shot learning for face, body and gesture
- Leveraging Large Language Models for face, body and gesture
- Self-supervised learning for face, body and gesture
- Weakly supervised learning for face, body and gesture
- Semi-supervised learning for face, body and gesture
- Transfer learning for face, body and gesture
- Adaptive/continuous learning for face, body and gesture
- New annotated face, body and gesture benchmarks
- Fairness and Biases in data collection and analysis
IMPORTANT DATES
Paper Submission: March 20th, 2026
Author Notification: April 14th, 2026
Camera Ready: April 21st, 2026
ORGANIZING COMMITTEE
Dr. Maxime Devanne, Université de Haute-Alsace, maxime.devanne@uha.fr
Prof. Guido Borghi, University of Modena and Reggio Emilia, guido.borghi@unimore.it
Prof. Mohamed Daoudi, IMT Nord Europe, mohamed.daoudi@imt-nord-europe.fr
Prof. Stefano Berretti, Università di Firenze, stefano.berretti@unifi.it
Prof. Jonathan Weber, Université de Haute-Alsace, jonathan.weber@uha.fr
Prof. Germain Forestier, Université de Haute-Alsace, germain.forestier@uha.fr