
Call for Participation - ACM Multimedia 2023 Grand Challenge:

== MultiMediate: Multi-modal Behaviour Analysis for Artificial Mediation ==
https://multimediate-challenge.org/ 

Artificial mediators are a promising approach to support
conversations, but at present their abilities are limited by
insufficient progress in behaviour sensing and analysis. The
MultiMediate challenge is designed to work towards the vision of
effective artificial mediators by facilitating and measuring progress
on key social behaviour sensing and analysis tasks. This year, the
challenge focuses on the recognition of bodily behaviours as well as
on engagement estimation. In addition, we continue to accept
submissions to previous years’ tasks, including backchannel
detection, agreement estimation from backchannels, eye contact
detection, and next speaker prediction.

== Bodily Behaviour Recognition Task ==
Bodily behaviours like fumbling, gesturing or crossed arms are key
signals in social interactions and are related to many higher-level
attributes including liking, attractiveness, social verticality,
stress, and anxiety. While impressive progress has been made on human
body and hand pose estimation, the recognition of such more complex
bodily behaviours remains underexplored. We formulate bodily behaviour
recognition as a 14-class multi-label classification task. This task is
based on the recently released BBSI annotations collected on the
MPIIGroupInteraction dataset. This dataset consists of video and
audio recordings of participants engaged in a group
discussion. Challenge participants will receive 64-frame video
snippets as input and need to classify which of the 14 behaviour
classes are present. To account for class imbalance, performance will
be evaluated using macro-averaged average precision.
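For illustration, here is a minimal Python sketch of how macro-averaged
average precision can be computed with scikit-learn. The array shapes and
variable names are assumptions made for this example and are not part of
the official evaluation script:

    import numpy as np
    from sklearn.metrics import average_precision_score

    # Assumed shapes: y_true and y_score are (n_snippets, 14) arrays, where
    # y_true[i, c] = 1 if behaviour class c is present in snippet i and
    # y_score[i, c] is the model's confidence for that class.
    y_true = np.random.randint(0, 2, size=(100, 14))
    y_score = np.random.rand(100, 14)

    # Average precision is computed per class and then averaged over the
    # 14 classes (macro averaging), so infrequent classes count equally.
    macro_ap = average_precision_score(y_true, y_score, average="macro")
    print(f"Macro-averaged AP: {macro_ap:.3f}")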

== Engagement Estimation Task ==
Knowing how engaged participants are is important for a mediator whose
goal is to keep engagement high. For the purpose of this
challenge, we collected novel annotations of engagement on the
Novice-Expert Interaction (NoXi) database. This database consists of
dyadic, screen-mediated interactions focussed on information
exchange. Interactions took place in several languages, and
participants were recorded with video cameras and microphones. The
task consists of frame-wise prediction of each participant's level of
conversational engagement on a continuous scale from 0 (lowest) to 1
(highest). We will use the Concordance
Correlation Coefficient (CCC) to evaluate predictions.
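For illustration, a minimal Python sketch of the Concordance Correlation
Coefficient on frame-wise predictions; the data below is synthetic and the
official evaluation runs on our servers:

    import numpy as np

    def ccc(y_true, y_pred):
        # Concordance Correlation Coefficient between two 1-D sequences.
        mean_true, mean_pred = y_true.mean(), y_pred.mean()
        var_true, var_pred = y_true.var(), y_pred.var()
        # Covariance between ground truth and prediction
        cov = ((y_true - mean_true) * (y_pred - mean_pred)).mean()
        return 2 * cov / (var_true + var_pred + (mean_true - mean_pred) ** 2)

    # Synthetic frame-wise engagement values in [0, 1]
    y_true = np.random.rand(1000)
    y_pred = np.clip(y_true + 0.1 * np.random.randn(1000), 0.0, 1.0)
    print(f"CCC: {ccc(y_true, y_pred):.3f}")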

== Continuing Tasks ==
We continue to invite submissions to the tasks introduced in
MultiMediate'21 and MultiMediate'22: eye contact detection, next
speaker prediction, backchannel detection, and agreement estimation
from backchannels. All of these tasks make use of the
MPIIGroupInteraction dataset.

== Dataset & Evaluation Protocol ==
Training datasets for all tasks are available from our website. We
will additionally provide baseline implementations along with
pre-computed features to minimise the overhead for participants. The
test sets for bodily behaviour recognition and engagement estimation
will be released two weeks before the challenge deadline. Participants
will then submit their predictions for evaluation against the ground
truth on our servers. For previous years' tasks, the test sets are
already published, and up to three evaluations on the test set can be
performed per month.

== How to Participate ==
Instructions are available at https://multimediate-challenge.org/ 
Paper submission deadline: 14 July 2023 AOE

== Organisers ==
Philipp Müller (German Research Center for Artificial Intelligence)
Tobias Baur (Augsburg University)
Dominik Schiller (Augsburg University)
Michael Dietz (Augsburg University)
Alexander Heimerl (Augsburg University)
Elisabeth André (Augsburg University)
Dominike Thomas (University of Stuttgart)
Andreas Bulling (University of Stuttgart)
Michal Balazia (INRIA Sophia Antipolis)
François Brémond (INRIA Sophia Antipolis)