First International Workshop on Affective Understanding in Video: Call for Papers

We are pleased to share the Call for Papers for the
First International Workshop on Affective Understanding in Video (AUVi)
and the Evoked Expressions in Videos (EEV) Challenge at CVPR 2021.

Important Dates


        Workshop:

        Submission deadline: March 21, 2021

        Paper acceptance notification: April 7, 2021

        Camera-ready submission deadline: April 15, 2021

        Challenge:

        Challenge opens: March 1, 2021

        Challenge closes: April 24, 2021

        Winners announced: May 1, 2021

Papers are limited to 8 pages (excluding references) in the CVPR 2021
format. Topics include, but are not limited to, the following:

    Methods to recognize affective expressions evoked in viewers by
    videos, e.g., by music/audio, scenes, or character interactions,
    and related topics.

    Methods to recognize affective expressions of people shown in
    videos, including facial expressions, body expressions, voice
    expressions, and related topics.

    Ethics, bias, and fairness in modeling and datasets for the problem
    of affective understanding in videos.

    Multimodal techniques for understanding affective expressions,
    including non-visual signals such as audio or speech.

    Explainability and interpretability in the context of affective
    video understanding.

    Temporal context and scene context in affective video understanding.

    Cross-cultural analysis of affect and subjective annotations.

    Open public academic datasets to understand affective expressions
    in video.

    Applications of affective understanding of videos to industry.

Evoked Expressions from Videos (EEV) Challenge

    Given a video, how well can models predict viewers' facial reactions
    at each timestamp while they watch the video?

    Predicting evoked facial expressions from video is challenging, as
    it requires modeling signals from different modalities (visual and
    audio) potentially over long timescales.

    Our challenge uses the EEV dataset, a novel dataset collected
    from reaction videos, to study these variations in facial
    expressions as viewers watch a video. Registration details are
    available on the competition website.

For more information, please visit the websites of
the workshop
and the competition.

AUVi Workshop Organizing Committee
Jennifer J. Sun (Caltech)
Gautam Prasad (Google)
Ting Liu (Google)
Agata Lapedriza (Universitat Oberta de Catalunya)
James Z. Wang (Penn State)
Vikash Gupta (Mayo Clinic)
Sara Ostadabbas (Northeastern University)
Salil Soman (Harvard Medical School)