1st Vision for Vitals Challenge and Workshop Call for Papers

**********************************************************************
1st Vision for Vitals Challenge & Workshop
In conjunction with ICCV 2021 (virtual) (Oct 11th - Oct 17th, 2021)
Website: https://vision4vitals.github.io
Codalab: https://competitions.codalab.org/competitions/31978
CMT is open: https://cmt3.research.microsoft.com/V4V2021
**********************************************************************

---------------
CALL FOR PAPERS
---------------

Over the past few years, a number of research groups have made rapid
advances in remote PPG methods for estimating heart rate from digital
video, obtaining impressive results. How these methods compare under
naturalistic conditions, where spontaneous movements, facial
expressions, or illumination changes are present, is relatively
unknown, as most previous benchmarking efforts focused on posed
situations. No commonly accepted evaluation protocol exists for
comparing methods that estimate vital signs during spontaneous
behavior.

To enable comparisons among alternative methods, we present the 1st
Vision for Vitals Workshop & Challenge (V4V 2021). This topic is
germane to both the computer vision and multimedia communities. For
computer vision, it offers an exciting approach to longstanding
limitations of vital signs estimation; for multimedia, remote vital
signs estimation would enable more powerful applications.


Workshop (main) track
~~~~~~~~~~~~~~~~~~~~~
The main track is intended to bring together computer vision
researchers whose work is related to vision-based vital signs
estimation. We are soliciting original contributions that address a
wide range of theoretical and application issues in remote vital signs
estimation, including but not limited to:

- Methods for extracting vital signals from videos, including pulse
rate, respiration rate, blood oxygen, and body temperature.

- Vision-based methods to support and augment vital signs monitoring
systems, such as face/skin detection, motion tracking, video
segmentation, and optimization.

- Vision-based vital signs measurement for affective, emotional, or
cognitive states.

- Vision-based vital signs measurement to assist video surveillance
in-the-wild.

- Vision-based vital signs measurement to detect human liveness or
manipulated images (deep fake detection).

- Applications of vision-based vital signs monitoring.

- User interfaces employing vision-based vital signs estimation.

Challenge Track
~~~~~~~~~~~~~~~
The V4V Challenge evaluates remote PPG methods for vital signs
estimation on a new, large corpus of face videos paired with
corresponding high-resolution recordings and vital signs from contact
sensors. The goal of the challenge is to reconstruct the subjects'
vital signs from the video sources. Participants will receive an
annotated training set and a test set without annotations.
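To illustrate the kind of task the challenge poses, the sketch below shows a minimal, hedged baseline for recovering pulse rate from a face video: average the green channel over the skin region frame by frame, then take the dominant spectral peak within the plausible human pulse band. This is an illustrative example only, not the official challenge baseline or evaluation code; the function name and the synthetic trace are our own.

```python
import numpy as np

def estimate_heart_rate(green_means, fps):
    """Estimate pulse rate (BPM) from a 1-D trace of per-frame mean
    green-channel intensities, via the dominant FFT peak.

    Illustrative baseline only; NOT the official V4V method.
    """
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()              # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))       # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Restrict to a plausible pulse band: 0.7-4.0 Hz (42-240 BPM).
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0                        # Hz -> beats per minute

# Synthetic sanity check: a noisy 72-BPM pulse sampled at 30 fps for 10 s.
rng = np.random.default_rng(0)
fps, bpm = 30.0, 72.0
t = np.arange(0, 10, 1.0 / fps)
trace = 0.5 * np.sin(2 * np.pi * (bpm / 60.0) * t) + rng.normal(0, 0.05, t.size)
print(estimate_heart_rate(trace, fps))
```

Real in-the-wild videos add skin detection, motion compensation, and illumination normalization on top of such a pipeline, which is precisely where the spontaneous-behavior setting of this challenge becomes difficult.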

----------
SUBMISSION
----------

For paper submission, please use the CMT site:
https://cmt3.research.microsoft.com/V4V2021

To participate in the challenge, please visit the CodaLab page for more details:
https://competitions.codalab.org/competitions/31978
https://vision4vitals.github.io

---------------
IMPORTANT DATES
---------------

Challenge Track

May 21st: Challenge site opens, training data available
July 9th: Testing phase begins
July 30th: Competition ends (challenge paper submission - optional)

Workshop Track

July 26th: Paper submission deadline
August 9th: Notification of acceptance
August 16th: Camera ready submission

-------------------
WORKSHOP ORGANIZERS
-------------------
Laszlo A. Jeni, Carnegie Mellon University, USA
Lijun Yin, Binghamton University, USA

Data chairs:
Ambareesh Revanur, Carnegie Mellon University, USA
Zhihua Li, Binghamton University, USA
Umur A. Ciftci, Binghamton University, USA