2nd 3D Face Alignment in the Wild Challenge & Workshop Call for Papers

In conjunction with ICCV 2019, Seoul, Korea (Oct 27th - Nov 2nd, 2019)
Website: https://3dfaw.github.io
CMT is open: https://cmt3.research.microsoft.com/3DFAW2019


Over the past few years, a number of research groups have made rapid
advances in dense 3D alignment from 2D images and obtained impressive
results. How these various methods compare is relatively
unknown. Previous benchmarks addressed sparse 3D alignment and
single-image 3D reconstruction; no commonly accepted evaluation
protocol exists for dense 3D face reconstruction from video with which
to compare them.

To enable comparisons among alternative methods, we present the 2nd 3D
Face Alignment in the Wild - Dense Reconstruction from Video
Challenge. This topic is germane to both computer vision and
multimedia communities. For computer vision, it is an exciting
approach to longstanding limitations of single-image 3D reconstruction
approaches. For multimedia, 3D alignment would enable more powerful
applications.

Workshop track
The workshop track is intended to bring together computer vision
researchers whose work is related to 2D or 3D face alignment. We are
soliciting original contributions which address a wide range of
theoretical and application issues of 3D face alignment for computer
vision applications, including but not limited to:

- 3D face alignment from 2D images
- Model- and stereo-based 3D face reconstruction
- Dense and sparse face tracking from 2D and 3D inputs
- Applications in AR / VR
- Face alignment for embedded and mobile devices
- Facial expression retargeting (avatar animation)
- Face alignment-based user interfaces

Challenge Track
The challenge track evaluates 3D face reconstruction methods on a new
large corpus of profile-to-profile face videos annotated with
corresponding high-resolution 3D ground truth meshes. The corpus
includes profile-to-profile videos obtained under two recording
conditions:

- high-definition in-the-lab video, and
- unconstrained video from an iPhone device.

For each subject, high-resolution 3D ground truth scans were obtained
using a Di4D imaging system. The goal of the challenge is to
reconstruct the 3D structure of the face from the two different video
sources.

For paper submission, please use the CMT site:
https://cmt3.research.microsoft.com/3DFAW2019
For participating in the challenge, please visit the CodaLab page for more details.


Important Dates

Challenge Track
June 27th: Challenge site opens
August 1st: Testing phase begins
August 15th: Competition ends

Workshop Track
July 31st: Paper submission deadline
August 21st: Notification of acceptance
August 28th: Camera ready submission

Organizers:
Laszlo A. Jeni, Carnegie Mellon University, USA
Jeffrey F. Cohn, University of Pittsburgh, USA
Lijun Yin, Binghamton University, USA

Data chairs:
Rohith Krishnan Pillai, Carnegie Mellon University, USA
Huiyuan Yang, Binghamton University, USA
Zheng Zhang, Binghamton University, USA

Program committee:
Abhinav Dhall, Australian National University, Australia
Gábor Szirtes, KÜRT Akadémia / bsi.ai
Hamdi Dibeklioglu, Bilkent University, Turkey
Michel Valstar, University of Nottingham, UK
Patrik Huber, University of Surrey, UK
Sergio Escalera, University of Barcelona, Spain
Shaun Canavan, University of South Florida, USA
Vitomir Štruc, University of Ljubljana, Slovenia
Xiaoming Liu, Michigan State University, USA
Xing Zhang, A9
Zoltan Kato, University of Szeged, Hungary

A special issue in a top journal is planned.

For more information: https://3dfaw.github.io