DynaVis: The Second International Workshop on Dynamic Scene Reconstruction Call for Papers



https://dynavis.github.io/

Workshop at CVPR 2020, Seattle, Washington

Organizers:
Armin Mustafa, Marco Volino, Michael Zollhöfer,
Dan Casas, Christian Richardt, Adrian Hilton

Keynote Speakers

    Prof. Yaser Sheikh
    Director, Oculus Research Pittsburgh, and Carnegie Mellon University

    Prof. Raquel Urtasun (tbc)
Chief Scientist at Uber ATG and Head of Uber ATG Toronto,
    University of Toronto

Call for Contributions

Reconstruction of general dynamic scenes is motivated by applications
in film and broadcast production, together with the ultimate goal of
automatically understanding real-world scenes from distributed camera
networks. With recent advances in sensor hardware, learning-based
approaches, and virtual and augmented reality, dynamic scene
reconstruction is being applied to ever more complex scenes, with
applications in healthcare, security, education, and entertainment,
including games, film, and VR/AR.

We welcome contributions to this workshop in the form of oral
presentations, posters, and demos. Suggested topics include, but are
not limited to:

    Dynamic 3D reconstruction from single, stereo or multiple views

    Learning-based methods in dynamic scene reconstruction and understanding

    Multi-modal dynamic scene modelling (RGB-D, LiDAR, 360° video, light fields)

    4D reconstruction and modelling

    3D/4D data acquisition, representation, compression, and transmission 

    Scene analysis and understanding in 2D and 3D

    Structure from motion, camera calibration, and pose estimation

    Digital humans: motion and performance capture, bodies, faces, hands

    Geometry processing

    Computational photography

    Appearance reconstruction and modelling: materials, reflectance, illumination

    Scene modelling in the wild, moving cameras, handheld cameras

    Applications of dynamic scene reconstruction (VR/AR, character
    animation, free-viewpoint video, relighting, medical imaging,
    creative content production, animal tracking, HCI, sports)


We welcome submissions from both industry and academia, including
interdisciplinary work and work from outside of the mainstream
computer vision community. We also welcome submissions from the CVPR
main conference, regardless of their acceptance.

Submission website: https://cmt3.research.microsoft.com/DYNAVIS2020

Prizes

The Best Paper will receive an NVIDIA GeForce RTX 2080 GPU, courtesy
of our main sponsor NVIDIA.

Important Dates

Paper submission deadline:    Friday, 6 March 2020
Notification to authors:      Monday, 23 March 2020
Camera-ready deadline:        Friday, 3 April 2020