The 2nd Workshop on 3D Reconstruction in the Wild (3DRW2019) Call for Papers


Seoul, Korea - Oct. 28, 2019
In conjunction with ICCV2019

Workshop website

==Important Dates==

Paper Submission: July 30, 2019
Notification of Acceptance: August 22, 2019
Camera-Ready: August 29, 2019

/Call for Papers/

Research on 3D reconstruction has long focused on recovering 3D
information from multi-view images captured under ideal
conditions. However, the assumption of ideal acquisition conditions
severely limits where reconstruction systems can be deployed:
typically, several external factors must be controlled, intrusive
capturing devices must be used, or complex hardware setups must be
operated to acquire image data suitable for 3D reconstruction. In
contrast, 3D reconstruction in unconstrained settings (referred to as
3D reconstruction in the wild) imposes few or no restrictions on the
data acquisition procedure or the capturing environment, and therefore
represents a far more challenging task.

The goal of this workshop is to foster the development of 3D
reconstruction techniques that operate robustly, and ideally in real
time, under unconstrained conditions, and that perform well across
environments with widely different characteristics. Towards this goal,
we are interested in all components of the 3D reconstruction pipeline,
ranging from multi-camera calibration, feature extraction, matching,
data fusion, depth learning, and meshing to 3D modeling approaches
capable of operating on image data captured in the wild. Topics of
interest include, but are not limited to:

multi-view geometry
underwater camera calibration, refraction effects, and lighting/camera configurations
features from images captured in bad weather
features from backlit images
tracking in snow
distorted image matching
structure from super-wide-baseline images
structure from remote sensing images
structure-from-motion and visual odometry
depth from incomplete data
3D from hand-held cameras
3D from images captured by underwater cameras
3D from images captured using drones
3D from unordered image sequences/collections
3D from depth image sequences
3D from data (deep learning approach)
fusion for heterogeneous images
fusion for unreliable depth maps/sequences
mesh generation
mesh interpolation for deforming objects
reconstruction of thin objects
reconstruction in sports
reconstruction of planets
mapping, localization and SLAM
autonomous navigation
3D for agriculture, bio-imaging, and physics
benchmarking datasets for challenging scenarios

Jan-Michael Frahm (University of North Carolina at Chapel Hill, USA)
Adrian Hilton (The University of Surrey, UK)
Tomas Pajdla (Czech Technical University in Prague, Czech Republic)
Akihiro Sugimoto (National Institute of Informatics, Japan)