Semantic Reconstruction Challenge 2018 (at ECCV 2018)


This Challenge is associated with the

    3D Reconstruction meets Semantics 2018 Workshop

which is held in conjunction with ECCV 2018 (September 9th, Munich, Germany)


To support research on questions related to the integration of 3D
reconstruction with semantics, the workshop features a semantic
reconstruction challenge.

The goal of the challenge is to create a semantically annotated
3D model of a test scene.

The dataset was rendered from a drive through a semantically rich
virtual garden scene containing many fine structures. The virtual
model of the environment provides exact ground truth for the 3D
structure and semantics of the garden, and the images were rendered
from a virtual multi-camera rig, enabling the use of both stereo and
motion-stereo information.

The challenge participants
can submit their results in one or more categories:
1) the quality of the 3D reconstructions, 2) the quality of semantic
segmentation, and 3) the quality of semantically annotated 3D models.
Additionally, a dataset captured by a moving robot in a real garden
is available for validation.

Given a set of images and their known camera poses, the goal of the
challenge is to create a semantically annotated 3D model of the scene.
To this end, it will be necessary to compute depth maps for the images
and then fuse them together (potentially while incorporating
information from the semantics) into a single 3D model.
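To illustrate the fusion step, the following minimal sketch back-projects per-image depth maps into a single world-space point cloud using the known poses. It is only an assumed baseline, not the challenge's required pipeline: it assumes a simple pinhole intrinsic matrix and 4x4 camera-to-world poses, and it does no filtering or semantic weighting.

```python
import numpy as np

def backproject_depth(depth, K, cam_to_world):
    """Back-project one depth map into world-space 3D points.

    depth        -- (H, W) array of depths along the camera z-axis
    K            -- (3, 3) pinhole intrinsics (assumed convention)
    cam_to_world -- (4, 4) camera-to-world pose (assumed convention)
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Homogeneous pixel coordinates, one row per pixel.
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T           # pixel -> camera ray
    pts_cam = rays * depth.reshape(-1, 1)     # scale rays by depth
    pts_h = np.concatenate([pts_cam, np.ones((pts_cam.shape[0], 1))], axis=1)
    return (pts_h @ cam_to_world.T)[:, :3]    # camera -> world frame

def fuse_depth_maps(depths, poses, K):
    """Naively concatenate all per-view points into one cloud."""
    return np.concatenate(
        [backproject_depth(d, T, K) if False else backproject_depth(d, K, T)
         for d, T in zip(depths, poses)])
```

A real submission would replace the naive concatenation with volumetric or confidence-weighted fusion, and could use the per-pixel semantic labels to reject inconsistent depth estimates before fusing.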

Authors with the best scoring submissions will be able to
present their approach and results at the workshop.

We provide the following data for the challenge:
* A synthetic training sequence consisting of
  - 20k calibrated images with their camera poses,
  - ground truth semantic annotations for a subset of these images,
  - a semantically annotated 3D point cloud depicting the area of the
    training sequence.
* A synthetic testing sequence consisting of 5k calibrated images with
  their camera poses.
* A real-world validation sequence consisting of 268 calibrated images
  with their camera poses.

Both training and testing data are available at

Please see the git repository for details on the file formats.

This year we accept submissions in several categories: semantics and
geometry, either joint or separate. For example, if you have a
pipeline that first computes semantics and geometry independently and
then fuses them, we can compare how the fused result improves over
the separate ones.

The deadline for submitting to the challenge is July 10th (23:59 GMT).
Please follow the instructions on the website to submit your results
in the following categories:
A. Semantic Mesh
B. Geometric Mesh
C. Semantic Image Annotations

Radim Tylecek (