Deep Learning for Vehicle Perception and Challenge on Efficient ConvNets for Semantic Segmentation -- Call for Papers

--------------------------------------------------------------------------
Workshop Title:
Deep Learning for Vehicle Perception and Challenge on Efficient ConvNets for Semantic Segmentation
http://www.deep-driving.net/


Details:

The recent advances in computer vision have been mainly driven by
deep learning, with no end in sight. Autonomous driving is an
exciting application domain for many computer vision problems. In a
vehicle, however, energy resources are very limited, while at the
same time there is a trend towards more sensors with higher
resolutions. This exposes a central tension in autonomous driving:
huge amounts of data have to be processed as fast and as accurately
as possible, with as little power consumption as possible.

This workshop will encourage work on efficient neural networks in the
context of autonomous driving. We challenge the IV community to
participate in a semantic segmentation challenge, offering a prize
for the winner. The workshop program will include invited talks,
presentations by the top three challenge teams, and a poster session
for submitted work.

Challenge Details

The prize is awarded to the winning team by a jury. It will be given
to the team showing the best compromise between high accuracy, high
speed (in frames per second), and low power consumption. The winning
team is required to give a detailed presentation (with reproducible
results) at the workshop. Papers describing the work are optional but
highly welcome for a special issue of IEEE T-IV.

The accuracy of the system will be measured following the
Cityscapes [1,2] benchmark for class segmentation. Testing will be
done on unpublished data recorded with the same car and camera system
that was used for the Cityscapes sequences. As a consequence,
participants have to process 2048x1024 px RGB images and output
2048x1024 px label maps according to the Cityscapes label set [3].
Prior to the deadline, a sequence of roughly one minute will be
supplied to interested teams. The results for the whole sequence have
to be submitted within one day, using the same labels and format as
the standard Cityscapes benchmark.
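
For illustration, here is a minimal sketch (not official challenge
code) of writing one prediction in the expected format. It assumes
cityscapesScripts [3] is installed, and `train_id_map` stands in for
the output of your own network:

    # Convert a 1024x2048 map of Cityscapes train IDs to label IDs and
    # save it as a submission PNG. `train_id_map` is a placeholder for
    # your own network's output (hypothetical).
    import numpy as np
    from PIL import Image
    from cityscapesscripts.helpers.labels import trainId2label

    def save_prediction(train_id_map, out_path):
        assert train_id_map.shape == (1024, 2048), "expect 2048x1024 px"
        label_id_map = np.zeros(train_id_map.shape, dtype=np.uint8)
        for train_id, label in trainId2label.items():
            if 0 <= train_id < 255:  # skip 'ignore' train IDs (-1, 255)
                label_id_map[train_id_map == train_id] = label.id
        Image.fromarray(label_id_map).save(out_path)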

For a fair comparison, submitting teams have to measure the speed of
their solution and specify the hardware used.
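
As a rough guide, speed could be measured end to end along the
following lines (a sketch, not a prescribed protocol; `predict` and
`frames` are placeholders for your own per-frame inference function
and input sequence, and GPU pipelines additionally need to
synchronize before reading the clock):

    # Minimal end-to-end throughput measurement in frames per second.
    import time

    def measure_fps(predict, frames, warmup=10):
        for frame in frames[:warmup]:  # warm-up runs, excluded from timing
            predict(frame)
        start = time.perf_counter()
        for frame in frames[warmup:]:
            predict(frame)
        elapsed = time.perf_counter() - start
        return (len(frames) - warmup) / elapsed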

The deadline for submissions is May 1st. Participants are requested
to render a video of the entire sequence and to show the results
during their presentation at the workshop.
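
One possible way to render such a video is sketched below, assuming
OpenCV is available and that the per-frame results were saved as
colour images; the directory, file pattern, and frame rate are
assumptions to be replaced by whatever matches your pipeline and the
supplied sequence:

    # Stitch saved per-frame colour results into a video with OpenCV.
    import glob
    import cv2

    frames = sorted(glob.glob("results/*_color.png"))  # hypothetical path
    height, width = cv2.imread(frames[0]).shape[:2]
    writer = cv2.VideoWriter("challenge_result.mp4",
                             cv2.VideoWriter_fourcc(*"mp4v"),
                             17, (width, height))  # frame rate: placeholder
    for path in frames:
        writer.write(cv2.imread(path))
    writer.release()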

References: 
[1] Cityscapes Dataset: M. Cordts, M. Omran, S. Ramos, T. Rehfeld,
    M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele,
    "The Cityscapes Dataset for Semantic Urban Scene Understanding,"
    in Proc. of the IEEE Conference on Computer Vision and Pattern
    Recognition (CVPR), 2016.  https://arxiv.org/abs/1604.01685

[2] Cityscapes Homepage: https://www.cityscapes-dataset.com/
[3] Cityscapes Github: https://github.com/mcordts/cityscapesScripts

For the label set, see: https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/helpers/labels.py