Call for Papers: 1st Autonomous Vehicle Vision (AVVision'21) Workshop

In conjunction with WACV 2021

The Autonomous Vehicle Vision 2021 (AVVision'21) workshop (webpage:
avvision.xyz) aims to bring together industry professionals and
academics to brainstorm and exchange ideas on the advancement of
visual environment perception for autonomous driving. In this one-day
workshop, we will have regular paper presentations and invited
speakers to present the state of the art as well as the challenges in
autonomous driving. Furthermore, we have prepared several large-scale
synthetic and real-world datasets, which have been annotated by the
Hong Kong University of Science and Technology (HKUST), UDI, CalmCar,
ATG Robotics, etc. Based on these datasets, three challenges will be
hosted to understand the current status of computer vision and
machine/deep learning algorithms in solving the visual environment
perception problems for autonomous driving: 1) CalmCar MTMC Challenge,
2) HKUST-UDI UDA Challenge, and 3) KITTI Object Detection Challenge.

 

Keynote Speakers:

 
    Andreas Geiger, University of Tübingen
    Ioannis Pitas, Aristotle University of Thessaloniki
    Nemanja Djuric, Uber ATG
    Walterio Mayol-Cuevas, University of Bristol & Amazon

 

Call for Papers:

With a number of breakthroughs in autonomous system technology over
the past decade, the race to commercialize self-driving cars has
become fiercer than ever. The integration of advanced sensing,
computer vision, signal/image processing, and machine/deep learning
into autonomous vehicles enables them to perceive the environment
intelligently and navigate safely. Autonomous driving systems must
provide safe, reliable, and efficient automated mobility in complex,
uncontrolled, real-world environments, with applications ranging from
automated transportation and farming to public safety and environment
exploration. Visual perception is a critical component of autonomous
driving. Enabling technologies include:

    a) affordable sensors that can acquire useful data under varying
       environmental conditions;
    b) reliable simultaneous localization and mapping (SLAM);
    c) machine learning that can effectively handle varying
       real-world conditions and unforeseen events, as well as
       "machine-learning friendly" signal processing that enables
       more effective classification and decision making;
    d) hardware and software co-design for efficient real-time
       performance;
    e) resilient and robust platforms that can withstand adversarial
       attacks and failures;
    f) end-to-end system integration of sensing, computer vision,
       signal/image processing, and machine/deep learning.

The AVVision'21 workshop will cover all these topics. Research papers
are solicited in, but not limited to, the following areas:

    3D road/environment reconstruction and understanding;
    Mapping and localization for autonomous cars;
    Semantic/instance driving scene segmentation and semantic mapping;
    Self-supervised/unsupervised visual environment perception;
    Car/pedestrian/object/obstacle detection/tracking and 3D localization;
    Car/license plate/road sign detection and recognition;
    Driver status monitoring and human-car interfaces;
    Deep/machine learning and image analysis for car perception;
    Adversarial domain adaptation for autonomous driving;
    On-board embedded visual perception systems;
    Bio-inspired vision sensing for car perception;
    Real-time deep learning inference.

 

Author Guidelines:

 

Authors are encouraged to submit high-quality, original research,
i.e., work that has not been previously published or accepted for
publication in substantially similar form in any peer-reviewed venue,
including journals, conferences, and workshops.

 

The paper template is identical to that of the WACV 2021 main
conference. The author toolkit (LaTeX only) is available on both
Overleaf and GitHub. Submissions are handled through the CMT
submission website: https://cmt3.research.microsoft.com/AVV2021/.


Papers presented at the WACV workshops will be published as part of
the "WACV Workshops Proceedings" and should, therefore, follow the
same presentation guidelines as the main conference. Workshop papers
will be included in IEEE Xplore, but will be indexed separately from
the main conference papers.

 

For questions/remarks regarding submissions, e-mail
avv.workshop@gmail.com.

 

Challenges:

 

Challenge 1: CalmCar MTMC Challenge

Multi-target multi-camera (MTMC) tracking systems automatically track
multiple vehicles using an array of cameras. In this challenge,
participants are required to design robust MTMC algorithms targeted
at vehicles, such that the same vehicle captured by different cameras
is assigned the same tracking ID. The competitors will have access to
four large-scale training datasets, each of which includes around
1200 annotated RGB images, where the labels cover vehicle types,
tracking IDs, and 2D bounding boxes. Identification precision (IDP)
and identification recall (IDR) will be used as metrics to evaluate
the performance of the implemented algorithms. The competitors are
required to submit their pretrained models as well as the
corresponding Docker image files via the CMT submission system for
algorithm evaluation (in terms of both speed and accuracy). The
winner of the competition will receive a monetary prize (US$5000) and
will give a keynote presentation at the workshop.
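
For reference, IDP and IDR are the standard identity-aware tracking
metrics (Ristani et al., 2016): with IDTP, IDFP, and IDFN denoting
identity true positives, false positives, and false negatives under
the optimal ground-truth-to-prediction identity matching, IDP =
IDTP / (IDTP + IDFP) and IDR = IDTP / (IDTP + IDFN). A minimal Python
sketch of the final ratio computation (illustrative only; the
official evaluation protocol is defined by the organizers):

    def id_metrics(idtp: int, idfp: int, idfn: int):
        """Identification precision (IDP) and recall (IDR) from
        identity match counts. The counts are assumed to come from
        the optimal identity matching between ground-truth and
        predicted trajectories (not computed here)."""
        idp = idtp / (idtp + idfp) if (idtp + idfp) > 0 else 0.0
        idr = idtp / (idtp + idfn) if (idtp + idfn) > 0 else 0.0
        return idp, idr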

 

Challenge 2: HKUST-UDI UDA Challenge

Deep neural networks excel at learning from large amounts of data,
but they can be inefficient when it comes to generalizing and
applying learned knowledge to new datasets or environments. In this
competition, participants need to develop an unsupervised domain
adaptation (UDA) framework that allows a model trained on a large
synthetic dataset to generalize to real-world imagery. The tasks in
this competition include: 1) UDA for monocular depth prediction and
2) UDA for semantic driving-scene segmentation. The competitors will
have access to the Ready to Drive (R2D) dataset, a large-scale
synthetic driving-scene dataset collected under different
weather/illumination conditions using the CARLA simulator. In
addition, competitors will also have access to a small amount of
real-world data. The mean absolute relative error (mAbsRel) and the
mean intersection over union (mIoU) score will be used as metrics to
evaluate the performance of UDA for monocular depth prediction and
UDA for semantic driving-scene segmentation, respectively. The
competitors will be required to submit their pretrained models and
Docker image files via the CMT submission system.
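
For clarity, AbsRel averages the per-pixel absolute relative depth
error |d_pred - d_gt| / d_gt over valid pixels (mAbsRel averages this
over the test set), while mIoU averages per-class intersection over
union across the label set. A minimal NumPy sketch of both metrics
(variable names and shapes are illustrative, not the official
evaluation code):

    import numpy as np

    def abs_rel(pred, gt):
        """Mean absolute relative depth error over valid (gt > 0)
        pixels; pred and gt are depth maps of the same shape."""
        mask = gt > 0
        return np.mean(np.abs(pred[mask] - gt[mask]) / gt[mask])

    def mean_iou(pred, gt, num_classes):
        """Mean intersection over union over label maps, skipping
        classes absent from both prediction and ground truth."""
        ious = []
        for c in range(num_classes):
            inter = np.logical_and(pred == c, gt == c).sum()
            union = np.logical_or(pred == c, gt == c).sum()
            if union > 0:
                ious.append(inter / union)
        return float(np.mean(ious))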

 

Challenge 3: KITTI Object Detection Challenge

Researchers behind top-ranked object detection algorithms submitted
to the KITTI Object Detection Benchmarks will have the opportunity to
present their work at AVVision'21, subject to space availability and
approval by the workshop organizers. Note that only algorithms
submitted before 12/20/2020 are eligible for presentation at
AVVision'21.

 

Important Dates:

Full Paper Submission: 11/02/2020

Notification of Acceptance: 11/23/2020

Camera-Ready Paper Due: 11/30/2020
 

HKUST-UDI UDA Challenge abstract and code submission: 12/13/2020

Notification of HKUST-UDI UDA Challenge results: 12/20/2020

CalmCar MTMC Challenge abstract and code submission: 12/13/2020

Notification of CalmCar MTMC Challenge results: 12/20/2020