Uncertainty Quantification for Computer Vision Call for Papers
********************************
Call for Papers
Uncertainty Quantification for Computer Vision
Workshop & Challenge at ICCV 2023 (2nd Edition)
https://uncv2023.github.io/
********************************
Submission Deadline: July 18th, 2023 (AOE)
Two types of papers are welcome:
- Regular Papers -
(novel contributions not published previously)
- Extended Abstracts -
(preliminary works, or papers that have already been accepted for
publication)
----------------------------
In the last decade, substantial progress has been made in the
performance of computer vision systems, in large part thanks to deep
learning. These advances have prompted sharp community growth and a
rise in industrial investment. However, most current models lack the
ability to reason about the confidence of their predictions;
integrating uncertainty quantification into vision systems will help
recognize failure scenarios and enable robust applications.
The ICCV 2023 workshop on Uncertainty Quantification for Computer
Vision will consider recent advances in methodology and applications
of uncertainty quantification in computer vision. Prospective authors
are invited to submit papers on relevant algorithms and applications
including, but not limited to:
* Applications of uncertainty quantification
* Failure prediction (e.g., OOD detection)
* Robustness in CV
* Safety critical applications in CV
* Domain-shift in CV
* Probabilistic deep models
* Deep probabilistic models
* Deep ensemble uncertainty
* Connections between NNs and GPs
* Incorporating explicit prior knowledge in deep learning
* Computational aspects and real-time probabilistic inference
* Output ambiguity, multi-modality and diversity
All papers will be peer-reviewed. Accepted Regular papers will be
presented at the workshop and included in the ICCV Workshop
Proceedings.
Challenge
The workshop features the MUAD Uncertainty Estimation for Semantic
Segmentation Challenge, which evaluates the uncertainty estimation
performance of semantic segmentation models. Participants will
download the training and validation sets (containing the RGB images
and the corresponding ground-truth maps) as well as the test set
(only the RGB images are provided), then design and train their
models. They should then submit confidence maps that provide enough
information for a decision-maker to identify the out-of-distribution
objects in the test set images. Some test set images feature varying
levels of adverse weather (rain, fog, snow), which will challenge the
robustness of the models.
The MUAD challenge link: click here.
More information about the MUAD dataset and its download link are
available at the MUAD website.
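As a simple illustration of what a per-pixel confidence map can look like (this is not an official challenge baseline; the maximum-softmax-probability heuristic and the array shapes here are our own assumptions), each pixel can be scored by the probability of its predicted class, so that low-confidence pixels flag likely out-of-distribution regions:

```python
import numpy as np

def confidence_map(logits):
    """Per-pixel confidence from softmax probabilities.

    logits: array of shape (num_classes, H, W).
    Returns an (H, W) map; low values flag likely OOD pixels
    (maximum-softmax-probability heuristic).
    """
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=0, keepdims=True)
    probs = np.exp(z)
    probs /= probs.sum(axis=0, keepdims=True)
    # Confidence = probability of the predicted class at each pixel.
    return probs.max(axis=0)

# Toy example: 3 classes on a 2x2 image.
logits = np.zeros((3, 2, 2))
logits[0, 0, 0] = 10.0  # one pixel where the model is very confident
conf = confidence_map(logits)
```

In this toy example the pixel at (0, 0) gets confidence close to 1, while the remaining pixels (uniform logits) get confidence 1/3; a real submission would replace the toy logits with a trained segmentation network's outputs.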
Submission Instructions
At the time of submission, authors must indicate the desired paper
track:
Regular papers will be peer-reviewed following the same policy as
the main conference and will be published in the proceedings (call
for papers with guidelines and template here; max 8 pages, with
additional pages allowed for references only). These are meant to
present novel contributions not published previously (submitted
papers should not have been published, accepted, or under review
elsewhere).
Extended abstracts are meant for preliminary works and short
versions of papers that have already been accepted, or are under
review, preferably within the last year at major conferences or
journals. These papers will undergo a separate reviewing process
to assess their suitability for the workshop. They will *not
appear* in the workshop proceedings. Template and guidelines (max
4 content pages, additional pages allowed for references) here.
Submission site:
https://openreview.net/group?id=thecvf.com/ICCV/2023/Workshop/UnCV
Important Dates (All times are end of day AOE)
Submission deadline: July 18th, 2023
Notification of acceptance: August 5th, 2023
Camera-ready deadline: August 10th, 2023
Organizing Committee
Andrea Pilzer, NVIDIA, Italy
Elisa Ricci, University of Trento, Italy
Gianni Franchi, ENSTA Paris, France
Andrei Bursuc, valeo.ai, France
Arno Solin, Aalto University, Finland
Martin Trapp, Aalto University, Finland
Rui Li, Aalto University, Finland
Angela Yao, National University of Singapore, Singapore
Wenlong Chen, Imperial College London, UK
Ivor Simpson, University of Sussex, UK
Neill D. F. Campbell, University of Bath, UK