The 5th UG2+ Workshop and Prize Challenge: Bridging the Gap between Computational Photography and Visual Recognition. Call for Papers

In conjunction with CVPR 2022, June 19

Website: http://www.ug2challenge.org/
Challenge Registration: https://forms.gle/Td6DQuAJuuoLptN58
CMT: https://cmt3.research.microsoft.com/UG2CHALLENGE2022
Contact: cvpr2022.ug2challenge@gmail.com

Track 1: Object Detection in Haze Conditions

A dependable vision system must cope with the full spectrum of
complex, unconstrained, and dynamically degraded outdoor
environments. It is highly desirable to study to what extent, and in
what sense, such challenging visual conditions can be handled, toward
the goal of robust visual sensing. This challenge aims to evaluate and
advance the robustness of object detection algorithms in haze
conditions.

Track 2: Action Recognition from Dark Videos

Videos shot under adverse illumination are unavoidable in applications
such as night surveillance and self-driving at night. It is therefore
highly desirable to explore robust methods for coping with dark
scenes. It would be even better if such methods could exploit web
videos, which are widely available and often shot under poor
illumination. This challenge aims to promote the robustness of action
recognition algorithms, with a special focus on dark videos.

Track 3: Atmospheric Turbulence Mitigation

The theories of turbulence and the propagation of light through random
media have been studied for the better part of a century.
Nevertheless, progress in applying modern image reconstruction
algorithms (e.g., deep learning methods) to these conditions has been
slow. This challenge aims to promote the development of new image
reconstruction algorithms for incoherent imaging through
anisoplanatic turbulence.

Paper Track:

We invite paper submissions on topics including, but not limited to:

    Novel algorithms for robust object detection, segmentation or
    recognition on outdoor mobility platforms, such as UAVs, gliders,
    autonomous cars, outdoor robots, etc.

    Novel algorithms for robust object detection and/or recognition in
    the presence of one or more real-world adverse conditions, such as
    haze, rain, snow, hail, dust, underwater, low-illumination, low
    resolution, etc.

    Potential models and theories for explaining, quantifying, and
    optimizing the mutual influence between low-level computational
    photography tasks (image reconstruction, restoration, or
    enhancement) and various high-level computer vision tasks.

    Novel physically grounded and/or explanatory models of the
    underlying degradation and recovery processes for real-world
    images captured under complicated adverse visual conditions.

    Novel evaluation methods and metrics for image restoration and
    enhancement algorithms, with a particular emphasis on no-reference
    metrics, since for most real outdoor images with adverse visual
    conditions it is hard to obtain any clean "ground truth" to
    compare with.


Submission: https://cmt3.research.microsoft.com/UG2CHALLENGE2022

Important Dates:

    Paper submission: March 22, 2022 (11:59PM PST)
    Paper Acceptance Announcement: March 30, 2022 (11:59PM PST)
    Challenge result submission: May 1, 2022 (11:59PM PST)
    Winner Announcement: May 20, 2022 (11:59PM PST)
    CVPR Workshop: June 19, 2022 (Full day)


Speakers:

    Ming-Hsuan Yang (University of California, Merced)
    Danna Gurari (University of Colorado Boulder)
    Xiaohua Zhai (Google Brain)
    Achuta Kadambi (University of California, Los Angeles)
    Ulugbek Kamilov (Washington University in St. Louis)
    Angie Liu (Johns Hopkins University)
    Qifeng Chen (Hong Kong University of Science and Technology)
    Daniel LeMaster (Air Force Research Laboratory)
    Russell Hardie (University of Dayton)


Organizers:

    Zhangyang Wang (UT Austin)
    Jiaying Liu (Peking University)
    Walter J. Scheirer (University of Notre Dame)
    Stanley H. Chan (Purdue University)
    Wenqi Ren (Chinese Academy of Sciences)
    Shalini De Mello (NVIDIA)
    Keigo Hirakawa (University of Dayton)
    Wuyang Chen (UT Austin)
    Wenhan Yang (Nanyang Technological University, Singapore)
    Yuecong Xu (Institute for Infocomm Research (I2R), A*STAR, Singapore)
    Zhenghua Chen (Institute for Infocomm Research (I2R), A*STAR, Singapore)
    Zhenyu Wu (Wormpex AI Research)
    Zhiyuan Mao (Purdue University)
    Dejia Xu (UT Austin)
    Nicholas Chimitt (Purdue University)