Call for Papers: Workshop on Adversarial Robustness in the Real World

ICCV 2021 - 2nd Workshop on Adversarial Robustness in the Real World

October 11th, 2021

In conjunction with ICCV 2021 

 Web: https://iccv21-adv-workshop.github.io/


********************************************************

_________________

IMPORTANT DATES

_________________

 

Full Paper Submission: August 5th, 2021, Anywhere on Earth (AoE)

Notification of Acceptance: August 12th, 2021, Anywhere on Earth (AoE)

 

________________

CALL FOR PAPERS

________________

 

Computer vision systems nowadays often perform at a super-human level
on complex cognitive tasks, yet research in adversarial machine
learning shows that they are not as robust as the human visual
system. In this context, perturbation-based adversarial examples have
received great attention.

Recent work has shown that deep neural networks are also easily
challenged by real-world adversarial examples, such as partial
occlusion, viewpoint changes, atmospheric changes, or style
changes. Discovering and harnessing such adversarial examples helps
us understand and improve the robustness of computer vision methods
in real-world environments, which will in turn accelerate the
deployment of computer vision systems in safety-critical
applications. In this workshop, we aim to bring together researchers
from various fields, including robust vision, adversarial machine
learning, and explainable AI, to discuss recent research and future
directions for adversarial robustness and explainability, with a
particular focus on real-world scenarios.

Topics include but are not limited to:

    discovery of real-world adversarial examples

    novel architectures with robustness to occlusion, viewpoint, and
    other real-world domain shifts

    domain adaptation techniques for building robust vision systems
    in the real world

    datasets for evaluating model robustness

    adversarial machine learning for diagnosing and understanding
    limitations of computer vision systems

    improving generalization performance of computer vision systems to
    out-of-distribution samples

    structured deep models and explainable AI


 

_________________

INVITED SPEAKERS

_________________

 

- Kate Saenko, Boston University

- Alan Yuille, Johns Hopkins University

- Cihang Xie, University of California, Santa Cruz

- Aleksander Madry, MIT

- Ludwig Schmidt, MIT

- Tomaso Poggio, MIT

- Nicholas Carlini, Google

 

_________________

SUBMISSION AND REVISION

_________________

 

Submissions need to be anonymized and follow the ICCV 2021 Author Instructions.

http://iccv2021.thecvf.com/node/4#submission-guidelines

The workshop considers two types of submissions: (1) Long Paper:
papers are limited to 8 pages excluding references and will be
included in the official ICCV proceedings; (2) Extended Abstract:
papers are limited to 4 pages excluding references and will NOT be
included in the official ICCV proceedings. Please use the ICCV
template for extended abstracts.

 

Based on the program committee's recommendations, accepted long
papers and extended abstracts will be allocated either a contributed
talk or a poster presentation.

We invite submissions on any aspect of adversarial robustness in
real-world computer vision, including, but not limited to, the topics
listed above.

 _________________

WORKSHOP ORGANIZERS

_________________


Yingwei Li, Johns Hopkins University

Adam Kortylewski, Johns Hopkins University

Cihang Xie, University of California, Santa Cruz

Yutong Bai, Johns Hopkins University

Angtian Wang, Johns Hopkins University

Chenglin Yang, Johns Hopkins University

Xinyun Chen, UC Berkeley

Yinpeng Dong, Tsinghua University

Tianyu Pang, Tsinghua University

Jieru Mei, Johns Hopkins University

Nataniel Ruiz, Boston University

Alexander Robey, University of Pennsylvania

Wieland Brendel, University of Tübingen

Matthias Bethge, University of Tübingen

George Pappas, University of Pennsylvania

Philippe Burlina, Johns Hopkins University

Rama Chellappa, Johns Hopkins University

Dawn Song, UC Berkeley

Jun Zhu, Tsinghua University

Hang Su, Tsinghua University

Matthias Hein, University of Tübingen

Judy Hoffman, Georgia Tech

Alan Yuille, Johns Hopkins University