1st International Workshop on Distributed Smart Cameras (IWDSC), Call for Papers

Call for Papers for the 
1st International Workshop on Distributed Smart Cameras (IWDSC), 
which will be held virtually in conjunction
with the International Conference on Computer Vision (ICCV).

All the following information is also available at https://iwdsc.github.io/

Apologies if you receive this more than once.

================================================================
About IWDSC
================================================================
Smart camera networks are becoming a fundamental piece of our
intelligent cities, buildings, and homes, progressively inserting
themselves into our lives. From smart surveillance systems composed of
a multitude of smart camera nodes to small wearable cameras able to
render a visual log of our daily experience, these devices interact
with each other, with a wealth of other smart things, and of course
with the internet. Their rapid development is made possible by the
convergence of several technologies, from advanced image sensors and
vision chips to embedded vision systems capable of efficient feature
extraction, image encoding, and wireless transmission of the relevant
visual content. This technology opens the door to new application
domains, where video analytics and the extraction of semantic
information from the scene are performed in a distributed fashion,
implementing a new type of cooperative and/or collaborative vision.

The IWDSC team has previously organized 12 editions of the
International Conference on Distributed Smart Cameras (ICDSC) with ACM
as an international forum to discuss recent advances and open issues
on these topics.

================================================================
Papers are invited in the following and related areas:
================================================================
=> Smart cameras in Internet of Things (IoT), Industry 4.0 and smart transport
- Smart cameras as IoT sensors
- Intelligent traffic cameras
- Smart IoT security cameras for smart homes
- Smart cameras for Industry 4.0
- Vision sensors in Vehicular Ad-hoc Networks (VANETs)

=> Mobile Vision, 3D and Robotics
- 3D reconstruction of everything
- Driver assistance, autonomous driving
- Social robots
- Drone-based distributed smart cameras
- Structure-from-motion in mobile devices
- Visual landmark localization
- Active vision and mobile/body-worn cameras
- Position discovery and middleware apps

=> Smart image sensors and vision chips
- Circuits and systems for image sensing
- Parallel processing hardware
- Cameras beyond the visual spectrum
- HW/SW codesign for embedded vision
- Reconfigurable vision processing architectures
- High-performance image sensors
- Neuromorphic cameras and networks

=> Machine learning in distributed camera networks
- Deep learning on smart cameras
- Transforming embedded vision through deep learning
- Distributed learning algorithms in visual sensor networks
- Novel machine learning implementations on smart cameras
- Machine learning for low-power sensors
- Artificial intelligence and deep learning at the edge

=> Distributed smart cameras and network architectures
- Camera system designs and architectures
- Architectures for camera networks
- Embedded vision programming
- Distributed video coding
- Self-reconfiguring camera networks
- Self-organizing smart cameras
- Wireless and mobile image sensor networks
- Context-aware networks
- Distributed video analytics
- Resource management and task allocation
- Data aggregation and information fusion
- Collaborative object recognition and extraction

=> Emerging applications
- Processing split between camera and cloud
- Virtual reality and augmented reality
- Smart cameras and semantic information
- Smart cities and smart cameras
- Camera-based health and wellness monitoring
- AI and video analytics for smart cameras
- Social media and big data
- Smart cameras for situational intelligence and interaction

================================================================
Submission
================================================================
The submission site is open and accessible at:
https://cmt3.research.microsoft.com/DSC2021

Each paper will be reviewed by at least two members of the scientific
Program Committee in a double-blind fashion. Submitted papers should
present original work that is not currently under review elsewhere and
has no substantial overlap with already published work. Papers should
be submitted as PDF files and should be no more than 8 pages, following
the ICCV main conference paper format and guidelines
(https://iwdsc.github.io/submissions/).

================================================================
Important Dates
================================================================
Paper Submission Deadline: July 17, 2021 (11:59PM Pacific Time)
Paper Submission Deadline for main conference rejected papers (*): July 28, 2021 (11:59PM Pacific Time)
Reviews Released to Authors: August 11, 2021 (11:59PM Pacific Time)
Camera-ready due: August 16, 2021 (11:59PM Pacific Time)
(*) In case of rejection from ICCV, authors can submit their work to
the workshop. Authors should address all ICCV reviewers' comments
in the submitted paper and submit the ICCV reviews as supplementary
material.

================================================================
Proceedings and Special Issue 
================================================================
Accepted papers will be published in the ICCV Workshop
Proceedings. Authors of high-quality papers will be invited to extend
their work and submit it to a special issue of a JCR-indexed journal.

================================================================
Organizing Team
================================================================
Niki Martinel, University of Udine, Italy
Ehsan Adeli, Stanford University, USA
Caifeng Shan, Shandong University of Science and Technology, China
Anima Anandkumar, Caltech/NVIDIA, USA
Yue Gao, Tsinghua University, China
Hamid Aghajan, Ghent University, Belgium
Christian Micheloni, University of Udine, Italy
Fei-Fei Li, Stanford University, USA