International Workshop on Multi-Modal Deep Learning: Challenges and Applications Call for Papers

FIRST CALL FOR PAPERS
International Workshop on Multi-Modal Deep Learning: Challenges and Applications
(MMDLCA 2020), 
Milan, Italy, 
January 11, 2021
https://medical-and-multimedia-lab.github.io/MMDLCA2020/

In conjunction with the 
25th International Conference on Pattern Recognition (ICPR 2020), 
Milan, Italy, January 10-15, 2021
https://www.micc.unifi.it/icpr2020

INFORMATION ON MMDLCA
Deep learning is now recognized as one of the key software engines
driving the new industrial revolution. The majority of current deep
learning research efforts have been dedicated to single-modal data
processing, with deep-learning-based visual recognition and speech
recognition as prominent examples. Although significant progress has
been made, single-modal data is often insufficient to derive accurate
and robust deep models in many applications. Our digital world is
inherently multi-modal, combining different modalities of data such
as text, audio, images, animations, videos and interactive content.
Multi-modal content is the most common form of information
representation and delivery. For example, posts about trending social
events are typically composed of textual descriptions, images and
videos, and in medical diagnosis the joint use of medical imaging and
textual reports is essential. Humans routinely rely on multi-modal
data to make accurate perceptions and decisions. Multi-modal deep
learning, which is capable of learning from information presented in
multiple modalities and making predictions based on multi-modal
input, is therefore much in demand.

This workshop calls for scientific work that presents the most recent
progress on multi-modal deep learning, in particular multi-modal data
capture, integration, modelling, understanding and analysis, and how
to leverage multi-modal data to derive accurate and robust AI models
in many applications. The topic is timely given the rapid development
of deep learning technologies and their remarkable applications in
many fields. The workshop will serve as a forum to bring together
active researchers and practitioners to share their recent advances
in this exciting area. In particular, we solicit original and
high-quality contributions that: (1) present state-of-the-art
theories and novel application scenarios related to multi-modal deep
learning; (2) survey the recent progress in this area; and (3)
develop benchmark datasets and evaluations. We welcome novel results
from various communities, e.g., visual computing, machine learning,
multimedia analysis, and distributed and cloud computing.

TOPICS
The list of topics includes, but is not limited to:
**	Multi-modal intelligent data acquisition and management
**	Multi-modal benchmark datasets and evaluations
**	Multi-modal representation learning and applications
**	Multi-modal data driven visual analysis and understanding
**	Multi-modal object detection, classification, recognition and segmentation
**	Multi-modal information tracking, retrieval and identification
**	Multi-modal social event analysis
**	Multi-modal medical diagnosis
**	Multi-modal machine learning from incomplete data
**	Deep neural network architectures for multi-modal data processing
**	Multi-modal big data analytics
**	Emerging multi-modal deep learning applications

SUBMISSION GUIDELINES 
Submissions must be formatted in accordance with Springer's Computer
Science Proceedings guidelines
(https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines).
Two types of contributions will be considered:
**	Full papers (10-12 pages, including references)
**	Short papers (6-8 pages, including references)
Accepted manuscripts will be included in the Springer volume of the
ICPR 2020 Workshop Proceedings. Once accepted, at least one author is
expected to attend the event and orally present the paper. The
submission platform will be available soon.

IMPORTANT DATES

    Workshop submission deadline: October 10th, 2020
    Workshop author notification: November 10th, 2020
    Camera-ready submission: November 15th, 2020
    Finalized workshop program: December 1st, 2020

CONTACTS
For any inquiries, please send an email to:
Zhineng Chen: zhineng.chen@ia.ac.cn
Xirong Li: xirong@ruc.edu.cn
Efstratios Gavves: e.gavves@uva.nl
Mei Chen: may4mc@gmail.com
Ioannis (Yiannis) Kompatsiaris: ikom@iti.gr