23rd International Conference on MultiMedia Modeling Call for Papers

MMM 2017, the 23rd International Conference on MultiMedia Modeling
Web site: http://mmm2017.ru.is/

Welcome to MMM 2017, the 23rd International Conference on MultiMedia
Modeling, in Reykjavík, Iceland, which takes place January 4-6, 2017, on
the modern campus of Reykjavik University.

Now in its 23rd year, MMM is a leading international conference where
researchers and industry practitioners share new ideas, original
research results and practical development experiences from all
MMM-related areas. For details of the call for contributions, please
see the sections below.

There will be two main social events at MMM 2017: a welcome reception
featuring the Video Browser Showdown, on January 4 at Reykjavik
University, and the conference banquet which will take place at the
exotic Blue Lagoon on the evening of January 5. Optional tours allow
participants to further enjoy their stay on the beautiful island.

We sincerely hope that the carefully crafted conference program, the
scientific discussions it will stimulate, and your additional
activities in Reykjavík and its surroundings will make participation
in MMM 2017 a valuable and memorable experience.

On behalf of the conference organizers and our hosts at Reykjavik
University, we sincerely welcome you to MMM 2017.

Björn Þór Jónsson and Cathal Gurrin, General Chairs
Laurent Amsaleg and Shin’ichi Satoh, Program Chairs

===== Conference Overview ===============================

MMM 2017 calls for full research papers reporting original research
and investigation results, as well as demonstration proposals, in all
areas related to multimedia modeling technologies and applications. In
particular, six special sessions will be held, representing a wide
variety of topics of current interest and importance. Finally, MMM
2017 will host the popular Video Browser Showdown.

The topics of interest for MMM 2017 include, but are not limited to:

Multimedia Content Analysis:
- Multimedia Indexing
- Multimedia Mining
- Multimedia Abstraction and Summarisation
- Multimedia Annotation, Tagging and Recommendation
- Multimodal Analysis for Retrieval Applications
- Semantic Analysis of Multimedia and Contextual Data
- Multimedia Fusion Methods
- Media Content Linking and Threading Methods
- Media Content Browsing and Retrieval Tools

Multimedia Signal Processing and Communications:
- Media Representation and Algorithms
- Audio, Image, Video Processing, Coding and Compression
- Multimedia Sensors and Interaction Modes
- Multimedia Privacy, Security and Content Protection
- Multimedia Standards and Related Issues
- Advances in Multimedia Networking and Streaming
- Multimedia Databases, Content Delivery and Transport
- Wireless and Mobile Multimedia Networking

Multimedia Applications and Services:
- Multi-Camera and Multi-View Systems
- Augmented and Virtual Reality, Virtual Environments
- Real-Time and Interactive Multimedia Applications
- Mobile Multimedia Applications
- Multimedia Web Applications
- Multimedia Authoring and Personalisation
- Interactive Multimedia and Interfaces
- Sensor Networks (Video Surveillance, Distributed Systems)
- Social and Educational Multimedia Applications
- Other Emerging Trends (e-learning, e-Health, Multimedia Collaboration, etc.)

Other topics related to MultiMedia Modeling that are not listed above are welcome as well. Please also see the description of special sessions below.

===== Submission Details and Links =========================

The conference proceedings will be published by Springer as a volume
of the Lecture Notes in Computer Science (LNCS) series. Submissions
should conform to the formatting instructions of the Springer LNCS
series and adhere to the submission schedule. Regular research paper
submissions (including special session papers) will be peer-reviewed
in a double-blind process, while demonstration and VBS papers will be
peer-reviewed in a single-blind process. Each contribution (research
paper or demonstration) must be associated with one full registration,
and only one contribution can be associated with each registration.
The authors of selected top research papers will be invited to publish
extended versions of their papers in a special issue of Multimedia
Tools and Applications (MTAP).

For more details, including formatting instructions, please refer to the following pages:
- Call for Research Paper Submissions: 

- Call for Demonstration Submissions: 

- Call for Special Session Submissions: 

- Video Browser Showdown (VBS): 

To submit to MMM 2017, please go to the MMM 2017 submission site: 

===== Important Dates ===================================

- 01/08/16: Submission Deadline
            (Research Papers, Special Session Papers, Demonstrations)
- 16/09/16: Submission Deadline (Video Browser Showdown)
- 01/10/16: Notification of Acceptance/Rejection
- 30/10/16: Camera Ready and Author Registration Deadline
- 04/01/17: Conference Starts

===== Special Sessions ==================================

The following six special sessions will be held at MMM 2017:

SS1: CrowdMM: Crowdsourcing for Multimedia
SS2: Social Media Retrieval and Recommendation
SS3: Modeling Multimedia Behaviors
SS4: Multimedia Computing for Intelligent Life
SS5: Multimedia and E-Learning
SS6: Multimedia and Multimodal Interaction for Health and Basic Care Applications

For more details, see the brief description below or follow the links to the more detailed description.

SS1: CrowdMM: Crowdsourcing for Multimedia [G. Gravier, M. Lux and
M. Riegler] The power of crowds, that is, leveraging the capabilities
of a large number of human contributors, has enormous potential for
multimedia research, but exploiting it to achieve solid results
remains difficult. This special session seeks contributions that
address the fundamental challenges preventing widespread adoption of
crowdsourcing paradigms in the multimedia community. Topics of
interest include:

- New applications of crowdsourcing, such as affect and intent

- Social media and game techniques in crowdsourcing

- Methodological issues for crowdsourcing studies, such as human
factors and repeatability

SS2: Social Media Retrieval and Recommendation [L. Nie, Y. Yan,
B. Huet] This special session calls for contributions on solutions,
models and theories that tackle key issues in searching, recommending
and discovering multimedia content, as well as a variety of multimedia
applications based on search and recommendation technologies. Topics
of interest include:

- Indexing, ranking and reranking of social media

- Entity search and recommendation in social media environments

- Representation and deep learning for social media data

SS3: Modeling Multimedia Behaviors [P. Wang, F. Hopfgartner, L. Bai]
This special session focuses on recent progress in modeling multimedia
behaviors from various human-factors perspectives, such as interaction
with multimedia content, human behavior detection and recognition,
behavior mining and prediction, and multimedia content distribution
and streaming. Topics of interest include:

- Behavior mining/monitoring using multimedia

- Interfaces and interaction with multimedia

- Multimedia sharing/streaming based on behavior models

SS4: Multimedia Computing for Intelligent Life [Z. Chen, W. Zhang,
T. Yao, K.-L. Hua, W.-H. Cheng] Recent advances in multimedia
computing have opened many avenues towards a more intelligent daily
life. Innovative algorithms and systems that aim at better
understanding and enhancing multimedia signals for intelligent life
are therefore warmly welcomed. Topics of interest include:

- Multimedia Signal Processing for Intelligent Life

- Machine Learning and Pattern Recognition for Intelligent Life

- Multimodal Signal Representation, Analysis and Visualization for
Intelligent Life

SS5: Multimedia and E-Learning [V. Oria, A.G. Hauptmann] Ever since
multimedia started to coalesce as a field, there has been a vision
that it would be useful for education. However, progress has been slow
compared to other advances in the field. Topics of interest include:

- Learning media content analysis

- Support for audio/visual lecture summarization and segmentation

- Multimodal (slides, notes, lectures, etc.) course material alignment

SS6: Multimedia and Multimodal Interaction for Health and Basic
Care Applications [S. Vrochidis, L. Wanner, E. André, K. Schoeffmann]
This special session targets the most recent results and applications
in the area of multimedia analysis and multimodal interaction for
health and basic care. Of special interest are autonomous human-like
social agents that can analyze information and learn from
conversational spoken and multimodal interaction, as well as
multimedia systems that support exploration of videos from medical
endoscopy. Topics of interest include:

- Multimedia analysis and retrieval for multimodal interaction in the
health domain

- Multimodal conversation and knowledge-based systems for social
companion agents

- Content exploration and retrieval in endoscopic video

===== Video Browser Showdown ===========================

The Video Browser Showdown (VBS) is an annual live video search
competition, where international researchers evaluate and demonstrate
the efficiency of their exploratory video retrieval tools on a shared
data set in front of the audience.

VBS has been organized as a special session at MMM since 2012. In this
special session the participating teams start with a quick
presentation of their video search systems and then perform several
video retrieval tasks with a moderately large video collection. This
year, VBS will collaborate with the TRECVID Ad-Hoc search task and use
the same data set of about 600 hours of video content.

The aim of the Video Browser Showdown is to evaluate video browsing
tools for their efficiency at “Known Item Search” (KIS) tasks
with a well-defined data set in direct comparison with other
tools. For each KIS task the searchers need to interactively find a
short video clip (20 seconds) in the video collection within a specific
time limit. While the video data is distributed before the event, the
tasks are presented on-site via a projector (either as a video clip
itself, or as a textual description).

For more details, including videos with impressions from previous
events, please go to the official website: