IEEE MultiMedia Special Issue Call for Papers
MMAC: Multi-Modal Affective Computing of Large-Scale Multimedia Data
Aim and Scope
With the rapid development of digital photography and social networks,
people have become accustomed to sharing their lives and expressing
their opinions online. As a result, user-generated social media data,
including text, images, audio, and video, has grown rapidly, urgently
demanding advanced techniques for the management, retrieval, and
understanding of such data. Most existing work on multimedia analysis
has focused on cognitive content understanding, such as scene
understanding and object detection and recognition. Recently, with the
growing demand for emotion representation in artificial intelligence,
multimedia affective analysis has attracted increasing research effort
from both the academic and industrial communities. Affective computing
on user-generated, large-scale multimedia data is challenging for
several reasons. First, emotion is a subjective concept, so affective
analysis requires a multidisciplinary understanding of human
perception and behavior. Second, emotions are often jointly expressed
and perceived through multiple modalities, so multi-modal data fusion
and complementation need to be explored. Third, recent solutions based
on deep learning require large-scale data with fine-grained labels.
The development of affective analysis is further constrained by the
affective gap between low-level features and high-level emotions, and
by the subjectivity of emotion perception among viewers, which is
shaped by social, educational, and cultural factors. Meanwhile, recent
advances in machine learning and artificial intelligence have made
large-scale affective computing of multimedia possible.

This special issue of IEEE MultiMedia aims to gather high-quality
contributions reporting the most recent progress in multi-modal
affective computing of large-scale multimedia data and its wide range
of applications. It targets a mixed audience of researchers and
product developers from several communities, such as multimedia,
machine learning, psychology, and artificial intelligence. The topics
of interest include, but are not limited to:

    ** Affective content understanding of uni-modal text, images, facial expressions, and speech
    ** Emotion recognition from multi-modal physiological signals
    ** Emotion-based multi-modal summarization of social events
    ** Affective tagging, indexing, retrieval, and recommendation of social media
    ** Human-centered emotion perception prediction in social networks
    ** Group emotion clustering, personality inference, and emotional region detection
    ** Psychological perspectives on affective content analysis
    ** Weakly-supervised/unsupervised/self-supervised learning for affective computing
    ** Deep learning and reinforcement learning for affective computing
    ** Domain adaptation and generalization for affective computing
    ** Fusion methods for multi-modal emotion recognition
    ** Benchmark datasets and performance evaluation
    ** Overviews and surveys on affective computing
    ** Affective computing-based applications in entertainment, robotics, education, etc.

Important Dates
    ** Submission due: December 16, 2020
    ** First notification: February 17, 2021
    ** Revision submission: March 24, 2021
    ** Notification of acceptance: April 28, 2021
    ** Publication: April-June 2021

Guest Editors
    ** Dr. Sicheng Zhao, University of California, Berkeley, USA. E-mail: schzhao@gmail.com
    ** Prof. Min Xu, University of Technology Sydney, Australia. E-mail: Min.Xu@uts.edu.au
    ** Prof. Qingming Huang, Chinese Academy of Sciences, China. E-mail: qmhuang@ucas.ac.cn
    ** Prof. Björn W. Schuller, Imperial College London, UK. E-mail: schuller@ieee.org