Multimodal Content Moderation (MMCM) @ CVPR 2023 Call for Papers

Call for Papers: 
1st IEEE Workshop on Multimodal Content Moderation (MMCM) @ CVPR 2023


Content moderation (CM) is a rapidly growing need in today's world,
with high societal impact. Automated CM systems can detect
discrimination, violent acts, hate speech/toxicity, and much more
across a variety of signals (visual, text/OCR, speech, audio,
language, generated content, etc.). Leaving unsafe content on social
platforms and devices can cause a variety of harms, including brand
damage to institutions and public figures, erosion of trust in science
and government, marginalization of minorities, geopolitical conflict,
suicidal ideation, and more. Some CM systems face the additional
challenge of operating in real time (e.g., moderating livestreams).
Beyond user-generated content, AI-generated content (e.g., from DALL-E
or GPT-3) presents further challenges due to model biases and the
scale at which such content can be produced.

With the prevalence of multimedia social networking and online gaming,
the problem of sensitive content detection and moderation is
multimodal by nature. Moreover, content moderation is contextual and
culturally multifaceted; for example, different cultures have
different conventions about gestures. This requires CM approaches to
be not only multimodal, but also context-aware and culturally
sensitive.

This workshop aims to draw greater visibility and interest to this
challenging field and to establish a platform that fosters in-depth
idea exchange and collaboration. Authors are invited to submit
original and innovative papers. We aim for a broad scope; topics of
interest include, but are not limited to:

    Multimodal content moderation in image, video, audio/speech, and text;
    Context-aware content moderation;
    Datasets, benchmarks, and metrics for content moderation;
    Annotations for content moderation with ambiguous policies, perspectivism, or noisy/disagreeing labels;
    Content moderation for synthetic/generated data (image, video, audio, text); utilizing synthetic datasets;
    Dealing with limited data for content moderation;
    Continual and adversarial learning in content moderation services;
    Explainability and interpretability of models;
    Challenges of at-scale real-time content moderation vs. human-in-the-loop moderation;
    Detecting misinformation;
    Detecting and mitigating biases in content moderation;
    Analyses of failures in content moderation.

Accepted papers will be included in the CVPR proceedings, on IEEE
Xplore, and on the CVF website.

Paper Submission Deadline: March 20th, 2023, 11:59:59 PM Pacific
Time. Link to submission