Call for Papers:
Special Session on Multimedia Indexing for eXtended Reality at CBMI 2024

https://cbmi2024.org/?page_id=100#MmIXR

21st International Conference on Content-based Multimedia Indexing (CBMI 2024)
18-20 September 2024, Reykjavik, Iceland - https://cbmi2024.org/

DESCRIPTION:
Extended Reality (XR) applications rely not only on computer vision
for navigation and object placement but also require a range of
multimodal methods to understand the scene or assign semantics to
objects being captured and reconstructed. Multimedia indexing for XR
thus encompasses methods applied during XR authoring, such as
indexing content used for scene and object reconstruction, as well as
methods applied during the immersive experience, such as object
detection and scene segmentation.

The intrinsic multimodality of XR applications poses new challenges,
such as the analysis of egocentric data (video, depth, gaze, head/hand
motion) and their interplay. XR is also applied in diverse domains,
e.g., manufacturing, medicine, education, and entertainment, each with
distinct requirements and data. Multimedia indexing methods must
therefore be able to adapt to the semantics relevant to the particular
application domain.

TOPICS OF INTEREST:
- Multimedia analysis for media mining, adaptation (to scene requirements), and description for use in XR experiences (including but not limited to AI-based approaches)
- Processing of egocentric multimedia datasets and streams for XR (e.g., egocentric video and gaze analysis, active object detection, video diarization/summarization/captioning)
- Cross- and multi-modal integration of XR modalities (video, depth, audio, gaze, hand/head movements, etc.)
- Approaches for adapting multimedia analysis and indexing methods to new application domains (e.g., open-world/open-vocabulary recognition/detection/segmentation, few-shot learning)
- Large-scale analysis and retrieval of 3D asset collections (e.g., objects, scenes, avatars, motion capture recordings)
- Multimodal datasets for scene understanding for XR
- Generative AI and foundation models for multimedia indexing and/or synthetic data generation
- Combining synthetic and real data for improving scene understanding
- Optimized multimedia content processing for real-time and low-latency XR applications
- Privacy and security aspects and mitigations for XR multimedia content

IMPORTANT DATES:
Submission of papers: 22 March 2024
Notification of acceptance: 3 June 2024
CBMI conference: 18-20 September 2024

SUBMISSION:
The session will be organized as an oral presentation session. Contributions to this session may be long papers describing novel methods or their adaptation to specific applications, or short papers describing emerging work or open challenges.

SPECIAL SESSION ORGANISERS:
Fabio Carrara, Artificial Intelligence for Multimedia and Humanities
Laboratory, ISTI-CNR, Pisa, Italy

Werner Bailer, Intelligent Vision Applications Group, JOANNEUM
RESEARCH, Graz, Austria

Lyndon J. B. Nixon, MODUL Technology GmbH and Applied Data Science
School at MODUL University, Vienna, Austria

Vasileios Mezaris, Information Technologies Institute / Centre for
Research and Technology Hellas, Thessaloniki, Greece