Multimedia Evaluation Benchmark (MediaEval 2025)
1st Call for Participation
https://multimediaeval.github.io/editions/2025/
The MediaEval Multimedia Evaluation benchmark offers innovative
challenges that are of interest to researchers in the areas of NLP,
IR, AI, computer vision, and multimedia. Registration is now open!
(Submissions will be due in September.) This year's tasks are:
- Medico: VQA (with multimodal explanations) for gastrointestinal imaging
https://multimediaeval.github.io/editions/2025/tasks/medico/
- Memorability: Predicting movie and commercial memorability
https://multimediaeval.github.io/editions/2025/tasks/memorability/
- MultiSumm: Multimodal summarization of multiple topically related websites
https://multimediaeval.github.io/editions/2025/tasks/multisumm/
- NewsImages: Retrieval and generative AI for news thumbnails
https://multimediaeval.github.io/editions/2025/tasks/newsimages/
- Synthetic Images: Advancing detection of generative AI used in
real-world online images
https://multimediaeval.github.io/editions/2025/tasks/synthim/
MediaEval places special emphasis on gaining insight into data and
algorithms and on advancing the state of the art on tasks that are
novel or unexpectedly challenging, whether due to the nature of the
data or to a limited quantity of labels. The overall mission of
MediaEval is to support reproducible research that makes multimedia a
positive force for society.
Results are presented at the annual MediaEval workshop, which will
take place in Dublin, Ireland, on Sat.-Sun. 25-26 October 2025,
between CBMI 2025 (Content-Based Multimedia Indexing) and ACM
Multimedia 2025 (the 33rd ACM International Conference on
Multimedia). The workshop will also provide an opportunity for online
participation.
On behalf of the organizers,
Mihai Gabriel Constantin