Commit 06188d1

added 2026
1 parent 8388ed5 commit 06188d1

3 files changed

Lines changed: 135 additions & 0 deletions

_editions/2026.md

Lines changed: 42 additions & 0 deletions
---
layout: edition
title: MediaEval 2026
year: 2026
permalink: /editions/2026/
---
The MediaEval Multimedia Evaluation benchmark offers challenges in artificial intelligence for multimedia data. Participants address these challenges by creating algorithms for analyzing, exploring, and accessing information in the data. Solutions are systematically compared using a common evaluation procedure, making it possible to establish the state of the art and track progress. Our larger aim is to promote reproducible research that makes multimedia a positive force for society.

MediaEval goes beyond other benchmarks and data science challenges in that it also pursues a “Quest for Insight” (Q4I). With Q4I, we push beyond only striving to improve evaluation scores and also work to achieve a deeper understanding of the challenges: for example, characteristics of the data, strengths and weaknesses of particular types of approaches, and observations about the evaluation procedure.
##### Registration:
Sign-up for MediaEval 2026 opens in January.
##### MediaEval 2026 Schedule:
* Registration for task participation opens: January 2026
* Test data release: 1 March 2026
* Runs due: 1 May 2026
* Working notes papers due: 31 May 2026
* MediaEval 2026 Workshop: Sat.-Sun. 15-16 June 2026, Amsterdam, Netherlands and Online
##### The MediaEval Coordination Committee (2026):
* Mihai Gabriel Constantin, University Politehnica of Bucharest, Romania
* Steven Hicks, SimulaMet, Norway
* Martha Larson, Radboud University, Netherlands
##### MediaEval 2026 is supported by:
<a href="https://www.sigmm.org/">
<img src="https://multimediaeval.github.io/editions/2020/docs/sigmmlogo.gif" width="150"/>
</a><br>
<a href="https://www.adaptcentre.ie/">
<img src="https://multimediaeval.github.io/editions/2025/docs/adaptlogo.jpg" width="150"/>
</a><br>
<a href="https://aicode-project.eu/">
<img src="https://multimediaeval.github.io/editions/2025/docs/ai-code-1.png" width="150"/>
</a>

_editions/2026/tasks/README.md

Lines changed: 30 additions & 0 deletions
This folder contains the `Markdown` (.md) files for all tasks of the 2026 MediaEval edition.
## How to edit

Open a file and click on the pencil icon (see Figure 1 below):

![Figure 1: Editing task content](/docs/task_edition1.png "Figure 1: Editing task content")

You will enter `edit` mode on the file and see something like the following:
![Figure 2: Editing task content](/docs/task_edition2.png "Figure 2: Editing task content")
There are two main parts to the document:

* Part 1 (lines 1 to 11) is the task file metadata. Here you (the task organizer) should fill in all `# required info` fields (title, subtitle, and blurb). When your task content is ready to be published, set the `hide` property to `false` so that your task becomes visible on the website.

* Part 2 (line 12 onward) is the actual task content. There is a suggested structure for the document that should be followed. This part accepts content in [Markdown](https://daringfireball.net/projects/markdown/syntax) and HTML syntax.
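For reference, a filled-in metadata block (part 1) might look like the following sketch; the field names come from the task template in this folder, while the title and blurb values here are hypothetical placeholders:

```yaml
---
# static info
layout: task
year: 2026
hide: false  # set to false only once the content is ready to publish

# required info
title: Example Task Title   # hypothetical placeholder
subtitle:                   # left blank, as instructed
blurb: A one-sentence description of the example task.  # hypothetical placeholder
---
```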
After you have filled in all the content, commit your changes by filling in the form below the edit screen and clicking on `Propose changes`, as shown in Figure 3 below:

![Figure 3: Proposing changes](/docs/task_edition3.png "Figure 3: Proposing changes")

That action will open a new window in which you confirm a `Pull request`, as you can see in Figure 4 below:
* The yellow arrow points out where you can select a reviewer (if you are already talking to one of the website admins); this is optional.
* Fill in your comments in the `fill here` space as needed to support approval of your change.
* The red arrow points to the button that confirms your `Pull request`.

![Figure 4: Pull request](/docs/task_edition4.png "Figure 4: Pull request")
Other than that, please feel free to ask for help. This structure is an experiment, and we need your help to make it useful and easy for everyone. MediaEval organizers are available to help; you can submit questions and issues [here](https://github.com/multimediaeval/multimediaeval.github.io/issues).

_editions/2026/tasks/template.md

Lines changed: 63 additions & 0 deletions
---
# static info
layout: task
year: 2026
hide: true # change this to false once you finish editing

# required info
title: # add your title here
subtitle: # leave this blank
blurb: # add the task blurb here
---
<!-- # please respect the structure below -->
*See the [MediaEval 2026 webpage](https://multimediaeval.github.io/editions/2026/) for information on how to register and participate.*

#### Task description

#### Motivation and background

#### Target group

#### Data

#### Ground truth

#### Evaluation methodology

#### Quest for insight
Here are several research questions related to this challenge that participants can strive to answer in order to go beyond just looking at the evaluation metrics:
* <!-- # First research question -->
* <!-- # Second research question -->
<!-- # and so on -->

#### Participant information
<!-- Please contact your task organizers with any questions on these points. -->
<!-- # * Signing up: Fill in the [registration form]() and fill out and return the [usage agreement](). -->
<!-- # * Making your submission: To be announced (check the task README). Please add instructions on how to create and submit runs to your task, replacing "To be announced." -->
<!-- # * Preparing your working notes paper: Instructions on preparing your working notes paper can be found in [MediaEval 2026 Working Notes Paper Instructions](). -->

#### References and recommended reading
<!-- # Please use the ACM format for references: https://www.acm.org/publications/authors/reference-formatting (but no DOI needed) -->
<!-- # The paper title should be a hyperlink leading to the paper online -->

#### Task organizers
* <!-- # First organizer -->
* <!-- # Second organizer -->
<!-- # and so on -->

#### Task auxiliaries
<!-- # optional, delete if not used -->
* <!-- # First auxiliary -->
* <!-- # Second auxiliary -->
<!-- # and so on -->

#### Task schedule
* 1 January 2026: Task sign-up opens.
* 1 March 2026: Test data release. <!-- # Or: XX June 2026: Data release. Replace XX with your date. We suggest setting the date in June; of course, if you want to release sooner, that is OK. -->
* 1 May 2026: Runs due and results returned. Exact dates to be announced. <!-- # Or: XX May 2026: Runs due. Replace XX with your date, leaving enough time to assess and return the results by the "results returned" date. -->
* 31 May 2026: Working notes papers due. <!-- Fixed. Please do not change. -->
* 15-16 June 2026: MediaEval Workshop co-located with ACM ICMR in Amsterdam, Netherlands and Online. <!-- Fixed. Please do not change. -->

#### Acknowledgements
<!-- # optional, delete if not used -->
