[ntcir:386] Call for Participation in the MediaEval 2010 Benchmark Evaluation


http://www.multimediaeval.org/

MediaEval is a benchmarking initiative offering tasks that promote research
and innovation in multimodal approaches to multimedia access and
retrieval. MediaEval 2010 (which builds on and expands VideoCLEF
2008-2009) focuses on speech and language, but also incorporates visual
and social features. Participants carry out one or more of the selected
tasks and submit runs to be evaluated. Results are written up and
presented at the MediaEval 2010 workshop.

For each task, participants receive a task definition, task data and
accompanying resources (dependent on task) such as shot boundaries,
keyframes, speech transcripts and social metadata. Participation is open
to all interested research groups. In order to participate, please sign
up by 15 May via http://www.multimediaeval.org/ and choose one or more
of the following tasks:

Tagging Task (Wild Wild Web Version)
Requires participants to automatically assign tags to videos using
features derived from spoken, visual, and audio content or associated
textual or social information. (Data: Creative Commons Blip.tv data,
predominantly English language)

Tagging Task (Professional Version)
Requires participants to automatically assign thematic subject labels
(keywords) to videos using features derived from speech, metadata, audio
or visual content. The keywords capture what a video is about as a whole
and not only what is depicted in the visual channel. (Data: TRECVid
Sound and Vision data, Dutch language)

Placing Task
Requires participants to automatically assign geo-coordinates to Flickr
videos using one or more of: Flickr metadata, visual content, audio
content and social information. (Data: Creative Commons Flickr data,
predominantly English language)

Affect Task
Involves automatically identifying videos that viewers report are
boring. Participants can make use of spoken, visual, and audio content
as well as accompanying metadata. (Data: Internet video, English language)

Passage Retrieval Task
Given a set of queries and a video collection, participants are required
to automatically identify relevant jump-in points into the videos based
on a combination of modalities, i.e., speech, metadata, audio and visual
content. (Data: TRECVid Sound and Vision data, Dutch language)

Linking Task
Participants are supplied with a set of videos in which segments
(called multimedia anchors) have been marked. They are asked to link
each anchor to an English-language Wikipedia article about the topic
treated by the video at the anchor point. (Data: Sound and Vision data,
Dutch language)

MediaEval 2010 Timeline

Sign up: by May 15, 2010
May: Release of development data
June: Release of test data
August: Participants submit their completed runs
Late August: Release of results
September: Task write-up
24 October 2010: MediaEval workshop in Pisa, Italy
(right before the start of ACM Multimedia 2010)

MediaEval 2010 Organization Committee
Martha Larson, Delft University of Technology (MediaEval Coordinator)
Gareth Jones, Dublin City University (MediaEval Coordinator)
Roeland Ordelman, Sound & Vision and University of Twente
Pavel Serdyukov, Delft University of Technology
Vanessa Murdock, Yahoo! Research
Mohammad Soleymani, University of Geneva

Contact
For questions or additional information, please contact Martha Larson:
m.a.larson@xxxxxxxxxx

MediaEval 2010 is sponsored by PetaMedia http://www.petamedia.eu/, an
EU FP7 Network of Excellence dedicated to R&D in the area of multimedia
access and retrieval.

------------------------------------------
This message is brought by NTCIR and
distributed through the mailing list:
ntcir [at] nii.ac.jp
for announcements and updates about NTCIR.
* To unsubscribe, send mail to:
ntc-secretariat [at] nii.ac.jp, with the subject:
ntcirML UNSUBSCRIBE [your e-mail address]
* For more info, visit:
http://research.nii.ac.jp/ntcir/
------------------------------------------
Noriko Kando, Leader, NTCIR Project
ntc-admin [at] nii.ac.jp

NTCIR Project Office
ntc-secretariat [at] nii.ac.jp

National Institute of Informatics
2-1-2 Hitotsubashi,
Chiyoda-ku, Tokyo 101-8430, Japan
------------------------------------------