Evaluation Tasks

The Tenth NTCIR Workshop (NTCIR-10) Task Selection Committee has selected the following six Core Tasks and two Pilot Tasks. For details, please visit each task’s homepage.


Cross-lingual Link Discovery ("CrossLink-2")

Cross-lingual link discovery (CLLD) is concerned with automatically finding potential links between documents in different languages. In contrast to traditional information retrieval tasks, where queries are not attached to explicit context or are only loosely attached to it, cross-lingual link discovery algorithms actively recommend a set of meaningful anchors in the context of a source document and establish links to documents in another language. CLLD is helpful for complementary knowledge discovery across different language domains.

Intent ("INTENT-2")

When you type "office" into the search box, do you mean "workplace" or "Microsoft Office"? When you type "Harry Potter", do you mean the books, the films, or the character? The INTENT task deals with such ambiguous and underspecified queries. In the Subtopic Mining (SM) subtask, participating systems are given a query and asked to return a ranked list of "subtopic strings", e.g. "Harry Potter and the Deathly Hallows" and "Harry Potter Hermione Granger". In the Document Ranking (DR) subtask, participating systems are given a query and asked to return a selectively diversified ranked list of web pages ("selectively" because some queries may not require diversification). In addition to the Chinese and Japanese SM and DR subtasks, which were successful at NTCIR-9, we are introducing an English SM subtask. We will provide baseline runs to participants so that even those who do not have a search engine can participate. For more information, please visit the task homepage.

One Click ("1CLICK-2")

Imagine using your mobile phone to look up the definition of a term, the answer to a question, facts about a celebrity, or even items that match geographical constraints such as "sushi restaurants near Tokyo station." You want a concise summary of information gathered from several web pages that can be displayed on a small mobile screen, and you don't have much time. Thus, instead of making the user scan a ranked list of URLs and snippets, One Click Access systems try to satisfy the user directly and immediately. In contrast to evaluation based on document relevance, 1CLICK evaluates systems based on nuggets. In this second round of the 1CLICK task, we are expanding the language scope to English, Japanese, and Korean. How quickly can your system satisfy the user? For more information, please visit the task homepage.

Patent Machine Translation ("PatentMT-2")

Patent translation is one of the most challenging domains for machine translation because of the length and complex structure of many patent sentences. There is also a significant practical need for patent translation, so let us cultivate this challenging and practically important research field with patent machine translation! Participants will have access to a large patent parallel corpus compiled mostly from patent descriptions (1 million Chinese-English sentence pairs and 3 million Japanese-English sentence pairs) and to new test sets. Human evaluations will be carried out by paid evaluation experts. PatentMT will offer the following subtasks: Chinese to English, Japanese to English, and English to Japanese. At NTCIR-10, we are planning evaluations based on: (1) the quality of translated sentences, (2) a progress test on the NTCIR-9 data set, (3) a trilingual test set (undecided), and (4) practical MT performance in the patent domain (for determining usefulness in patent examination) (undecided).

Recognizing Inference in TExt ("RITE-2")

NTCIR-10 RITE-2 is a generic benchmark task that addresses common semantic processing needs in various NLP/information access research areas (e.g. information retrieval, question answering, text summarization). We will evaluate systems that automatically detect entailment, paraphrase, and contradiction in texts written in Japanese, Simplified Chinese, and Traditional Chinese. As in RITE-1, where nearly 200 runs were submitted by 24 teams across 4 subtasks, the RITE-2 datasets will be constructed from Wikipedia, entrance exams, and QA data, informed by our reflections on RITE-1. We have set up a Resource Pool on the RITE wiki so that useful resources and baseline implementations can be shared among participants and used to build systems for the formal evaluation. Since RITE is a friendly evaluation effort, we welcome your participation even with an ambitious or exploratory approach that may or may not work.

IR for Spoken Documents (“SpokenDoc-2”)

In the second round of the IR for Spoken Documents task (SpokenDoc-2), we will continue to evaluate spoken document retrieval through two subtasks: spoken term detection and spoken content retrieval. Within the latter subtask, a novel boundary-free passage retrieval task will also be evaluated. The organizers will provide several automatic transcriptions of the spoken documents, obtained under various recognition conditions, enabling participants who are interested in SDR but not in ASR to take part in these tasks.


Math (“MATH")

Despite the importance of mathematical modeling and analysis in many scientific disciplines, most contemporary retrieval systems do not facilitate users' access to the mathematical expressions in target documents. The NTCIR Math Task aims to explore methods for mathematical content access based on both natural language text and mathematical formulae. The dataset will contain a document collection with MathML representations, a query set for IR evaluation, and a reference set of annotated documents for NLP-based approaches. For updated information, please refer to the task homepage.

Medical Natural Language Processing (“MedNLP") (added October 25, 2012)

Recently, more and more medical records have been written in electronic form rather than on paper, which makes information processing techniques increasingly important in the medical field. In this pilot task, participants are asked to extract important information from medical documents written in Japanese. This is one of the fundamental technologies needed to develop computational systems that support a wide range of medical services. Our goal is to promote and support the development of practical tools and systems applicable in the medical industry, which will support medical decisions and treatment by physicians and medical staff. While a short-term objective of this pilot task is to evaluate basic techniques for information extraction in the medical field, the broader objective is to offer a forum for achieving this goal through a community-based approach: to gather people who are interested in this issue, and to facilitate their communication and discussion so as to clarify the issues to be solved and define the necessary element technologies. For more information, please visit the task homepage.


-   To evaluate and improve your ideas and systems within a set time frame
-   To compare different approaches and systems from different research teams on a common test bed
-   To build reliable evaluation data sets
-   To build a community of researchers from around the world to discuss their common interests and to innovate collectively
-   To redefine the task and improve its evaluation methods
-   And to have fun


-   Sign registration and user agreement forms (See HOW TO PARTICIPATE)
-   Follow the task guidelines/schedule and submit your system output ("runs") to your task organizers. If you cancel your participation and will not submit your results, please notify the task organizers of your withdrawal by the run submission due date.
-   Based on your task participation experience, write a paper for the NTCIR-10 final meeting (to be presented in a poster session, or in both a poster and an oral presentation session). Papers will be published in the Online Proceedings as of the date of the final meeting.
-   You shall use your best efforts to participate fully in the tasks of the evaluation activity you registered for, submitting the results of your experiments and reporting them at the NTCIR-10 final meeting (June 18-21, 2013, NII, Tokyo, Japan).
-   Submit to NII, every year, a list of your publications that used NTCIR data.


Please consult the 'How to Participate' page. An online registration form is available there.

NTCIR is a series of evaluation workshops for fostering research in information access technologies – cross-lingual and multilingual ones in particular. An NTCIR final meeting takes place at NII, Tokyo every 18 months or so to conclude the tasks and to move on to the next challenges. The First NTCIR (NTCIR-1) took place in 1999. See the NTCIR website for a complete history (including online proceedings) of NTCIR.

Last Modified: 2013-04-15