Task Overview and Call for Task Participation

The 14th NTCIR (2017 - 2019)
Evaluation of Information Access Technologies
Conference: June 10-13, 2019, NII, Tokyo, Japan

Call for Participation to the NTCIR-14 Tasks:

Task Participation

Let's participate in a collaborative activity for enhancing Information Access technologies!

For the past 20 years, NTCIR has been building the infrastructure for evaluation and contributing to the development of information access technologies. As a result, NTCIR has become a major forum for researchers to discuss in depth the evaluation methodology of emerging information access technologies.
The 14th NTCIR, NTCIR-14, now calls for task participation from anyone interested in research on information access technologies and their evaluation, such as retrieval from large document collections, question answering, and natural language processing. We welcome students, young researchers, professors who supervise students, researchers working in industry, and anyone interested in informatics.

The registration for NTCIR-14 Task Participation has just started.
Please visit: http://research.nii.ac.jp/ntcir/ntcir-14/howto.html


Evaluation Tasks

The NTCIR-14 Task Selection Committee has selected the following five Core Tasks and one Pilot Task.
For details and latest information, please see below and visit each task’s homepage.

Lifelog-3    OpenLiveQ-2    QALab-PoliInfo    STC-3    WWW-2    CENTRE    


Lifelog Search Task ("Lifelog-3")

Website: http://ntcir-lifelog.computing.dcu.ie/


Open Live Test for Question Retrieval ("OpenLiveQ-2")

"Question retrieval task in which participants can evaluate their systems in a production environment of Yahoo Japan Corporation’s community question-answering service."

The Open Live Test for Question Retrieval (OpenLiveQ-2) task provides an open live test environment for question retrieval systems within Yahoo Japan Corporation’s community question-answering service. The task aims to provide an opportunity for more realistic evaluation and to encourage participants to address question retrieval problems specific to a production environment (e.g. ambiguous or underspecified queries and diverse relevance criteria). The task is defined simply: given a query and a set of questions with their answers, return a ranked list of the questions.
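The task interface above ("given a query and a set of questions, return a ranked list") can be sketched as a minimal baseline. The term-overlap scoring and the example questions below are purely illustrative assumptions, not a method or data defined by the task.

```python
import re

# Minimal sketch of the OpenLiveQ-2 task interface: given a query and
# candidate questions, return the questions ranked by a toy relevance
# score. Term-overlap scoring is a placeholder for a real ranker.

def tokens(text):
    """Lowercased word tokens of a string."""
    return set(re.findall(r"\w+", text.lower()))

def rank_questions(query, questions):
    """Rank candidate question strings by the number of shared terms with the query."""
    query_terms = tokens(query)
    return sorted(questions,
                  key=lambda q: len(query_terms & tokens(q)),
                  reverse=True)

candidates = [
    "How do I reset my password?",
    "What is the best ramen in Tokyo?",
    "How can I change my account password?",
]
print(rank_questions("reset password", candidates))
```

In the real task, participants' rankings are evaluated against live user behaviour rather than a fixed heuristic like this one.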

Website: http://www.openliveq.net/


Question Answering Lab for Political Information ("QALab-PoliInfo")

"Question answering task for fact checking using Japanese regional assembly minutes"

The QALab-PoliInfo (Question Answering Lab for Political Information) task at NTCIR-14 targets complex real-world question answering (QA) technologies that extract structured data on the opinions of assembly members, together with the reasons and conditions for those opinions, from Japanese regional assembly minutes.

Website: https://poliinfo.github.io/
Contact: qalab-admin


Short Text Conversation ("STC-3")

"Emotional Conversation Generation, Dialogue Quality, and Nugget Detection subtasks using Chinese and English dialogue data"

Following the success of NTCIR-13 STC-2, which attracted 27 research teams, the NTCIR-14 Short Text Conversation task (STC-3) offers three new subtasks: Chinese Emotional Conversation Generation (CECG), Dialogue Quality (for Chinese and English), and Nugget Detection (for Chinese and English).

CECG: given a Chinese Weibo post and an emotion category (e.g. anger, disgust, happiness), return an appropriate response that matches the category.

Dialogue Quality: given a helpdesk-customer dialogue, estimate the distribution of overall subjective scores (e.g. customer satisfaction, task accomplishment) as rated by multiple assessors.

Nugget Detection: given a helpdesk-customer dialogue, estimate, for each utterance, the distribution of multiple assessors' labels over pre-defined classes (e.g. Not-A-Nugget, Regular Nugget, Trigger Nugget, Goal Nugget). A nugget is an utterance that helps the customer advance towards the Problem Solved state.
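The Dialogue Quality and Nugget Detection subtasks both ask systems to estimate a distribution of assessor labels. As a minimal sketch of the expected output shape, the snippet below turns a set of assessor votes for one utterance into an empirical distribution over the nugget classes; the class names follow the task description, while the votes themselves are made-up example data.

```python
from collections import Counter

# Pre-defined nugget classes from the STC-3 task description.
CLASSES = ["Not-A-Nugget", "Regular Nugget", "Trigger Nugget", "Goal Nugget"]

def label_distribution(labels):
    """Empirical distribution of assessors' labels over the nugget classes."""
    counts = Counter(labels)
    total = len(labels)
    return {c: counts[c] / total for c in CLASSES}

# Hypothetical votes from four assessors for one utterance.
votes = ["Regular Nugget", "Regular Nugget", "Not-A-Nugget", "Regular Nugget"]
print(label_distribution(votes))
```

A participating system would predict such a distribution directly from the dialogue text; the official evaluation compares the predicted and gold distributions rather than a single hard label.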



We Want Web-2 ("WWW-2")

"Ad hoc web search"

NTCIR We Want Web is a classical ad hoc search task. In this round of WWW, the Chinese subtask will provide a new dataset, Sogou-QCL, which contains weak relevance labels for millions of query-document pairs. For more information, please visit our website.

Website: http://www.thuir.cn/ntcirwww2



CLEF/NTCIR/TREC REproducibility ("CENTRE")

"Can we replicate/reproduce best practices from CLEF and/or TREC?"

This is a collaboration across CLEF, NTCIR, and TREC. At NTCIR, participants will try to replicate/reproduce best practices from CLEF and/or TREC. At CLEF, participants will try to replicate/reproduce those from NTCIR and/or TREC, and so on. We want improvements that generalise and add up. This task welcomes researchers and students who are interested in ad hoc IR and web search.

Website: http://sakailab.com/ntcir14centre/


Last Modified: 2018-05-14