The 14th NTCIR (2017 - 2019)
Evaluation of Information Access Technologies
Conference: June 10-13, 2019, NII, Tokyo, Japan
Call for Participation to the NTCIR-14 Tasks:
Task Participation
Let's participate in a collaborative activity for enhancing information access technologies!
For 20 years, NTCIR has been building the infrastructure for evaluation and contributing to the development of information access technologies. As a result, NTCIR has become a major forum for researchers to intensively discuss the evaluation methodology of emerging information access technologies.
The 14th NTCIR, NTCIR-14, now invites task participation from anyone interested in research on information access technologies and their evaluation, such as retrieval from large document collections, question answering, and natural language processing. We welcome students, young researchers, professors who supervise students, industry researchers, and anyone interested in informatics.
The registration for NTCIR-14 Task Participation has just started.
Please visit: http://research.nii.ac.jp/ntcir/ntcir-14/howto.html
The NTCIR-14 Task Selection Committee has selected the following five Core Tasks and two Pilot Tasks.
Slides from the task introductions at the NTCIR-14 Kick-Off Event are available at: http://research.nii.ac.jp/ntcir/ntcir-14/kickoff.html
For details and latest information, please see below and visit each task’s homepage.
Lifelog-3 OpenLiveQ-2 QALab-PoliInfo STC-3 WWW-2 CENTRE FinNum
CORE TASKS
Website: http://ntcir-lifelog.computing.dcu.ie/
"Question retrieval task in which participants can evaluate their systems in a production environment of Yahoo Japan Corporation’s community question-answering service."
Abstract:
The Open Live Test for Question Retrieval (OpenLiveQ-2) task provides an open live test environment for question retrieval systems within Yahoo Japan Corporation's community question-answering service. The task aims to offer a more realistic evaluation and to encourage participants to address question retrieval problems specific to a production environment (e.g., ambiguous/underspecified queries and diverse relevance criteria). The task is defined simply: given a query and a set of questions with their answers, return a ranked list of the questions.
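As a rough illustration of the task's input/output contract, the sketch below ranks candidate questions for a query using a naive term-overlap score. The function names and the scoring method are illustrative assumptions, not the official baseline of the task.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def term_overlap(query: str, question: str) -> float:
    """Fraction of query terms that also appear in the question."""
    q_terms = tokens(query)
    return len(q_terms & tokens(question)) / len(q_terms) if q_terms else 0.0

def rank_questions(query: str, questions: list[str]) -> list[str]:
    """Return the candidate questions sorted by descending overlap score."""
    return sorted(questions, key=lambda q: term_overlap(query, q), reverse=True)

ranked = rank_questions(
    "best ramen in tokyo",
    ["Where can I find good ramen in Tokyo?",
     "How do I renew my passport?",
     "Is Tokyo ramen better than Osaka ramen?"],
)
print(ranked[0])  # the ramen-in-Tokyo question ranks first
```

A real submission would replace the overlap score with a learned ranking model trained on the click and relevance data distributed by the organizers.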
Website: http://www.openliveq.net/
Contact:
"Question answering task for fact checking using Japanese regional assembly minutes"
Abstract:
The QALab-PoliInfo (Question Answering Lab for Political Information) task at NTCIR-14 targets complex real-world question answering (QA) technologies that extract structured data on the opinions of assembly members, and the reasons and conditions behind those opinions, from Japanese regional assembly minutes.
Website: https://poliinfo.github.io/
Contact: qalab-admin
"Emotional Conversation Generation, Dialogue Quality, and Nugget Detection subtasks using Chinese and English dialogue data"
Abstract:
Following the success of NTCIR-13 STC-2, which attracted 27 research teams, the NTCIR-14 Short Text Conversation task (STC-3) offers three new subtasks: Chinese Emotional Conversation Generation (CECG), Dialogue Quality (Chinese and English), and Nugget Detection (Chinese and English).
CECG: given a Chinese Weibo post and an emotion category (e.g., anger, disgust, happiness), return an appropriate response that matches the category.
Dialogue Quality: given a helpdesk-customer dialogue, estimate the distribution of overall subjective scores (e.g. customer satisfaction, task accomplishment) as rated by multiple assessors.
Nugget Detection: given a helpdesk-customer dialogue, estimate, for each utterance, the distribution of multiple assessors' labels over pre-defined classes (e.g. Not-A-Nugget, Regular Nugget, Trigger Nugget, Goal Nugget). A nugget is an utterance that helps the customer advance towards the Problem Solved state.
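To make "estimating the distribution of assessors' labels" concrete, here is a minimal Python sketch that builds the empirical (gold) distribution over the four nugget classes from multiple assessors' labels and compares a system's estimate against it with total variation distance. The function names are hypothetical, and the official task uses its own distributional evaluation measures rather than total variation.

```python
from collections import Counter

LABELS = ["Not-A-Nugget", "Regular Nugget", "Trigger Nugget", "Goal Nugget"]

def gold_distribution(assessor_labels: list[str]) -> dict[str, float]:
    """Empirical distribution over nugget classes for one utterance."""
    counts = Counter(assessor_labels)
    n = len(assessor_labels)
    return {label: counts[label] / n for label in LABELS}

def total_variation(p: dict[str, float], q: dict[str, float]) -> float:
    """Toy distance between two label distributions (0.0 = identical)."""
    return 0.5 * sum(abs(p[label] - q[label]) for label in LABELS)

# Four assessors label the same utterance:
gold = gold_distribution(
    ["Regular Nugget", "Regular Nugget", "Not-A-Nugget", "Trigger Nugget"])
system = {"Not-A-Nugget": 0.25, "Regular Nugget": 0.5,
          "Trigger Nugget": 0.25, "Goal Nugget": 0.0}
print(total_variation(gold, system))  # 0.0 for a perfect match
```

The same framing applies to the Dialogue Quality subtask, with quality-score bins in place of nugget classes.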
Website: http://sakailab.com/ntcir14stc3/
Contact:
"Ad hoc web search"
Abstract:
NTCIR We Want Web (WWW) is a classical ad hoc search task. In this round, the Chinese subtask will provide a new dataset, Sogou-QCL, which contains weak relevance labels for millions of query-document pairs. For more information, please visit our website.
Website: http://www.thuir.cn/ntcirwww2
Contact:
PILOT TASKS
"Can we replicate/reproduce best practices from CLEF and/or TREC?"
Abstract:
This is a collaboration across CLEF, NTCIR, and TREC. At NTCIR, participants will try to replicate/reproduce best practices from CLEF and/or TREC. At CLEF, participants will try to replicate/reproduce those from NTCIR and/or TREC, and so on. We want improvements that generalise and add up. This task welcomes researchers and students who are interested in ad hoc IR and web search.
Website: http://www.centre-eval.org/ntcir14/
Contact:
"Fine-grained numeral understanding in financial social media data"
Abstract:
Numerals are a crucial part of financial documents. To understand the opinions in financial documents in detail, we should not only analyze the text but also examine the numeric information in depth. Because of their informal writing style, social media data are more challenging to analyze than news and official documents. FinNum is a task for fine-grained numeral understanding in financial social media data: identifying the category of a numeral. To capture fine-grained numeric information in social media data, we provide a taxonomy for numerals, classifying them into 7 categories and further extending several categories into subcategories. In particular, the most important category, Monetary, is divided into 8 subcategories.
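As a toy illustration of numeral category identification, the Python sketch below extracts numerals from a tweet-like message with a regular expression and assigns a coarse category from local context. Only "Monetary" is named in the task description; "Percentage" and "Other" here are illustrative placeholders, not the official 7-category taxonomy.

```python
import re

def extract_numerals(message: str) -> list[tuple[str, str]]:
    """Find numerals and guess a coarse category from adjacent characters.
    Rule-based and deliberately simplistic; real systems would classify
    against the full task taxonomy."""
    results = []
    for m in re.finditer(r"\d+(?:\.\d+)?", message):
        before = message[m.start() - 1] if m.start() > 0 else ""
        after = message[m.end()] if m.end() < len(message) else ""
        if before == "$":
            category = "Monetary"      # named in the task description
        elif after == "%":
            category = "Percentage"    # hypothetical placeholder label
        else:
            category = "Other"         # hypothetical placeholder label
        results.append((m.group(), category))
    return results

print(extract_numerals("$AAPL up 3% today, target price $180.50"))
# [('3', 'Percentage'), ('180.50', 'Monetary')]
```

Note the ambiguity even in this tiny example: the "$" in the cashtag "$AAPL" marks a ticker symbol, not money, which is exactly the kind of fine-grained distinction the task is about.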
Website: http://nlpfin.com
Contact:
Last Modified: 2018-07-19