NTCIR-11 Workshop

NTCIR Workshop Aims

The NTCIR Workshop is an evaluation workshop that aims to advance research on Information Access (IA) technologies such as Information Retrieval, Text Summarisation, Information Extraction, and Question Answering.

The objectives of the NTCIR Workshop are to:

  1. Offer research infrastructure that allows researchers to conduct large-scale evaluations of IA technologies
  2. Foster a forum in which researchers can share and exchange their findings based on comparable experimental results
  3. Facilitate research on evaluation methodologies and performance measures for IA technologies

A central concern of the NTCIR Workshop is how to apply laboratory research outcomes to real-world problems. Evaluation is essential for researchers to determine whether a research idea is effective or helps improve existing techniques. However, since the performance measures of many IA technologies are based on human relevance judgements, it is very costly for a single researcher or lab to develop a large-scale evaluation resource. An "evaluation workshop" such as NTCIR develops such evaluation resources through community-led efforts so that they can be shared across research communities, allowing researchers to verify their ideas in a timely manner and in a large-scale setting. Furthermore, because participants use a common set of evaluation data, the experimental analyses yield robust, comparable findings that bring together the diverse approaches and ideas of the participating research groups.

The NTCIR framework allows us to test and compare a large number of approaches and techniques that no single institution could feasibly investigate on its own; we can thus learn from each other by understanding diverse ideas and their advantages and disadvantages. In this regard, NTCIR serves as a platform for exchanging deep insights into research problems among researchers in related fields.

Building large-scale, reusable standard test collections has proved essential to research on Information Access technologies. A test collection typically consists of a document collection, a set of search topics, and relevance judgements identifying the relevant documents for each topic. Developing robust performance measures and metrics with a high level of discriminative and analytical power is an equally important challenge, and evaluation methodology itself needs to improve as technologies and societies advance. NTCIR is a great opportunity to examine the validity and reliability of performance measures and evaluation methodology, since many researchers come together to address common problems.
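To make concrete how relevance judgements and a ranked run combine into a score, the sketch below computes average precision for a single topic. It is a generic Python illustration with made-up document identifiers, not code provided by NTCIR.

    # Minimal sketch: average precision for one topic, computed from a ranked
    # list of document IDs and the set of documents judged relevant to that topic.
    # All identifiers below are hypothetical, for illustration only.

    def average_precision(ranked_docs, relevant_docs):
        """Average precision of one ranked run against one topic's judgements."""
        if not relevant_docs:
            return 0.0
        hits = 0
        precision_sum = 0.0
        for rank, doc_id in enumerate(ranked_docs, start=1):
            if doc_id in relevant_docs:
                hits += 1
                precision_sum += hits / rank  # precision at each relevant document
        return precision_sum / len(relevant_docs)

    # Hypothetical ranked run and relevance judgements for one topic.
    run = ["D0005", "D0101", "D0042", "D0007", "D0013"]
    judged_relevant = {"D0005", "D0042", "D0099"}

    print(round(average_precision(run, judged_relevant), 4))  # 0.5556

Averaging this score over all topics in a test collection gives mean average precision, one of the standard measures whose reliability and discriminative power evaluation forums such as NTCIR help to study.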

As can be seen, the NTCIR Workshop offers an innovative style of collaborative research based on large-scale test collections, robust evaluation methodologies, and a forum for exchanging the latest findings within the community. We sincerely hope that NTCIR-11 will be beneficial to all research groups considering participation, and that it will help them advance and enjoy their research.

We welcome your suggestions, feedback, and opinions on the NTCIR Workshop. Please feel free to contact the NTCIR-11 General Co-chairs or Program Co-chairs at ntc11-chairs@nii.ac.jp.

Last modified: 2013-09-06