The 8th NTCIR Workshop selected the following 5 areas of research as "Tasks".
Other "pilot tasks" may be started at any time during NTCIR-8. For details, please visit each task's call for participation linked from below.
(b) IR for QA (IR4QA) subtask
The ultimate goal of cross-lingual information access is to answer any type of question, or to satisfy any type of information need in any language, with responses drawn from multilingual corpora. Participants can choose either to test an end-to-end ACLIA system or to participate with only some modules of the whole. ACLIA provides a module-based evaluation platform and aims to find a "dream system" by combining modules from different systems. Documents are in Simplified Chinese, Traditional Chinese, and Japanese; queries are in Simplified Chinese, Traditional Chinese, English, and Japanese.
The purpose of the Patent Mining Task is to create technical trend maps from a set of research papers and patents: for example, a map in which research papers and US and Japanese patents are classified in terms of elemental technologies and their effects.
(a) Translation Subtask
(b) Cross-Lingual Information Retrieval Subtask
(c) Evaluation Subtask
NTCIR PATMT provides large-scale English-Japanese sentence alignment data (1.8 million sentence pairs or more, the world's largest). MT has become increasingly important in CLIR, not only for query translation but also for the bi-directional approach in which both documents and queries are translated. There is much in MT that remains to be investigated, so let's work together.
Any attempt to test problems that can be solved in a short time, or feasibility studies for future tasks. To be announced.
SUBMISSION RAW DATA AND EVALUATION RESULTS: Every active participant who submitted results will receive all the submitted runs of the task and their evaluation results, computed with the metrics set by the task, as soon as those data become available. The purpose of this is to invite all task participants to discuss, analyze, and examine the task's evaluation metrics and results. Evaluation is a critical issue for all researchers, so please examine how the evaluation is done, how the metrics behave, and whether there are methods to overcome the limitations of current evaluation practice. With your cooperation, we hope to obtain a fruitful examination of the evaluation results and metrics.
Please consult How to Participate. The online registration form is linked from there. After registration, you will need to submit the signed user agreement forms.
NTCIR is a series of evaluation workshops designed to enhance research in information access technologies, including text retrieval, cross-lingual information access, question answering, etc., by providing an infrastructure for evaluation and research, including large-scale reusable test collections, evaluation metrics and methodologies, and a forum for researchers interested in exchanging research ideas and evaluation methodologies. The emphasis has been placed on, but is not limited to, Japanese and other Asian languages, and on cross-lingual applications between Asian languages and English. The workshops are periodic events held approximately every 18 months.
Detailed task and collection descriptions will be available on each task's web site.