The goal of the third QA Lab (Question Answering Lab for Entrance Exam) task at NTCIR-13 is to investigate real-world, complex Question Answering (QA) technologies as a joint effort of the participants, together with appropriate evaluation metrics and methodologies for them.
Based on the lessons learned from NTCIR-11 and -12, the major challenges include:
- essay questions that require logical summaries along a historical theme
- competition with more than 3,500 student examinees from all over Japan (Japanese only)
- questions with context
- answering by text, as high-compression-ratio query-biased summarization
- advanced entity-focused passage retrieval
- enhanced knowledge resources
- semantic representation and sophisticated learning
- appropriate evaluation measures for essay questions
- a research run using past QA Lab data/systems
The research run investigates how much QA technologies have improved since QA Lab-1:
- using the same training/test sets as the past QA Lab runs, to compare with the past results
- using the systems that participated in the past QA Lab runs, to compare with the present systems
To tackle these challenges, we propose to
- enhance the question format type ontology as a joint effort,
- define enhanced answer types,
- evaluate end-to-end runs as well as vertical investigation runs organized by question format type, answer type, and knowledge needed, rather than the horizontal integration planned in NTCIR-11,
- collect and share more knowledge resources (e.g., dictionaries, chronological tables of historical events, gazetteers, biographical dictionaries) and a baseline annotated corpus.

Japan's university entrance examination is selected as the domain here, but in principle the framework is applicable to other domains. Participation limited to certain question types or certain module types is also possible.
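To illustrate the "high-compression-ratio query-biased summarization" challenge listed above, the following is a minimal extractive sketch, not any participant's actual system: it scores candidate sentences by word overlap with the question and keeps the top-scoring ones within a tight character budget. The scorer, function name, and example data are all hypothetical; real entrance-exam systems would use richer features and learned models.

```python
def query_biased_summary(question, sentences, budget_chars=120):
    """Toy query-biased extractive summarizer (hypothetical sketch)."""
    q_terms = set(question.lower().split())

    def score(sentence):
        # fraction of question terms that also appear in the sentence
        s_terms = set(sentence.lower().split())
        return len(q_terms & s_terms) / max(len(q_terms), 1)

    # rank sentences by query overlap, then greedily fill the budget
    ranked = sorted(sentences, key=score, reverse=True)
    summary, used = [], 0
    for s in ranked:
        if used + len(s) <= budget_chars:
            summary.append(s)
            used += len(s)
    return " ".join(summary)


# Hypothetical example: a small budget forces a high compression ratio.
question = "Why did the Roman Empire expand its road network?"
passages = [
    "Roads were built to move legions quickly across the empire.",
    "The empire expanded its road network to speed military movement and trade.",
    "Olive oil was a staple of the Mediterranean diet.",
]
print(query_biased_summary(question, passages, budget_chars=80))
```

The greedy budget-filling step is what makes the output a high-compression summary rather than ordinary passage retrieval: only the most query-relevant material survives the character limit.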