NTCIR (NII Test Collection for IR Systems) Project




NTCIR-8 Meeting
Session Notes

DAY-2 June 16 Wednesday: ACLIA: CCLQA and IR4QA

|NTCIR-8 Overview|Invited Talk-Building Watson|
|ACLIA Overview|Session1: CCLQA|Session2: IR4QA|
|Proposals for NTCIR-9|

[Session Notes] Overview of NTCIR-8
[Meeting Program][Online Proceedings]

Date: June 16, 2010
Time: 9:40-10:25
Speaker: Noriko Kando

In this talk, Dr. Noriko Kando gives an overview of NTCIR-8. She first gives a brief introduction to NTCIR's history and a basic description of information retrieval and information access, and then describes the tasks at past NTCIRs. She introduces the NTCIR-7 & 8 program committee and the NTCIR-8 coordination and active participants. The focus of NTCIR lies in lab-type tests, new challenges, and a forum for researchers and other experts/users. For each task in NTCIR-8, the official evaluation results are given, effective approaches are pointed out, and general conclusions are proposed. A detailed introduction is given to the new track, geotemporal information retrieval. At the end of the talk, Dr. Kando gives some information about NTCIR-9: the calls for formal task proposals, participation, and papers for EVIA 2011. The NTCIR-9 Meeting & EVIA 2011 will be held in December 2011.

by Jian Zhang

[Session Notes] Invited talk

Date: June 16, 2010
Time: 10:25-11:30
Speaker: Koichi Takeda (IBM Research)
Title: Building Watson: A Grand Challenge in Automatic NL Question Answering

In this talk, Dr. Koichi Takeda introduces the Jeopardy! Challenge and describes IBM's technical approach and progress on this grand-challenge problem. Jeopardy! is America's favorite quiz TV show, in which participants compete to answer questions spanning broad, open domains and complex language. To win, high precision, accurate confidence estimation, and high speed are all needed. On April 27, 2009, IBM unveiled the details of Watson, a project to build an advanced computing system able to compete with humans at the game of Jeopardy! Project Watson addresses a grand challenge in computer science aimed at illustrating how the integration and advancement of NLP, IR, and ML can tackle open-domain question answering. Typically, a question has to be decomposed and analyzed; once the question is understood, candidate answers are proposed and the final answer is obtained through synthesis. The development of successive system versions is described to show the improvement step by step. The architectures of the systems are given, and the parallel implementation is also described. Finally, the speaker presents potential applications of DeepQA and introduces the research team.

by Jian Zhang

[Session Notes] Overview of the NTCIR-8 ACLIA Tasks: CCLQA and IR4QA

Date: June 16, 2010
Time: 11:40-12:40
Speaker: Teruko Mitamura and Tetsuya Sakai

The talk is given by Prof. Teruko Mitamura. First of all, she summarizes the goal of the ACLIA task: to develop effective CCLQA evaluations for complex questions as well as factoid ones. She then shows a figure of the data flow in the ACLIA task cluster. After that, she introduces the CCLQA subtask in detail, including the question types (which differ from those in the previous CCLQA subtask), the corpus, the input/output format, the tracks and participants, the official metrics used to evaluate participant runs, and the tools and process used to develop the official evaluation topics. She then shows the results of the submitted runs of all groups and comments on problems with the evaluation. Finally, the afternoon's CCLQA session program is shown.

The following talk, given by Tetsuya Sakai, discusses the IR4QA subtask. He first introduces the participants and thanks them for their contributions. After a brief introduction to relevance assessments, he describes the evaluation metrics and the SEPIA tool in detail. He then shows the EN, CS, and CT run results of all groups. Finally he discusses the assessment work, which is a costly and time-consuming procedure involving pooling, relevance assessment, ranking, double checking, etc. He would therefore like a mechanism by which the participants themselves can contribute to the assessment. He then presents a forecast algorithm that can reduce the manual work; the algorithm has three steps:

1. Sort the pooled docs by: the number of runs containing the doc; the sum of the doc's ranks in those runs
2. Treat the top 20% of the sorted docs as relevant to form pseudo-qrels
3. Rank systems by AP, Q and nDCG using the pseudo-qrels
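The pseudo-qrel construction (steps 1 and 2) can be sketched as follows; this is a minimal illustration assuming each run is simply a ranked list of document IDs, and the function name is ours, not from the actual SEPIA tooling:

```python
from collections import defaultdict

def pseudo_qrels(runs, top_fraction=0.2):
    """Form pseudo-qrels from pooled runs, without manual assessment.

    runs: one ranked list of doc IDs per system run (best first).
    Returns the set of doc IDs treated as relevant.
    """
    run_count = defaultdict(int)  # how many runs contain each doc
    rank_sum = defaultdict(int)   # sum of the doc's ranks across those runs
    for run in runs:
        for rank, doc in enumerate(run, start=1):
            run_count[doc] += 1
            rank_sum[doc] += rank
    # Step 1: docs appearing in more runs first; ties broken by smaller rank sum.
    pooled = sorted(run_count, key=lambda d: (-run_count[d], rank_sum[d]))
    # Step 2: the top 20% of the sorted pool becomes the pseudo-qrels.
    cutoff = max(1, int(len(pooled) * top_fraction))
    return set(pooled[:cutoff])
```

Step 3 would then score each run against these pseudo-qrels with standard AP, Q and nDCG implementations.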

by Han Ren


[Session Notes] Session 1: NTCIR-8 Advanced Cross-Lingual Information Access (ACLIA) - Complex Cross Lingual Question Answering (CCLQA)

Date: June 16, 2010
Time: 14:00 - 15:30

1. An Open-domain Question Answering System for NTCIR-8 C-C Task
The talk was cancelled.

2. WHU Question Answering System at NTCIR-8 ACLIA Task
      Han Ren, Donghong Ji and Jing Wan (Wuhan University, China)

The speaker of the first talk in this session comes from Wuhan University; the group participated in both the CCLQA and IR4QA tasks. First, he states the core problem of answering complex questions: how to acquire precise information requirements. Then a figure of the system architecture is shown and the speaker explains each part in detail. In their system, two modules are noticeable: answer acquisition and answer ranking, in which topic-based, semantic, and statistical methods are utilized for answer extraction. Besides, their translation model achieves better performance than those of other groups, thanks to the use of online dictionaries. Finally, the speaker gives an error analysis of the question analysis, IR4QA, and CCLQA runs, especially on Why questions, and argues that an inference mechanism is required to answer such questions.

The session chair, Teruko Mitamura, asked one question:

Usually the performance difference between monolingual QA and cross-lingual QA is large, but in the authors' case the difference is not very large.

The author answered that using online translation engines can improve the performance of cross-lingual QA, and thus reduce the performance gap between monolingual and cross-lingual QA.

3. Bootstrap Pattern Learning for Open-Domain CLQA

Hideki Shima introduced Javelin, their cross-lingual QA system that participated in the NTCIR-8 ACLIA evaluation. The author emphasized a minimally supervised bootstrapping approach to generating the lexicosyntactic patterns used for answer extraction. The nugget F3 scores show that the proposed pattern-learning approach outperformed two baselines, a supervised learning approach used in NTCIR-7 ACLIA and a simple key-term-based approach, for both the monolingual and cross-lingual tracks.

Tetsuya Sakai asked questions about the strategies of query translation and term extraction. Young-In Song asked questions about the construction of pattern seeds for bootstrapping.

by Bin Lu and Han Ren

[Session Notes] Session 2: NTCIR-8 Advanced Cross-Lingual Information Access (ACLIA) - Information Retrieval for Question Answering (IR4QA)

Date: June 16, 2010

Time: 15:50 - 17:10

1. IMU Experiment in IR4QA at NTCIR-8

The author introduced their IR system for IR4QA, which consists of two modules: (1) query processing; (2) indexing, retrieval and re-ranking. Questions are first classified into different types, based on which different weighting strategies are used. Query expansion is used to address vocabulary mismatch. They built three indexes for each document: KeyFile-Unigram-Index, KeyFile-Word-Index and Indri-Word-Index, and then used interpolation to re-rank the documents returned from these three indexes.
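The interpolation step might look like the following generic linear score fusion; this is a sketch under our own assumptions (per-list max normalization, hypothetical weights), not the authors' actual implementation:

```python
def interpolate_rerank(result_lists, weights):
    """Linearly interpolate document scores from several indexes and
    return a single fused ranking (best first).

    result_lists: one {doc_id: score} dict per index.
    weights: one interpolation weight per index (should sum to 1).
    """
    fused = {}
    for scores, w in zip(result_lists, weights):
        # Normalize within each result list so scores from different
        # indexes (e.g. unigram vs. word) are comparable before mixing.
        max_s = max(scores.values())
        for doc, s in scores.items():
            fused[doc] = fused.get(doc, 0.0) + w * (s / max_s if max_s else 0.0)
    return sorted(fused, key=fused.get, reverse=True)
```

With three indexes, the three weights would typically be tuned on training topics.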

One audience member asked how the choices were made in their system, which contains many hand-crafted heuristic rules. The session chair, Tetsuya Sakai, asked whether the authors had tried an oracle classification of the questions, and the author answered that they had not examined the question classification in the test set.

2. Query Expansion from Wikipedia and Topic Web Crawler on CLIR
The first author, Meng-Chun Lin, reported various strategies for query expansion (QE) in the NTCIR-8 IR4QA subtask. They used Google translation and Okapi BM25 with pseudo-relevance feedback as the basic retrieval system. Query expansion from Wikipedia, from the results of QA analysis, and from a topic web crawler is used in their system.
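As an illustration only (not the paper's exact method), pseudo-relevance feedback picks expansion terms from the top-ranked documents of an initial retrieval; a simple frequency-based selector might look like this:

```python
from collections import Counter

def prf_expansion_terms(ranked_docs, query_terms, top_docs=5, n_terms=10):
    """Pseudo-relevance feedback term selection (simplified sketch):
    return the most frequent terms in the top-ranked documents that
    are not already in the query.

    ranked_docs: list of documents, each a list of tokens, best first.
    """
    counts = Counter()
    for doc in ranked_docs[:top_docs]:
        counts.update(doc)
    for t in query_terms:
        counts.pop(t, None)  # do not re-add original query terms
    return [t for t, _ in counts.most_common(n_terms)]
```

A real BM25-based system would weight candidate terms by their retrieval value rather than raw frequency, but the control flow is the same.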

One question was: what kinds of keywords are actually found by query expansion using Wikipedia?

Tetsuya Sakai asked whether the authors had analyzed the per-topic performance improvement from the Wikipedia-based query expansion.

3. KECIR: An Information Retrieval System for IR4QA Task
The talk was cancelled.

by Bin Lu and Han Ren

[Session Notes] Proposals for New Tasks at NTCIR-9
[Call For Task Proposals]

The last session on June 16 discusses the proposals for new tasks at the next NTCIR workshop.

The first reporter discusses the next ACLIA tasks, including CCLQA and IR4QA.

The second reporter is Ruihua Song, and she discusses a proposal on mining and evaluating query intents (subtopics); the related task is the TREC 2009 & 2010 web track diversity task. The data and evaluation methods are as follows:

Data: SogouT by Sogou
Evaluation methods:
For subtopic mining
-- involving a large number of assessors to vote
-- evaluation metrics: novelty and coverage
For ranking
-- multi-grade per-intent relevance
-- evaluation metrics: those proposed in Sakai et al.
-- the probabilities of intents are considered
-- intent sets: the most important subtopics obtained in the first subtask
-- evaluating query classifiers: TBD
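A simplified sketch of what probability-weighted, multi-grade per-intent evaluation means (our own illustration in the spirit of such metrics, not the exact formulation of Sakai et al.):

```python
import math

def intent_aware_ndcg(ranking, intent_probs, rel, k=10):
    """Intent-aware nDCG sketch: a document's gain is the
    intent-probability-weighted sum of its per-intent relevance grades.

    ranking: list of doc IDs, best first.
    intent_probs: {intent: probability} (should sum to 1).
    rel: {intent: {doc_id: grade}} multi-grade per-intent relevance.
    """
    def gain(doc):
        return sum(p * rel[i].get(doc, 0) for i, p in intent_probs.items())

    def dcg(docs):
        return sum(gain(d) / math.log2(r + 1)
                   for r, d in enumerate(docs[:k], start=1))

    # Ideal ranking: all judged docs sorted by combined gain.
    judged = {d for grades in rel.values() for d in grades}
    ideal = sorted(judged, key=gain, reverse=True)
    idcg = dcg(ideal)
    return dcg(ranking) / idcg if idcg > 0 else 0.0
```

A ranking that covers only the most probable intent scores lower than one that spreads high-grade documents across intents in proportion to their probabilities.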

The third reporter introduces the patent translation task at NTCIR-9. The purpose of the task is to investigate the state-of-the-art performance of machine translation on Chinese-English and possibly Chinese-Japanese patent translation, and to improve the effectiveness of machine translation on patents.

The fourth reporter discusses the GeoTime task, whose new focus is geotemporal (location- and time-based) search.

The fifth reporter discusses the textual entailment task, and the following reporter discusses IR for spoken documents.

by Bin Lu and Han Ren

Last updated: July 08, 2010