We are very pleased to announce that NTCIR-8 plans a pilot task using a
Community QA dataset from Yahoo! Japan.
Community QA sites such as Yahoo! Answers have become increasingly important in everyday life. A wide variety of questions are asked and answered on such sites. They also provide a good opportunity to investigate real users' (a kind of) relevance judgments, in the form of the "best answers", at large scale.
We are fortunate to have been provided with a large-scale Community QA dataset by Yahoo! Japan for research purposes. It consists of about 100,000,000 items (about 100
GB). We therefore plan a pilot task using this dataset, as follows:
1. Tasks:
A. Main Task: Best Answer Estimation
B. Sub Task: Question Type Classification
2. Data set:
100,000,000 items from Yahoo! Japan's "Yahoo! Chiebukuro" (http://chiebukuro.yahoo.co.jp/), a community-based QA site equivalent to Yahoo! Answers. The dataset consists of about 26,000,000 questions and about 76,000,000 answers to those questions, about 100 GB in total. All items are written in Japanese.
- Training data: more than 99% of the above, including the best-answer labels.
- Test data: several thousand questions and their answers, with the best-answer labels removed.
- In addition to the above, a set of questions/answers with manually annotated question types, for training and for evaluating the test results.
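To make the data layout concrete, here is a minimal sketch of one dataset item and of how a test item differs from a training item. This is an illustration only: the class and field names are our own assumptions, not the actual schema of the Yahoo! Chiebukuro dataset.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Answer:
    answer_id: str
    body: str

@dataclass
class Question:
    question_id: str
    body: str
    answers: List[Answer] = field(default_factory=list)
    # Present in the training data; removed in the test data.
    best_answer_id: Optional[str] = None

def make_test_item(q: Question) -> Question:
    """Hide the asker-selected best answer, as in the test set."""
    return Question(q.question_id, q.body, q.answers, best_answer_id=None)

# Hypothetical example: one question with two answers, best answer known.
q = Question("q1", "What does NTCIR stand for?",
             [Answer("a1", "NII Test Collection for IR systems."),
              Answer("a2", "Not sure.")],
             best_answer_id="a1")
test_q = make_test_item(q)  # same question/answers, best answer hidden
```

In other words, participants in the Main Task receive items like `test_q` and must recover the hidden `best_answer_id`.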
3. Task Sketch:
All participants are expected to submit results for the Main Task. For the test, we will use the "best answers" selected by the askers.
Community QA covers a wide variety of question types, from simple factoid/navigational questions of the kind often submitted to ordinary web search engines, to questions asking the community for advice or opinions.
Question types are one clue for improving the effectiveness of best answer estimation and for investigating users' intentions. We are therefore interested in testing the automatic classification of question types, and in examining how question type classification affects best answer estimation. We will propose a question type classification scheme, and provide a training set, as well as answers for the test set, containing manually annotated question types. Participants may also propose their own classification scheme other than the one provided by NTCIR, do the question type annotation themselves, and test the effectiveness of those classifications in best answer estimation against runs without question types and/or runs with other sets of question types, including the one provided by NTCIR.
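The comparison above can be sketched as a simple accuracy measure: the fraction of test questions for which a run's predicted best answer matches the asker-selected one, computed for a run with question-type features and for a baseline run without them. This is only our illustrative sketch; the official evaluation measure is still under discussion, and the run data below is invented.

```python
from typing import Dict

def best_answer_accuracy(predicted: Dict[str, str], gold: Dict[str, str]) -> float:
    """Fraction of test questions whose predicted best answer matches
    the asker-selected best answer (question id -> answer id)."""
    if not gold:
        return 0.0
    hits = sum(1 for qid, best in gold.items() if predicted.get(qid) == best)
    return hits / len(gold)

# Hypothetical gold labels and two hypothetical runs:
gold = {"q1": "a1", "q2": "a5", "q3": "a2"}
baseline_run = {"q1": "a1", "q2": "a4", "q3": "a3"}  # no question types
typed_run = {"q1": "a1", "q2": "a5", "q3": "a3"}     # with question types

print(best_answer_accuracy(baseline_run, gold))  # 1 of 3 correct
print(best_answer_accuracy(typed_run, gold))     # 2 of 3 correct
```

Comparing the two scores over the same test questions would show whether the question-type features helped.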
4. Task Organizers and Contact Information:
Daisuke Ishikawa and Noriko Kando (NII)
The task design is still rough, and you are welcome to join the discussion on the task design and evaluation methodology. Please consider participating in this exciting new task.
5. How to Participate:
To join the discussion mailing list, please email to ntcadm-yahoo
To register as a participant, please read the "How to participate" page,
then register at http://research.nii.ac.jp/ntcir/ntcir-ws8/ntcir8regist-e.html, selecting "pilot task" at the bottom of the form and entering
"Community QA Task" in the box for the system description.
You are most welcome to join us!