NTCIR-8 Multilingual Opinion Analysis Task
1st Round Table Meeting memo

Date: June 22nd (Mon), 2009, 13:30-16:00
Place: NII #2009
Attendance: Noriko Kando (NII), Junichi Fukumoto (Ritsumeikan Univ.), Hiroya Susuki (Keio Univ.), David Kirk Evans (Amazon Japan), Yohei Seki (Toyohashi Univ.)

1. NTCIR-8 MOAT Task

Of the NTCIR-6 and -7 subtasks, we will continue at least four: opinionatedness and polarity judgement, and opinion holder and target identification. We will also take on three new aspects: scenario-based evaluation, cross-lingual analysis, and collaborative annotation with participants.

A. Document Genre and Number of Participants
* Newspapers: J/E/TC/SC, or Blogs: J/E.
* Due to budget constraints and organizer workload, we cannot hold both the newspaper and the blog tasks in full. We will ask the participants which of the following three document genre combinations is best:
  A-1: N: J/E/TC/SC
  A-2: N: J/E/TC/SC, B: J
  A-3: N: TC/SC, B: J/E
* We will not continue the main task for a language if it attracts fewer than five participants. Note that at NTCIR-7 MOAT, more than seven teams participated on each language side.

B. Number of Topics
* For the collaborative annotation, we plan on three topics from each of five participants (excluding the organizers), giving fifteen topics in total.
* One of the goals of NTCIR-8 MOAT is to estimate the number of topics needed to properly evaluate differences between systems.

C. Relevance Judgement Subtask
* To extract appropriate opinions, relevant opinion retrieval is important.
* We should either continue the relevance judgement task as in NTCIR-7 or handle relevance judgement within the scenario-based evaluation.
* In NTCIR-7 MOAT, the Simplified Chinese side judged all sentences for opinionatedness; this should be changed to be consistent with the other languages.

D. English Newspapers (New York Times)
* We cannot distribute the full document set and will distribute a subset instead. If participants pay only $50, we can distribute the full document set. For the subset, we should either select documents randomly or include the relevant documents; in the latter case, the document set might be skewed.
* For Japanese newspapers, participants may use more than seven years of data if they submit their results to the NTCIR-7 organizers.

2. Pilot Task

We have four candidates for pilot tasks. We will ask the participants and decide the pilot task for NTCIR-8 by June 30th.

A. Yahoo! Answers: Best Answer Estimation (in Japanese)
B. ICWSM 2009 Blog Opinion Analysis Task (Japanese: Yohei will co-organize with Keio University; English: looking for co-organizer candidates)
C. Blog recommendation task based on Yahoo! Answers questions (in Japanese)
D. Review Opinion Analysis Task (Ruifeng Xu, Kam-Fai Wong, et al. as co-organizers; in Chinese, opinionatedness and polarity)

3. Schedule
* Collaborative annotation: July-September 2009
* Formal run: October or November 2009
* Pilot task formal run: December 2009?
* Evaluation return: January 2010
* Paper submission: March 2010
* Meeting: June 2010