Keynote
Date: June 8th (Wed), 2016 (Time: 10:20 a.m. - 11:20 a.m.)
Location: Hitotsubashi Hall, NII, Tokyo, Japan
Title: What Would We Like IR Metrics to Measure?
Speaker: Alistair Moffat, Department of Computing and Information Systems, The University of Melbourne
The field of Information Retrieval has a long-standing tradition of rigorous evaluation, and an expectation that proposals for new mechanisms and techniques will be evaluated either in batch-mode experiments against realistic test collections, with results derived from standard tools, or through user studies. This emphasis on evidence, and the desire for verification of proposals, has meant that IR effectiveness measurement is an important area of study in its own right. The result has been the development of a complex suite of relevance metrics, each with seemingly different behavior. Well-known examples include Precision, Recall, Average Precision, Normalized Discounted Cumulative Gain, BPref, the Q-Measure, Rank-Biased Precision, and so on. This presentation returns to the underlying question of what it is that a metric should measure, using a set of desiderata for usefulness as a starting point for examining the existing palette of metrics. It then presents recent work describing a goal-sensitive adaptive metric called INST.
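For readers less familiar with the metrics named above, the following minimal sketch (an illustration, not material from the talk) computes three of them for a single ranked list, assuming binary 0/1 relevance judgements; the RBP persistence parameter p = 0.8 is simply a common illustrative choice:

    # Illustrative implementations of three metrics named in the abstract,
    # assuming binary relevance: rels[k] is 1 if the result at rank k+1 is
    # relevant to the topic, and 0 otherwise.

    def precision_at_k(rels, k):
        # Fraction of the top-k results that are relevant.
        return sum(rels[:k]) / k

    def average_precision(rels, num_relevant):
        # Sum of precision@k over the ranks k at which relevant documents
        # appear, divided by the total number of relevant documents.
        hits, total = 0, 0.0
        for k, rel in enumerate(rels, start=1):
            if rel:
                hits += 1
                total += hits / k
        return total / num_relevant if num_relevant else 0.0

    def rank_biased_precision(rels, p=0.8):
        # RBP (Moffat and Zobel): expected rate of gain for a user who
        # inspects rank 1 and then continues from each rank to the next
        # with persistence probability p.
        return (1 - p) * sum(rel * p ** k for k, rel in enumerate(rels))

    ranking = [1, 0, 1, 0, 0]                 # relevant at ranks 1 and 3
    print(precision_at_k(ranking, 3))         # 2/3
    print(average_precision(ranking, 2))      # (1/1 + 2/3) / 2 = 0.8333
    print(rank_biased_precision(ranking))     # 0.2 * (1 + 0.64) = 0.328

INST, the adaptive metric discussed in the talk, generalizes this weighted-sum view by making the user's continuation probability depend on their search goal and on the relevance encountered so far, rather than on a fixed constant p.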
Alistair Moffat has been involved in research in information retrieval for more than 25 years. He has published numerous papers in the areas of index compression, text compression, and dynamic pruning mechanisms, all of which help support efficient ranked querying. Alistair is a co-author of the 1991 (revised 1994) book Managing Gigabytes, and also co-author of the 2002 book Compression and Coding Algorithms. Much of Alistair's recent work has examined the issue of IR system evaluation, and, with other co-authors in Australia, he has focused on the relationship between models of user interactions with search results pages, and the effectiveness metrics that those interactions correspond to. Alistair was co-Chair for SIGIR 1998 in Melbourne, and for CIKM 2015, also held in Melbourne; and co-Program Committee Chair for SIGIR 2005 (Salvador, Brazil) and SIGIR 2015 (Santiago, Chile).
Alistair has been a teaching/research faculty member at the University of Melbourne for nearly thirty years, and was Department Chair from 2007 to 2011. During those thirty years he has taught programming skills to well in excess of 10,000 undergraduate students, has authored a popular (he insists that it is popular, because all his students have been told to purchase it) C programming textbook (Programming, Problem Solving, and Abstraction with C, 2002, revised 2012), and has received awards for his teaching and lecturing skills.
Alistair completed his PhD in 1985 at the University of Canterbury, in New Zealand.
EVIA 2016 Keynote
Date: June 7th (Tues), 2016 (Time: 1:00 p.m. - 1:50 p.m.)
Location: Hitotsubashi Conference Room 2-4, NII, Tokyo, Japan
Title: Understanding and Predicting Search Satisfaction in a Heterogeneous Environment
Speaker: Yiqun Liu, Tsinghua University, China
Search performance evaluation can be performed using metrics based on result relevance or alternative measures based on users' search experience. Recent studies indicate that relevance-based evaluation metrics, such as MAP and nDCG, may not be perfectly correlated with users' search experience (usually considered the gold standard). Therefore, search satisfaction has become one of the prime concerns in search evaluation studies. In this talk, I will discuss some of our recent progress in the understanding and effective prediction of search satisfaction. I will start by talking about the relationship between relevance, usefulness, and satisfaction: more specifically, how do a document's usefulness as perceived by the user, and its relevance as annotated by assessors, correlate with the user's satisfaction? After that, I will examine users' satisfaction perception in a heterogeneous search environment and investigate how vertical results on SERPs affect users' satisfaction. Finally, I will introduce a novel satisfaction prediction framework that relies on users' mouse movement patterns (motifs) to identify satisfied and unsatisfied search sessions.
Yiqun Liu is an associate professor in the Department of Computer Science and Technology at Tsinghua University, Beijing, China. His major research interests are in Web search, user behavior analysis, and natural language processing. He is also a Principal Investigator (PI) of a joint center (named NExT) between the National University of Singapore and Tsinghua University that develops technologies for live media search. He serves on the editorial board of the Information Retrieval Journal (Springer). He also serves as short paper chair of SIGIR 2017, program chair of NTCIR-13, and general chair of AIRS 2016, as well as a program committee member of a number of important international academic conferences including SIGIR, WWW, AAAI, ACL, and IJCAI. He has published over 30 papers in top-tier academic conferences and journals, with over 1,600 citations according to Google Scholar. He received best paper honorable mention awards at SIGIR 2015 and AIRS 2013. He has also been the coordinator for the NTCIR INTENT and IMine tasks since 2011.
NTCIR-12 Invited Talks
Date: June 10th (Friday), 2016 (Time: 1:30 p.m. - 1:50 p.m.)
Location: Hitotsubashi Conference Room 2-4, NII, Tokyo, Japan
Speaker: Noriko Arai, National Institute of Informatics, Japan
“Todai Robot Project (Can a robot get into the University of Tokyo?)” was initiated by the National Institute of Informatics in 2011 as an AI grand challenge. The goal of the project is to create an AI system that answers real questions from university entrance examinations, which consist of two parts: the multiple-choice national standardized tests, and the written tests, which include short essays. The task naturally requires development of underlying technologies in research areas including natural language processing, image processing, speech recognition, automated theorem proving, computer algebra, and computer simulation. At the same time, interdisciplinary research synthesis is necessary. Since 2013, our software has taken mock tests of the National Center Test every year. The results show that its ability is still far below that of the average entrant to the University of Tokyo. However, it is above the national average: it is competent enough to pass the entrance exams of two thirds of the universities in Japan, including 33 national universities.
Dr. Noriko Arai has been a Professor in the Information and Society Research Division of the National Institute of Informatics since 2006, and Director of the Research Center for Community Knowledge since 2008. She received her Ph.D. degree, with major achievements in the theory of proof complexity, from the Tokyo Institute of Technology in 1997. Her major research interests fall in the fields of mathematical logic, the theory of computational and proof complexity, e-learning, knowledge sharing, and artificial intelligence. Dr. Arai is known as the leader of the open source software project NetCommons, which is used by more than 3,500 schools and institutes. She won the IASTED 3rd International Software Competition (2007), was awarded the NISTEP Award from the National Institute of Science and Technology Policy (2009), and received the Prizes for Science and Technology in the Commendation for Science and Technology from the Ministry of Education, Culture, Sports, Science and Technology in Japan (2010). Since 2011, she has been Program Director of an AI grand challenge, the Todai Robot Project (Can AI get into the University of Tokyo?). This project aims to add a new dimension to current information technology and bring a deeper understanding of human intelligence, by setting a concrete goal: development of a software system that is able to pass university entrance exams.
Date: June 10th (Friday), 2016 (Time: 1:50 p.m. - 2:10 p.m.)
Location: Hitotsubashi Hall, NII, Tokyo, Japan
Title: News from TREC
Speaker: Ian Soboroff, National Institute of Standards and Technology (NIST), USA
As TREC enters its 25th year, it might seem that we’re quite far from our “adhoc search” roots, but a closer look reveals that we are pushing IR research in new and different directions by expanding the Cranfield model, not abandoning it. The TREC 2015 tracks were quite novel and presented significant challenges for participants. The Live QA track required participants to answer open-domain questions from Yahoo Answers in real time. In the fourth year of the Contextual Suggestion track, a live task was added to the existing batch experiment, with feedback from Mechanical Turk workers. The new Total Recall track (motto: “I’ll be back”) asked participating systems to achieve high recall with a simulated human in the loop. The new Dynamic Domain track also featured relevance feedback from a simulated user as the system navigated topics with multiple diverse sub-intents. The fifth year of the Microblog track moved the task to a real-time scenario, with participants running on the public feed of tweets as they happened. In the third iteration of the Temporal Summarization task, participants’ systems emitted sentence updates about emerging events and were penalized for latency as well as irrelevance. The new Tasks track featured both task understanding and task completion components. Lastly, the Clinical Decision Support track required participants to return biomedical journal articles relevant to a patient case narrative with respect to a specific question such as diagnosis or testing.
Dr. Ian Soboroff is a computer scientist and leader of the Retrieval Group at the National Institute of Standards and Technology (NIST). The Retrieval Group organizes the Text REtrieval Conference (TREC), the Text Analysis Conference (TAC), and the TREC Video Retrieval Evaluation (TRECVID). These are all large, community-based research workshops that drive the state of the art in information retrieval, video search, web search, information extraction, text summarization, and other areas of information access. He has co-authored many publications in information retrieval evaluation, test collection building, text filtering, collaborative filtering, and intelligent software agents. His current research interests include building test collections for social media environments and nontraditional retrieval tasks.
Date: June 10th (Friday), 2016 (Time: 2:10 p.m. - 2:30 p.m.)
Location: Hitotsubashi Hall, NII, Tokyo, Japan
Title: MediaEval 2016 Multimedia Benchmarking Initiative
Speaker: Gareth Jones, Dublin City University, Ireland
MediaEval is a multimedia benchmarking initiative that seeks to evaluate new algorithms for multimedia access and retrieval. MediaEval emphasizes the "multi" in multimedia, with tasks combining various facets of speech, audio, visual content, tags, users, and context. MediaEval develops innovative tasks and techniques focusing on the human and social aspects of multimedia content in a community-driven setting. The initiative provides a platform for researchers to organize benchmark tasks within a planned annual timeline and to report results at an end-of-campaign workshop. This presentation will summarize current activities within the MediaEval 2016 campaign.
Gareth Jones conducts research on multiple topics in information retrieval, including multimedia, multilingual, and personal content, across a wide range of application areas. Over the last 20 years he has published hundreds of papers describing this work at multiple venues. Much of his research encompasses the design of tasks for the evaluation of this work, including test collections and evaluation metrics. Since 2002 he has been responsible for the organisation of international benchmarking tasks at venues including CLEF, FIRE, NTCIR, and TRECVid. In 2010, together with Martha Larson, Delft University of Technology, The Netherlands, he co-founded the MediaEval Multimedia Benchmarking Initiative to provide a platform for the development and evaluation of novel tasks in multimedia indexing and search. Gareth has served as co-Programme Chair for ECIR 2011, Information Retrieval Chair for ACM CIKM 2010, and co-Chair of ACM SIGIR 2013, hosted in Dublin. He is co-Chair for CLEF 2017, which will be co-located with the MediaEval 2017 Workshop in Dublin.
Gareth is a faculty member of the School of Computing, Dublin City University (DCU), Ireland and a Principal Researcher in the SFI ADAPT Centre. He holds B.Eng. and PhD degrees from the University of Bristol, UK. He has previously held posts at the University of Cambridge and University of Exeter, U.K., and in 1997 was a Toshiba Fellow at the Toshiba Corporation Research and Development Center in Kawasaki, Japan.
Last modified: 2016-06-03