
EVIA 2023

The Tenth International Workshop on Evaluating Information Access (EVIA 2023),
a Satellite Workshop of the NTCIR-17 Conference

13:00-17:30, Day 1, Tuesday, December 12, 2023, at the National Institute of Informatics, Tokyo, Japan




Online Proceedings

Program

Session 1

Chair: Qingyao Ai

Time (JST) Content
13:00 Welcome from Doug and Qingyao
13:10 Keynote Address: Evaluating Systems that Generate Content (Ian Soboroff; In Person)
13:40 Discussant (Mark Sanderson)
13:50 Open Discussion
14:00 Best-Case Retrieval Evaluation: Improving the Sensitivity of Reciprocal Rank with Lexicographic Precision (Fernando Diaz; In Person)
14:25 Discussant (Makoto P. Kato)
14:35 Open Discussion
14:45 Heaps’ Law in GPT-Neo Large Language Model Emulated Corpora (Uyen Lai*, Gurjit S. Randhawa and Paul Sheridan; In Person)
15:00 Discussant (Cheng Luo)
15:05 Open Discussion
15:10 Break


Session 2

Chair: Doug Oard

Time (JST) Content
15:30 Decoy Effect in Search Interaction: A Pilot Study (Nuo Chen*, Jiqun Liu, Tetsuya Sakai and Xiao-Ming Wu; Online)
15:45 Discussant (Lisa Raithel)
15:50 Open Discussion
15:55 Fairness-based Evaluation of Conversational Search: A Pilot Study (Tetsuya Sakai; In Person)
16:20 Discussant (Gareth Jones)
16:30 Open Discussion
16:40 Panel: Evaluation of Generative Large Language Models
Moderator: Doug Oard
Panelists: Akiko Aizawa (NII), Inho Kang (Naver), Yiqun Liu (Tsinghua University), Paul Thomas (Microsoft)
17:20 Wrap Up
17:30 Adjourn



Keynote

Speaker: Ian Soboroff (NIST, USA)

Title: Evaluating Systems that Generate Content.

Abstract: The astounding emergence of ChatGPT and other AI systems that generate content, and their apparently incredible performance, are an inspiration to the research community. The performance of these LLMs is so impressive that it is widely supposed we can use them to measure their own effectiveness! We have long had evaluation methods for generated content, including question answering, summarization, and translation; in this talk I dust them off and present both a historical view and a discussion of how we might approach those methods today. tl;dr: we have a lot of work to do.
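The classic measures the abstract alludes to are easy to state concretely. As one illustration (not part of the talk itself), here is a minimal Python sketch of token-overlap F1, the standard SQuAD-style measure for extractive question answering; the normalization rules and the example strings are illustrative assumptions.

```python
from collections import Counter
import re
import string

def normalize(text):
    """Lowercase, strip punctuation, drop English articles, tokenize on whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return text.split()

def token_f1(prediction, reference):
    """SQuAD-style token-overlap F1 between a generated answer and a gold answer."""
    pred, gold = normalize(prediction), normalize(reference)
    if not pred or not gold:
        return float(pred == gold)  # both empty counts as a perfect match
    overlap = sum((Counter(pred) & Counter(gold)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

# Illustrative strings only: a partially correct generated answer.
print(token_f1("The workshop was held in Tokyo", "Tokyo, Japan"))  # ~0.29
```

Measures of this kind reward lexical overlap with a gold answer, which is exactly why they need dusting off for long-form generated content, where many differently worded answers can be equally correct.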


Panel: Evaluation of Generative Large Language Models

Moderator: Doug Oard

Panelists: Akiko Aizawa (NII), Inho Kang (Naver), Yiqun Liu (Tsinghua University), Paul Thomas (Microsoft)


Accepted Papers (in alphabetical order of the first author's family name)

  • Nuo Chen, Jiqun Liu, Tetsuya Sakai and Xiao-Ming Wu, Decoy Effect in Search Interaction: A Pilot Study
  • Fernando Diaz, Best-Case Retrieval Evaluation: Improving the Sensitivity of Reciprocal Rank with Lexicographic Precision
  • Uyen Lai, Gurjit S. Randhawa and Paul Sheridan, Heaps' Law in GPT-Neo Large Language Model Emulated Corpora
  • Tetsuya Sakai, Fairness-based Evaluation of Conversational Search: A Pilot Study


EVIA 2023 CALL FOR PAPERS

We are delighted to invite submissions for the Tenth International Workshop on Evaluating Information Access (EVIA 2023), which will be held in conjunction with NTCIR-17 in Tokyo, Japan, on December 12-15, 2023. Information access technologies play a crucial role as intermediaries between human information needs and digital information resources, and the reliable evaluation of these technologies has long been recognized as central to the advancement of the field. As information retrieval technologies spread into ever more settings, retrieval methods diversify, and retrieval tools improve, the importance of effective, efficient, and innovative evaluation grows as well.

Authors are encouraged to submit short research papers (2-5 pages) describing preliminary work or work in progress, full research papers (6-9 pages) describing completed work, or position papers (4-9 pages) that address one or more of the following topics, or that explore any other topics related to information access evaluation:

- Creation of test collections or other evaluation environments
- Evaluation using crowdsourcing, implicit feedback, living labs, or other inferential methods
- User studies and the evaluation of human-computer interaction in information retrieval
- Evaluation measures and statistical issues in retrieval evaluation
- Evaluation of generative large language models
- Evaluation methods for multilingual, multimedia, or mobile information access
- Evaluation issues in enterprise retrieval systems
- Other novel information access tasks and their evaluation


SUBMISSION INSTRUCTIONS

Submissions must be in English, in PDF, and must use the standard ACM proceedings template, available for LaTeX or Word from https://www.acm.org/publications/proceedings-template.

Papers must report work that has not previously been published and is not under review or accepted for publication elsewhere. Reviewing will be double-blind, so submissions should not contain any author identification. Submit papers to the EVIA track using the NTCIR submission system at https://easychair.org/my/conference?conf=ntcir17.


IMPORTANT DATES (Anywhere on Earth)

October 2, 2023: Submission deadline
October 22, 2023: Notification of acceptance
November 3, 2023: Camera-ready paper deadline
December 12-15, 2023: EVIA Workshop and the NTCIR-17 Conference


NTCIR and EVIA will be primarily in-person events, but we will be able to arrange remote participation in cases where travel to Tokyo is not possible. We look forward to an engaging exchange of research ideas, insights, and discoveries at EVIA 2023, contributing to the continued evolution of the information access landscape. Please join us for this intellectually enriching event!


EVIA 2023 Co-chairs

Qingyao Ai (Tsinghua University, aiqy[at]tsinghua.edu.cn) and Douglas W. Oard (University of Maryland, oard[at]umd.edu)


Program Committee members

  • Hsin-Hsi Chen (National Taiwan University)
  • Charles L.A. Clarke (University of Waterloo)
  • Wai Lam (The Chinese University of Hong Kong)
  • Yiqun Liu (Tsinghua University)
  • Jia Paik (Indian Institute of Technology Kharagpur)
  • Tetsuya Sakai (Waseda University)
  • Mark Sanderson (RMIT University)
  • Nicola Ferro (University of Padua)
  • Ellen Voorhees (National Institute of Standards and Technology)


PAST EVIAs

Last Modified: 2023-12-12