[ntcir:304] SIGIR 2009 Workshop on the Future of IR Evaluation
- To: NTCIR@xxxxxxxxx
- Subject: [ntcir:304] SIGIR 2009 Workshop on the Future of IR Evaluation
- From: tetsuyasakai@xxxxxxxxxxxxxxxx
- Date: Fri, 20 Mar 2009 17:56:06 +0900 (JST)
SIGIR 2009 Workshop on the Future of IR Evaluation
July 23, Boston
Call for Papers
Evaluation is at the core of information retrieval: virtually all
progress owes directly or indirectly to test collections built within
the so-called Cranfield paradigm. In recent years, however, IR
researchers have been routinely pursuing tasks outside the traditional
paradigm, taking a broader view of tasks, users, and context. Content
is evolving rapidly from traditional static text to diverse forms of
dynamic, collaborative, and multilingual information sources. Industry,
too, is embracing "operational" evaluation based on the analysis of
endless streams of queries and clicks.

We invite the submission of papers that think outside the box:
- Are you working on an interesting new retrieval task or aspect? Or
on its broader task or user context? Or on a complete system with a
novel interface? Or on interactive/adaptive search? Or ...?
Please explain why this is of interest, and what would be an
appropriate way of evaluating.
- Do you feel that the current evaluation tools fail to do justice to
your research? Is there a crucial aspect missing? Or are you
interested in specific, rare phenomena that have little impact on
the average scores? Or ...? Please explain why this is of interest,
and what would be an appropriate way of evaluating.
- Do you have concrete ideas on how to evaluate such a novel IR task? Or
ideas for new types of experimental or operational evaluation? Or
new measures or ways of re-using existing data? Or ...? Please
explain why this is of interest, and what would be an appropriate
way of evaluating.
The workshop brings together all stakeholders, ranging from those with
novel evaluation needs, such as a PhD candidate pursuing a new
IR-related problem, to senior IR evaluation experts. Desired outcomes
are insight into how to make IR evaluation more "realistic," and at
least one concrete idea for a retrieval track or task (at CLEF, INEX,
NTCIR, TREC) that would not have happened otherwise.
Help us shape the future of IR evaluation!
- Submit a short 2-page poster or position paper explaining your key
wishes or points,
- and actively take part in the discussion at the workshop.
The deadline is Monday, May 18, 2009; further submission details are
available on the workshop website.

Organizers:
Shlomo Geva, INEX & QUT, Australia
Jaap Kamps, INEX & University of Amsterdam, The Netherlands
Carol Peters, CLEF & ISTI-CNR, Italy
Tetsuya Sakai, NTCIR & Microsoft Research Asia, China
Andrew Trotman, INEX & University of Otago, New Zealand
Ellen Voorhees, TREC/TAC & NIST, USA