Living Labs at TREC, CLEF, ECIR

There are a number of exciting developments around the living labs initiative that Anne Schuth and I have been working on. Our goal with this activity is to enable researchers to perform online experiments, i.e., in-situ evaluation with actual users of a live site, as opposed to relying exclusively on paid assessors (or simulated users). We believe that this is nothing less than a paradigm shift. We define this new evaluation paradigm as follows:

The experimentation platform is an existing search engine. Researchers have the opportunity to replace components of this search engine and evaluate these components using interactions with real, unsuspecting users of this search engine.

CLEF LL4IR'16

Our first pilot campaign, LL4IR, co-organized by Liadh Kelly, ran at CLEF earlier this year with two use-cases: product search and web search. See our (extended) overview paper for details. LL4IR will run again at CLEF next year with the same use-cases. Thanks to CLEF, our API has by now been used extensively and tested thoroughly: it has successfully processed over 0.5M query issues, coming from real users of the two search engines involved.
TREC OpenSearch

Based on the positive feedback we received from researchers as well as commercial partners, we decided it’s time to go big, that is, TREC. Getting into TREC is no small feat, given that the number of tracks is limited to 8 and a large number of proposals compete for the slot(s) that may get freed up each year. We are very pleased that TREC accepted our proposal; it attests to the importance of the direction we’re heading in. At TREC OpenSearch we’re focusing on academic search. It’s an interesting domain as it offers a low barrier to entry with ad-hoc document retrieval, and at the same time it is a great playground for current research problems, including semantic matching (to overcome vocabulary mismatch), semantic search (retrieving not just documents but also authors, institutes, conferences, etc.), and recommendation (related literature). We are in the process of finalizing the agreements with academic search engines and plan to have our guidelines completed by March 2016.
LiLa'16

Using our API is easy (with documentation and examples available online), but it is different from the traditional TREC-style way of evaluating. Therefore, Anne and I will be giving a tutorial, LiLa, at ECIR in Padova in March 2016. The timing is ideal: it is well before the TREC and CLEF deadlines and allows prospective participants to familiarize themselves with both the underlying theory and the practicalities of our methodology.
Last but not least, we are thankful to 904Labs for hosting our API infrastructure.

ICTIR 2015 paper online

“Entity Linking in Queries: Tasks and Evaluation,” an upcoming ICTIR 2015 paper by Faegheh Hasibi, Svein Erik Bratsberg, and myself, is now available online. The resources developed within this study have also been made publicly available.

Annotating queries with entities is one of the core problem areas in query understanding. While seemingly similar, the task of entity linking in queries differs from entity linking in documents and requires a methodological departure due to the inherent ambiguity of queries. We differentiate between two specific tasks, semantic mapping and interpretation finding, discuss current evaluation methodology, and propose refinements. We examine publicly available datasets for these tasks and introduce a new manually curated dataset for interpretation finding. To further deepen the understanding of task differences, we present a set of approaches for effectively addressing these tasks and report on experimental results.

Two fully funded PhD positions available

I have two fully funded PhD positions available in the context of the FAETE project.
The positions are for three years and come with no teaching duties! (There is also the possibility of an extension to four years with 25% compulsory duties.) The starting date can be as early as Sept 2015, but no later than Jan 2016.

Further details and the application form are on Jobbnorge.
Application deadline: Aug 3, 2015

Evaluating document filtering systems over time

Performance of three systems over time. Systems A and B degrade, while System C improves over time, but they all have the same average performance over the entire period. We express the change in system performance using the derivative of the fitted line (in orange) and compare performance at what we call the “estimated end-point” (the large orange dots).

Our IPM paper “Evaluating document filtering systems over time,” with Tom Kenter and Maarten de Rijke as co-authors, is available online. In this paper we propose a framework for measuring the performance of document filtering systems. Such systems, up to now, have been evaluated in terms of traditional metrics like precision, recall, MAP, nDCG, F1, and utility. We argue that these metrics lack support for the temporal dimension of the task. We propose a time-sensitive way of measuring performance by employing trend estimation. In short, performance is calculated for batches, a trend line is fitted to the results, and the estimated performance of systems at the end of the evaluation period is used to compare them. To demonstrate the results of our proposed evaluation methodology, we analyze the runs submitted to the Cumulative Citation Recommendation task of the 2012 and 2013 editions of the TREC Knowledge Base Acceleration track, and show that important new insights emerge.
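The batch-and-trend-line idea can be sketched in a few lines of Python. The function name, the per-batch scores, and the use of a simple least-squares line are illustrative assumptions for this sketch, not taken from the paper:

```python
# Sketch of trend-based evaluation: score systems per batch, fit a
# line, and compare the estimated score at the end of the period.
import numpy as np

def end_point_estimate(batch_scores):
    """Fit a least-squares line to per-batch scores and return
    (slope, estimated score at the final batch)."""
    x = np.arange(len(batch_scores))
    slope, intercept = np.polyfit(x, batch_scores, deg=1)
    return slope, slope * x[-1] + intercept

# Two hypothetical systems with identical average performance (0.5)
# over ten batches, but opposite trends.
sys_a = [0.59, 0.57, 0.55, 0.53, 0.51, 0.49, 0.47, 0.45, 0.43, 0.41]  # degrading
sys_c = [0.41, 0.43, 0.45, 0.47, 0.49, 0.51, 0.53, 0.55, 0.57, 0.59]  # improving

slope_a, end_a = end_point_estimate(sys_a)
slope_c, end_c = end_point_estimate(sys_c)
# The averages are tied, so a time-insensitive metric cannot separate
# the systems; the estimated end-points rank the improving system higher.
```

The point of the sketch is that a metric averaged over the whole period would declare the two systems tied, while the derivative of the fitted line and the estimated end-point separate them.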


The Exploiting Semantic Annotations in Information Retrieval (ESAIR) workshop series aims to advance the general research agenda on the problem of creating and exploiting semantic annotations. The eighth edition of ESAIR, with a renewed set of organizers, sets its focus on applications. We invite presentations of prototype systems in a dedicated “Annotation in Action” demo track, in addition to the regular research and position paper contributions. A Best Demonstration Award, sponsored by Google, will be presented to the authors of the most outstanding demo at the workshop.

Submissions: regular research papers (4+ pages), position papers (2+1 pages), demo papers (4+ pages)
Deadline: July 2nd

The workshop also offers a track for papers that were not accepted at the main conference to be considered for presentation at the workshop; the deadline for these contributions is July 8. Authors of such submissions are required to attach the reviews their paper received, to facilitate the decision process.

See the workshop’s homepage for details.