The continuing goal of the Exploiting Semantic Annotations in Information Retrieval (ESAIR) workshop series is to create a forum for researchers interested in the application of semantic annotations to information access tasks. ESAIR’16 sets its focus on personal mobile applications and will be held in conjunction with CIKM’16 in Indianapolis, USA, in October.

Important dates:

  • Position paper submission (2+1 pages): August 1, 2016
  • Demo submission (4+ pages): August 8, 2016
  • Acceptance notification: August 22, 2016
  • Camera-ready version: September 1, 2016


The Exploiting Semantic Annotations in Information Retrieval (ESAIR) workshop series aims to advance the general research agenda on the problem of creating and exploiting semantic annotations. The eighth edition of ESAIR, with a renewed set of organizers, sets its focus on applications. We invite presentations of prototype systems in a dedicated “Annotation in Action” demo track, in addition to the regular research and position paper contributions. A Best Demonstration Award, sponsored by Google, will be presented to the authors of the most outstanding demo at the workshop.

Submissions: regular research papers (4+ pages), position papers (2+1 pages), demo papers (4+ pages)
Deadline: July 2nd

The workshop also offers a track for papers that were not accepted at the main conference to be considered for presentation at the workshop; the deadline for these contributions is July 8. In this case, authors are required to attach the reviews their paper received, so as to facilitate the decision process.

See the workshop’s homepage for details.

Living Labs for Information Retrieval Evaluation

Evaluation is a central aspect of information retrieval (IR) research. In the past few years, a new evaluation methodology known as living labs has been proposed as a way for researchers to perform in-situ evaluation. This is not new, you might say; major web search engines have been doing it for several years already. While that is very true, it also means that this type of experimentation, with real users performing tasks in real-world applications, is only available to the selected few who are involved with the research labs of these organisations. There has been a lot of complaining about the “data divide” between industry and academia; living labs might be a way to bridge it.

The Living Labs for Information Retrieval Evaluation (LL’13) workshop at CIKM last year was a first attempt to bring people, both from academia and industry, together to discuss challenges and to formulate practical next steps. The workshop was successful in identifying and documenting possible further directions. See the preprint of the workshop summary.

The second edition of the Living Labs for IR workshop (LL’14) will run at CIKM this year. Our main goals are to continue our community building efforts around living labs for IR and to pursue the directions set out at LL’13. Having a community benchmarking platform with shared tasks would be a key catalyst in enabling people to make progress in this area. This is exactly what we are trying to set up for LL’14, in the form of a challenge (with the ultimate goal of turning it into a TREC, NTCIR or CLEF track in the future).

The challenge focuses on two specific use cases: product search and local domain search. The basic idea is that participants receive a set of 100 frequent queries along with candidate results for these queries, plus some general collection statistics. They are then expected to produce a ranking for each query and to upload these rankings through an API. The rankings are evaluated online, with real users, and the results of these evaluations are made available to the participants, again through the API.
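To make the cycle concrete, here is a minimal sketch of the participant side: receive queries with candidate documents, rank the candidates, and assemble the payload that would be uploaded through the API. Note that the endpoint names, data fields, and the toy term-overlap ranker are all assumptions for illustration; the actual LL’14 API formats are not specified here.

```python
import json

def rank_candidates(query_text, candidates):
    """Toy ranker (illustrative only): order candidates by term overlap
    between the query and the candidate's title."""
    q_terms = set(query_text.lower().split())
    def score(doc):
        return len(q_terms & set(doc["title"].lower().split()))
    return sorted(candidates, key=score, reverse=True)

def build_run(queries):
    """Build an upload payload: one ranked list of docids per query id.
    The field names ('text', 'candidates', 'docid') are hypothetical."""
    return {
        qid: [doc["docid"] for doc in rank_candidates(q["text"], q["candidates"])]
        for qid, q in queries.items()
    }

# Example input mimicking what a participant might receive from the API.
queries = {
    "q1": {
        "text": "red running shoes",
        "candidates": [
            {"docid": "d1", "title": "blue sandals"},
            {"docid": "d2", "title": "red running shoes on sale"},
        ],
    },
}

run = build_run(queries)
# This JSON payload would then be uploaded through the challenge API;
# evaluation happens server-side, with real users.
print(json.dumps(run))
```

The point of the design is that all interaction happens through data exchange: participants never touch the live system directly, they only submit rankings and read back the online evaluation results.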

In preparation for this challenge, we are organising a challenge workshop in Amsterdam on the 6th of June. The programme includes invited talks and a “hackathon.” We have a limited number of travel grants available (for those coming from outside The Netherlands and coming from academia) to cover travel and accommodation expenses. These are available on a “first come first served” basis (at most one per institute). If you would like to make use of this opportunity, please let us know as soon as possible.

More details may be found on our brand-new website:

Call for Demos | Living Labs for IR workshop

The Living Labs for Information Retrieval Evaluation (LL’13) workshop at CIKM’13 invites researchers and practitioners to present their innovative prototypes or practical developments in a dedicated demo track. Demo submissions must be based on an implemented system that pursues one or more aspects relevant to the interest areas of the workshop.

Authors are strongly encouraged to target scenarios that are rooted in real-world applications. One way to think about this is by considering the following: as a company operating a website/service/application, what methods could allow various academic groups to experiment with specific components of this website/service/application?
In particular, we seek prototypes that define specific component(s) in the context of some website/service/application, and allow for the testing and evaluation of alternative methods for that component. One example is search within a specific vertical (such as product or travel search engine), but we encourage authors to think outside the (search) box.

All accepted demos will be evaluated and considered for the Best Demo Award.
The Best Demo Award winner will receive an award of 750 EUR, offered by the ‘Evaluating Information Access Systems’ (ELIAS) ESF Research Networking Programme. The award can be used to cover travel, accommodation or other expenses related to attending and/or presenting a demo at LL’13.

The submission deadline for demos and for all other contributions is July 22 (extended).

Further details can be found on the workshop website.

Living Labs for IR workshop @CIKM

Together with Liadh Kelly, David Elsweiler, Evangelos Kanoulas, and Mark Smucker, I’m co-organising a workshop on Living Labs for IR Evaluation at CIKM this year.

The basic idea of living labs for IR is that rather than individual research groups independently developing experimental search infrastructures and gathering their own groups of test searchers for IR evaluations, a central and shared experimental environment is developed to facilitate the sharing of resources.

Living labs would offer huge benefits to the community, such as: availability of potentially larger cohorts of real users and their behaviours (e.g. querying behaviour) for experimental purposes; cross-comparability across research centres; and greater knowledge transfer between industry and academia, when industry partners are involved. The need for this methodology is further amplified by the increased reliance of IR approaches on proprietary data; living labs are a way to bridge the data divide between academia and industry.

There are many challenges to be overcome before the benefits associated with living labs for IR can be realised, including challenges associated with living labs architecture and design, hosting, maintenance, security, privacy, participant recruiting, and the development of suitable scenarios and tasks.

This workshop aims to bring together, for the first time, people interested in advancing the living labs for IR evaluation methodology. It will provide an interactive forum for researchers to share ideas and initiate collaborations, with the explicit goal of formulating practical next steps towards living labs for IR.

See the Call-for-Papers for more details.

As part of the workshop, we are considering organising a challenge in the e-commerce domain with the involvement of a medium-sized online retailer. The goal of this challenge would be to (i) allow academics to work with real users and data (esp. those who otherwise would have no access to such data) and (ii) to provide a starting point for the discussions at the workshop.

We will set up and run this challenge if there is sufficient interest in the community. We have set up a poll to collect some initial feedback — please let us know what you think!