ESAIR’16 CfP

The continuing goal of the Exploiting Semantic Annotations in Information Retrieval (ESAIR) workshop series is to create a forum for researchers interested in the application of semantic annotations for information access tasks. ESAIR’16 sets its focus on personal mobile applications and will be held in conjunction with CIKM’16 in Indianapolis, USA, in October.

Important dates:

  • Position paper submission (2+1 pages): Aug 1, 2016
  • Demo submission (4+ pages): Aug 8, 2016
  • Acceptance notification: Aug 22, 2016
  • Camera-ready version: Sep 1, 2016

ECIR’16 contributions

Last Sunday, Anne Schuth and I gave a tutorial on Living Labs for Online Evaluation. The tutorial’s homepage contains all the slides and reference material.

Experimental evaluation has always been central to Information Retrieval research. The field is increasingly moving towards online evaluation, which involves experimenting with real, unsuspecting users in their natural task environments: a so-called living lab. Specifically, with the recent introduction of the Living Labs for IR Evaluation initiative at CLEF and the OpenSearch track at TREC, researchers now have direct access to such labs. With these benchmarking platforms in place, we believe that online evaluation will be an exciting area to work on in the future. This half-day tutorial aims to provide a comprehensive overview of the underlying theory and complement it with practical guidance.
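To give a flavour of the kind of method covered on the practical side, here is a minimal sketch of team-draft interleaving, a standard technique for online evaluation: two rankings are merged into one result list shown to the user, and clicks are credited to whichever system contributed the clicked document. The function names and the simple click-credit rule are my own illustration, not code from the tutorial.

```python
import random

def team_draft_interleave(ranking_a, ranking_b, seed=None):
    """Merge two rankings with team-draft interleaving: teams A and B
    alternate picks (coin flip on ties), duplicates are skipped, and
    each shown document remembers which team contributed it."""
    rng = random.Random(seed)
    interleaved, teams = [], []
    ia = ib = picks_a = picks_b = 0
    while ia < len(ranking_a) or ib < len(ranking_b):
        a_turn = picks_a < picks_b or (picks_a == picks_b and rng.random() < 0.5)
        if (a_turn and ia < len(ranking_a)) or ib >= len(ranking_b):
            doc, ia = ranking_a[ia], ia + 1
            if doc not in interleaved:
                interleaved.append(doc)
                teams.append("A")
                picks_a += 1
        else:
            doc, ib = ranking_b[ib], ib + 1
            if doc not in interleaved:
                interleaved.append(doc)
                teams.append("B")
                picks_b += 1
    return interleaved, teams

def credit_clicks(teams, clicked_positions):
    """Credit each click to the team whose document was clicked;
    the system with more credited clicks wins the impression."""
    wins = {"A": 0, "B": 0}
    for pos in clicked_positions:
        wins[teams[pos]] += 1
    return wins
```

Aggregated over many impressions, these per-query outcomes yield a preference between the two systems without ever asking a paid assessor.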

Today, Faegheh Hasibi is presenting our work on the reproducibility of the TAGME Entity Linking System. The full paper and the resources for this work are available online.

Among the variety of approaches proposed for entity linking, the TAGME system has gained considerable attention and is considered a must-have baseline. In this paper, we examine the repeatability, reproducibility, and generalizability of TAGME by comparing results obtained from its public API with (re)implementations from scratch. We find that the results reported in the paper cannot be repeated due to the unavailability of data sources. Some of the results are reproducible only through the provided API, while the rest are not reproducible at all. We further show that the TAGME approach is generalizable to the task of entity linking in queries. Finally, we provide insights gained during this process and formulate lessons learned to inform future reproducibility efforts.

PhD vacancy

I am looking for a PhD student to work on understanding complex information needs.

Web search engines have become remarkably effective at providing appropriate answers to queries that are issued frequently. However, when it comes to complex information needs, often formulated as natural language questions, responses become much less satisfactory (e.g., “Which European universities have active Nobel laureates?”). The goal of this project is to investigate how to improve query understanding and answer retrieval for complex information needs, using massive volumes of unstructured data in combination with knowledge bases. Query understanding entails, among other things, determining the type (format) of the answer (single fact, list, answer passage, list of documents, etc.) and identifying the series of processing steps (retrieval, filtering, sorting, aggregation, etc.) required to obtain that answer. If the question is not understood or is ambiguous, the system should ask for clarification in an interactive way. This could be done in a conversational manner, similar to how it is done in commercial personal digital assistants, such as Siri, Cortana, or Google Now.
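As a rough illustration of what the query-understanding step might produce, the toy sketch below maps a question to an answer type and an ordered plan of processing steps. The rule-based heuristics stand in for what would in practice be learned models, and all names here are hypothetical, not part of the project.

```python
from dataclasses import dataclass

@dataclass
class QueryPlan:
    answer_type: str  # one of: "fact", "list", "passage", "documents"
    steps: list       # ordered processing steps to obtain the answer

def understand(question: str) -> QueryPlan:
    """Toy answer-type classifier and step planner; naive prefix rules
    stand in for learned models."""
    q = question.lower()
    if q.startswith(("which", "list", "who are")):
        return QueryPlan("list", ["retrieve", "filter", "aggregate"])
    if q.startswith(("who", "when", "where", "how many")):
        return QueryPlan("fact", ["retrieve", "extract"])
    if q.startswith(("why", "how")):
        return QueryPlan("passage", ["retrieve", "rank_passages"])
    # Fall back to classic document retrieval.
    return QueryPlan("documents", ["retrieve"])
```

For the example question above, such a component would need to recognize a list-type answer and plan a retrieve-filter-aggregate pipeline over the knowledge base; when no rule fires with confidence, an interactive system would ask the user for clarification instead of guessing.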

The successful applicant will join a team of two other PhD students working on the FAETE project.

Details and application instructions can be found here.
Application deadline: April 17, 2016.

Important note: there are multiple projects advertised within the call. You need to indicate that you are applying for this specific project. Feel free to contact me directly for more information.

Living Labs at TREC, CLEF, ECIR

There are a number of exciting developments around the living labs initiative that Anne Schuth and I have been working on. Our goal with this activity is to enable researchers to perform online experiments, i.e., in-situ evaluation with actual users of a live site, as opposed to relying exclusively on paid assessors (or simulated users). We believe that this is nothing less than a paradigm shift. We define this new evaluation paradigm as follows:

The experimentation platform is an existing search engine. Researchers have the opportunity to replace components of this search engine and evaluate these components using interactions with real, unsuspecting users of this search engine.

CLEF LL4IR'16

Our first pilot campaign, LL4IR, co-organized by Liadh Kelly, ran at CLEF earlier this year with two use-cases: product search and web search. See our (extended) overview paper for details. LL4IR will run again at CLEF next year with the same use-cases. Thanks to CLEF, our API has by now been used extensively and tested thoroughly: it has successfully processed over 0.5M query issues, coming from real users of the two search engines involved.

TREC OpenSearch

Based on the positive feedback we received from researchers as well as commercial partners, we decided it’s time to go big, that is, TREC. Getting into TREC is no small feat, given that the number of tracks is limited to eight and a large number of proposals compete for the slot(s) that may get freed up each year. We are very pleased that TREC accepted our proposal; this attests to the importance of the direction we’re heading in. At TREC OpenSearch we’re focusing on academic search. It’s an interesting domain as it offers a low barrier to entry with ad-hoc document retrieval, and at the same time is a great playground for current research problems, including semantic matching (to overcome vocabulary mismatches), semantic search (retrieving not just documents but also authors, institutes, conferences, etc.), and recommendation (of related literature). We are in the process of finalizing the agreements with academic search engines and plan to have our guidelines completed by March 2016.

LiLa'16

Using our API is easy (documentation and examples are available online), but it is different from the traditional TREC-like way of evaluation. Therefore, Anne and I will be giving a tutorial, LiLa, at ECIR in Padova in March 2016. The timing is ideal: it is well before the TREC and CLEF deadlines and allows prospective participants to familiarize themselves with both the underlying theory and the practicalities of our methodology.
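To give a flavour of the participant workflow, here is a rough sketch: fetch the queries, upload a ranked run per query, and later collect click feedback. The base URL, endpoint paths, and JSON field names below are hypothetical placeholders only loosely modeled on the style of the real API; consult the official documentation for the actual interface.

```python
# Illustrative sketch of a living-labs-style participant workflow.
# API_BASE and all endpoint paths/field names are hypothetical.
API_BASE = "http://example.org/api/participant"  # placeholder, not the real host

def build_run(qid, ranked_docids, runid="my-baseline"):
    """Assemble the JSON body for uploading one ranked list for a query."""
    return {
        "qid": qid,
        "runid": runid,
        "doclist": [{"docid": docid} for docid in ranked_docids],
    }

# A typical loop would look roughly like this (needs the `requests`
# package, a participant API key, and your own ranker):
#
#   queries = requests.get(f"{API_BASE}/query/{KEY}").json()["queries"]
#   for q in queries:
#       run = build_run(q["qid"], my_ranker(q))
#       requests.put(f"{API_BASE}/run/{KEY}/{q['qid']}", json=run)
```

The key conceptual difference from TREC-style evaluation is that, after uploading, you wait for real user interactions rather than relevance judgments.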
Last but not least, we are thankful to 904Labs for hosting our API infrastructure.