ICTIR 2016 paper online

“Exploiting Entity Linking in Queries for Entity Retrieval,” an upcoming ICTIR 2016 paper by Faegheh Hasibi, Svein Erik Bratsberg, and me, is now available online, along with the source code.

The premise of entity retrieval is to better answer search queries by returning specific entities instead of documents. Many queries mention particular entities; recognizing and linking them to the corresponding entry in a knowledge base is known as the task of entity linking in queries. In this paper we make a first attempt at bringing these two together, i.e., leveraging entity annotations of queries in the entity retrieval model. We introduce a new probabilistic component and show how it can be applied on top of any term-based entity retrieval model that can be emulated in the Markov Random Field framework, including language models, sequential dependence models, as well as their fielded variations. Using a standard entity retrieval test collection, we show that our extension brings consistent improvements over all baseline methods, including the current state-of-the-art. We further show that our extension is robust against parameter settings.
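The gist of the approach — interpolating a term-based retrieval score with a component driven by the query's entity annotations — can be sketched very loosely as follows. This is a minimal illustrative sketch, not the paper's actual model: the function names, the add-one smoothing, and the interpolation weight are all assumptions made for the example.

```python
import math

def term_score(query_terms, entity_term_counts):
    """Toy add-one-smoothed log-likelihood of the query terms under the
    entity's term distribution (a stand-in for any term-based model)."""
    total = sum(entity_term_counts.values())
    vocab = max(len(entity_term_counts), 1)
    return sum(
        math.log((entity_term_counts.get(t, 0) + 1) / (total + vocab))
        for t in query_terms
    )

def annotation_score(linked_entities, candidate):
    """Toy entity-annotation component: fraction of entities linked in
    the query that match the candidate entity."""
    if not linked_entities:
        return 0.0
    return sum(1.0 for e in linked_entities if e == candidate) / len(linked_entities)

def combined_score(query_terms, linked_entities, candidate,
                   entity_term_counts, lam=0.8):
    """Linear interpolation of the two components; lam is an assumed
    free parameter, not a value from the paper."""
    return (lam * term_score(query_terms, entity_term_counts)
            + (1 - lam) * annotation_score(linked_entities, candidate))
```

With identical term statistics, a candidate entity that matches an entity linked in the query receives a higher combined score than one that does not — which is the intuition the probabilistic component formalizes.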

Update (16/09): Our paper received the Best Paper Honorable Mention Award at the conference. So it is definitely worth checking out ;)

ESAIR’16 CfP

The continuing goal of the Exploiting Semantic Annotations in Information Retrieval (ESAIR) workshop series is to create a forum for researchers interested in the application of semantic annotations for information access tasks. ESAIR’16 sets its focus on personal mobile applications and will be held in conjunction with CIKM’16 in Indianapolis, USA, in October.

Important dates:

  • Position paper submission (2+1 pages): August 1, 2016
  • Demo submission (4+ pages): August 8, 2016
  • Acceptance notification: August 22, 2016
  • Camera-ready version: September 1, 2016

ECIR’16 contributions

Last Sunday, Anne Schuth and I gave a tutorial on Living Labs for Online Evaluation. The tutorial’s homepage contains all the slides and reference material.

Experimental evaluation has always been central to Information Retrieval research. The field is increasingly moving towards online evaluation, which involves experimenting with real, unsuspecting users in their natural task environments, a so-called living lab. Specifically, with the recent introduction of the Living Labs for IR Evaluation initiative at CLEF and the OpenSearch track at TREC, researchers can now have direct access to such labs. With these benchmarking platforms in place, we believe that online evaluation will be an exciting area to work on in the future. This half-day tutorial aims to provide a comprehensive overview of the underlying theory and complement it with practical guidance.

Today, Faegheh Hasibi is presenting our work on the reproducibility of the TAGME Entity Linking System. The full paper and resources for this work are available online.

Among the variety of approaches proposed for entity linking, the TAGME system has gained due attention and is considered a must-have baseline. In this paper, we examine the repeatability, reproducibility, and generalizability of TAGME, by comparing results obtained from its public API with (re)implementations from scratch. We find that the results reported in the paper cannot be repeated due to the unavailability of data sources. Part of the results are reproducible only through the provided API, while the rest are not reproducible. We further show that the TAGME approach is generalizable to the task of entity linking in queries. Finally, we provide insights gained during this process and formulate lessons learned to inform future reproducibility efforts.

PhD vacancy

I am looking for a PhD student to work on understanding complex information needs.

Web search engines have become remarkably effective in providing appropriate answers to queries that are issued frequently. However, when it comes to complex information needs, often formulated as natural language questions, responses become much less satisfactory (e.g., “Which European universities have active Nobel laureates?”). The goal of this project is to investigate how to improve query understanding and answer retrieval for complex information needs, using massive volumes of unstructured data in combination with knowledge bases. Query understanding entails, among others, determining the type (format) of the answer (single fact, list, answer passage, list of documents, etc.) and identifying the series of processing steps (retrieval, filtering, sorting, aggregation, etc.) required to obtain that answer. If the question is ambiguous or not understood, the system should ask for clarification in an interactive way. This could be done in a conversational manner, similar to how it is done in commercial personal digital assistants, such as Siri, Cortana, or Google Now.

The successful applicant would join a team of two other PhD students working on the FAETE project.

Details and application instructions can be found here.
Application deadline: April 17, 2016.

Important note: there are multiple projects advertised within the call. You need to indicate that you are applying for this specific project. Feel free to contact me directly for more information.