ICTIR 2016 paper online

“Exploiting Entity Linking in Queries for Entity Retrieval,” an upcoming ICTIR 2016 paper by Faegheh Hasibi, Svein Erik Bratsberg, and myself, is available online now, along with the source code.

The premise of entity retrieval is to better answer search queries by returning specific entities instead of documents. Many queries mention particular entities; recognizing and linking them to the corresponding entry in a knowledge base is known as the task of entity linking in queries. In this paper we make a first attempt at bringing together these two, i.e., leveraging entity annotations of queries in the entity retrieval model. We introduce a new probabilistic component and show how it can be applied on top of any term-based entity retrieval model that can be emulated in the Markov Random Field framework, including language models, sequential dependence models, as well as their fielded variations. Using a standard entity retrieval test collection, we show that our extension brings consistent improvements over all baseline methods, including the current state-of-the-art. We further show that our extension is robust against parameter settings.
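As a rough illustration of the general idea (not the paper's actual MRF formulation), the sketch below interpolates a term-based retrieval score with a score derived from the entities linked in the query. The function names and the interpolation weight are assumptions made purely for illustration.

```python
# Illustrative sketch only: combine a term-based entity retrieval score with a
# score based on the entities linked in the query. The paper defines this
# within the Markov Random Field framework; the names and the lambda weight
# below are assumptions, not the paper's formulation.

def combined_score(query_terms, linked_entities, candidate_entity,
                   term_score, entity_score, lambda_e=0.1):
    """Interpolate a term-based score with an entity-annotation-based score.

    term_score(terms, entity)      -> float, e.g. an LM or SDM score
    entity_score(entities, entity) -> float, based on query entity annotations
    """
    s_terms = term_score(query_terms, candidate_entity)
    s_entities = entity_score(linked_entities, candidate_entity)
    return (1 - lambda_e) * s_terms + lambda_e * s_entities
```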

Update (16/09): Our paper received the Best Paper Honorable Mention Award at the conference. So it is definitely worth checking out ;)

ECIR’16 contributions

Last Sunday, Anne Schuth and I gave a tutorial on Living Labs for Online Evaluation. The tutorial’s homepage contains all the slides and reference material.

Experimental evaluation has always been central to Information Retrieval research. The field is increasingly moving towards online evaluation, which involves experimenting with real, unsuspecting users in their natural task environments, a so-called living lab. Specifically, with the recent introduction of the Living Labs for IR Evaluation initiative at CLEF and the OpenSearch track at TREC, researchers can now have direct access to such labs. With these benchmarking platforms in place, we believe that online evaluation will be an exciting area to work on in the future. This half-day tutorial aims to provide a comprehensive overview of the underlying theory and complement it with practical guidance.

Today, Faegheh Hasibi is presenting our work on the reproducibility of the TAGME Entity Linking System. The full paper and resources for this work are available online.

Among the variety of approaches proposed for entity linking, the TAGME system has gained considerable attention and is considered a must-have baseline. In this paper, we examine the repeatability, reproducibility, and generalizability of TAGME, by comparing results obtained from its public API with (re)implementations from scratch. We find that the results reported in the paper cannot be repeated due to unavailability of data sources. Some of the results are reproducible only through the provided API, while the rest are not reproducible. We further show that the TAGME approach is generalizable to the task of entity linking in queries. Finally, we provide insights gained during this process and formulate lessons learned to inform future reproducibility efforts.

ICTIR 2015 paper online

“Entity Linking in Queries: Tasks and Evaluation,” an upcoming ICTIR 2015 paper by Faegheh Hasibi, Svein Erik Bratsberg, and myself, is available online now. The resources developed within this study are also made publicly available.

Annotating queries with entities is one of the core problem areas in query understanding. While seemingly similar, the task of entity linking in queries is different from entity linking in documents and requires a methodological departure due to the inherent ambiguity of queries. We differentiate between two specific tasks, semantic mapping and interpretation finding, discuss current evaluation methodology, and propose refinements. We examine publicly available datasets for these tasks and introduce a new manually curated dataset for interpretation finding. To further deepen the understanding of task differences, we present a set of approaches for effectively addressing these tasks and report on experimental results.
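To make the distinction between the two tasks concrete, here is a small, made-up example of what their outputs might look like; the query and entity names below are illustrative and not taken from the paper or its datasets.

```python
# Hypothetical example of the two task outputs for one query.
query = "new york pizza manhattan"

# Semantic mapping: a ranked list of entities related to the query,
# without requiring them to form a coherent whole.
semantic_mapping = [
    "New York-style pizza",
    "Manhattan",
    "New York City",
]

# Interpretation finding: sets of entities, where each set is one
# non-ambiguous interpretation of the query.
interpretations = [
    {"New York-style pizza", "Manhattan"},
]
```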

Evaluating document filtering systems over time


Performance of three systems over time. Systems A and B degrade, while System C improves over time, but they all have the same average performance over the entire period. We express the change in system performance using the derivative of the fitted line (in orange) and compare performance at what we call the “estimated end-point” (the large orange dots).

Our IPM paper “Evaluating document filtering systems over time” with Tom Kenter and Maarten de Rijke as co-authors is available online. In this paper we propose a framework for measuring the performance of document filtering systems. Such systems, up to now, have been evaluated in terms of traditional metrics like precision, recall, MAP, nDCG, F1 and utility. We argue that these metrics lack support for the temporal dimension of the task. We propose a time-sensitive way of measuring performance by employing trend estimation. In short, the performance is calculated for batches, a trend line is fitted to the results, and the estimated performance of systems at the end of the evaluation period is used to compare systems. To demonstrate the results of our proposed evaluation methodology, we analyze the runs submitted to the Cumulative Citation Recommendation task of the 2012 and 2013 editions of the TREC Knowledge Base Acceleration track, and show that important new insights emerge.
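As a minimal sketch of the idea (the exact metric, batch granularity, and fitting procedure used in the paper may differ), one could compute a per-batch score, fit a least-squares trend line, and compare systems at the estimated end-point of the evaluation period:

```python
import numpy as np

def end_point_estimate(batch_scores):
    """Fit a linear trend to per-batch scores and return the slope and the
    estimated performance at the end of the evaluation period.

    batch_scores: sequence of metric values (e.g. F1) computed per time batch.
    """
    x = np.arange(len(batch_scores))
    slope, intercept = np.polyfit(x, batch_scores, deg=1)  # least-squares line
    end_point = intercept + slope * x[-1]  # fitted value at the last batch
    return slope, end_point

# Hypothetical example: two systems with the same average score but
# opposite trends, mirroring Systems A and C in the figure above.
system_a = [0.6, 0.55, 0.5, 0.45, 0.4]   # degrading over time
system_c = [0.4, 0.45, 0.5, 0.55, 0.6]   # improving over time
print(end_point_estimate(system_a))       # negative slope, lower end-point
print(end_point_estimate(system_c))       # positive slope, higher end-point
```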

Temporal Expertise Profiling

Expertise is not a static concept. Personal interests, as well as the landscape of the respective fields, change over time: knowledge becomes outdated, new topics emerge, and so on.
In recent work, Jan Rybak, Kjetil Nørvåg, and I have been working on capturing, modeling, and characterizing the changes in a person’s expertise over time.

The basic idea that we presented in an ECIR’14 short paper is the following. The expertise of an individual is modelled as a series of profile snapshots. Each profile snapshot is a weighted tree; the hierarchy represents the taxonomy of expertise areas and the weights reflect the person’s knowledge of the corresponding topic. By displaying a series of profile snapshots on a timeline, we get a complete overview of the development of expertise over time. In addition, we identify and characterize important changes that occur in these profiles. See our colorful poster for an illustration.
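For illustration only, a profile snapshot could be represented along the following lines; the class and field names are assumptions made for this sketch and do not mirror the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TopicNode:
    """A node in the taxonomy of expertise areas, weighted by the person's
    knowledge of that topic at a given point in time."""
    topic: str
    weight: float = 0.0
    children: List["TopicNode"] = field(default_factory=list)

@dataclass
class ProfileSnapshot:
    """A person's expertise profile at one point in time: a weighted tree
    over the taxonomy of expertise areas."""
    timestamp: str  # e.g. "2013"
    root: TopicNode

# A temporal expertise profile is then a time-ordered series of snapshots.
TemporalProfile = List[ProfileSnapshot]
```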

In an upcoming SIGIR’14 demo we introduce a web-based system, ExperTime, where we implemented these ideas. While our approach is generic, the system is particular to the computer science domain. Specifically, we use publications from DBLP, classified according to the ACM 1998 Computing Classification System. Jan also created a short video that explains the underlying ideas and introduces the main features of the system:

The next step on our research agenda is the evaluation of temporal expertise profiles. This is a challenging problem for two reasons: (1) the notions of focus and topic changes are subjective and are likely to vary from person to person, and (2) the complexity of the task is beyond the point where TREC-like benchmark evaluations are feasible. The feedback we plan to obtain with the ExperTime system, both implicit and explicit, will provide invaluable information to guide the development of appropriate evaluation methodology.

If you are interested in your temporal expertise profile, you are kindly invited to sign up and claim it. Or, it might already be ready and waiting for you: http://bit.ly/expertime.