The DBpedia-Entity v2 Test Collection

The DBpedia-Entity collection is a standard test collection for entity search. It is meant for evaluating retrieval systems that return a ranked list of entities in response to a free-text user query. The first version of the collection (DBpedia-Entity v1) was released in 2013, based on DBpedia v3.7. It was created by assembling search queries from a number of entity-oriented benchmarking campaigns (TREC, INEX, SemSearch, etc.) and mapping relevant results to DBpedia. An updated version, DBpedia-Entity v2, was released in 2017 as the result of a collaborative effort between the IAI group of the University of Stavanger, the Norwegian University of Science and Technology, Wayne State University, and Carnegie Mellon University. It was published at the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR’17), where it received a Best Short Paper Honorable Mention Award.

DBpedia-Entity v2 is based on DBpedia version 2015-10 (specifically on the English subset) and comes with graded relevance assessments collected via crowdsourcing. We also report on the performance of a selection of retrieval methods using this collection.
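To give a concrete feel for how the assessments can be consumed, below is a minimal sketch that loads graded judgments, assuming the standard four-column TREC qrels format (query ID, iteration, entity ID, grade); the file name is hypothetical.

```python
from collections import defaultdict

def load_qrels(path):
    """Parse TREC-style qrels lines: <query_id> <iter> <entity_id> <grade>."""
    qrels = defaultdict(dict)
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 4:
                continue  # skip malformed lines
            query_id, _, entity_id, grade = parts
            qrels[query_id][entity_id] = int(grade)
    return qrels

# "qrels-v2.txt" is a hypothetical file name, not the official one.
qrels = load_qrels("qrels-v2.txt")
relevant = sum(sum(1 for g in js.values() if g > 0) for js in qrels.values())
print(f"{len(qrels)} queries, {relevant} judged-relevant entities")
```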

The collection is available here.

ECIR’16 contributions

Last Sunday, Anne Schuth and I gave a tutorial on Living Labs for Online Evaluation. The tutorial’s homepage contains all the slides and reference material.

Experimental evaluation has always been central to Information Retrieval research. The field is increasingly moving towards online evaluation, which involves experimenting with real, unsuspecting users in their natural task environments, a so-called living lab. Specifically, with the recent introduction of the Living Labs for IR Evaluation initiative at CLEF and the OpenSearch track at TREC, researchers can now have direct access to such labs. With these benchmarking platforms in place, we believe that online evaluation will be an exciting area to work on in the future. This half-day tutorial aims to provide a comprehensive overview of the underlying theory and complement it with practical guidance.

Today, Faegheh Hasibi is presenting our work on the reproducibility of the TAGME entity linking system. The full paper and resources for this work are available online.

Among the variety of approaches proposed for entity linking, the TAGME system has gained considerable attention and is considered a must-have baseline. In this paper, we examine the repeatability, reproducibility, and generalizability of TAGME by comparing results obtained from its public API with (re)implementations from scratch. We find that the results reported in the paper cannot be repeated due to the unavailability of data sources. Some of the results are reproducible only through the provided API, while the rest are not reproducible at all. We further show that the TAGME approach is generalizable to the task of entity linking in queries. Finally, we provide insights gained during this process and formulate lessons learned to inform future reproducibility efforts.

ICTIR 2015 paper online

“Entity Linking in Queries: Tasks and Evaluation,” an upcoming ICTIR 2015 paper by Faegheh Hasibi, Svein Erik Bratsberg, and myself, is now available online. The resources developed within this study are also made publicly available.

Annotating queries with entities is one of the core problem areas in query understanding. While seemingly similar, the task of entity linking in queries differs from entity linking in documents and requires a methodological departure due to the inherent ambiguity of queries. We differentiate between two specific tasks, semantic mapping and interpretation finding, discuss current evaluation methodology, and propose refinements. We examine publicly available datasets for these tasks and introduce a new manually curated dataset for interpretation finding. To further deepen the understanding of task differences, we present a set of approaches for effectively addressing these tasks and report on experimental results.
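To illustrate the distinction between the two tasks in a nutshell, here is a small sketch of the two output structures; the example query and entity annotations are hypothetical, not taken from the paper’s datasets.

```python
# Illustrative only: a made-up query and made-up entity annotations.
query = "new york pizza manhattan"

# Semantic mapping: a single ranked list of entities related to the
# query as a whole, regardless of how the entities combine.
semantic_mapping = [
    ("<dbpedia:New_York_City>", 0.95),
    ("<dbpedia:Manhattan>", 0.90),
    ("<dbpedia:Pizza>", 0.85),
]

# Interpretation finding: sets of entities linked to non-overlapping
# query spans, each set forming one complete reading of the query.
interpretations = [
    {"new york": "<dbpedia:New_York_City>", "manhattan": "<dbpedia:Manhattan>"},
    {"new york pizza": "<dbpedia:New_York-style_pizza>",
     "manhattan": "<dbpedia:Manhattan>"},
]
```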

A Test Collection for Entity Search in DBpedia

With this SIGIR ’13 short paper, we try to address some of the action points that were identified as important priorities for entity-oriented and semantic search at the JIWES workshop held at SIGIR ’12 (see the detailed workshop report). Namely: (A1) getting more representative information needs and favoring long queries over short ones; (A2) limiting search to a smaller, fixed set of entity types (as opposed to arbitrary types of entities); (A3) using test collections that integrate both structured and unstructured information about entities.

An IR test collection has three main ingredients: a data collection, a set of queries, and corresponding relevance judgments. We propose to use DBpedia as the data collection; DBpedia is a community effort to extract structured information from Wikipedia. It is one of the most comprehensive knowledge bases on the web, describing 3.64M entities (in version 3.7). We took entity-oriented queries from a number of benchmarking evaluation campaigns, synthesized them into a single query set, and mapped known relevant answers to DBpedia. This mapping involved a series of not-too-exciting yet necessary data cleansing steps, such as normalizing URIs, replacing redirects, removing duplicates, and filtering out non-entity results. In the end, we have 485 queries with an average of 27 relevant entities per query.
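As a rough illustration of what that cleansing pipeline looks like, here is a minimal sketch; the helper inputs (a redirects map and the set of valid entity URIs) are assumptions for illustration, not the actual scripts used.

```python
def clean_results(results, redirects, entity_uris):
    """Map raw relevant answers to canonical DBpedia URIs.

    `redirects` maps redirect URIs to their targets; `entity_uris` is
    the set of valid DBpedia entity URIs. Both are hypothetical inputs.
    """
    cleaned = []
    seen = set()
    for uri in results:
        uri = uri.strip().replace(" ", "_")  # normalize URIs
        uri = redirects.get(uri, uri)        # replace redirects with targets
        if uri in seen:                      # remove duplicates
            continue
        if uri not in entity_uris:           # filter out non-entity results
            continue
        seen.add(uri)
        cleaned.append(uri)
    return cleaned
```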

Now, let’s see how this relates to the action points outlined above. (A1) We consider a broad range of information needs, ranging from short keyword queries to natural language questions. The average query length, computed over the whole query set, is 5.3 terms, more than double the length of typical web search queries (around 2.4 terms). (A2) DBpedia has a consistent ontology comprising 320 classes, organized into a hierarchy six levels deep; this allows for the incorporation of type information at different granularities. (A3) As DBpedia is extracted from Wikipedia, ample textual content is available for those who wish to combine structured and unstructured information about entities.

The paper also includes a set of baseline results using variants of two popular retrieval models: language models and BM25. We found that the various query subsets (originating from different benchmarking campaigns) exhibit different levels of difficulty; this was expected. What was rather surprising, however, is that none of the more advanced multi-field variants could really improve over the simplest possible single-field approach. A large number of topics were affected, but the numbers of topics helped and hurt were about the same. The breakdowns by query subset also suggest that there is no one-size-fits-all way to effectively address all types of information needs represented in this collection. This phenomenon could give rise to novel approaches in the future; for example, one could first identify the type of the query and then choose the retrieval model accordingly.
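For readers who want to see what single-field vs. multi-field means in scoring terms, here is a minimal sketch of a Dirichlet-smoothed language model and a Mixture of Language Models (MLM) multi-field variant; the field names, weights, and smoothing parameter are illustrative assumptions, not the exact configuration used in the paper.

```python
import math

def dirichlet_prob(tf, length, p_coll, mu=2000):
    """Dirichlet-smoothed P(t|d): (tf + mu * P(t|C)) / (|d| + mu)."""
    return (tf + mu * p_coll) / (length + mu)

def lm_score(query_terms, prob_fn):
    """Query log-likelihood under a document language model."""
    return sum(math.log(prob_fn(t)) for t in query_terms)

# Single-field: one catch-all field holds all text about the entity.
# Multi-field (MLM): a weighted mixture over per-field models, e.g.
# name, attribute, and related-entity fields (weights are hypothetical).
def mlm_prob(t, field_prob_fns, field_weights):
    """P(t|d) = sum over fields f of w_f * P(t|d_f), with weights summing to 1."""
    return sum(w * field_prob_fns[f](t) for f, w in field_weights.items())
```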

The resources developed as part of this study are made available here. You are also welcome to check out the poster I presented at SIGIR ’13.
If you have (or are planning to have) a paper that uses this collection, I would be happy to hear about it!

First picks from 2013

It’s almost mid-February, so I won’t even attempt to make this a Happy New Year entry. And I’ll keep it short.

As of Jan 1 this year, I’m working as an Associate Professor at the University of Stavanger. Don’t look for the IR group’s homepage; there is no such thing. Yet ;)

Briefly about (some of) my recent work. Not surprisingly, it’s all related to entities. In a SPIRE’12 paper we study ad-hoc entity retrieval in Linked Data in a distributed setting, with a focus on the problems of collection ranking and collection selection. In a short position paper, written for the ESAIR’12 workshop, we discuss how to make entity retrieval temporally aware, using semantic knowledge bases that are enriched with temporal information (like YAGO2). In a CIKM’12 poster we introduce the task of target type identification for entity-oriented queries, where types are organized hierarchically. We also made all related resources publicly available.
Most recently, just earlier this week, I gave a lecture on Semistructured Data Search at the PROMISE Winter School. At some point in the not-too-distant future there might be a written version of this material. So if you have any feedback, comments, suggestions, etc. please don’t hesitate to contact me.

Finally, I decided to set up and maintain a separate page with a list of entity-oriented benchmarking campaigns, workshops, and journal special issues. I hope people will find it useful. If you have a relevant piece to be added here, let me know.