Table generation and retrieval

Tables are powerful and versatile tools for organizing and presenting data. Tables may be viewed as complex information objects, which summarize existing information in a structured form. Therefore, for many information needs, returning tables as search results may be more helpful to users than serving a ranked list of items (documents or entities). We have a line of work, with Shuo Zhang, centered around utilizing (relational) tables as the unit of retrieval (published at WWW’18 and SIGIR’18). I presented our research at this interesting intersection of entity retrieval and data search in my keynote at the DATA:SEARCH’18 workshop at SIGIR’18 (slides are here).

The DBpedia-Entity v2 Test Collection

The DBpedia-Entity collection is a standard test collection for entity search. It is meant for evaluating retrieval systems that return a ranked list of entities in response to a free-text user query. The first version of the collection (DBpedia-Entity v1) was released in 2013, based on DBpedia v3.7. It was created by assembling search queries from a number of entity-oriented benchmarking campaigns (TREC, INEX, SemSearch, etc.) and mapping relevant results to DBpedia. An updated version of the collection, DBpedia-Entity v2, was released in 2017 as the result of a collaborative effort between the IAI group of the University of Stavanger, the Norwegian University of Science and Technology, Wayne State University, and Carnegie Mellon University. It was published at the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR’17), where it received a Best Short Paper Honorable Mention Award.

DBpedia-Entity v2 is based on DBpedia version 2015-10 (specifically on the English subset) and comes with graded relevance assessments collected via crowdsourcing. We also report on the performance of a selection of retrieval methods using this collection.
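
To illustrate how the collection can be used, here is a minimal sketch of scoring a ranking against graded judgments with NDCG@10, assuming the qrels follow the standard TREC format (query ID, iteration, entity ID, grade) and using a simple linear gain; the file name and query ID are hypothetical:

```python
import math
from collections import defaultdict

def load_qrels(path):
    """Parse TREC-style qrels: query_id, iteration, entity_id, grade."""
    qrels = defaultdict(dict)
    with open(path) as f:
        for line in f:
            qid, _, entity, grade = line.split()
            qrels[qid][entity] = int(grade)
    return qrels

def ndcg_at_k(ranking, grades, k=10):
    """NDCG@k with graded judgments; unjudged entities get gain 0."""
    dcg = sum(grades.get(e, 0) / math.log2(i + 2)
              for i, e in enumerate(ranking[:k]))
    ideal = sorted(grades.values(), reverse=True)[:k]
    idcg = sum(g / math.log2(i + 2) for i, g in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

# Hypothetical usage:
# qrels = load_qrels("qrels-v2.txt")
# print(ndcg_at_k(my_ranking, qrels["some-query-id"], k=10))
```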

The collection is available here.

Future research directions in IR

Wondering what your next IR conference paper should be about? This is the billion-dollar question (well, at least for IR researchers) that I surely won’t answer for you. But here is a hint.
(I’ve just come across this on Facebook (thanks to Arjen P. De Vries and Claudia Hauff); this is evidence that, if you cut through all the clutter, Facebook can indeed be a great tool for finding serendipitous information. Maybe this is also something to think about…)
The list contains papers nominated by prominent IR researchers “that, in their opinion, represent important new directions, research areas, or results in the IR field.”
I must say I thoroughly enjoyed reading it. And yes, it does make me feel good to see our ECIR paper from last year with Elena Smirnova on the list :)

EARS released

After a period of development, I am ready to release EARS to the world. EARS is an open source toolkit for entity-oriented search and discovery in large text collections. The association finding framework and models implemented in EARS were originally developed for expertise retrieval in an organizational setting, during my PhD studies. These models are robust and generic, and can be applied to finding associations between topics and entities other than people.

At present, EARS supports two main tasks: finding entities (“Which entities are associated with topic X?”) and profiling entities (“What topics is an entity associated with?”). It implements two baseline search strategies for accomplishing these tasks, which have become popularly known as “Model 1” and “Model 2”.
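
For reference, here is a sketch of how these two strategies are commonly formulated in the expertise retrieval literature (my notation; the EARS implementation may differ in details). Model 1 builds a language model for each entity directly from its associated documents, while Model 2 scores individual documents against the query first and then aggregates over the entity’s documents:

```latex
% Model 1: entity-centric language model
p(q \mid \theta_e) = \prod_{t \in q} p(t \mid \theta_e)^{n(t,q)},
\quad
p(t \mid \theta_e) = (1 - \lambda) \sum_{d} p(t \mid d)\, p(d \mid e) + \lambda\, p(t)

% Model 2: document-centric aggregation
p(q \mid e) = \sum_{d} \Big( \prod_{t \in q} p(t \mid \theta_d)^{n(t,q)} \Big)\, p(d \mid e)
```

Here n(t,q) is the frequency of term t in query q, p(d|e) captures the strength of the association between document d and entity e, and λ is a smoothing parameter. Ranking entities by these scores answers the finding task; fixing the entity and ranking topics instead yields a profile.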

A software system is never finished, and EARS is no exception to that rule. It is, however, an active research project with ongoing development and enhancements; a number of new models and features will be included in upcoming releases. Feedback, comments, and suggestions are always welcome.

The toolkit is available at http://code.google.com/p/ears/.

Dataset of 1 billion web pages

The ambitious goal set out for TREC 2009 was to have a collection of 1 billion web pages: one dataset that could be shared by several tracks (specifically, the Entity, Million Query, Relevance Feedback, and Web tracks).
In November 2008, when this was discussed at the TREC 2008 conference, people were concerned with two main questions: (1) Is it possible to create such a crawl (given the serious time constraints)? (2) Are we going to be able to handle (at least, index) this amount of data?
Jamie Callan was confident that they (the Language Technologies Institute at Carnegie Mellon University) could build this crawl by March 2009. His confidence was not unfounded, since they had managed to create a crawl of a few hundred million web pages earlier. Yet, the counter for the one-billion-document collection had to be started from zero again…
Against this background, let us fast-forward to the present. The crawl has recently been completed, and the dataset, referred to as ClueWeb09, is now available. It is 25 terabytes uncompressed (5 terabytes compressed), which brings me back to the troubling question: are we going to be able to handle that? We (being ILPS) will certainly do our best to step up to the challenge. I shall post about our attempts in detail later on.
But it is a fact that doing retrieval on 1 billion documents is too big a bite for many research groups, as it calls for nontrivial software and hardware architecture (note that it is 40 times more data than the Gov2 corpus, which, with its 25 million documents, I believe was the largest web crawl available to the research community until now). Therefore, a “Category B” subset of the collection is also available, consisting of “only” 50 million English pages. Some of the tracks (the Entity track for sure) will use only the Category B subset in 2009.
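
Whichever subset one starts with, the first practical hurdle is simply streaming the data. Here is a minimal sketch of iterating over the records of one of the collection’s gzipped WARC files without ever holding a whole file in memory (the file name is hypothetical, and a serious setup would use a dedicated WARC library and distribute the work across machines):

```python
import gzip

def iter_warc_records(path):
    """Stream (headers, payload) pairs from a gzipped WARC file."""
    with gzip.open(path, "rb") as f:
        while True:
            line = f.readline()
            if not line:
                break  # end of file
            if not line.startswith(b"WARC/"):
                continue  # skip padding between records
            headers = {}
            while True:
                line = f.readline().rstrip(b"\r\n")
                if not line:
                    break  # blank line ends the header block
                key, _, value = line.partition(b":")
                headers[key.strip().lower()] = value.strip()
            # read exactly the declared payload, then resume scanning
            payload = f.read(int(headers.get(b"content-length", b"0")))
            yield headers, payload

# Hypothetical file name; count the response records in one chunk.
count = sum(1 for h, _ in iter_warc_records("00.warc.gz")
            if h.get(b"warc-type") == b"response")
print(count)
```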