I’ve got one full paper and two posters accepted at this year’s SIGIR conference.
The paper, titled A Few Examples Go A Long Way: Constructing Query Models from Elaborate Query Formulations (co-authored with Wouter Weerkamp and Maarten de Rijke), addresses the document search task set out at TREC 2007. Our scenario is one where the topic description consists of a short query (a few keywords) together with examples of key reference pages. Our main research goal is to investigate ways of utilizing these example documents provided by the user. In particular, we use these sample documents for query expansion, sampling terms from them both independently of and dependent on the original query. We find that the query-independent expansion method helps to address the “aspect recall” problem, by identifying relevant documents that are not found by the other query models we consider.
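To give a flavor of the query-independent variant, here is a minimal sketch (my simplification, not the exact estimation from the paper): it pools term counts from the example documents into an expansion model and interpolates it with the original query model. The helper names, the number of expansion terms, and the interpolation weight lam are assumptions for illustration only.

```python
from collections import Counter

def query_independent_expansion(example_docs, num_terms=10):
    """Sketch: build an expansion model by pooling term counts from the
    example documents and normalizing (query-independent sampling)."""
    counts = Counter()
    for doc in example_docs:          # doc: list of tokens
        counts.update(doc)
    total = sum(counts.values())
    probs = {t: c / total for t, c in counts.items()}
    # keep only the highest-probability expansion terms, then renormalize
    top = dict(sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:num_terms])
    z = sum(top.values())
    return {t: p / z for t, p in top.items()}

def expanded_query_model(query_terms, expansion_model, lam=0.5):
    """Interpolate a uniform model over the original query terms with the
    expansion model (lam is an assumed mixing weight)."""
    original = {t: 1 / len(query_terms) for t in query_terms}
    terms = set(original) | set(expansion_model)
    return {t: lam * original.get(t, 0.0) + (1 - lam) * expansion_model.get(t, 0.0)
            for t in terms}
```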
In the poster paper titled Parsimonious Relevance Models (co-authored with Edgar Meij, Wouter Weerkamp, and Maarten de Rijke), we describe a method for applying parsimonious language models to re-estimate the term probabilities assigned by relevance models. The results of our experimental evaluation (performed on six TREC collections) indicate that parsimonious relevance models significantly outperform their non-parsimonized counterparts on most measures.
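For readers unfamiliar with parsimonization, the sketch below shows the standard EM re-estimation in the style of parsimonious language models, applied to a relevance model: probability mass is shifted toward terms that the background (collection) model explains poorly. The mixing weight lam, the number of iterations, and the pruning threshold are assumptions, not the settings used in the poster.

```python
def parsimonize(relevance_model, background_model, lam=0.1, iterations=50, threshold=1e-4):
    """Sketch: EM re-estimation of a relevance model against a background model.

    relevance_model, background_model: dicts mapping term -> probability.
    lam: assumed weight of the parsimonized model in the mixture.
    """
    p = dict(relevance_model)  # initialize with the relevance model estimates
    for _ in range(iterations):
        # E-step: expected mass of each term under the parsimonized component
        e = {}
        for t, rm_p in relevance_model.items():
            num = lam * p.get(t, 0.0)
            denom = num + (1 - lam) * background_model.get(t, 1e-12)
            e[t] = rm_p * (num / denom) if denom > 0 else 0.0
        total = sum(e.values())
        if total == 0:
            break
        # M-step: renormalize and prune terms whose mass falls below the threshold
        filtered = {t: v / total for t, v in e.items() if v / total >= threshold}
        norm = sum(filtered.values())
        if norm == 0:
            break
        p = {t: v / norm for t, v in filtered.items()}
    return p
```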
Finally, the poster titled Bloggers as Experts (co-authored with Wouter Weerkamp and Maarten de Rijke) views the blog distillation task (finding blogs that are principally devoted to a given topic) as an association finding task between topics and bloggers. Under this view, it closely resembles the expert finding task, for which a range of models has been proposed. We adopt two expert finding models (Model 1 and Model 2 from our SIGIR 2006 paper) and examine their effectiveness as blog distillation strategies. We find that out-of-the-box expert finding methods can achieve competitive scores on the blog distillation task. However, unlike in expert finding, where Model 2 consistently performed better, Model 1 is the preferred strategy for blog distillation.
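To make the contrast between the two models concrete, here is a toy sketch of both scoring strategies as they transfer to blogs (the blog plays the role of the candidate expert, its posts play the role of documents). The uniform post-blog association weights and the smoothing parameter mu are assumptions; this is not the exact formulation from the poster.

```python
import math

def model1_score(query_terms, posts, collection_model, mu=0.5):
    """Model 1 (blog-centric): aggregate the posts into a single blog
    language model, then score the query against that model."""
    blog_model = {}
    weight = 1 / len(posts)                      # uniform post-blog association (assumption)
    for post in posts:                           # post: dict term -> P(term | post)
        for t, prob in post.items():
            blog_model[t] = blog_model.get(t, 0.0) + weight * prob
    score = 0.0
    for t in query_terms:
        p = (1 - mu) * blog_model.get(t, 0.0) + mu * collection_model.get(t, 1e-9)
        score += math.log(p)
    return score

def model2_score(query_terms, posts, collection_model, mu=0.5):
    """Model 2 (post-centric): score the query against each post separately,
    then aggregate the per-post scores for the blog."""
    total = 0.0
    weight = 1 / len(posts)
    for post in posts:
        p_q_d = 1.0
        for t in query_terms:
            p = (1 - mu) * post.get(t, 0.0) + mu * collection_model.get(t, 1e-9)
            p_q_d *= p
        total += weight * p_q_d
    return total
```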