It’s almost mid-February, so I won’t even attempt to make this a Happy New Year entry. And I’ll keep it short.
As of Jan 1 this year, I’m working as an Associate Professor at the University of Stavanger. Don’t look for the IR group’s homepage; there is no such thing. Yet.
Briefly about (some of) my recent work. Not surprisingly, it’s all related to entities. In a SPIRE’12 paper we study ad-hoc entity retrieval in Linked Data in a distributed setting, with a focus on the problems of collection ranking and collection selection. In a short position paper, written for the ESAIR’12 workshop, we discuss how to make entity retrieval temporally aware, using semantic knowledge bases that are enriched with temporal information (like YAGO2). In a CIKM’12 poster we introduce the task of target type identification for entity-oriented queries, where types are organized hierarchically. We also made all related resources publicly available.
Most recently, just earlier this week, I gave a lecture on Semistructured Data Search at the PROMISE Winter School. At some point in the not-too-distant future there might be a written version of this material. So if you have any feedback, comments, suggestions, etc. please don’t hesitate to contact me.
Finally, I decided to set up and maintain a separate page with a list of entity-oriented benchmarking campaigns, workshops, and journal special issues. I hope people will find it useful. If you have a relevant piece to be added here, let me know.
Together with Yi Fang (Purdue University, USA), Maarten de Rijke (University of Amsterdam, The Netherlands), Pavel Serdyukov (Yandex, Russia), and Luo Si (Purdue University, USA), I wrote a survey paper on Expertise Retrieval for the Foundations and Trends in Information Retrieval (FnTIR) journal, which is now available online. (If your organization doesn’t have a subscription, you can get a free copy from my homepage.)
The study offers a comprehensive overview of expertise retrieval, primarily from an IR perspective, but many other aspects of this multi-faceted research area are also covered. Our main attention is on models and algorithms, which are organized in five groups of basic approaches. We discuss extensions of these models as well as practical considerations. At the end of the survey, we identify a number of possible future directions; these could be of particular interest to those currently working in this area.
Earlier today, at the CLEF 2011 conference, I presented joint work with Leif Azzopardi, entitled “Towards a Living Lab for Information Retrieval Research and Development. A Proposal for a Living Lab for Product Search Tasks.” The abstract follows:
The notion of having a “living lab” to undertake evaluations has been proposed by a number of proponents within the field of Information Retrieval (IR). However, what such a living lab might look like and how it might be set up has not been discussed in detail. Living labs have a number of appealing points, such as realistic evaluation contexts where tasks are directly linked to user experience, and the closer integration of research/academia and development/industry, facilitating more efficient knowledge transfer. However, operationalizing a living lab opens up a number of concerns regarding security, privacy, etc., as well as challenges regarding the design, development, and maintenance of the infrastructure required to support such evaluations. Here, we aim to further the discussion on living labs for IR evaluation and propose one possible architecture to create such an evaluation environment. To focus discussion, we put forward a proposal for a living lab on product search tasks within the context of an online shop.
We are keen to get feedback from the community to see if we should continue to develop this initiative further. If you’re at CLEF this week, come talk to me.