PhD position in Semantic Entity Search

The University of Stavanger invites applications for a three-year doctoral scholarship in Information Technology at the Faculty of Science and Technology, Department of Electrical Engineering and Computer Science, beginning September 1, 2014.

Project: Semantic Entity Search
Semantic search refers to the idea that the search engine understands the concepts, meaning and intent behind the query that the user enters into the search box, and provides rich and focused responses (as opposed to merely a list of documents). Entities, such as people, organizations or products, play a central role in this context; they reflect the way humans think and organize information. We can observe that major search engines and assistants (like Google or Apple’s Siri) are becoming “smarter” day by day at recognizing specific types of objects (for example, locations, events or celebrities); yet, true semantic search still has a long way to go.
This project aims to develop a theoretically sound and computationally efficient framework for entity-oriented information access: the search and discovery of entities and relationships between entities. A key element of a successful approach is the combination of massive volumes of structured and unstructured information from the Document Web and the Data Web, respectively. Successful candidates will be expected to conduct research and to design, develop, and deploy state-of-the-art, scalable information retrieval, information extraction and machine learning techniques for innovative entity-oriented search applications. The project will include both theoretical and empirical explorations, where lab-based results will be evaluated in ‘live’ environments with real users.

Qualifications: M.Sc. in Computer Science, Computational Linguistics, Mathematics or related fields by the appointment date. Good written and spoken command of English. Research experience or a track record of project-based work, demonstrable interest in the domain, solid programming skills (particularly Java), and experience in manipulating and analyzing large data sets (esp. using Hadoop) are a clear plus.

The research fellow is salaried according to the State Salary Code, l.pl 17.515, code 1017, LR 20, ltr 50, at NOK 421,100 per annum.

Details and application instructions can be found here.
Application deadline: January 11, 2014.

A Test Collection for Entity Search in DBpedia

With this SIGIR ’13 short paper, we try to address some of the action points that were identified as important priorities for entity-oriented and semantic search at the JIWES workshop held at SIGIR ’12 (see the detailed workshop report). Namely: (A1) Getting more representative information needs and favoring long queries over short ones. (A2) Limiting search to a smaller, fixed set of entity types (as opposed to arbitrary types of entities). (A3) Using test collections that integrate both structured and unstructured information about entities.

An IR test collection has three main ingredients: a data collection, a set of queries, and corresponding relevance judgments. We propose to use DBpedia as the data collection; DBpedia is a community effort to extract structured information from Wikipedia. It is one of the most comprehensive knowledge bases on the web, describing 3.64M entities (in version 3.7). We took entity-oriented queries from a number of benchmarking evaluation campaigns, synthesized them into a single query set, and mapped known relevant answers to DBpedia. This mapping involved a series of not-too-exciting yet necessary data cleansing steps, such as normalizing URIs, replacing redirects, removing duplicates, and filtering out non-entity results. In the end, we have 485 queries with an average of 27 relevant entities per query.
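To make the mapping step more concrete, below is a minimal sketch of the kind of cleansing pipeline involved; the function names, the redirect dictionary, and the entity URI set are illustrative placeholders, not the actual scripts used to build the collection.

```python
# Illustrative sketch of mapping raw relevant-answer URIs to canonical
# DBpedia entities. The helper names and inputs are hypothetical.
from urllib.parse import unquote


def normalize_uri(uri: str) -> str:
    """Decode percent-encoding and strip surrounding whitespace and angle brackets."""
    return unquote(uri.strip().strip("<>"))


def map_answers(answers, redirects, entity_uris):
    """Map a list of raw answer URIs to canonical DBpedia entities.

    answers     -- raw relevant-answer URIs from the original benchmarks
    redirects   -- dict mapping redirect URIs to their canonical targets
    entity_uris -- set of URIs corresponding to actual DBpedia entities
    """
    mapped, seen = [], set()
    for uri in answers:
        uri = normalize_uri(uri)
        uri = redirects.get(uri, uri)      # replace redirects with their targets
        if uri in seen:                    # remove duplicates
            continue
        if uri not in entity_uris:         # filter out non-entity results
            continue
        seen.add(uri)
        mapped.append(uri)
    return mapped
```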

Now, let’s see how this relates to the action points outlined above. (A1) We consider a broad range of information needs, ranging from short keyword queries to natural language questions. The average query length, computed over the whole query set, is 5.3 terms—more than double the length of typical web search queries (which is around 2.4 terms). (A2) DBpedia has a consistent ontology comprising 320 classes, organized into a hierarchy that is six levels deep; this allows for the incorporation of type information at different granularities. (A3) As DBpedia is extracted from Wikipedia, there is more textual content available for those who wish to combine structured and unstructured information about entities.
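For illustration, here is a small sketch of how type information at different granularities could be read off such a hierarchy; the parent map below contains just a few example classes and is not the actual DBpedia ontology.

```python
# Toy subclass map standing in for the DBpedia ontology hierarchy.
SUBCLASS_OF = {
    "SoccerPlayer": "Athlete",
    "Athlete": "Person",
    "Person": "Agent",
    "Agent": "Thing",
}


def type_ancestors(dbo_class):
    """Return the class and all its ancestors, most specific first."""
    chain = [dbo_class]
    while chain[-1] in SUBCLASS_OF:
        chain.append(SUBCLASS_OF[chain[-1]])
    return chain


print(type_ancestors("SoccerPlayer"))
# ['SoccerPlayer', 'Athlete', 'Person', 'Agent', 'Thing']
```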

The paper also includes a set of baseline results using variants of two popular retrieval models: language models and BM25. We found that the various query subsets (originating from different benchmarking campaigns) exhibit different levels of difficulty—this was expected. What was rather surprising, however, is that none of the more advanced multi-field variants could really improve over the simplest possible single-field approach. We observed that a large number of topics were affected, but the numbers of topics helped and hurt were about the same. The breakdowns by query subset also suggest that there is no one-size-fits-all way to effectively address all types of information needs represented in this collection. This phenomenon could give rise to novel approaches in the future; for example, one could first identify the type of the query and then choose the retrieval model accordingly.
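To illustrate the distinction between the single-field and multi-field settings, here is a rough sketch of query-likelihood scoring with Dirichlet smoothing over a single catch-all field versus a mixture of per-field language models; the smoothing parameter, field names, and weights are illustrative assumptions, not the exact configuration used in the paper.

```python
import math


def lm_score(query_terms, doc_tf, doc_len, coll_prob, mu=2000):
    """Query likelihood with Dirichlet smoothing over a single catch-all field."""
    score = 0.0
    for t in query_terms:
        p = (doc_tf.get(t, 0) + mu * coll_prob.get(t, 1e-9)) / (doc_len + mu)
        score += math.log(p)
    return score


def mlm_score(query_terms, fields, field_weights, mu=2000):
    """Mixture of per-field language models.

    fields        -- dict: field name -> (term frequencies, field length, collection probs)
    field_weights -- dict: field name -> interpolation weight (summing to 1)
    """
    score = 0.0
    for t in query_terms:
        p_t = 0.0
        for f, (tf, flen, cprob) in fields.items():
            p_f = (tf.get(t, 0) + mu * cprob.get(t, 1e-9)) / (flen + mu)
            p_t += field_weights[f] * p_f
        score += math.log(max(p_t, 1e-12))
    return score
```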

The resources developed as part of this study are made available here. You are also welcome to check out the poster I presented at SIGIR ’13.
If you have (or are planning to have) a paper that uses this collection, I would be happy to hear about it!

Call for Demos | Living Labs for IR workshop

The Living Labs for Information Retrieval Evaluation (LL’13) workshop at CIKM’13 invites researchers and practitioners to present their innovative prototypes or practical developments in a dedicated demo track. Demo submissions must be based on an implemented system that pursues one or more aspects relevant to the interest areas of the workshop.

Authors are strongly encouraged to target scenarios that are rooted in real-world applications. One way to think about this is by considering the following: as a company operating a website/service/application, what methods could allow various academic groups to experiment with specific components of this website/service/application?
In particular, we seek prototypes that define specific component(s) in the context of some website/service/application, and allow for the testing and evaluation of alternative methods for that component. One example is search within a specific vertical (such as a product or travel search engine), but we encourage authors to think outside the (search) box.

All accepted demos will be evaluated and considered for the Best Demo Award.
The Best Demo Award winner will receive an award of 750 EUR, offered by the ‘Evaluating Information Access Systems’ (ELIAS) ESF Research Networking Programme. The award can be used to cover travel, accommodation or other expenses in relation to attending and/or demo’ing at LL’13.

The submission deadline for demos and for all other contributions is July 22 (extended).

Further details can be found on the workshop website.

Entity Linking and Retrieval tutorial at WWW’13

Earlier this week, Edgar Meij, Daan Odijk, and I gave a half-day tutorial at the WWW’13 conference on Entity Linking and Retrieval.

The tutorial consists of three parts: (i) entity linking (Edgar), (ii) entity retrieval (me), and (iii) a hands-on lab session (Daan). The hands-on session is further subdivided into entity linking and entity retrieval parts. The slides are made available on GitHub. We also created a Mendeley group with all the papers that were discussed. The tags, entity linking and entity retrieval, indicate which part of the tutorial each paper belongs to. We intend to maintain and expand this repository, so it might be useful for you to follow this group.

Given that this was a half-day tutorial, we had to be quite selective in what we presented. A full-day version of the same tutorial will be given by us at SIGIR’13 in July. If you have suggestions for improvements or pointers to papers, approaches, services, etc. that we could/should cover (yes, this includes your own work), then don’t hesitate to get in touch with us!

Living Labs for IR workshop @CIKM

Together with Liadh Kelly, David Elsweiler, Evangelos Kanoulas, and Mark Smucker, I’m co-organising a workshop on Living Labs for IR Evaluation at CIKM this year.

The basic idea of living labs for IR is that, rather than individual research groups independently developing experimental search infrastructures and gathering their own groups of test searchers for IR evaluations, a central, shared experimental environment is developed to facilitate the sharing of resources.

Living labs would offer huge benefits to the community, such as: access to potentially larger cohorts of real users and their behaviours (e.g., querying behaviour) for experimental purposes; cross-comparability across research centres; and greater knowledge transfer between industry and academia when industry partners are involved. The need for this methodology is further amplified by the increased reliance of IR approaches on proprietary data; living labs are a way to bridge the data divide between academia and industry.

There are many challenges to be overcome before the benefits associated with living labs for IR can be realised, including challenges associated with living labs architecture and design, hosting, maintenance, security, privacy, participant recruitment, and the development of scenarios and tasks.

This workshop aims to bring together for the first time people interested in progressing the living labs for IR evaluation methodology. It will provide an interactive forum for researchers to share ideas and initiate collaborations, with the explicit goal of determining means for progressing towards living labs for IR and formulating practical next steps.

See the Call-for-Papers for more details.

As part of the workshop, we are considering organising a challenge in the e-commerce domain with the involvement of a medium-sized online retailer. The goal of this challenge would be to (i) allow academics to work with real users and data (especially those who would otherwise have no access to such data) and (ii) provide a starting point for the discussions at the workshop.

We will set up and run this challenge if there is sufficient interest in the community. We have set up a poll to collect some initial feedback — please let us know what you think!