SIGIR’21 workshop on Simulation for IR Evaluation

I’m co-organizing a workshop on Simulation for IR Evaluation at SIGIR this year. Below is an extract from the CfP:

Simulation techniques are not foreign to information retrieval. Simulation has been employed, for example, for constructing test collections and for model performance prediction and analysis in a broad array of information access scenarios. The need for simulation has become ever more apparent recently with the emergence of areas where other types of evaluation are infeasible. One such area is conversational information access, where human evaluation is both time and resource intensive at scale. Another example is provided by settings that do not allow sharing of data, e.g., because of privacy constraints, and therefore necessitate the creation of synthetic test collections.

Despite the apparent need, a standardized methodology for performance evaluation via simulation has not yet been developed. The goal of the Sim4IR workshop is to create a forum for researchers and practitioners to promote methodology development and more widespread use of simulation for evaluation by: (i) identifying problem settings and application scenarios; (ii) sharing tools, techniques, and experiences; (iii) characterizing potentials and limitations; and (iv) developing a research agenda.

Submission deadlines: May 4 (regular/short/demo papers) and May 18 (encore talks).
Visit sim4ir.org for more details.

PhD position in Language Modeling for Explainable AI

We have a PhD position funded by SFI NorwAI, the new 8-year Norwegian Research Center for AI Innovation (a centre for research-based innovation).

AI-powered systems exhibit a growing degree of personalization when recommending content. Users, however, have little knowledge of how these systems work and what personal data they use. There is a need for transparency in terms of (1) what personal data is collected and how it is used to infer user preferences, and (2) how the generated recommendations are explained and justified. The most human-like form of providing explanations is via natural language. This would allow users to better understand how their preferences are interpreted by the system, and also to correct the system if necessary.

The main goal of the project is to develop novel models that enable semantically rich and context-dependent text-based explainability of user preferences and system recommendations, using either existing metadata or automatically extracted annotations. A template-based approach is a natural starting point for generating explanations; later, these can be made more human-like with natural language generation techniques built on large-scale (pre-trained) language models.
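
To make the template-based starting point concrete, here is a minimal sketch of what such explanation generation could look like; the templates, slot names, and function are purely illustrative assumptions on my part, not part of the project description:

```python
# Illustrative sketch only: hypothetical templates and slots for text-based
# explanations of recommendations; not the project's actual design.

EXPLANATION_TEMPLATES = {
    "liked_genre": "We recommend {item} because you often listen to {genre}.",
    "similar_item": "We recommend {item} because you enjoyed {other_item}.",
}

def explain(reason: str, **slots: str) -> str:
    """Fill the template associated with a given recommendation reason."""
    return EXPLANATION_TEMPLATES[reason].format(**slots)

print(explain("liked_genre", item="Kind of Blue", genre="jazz"))
# -> "We recommend Kind of Blue because you often listen to jazz."
```

A pre-trained language model could then be used to paraphrase such template outputs into more fluent, context-dependent text.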


Visit Jobbnorge for more information and application details (notice the requirement for a cover letter).
Application deadline: June 1 (extended from March 15).

Highlights from 2020

A compilation of highlights from 2020:

PhD position in Conversational AI

I have a fully-funded PhD position in Conversational AI. This position is partially funded by an unrestricted gift from Google, and will be co-supervised by Google research scientist Filip Radlinski.

Conversational search is a newly emerging research area within AI that aims to provide access to digitally stored information by means of a conversational user interface. The goal of such systems is to effectively handle a wide range of requests expressed in natural language, with rich user-system dialogue as a crucial component for understanding the user’s intent and refining the answers.

The overall objective of this project is to develop a prototype conversational search system for supporting scholarly activities (Scholarly Conversational Assistant). Scholarly activities of interest include, among others, finding relevant research material, planning conference attendance, or finding relevant experts to serve as speakers, committee members, etc. You can find further details on the envisioned functionality of the Scholarly Conversational Assistant here.

Specific areas of functionality targeted in the project concern the modeling of user knowledge, adapting the assistant’s language usage accordingly, and system-initiated (proactive) notifications.

Visit Jobbnorge for more information and application details (notice the requirement for a cover letter).
Application deadline: Nov 26.

Three recent papers

In this post I’m sharing preprints of three recent full papers, covering a diverse set of topics, that will appear in the coming weeks.

The KDD’20 paper “Evaluating Conversational Recommender Systems via User Simulation” (w/ Shuo Zhang) [PDF] represents a new line of work that I’m really excited about. We develop a user simulator for evaluating conversational agents on an item recommendation task. Our user simulator aims to generate responses that a real human would give, by considering both individual preferences and the general flow of interaction with the system. We compare three existing conversational recommender systems and show that our simulation-based evaluation correlates highly with evaluation by real users, both in terms of automatic evaluation measures and manual human assessments.
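
To illustrate the general idea (this is a toy sketch under my own assumptions, not the simulator from the paper), a preference-driven simulated user might respond to an agent roughly as follows:

```python
import random

class SimulatedUser:
    """Toy simulated user for a conversational recommender (illustrative only)."""

    def __init__(self, preferences: dict, patience: int = 5):
        self.preferences = preferences  # e.g., {"jazz": 0.9, "pop": 0.2}
        self.turns_left = patience      # crude stand-in for the flow/length of interaction

    def respond(self, agent_utterance: dict) -> str:
        """Answer preference questions and accept/reject recommendations."""
        self.turns_left -= 1
        if self.turns_left <= 0:
            return "quit"
        if agent_utterance["type"] == "elicit":      # agent asks about a preference slot
            slot = agent_utterance["slot"]
            return "like" if self.preferences.get(slot, 0.0) > 0.5 else "dislike"
        if agent_utterance["type"] == "recommend":   # agent recommends an item
            score = self.preferences.get(agent_utterance["genre"], 0.0)
            return "accept" if random.random() < score else "reject"
        return "dont_understand"

user = SimulatedUser({"jazz": 0.9, "pop": 0.2})
print(user.respond({"type": "elicit", "slot": "jazz"}))                   # -> "like"
print(user.respond({"type": "recommend", "genre": "pop", "item": "X"}))   # -> likely "reject"
```

The key point is that responses are driven by a persistent preference model and by the state of the dialogue (here, just a patience counter), rather than being scripted turn by turn.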

The ICTIR’20 paper “Sanitizing Synthetic Training Data Generation for Question Answering over Knowledge Graphs” (w/ Trond Linjordet) [PDF] studies template-based synthetic data generation for neural KGQA systems. We show that current approaches leak information between training and test splits, which distorts measured performance. We raise a series of challenging questions around training models with synthetic (template-based) data under fair conditions, which extend beyond the particular flavor of question answering task we study here.
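
For intuition, one simple safeguard against this kind of leakage is to split the synthetic data by template rather than by individual question, so that no template contributes examples to both training and test sets. The sketch below assumes each example carries a template_id field; it illustrates the general idea, not the exact procedure from the paper:

```python
import random

def split_by_template(questions, test_fraction=0.2, seed=42):
    """Split synthetic QA examples so that no template appears in both train and test.

    `questions` is assumed to be a list of dicts with a 'template_id' field.
    Splitting on template ids prevents near-duplicate questions (same template,
    different entities) from leaking across the split.
    """
    templates = sorted({q["template_id"] for q in questions})
    random.Random(seed).shuffle(templates)
    n_test = max(1, int(len(templates) * test_fraction))
    test_templates = set(templates[:n_test])
    train = [q for q in questions if q["template_id"] not in test_templates]
    test = [q for q in questions if q["template_id"] in test_templates]
    return train, test
```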

The CIKM’20 paper “Generating Categories for Sets of Entities” (w/ Shuo Zhang and Jamie Callan) [PDF] addresses problems associated with maintaining the category systems of large knowledge repositories, like Wikipedia. We aim to aid knowledge editors in the manual process of expanding a category system. Given a set of entities, e.g., in a list or table, we generate suggestions for new categories that are specific, important, and non-redundant. In addition to generating category labels, we also determine where these new categories belong in the hierarchy, by locating the parent nodes that should be extended.