Best paper award at DESIRES’21

I was honoured to receive a Best Paper Award for my paper “Conversational AI from an Information Retrieval Perspective: Remaining Challenges and a Case for User Simulation” at the 2nd International Conference on Design of Experimental Search & Information REtrieval Systems (DESIRES’21), which took place last week. The paper as well as the presentation slides are available online.

Workshop on Personal Knowledge Graphs

I’m co-organizing a workshop on Personal Knowledge Graphs at the Automatic Knowledge Base Construction Conference (AKBC’21).

The concept of personal knowledge graphs has been around for a while, in recognition of the need to represent structured information about entities that are personally related to a user. However, several open questions remain regarding their definition, construction, population, utilization, and practical realization. The workshop aims to bring different communities together to discuss these issues and create a shared research agenda.

We solicit regular papers, position papers, and demonstrators, as well as encore talks, i.e., presentations of work that has already been published at a leading conference or in a journal. Submission deadline: Sep 6, 2021.

For more details, visit the workshop’s website at https://pkgs.ws/.

SIGIR’21 preprints and resources

Thanks to a fruitful collaboration with colleagues at Google, Bloomberg, Radboud University, Shandong University, and the University of Amsterdam, and, of course, students at the University of Stavanger, I have the following papers appearing at SIGIR this year. All revolve around conversational and/or recommender systems, and all come with publicly released resources.

SIGIR’21 workshop on Simulation for IR Evaluation

I’m co-organizing a workshop on Simulation for IR Evaluation at SIGIR this year. Below is an extract from the CfP:

Simulation techniques are not foreign to information retrieval. Simulation has been employed, for example, for constructing test collections and for model performance prediction and analysis in a broad array of information access scenarios. The need for simulation has become ever more apparent recently with the emergence of areas where other types of evaluation are infeasible. One such area is conversational information access, where human evaluation is both time and resource intensive at scale. Another example is provided by settings that do not allow sharing of data, e.g., because of privacy constraints, and therefore necessitate the creation of synthetic test collections.

Despite the apparent need, a standardized methodology for performance evaluation via simulation has not yet been developed. The goal of the Sim4IR workshop is to create a forum for researchers and practitioners to promote methodology development and more widespread use of simulation for evaluation by: (i) identifying problem settings and application scenarios; (ii) sharing tools, techniques, and experiences; (iii) characterizing potentials and limitations; and (iv) developing a research agenda.

Submission deadlines: May 4 (regular/short/demo papers) and May 18 (encore talks).
Visit sim4ir.org for more details.
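To make the idea of simulation-based evaluation concrete, here is a minimal sketch of a simulated user interacting with a ranked result list, from which an effectiveness metric is then estimated over many simulated sessions. All class and function names, and the specific click model, are my own illustrative assumptions, not material from the workshop.

```python
import random

class SimulatedUser:
    """Scans a ranked list top-down; clicks a relevant result with
    probability p_click and, after each click, stops scanning with
    probability p_stop (a simple cascade-style browsing model)."""

    def __init__(self, p_click=0.8, p_stop=0.5, seed=42):
        self.p_click = p_click
        self.p_stop = p_stop
        self.rng = random.Random(seed)

    def interact(self, ranking, relevant):
        """Return the list of ranks (1-based) that were clicked."""
        clicks = []
        for rank, doc_id in enumerate(ranking, start=1):
            if doc_id in relevant and self.rng.random() < self.p_click:
                clicks.append(rank)
                if self.rng.random() < self.p_stop:
                    break
        return clicks

def mean_reciprocal_rank(sessions):
    """Average reciprocal rank of the first click across sessions
    (0 for sessions with no clicks)."""
    rr = [1.0 / s[0] if s else 0.0 for s in sessions]
    return sum(rr) / len(rr)

user = SimulatedUser()
ranking = ["d3", "d1", "d7", "d2"]   # system output to be evaluated
relevant = {"d1", "d2"}              # (hypothetical) relevance judgments
sessions = [user.interact(ranking, relevant) for _ in range(1000)]
print(round(mean_reciprocal_rank(sessions), 2))
```

The point of the sketch is that once user behavior is captured in a parameterized model, any number of interaction sessions can be generated cheaply, which is exactly what makes simulation attractive where large-scale human evaluation is infeasible.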

PhD position in Language Modeling for Explainable AI

We have a PhD position, funded by SFI NorwAI, the new 8-year research-based innovation center for AI.

AI-powered systems exhibit a growing degree of personalization when recommending content. Users, however, have little knowledge of how these systems work and what personal data they use. There is a need for transparency, in terms of (1) what personal data is collected and how it is used to infer user preferences, and (2) how the generated recommendations are explained and justified. The most human-like form of providing explanations is via natural language. This would allow users to better understand how their preferences are understood and interpreted by the system, and to correct the system if necessary.

The main goal of the project is to develop novel models that enable semantically rich and context-dependent text-based explainability of user preferences and system recommendations, using either existing metadata or automatically extracted annotations. A starting point for generating explanations is template-based; later, this can be made more human-like with natural language generation techniques based on large-scale (pre-trained) language models.
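To illustrate the template-based starting point, here is a minimal sketch: preference evidence inferred from user data is matched against hand-written templates, and the strongest piece of evidence fills the template slots. The templates, field names, and evidence format are all hypothetical, invented for this example.

```python
# Hand-written explanation templates, keyed by the type of
# preference evidence they verbalize (hypothetical examples).
TEMPLATES = {
    "genre": "We recommend {item} because you often listen to {value} music.",
    "artist": "We recommend {item} because you liked other songs by {value}.",
}

def explain(item, evidence):
    """Pick the template matching the highest-confidence piece of
    preference evidence and fill in its slots."""
    feature, value, _ = max(evidence, key=lambda e: e[2])
    return TEMPLATES[feature].format(item=item, value=value)

# Evidence triples: (feature, value, confidence inferred from user data).
evidence = [("genre", "jazz", 0.9), ("artist", "Miles Davis", 0.7)]
print(explain("Kind of Blue", evidence))
# → We recommend Kind of Blue because you often listen to jazz music.
```

Such templates are transparent and easy to audit, but rigid; the project's later stages would replace the surface realization step with learned language generation while keeping the underlying evidence grounded in the user's data.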


Visit Jobbnorge for more information and application details (notice the requirement for a cover letter).
Application deadline: June 1 (extended from March 15).