Workshop on Personal Knowledge Graphs

I’m co-organizing a workshop on Personal Knowledge Graphs at the Automatic Knowledge Base Construction Conference (AKBC’21).

The concept of personal knowledge graphs has been around for a while, in recognition of the need to represent structured information about entities that are personally related to a user. However, several open questions remain regarding their definition, construction, population, utilization, and practical realization. The workshop aims to bring different communities together to discuss these issues and create a shared research agenda.

We solicit regular papers, position papers, and demonstrators, as well as encore talks, i.e., presentations of work that has already been published at a leading conference or journal. Submission deadline: Sep 6, 2021.

For more details, visit the workshop’s website at https://pkgs.ws/.

SIGIR’21 preprints and resources

Thanks to a fruitful collaboration with colleagues at Google, Bloomberg, Radboud University, Shandong University, and the University of Amsterdam, and, of course, students at the University of Stavanger, I have the following papers appearing at SIGIR this year. All of them revolve around conversational and/or recommender systems and come with publicly released resources.

SIGIR’21 workshop on Simulation for IR Evaluation

I’m co-organizing a workshop on Simulation for IR Evaluation at SIGIR this year. Below is an extract from the CfP:

Simulation techniques are not foreign to information retrieval. Simulation has been employed, for example, for constructing test collections and for model performance prediction and analysis in a broad array of information access scenarios. The need for simulation has become ever more apparent recently with the emergence of areas where other types of evaluation are infeasible. One such area is conversational information access, where human evaluation is both time and resource intensive at scale. Another example is provided by settings that do not allow sharing of data, e.g., because of privacy constraints, and therefore necessitate the creation of synthetic test collections.

Despite the apparent need, a standardized methodology for performance evaluation via simulation has not yet been developed. The goal of the Sim4IR workshop is to create a forum for researchers and practitioners to promote methodology development and more widespread use of simulation for evaluation by: (i) identifying problem settings and application scenarios; (ii) sharing tools, techniques, and experiences; (iii) characterizing potentials and limitations; and (iv) developing a research agenda.

Submission deadlines: May 4 (regular/short/demo papers) and May 18 (encore talks).
Visit sim4ir.org for more details.

Highlights from 2019

As the first post of 2020, it’s only fitting to start with a brief summary of highlights from 2019.

Highlights from 2018

This was another year when I was just too busy to blog. But here are a few things from 2018 to be proud of (in no particular order).