PhD position in Language Modeling for Explainable AI

We have an open PhD position, funded by SFI NorwAI, the new 8-year Center for Research-based Innovation dedicated to AI innovation.

AI-powered systems exhibit a growing degree of personalization when recommending content. Users, however, have little insight into how these systems work and what personal data they use. There is a need for transparency in terms of (1) what personal data is collected and how it is used to infer user preferences, and (2) how the generated recommendations are explained and justified. The most human-like way of providing explanations is via natural language. This would allow users to better understand how their preferences are interpreted by the system, and to correct the system if necessary.

The main goal of the project is to develop novel models that enable semantically rich, context-dependent, text-based explanations of user preferences and system recommendations, using either existing metadata or automatically extracted annotations. A natural starting point for generating explanations is a template-based approach; later, explanations can be made more human-like with natural language generation techniques built on large-scale pre-trained language models.

Visit Jobbnorge for more information and application details (note the requirement for a cover letter).
Application deadline: June 1 (extended from March 15).

Highlights from 2020

A compilation of highlights from 2020: