Three recent papers

In this post I’m sharing preprints of three recent full papers, covering a diverse set of topics, that will appear in the coming weeks.

The KDD’20 paper “Evaluating Conversational Recommender Systems via User Simulation” (w/ Shuo Zhang) [PDF] represents a new line of work that I’m really excited about. We develop a user simulator for evaluating conversational agents on an item recommendation task. Our user simulator aims to generate responses that a real human would give by considering both individual preferences and the general flow of interaction with the system. We compare three existing conversational recommender systems and show that our simulation methods can achieve high correlation with real users using both automatic evaluation measures and manual human assessments.
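To make the simulation idea concrete, here is a minimal sketch of what such an evaluation loop could look like. The interfaces (`SimulatedUser`, `recommender.recommend`) are illustrative assumptions, not the paper’s actual implementation.

```python
import random

class SimulatedUser:
    """Toy simulated user: responds based on individual preferences and a
    simple interaction-flow rule (react to recommendations, else disclose)."""

    def __init__(self, liked_items, seed=0):
        self.liked_items = set(liked_items)  # individual preferences
        self.rng = random.Random(seed)

    def respond(self, recommended_item=None):
        if recommended_item is not None:
            # Preference model: accept only items the user actually likes.
            return "ACCEPT" if recommended_item in self.liked_items else "REJECT"
        # Flow model: no recommendation on the table, so disclose a preference.
        return f"I like {self.rng.choice(sorted(self.liked_items))}"

def simulate_dialogue(recommender, user, max_turns=10):
    """Runs one simulated conversation; returns True if it ends in success."""
    item = None
    for _ in range(max_turns):
        reply = user.respond(item)
        if reply == "ACCEPT":
            return True
        item = recommender.recommend(reply)  # hypothetical recommender API
    return False

def success_rate(recommender, users):
    """Automatic evaluation measure: fraction of successful dialogues."""
    return sum(simulate_dialogue(recommender, u) for u in users) / len(users)
```

The same simulated dialogues can then be scored against real-user interactions to check how well the two correlate.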

The ICTIR’20 paper “Sanitizing Synthetic Training Data Generation for Question Answering over Knowledge Graphs” (w/ Trond Linjordet) [PDF] studies template-based synthetic data generation for neural KGQA systems. We show that there is a leakage of information in current approaches between training and test splits, which affects performance. We raise a series of challenging questions around training models with synthetic (template-based) data using fair conditions, which extend beyond the particular flavor of question answering task we study here.
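As a concrete illustration of the leakage problem (under assumed data structures, not the paper’s code): with a naive random split, instances generated from the same template can end up in both train and test, so the model can memorize surface patterns. Splitting at the template level is one way to sanitize the splits.

```python
import random

def split_by_instance(questions, test_ratio=0.2, seed=0):
    """Naive split over (template_id, text) pairs: the same template can
    appear on both sides, i.e., train/test leakage."""
    rng = random.Random(seed)
    shuffled = questions[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]

def split_by_template(questions, test_ratio=0.2, seed=0):
    """Sanitized split: all instances of a template stay on one side."""
    rng = random.Random(seed)
    templates = sorted({tid for tid, _ in questions})
    rng.shuffle(templates)
    cut = int(len(templates) * (1 - test_ratio))
    train_templates = set(templates[:cut])
    train = [q for q in questions if q[0] in train_templates]
    test = [q for q in questions if q[0] not in train_templates]
    return train, test
```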

The CIKM’20 paper “Generating Categories for Sets of Entities” (w/ Shuo Zhang and Jamie Callan) [PDF] addresses problems associated with maintaining the category systems of large knowledge repositories, like Wikipedia. We aim to aid knowledge editors in the manual process of expanding a category system. Given a set of entities, e.g., in a list or table, we generate suggestions for new categories that are specific, important, and non-redundant. In addition to generating category labels, we also find the appropriate place for these new categories in the hierarchy, by locating the parent nodes that should be extended.
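A back-of-the-envelope sketch of the ranking step is below; the scoring functions are simplistic stand-ins for the criteria above, and all names and data structures are assumptions rather than the paper’s method.

```python
def coverage(label_members, entities):
    """Fraction of the input entity set covered by a candidate category."""
    return len(label_members & entities) / len(entities)

def specificity(label_members, entities):
    """Penalizes overly broad categories that also cover many other entities."""
    return len(label_members & entities) / len(label_members)

def suggest_categories(entities, candidates, existing, top_k=3):
    """Ranks candidate categories (label -> member set); skips candidates
    that already exist in the hierarchy (non-redundancy)."""
    scored = []
    for label, members in candidates.items():
        if label in existing:
            continue
        score = coverage(members, entities) * specificity(members, entities)
        scored.append((score, label))
    return [label for _, label in sorted(scored, reverse=True)[:top_k]]
```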

Two journal papers on online evaluation

I am a co-author of two journal papers that appeared in the special issues of the Journal of Data and Information Quality on Reproducibility in IR.

The article entitled “OpenSearch: Lessons Learned from an Online Evaluation Campaign” by Jagerman et al. reports on our experience with TREC OpenSearch, an online evaluation campaign that enabled researchers to evaluate their experimental retrieval methods using real users of a live website. TREC OpenSearch focused on the task of ad hoc document retrieval within the academic search domain. We describe our experimental platform, which is based on the living labs methodology, and report on the experimental results obtained. We also share our experiences, challenges, and the lessons learned from running this track in 2016 and 2017.

The article entitled “Evaluation-as-a-Service for the Computational Sciences: Overview and Outlook” by Hopfgartner et al. discusses the Evaluation-as-a-Service paradigm, where data sets are not provided for download, but can be accessed via application programming interfaces (APIs), virtual machines (VMs), or other possibilities to ship executables. We summarize and compare current approaches, consolidate the experiences of these approaches, and outline next steps toward sustainable research infrastructures.

Entity-Oriented Search book

I am pleased to announce that my Entity-Oriented Search book is now available online.

This open access book covers all facets of entity-oriented search—where “search” can be interpreted in the broadest sense of information access—from a unified point of view, and provides a coherent and comprehensive overview of the state of the art. It represents the first synthesis of research in this broad and rapidly developing area. Selected topics are discussed in-depth, the goal being to establish fundamental techniques and methods as a basis for future research and development. Additional topics are treated at a survey level only, containing numerous pointers to the relevant literature. A roadmap for future research, based on open issues and challenges identified along the way, rounds out the book.

SIGIR’17 papers

Our group has 2 full papers, 3 short papers, and 1 demo at SIGIR this year. The preprints are available below. See you in Japan!

  • EntiTables: Smart Assistance for Entity-Focused Tables, S. Zhang and K. Balog. [PDF]
  • Dynamic Factual Summaries for Entity Cards, F. Hasibi, K. Balog, and S. E. Bratsberg. [PDF]
  • Target Type Identification for Entity-Bearing Queries, D. Garigliotti, F. Hasibi, and K. Balog. [PDF|Extended version]
  • Generating Query Suggestions to Support Task-Based Search, D. Garigliotti and K. Balog. [PDF]
  • DBpedia-Entity v2: A Test Collection for Entity Search, F. Hasibi, F. Nikolaev, C. Xiong, K. Balog, S. E. Bratsberg, A. Kotov, and J. Callan. [PDF]
  • Nordlys: A Toolkit for Entity-Oriented and Semantic Search, F. Hasibi, K. Balog, D. Garigliotti, and S. Zhang. [PDF]

WSDM paper

Earlier today, Jan Benetka presented our paper “Anticipating Information Needs Based on Check-in Activity” at the WSDM’17 conference in Cambridge, UK.

In this work we address the development of a smart personal assistant that is capable of anticipating a user’s information needs based on a novel type of context: the person’s activity inferred from her check-in records on a location-based social network. Our main contribution is a method that translates a check-in activity into an information need, which is in turn addressed with an appropriate information card. This task is challenging because of the large number of possible activities and related information needs, which must be served from a mobile dashboard that is limited in size. Our approach considers each possible activity that might follow the last (already completed) activity, and selects the top information cards such that they maximize the likelihood of satisfying the user’s information needs for all possible future scenarios. The proposed models also incorporate knowledge about the temporal dynamics of information needs. Using a combination of historical check-in data and manual assessments collected via crowdsourcing, we experimentally demonstrate the effectiveness of our approach.
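The selection step can be sketched roughly as follows. The probability and relevance inputs are assumed to come from upstream models, and all names are illustrative rather than the paper’s actual code.

```python
def select_cards(next_activity_probs, card_relevance, dashboard_size=3):
    """Picks the cards with the highest expected relevance over all possible
    next activities, to fill a size-limited dashboard.

    next_activity_probs: dict activity -> P(activity | last activity, time)
    card_relevance:      dict (activity, card) -> relevance of the card to
                         the information needs of that activity
    """
    cards = {card for (_, card) in card_relevance}
    # Expected relevance of each card, marginalized over future scenarios.
    expected = {
        card: sum(p * card_relevance.get((activity, card), 0.0)
                  for activity, p in next_activity_probs.items())
        for card in cards
    }
    ranked = sorted(expected, key=expected.get, reverse=True)
    return ranked[:dashboard_size]
```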

Presentation slides and resources can be found at zero-query.com.