As the recipient of the 2018 Karen Spärck Jones Award, I was invited to give a keynote at the 41st European Conference on Information Retrieval (ECIR’19). Below are the slides of my presentation.
Two journal papers on online evaluation
I am a co-author of two journal papers that appeared in the special issues of the Journal of Data and Information Quality on Reproducibility in IR.
The article entitled “OpenSearch: Lessons Learned from an Online Evaluation Campaign” by Jagerman et al. reports on our experience with TREC OpenSearch, an online evaluation campaign that enabled researchers to evaluate their experimental retrieval methods using real users of a live website. TREC OpenSearch focused on the task of ad hoc document retrieval within the academic search domain. We describe our experimental platform, which is based on the living labs methodology, and report on the experimental results obtained. We also share our experiences, challenges, and the lessons learned from running this track in 2016 and 2017.
The article entitled “Evaluation-as-a-Service for the Computational Sciences: Overview and Outlook” by Hopfgartner et al. discusses the Evaluation-as-a-Service paradigm, where data sets are not provided for download, but are instead accessed via application programming interfaces (APIs), virtual machines (VMs), or other means of shipping executables. We summarize and compare current approaches, consolidate the experiences gained with them, and outline next steps toward sustainable research infrastructures.
ECIR’16 contributions
Last Sunday, Anne Schuth and I gave a tutorial on Living Labs for Online Evaluation. The tutorial’s homepage contains all the slides and reference material.
Experimental evaluation has always been central to Information Retrieval research. The field is increasingly moving towards online evaluation, which involves experimenting with real, unsuspecting users in their natural task environments, so-called living labs. Specifically, with the recent introduction of the Living Labs for IR Evaluation initiative at CLEF and the OpenSearch track at TREC, researchers now have direct access to such labs. With these benchmarking platforms in place, we believe that online evaluation will be an exciting area to work on in the future. This half-day tutorial aims to provide a comprehensive overview of the underlying theory and complement it with practical guidance.
Today, Faegheh Hasibi is presenting our work on the reproducibility of the TAGME Entity Linking System. The full paper and resources for this work are available online.
Among the variety of approaches proposed for entity linking, the TAGME system has gained considerable attention and is considered a must-have baseline. In this paper, we examine the repeatability, reproducibility, and generalizability of TAGME by comparing results obtained from its public API with (re)implementations from scratch. We find that the results reported in the paper cannot be repeated due to the unavailability of data sources. Part of the results are reproducible only through the provided API, while the rest are not reproducible. We further show that the TAGME approach is generalizable to the task of entity linking in queries. Finally, we provide insights gained during this process and formulate lessons learned to inform future reproducibility efforts.
Living Labs at TREC, CLEF, ECIR
There are a number of exciting developments around the living labs initiative that Anne Schuth and I have been working on. The goal of this activity is to enable researchers to perform online experiments, i.e., in-situ evaluation with actual users of a live site, as opposed to relying exclusively on paid assessors (or simulated users). We believe that this is nothing less than a paradigm shift. We define this new evaluation paradigm as follows:
The experimentation platform is an existing search engine. Researchers have the opportunity to replace components of this search engine and evaluate these components using interactions with its real, unsuspecting users.
Living Labs developments
There have been a number of developments over the past months around our living labs for IR evaluation efforts.
We had a very successful challenge workshop in Amsterdam in June, thanks to the support we received from ELIAS, ESF, and ILPS. The scientific report summarizing the event is available online.
There are many challenges associated with operationalizing a living labs benchmarking campaign. Chief among these are incorporating results from experimental search systems into live production systems, and obtaining sufficiently many impressions from relatively low-traffic sites. We propose that frequent (head) queries can be used to generate result lists offline, which are then interleaved with the results of the production system for live evaluation. The choice of head queries is critical because (1) it removes the harsh requirement of providing rankings in real time for query requests and (2) it ensures that experimental systems receive enough impressions, on the same set of queries, for a meaningful comparison. This idea is described in detail in an upcoming CIKM’14 short paper: Head First: Living Labs for Ad-hoc Search Evaluation.
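To make the interleaving step concrete, here is a minimal Python sketch of team-draft interleaving with click-based credit assignment, assuming rankings are plain lists of document IDs. It illustrates the general technique, not the exact implementation used in the paper or in the living labs platform.

```python
import random

def team_draft_interleave(prod_ranking, exp_ranking):
    """Team-draft interleaving: each round, a coin flip decides which side
    picks first; each side then contributes its highest-ranked document
    that is not yet in the interleaved list."""
    interleaved = []
    teams = {}                                # doc id -> "prod" or "exp"
    prod, exp = list(prod_ranking), list(exp_ranking)
    while prod or exp:
        sides = [("prod", prod), ("exp", exp)]
        random.shuffle(sides)                 # randomize pick order this round
        for team, ranking in sides:
            while ranking:
                doc = ranking.pop(0)
                if doc not in teams:
                    interleaved.append(doc)
                    teams[doc] = team
                    break
    return interleaved, teams

def assign_credit(clicked_docs, teams):
    """Credit each click to the side that contributed the clicked document."""
    wins = {"prod": 0, "exp": 0}
    for doc in clicked_docs:
        if doc in teams:
            wins[teams[doc]] += 1
    return wins

# Example: compare an offline-generated experimental ranking against production.
production   = ["d1", "d2", "d3", "d4"]
experimental = ["d3", "d1", "d5", "d2"]
ranking, teams = team_draft_interleave(production, experimental)
print(assign_credit(clicked_docs=["d5"], teams=teams))   # {'prod': 0, 'exp': 1}
```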
A sad but newsworthy development was that our CIKM’14 workshop got cancelled. We had planned to organize a living labs challenge as part of the workshop, and that challenge cannot be run as originally planned. Now, however, we have something much better.
Living Labs for IR Evaluation (LL4IR) will run as a Lab at CLEF 2015 under the tagline “Give us your ranking, we’ll have it clicked!” The first edition of the lab will focus on three specific use cases: (1) product search (on an e-commerce site), (2) local domain search (on a university’s website), and (3) web search (through a commercial web search engine). See further details here.
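To give a feel for the participant-side workflow behind that tagline, the sketch below shows what fetching head queries, uploading offline rankings, and collecting click feedback could look like. The base URL, endpoint paths, JSON fields, and the stub ranker are hypothetical placeholders for illustration only; they are not the lab’s actual API.

```python
import requests

API = "https://example.org/ll4ir/api"   # hypothetical base URL (placeholder)
KEY = "YOUR_PARTICIPANT_KEY"            # hypothetical participant key

def experimental_ranker(query_string):
    # Placeholder: return document IDs ranked by your own experimental system.
    return ["doc-1", "doc-2", "doc-3"]

# 1. Fetch the frozen set of frequent (head) queries.
queries = requests.get(f"{API}/{KEY}/queries").json()["queries"]

# 2. Upload a ranking for each query, computed offline.
for q in queries:
    doclist = [{"docid": d} for d in experimental_ranker(q["qstr"])]
    requests.put(f"{API}/{KEY}/run/{q['qid']}", json={"doclist": doclist})

# 3. Later, retrieve the click feedback gathered while the uploaded rankings
#    were interleaved with the production system's results.
feedback = requests.get(f"{API}/{KEY}/feedback").json()
print(feedback)
```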