Research

At the broadest level, my research focuses on intelligent information access. This entails developing human-centered (i.e., assistive) AI technology for connecting people with information: designing novel methods for finding, organizing, and consuming information, as well as exploring effective ways of interacting with AI systems.

My current research centers on two main areas: semantic search and evaluation. Most recently, I have been working on conversational information access, user simulation, personal knowledge graphs, and transparent and explainable recommender systems.
In the more distant past (as my PhD topic), I worked extensively on expertise retrieval.

I am a strong believer in task-driven research; my approach is to combine theory, experimentation, and application. The contributions that I aim for are algorithms and models for various information access tasks, novel evaluation paradigms, and insights from the analysis of experimental outcomes.

Semantic search

Semantic search encompasses a variety of methods and approaches aimed at aiding users in their information access and consumption activities by understanding their context and intent. My work on semantics centers on the modeling of entities (i.e., “things,” such as persons, organizations, and products) and information needs, and on the utilization and expansion of knowledge bases (a.k.a. knowledge graphs) in various application contexts.

For a synthesis of research on entity-oriented search, see the open access book I wrote in 2018.

Knowledge bases are a key enabling data component, organizing information around entities and their relationships. However, they are inherently incomplete and require constant updating, as new facts and discoveries can render their content outdated or inaccurate. I studied the problem of identifying “vital” documents in a streaming setting (news, blogs, tweets) that may contain novel information pertinent to a set of target entities, i.e., information that would trigger updates to their knowledge base entries [1,2]; this work also involved the development of appropriate evaluation methodology that considers the temporal aspects of this task [3]. Further, I have worked on building knowledge bases for particular applications, such as monetary transactions [4] and entity-oriented search intents [5], as well as on populating existing knowledge bases with novel entities (by utilizing tables on the Web) [7] and on helping knowledge workers (such as Wikipedia editors) create new categories for entities [8].
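
As a minimal illustration of the stream filtering task (a deliberately simplified sketch, not the models from [1,2]; the entity names, the novelty heuristic, and the threshold below are all made up):

```python
from collections import defaultdict

def is_vital(doc_text, entity, seen_terms, threshold=0.3):
    """Flag a document as 'vital' for an entity if it mentions the entity and
    contributes a sufficient fraction of previously unseen terms."""
    terms = set(doc_text.lower().split())
    if entity.lower() not in doc_text.lower():
        return False
    novelty = len(terms - seen_terms) / max(len(terms), 1)
    return novelty >= threshold

# Per-entity memory of terms observed so far in the stream (hypothetical data).
seen = defaultdict(set)
stream = [
    "Acme Corp announces a merger with Globex",
    "Acme Corp merger with Globex approved by regulators",
]
for doc in stream:
    if is_vital(doc, "Acme Corp", seen["Acme Corp"]):
        print("vital:", doc)
    seen["Acme Corp"].update(doc.lower().split())
```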

Entities, in public resources, are usually taken to be anyone or anything “prominent enough” to be included in a given knowledge base. This, however, rules out many entities we interact with on a daily basis. For example, people who are not “famous enough” to make it to Wikipedia are not represented there as entities. However, just as it can be helpful for a search engine to have access to structured knowledge about commonly known entities, services personal to the user (such as personal digital assistants) might benefit from having at their disposal structured information about entities that are personally relevant to the user. In recent work, I have been investigating the concept of a personal knowledge graph: a resource of structured information about entities personally related to its user, their attributes, and the relations between them [6].
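
To make the idea concrete, a personal knowledge graph can be thought of as a set of subject-predicate-object triples centered on the user; the entities and relations below are purely illustrative.

```python
# Illustrative personal knowledge graph as (subject, predicate, object) triples;
# "me" denotes the user at the center of the graph (all data is made up).
pkg = {
    ("me", "hasSister", "Anna"),
    ("me", "worksAt", "Example University"),
    ("Anna", "livesIn", "Oslo"),
    ("me", "ownsPet", "Rex"),
    ("Rex", "type", "Dog"),
}

def neighbors(graph, entity):
    """Return all (predicate, object) pairs directly attached to an entity."""
    return [(p, o) for (s, p, o) in graph if s == entity]

print(neighbors(pkg, "me"))
```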

Evaluation

Evaluation is a key challenge within information retrieval. I have been actively engaged in developing evaluation methodology and resources, and organizing international benchmarking campaigns for evaluating the effectiveness of retrieval systems. I am interested in a broad spectrum of evaluation methodologies, including offline evaluation (facilitated by reusable test collections), online evaluation (performed in live settings, with production systems acting as “living labs”), and, most recently, user simulation.

I have also been active in building test collections that enable research on retrieval problems that were novel at the time. An information retrieval test collection typically consists of (1) a set of items, (2) a set of information needs, (3) relevance assessments, and (4) appropriate evaluation measure(s); a toy example is sketched below. Examples include expertise search [4], entity retrieval [1,2,6], entity linking [5,10], type identification [3,7], table search [9] and summarization [8], narrative-based recommendations [11], and soft attributes [12].
Between 2009 and 2011, I co-organized the Entity track at the Text Retrieval Conference (TREC) [1,2], which introduced various entity-oriented search tasks over the Web—this work has been particularly impactful.
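
As a toy illustration of the four components above (the documents, query, and judgments are invented, and precision at k stands in for whatever measure is appropriate for a given task):

```python
# (1) Items, (2) information needs, (3) relevance assessments, (4) a measure.
documents = {"d1": "...", "d2": "...", "d3": "...", "d4": "..."}
queries = {"q1": "expert finding in enterprises"}
qrels = {"q1": {"d2", "d4"}}  # documents judged relevant to q1

def precision_at_k(ranking, relevant, k):
    """Fraction of the top-k ranked items that are judged relevant."""
    return sum(1 for doc_id in ranking[:k] if doc_id in relevant) / k

# A system's ranking for q1, evaluated against the assessments.
ranking = ["d2", "d1", "d4", "d3"]
print(precision_at_k(ranking, qrels["q1"], k=2))  # 0.5
```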

User interaction data, at scale, is a key enabling component for the development of improved search and recommendation services. Access to such data, however, is currently limited to those working within organizations that provide such services (i.e., real-world search or recommender applications). This creates a growing “data divide” between academia and industry. Because of the lack of access to real users and their interactions, many of the evaluations undertaken in academia are performed out of context (e.g., on datasets annotated by crowd workers or by trained assessors) and are not sufficiently customer-focused. As a result, academic researchers often cannot show that their contributions (novel algorithms, methods, etc.) would lead to meaningful advancements in real-world applications.
I took a leading role in developing the notion of “living labs” for information retrieval, from concept to realization, in an effort to open up live evaluation resources to the research community [18]. These efforts include establishing industrial partnerships and organizing worldwide benchmarking campaigns with their involvement (specifically, the CLEF Living Labs for IR Evaluation lab (2015–2016) [4] and the TREC Open Search track (2016–2017) [6]), raising awareness and helping set the agenda in the academic community via workshops and tutorials [3,5,7], and developing and operating the arXivDigest scientific literature recommendation service, which serves as a living lab platform [8].

I am particularly interested in user simulation in the context of conversational information access systems. There, building offline test collections (beyond the turn level) is infeasible due to the interactive nature of the problem (that is, conversations can branch, at each turn, in virtually unlimited ways). My objective is to develop a user simulator that (1) is capable of producing the responses a real user would give in a given dialog situation, and (2) enables an automatic assessment of a conversational agent that is predictive of its performance with real users [1].
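
A minimal sketch of what such a simulator's interface could look like, under the two requirements above (the rule-based response logic and the binary satisfaction score are illustrative assumptions, not the models from [1]):

```python
class RuleBasedUserSimulator:
    """Toy user simulator: holds an information need, responds to the agent's
    utterances, and yields an automatic score based on whether it was satisfied."""

    def __init__(self, information_need):
        self.information_need = information_need
        self.satisfied = False

    def respond(self, agent_utterance):
        # (1) Produce a response a user might plausibly give at this point.
        if "did you mean" in agent_utterance.lower():
            return f"Yes, I am looking for {self.information_need}."
        if self.information_need.lower() in agent_utterance.lower():
            self.satisfied = True
            return "Thanks, that answers my question."
        return f"Can you tell me more about {self.information_need}?"

    def evaluate(self):
        # (2) Automatic assessment intended to correlate with real-user performance.
        return 1.0 if self.satisfied else 0.0

# Simulated dialog with a (trivial) agent that eventually addresses the need.
sim = RuleBasedUserSimulator("vegetarian restaurants in Oslo")
for turn in ["Did you mean restaurants?",
             "Here are some vegetarian restaurants in Oslo: ..."]:
    print("USER:", sim.respond(turn))
print("score:", sim.evaluate())
```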

Expertise retrieval

My PhD research focused on developing methods for people search in an organizational setting. In particular, two core tasks were investigated: expert finding (retrieving people who are experts on a given topic) and expert profiling (characterizing the skills and knowledge of a person). The fact that people are not represented directly (as retrievable units, such as documents) gave rise to the main scientific challenges examined in my PhD thesis, which were (1) to identify people indirectly through their occurrences in documents, (2) to represent them, and (3) to match these representations with those of queries. Both expertise retrieval tasks were approached as an association finding problem between topics and people. Associations are captured using a probabilistic generative framework, based on statistical language models; a sketch of this formulation is given below. The developed models were shown to be powerful, effective, and able to incorporate a number of extensions in a transparent and theoretically sound way [1–8].
When modeling expertise, one can consider more than the content-based evidence that is directly available from (related) documents. Indeed, humans take several other contextual factors into account, such as organizational structure, position, experience, and social distance, when deciding which expert(s) to select or recommend. These contextual factors can be quantified and combined with content-based methods to improve retrieval effectiveness [9,10].
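
As an illustrative sketch in textbook-style notation (not necessarily the exact formulations of the cited papers), a document-based generative model ranks a candidate e for a query q by the probability of generating q from the documents associated with e, and contextual factors can then be folded in via a simple weighted combination:

```latex
% Document-based generative association model (sketch): candidate e is ranked
% by the probability of generating query q via its associated documents D_e.
P(q \mid e) = \sum_{d \in D_e} P(q \mid \theta_d)\, P(d \mid e),
\qquad
P(q \mid \theta_d) = \prod_{t \in q} P(t \mid \theta_d)^{\,n(t,q)}

% Hypothetical combination with contextual factors f_i(e) (e.g., organizational
% position, experience, social distance), weighted by w_i and balanced by lambda:
\mathrm{score}(e, q) = \lambda\, P(q \mid e) + (1 - \lambda) \sum_i w_i\, f_i(e)
```

Here, \theta_d denotes a (smoothed) document language model, n(t,q) the frequency of term t in the query, and D_e the set of documents associated with candidate e; the second formula is only one possible way of combining content-based and contextual evidence.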