I have a fully funded PhD position in deep learning.
Deep neural networks, a.k.a. deep learning, have transformed the fields of computer vision, speech recognition, and machine translation, and now rival human-level performance on a range of tasks. While the idea of neural networks dates back several decades, their recent success is attributed to three key factors: (1) vast computational power, (2) algorithmic advances, and (3) the availability of massive amounts of training data.
There is no doubt that deep learning will continue to transform other fields as well, including information retrieval. One major challenge is that for most information retrieval tasks, training data is not available in huge quantities. This is unlike, for example, object recognition, where there are large-scale resources at one's disposal for training neural networks with (tens of) millions of parameters (e.g., the ImageNet database contains over 14 million images).
Deep learning is inspired by how the brain works. Yet humans can learn and generalize from a very small number of examples. (A child, for example, does not need to see thousands of instances of cats, in many different sizes and from numerous different angles, to be able to recognize a cat and tell it apart from a dog.) Can deep neural networks be endowed with this capability, i.e., the ability to learn and generalize from sparsely labeled data? The aim of this project is to answer this question, specifically in the application domain of information retrieval.
Details and application instructions can be found here.
Application deadline: March 26, 2017.
Important note: there are multiple projects advertised within the call. You need to indicate that you are applying for this specific project. Feel free to contact me directly for more information.