Dissertation Defense
Kernel Methods for Learning with Limited Labeled Data
Abstract:
Machine learning is a rapidly developing technology that enables a system to automatically learn and improve from experience. Modern machine learning algorithms have achieved state-of-the-art performance on a variety of tasks such as speech recognition, image classification, machine translation, and playing games like Go and Dota 2. However, one of the biggest challenges in applying these algorithms in the real world is that they require huge amounts of labeled data for training, and in practice the amount of labeled training data is often limited.
In this thesis, we address three challenges in learning with limited labeled data using kernel methods. In our first contribution, we provide an efficient way to solve an existing domain generalization algorithm and extend its theoretical analysis to multiclass classification. As a second contribution, we propose a multi-task learning framework for contextual bandit problems: we develop an upper confidence bound-based multi-task learning algorithm for contextual bandits, establish a corresponding regret bound, and interpret this bound to quantify the advantage of learning in the presence of high task (arm) similarity. Our third contribution is a simple regret guarantee (best policy identification) in a contextual bandit setup. Our experiments examine a novel application to adaptive sensor selection for magnetic field estimation in interplanetary spacecraft and demonstrate considerable improvements of our algorithm over algorithms designed to minimize cumulative regret.
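For readers unfamiliar with the setting, the sketch below illustrates a generic upper confidence bound (UCB) contextual bandit in the LinUCB style: each arm maintains a ridge-regression estimate of its reward model and the arm with the highest optimistic (mean plus confidence width) score is pulled. This is only a minimal illustration of the UCB principle under standard assumptions; it is not the multi-task kernel algorithm developed in the thesis, and the class name and parameter `alpha` are illustrative choices.

```python
import numpy as np

# Generic LinUCB-style contextual bandit (illustrative sketch only,
# not the thesis's multi-task kernel algorithm).
class LinUCB:
    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha                                # confidence-width scale
        self.A = [np.eye(dim) for _ in range(n_arms)]     # per-arm Gram matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]   # per-arm reward vectors

    def select(self, context):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                             # ridge estimate of arm weights
            mean = theta @ context
            width = self.alpha * np.sqrt(context @ A_inv @ context)
            scores.append(mean + width)                   # optimism in the face of uncertainty
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context
```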