Optimal online learning with matrix parameters

Speaker Name: 
Jiazhong Nie
Speaker Organization: 
Computer Science
Start Time: 
Wednesday, August 5, 2015 - 10:30am
End Time: 
Wednesday, August 5, 2015 - 12:30pm
Location: 
475 Engineering 2
Organizer: 
Manfred Warmuth

Abstract: This thesis considers two online matrix learning problems: online Principal Component Analysis (PCA) and learning rotations online. Prior work proposed online algorithms for these problems: the Matrix Exponentiated Gradient (MEG) algorithm and the Gradient Descent (GD) algorithm, respectively. This thesis evaluates the two algorithms by their regret, i.e., the additional total loss of the online algorithm over the total loss of the best offline comparator (chosen in hindsight).
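
For concreteness, the regret after T trials can be written in standard (generic, not thesis-specific) notation as

    \mathrm{Regret}_T = \sum_{t=1}^{T} \ell_t(W_t) - \min_{W \in \mathcal{C}} \sum_{t=1}^{T} \ell_t(W),

where W_t is the parameter the algorithm plays on trial t, \ell_t is the loss incurred on that trial, and \mathcal{C} is the comparator class.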

In Chapter 2, we show that for online PCA, both algorithms achieve essentially the same regret bound, which is optimal within a constant factor, when the bound is expressed as a function of the number of trials. However, when the regret bounds are expressed as functions of the loss of the best comparator, MEG remains optimal and strictly outperforms GD. Chapter 3 considers a generalization of online PCA in which the algorithm maintains both a vector and a matrix as its parameters, using them to represent the first- and second-order moments of its randomized prediction, respectively. Chapter 4 considers the problem of learning rotations online. We show that for this problem, the GD algorithm is optimal (up to a constant factor) both for regret bounds that are functions of the number of trials and for regret bounds that are functions of the loss of the best comparator.
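
As a rough illustration of the MEG update mentioned above (a minimal sketch, not the thesis's exact algorithm; in particular, the eigenvalue capping used for online PCA is omitted), MEG maintains a trace-one density matrix W and takes an exponentiated-gradient step in the matrix-log domain. The NumPy sketch below assumes a symmetric loss gradient, such as the outer product x xᵀ of an instance x:

    import numpy as np

    def meg_update(W, loss_grad, eta):
        """One basic Matrix Exponentiated Gradient step.

        NOTE: illustrative sketch only; omits the capping step used
        for online PCA.

        W         : symmetric PSD matrix with trace 1 (the parameter)
        loss_grad : symmetric gradient of the trial's loss,
                    e.g. np.outer(x, x) for an instance x
        eta       : learning rate

        Returns W' = exp(log W - eta * loss_grad), renormalized to trace 1.
        """
        # Matrix log via eigendecomposition (W is symmetric PSD).
        vals, vecs = np.linalg.eigh(W)
        vals = np.clip(vals, 1e-12, None)      # guard against zero eigenvalues
        log_W = (vecs * np.log(vals)) @ vecs.T

        # Exponentiated-gradient step in the matrix-log domain.
        M = log_W - eta * loss_grad
        mvals, mvecs = np.linalg.eigh(M)
        mvals = mvals - mvals.max()            # stabilize the matrix exponential
        exp_M = (mvecs * np.exp(mvals)) @ mvecs.T

        return exp_M / np.trace(exp_M)         # renormalize to trace 1

In a simulated run, one would start from the maximally mixed matrix W = I/n and pass each instance's outer product as loss_grad.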