Beyond Worst Case In Machine Learning: The Oracle Model

19/12/2019 - 12:00
Speaker: 
Seminar: 
Location: 
Abstract: 

In recent years there has been an increasing gap between the success of machine learning algorithms and our ability to explain their success theoretically. 

Namely, many of the problems that are solved to a satisfactory degree of precision are computationally hard in the worst case. Fortunately, there are often reasonable assumptions which help us to get around these worst-case impediments and allow us to rigorously analyze heuristics that are used in practice. 

In this talk I will advocate a complementary approach, where instead of explicitly characterizing some desired "niceness" properties of the data, we assume access to an optimization oracle that solves a relatively simpler task. This allows us to identify the sources of hardness and extend our theoretical understanding to new domains. Furthermore, we will show that seemingly innocent (and arguably justifiable) modifications to the oracle can lead to tractable reductions and even bypass hardness results. 
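To make the oracle model concrete, here is a minimal sketch (my own illustration, not the algorithm from the talk) of how an online learner can be built from calls to an offline optimization oracle. The decision set, loss functions, and the Follow-the-Perturbed-Leader strategy below are illustrative assumptions: the oracle only needs to solve the "simpler" offline task of minimizing a cumulative loss over the decision set, and the online learner reduces to repeated oracle calls on perturbed losses.

```python
import random

def make_oracle(decision_set):
    """Offline optimization oracle: given a cumulative loss function,
    return its minimizer over a (here finite, for simplicity) decision set."""
    def oracle(cum_loss):
        return min(decision_set, key=cum_loss)
    return oracle

def ftpl(decision_set, losses, eta=1.0, seed=0):
    """Follow-the-Perturbed-Leader built purely from oracle calls.
    `losses` is the sequence of per-round loss functions; the learner
    never inspects them directly except through the oracle."""
    rng = random.Random(seed)
    oracle = make_oracle(decision_set)
    history = []   # losses observed so far
    plays = []     # points played each round
    for loss in losses:
        # Perturb each candidate's cumulative loss with random noise,
        # then ask the offline oracle for the perturbed minimizer.
        noise = {x: rng.uniform(0.0, eta) for x in decision_set}
        play = oracle(lambda x: sum(l(x) for l in history) - noise[x])
        plays.append(play)
        history.append(loss)   # the loss is revealed after playing
    return plays

# Toy non-convex example: a finite grid and a non-convex loss.
grid = [i / 10.0 for i in range(-20, 21)]
rounds = [lambda x: (x * x - 1.0) ** 2 for _ in range(20)]
played = ftpl(grid, rounds)
```

The point of the sketch is the interface: hardness is isolated inside `oracle`, while the online reduction around it stays simple and analyzable.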

We demonstrate these ideas using the following results: i) An efficient algorithm for non-convex online learning using an optimization oracle. ii) A faster boosting algorithm using a "simple" weak learner. iii) An efficient reduction from online to private learning.


Joint work with Naman Agarwal, Noga Alon, Elad Hazan, and Shay Moran.