I work broadly in machine learning theory and computational learning theory. Here are some examples of my work:
Robust regression.
Can we recover a high-dimensional linear function or a univariate polynomial when a constant fraction of the samples is arbitrarily corrupted? What about half? More than half? We explore these questions and either show that recovery is possible or prove lower bounds.
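As a toy illustration of the below-one-half corruption regime (an illustrative sketch, not the paper's algorithm): ordinary least squares is derailed by a constant fraction of arbitrarily corrupted responses, while L1 (least absolute deviations) regression, solved here by iteratively reweighted least squares, still recovers the true coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true

# Arbitrarily corrupt a constant fraction (20%) of the responses.
k = n // 5
y[:k] = 100 * rng.standard_normal(k)

# Ordinary least squares is thrown off by the corruptions...
w_ls = np.linalg.lstsq(X, y, rcond=None)[0]

# ...while L1 regression, via iteratively reweighted least squares
# (reweight each row by 1/|residual| and re-solve), recovers w_true.
w = w_ls.copy()
for _ in range(50):
    r = np.abs(y - X @ w) + 1e-8   # damped residuals
    W = X / r[:, None]             # rows scaled by 1/|residual|
    w = np.linalg.lstsq(W.T @ X, W.T @ y, rcond=None)[0]
```

The weighted normal equations solved in each iteration are exactly those of a least-squares problem with weights 1/|residual|, which downweights the grossly corrupted samples.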
Learning one-layer neural networks.
We show that, even when the inputs are drawn from the Gaussian distribution, it is hard to agnostically learn a one-layer neural network. We also show that learning becomes possible if the objective is weakened.
Inverse problems under generative model assumptions.
Can we sample-efficiently recover signals drawn from a 'natural-looking' distribution from their linear measurements? We give an instance-optimal algorithm for this problem when the signals are drawn from the range of a Generative Adversarial Network (GAN).
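A minimal sketch of the setup, with a hypothetical linear 'generator' standing in for a trained GAN (a real generator is a nonlinear network; a linear map keeps the recovery step to a few lines): the signal lies in the low-dimensional range of the generator, so far fewer linear measurements than ambient dimensions suffice.

```python
import numpy as np

rng = np.random.default_rng(1)
k, d, m = 5, 200, 30      # latent dim k << measurements m << ambient dim d

# Stand-in 'generator' mapping a low-dimensional latent code to a signal.
G = rng.standard_normal((d, k))

x_true = G @ rng.standard_normal(k)   # signal in the generator's range
A = rng.standard_normal((m, d))       # only m << d linear measurements
y = A @ x_true

# Recover the latent code by gradient descent on z -> ||A G z - y||^2.
M = A @ G                             # effective m x k measurement map
z = np.zeros(k)
lr = 1.0 / np.linalg.norm(M, 2) ** 2  # step size from the Lipschitz constant
for _ in range(500):
    z -= lr * (M.T @ (M @ z - y))
x_hat = G @ z
```

With m = 30 measurements of a d = 200 dimensional signal, unstructured recovery is impossible, but the k = 5 dimensional generative prior makes the problem well posed.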