Gary Cheng

Welcome.

I am a second-year Electrical Engineering PhD student at Stanford University, advised by Professor John Duchi and supported by the three-year Professor Michael J. Flynn Stanford Graduate Fellowship. I am broadly interested in optimization and statistics. Currently, I am developing methods for personalizing Federated Learning models.


Prior to joining Stanford, as an undergraduate at UC Berkeley, I was fortunate to work with Professors Jean Walrand, Laurent El Ghaoui, and Kannan Ramchandran. I was also fortunate to be a teaching assistant for Data Structures (CS 61B) in Sp'17, Algorithms (CS 170) in Fa'17, and Probability (EE 126) in Sp'18 and Sp'19. In 2019, I was awarded the UC Berkeley Campus Outstanding GSI Award.

Publications.

Karan Chadha*, Gary Cheng*, and John Duchi. "Accelerated, Optimal, and Parallel: Some Results on Model-Based Stochastic Optimization." arXiv preprint.


Hilal Asi*, Karan Chadha*, Gary Cheng*, and John Duchi. "Minibatch Stochastic Approximate Proximal Point Methods." Spotlight Presentation at NeurIPS 2020 (video recording).


Gary Cheng, Kabir Chandrasekher, and Jean Walrand. "Static and Dynamic Appointment Scheduling with Stochastic Gradient Descent." In American Control Conference 2019.


Gary Cheng, Armin Askari, Kannan Ramchandran, and Laurent El Ghaoui. "Greedy Frank-Wolfe Algorithm for Exemplar Selection." Poster at BayLearn 2018.


* denotes equal contribution; authors are ordered alphabetically

If you would like to contact me, please email me at

ude.drofnats@raggnehc



Humor and image by xkcd.com.