Dhruva's Dumb Website


Hi! I’m Dhruva, a fourth-year physics graduate student at UC Berkeley. I’m interested in the learning mechanisms of systems that learn from an external signal, and I study deep neural networks as a scientifically accessible model of such systems. I’m especially interested in the large-learning-rate phenomena associated with feature learning, and I hope to understand why deep learning is more sample-efficient than classical machine learning techniques like kernel machines. In my free time, I enjoy cooking for friends, playing chess, messing around with synthesizers, watercoloring, and going on long walks.

news

Sep 2, 2024 I won a one-year fellowship from Google!
Apr 30, 2024 My tutorial deriving the lazy (NTK) and rich (\(\mu\)P) regimes is on arXiv — check it out!
Jan 16, 2024 Our paper showing that More Is Better in modern ML will be at ICLR 2024 :)
May 29, 2023 Our paper explaining generalization in kernel machines was accepted at TMLR!
May 6, 2022 I won the Teaching Effectiveness Award! :sunglasses: WOO!!!!

selected publications

  1. More is better in modern machine learning: when infinite overparameterization is optimal and overfitting is obligatory
    James B. Simon, Dhruva Karkada, Nikhil Ghosh, and Mikhail Belkin
    2023
  2. The eigenlearning framework: a conservation law perspective on KRR and wide NNs
    James B. Simon, Maddie Dickens, Dhruva Karkada, and Michael R. DeWeese
    2022