research

Deep learning is humanity’s most successful attempt thus far to imitate aspects of human intelligence. Despite fundamental differences between the architectures of deep learning systems and biological brains, deep learning remains a theoretically and experimentally accessible playground for understanding learning as a general, emergent phenomenon. The overarching goal of my research is to probe deep learning systems (using both theoretical tools and numerical experiments) to elucidate and characterize the general properties of systems that learn.

But even as a toy model of learning, deep learning is itself mysterious in many ways. Experiments reveal many interesting behaviors (e.g., feature learning, double descent, neural scaling laws) that are not yet well understood theoretically. On the other hand, idealized models of large neural networks have provable behaviors that are tantalizingly similar to those of real neural networks. My work bridges the gap between experiment and theory. I’m currently using percolation theory to characterize the texture of neural net outputs in input space.
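As a rough illustration of what “texture” means here (a minimal toy sketch, not code from any paper): evaluate a random-weight network on a fine 2D grid of inputs and measure the connected clusters where its output is positive, the kind of cluster statistic percolation theory describes. The architecture, widths, and grid below are arbitrary choices made purely for illustration.

import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(0)

# A toy two-layer network with random Gaussian weights (widths chosen arbitrarily).
W1 = rng.normal(size=(128, 2)) / np.sqrt(2)
W2 = rng.normal(size=(1, 128)) / np.sqrt(128)

def f(x):
    # Scalar output of the random network at inputs x of shape (N, 2).
    return np.tanh(x @ W1.T) @ W2.T

# Evaluate the network on a fine 2D grid of inputs.
grid = np.linspace(-3, 3, 512)
xx, yy = np.meshgrid(grid, grid)
outputs = f(np.stack([xx.ravel(), yy.ravel()], axis=-1)).reshape(xx.shape)

# Percolation-style statistic: connected clusters where the output is positive.
clusters, num_clusters = label(outputs > 0)
sizes = np.bincount(clusters.ravel())[1:]  # drop the background label 0
print(f"{num_clusters} positive-sign clusters; the largest covers "
      f"{sizes.max() / outputs.size:.1%} of the grid")

Repeating this measurement as the grid resolution or network width varies yields the kind of cluster-size statistics that percolation theory is built to characterize.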

My advisor is Michael DeWeese, and I’m affiliated with the Berkeley AI Research group and the Redwood Center for Theoretical Neuroscience.

2022

  1. The eigenlearning framework: a conservation law perspective on KRR and wide NNs
    James B. Simon, Maddie Dickens, Dhruva Karkada, and Michael R. DeWeese
    2022