Ge Yang

Ge grew up on the northern side of Beijing. He graduated from Yale with an undergraduate degree in Physics and Mathematics, and received his Ph.D. in Physics from the University of Chicago. In 2018, Ge shifted his research focus to deep reinforcement learning while visiting Pieter Abbeel at UC Berkeley, followed by research internships at Facebook AI Research with Roberto Calandra and at Google DeepMind with Volodymyr Mnih. Ge is currently a postdoctoral fellow at the NSF AI Institute for Artificial Intelligence and Fundamental Interactions (IAIFI).


Research Interests

Ge’s research involves two sets of related problems. The first is to improve learning by re-examining how knowledge is represented in a neural network and how that knowledge transfers in and out of distribution. The second looks at reinforcement learning through the lens of theoretical tools such as the neural tangent kernel, non-Euclidean geometry, and Hamiltonian dynamics. Ge takes a systems approach to deep learning and deep reinforcement learning: by identifying key bottleneck components and procedures, small changes can lead to large downstream impact.

Open-source Tools

Ge believes the best way to effect change is to build tools. Here are a few of the 40+ open-source packages Ge has published over the years.

  • jaynes - v0.5.25 - cross-provider training utilities [link]
  • ml-logger - v0.4.46 - a distributed logging and visualization dashboard for ML research [link]
  • params-proto - v2.6.0 - a singleton design pattern for defining ML model parameters (see the sketch after this list) [link]
  • CommonMark X (CMX) - v2.6.0 - a modern replacement for Jupyter notebooks, Python to Markdown [link]
  • luna - v1.6.3 - an RxJS implementation of Redux, written in TypeScript [link]
  • luna-saga - v6.2.1 - a coroutine runner for luna that enables generator-based async flow [link]
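
To illustrate the singleton-style parameter pattern that params-proto describes, here is a minimal sketch in plain Python. It does not use the params-proto API; the class name Args, the fields, and the override helper are all hypothetical.

    # A minimal sketch of a parameter singleton; NOT the params-proto API.
    # All modules that import Args read and write the same class-level values.
    class Args:
        """Shared hyperparameter namespace (illustrative names and defaults)."""
        seed: int = 42
        lr: float = 3e-4
        env_name: str = "HalfCheetah-v4"

    def override(updates: dict):
        """Apply command-line or sweep overrides onto the shared class."""
        for key, value in updates.items():
            assert hasattr(Args, key), f"unknown parameter: {key}"
            setattr(Args, key, value)

    if __name__ == "__main__":
        override({"lr": 1e-3})
        print(Args.lr)  # 0.001, visible everywhere Args is imported

Because the parameters live on a single shared class rather than being passed around as dictionaries, any component can import Args and see the same, centrally overridable configuration.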

Recent Papers