Dacheng Li

I am a CS PhD student in EECS at UC Berkeley, fortunately advised by Prof. Ion Stoica and Prof. Joseph Gonzalez, and affiliated with LMSYS, Sky, and BAIR. I obtained my master's degree in Machine Learning at CMU with Prof. Eric Xing and Prof. Hao Zhang. I obtained my undergraduate degree with double majors in Computer Science and Mathematics at UC San Diego with Prof. Zhuowen Tu. I also work closely with Prof. Song Han (MIT).

I study Machine Learning in the context of modeling performance, scaling, system efficiency, framework usability, and theoretical support. My goal is to develop and support performant models at scale, and to provide easy-to-use frameworks that facilitate the deployment of intelligence in the real world. I am currently working on algorithms and systems for LLMs and diffusion models.

Also check out my girlfriend's webpage. She is a great CS PhD student at UW.

Google Scholar  /  GitHub  /  Resume  /  SoP  /  Twitter

  • 2024-06 Joined NVIDIA as a research intern, working on generative models.
  • 2024-05 Chatbot Arena is accepted to ICML 2024.
  • 2024-03 VTC is accepted to OSDI 2024.
  • 2024-02 S-LoRA and MCBench are accepted to MLSys 2024.
  • 2023-10 Released LightSeq, a long-context distributed training kernel.
  • 2023-09 The official paper of Vicuna is accepted at NeurIPS 2023.
  • 2023-08 Joined Google as a student researcher, working on LLM evaluation.
  • 2023-06 Released LongChat, a series of long-context models and evaluation toolkits.
  • 2023-04 Released FastChat-T5.
  • 2023-01 MPCFormer is accepted at ICLR 2023 as a spotlight.
  • 2022-12 A proposal on secure LLM serving is accepted by the Amazon Research Awards.
  • 2022-10 AMP is accepted at NeurIPS 2022.
  • 2021-03 DC-VAE is accepted at CVPR 2021.