About me

I am a Ph.D. student in Computer Science at the University of Massachusetts Amherst, advised by Prof. Justin Domke.

My research interests lie in:

  • Variational Inference and Stochastic Optimization
  • Generative Models, with a focus on GFlowNets

My interests center on the theoretical aspects of Variational Inference, particularly high-capacity inference methods, and on the construction of generative models such as GFlowNets. I also collaborate with several research groups, including DiffEqML, which has deepened my interest in probabilistic machine learning. Before joining UMass, I completed my Master’s in Industrial & Systems Engineering at KAIST under the supervision of Prof. Woo Chang Kim and received my undergraduate degree from Yonsei University.

Selected Publications

Layer-Adaptive State Pruning for Deep State Space Models
Minseon Gwak, Seongrok Moon, Joohwan Ko, PooGyeon Park.
38th Annual Conference on Neural Information Processing Systems (NeurIPS 2024)

Provably Scalable Black-Box Variational Inference with Structured Variational Families
Joohwan Ko*, Kyurae Kim*, Woo Chang Kim, Jacob R. Gardner.
41st International Conference on Machine Learning (ICML 2024)

Learning to Scale Logits for Temperature-Conditional GFlowNets
Minsu Kim*, Joohwan Ko*, Taeyoung Yun*, Dinghuai Zhang, Ling Pan, Woo Chang Kim, Jinkyoo Park, Emmanuel Bengio, Yoshua Bengio.
41st International Conference on Machine Learning (ICML 2024)

Demystifying SGD with Doubly Stochastic Gradients
Kyurae Kim, Joohwan Ko, Yi-An Ma, Jacob R. Gardner.
41st International Conference on Machine Learning (ICML 2024)

Enhancing Topological Dependencies in Spatio-Temporal Graphs with Cycle Message Passing Blocks
Minho Lee, Yun Young Choi, Sun Woo Park, Seunghwan Lee, Joohwan Ko, Jaeyoung Hong.
The Third Learning on Graphs Conference (LoG 2024)

Multilevel approach to efficient gradient calculation in stochastic systems
Joohwan Ko, Michael Poli, Stefano Massaroli, Woo Chang Kim.
ICLR 2023 Workshop on Physics for Machine Learning

(* denotes equal contribution)