About me

I am a first-year Ph.D. student in Computer Science at the University of Massachusetts Amherst, advised by Prof. Justin Domke. My research focuses on advancing probabilistic machine learning through Bayesian inference and deep generative models.

I’m interested in improving approximate inference by developing scalable, high-capacity variational inference methods built on flow-based models. I’m also exploring deep generative models such as Generative Flow Networks (GFlowNets) for scientific discovery. In addition, I’m working on leveraging diffusion models for approximate inference and on improving their training methods.

I am actively engaged in research with several groups, including DiffEqML, where I deepened my interest in probabilistic machine learning. Prior to joining UMass, I completed my Master’s in Industrial & Systems Engineering at KAIST under the supervision of Prof. Woo Chang Kim and earned my undergraduate degree at Yonsei University.

Selected Publications

Learning to Scale Logits for Temperature-Conditional GFlowNets
Minsu Kim*, Joohwan Ko*, Taeyoung Yun*, Dinghuai Zhang, Ling Pan, Woo Chang Kim, Jinkyoo Park, Emmanuel Bengio, Yoshua Bengio
41st International Conference on Machine Learning (ICML 2024)

Provably Scalable Black-Box Variational Inference with Structured Variational Families
Joohwan Ko*, Kyurae Kim*, Woo Chang Kim, Jacob R. Gardner
41st International Conference on Machine Learning (ICML 2024)

Demystifying SGD with Doubly Stochastic Gradients
Kyurae Kim, Joohwan Ko, Yian Ma, Jacob R. Gardner
41st International Conference on Machine Learning (ICML 2024)

Layer-Adaptive State Pruning for Deep State Space Models
Minseon Gwak, Seongrok Moon, Joohwan Ko, PooGyeon Park
38th Annual Conference on Neural Information Processing Systems (NeurIPS 2024)

Enhancing Topological Dependencies in Spatio-Temporal Graphs with Cycle Message Passing Blocks
Minho Lee, Yun Young Choi, Sun Woo Park, Seunghwan Lee, Joohwan Ko, Jaeyoung Hong
The Third Learning on Graphs Conference (LOG 2024)

Multilevel Approach to Efficient Gradient Calculation in Stochastic Systems
Joohwan Ko, Michael Poli, Stefano Massaroli, Woo Chang Kim
Workshop on Physics for Machine Learning at the 11th International Conference on Learning Representations (ICLR 2023 Workshop)

(* denotes equal contribution)