About me

I am a second-year Ph.D. student in Computer Science at the University of Massachusetts Amherst, advised by Prof. Justin Domke. My research focuses on diffusion and flow-based generative models, Bayesian inference, and AI for Science, with applications in protein design.

I’m interested in developing novel diffusion and flow-based generative models for probabilistic machine learning, with a particular emphasis on improving approximate inference through scalable variational methods. I’m also exploring how these models can be leveraged for scientific discovery, especially in protein design and other molecular applications. My work ranges from the theoretical foundations of Bayesian inference to practical applications of generative models in scientific domains. In summer 2025, I worked as an AI Research Intern at Flagship Pioneering, developing methods for protein binder design.

Prior to joining UMass, I completed my Master’s in Industrial & Systems Engineering at KAIST under the supervision of Prof. Woo Chang Kim and earned my undergraduate degree at Yonsei University.

Selected Publications

Model Informed Flows for Bayesian Inference of Probabilistic Programs
Joohwan Ko, Justin Domke
39th Annual Conference on Neural Information Processing Systems (NeurIPS 2025)

Latent Target Score Matching, with an application to Simulation-Based Inference
Joohwan Ko, Tomas Geffner
MLPS Workshop at the 39th Annual Conference on Neural Information Processing Systems (NeurIPS 2025)

Relaxed Sequence Sampling for Diverse Protein Design
Joohwan Ko, Aristofanis Rontogiannis, Yih-En Andrew Ban, Axel Elaldi, Nicholas Franklin
Machine Learning in Structural Biology (MLSB 2025)

Learning to scale logits for temperature-conditional GFlowNets
Minsu Kim*, Joohwan Ko*, Taeyoung Yun*, Dinghuai Zhang, Ling Pan, Woo Chang Kim, Jinkyoo Park, Emmanuel Bengio, Yoshua Bengio
41st International Conference on Machine Learning (ICML 2024)

Provably Scalable Black-Box Variational Inference with Structured Variational Families
Joohwan Ko*, Kyurae Kim*, Woo Chang Kim, Jacob R. Gardner
41st International Conference on Machine Learning (ICML 2024)

Demystifying SGD with Doubly Stochastic Gradients
Kyurae Kim, Joohwan Ko, Yian Ma, Jacob R. Gardner
41st International Conference on Machine Learning (ICML 2024)

Layer-Adaptive State Pruning for Deep State Space Models
Minseon Gwak, Seongrok Moon, Joohwan Ko, PooGyeon Park
38th Annual Conference on Neural Information Processing Systems (NeurIPS 2024)

Enhancing Topological Dependencies in Spatio-Temporal Graphs with Cycle Message Passing Blocks
Minho Lee, Yun Young Choi, Sun Woo Park, Seunghwan Lee, Joohwan Ko, Jaeyoung Hong
The Third Learning on Graphs Conference (LoG 2024)

Multilevel approach to efficient gradient calculation in stochastic systems
Joohwan Ko, Michael Poli, Stefano Massaroli, Woo Chang Kim
Workshop on Physics for Machine Learning at the 11th International Conference on Learning Representations (ICLR 2023)

(* denotes equal contribution)