Jeffrey G. Wang

I analyze language, vision, and multimodal foundation models through both empirics and theory, with particular interest in security/privacy/fairness and in planning/behavior/control in these systems. My Google Scholar is linked here.

Publications¹

Kuchlous, Sahil*, Marvin Li*, and Jeffrey G. Wang*. “Bias Begets Bias: the Impact of Biased Embeddings on Diffusion Models.” Trustworthy Multimodal Foundation Models and Agents (TiFA) Workshop, ICML 2024. [Paper Link]

Li, Marvin*, Jason Wang*, Jeffrey G. Wang*, and Seth Neel. “MoPe: Model Perturbation-based Privacy Attacks on Language Models.” Empirical Methods in Natural Language Processing (2023). Also featured at the NeurIPS SoLaR Workshop. [Paper Link]

Chakraborty, Abhijit*, Jeffrey G. Wang*, and Ferhat Ay. “dcHiC detects differential compartments across multiple Hi-C datasets.” Nature Communications (2022). Also featured as an oral presentation in the Regulatory and Systems Genomics track at ISMB and as a poster at RECOMB. [Paper Link] [Open-Source Library]

Details. I wrote a 3000-word exposition of my work on dcHiC, building from biological and computational first principles, here. For this work, I was a national finalist in the 2021 Regeneron Science Talent Search.

A Sketch of my Research Philosophy

A unifying theme of my research is computation. I find it incredible that we can just throw more and more of the right compute toward approximating some function undergirding the mystery we wish to solve, and usually that mystery shatters.

In my opinion, the best research consists of theoretically grounded work that is empirically validated through capable engineering. In our current explosion of machine learning research, this means that I particularly value building robust systems, creating efficient implementations, and writing clean code.

In general, I take a systems approach toward epistemology. I love building knowledge by repeatedly depth-first searching into new topics, gleaning as much as I can, and building infrastructure for maintaining and revisiting that knowledge. In research, I enjoy approaches that synthesize techniques across several domains, and often find, in the process of DFS’ing, that hidden structure emerges between seemingly disparate areas.


  1. * denotes equal contribution. ↩︎