Machine Unlearning via Information-Theoretic Regularization
Conference talk, Joint Mathematics Meetings (JMM), TBA
Talk (to appear). Mathematical framing of unlearning objectives and practical verification-oriented evaluation.
Machine Unlearning via Information-Theoretic Regularization
Workshop talk, AIM Workshop on 'Fairness and Foundations in Machine Learning', Pasadena, CA (TBA)
Talk (to appear). Overview of information-theoretic regularization for machine unlearning, with emphasis on auditability and evaluation.
Machine Unlearning via Information-Theoretic Regularization
Conference talk, INFORMS Annual Meeting, TBA
Conference talk connecting unlearning goals to principled regularization and measurable evaluation pipelines.
WHOMP: Improving Upon Randomized Controlled Trials via Wasserstein Homogeneity
Workshop talk, ICLR Workshop, TBA
Workshop talk on WHOMP: optimality criteria and algorithms for subgroup splitting in comparative experiments.
WHOMP: Improving Upon Randomized Controlled Trials via Wasserstein Homogeneity
Spotlight, Conference on Parsimony and Learning (CPAL), Spotlight Track, Stanford, CA
Spotlight talk presenting WHOMP and its advantages over classical randomization and rerandomization baselines.
WHOMP: Improving Upon Randomized Controlled Trials via Wasserstein Homogeneity
Seminar, Math of Data & Decisions (MADDD) Seminar, UC Davis, Davis, CA
Seminar talk introducing WHOMP, theory/algorithms, and empirical comparisons for experimental design.
Machine Unlearning via Information-Theoretic Regularization
Seminar, Applied Math Seminar, University of Utah, Salt Lake City, UT
Seminar talk on information-theoretic unlearning: objectives, algorithms, and empirical evaluation considerations.
Machine Unlearning for Scientific Discovery
Invited talk, SLAC Users Meeting, Stanford University, Stanford, CA
Invited talk on why unlearning matters for scientific workflows (data governance, model updates, and reliability), and how to evaluate it.
Fair Data Representation for Machine Learning at the Pareto Frontier
Invited talk, Computational Harmonic Analysis in Data Science and Machine Learning (CMO–BIRS workshop), Casa Matemática Oaxaca, Oaxaca, Mexico
Invited talk on Pareto-frontier methods for fair data representation with provable trade-offs.
Fair Data Representation for Machine Learning at the Pareto Frontier
Conference presentation, International Conference on Machine Learning (ICML)
Conference presentation of the JMLR work on fair data representation via Pareto-frontier trade-offs.
Fair Data Representation for Machine Learning at the Pareto Frontier
Workshop talk, Explainable AI for the Sciences Workshop, IPAM (UCLA), Los Angeles, CA
Workshop talk on provable fairness–utility trade-offs and algorithmic construction along the Pareto frontier.
Fairness in Machine Learning
Invited talk, Inclusivity, Equity, and Ethics in Research and Data Science, UC Davis, Davis, CA
Invited talk introducing core fairness notions, practical pitfalls, and research directions in trustworthy ML.
Fair Data Representation for Machine Learning
Conference talk, SIAM Conference on Mathematics of Data Science (SIAM MDS)
Talk on fairness objectives and data pre-processing approaches for controlling fairness–accuracy trade-offs.