Shizhou Xu

Hi — I’m Shizhou Xu

I’m a Postdoctoral Scholar at UC Davis working with Prof. Thomas Strohmer on the mathematical foundations of machine learning and artificial intelligence. I earned my Ph.D. in Applied Mathematics (UC Davis, 2024) with a dissertation on fairness in machine learning via optimal transport.

My research focuses on the mathematical foundations of AI (e.g., neural network architecture analysis, physics-informed and data-driven ML/AI), AI for Science (e.g., scientific data analysis, ML/AI-driven inverse solvers, autonomous experiments for scientific discovery), and trustworthy AI (e.g., performance guarantees, generalizability, robustness, unlearning, fine-tuning, privacy, and fairness), all approached through a stochastic-dynamics view of learning.

I’m currently seeking collaborations where theory-driven ML can translate into measurable real-world impact.


Research snapshot

  • Trustworthy ML/AI: fairness, privacy, robustness, and machine unlearning (theory → algorithms → evaluation).
  • Stochastic dynamics: probability, optimal transport, stochastic processes, and ergodic theory.
  • Statistics for ML: statistical learning, information theory, uncertainty quantification, and unsupervised learning.

Highlights

  • Fairness theory: leveraged optimal transport to resolve multiple open questions in ML fairness (a sketch of the core objective appears after this list).
  • Industry impact: my work has been referenced in external guidance on fairness mitigation in financial transactions, and my OT-based trial-design method has been adopted in a large clinical trial.
  • IP: patent application pending on marginal-information regularization for LLM unlearning.
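
For a flavor of the optimal-transport machinery: here is a minimal sketch of the Wasserstein-2 barycenter objective behind the fair-representation result (notation is illustrative; see the JMLR paper for the precise setting). Writing \mu_z for the conditional distribution of the data given sensitive attribute z, and w_z for the group weights, the optimal fair representation is characterized via the barycenter

    \nu^\ast \in \arg\min_{\nu} \sum_{z} w_z \, W_2^2(\mu_z, \nu),

where W_2 denotes the Wasserstein-2 distance. Transporting each \mu_z onto \nu^\ast yields a representation independent of z at minimal L2 utility cost, which is what places the construction on the Pareto frontier.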

Selected publications

  • Shizhou Xu, Thomas Strohmer.
    Fair Data Representation for Machine Learning at the Pareto Frontier. JMLR (2023).
    [PDF]

  • Shizhou Xu, Thomas Strohmer.
    WHOMP: Improving Upon Randomized Controlled Trials via Wasserstein Homogeneity. Under review (JASA).
    [Preprint]

  • Shizhou Xu, Thomas Strohmer.
    On the (In)Compatibility between Individual and Group Fairness. Under review (SIMODS).
    [Preprint]

  • Shizhou Xu, Thomas Strohmer.
    Machine Unlearning via Information-Theoretic Regularization. (manuscript; see the publications page for the latest links).
    [Preprint]

  • Shizhou Xu, Yuan Ni, Stefan Broecker, Thomas Strohmer.
    Forgetting-MarI: LLM Unlearning via Marginal Information Regularization. Under review (ICLR 2026).
    (Patent application pending.)

Full list: /publications/


News & talks

  • 2026 — Talks to appear at the AIM Workshop (Fairness and Foundations in ML) and the Joint Mathematics Meetings.
  • 2025 — Invited/selected talks: University of Utah Applied Math Seminar, INFORMS Annual Meeting, SLAC Users Meeting (Stanford).
  • 2025 — CPAL Spotlight Track and ICLR Workshop presentations on WHOMP.
  • 2024 — Presented at ICML 2024 and CMO–BIRS (Casa Matemática Oaxaca / Banff).
  • 2024 — Yueh-Jing Lin Scholarship (UC Davis).

Contact

Email: shzxu@ucdavis.edu
CV: /files/CV.pdf