Forgetting-MarI: LLM Unlearning via Marginal Information Regularization
Under review at ICLR 2026, 2025
Summary
Large language model (LLM) unlearning requires removing specific data-derived behavior while preserving broad capabilities.
Forgetting-MarI introduces a marginal-information regularization term that operationalizes this goal, with an emphasis on measurable forgetting outcomes and training stability.
Notes
- Patent application pending (as listed on CV).
- Preprint/links: (add when a public/arXiv version is available)
Resources
- Paper: (available on request / add link)
- Code: (add link when public)
Recommended citation: Shizhou Xu, Yuan Ni, Stefan Broecker, Thomas Strohmer. (2025). “Forgetting-MarI: LLM Unlearning via Marginal Information Regularization.” Under review at ICLR 2026.
