Academic Seminar
Loss Landscape and Error Bound Analysis of Regularized Deep Matrix Factorization
Rujun Jiang (江如俊)
(Fudan University)
Time: Monday, January 19, 2026, 14:30-15:30
Venue: Room E806, Shahe Campus
Abstract: Deep matrix factorization (DMF) is a fundamental model underlying many applications, including deep linear neural networks. Despite its simplicity, the regularized DMF problem exhibits a highly nonconvex optimization landscape that is not yet fully understood. In this talk, we analyze the loss landscape and local geometry of regularized deep matrix factorization. We characterize all critical points and identify conditions under which a critical point is a local minimizer, a global minimizer, a strict saddle point, or a non-strict saddle point. We further establish an error bound around the critical point set, which leads to linear convergence guarantees for gradient-based methods. Our results provide theoretical insight into why first-order methods perform well for regularized DMF and offer a unified perspective on the optimization behavior of deep linear networks, an important application.
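As context for the abstract, the sketch below illustrates one common form of a regularized DMF objective and plain gradient descent on it. The Frobenius-norm regularizer, depth, matrix sizes, step size, and iteration count are illustrative assumptions, not necessarily the formulation analyzed in the talk.

```python
# Illustrative sketch (not from the talk): gradient descent on one common
# form of regularized deep matrix factorization,
#   f(W_1,...,W_L) = 0.5*||W_L ... W_1 - Y||_F^2 + 0.5*lam*sum_i ||W_i||_F^2.
# Depth, sizes, regularization weight, and step size below are assumptions.
import numpy as np

rng = np.random.default_rng(0)
m, n, r, L = 20, 15, 5, 3            # output size, input size, width, depth
Y = rng.standard_normal((m, n))      # synthetic target matrix
lam, step, iters = 1e-2, 5e-3, 2000  # regularization weight, step size, iterations

# Factor shapes: W_1 is (r, n), middle factors are (r, r), W_L is (m, r).
shapes = [(r, n)] + [(r, r)] * (L - 2) + [(m, r)]
W = [0.3 * rng.standard_normal(s) for s in shapes]  # W[0]=W_1, ..., W[-1]=W_L

def end_to_end(factors):
    """Return the product of the factors, last @ ... @ first."""
    P = factors[0]
    for F in factors[1:]:
        P = F @ P
    return P

def objective(W):
    res = end_to_end(W) - Y
    return 0.5 * np.sum(res**2) + 0.5 * lam * sum(np.sum(Wi**2) for Wi in W)

print(f"initial objective: {objective(W):.4f}")
for _ in range(iters):
    res = end_to_end(W) - Y
    new_W = []
    for i in range(L):
        # Gradient of the data-fit term w.r.t. W_i is
        # (W_L ... W_{i+1})^T (W_L ... W_1 - Y) (W_{i-1} ... W_1)^T.
        left = end_to_end(W[i + 1:]) if i < L - 1 else np.eye(W[i].shape[0])
        right = end_to_end(W[:i]) if i > 0 else np.eye(W[i].shape[1])
        grad = left.T @ res @ right.T + lam * W[i]
        new_W.append(W[i] - step * grad)
    W = new_W
print(f"final objective:   {objective(W):.4f}")
```

For objectives of this type, the error bound around the critical point set discussed in the talk is the kind of property that yields linear convergence guarantees for such gradient iterations near critical points.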
Speaker Bio: Rujun Jiang is an Associate Professor and doctoral advisor at the School of Data Science, Fudan University. He received his Ph.D. from The Chinese University of Hong Kong in July 2016. His research focuses on optimization algorithms and their theoretical analysis, with applications in machine learning, management science, and related areas. His work has been published in leading international journals in operations research and optimization, including Math. Program., SIAM J. Optim., Math. Oper. Res., and INFORMS J. Comput., as well as in top AI conferences such as ICML, NeurIPS, and ICLR. He has been supported by a national-level young talent program and the Shanghai Sailing Program, and has led Young Scientists Fund and General Program projects of the National Natural Science Foundation of China. He received an ICML 2022 Outstanding Paper Award and has served as an Area Chair for ICML and NeurIPS.
Host: Jiaxin Xie (谢家新)