Academic Seminar - 胡耀华

Academic Seminar


Title: On Convergence of Iterative Thresholding Algorithms to Approximate Sparse Solution for Nonconvex Sparse Optimization



Speaker: Prof. 胡耀华 (Host: 陈艳男)

                                    College of Mathematics and Statistics, Shenzhen University


Time: March 13, 10:00-11:10


Venue: Lecture Hall, West Building, School of Mathematical Sciences

About the speaker:

        胡耀华 is a professor and doctoral supervisor at the College of Mathematics and Statistics, Shenzhen University, an adjunct doctoral supervisor at The Hong Kong Polytechnic University, a recipient of the National Science Fund for Excellent Young Scholars, and a Shenzhen Overseas High-Level Talent. His research focuses on the theory, algorithms, and applications of mathematical optimization, including nonconvex sparse optimization, generalized convex optimization, and composite optimization, with applications to systems biology, computational optics, and transportation science. He has led four projects funded by the National Natural Science Foundation of China and nine provincial or municipal research projects. He has published more than forty papers in international journals such as SIAM Journal on Optimization, Inverse Problems, Journal of Machine Learning Research, European Journal of Operational Research, Transportation Research Part B: Methodological, and Briefings in Bioinformatics, holds three granted Chinese invention patents, and has developed several bioinformatics toolkits and web servers.


Abstract:

Sparse optimization is a popular research topic in applied mathematics and optimization, and nonconvex sparse regularization problems have been extensively studied because they ameliorate the statistical bias and offer robust sparsity-promoting capability in a wide range of applications. However, owing to the nonconvex and nonsmooth structure of these regularization problems, the convergence theory of their optimization algorithms is still far from complete: only convergence to a stationary point has been established in the literature, and there is still no theoretical guarantee of convergence to a global minimum or to a true sparse solution.

This talk aims to find an approximate true sparse solution of an under-determined linear system. For this purpose, we propose two types of iterative thresholding algorithms, with the continuation technique and the truncation technique respectively. We introduce the notion of a limited shrinkage thresholding operator and apply it, together with the restricted isometry property, to show that the proposed algorithms converge to an approximate true sparse solution within a tolerance determined by the noise level and the limited shrinkage magnitude. Applying the obtained results to nonconvex regularization problems with the SCAD, MCP and Lp penalties, and utilizing the recovery bound theory, we establish the convergence of their proximal gradient algorithms to an approximate global solution of the nonconvex regularization problems. The established results include, as special cases, the existing convergence theory for L1 and L0 regularization problems for finding a true sparse solution. Preliminary numerical results show that the proposed algorithms find approximate true sparse solutions that are much better than the stationary points found by the standard proximal gradient algorithm.
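
For readers less familiar with this class of methods, the sketch below illustrates the generic iterative thresholding template in Python, using the convex L1 penalty (whose proximal step is the classical soft-thresholding operator) and synthetic data. It is only an assumed minimal example of the algorithmic template, not the speaker's continuation or truncation algorithms, and the limited shrinkage thresholding operators studied in the talk are not reproduced here.

    # Minimal generic sketch: iterative soft-thresholding (ISTA) for
    #   min_x  0.5 * ||A x - b||^2 + lam * ||x||_1
    # Illustrative only; problem sizes, penalty and data below are assumptions,
    # not taken from the talk.
    import numpy as np

    def soft_threshold(x, tau):
        """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def ista(A, b, lam, num_iters=500):
        """Proximal gradient (iterative thresholding) iteration for the L1 problem."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2      # step <= 1/L, L = ||A||_2^2
        x = np.zeros(A.shape[1])
        for _ in range(num_iters):
            grad = A.T @ (A @ x - b)                # gradient of the smooth data-fit term
            x = soft_threshold(x - step * grad, step * lam)  # thresholding (proximal) step
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        m, n, k = 60, 200, 5                        # under-determined system, k-sparse truth
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        b = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy measurements
        x_hat = ista(A, b, lam=0.02)
        print("recovery error:", np.linalg.norm(x_hat - x_true))

The talk's algorithms follow the same gradient-step-then-threshold pattern but replace the soft-thresholding operator with nonconvex (e.g. SCAD, MCP or Lp) thresholding operators and add continuation or truncation strategies to reach an approximate true sparse solution.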