
勷勤数学 · Expert Lecture


Title: Approximation via Function Composition for Deep Neural Networks


Speaker: Prof. 张海樟   (Host: 钟柳强)

                                       Sun Yat-sen University


Time: October 15, 16:30-17:30


Venue: Room 401, East Building, School of Mathematical Sciences


About the speaker:

        张海樟 received his B.S. from the Department of Mathematics, Beijing Normal University, in 2003, his M.S. from the Institute of Mathematics, Chinese Academy of Sciences, in 2006, and his Ph.D. from the Department of Mathematics, Syracuse University, in 2009. From June 2009 to May 2010 he was a postdoctoral researcher at the University of Michigan. Since June 2010 he has been a professor and doctoral advisor at Sun Yat-sen University.

His research interests include learning theory, applied harmonic analysis, and function approximation. Representative results include a Weierstrass approximation theorem for reproducing kernels, a convergence theory for deep neural networks, and the theory of reproducing kernel Banach spaces. He has published original work in Journal of Machine Learning Research, Applied and Computational Harmonic Analysis, Mathematics of Computation, Neural Networks, Neural Computation, Neurocomputing, Constructive Approximation, and the IEEE Transactions series, with his most-cited single paper receiving more than 360 citations by others. He has been the principal investigator of a number of national and provincial/ministerial grants, including the National Science Fund for Excellent Young Scholars.



Abstract:

        Deep neural networks use consecutive compositions of an activation function with linear functions to represent complex multivariate target functions in machine learning. A popular technique for analyzing the expressive power of ReLU networks is based on an identity that allows the square function $x^2$ and the multiplication operator $xy$ to be approximated at an exponential rate by self-compositions of the sawtooth function. It is natural to ask whether such a technique can be built for neural networks with other activation functions, and whether a faster approximation order can be achieved. To this end, we first discover a hidden functional composition equation between the square function and the sawtooth function. By exploring this functional equation, we propose new continuously differentiable activation functions and compositional neural networks that approximate polynomials exponentially and analytic functions sub-exponentially.
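
        For readers unfamiliar with the sawtooth technique mentioned in the abstract, the short Python/NumPy sketch below (illustrative only; it is not part of the talk and not the speaker's new construction, and its function names are chosen here for exposition) checks the classical identity $x^2 = x - \sum_{k\ge 1} g_k(x)/4^k$ on $[0,1]$, where $g$ is the tent (sawtooth) map and $g_k$ its $k$-fold self-composition; truncating the sum after $m$ terms leaves a uniform error of at most $4^{-(m+1)}$, which is the exponential rate referred to above.

# Minimal sketch of the sawtooth identity for approximating x^2 on [0, 1]:
#     x^2 = x - sum_{k >= 1} g_k(x) / 4^k,
# where g is the tent (sawtooth) map and g_k its k-fold self-composition.
# Truncating after m terms gives a uniform error of at most 4^{-(m+1)}.
import numpy as np

def sawtooth(x):
    # Tent map on [0, 1]: rises to 1 at x = 1/2, returns to 0 at x = 1.
    return np.where(x <= 0.5, 2.0 * x, 2.0 * (1.0 - x))

def square_approx(x, m):
    # Approximate x**2 by x - sum_{k=1}^{m} g_k(x) / 4**k.
    out = np.array(x, dtype=float).copy()
    g = out.copy()
    for k in range(1, m + 1):
        g = sawtooth(g)              # g_k = g composed with itself k times
        out -= g / 4.0 ** k
    return out

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 10001)
    for m in (1, 2, 4, 8):
        err = np.max(np.abs(square_approx(x, m) - x ** 2))
        bound = 4.0 ** -(m + 1)
        print(f"m = {m:2d}   max error = {err:.3e}   bound 4^-(m+1) = {bound:.3e}")

        Since each sawtooth tooth can be written with a small, fixed number of ReLU units, the $m$-fold self-composition corresponds to a network whose depth grows only linearly in $m$, which is why the error bound $4^{-(m+1)}$ is exponential in the depth.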

        

         All faculty members and students are welcome to attend and exchange ideas!