
勷勤数学 • Expert Lecture


Title: Zero-Sum Stochastic Linear-Quadratic Differential Games: Stackelberg Equilibria versus Nash Equilibria


Speaker: Hanxiao Wang (王寒霄), Assistant Professor    (Host: Zhou Yang 杨舟)

                                   Shenzhen University


Time: July 18, 16:00-17:00


Venue: Conference Room, 2nd Floor, West Building


About the Speaker:

       Hanxiao Wang (王寒霄) is an Assistant Professor at Shenzhen University. He received his bachelor's degree from Jilin University in 2014 and his Ph.D. from Fudan University in 2020 under the supervision of Prof. Jiongmin Yong (雍炯敏), during which he visited the University of Central Florida in the United States for nearly two years. He then held a postdoctoral position in the Department of Mathematics at the National University of Singapore and has been with Shenzhen University since April 2022. His research interests include time inconsistency, stochastic Volterra integral equations, path-dependent PDEs, and linear-quadratic problems; he has published 14 papers in journals such as J. Math. Pures Appl., SIAM J. Control Optim., Finance Stoch., Ann. Inst. Henri Poincaré Probab. Stat., and J. Differential Equations. He is currently the principal investigator of one national-level project, one provincial-level project, and two municipal-level projects. He has been funded by the Shenzhen Outstanding Scientific and Technological Innovation Talents program, has been selected for Shenzhen's "Pengcheng Peacock" Plan, and independently received the Stochastics and Dynamics Best Paper Award for 2021.


Abstract:

       This talk is concerned with the relationship between zero-sum Stackelberg and zero-sum Nash stochastic linear-quadratic (LQ, for short) differential games over a finite horizon. Under a fairly weak condition, the Stackelberg equilibrium is obtained explicitly by first solving a forward stochastic LQ optimal control problem (SLQ problem, for short) and then a backward SLQ problem. Two Riccati equations are derived for constructing the Stackelberg equilibrium. An interesting finding is that the difference between these two Riccati equations coincides with the Riccati equation associated with the zero-sum Nash stochastic LQ differential game, which implies that, under a uniform convexity-concavity condition, the Stackelberg equilibrium and the Nash equilibrium are actually identical. Consequently, the Stackelberg equilibrium admits a linear state feedback representation, and the Nash game can be solved in a leader-follower manner. Finally, we provide a method for finding the Nash equilibrium (if it exists) under the weakest condition. We highlight that, with this method, the associated coupled FBSDEs (i.e., the optimality system) can always be decoupled.
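
       For orientation, a generic zero-sum stochastic LQ differential game from the standard literature can be sketched as follows; time arguments are suppressed, inhomogeneous and cross-weighting terms are dropped for brevity, and the precise formulation and conditions used in the talk may differ. Writing u = (u_1, u_2) for the controls of the two players, with B = (B_1, B_2) and D = (D_1, D_2), the state equation and the common cost are

\[
dX(t) = \big[A X(t) + B u(t)\big]\,dt + \big[C X(t) + D u(t)\big]\,dW(t), \qquad X(0) = x,
\]
\[
J(x; u_1, u_2) = \mathbb{E}\Big[\langle G X(T), X(T)\rangle + \int_0^T \big(\langle Q X(t), X(t)\rangle + \langle R\,u(t), u(t)\rangle\big)\,dt\Big],
\]

which Player 1 minimizes over u_1 and Player 2 maximizes over u_2; the uniform convexity-concavity condition mentioned above means that u_1 \mapsto J is uniformly convex and u_2 \mapsto J is uniformly concave. In this setting, the Riccati equation associated with the Nash (saddle-point) problem reads

\[
\dot{P} + P A + A^{\top} P + C^{\top} P C + Q
- \big(P B + C^{\top} P D\big)\big(R + D^{\top} P D\big)^{-1}\big(B^{\top} P + D^{\top} P C\big) = 0,
\qquad P(T) = G,
\]

and a solution P yields a linear state feedback of the kind referred to in the abstract,

\[
u^{*}(t) = -\big(R + D^{\top} P(t) D\big)^{-1}\big(B^{\top} P(t) + D^{\top} P(t) C\big)\, X^{*}(t).
\]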


          Faculty and students are warmly welcome to attend and exchange ideas!