Rangqin (勷勤) Mathematics • Expert Lecture
Title: A Tail-risk Sensitive Reinforcement Learning Approach for Option Hedging
Speaker: Associate Professor Xianhua Peng (彭献华) (Host: Zhou Yang)
Peking University
Time: September 11, 16:00–17:00
Venue: Conference Room, 2nd Floor, West Building, School of Mathematical Sciences
About the Speaker:
Xianhua Peng is a tenured Associate Professor at the Peking University HSBC Business School and a doctoral advisor in Financial Engineering; he also serves as an interdisciplinary doctoral advisor at Peking University in Electronic Information. His main research interests include financial engineering, fintech, quantitative trading and investment, machine learning, financial regulation, and risk management. His papers have been published in top academic journals such as Management Science, Operations Research, and Mathematics of Operations Research. He received a B.S. in Information Science in 2000 and an M.S. in Applied Mathematics in 2003 from the School of Mathematical Sciences, Peking University, and a Ph.D. in Operations Research (Financial Engineering track) from the Department of Industrial Engineering and Operations Research, Columbia University, in 2009. Together with Steven Kou, he proposed what practitioners call the "Peng-Kou model" for the pricing and risk management of derivatives such as credit default swaps (CDS) and CDS options; the model has been applied in the derivatives business of leading financial institutions including UBS. Section 15.5 of the book Modern Derivatives Pricing and Credit Exposure Analysis by Roland Lichters, Roland Stamm, and Donal Gallagher discusses in detail the advantages of the Peng-Kou model over alternative models.
Abstract:
We propose a new risk-sensitive reinforcement learning approach for the dynamic hedging of options and other derivatives. The approach focuses on minimizing the tail risk of the final P&L of the option seller. Unlike most existing reinforcement learning approaches, which require a parametric model of the underlying asset, our approach can learn the optimal hedging strategy directly from historical market data without specifying a parametric model. It provides an efficient way to incorporate tail-risk measures of the final P&L into the traditional reinforcement learning framework in a model-free setting. We carry out a comprehensive empirical study showing that, in out-of-sample tests, the proposed reinforcement learning hedging strategy achieves statistically significantly lower tail risk and a higher mean of the final P&L than traditional delta-hedging methods. This is joint work with Xiang Zhou and Bo Xiao at City University of Hong Kong and Yi Wu at Peking University.
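As a purely illustrative sketch (the abstract does not specify which tail-risk measure is used, and the notation below is not the speaker's), one standard way to formalize such an objective is to minimize the conditional value-at-risk (CVaR) of the seller's final loss L = -P&L under the hedging policy, via the Rockafellar–Uryasev representation at tail level α:

\[
\min_{\pi}\;\mathrm{CVaR}_{\alpha}\!\left(L_T^{\pi}\right),
\qquad
\mathrm{CVaR}_{\alpha}(L)=\min_{c\in\mathbb{R}}\left\{\,c+\frac{1}{1-\alpha}\,\mathbb{E}\!\left[(L-c)^{+}\right]\right\}.
\]

In this generic formulation, the inner minimization makes the tail objective amenable to sample-based estimation, since the expectation can be approximated from historical hedging episodes without a parametric model of the underlying asset.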
Faculty and students are warmly welcome to attend and exchange ideas!