emlyon faculty: Pr. Bertrand Maillet 

in collaboration with: Dr. Michele Costola, Economics Department, Ca’ Foscari University of Venice, Italy, and Pr. Massimiliano Caporin, Department of Statistical Sciences, University of Padova, Italy.

 

After the major financial crisis of 2008, several systemic risk measures were proposed in the financial literature to quantify the magnitude of financial system distress. In this ongoing project, we propose the construction of a novel overall meta-index for the measurement of systemic risk, based on an AI/ML technique called Sparse Principal Component Analysis applied to the main systemic risk measures. The ultimate aim is to provide an index with a well-understood dynamic and proven explicit links to the stress of the financial system and to future severe economic recessions, with dedicated attention to the aftermath of the ongoing COVID-19 crisis.
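As a rough illustration of the technique only (not the project's actual data or specification), the Python sketch below builds a one-component sparse PCA index from a simulated panel of risk measures; all series, dimensions, and penalty values are placeholder assumptions.

# Illustrative only: simulated data, placeholder parameters.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(42)

# Simulated monthly panel: T observations of k systemic risk measures,
# all driven by one latent stress factor plus idiosyncratic noise.
T, k = 240, 10
stress = np.cumsum(rng.normal(size=T))
loadings = rng.uniform(0.3, 1.0, size=k)
panel = np.outer(stress, loadings) + rng.normal(scale=2.0, size=(T, k))

# Standardize each measure, then extract one sparse component.
# The l1 penalty (alpha) zeroes out measures with little common content.
X = StandardScaler().fit_transform(panel)
spca = SparsePCA(n_components=1, alpha=1.0, random_state=0)
meta_index = spca.fit_transform(X).ravel()

print("non-zero loadings:", int(np.count_nonzero(spca.components_)))
# The sign of the component is arbitrary, hence the absolute correlation.
print("|corr| with latent stress:", abs(np.corrcoef(meta_index, stress)[0, 1]))

The sparsity constraint is what distinguishes the meta-index from a plain first principal component: measures that carry little common stress information receive exactly zero weight, which keeps the resulting index interpretable.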

 

Predictive regressions: a machine learning perspective

emlyon faculty: Dr. Guillaume Coqueret

in collaboration with: Mr. Romain Deguest (independent)

We characterize the quadratic loss that occurs when forecasting an autocorrelated process from its relationship with another autocorrelated process. When the predictions are based on sample coefficients, we link the accuracy to three key quantities: the persistence of the underlying series, the forecasting horizon, and the sample size. Their impacts offset one another, which creates a memory trade-off. Surprisingly, we show that choosing predictors with high autocorrelation often leads to lower performance, especially when the autocorrelation of the predicted variable is high. We confirm our results with an empirical study on the S&P 500 using a set of 15 predictors popular in the literature.
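A minimal simulation sketch of this setup (with illustrative parameter values, not those of the paper): both the predictor and the target are AR(1) processes, the slope is estimated by OLS on a training window of limited size, and the quadratic loss is computed out of sample while the predictor's persistence varies.

# Illustrative simulation only: the link strength, persistence values,
# and sample sizes below are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(0)

def ar1(rho, n):
    """Simulate an AR(1) path of length n with unit-variance innovations."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal()
    return x

def oos_loss(rho_x, rho_y, n_train=60, n_test=500, horizon=1):
    """Out-of-sample squared error of a fitted predictive regression."""
    n = n_train + n_test + horizon
    x = ar1(rho_x, n)
    # The target has its own persistence plus a link to the predictor.
    y = ar1(rho_y, n) + 0.5 * x
    # Fit y_{t+h} = a + b * x_t on the training window only.
    xt, yt = x[:n_train], y[horizon:n_train + horizon]
    b, a = np.polyfit(xt, yt, 1)
    # Evaluate the quadratic loss out of sample.
    xs = x[n_train:n_train + n_test]
    ys = y[n_train + horizon:n_train + n_test + horizon]
    return np.mean((ys - (a + b * xs)) ** 2)

for rho_x in (0.1, 0.5, 0.9):
    losses = [oos_loss(rho_x, rho_y=0.9) for _ in range(200)]
    print(f"predictor persistence {rho_x}: mean loss {np.mean(losses):.2f}")

Varying n_train and horizon in the same loop exposes the memory trade-off numerically: the three quantities interact rather than contribute independently to the loss.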

 

Persistence in factor-based supervised learning models

emlyon faculty: Dr. Guillaume Coqueret 

In this paper, we document the importance of memory in machine learning (ML)-based models relying on firm characteristics for asset pricing. We come to three empirical conclusions. First, the pure out-of-sample fit of the models can be mediocre: we find that some R^2 measures are negative, especially when training samples are short. Second, we show that poor fit does not necessarily matter from an investment standpoint: what actually counts is the measure of cross-sectional accuracy. Third, memory is key: portfolios are the most profitable when they are based on models driven by strong persistence. Average realized returns are the highest when the size of training samples is large and when the horizon of the predicted variable is long.
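As a hedged illustration of the distinction between pure fit and cross-sectional accuracy (on simulated data, not the paper's firm panel), the sketch below trains a standard ML regressor on firm characteristics and reports both a pooled out-of-sample R^2 and a per-date rank correlation; the data-generating process and model choice are assumptions for the example.

# Illustrative only: simulated characteristics and returns.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

n_dates, n_firms, n_chars = 120, 200, 8
X = rng.normal(size=(n_dates, n_firms, n_chars))
# Future returns: a weak linear signal in the characteristics plus noise.
beta = rng.normal(scale=0.05, size=n_chars)
y = X @ beta + rng.normal(scale=1.0, size=(n_dates, n_firms))

train, test = slice(0, 90), slice(90, n_dates)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[train].reshape(-1, n_chars), y[train].ravel())
pred = model.predict(X[test].reshape(-1, n_chars)).reshape(-1, n_firms)

# Pooled out-of-sample R^2: easily negative when returns are this noisy.
ss_res = np.sum((y[test] - pred) ** 2)
ss_tot = np.sum((y[test] - y[test].mean()) ** 2)
print("OOS R^2:", 1 - ss_res / ss_tot)

# Cross-sectional accuracy: rank correlation of predictions within each
# date, which is what drives the sorting of firms into portfolios.
ics = [spearmanr(pred[t], y[test][t]).correlation for t in range(pred.shape[0])]
print("mean cross-sectional rank correlation:", np.mean(ics))

The point of the two metrics is that a model can rank firms well within each date, and hence build profitable long-short portfolios, even when its pooled R^2 is negative; lengthening the training window and the prediction horizon in this loop is the natural way to probe the persistence effect described above.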