
Reinforcement Learning for Optimal Execution When Liquidity Is Time-Varying

Macrì, Andrea; Lillo, Fabrizio
2024

Abstract

Optimal execution is an important problem faced by any trader. Most solutions are based on the assumption of constant market impact, even though liquidity is known to be dynamic. Moreover, models with time-varying liquidity typically assume that it is observable, whereas in reality it is latent and hard to measure in real time. In this paper we show that Double Deep Q-learning, a form of Reinforcement Learning based on neural networks, can learn optimal trading policies when liquidity is time-varying. Specifically, we consider an Almgren-Chriss framework with temporary and permanent impact parameters following several deterministic and stochastic dynamics. Through extensive numerical experiments, we show that the trained algorithm recovers the optimal policy when the analytical solution is available, and outperforms benchmarks and approximate solutions when it is not.
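The Almgren-Chriss setup described in the abstract, with temporary and permanent impact parameters that may vary over time, can be sketched as below. This is a minimal illustrative simulation, not the authors' implementation: the function name, the default parameter values, and the linear impact functions are assumptions made here for concreteness.

```python
import numpy as np

def implementation_shortfall(v, S0=100.0, tau=1.0, sigma=0.0,
                             gamma=None, eta=None, rng=None):
    """Simulate a sell execution in an Almgren-Chriss style model.

    v      : array of trading rates per period (shares sold each slice)
    gamma  : per-period permanent-impact coefficients (time-varying allowed)
    eta    : per-period temporary-impact coefficients (time-varying allowed)
    Returns the implementation shortfall relative to the initial price S0.
    """
    n = len(v)
    # Illustrative default impact levels; constant unless overridden.
    gamma = np.full(n, 2.5e-7) if gamma is None else np.asarray(gamma)
    eta = np.full(n, 2.5e-6) if eta is None else np.asarray(eta)
    rng = np.random.default_rng(0) if rng is None else rng

    S = S0
    cost = 0.0
    for k in range(n):
        exec_price = S - eta[k] * v[k]        # temporary impact on this slice
        cost += v[k] * (S0 - exec_price)      # shortfall accrued on this slice
        # Price evolves with diffusion noise plus linear permanent impact.
        S += sigma * np.sqrt(tau) * rng.standard_normal() - gamma[k] * v[k] * tau
    return cost
```

With `sigma=0` the cost is deterministic, which makes it easy to check a uniform (TWAP-like) schedule by hand; time-varying liquidity is modeled simply by passing arrays `gamma` and `eta` whose entries change over the execution horizon.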
Settore STAT-04/A - Mathematical methods of economics and of actuarial and financial sciences
Double Deep Q-learning; optimal execution; reinforcement learning; time-varying liquidity
Funding: PNRR Research Infrastructures - SoBigData.it - Strengthening the Italian RI for Social Mining and Big Data Analytics (project SoBigData.it; Ministry of Education, University and Research; grant IR_0000013)
Files in this record:
Reinforcement Learning for Optimal Execution When.pdf - Published version, Adobe PDF, 5.05 MB, closed access (all rights reserved; copy available on request)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11384/160323
Citations
  • PMC: n.d.
  • Scopus: 0
  • ISI: n.d.
  • OpenAlex: 5