Working Paper

Learning from Zero: How to Make Consumption-Saving Decisions in a Stochastic Environment with an AI Algorithm

Rui (Aruhan) Shi
CESifo, Munich, 2021

CESifo Working Paper No. 9255

This exercise offers an innovative learning mechanism for modelling an economic agent's decision-making process using a deep reinforcement learning algorithm. In particular, this AI agent is born into an economic environment with no information about the underlying economic structure or its own preferences. I model how the AI agent collects and processes information as it learns from square one. It learns in real time by constantly interacting with the environment and adjusting its actions accordingly (i.e., online learning). I illustrate that the economic agent under deep reinforcement learning adapts to changes in a given environment in real time. AI agents differ in how they collect and process information, and this leads to different learning behaviours and welfare differences. The chosen economic structure can be generalised to other decision-making processes and economic models.
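To fix ideas, the consumption-saving problem described in the abstract can be caricatured as follows. This is a toy sketch only, not the paper's deep reinforcement learning implementation: the fixed consumption share, the gross return, the income distribution, and the crude hill-climbing update are all illustrative assumptions standing in for a learned policy network.

```python
import math
import random

random.seed(0)


def episode_return(share, T=50, beta=0.96, R=1.03):
    """Simulate one lifetime under a fixed consumption share.

    Wealth evolves as w' = R*(w - c) + y with stochastic income y
    (illustrative parameters, not the paper's calibration); the agent
    earns discounted log utility from consumption c = share * w.
    """
    w, total = 1.0, 0.0
    for t in range(T):
        c = share * w
        total += (beta ** t) * math.log(max(c, 1e-8))
        w = R * (w - c) + random.uniform(0.5, 1.5)  # stochastic endowment
    return total


# Crude trial-and-error search over the consumption share: the agent
# starts with no knowledge of the environment and improves only by
# comparing sampled outcomes (a stand-in for deep RL exploration).
share, step = 0.5, 0.05
for it in range(300):
    up = episode_return(min(share + step, 0.95))
    down = episode_return(max(share - step, 0.05))
    share = min(max(share + 0.01 * (1 if up > down else -1), 0.05), 0.95)

print(round(share, 2))  # consumption share found by trial and error
```

The point of the sketch is structural: the agent never observes the transition law or its own utility function in closed form; it only samples realised rewards and gradually adjusts its behaviour, which is the online-learning property the abstract emphasises.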

CESifo Category
Fiscal Policy, Macroeconomics and Growth
Empirical and Theoretical Methods
Keywords: expectation formation, exploration, deep reinforcement learning, bounded rationality, stochastic optimal growth
JEL Classification: C450, D830, D840, E210, E700