Dynamic Economic Models (MSc)

I have been teaching this course in Tilburg's MSc in Econometrics and Mathematical Economics (325134) since 2017-18. Students who complete it master the core theory of dynamic decision making under uncertainty, with and without strategic interactions between decision makers (“agents”). They understand and can apply standard econometric methods for estimating the unknown parameters of such decision problems and games from data on actual decisions and outcomes. They can numerically solve decision problems that have been quantified using such empirical analysis and/or expert opinion. Finally, they can apply the theory and methods taught to actual decision problems in economics and business.

In this course, we focus on Markov decision problems and stochastic (Markov) games in discrete time. In these workhorse models, agents’ choices affect future outcomes through state variables that follow a Markov process controlled by those choices. The resulting stochastic dynamics are rich enough for most economic and business applications, yet structured enough to support powerful theoretical, empirical, and numerical analysis. Throughout the course, we will motivate the models by, and apply the methods (using MATLAB and/or R) to, practical examples of economic and business decision problems and games. These may include (but are not limited to)

  • life cycle analysis of human capital formation, savings, and pensions;
  • dynamic demand analysis; dynamic treatment choice and evaluation;
  • investment decision making using (binomial) decision trees;
  • the analysis of learning, competition, and antitrust policy in the market for wide-bodied commercial aircraft;
  • welfare analysis of environmental regulation of oligopolies;
  • the analysis of sunk costs, barriers to entry, and toughness of competition in dynamic oligopolies from market level panel data; and
  • the analysis of optimal advertising dynamics.

We will first consider individual decision problems in which agents choose from a finite set of actions and state variables take finitely many values (or, in the case of econometric models that require continuous errors, have continuous components that enter in a sufficiently simple way). We will study

  • Markov decision rules, Bellman’s equation and principle of optimality, the contraction mapping theorem, and Blackwell’s sufficient conditions;
  • two solution methods, value iteration (successive approximations) and policy iteration, both of which are sketched in code after this list;
  • the extent to which various types of data (for example, data on actual choices and outcomes in a similar decision problem) can be used to learn about the unknown parameters of the decision problem (identification); and
  • a maximum likelihood estimation procedure and (briefly) some alternative methods for the empirical analysis of dynamic discrete choice problems.
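To fix ideas, here is a minimal sketch of the two solution methods for a hypothetical two-state, two-action decision problem. The course's applications use MATLAB and/or R; Python is used here purely for illustration, and the payoffs, transition probabilities, and discount factor below are made up for the example. Value iteration repeatedly applies the Bellman operator, whose convergence under discounting follows from the contraction mapping theorem; policy iteration alternates exact policy evaluation with greedy improvement.

```python
import numpy as np

# Hypothetical finite Markov decision problem (all numbers illustrative):
# two states, two actions; u[s, a] is the flow payoff and P[a, s, t] the
# probability of moving from state s to state t when action a is chosen.
beta = 0.95                                  # discount factor (< 1)
u = np.array([[1.0, 0.0],                    # payoffs in state 0
              [0.0, 2.0]])                   # payoffs in state 1
P = np.array([[[0.9, 0.1], [0.2, 0.8]],     # transitions under action 0
              [[0.5, 0.5], [0.1, 0.9]]])    # transitions under action 1

def bellman(V):
    """Bellman operator: Q[s, a] = u[s, a] + beta * sum_t P[a, s, t] V[t]."""
    Q = u + beta * np.einsum("ast,t->sa", P, V)
    return Q.max(axis=1), Q.argmax(axis=1)

def value_iteration(tol=1e-10):
    """Successive approximations from V = 0; the stopping rule delivers a
    value function within tol of the fixed point (contraction modulus beta)."""
    V = np.zeros(u.shape[0])
    while True:
        V_new, policy = bellman(V)
        if np.max(np.abs(V_new - V)) < tol * (1 - beta) / (2 * beta):
            return V_new, policy
        V = V_new

def policy_iteration():
    """Alternate exact policy evaluation (a linear solve) with greedy
    improvement; terminates in finitely many steps for finite problems."""
    n = u.shape[0]
    policy = np.zeros(n, dtype=int)
    while True:
        # Evaluate the current policy: solve V = u_pi + beta * P_pi V.
        u_pi = u[np.arange(n), policy]
        P_pi = P[policy, np.arange(n), :]
        V = np.linalg.solve(np.eye(n) - beta * P_pi, u_pi)
        _, new_policy = bellman(V)
        if np.array_equal(new_policy, policy):
            return V, policy
        policy = new_policy

V_vi, pol_vi = value_iteration()
V_pi, pol_pi = policy_iteration()
print(V_vi, pol_vi)   # the two methods agree on the optimal policy
print(V_pi, pol_pi)
```

Value iteration converges linearly at rate beta, while policy iteration typically needs far fewer, but individually more expensive, iterations.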

Next, we will consider dynamic games in which each agent solves such a dynamic decision problem and has payoffs that depend on other agents’ choices. We will discuss

  • payoff-relevant variables, Markov strategies, Markov perfect equilibrium, and the one-shot deviation principle (the principle of optimality for games);
  • a theoretical and computational problem that is specific to games, equilibrium multiplicity; and
  • computational and empirical methods that are closely related to those for individual decision problems, in the context of a simple example of dynamic oligopolistic competition (see the sketch after this list).
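To give a flavor of these computational methods, the sketch below searches for a Markov perfect equilibrium of a hypothetical two-firm entry/exit game by iterating best responses: each firm in turn solves its own Markov decision problem taking the rival's Markov strategy as given, until neither has a profitable one-shot deviation at any state. All parameter values are made up, and the best-response heuristic is only one possible approach; it need not converge, and when it does it finds a single equilibrium, so the multiplicity problem noted above remains.

```python
import numpy as np
from itertools import product

# Hypothetical two-firm dynamic entry/exit game; every number is made up.
# The state s = (n1, n2) records which firms were active last period; each
# period, both firms simultaneously choose to be active (1) or inactive (0).
beta = 0.9
PI_MONO, PI_DUO, ENTRY_COST = 2.0, 0.5, 1.0

states = list(product((0, 1), repeat=2))

def flow(me_active, rival_active, me_incumbent):
    """One firm's flow profit, with a sunk cost of entering from outside."""
    profit = me_active * (PI_DUO if rival_active else PI_MONO)
    return profit - ENTRY_COST * me_active * (1 - me_incumbent)

def best_response(sigma_rival, firm):
    """Solve one firm's Markov decision problem by value iteration, taking
    the rival's Markov strategy as given; return the best-reply strategy."""
    V = {s: 0.0 for s in states}
    for _ in range(5000):
        V_new, sigma = {}, {}
        for s in states:
            a_rival = sigma_rival[s]
            vals = []
            for a in (0, 1):
                s_next = (a, a_rival) if firm == 0 else (a_rival, a)
                vals.append(flow(a, a_rival, s[firm]) + beta * V[s_next])
            sigma[s], V_new[s] = int(np.argmax(vals)), max(vals)
        if max(abs(V_new[s] - V[s]) for s in states) < 1e-12:
            return sigma
        V = V_new
    return sigma

# Iterate best responses until the strategies are mutual best replies, so
# that neither firm has a profitable one-shot deviation at any state.
sigma = [{s: 1 for s in states}, {s: 1 for s in states}]  # guess: both active
for _ in range(100):
    new_sigma = [best_response(sigma[1], 0), best_response(sigma[0], 1)]
    if new_sigma == sigma:
        print("Candidate Markov perfect equilibrium:")
        for s in states:
            print(s, "->", (sigma[0][s], sigma[1][s]))
        break
    sigma = new_sigma
else:
    print("Best-response iteration did not converge (cycling is possible).")
```

Restarting the iteration from different initial strategy profiles is a crude but common way to probe for multiple equilibria.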

To the extent time permits, we will study extensions to large and continuous action and state spaces. These may include (but are not necessarily limited to)

  • generalized method-of-moments estimation of continuous choice models using their intertemporal first-order conditions (stochastic Euler equations) as moments (a minimal sketch follows this list);
  • discrete and smooth approximation methods to solve continuous choice problems; and
  • computational methods (neural networks, reinforcement learning, et cetera) for handling problems with large action and state spaces.
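As a taste of the first of these extensions, the sketch below estimates a discount factor and a CRRA utility curvature from the stochastic Euler equation E[beta * (c_{t+1}/c_t)^(-gamma) * R_{t+1} - 1 | I_t] = 0, using a constant and lagged consumption growth as instruments in a just-identified GMM procedure. The data are simulated inside the script and every number is made up; this illustrates the moment logic only, not any particular empirical application from the course.

```python
import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(0)

# Simulate data satisfying the stochastic Euler equation
#   E[ beta * (c_{t+1}/c_t)^(-gamma) * R_{t+1} - 1 | I_t ] = 0
# at made-up "true" parameters; this is illustration, not real data.
T, beta_true, gamma_true = 50_000, 0.95, 2.0

# Log consumption growth follows an AR(1), so that lagged growth is a
# relevant instrument (it predicts future growth).
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal(0.0, 0.05)
growth = np.exp(0.02 + x)                             # c_{t+1} / c_t

# Back out gross returns whose pricing error at the truth is mean-zero noise.
eps = rng.normal(0.0, 0.02, T)
R = (1.0 + eps) / (beta_true * growth ** (-gamma_true))

def moments(theta):
    """Just-identified GMM: the Euler-equation pricing error must be
    orthogonal to a constant and to lagged consumption growth."""
    beta, gamma = theta
    e = beta * growth[1:] ** (-gamma) * R[1:] - 1.0   # pricing errors
    z = growth[:-1]                                   # instrument in I_t
    return np.array([e.mean(), (e * z).mean()])

theta_hat = fsolve(moments, x0=np.array([0.9, 1.0]))
print("estimated (beta, gamma):", theta_hat)          # close to (0.95, 2.0)
```

With more instruments than parameters, one would instead minimize a quadratic form in the sample moments (two-step or efficient GMM) rather than solve them exactly.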