Selected presentations

Below is a series of talks on approximate dynamic programming, as well as related talks in the broader field of computational stochastic optimization. I use this term as an umbrella that encompasses stochastic programming, dynamic programming, stochastic search and simulation optimization.

Most of these presentations were prepared with the assumption that I would be delivering the talk in person, so I am not sure how useful they are as standalone documents to read. A notable exception is the series of three talks at the top, which were specifically prepared to be read:

Modeling and algorithms for computational stochastic optimization

Below is a series of PowerPoint presentations adapted from some recent talks I have been giving. I have broken a single very long presentation into a series of smaller topics, and I have added slides covering discussion I would normally give orally, so that the slides can be read on their own.

Overview and modeling (posted October 25, 2012) - This includes a brief overview, followed by a series of slides on the fundamental elements of a stochastic, dynamic model (a compact sketch of these elements follows this list of talks). I strongly urge students to focus first on modeling (assume that a policy has been provided) before turning to the challenge of designing policies.

Policies (posted October 25, 2012) - I have found that various algorithmic strategies for sequential problems can be broken down into four fundamental classes of policies, which can then form the basis for a wide range of hybrids.

Bridging communities (posted October 25, 2012) - There is a lot of confusion about the difference between dynamic programming and stochastic programming, which are often viewed as separate and competing fields. In addition, it is easy to overlook the relationship between stochastic search, dynamic programming (in the form of policy search), and simulation-optimization.
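To make the modeling theme a bit more concrete, here is my own compact sketch (not a slide from the talks above) of the elements of a stochastic, dynamic model: a state S_t, a decision x_t chosen by a policy X^\pi(S_t), exogenous information W_{t+1}, a transition function, and an objective that searches over policies:

    S_{t+1} = S^M(S_t, x_t, W_{t+1}), \qquad x_t = X^\pi(S_t),

    \min_{\pi} \; \mathbb{E} \left\{ \sum_{t=0}^{T} C\bigl(S_t, X^\pi(S_t)\bigr) \right\}.

The point of writing the model this way is that the design of the policy X^\pi is cleanly separated from the statement of the model itself, which is the theme of the three talks above.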

Approximate dynamic programming

I have given variations of this talk a number of times, often emphasizing different classes of applications.

Click here for a version of the talk emphasizing energy applications

Click here for a version of the talk emphasizing transportation and logistics


Bridging stochastic programming and dynamic programming - March 2013 (PowerPoint format).

This talk was given at Georgia Tech, with the goal of introducing students to a particular style of modeling stochastic, dynamic optimization problems. It addresses topics such as defining state variables and the five components of a stochastic, dynamic problem. We make the transition from finding the best decision in a deterministic optimization problem to finding the best policy for a stochastic optimization problem. We identify four fundamental classes of policies, and then describe two of these in more detail: lookahead policies, and policies based on value function approximations. The presentation closes with a step-by-step translation of "dynamic programming" as it is practiced by leaders in stochastic programming (in particular, Alex Shapiro) into the classical notation of Markov decision processes.
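As a rough shorthand for that transition (my own sketch, not the notation-by-notation development in the slides), a deterministic problem searches over decisions x, while a stochastic, dynamic problem searches over policies \pi:

    \min_{x} \; c^\top x \quad \text{(deterministic)} \qquad \text{vs.} \qquad \min_{\pi} \; \mathbb{E} \left\{ \sum_{t=0}^{T} C\bigl(S_t, X^\pi(S_t)\bigr) \right\} \quad \text{(stochastic)}.

The two policy classes discussed in the talk can then be written roughly as a policy built from a value function approximation \bar V, and a (deterministic) lookahead policy over a horizon H:

    X^{VFA}(S_t) = \arg\min_{x} \left( C(S_t, x) + \mathbb{E}\bigl\{ \bar V_{t+1}(S_{t+1}) \mid S_t, x \bigr\} \right),

    X^{LA}(S_t) = \arg\min_{\tilde x_t, \ldots, \tilde x_{t+H}} \sum_{t'=t}^{t+H} C(\tilde S_{t'}, \tilde x_{t'}).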


On Languages for Stochastic Optimization - University of Quebec at Montreal, November 18, 2013 (PowerPoint format)

This presentation was given at the University of Quebec at Montreal as part of their commencement exercises, where I was awarded a Docteur honoris causa. Montreal, with its bilingual tradition, is the place where I seem to keep returning to the theme of modeling and languages (a connection highlighted at the beginning of the talk). The rest of the talk brings out concepts, terminology and notation from the different communities that work on some form of stochastic optimization.