| Field | Value |
|---|---|
| Call Number | 12883 |
| Points | 3 |
| Grading Mode | Standard |
| Approvals Required | None |
| Instructor | Gunter Meissner |
| Type | LECTURE |
| Method of Instruction | On-Line Only |
| Course Description | Stochastic control has broad applications in almost every walk of life, including finance, revenue management, energy, health care, and robotics. Classical, model-based stochastic control theory assumes that the system dynamics and reward functions are known and given, whereas modern, model-free stochastic control problems call for reinforcement learning to learn optimal policies in an unknown environment. This course covers model-based stochastic control and model-free reinforcement learning, both in continuous time with a continuous state space and a possibly continuous control (action) space. Topics include: the shortest path problem, calculus of variations, and optimal control; formulation of stochastic control; the maximum principle and backward stochastic differential equations; dynamic programming and the Hamilton-Jacobi-Bellman (HJB) equation; linear-quadratic control and Riccati equations; applications in high-frequency trading; exploration versus exploitation in reinforcement learning; policy evaluation and martingale characterization; policy gradient; q-learning; and applications in diffusion models for generative AI. |
| Web Site | Vergil |
| Subterm | 05/20-06/28 (A) |
| Department | Video Network |
| Enrollment | 9 students (99 max) as of 8:07 PM on Wednesday, October 29, 2025 |
| Subject | Industrial Engineering and Operations Research |
| Number | E4722 |
| Section | V01 |
| Division | School of Engineering and Applied Science: Graduate |
| Fee | $395 CVN Course Fee |
| Note | VIDEO NETWORK STUDENTS ONLY |
| Section key | 20242IEOR4722EV01 |