| Call Number | 15903 |
|---|---|
| Day & Time / Location | TR 2:40pm-3:55pm, 524 Seeley W. Mudd Building |
| Points | 3 |
| Grading Mode | Standard |
| Approvals Required | None |
| Instructor | Xunyu Zhou |
| Type | LECTURE |
| Method of Instruction | In-Person |
| Course Description | Stochastic control has broad applications in almost every walk of life, including finance, revenue management, energy, health care, and robotics. Classical, model-based stochastic control theory assumes that the system dynamics and reward functions are known, whereas modern, model-free stochastic control problems call for reinforcement learning to learn optimal policies in an unknown environment. This course covers both model-based stochastic control and model-free reinforcement learning, in continuous time with continuous state space and possibly continuous control (action) space. Topics include: the shortest path problem, calculus of variations, and optimal control; formulation of stochastic control; the maximum principle and backward stochastic differential equations; dynamic programming and the Hamilton-Jacobi-Bellman (HJB) equation; linear-quadratic control and Riccati equations; applications in high-frequency trading; exploration versus exploitation in reinforcement learning; policy evaluation and martingale characterization; policy gradient; q-learning; and applications in diffusion models for generative AI. |
| Web Site | Vergil |
| Department | Industrial Engineering and Operations Research |
| Enrollment | 13 students (51 max) as of 9:07PM Wednesday, November 12, 2025 |
| Subject | Industrial Engineering and Operations Research |
| Number | E4722 |
| Section | 001 |
| Division | School of Engineering and Applied Science: Graduate |
| Section key | 20243IEOR4722E001 |