DDA-4230: Reinforcement Learning
Course Introduction
This course provides a basic introduction to reinforcement learning algorithms and their applications.
Topics include:
- Multi-armed bandits; finite Markov decision processes; dynamic programming; Monte-Carlo methods;
temporal-difference learning; actor-critic methods; off-policy learning.
- Introduction to deep variants of the aforementioned algorithms, including deep Q-learning,
policy gradient methods, and actor-critic methods.
Scoring:
- Assignments (written and coding homework) (30 points).
- Midterm exam (20 points).
- Final project (50 points).
For the detailed scoring scheme, please check the project introduction below.
Course Arrangement
- Lectures.
- Time: Tuesday and Thursday, 1:30 PM - 2:50 PM.
- Classroom: Bldg 103, Teaching B Building.
- Tutorials.
- Time: Wednesday, 6:00 PM - 6:50 PM.
- Classroom: Bldg 103, Teaching C Building.
- Office Hours.
- Guiliang Liu (Instructor): Tuesday, 2:50 PM - 3:50 PM, Bldg 103, Teaching B Building.
- Xu Sheng (TA): Wednesday, 7:30 PM - 8:30 PM, Room 611, Teaching Complex B (TXB).
Important Notes
News.
News will be posted here at students' request.
Policies.
- Late Policy. A late submission receives a 10% penalty for each day past the due date.
The penalty accumulates until it reaches 100% (i.e., 10 days late); a short illustrative sketch follows this list.
If you need special accommodation (e.g., for surgery or other health problems),
DO NOT wait until the last moment; please let me know in advance (see my contact below).
- Late Drop. A late drop from the course is not encouraged.
Under special circumstances, students may apply for a late drop,
but there is no guarantee that the request will be approved by the school office.
- Honesty in Academic Work.
The Chinese University of Hong Kong, Shenzhen places very high importance on honesty in academic
work submitted by students, and adopts a policy of zero tolerance on academic dishonesty.
While academic dishonesty is the umbrella term, its several sub-categories can be found here.
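For concreteness, here is a minimal sketch of the late-penalty computation (in Python; the function names, and the assumption that the penalty applies per whole day to the raw score, are illustrative only and not part of the official policy):

def late_penalty(days_late: int) -> float:
    # 10% deducted per day late, capped at 100% after 10 days.
    return min(0.10 * max(days_late, 0), 1.0)

def adjusted_score(raw_score: float, days_late: int) -> float:
    # Score remaining after the late penalty is applied.
    return raw_score * (1.0 - late_penalty(days_late))

# Example: a 30-point assignment submitted 3 days late keeps 30 * 0.7 = 21 points.
print(adjusted_score(30, 3))   # 21.0
print(adjusted_score(30, 12))  # 0.0 (fully penalized after 10 days)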
Course Syllabus and Timetable
Topics covered will include the following (the instructor will upload slides regularly, and the timeline may change based on student needs):
- Week 1 (Sept. 3rd)
Lecture 0: [Slides].
- Week 1 (Sept. 5th)
Lecture 1: Markov decision process [Slides] [Notes].
- Week 2 (Sept. 10th)
Lecture 2: Optimality of MDPs [Slides] [Notes].
- Week 2 (Sept. 12th)
Lecture 3: Stochastic multi-armed bandits [Slides] [Notes].
- Week 3 (Sept. 19th)
Lecture 4: Greedy algorithms [Slides] [Notes].
- Week 4 (Sept. 24th)
Lecture 5: Explore-then-commit algorithms [Slides] [Notes].
- Week 4 (Sept. 26th)
Lecture 6: UCB algorithms [Slides] [Notes].
- Week 4 (Sept. 26th)
Lecture 7: Thompson sampling [Slides] [Notes].
- Week 4 (Sept. 29th)
Lecture 8: Hardness of Bandits [Slides] [Notes].
- Week 5 (Oct. 8th)
Lecture 9: Discrete MDPs [Slides] [Notes].
- Week 5 (Oct. 8th)
Lecture 10: Iterative Methods [Slides] [Notes].
- Week 5 (Oct. 10th)
Lecture 11: UCVI and PSRL [Slides] [Notes].
- Week 6 (Oct. 15th)
Lecture 12: Q-Learning [Slides] [Notes].
- Week 6 (Oct. 17th)
Lecture 13: Model-Free Policy Evaluation [Slides] [Notes].
- Week 7 (Oct. 22nd)
Lecture 14: Advanced Topic: Monte-Carlo Tree Search [Slides].
- Week 8 (Oct. 29th)
Lecture 15: Trial and Error [Slides] [Notes].
- Week 8 (Oct. 31st)
Lecture 16: Value Function Approximation [Slides] [Notes].
- Week 9 (Nov. 5th)
Lecture 17: Deep Q-learning [Slides] [Notes].
- Week 10 (Nov. 12th)
Lecture 18: Policy Gradient [Slides] [Notes].
- Week 11 (Nov. 19th)
Lecture 19: Policy Optimization [Slides] [Notes].
- Week 11 (Nov. 21st)
Lecture 20: Interconnections between policy and value [Slides] [Notes].
- Week 12 (Nov. 26th)
Lecture 21: Imitation Learning [Slides].
- Week 12 (Nov. 28th)
Lecture 22: Reinforcement Learning from Human Feedback [Slides].
- Week 13 (Dec. 3rd)
Lecture 23: Embodied AI [Slides].
Acknowledgement: The teaching materials use resources from
[Previous Course].
Course Survey
Please fill in the survey so that we understand your concerns.
[Survey Link]