DDA-4230: Reinforcement Learning
Course Introduction
This course provides a basic introduction to reinforcement learning algorithms and their applications.
Topics include:
- Multi-armed bandits; finite Markov decision processes; dynamic programming; Monte-Carlo methods;
temporal-difference learning; actor-critic methods; off-policy learning.
- Introduction to deep variants of the aforementioned algorithms, including deep Q-learning,
policy gradient methods, and actor-critic methods.
Scoring:
- Assignments (written and coding homework) (30 points).
- Midterm exam (20 points).
- Final project (50 points).
For the detailed scoring scheme, please check the project introduction below.
Course Arrangement
- Lectures.
- Time: Monday and Wednesday, 1:30 PM - 2:50 PM.
- Classroom: Room 302, Teaching Complex C.
- Tutorials.
- Time: Tuesday, 8:00 PM - 8:50 PM.
- Classroom: Room 206, Teaching Complex C.
- Office Hours.
- Guiliang Liu (Instructor): Monday, 2:50 PM - 3:50 PM, Room 302, Teaching Complex C.
- Bo Yue and Hengming Zhang (TAs): Friday, 5:00 PM - 6:00 PM, Room 611, Teaching Complex B (TXB).
Important Notes
News.
News will be added here at students' request.
Policies.
- Late Policy. A late submission receives a 10% penalty for each day past the due date.
The penalty accumulates until it reaches 100% (i.e., 10 days late); for example, a submission three days late loses 30%.
If you need special accommodation (e.g., for surgery or other health problems),
DO NOT wait until the last moment; please let me know in advance (see my contact below).
- Late Drop. A late drop from the course is not encouraged.
Under special circumstances, students may apply for a late drop,
but there is no guarantee that the request can be approved by the school office.
- Honesty in Academic Work.
The Chinese University of Hong Kong, Shenzhen places very high importance on honesty in academic
work submitted by students, and adopts a policy of zero tolerance on academic dishonesty.
While academic dishonesty is the overall term, its several sub-categories can be found here.
Course syllabus and Timetable
Topics covered will include the following (the instructor will upload slides throughout the term, and the timeline may change according to students' needs):
- Week 1 (Sept. 1st)
Lecture 0: Course Introduction [Slides].
- Week 1 (Sept. 3rd)
Lecture 1: Markov decision process [Slides] [Notes].
- Week 2 (Sept. 8th)
Lecture 2: Optimality of MDPs [Slides] [Notes].
- Week 2 (Sept. 10th)
Lecture 3: Stochastic multi-armed bandits [Slides] [Notes].
Acknowledgement: The teaching materials use resources from
[Previous Course].
Course Survey
Please fill in the survey so that we understand your concerns.
[Survey Link]
Midterm Exam
Time: November 3rd (Monday) 1:30 PM to 2:20 PM.
Format: In-Class Exam at Room 302, Teaching Complex C.