Course description
Fundamentals of Deep Reinforcement Learning
This course starts from the very beginnings of Reinforcement Learning and works its way up to a complete understanding of Q-learning, one of the core reinforcement learning algorithms.
In part II of this course, you'll use neural networks to implement Q-learning to produce powerful and effective learning agents (neural nets are the "Deep" in "Deep Reinforcement Learning").
Upcoming start dates
1 start date available
Suitability - Who should attend?
Prerequisites
Requirements:
- Proficiency with Python
- Functions, classes, objects, loops
- Basic familiarity with Jupyter notebooks
Recommended Prerequisites:
- Basic probability
- Sampling from a normal distribution
- Conditional probability notation
- 𝔼 - expectation
- Σ - the summation operator
Outcome / Qualification etc.
What you'll learn
- The theoretical underpinnings of Reinforcement Learning ("RL").
- How to implement each piece of theory to solve real problems in Python.
- The core RL formula: The Bellman Equation (previewed below)
- The Q-Learning algorithm, as well as many powerful improvements.
- Enough to prepare you to implement Reinforcement Learning algorithms using Deep Neural Networks (Part II).
Each concept is presented with a video overview, and detailed Jupyter notebooks covering each aspect of theory and practice.
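As a preview of the "core RL formula" mentioned above, the Bellman equation for the state-value function of a policy π is commonly written as follows (standard notation; the course may present it in a slightly different form):

\[
v_\pi(s) = \sum_{a} \pi(a \mid s) \sum_{s',\, r} p(s', r \mid s, a)\,\bigl[\, r + \gamma\, v_\pi(s') \,\bigr]
\]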
Training Course Content
- Introduction to Reinforcement Learning
- Bandit Problems
- Epsilon Greedy Agent
- Markov Decision Processes
- Episode Returns
- Returns and Discount Factors
- The Bellman Equation
- Iterative Policy Evaluation and Improvement
- Policy Evaluation and Iteration
- Dynamic Programming
- Q-Learning and Sampling Based Methods
- Monte Carlo Rollouts vs. Temporal Difference Learning
- On-Policy Learning vs. Off-Policy Learning
- Q-Learning (see the short code sketch after this outline)
- What's Next
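To give a flavour of where the course ends up, below is a minimal sketch of a tabular Q-learning agent with an epsilon-greedy policy, two of the topics listed above. The toy chain environment, the hyperparameters, and all names are illustrative assumptions, not the course's own code.

```python
# Minimal sketch: tabular Q-learning with an epsilon-greedy policy on a toy
# 5-state chain (illustrative only; not the course's own environment or code).
import random

N_STATES = 5          # states 0..4; reaching state 4 ends the episode
ACTIONS = [0, 1]      # 0 = move left, 1 = move right
ALPHA = 0.1           # learning rate
GAMMA = 0.9           # discount factor
EPSILON = 0.1         # exploration rate

def step(state, action):
    """One transition of the toy chain: reward 1 only for reaching the last state."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

def epsilon_greedy(q, state):
    """Explore with probability EPSILON, otherwise act greedily (ties broken at random)."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state, done = 0, False
    while not done:
        action = epsilon_greedy(q, state)
        next_state, reward, done = step(state, action)
        # Q-learning (off-policy) update: bootstrap from the best next action.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = next_state

# Learned greedy action per state (expect "move right" everywhere before the goal).
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})
```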
Course delivery details
This course is offered through Learn Ventures, a partner institute of EdX.
2-6 hours per week
Expenses
- Verified Track - $75
- Audit Track - Free