Dissecting “Reinforcement Learning” by Richard S. Sutton with custom Python implementations, Episode V
In our previous post, we wrapped up the introductory series on fundamental reinforcement learning (RL) techniques by exploring Temporal-Difference (TD) learning. TD methods merge the strengths of Dynamic Programming (DP) and Monte Carlo (MC) methods, leveraging their best features to form some of the most important RL algorithms, such as Q-learning.
Building on that foundation, this post delves into n-step TD learning, a versatile approach introduced in Chapter 7 of Sutton’s book [1]. This method bridges the gap between classical TD and MC techniques. Like TD, n-step methods use bootstrapping (leveraging prior estimates), but they also incorporate the next n rewards, offering a unique blend of short-term and long-term learning. In a future post, we’ll generalize this idea even further with eligibility traces.
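To make this concrete, the target of such updates is the n-step return G_{t:t+n} = R_{t+1} + γR_{t+2} + … + γ^{n-1}R_{t+n} + γ^n V(S_{t+n}): the first n rewards are actually observed, and the rest of the trajectory is bootstrapped from the current value estimate. Below is a minimal sketch of this computation; the function name and signature are illustrative, not the implementation from the accompanying repository:

```python
def n_step_return(rewards, v_bootstrap, gamma):
    """Illustrative helper: compute the n-step return G_{t:t+n}.

    rewards      -- the n observed rewards R_{t+1}, ..., R_{t+n}
    v_bootstrap  -- current value estimate V(S_{t+n}) of the state reached after n steps
    gamma        -- discount factor
    """
    n = len(rewards)
    # Discounted sum of the n real rewards ...
    g = sum(gamma**k * r for k, r in enumerate(rewards))
    # ... plus the bootstrapped tail, discounted by gamma^n.
    return g + gamma**n * v_bootstrap


# Example: a 3-step return with gamma = 0.9 and V(S_{t+3}) = 0.5
print(n_step_return([1.0, 0.0, 2.0], v_bootstrap=0.5, gamma=0.9))
```

Setting n = 1 recovers the classical TD target, while letting n cover the whole episode (and bootstrapping from a terminal value of zero) recovers the MC return.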
We’ll follow a structured approach, starting with the prediction problem before moving on to control. Along the way, we’ll:
- Introduce n-step Sarsa,
- Extend it to off-policy learning,
- Explore the n-step tree backup algorithm, and
- Present a unifying perspective with n-step Q(σ).
As always, you can find all accompanying code on GitHub. Let’s dive in!