Welcome to part 2 of my LLM deep dive. If you've not read Part 1, I highly encourage you to check it out first.
Previously, we covered the first two major stages of training an LLM:
- Pre-training — Learning from massive datasets to form a base model.
- Supervised fine-tuning (SFT) — Refining the model with curated examples to make it useful.
Now, we're diving into the next major stage: Reinforcement Learning (RL). While pre-training and SFT are well-established, RL is still evolving but has become a critical part of the training pipeline.
I've taken reference from Andrej Karpathy's widely popular 3.5-hour YouTube video. Andrej is a founding member of OpenAI, and his insights are gold — you get the idea.
Let’s go 🚀
What is the goal of reinforcement learning (RL)?
Humans and LLMs process information differently. What's intuitive for us — like basic arithmetic — may not be for an LLM, which only sees text as sequences of tokens. Conversely, an LLM can generate expert-level responses on complex topics simply because it has seen enough examples during training.
This difference in cognition makes it challenging for human annotators to provide the "perfect" set of labels that consistently guide an LLM toward the right answer.
RL bridges this gap by allowing the model to learn from its own experience.
Instead of relying solely on explicit labels, the model explores different token sequences and receives feedback — reward signals — on which outputs are most useful. Over time, it learns to align better with human intent.
Intuition behind RL
LLMs are stochastic — meaning their responses aren't fixed. Even with the same prompt, the output varies because it is sampled from a probability distribution.
We can harness this randomness by generating thousands or even millions of possible responses in parallel. Think of it as the model exploring different paths — some good, some bad. Our goal is to encourage it to take the better paths more often.
To do that, we train the model on the sequences of tokens that lead to better outcomes. Unlike supervised fine-tuning, where human experts provide labeled data, reinforcement learning allows the model to learn from itself.
The model discovers which responses work best, and after each training step, we update its parameters. Over time, this makes the model more likely to produce high-quality answers when given similar prompts in the future.
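To make that loop concrete, here is a minimal sketch of one such step in Python. It assumes a Hugging Face-style causal language model and tokenizer, plus a hypothetical `reward_fn` that scores a piece of text — real pipelines (PPO, GRPO, etc.) are far more sophisticated, but the explore → score → reinforce idea is the same.

```python
import torch

def rl_step(model, tokenizer, prompt, reward_fn, optimizer, num_samples=8):
    # 1. Explore: sample several candidate responses for the same prompt.
    inputs = tokenizer(prompt, return_tensors="pt")
    candidates = [
        model.generate(**inputs, do_sample=True, max_new_tokens=256)
        for _ in range(num_samples)
    ]

    # 2. Score: each candidate gets a numerical reward.
    rewards = [reward_fn(tokenizer.decode(c[0], skip_special_tokens=True))
               for c in candidates]

    # 3. Reinforce: nudge the model towards its own highest-reward response
    #    by minimising the negative log-likelihood of that sequence
    #    (an SFT-like update, but on the model's own output, not a human label).
    best = candidates[rewards.index(max(rewards))]
    loss = model(input_ids=best, labels=best).loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```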
But how do we determine which responses are best? And how much RL should we do? The details are tricky, and getting them right is not trivial.
RL is not "new" — it can surpass human expertise (AlphaGo, 2016)
A great example of RL's power is DeepMind's AlphaGo, the first AI to defeat a professional Go player and later surpass human-level play.
In the 2016 Nature paper (graph below), when a model was trained purely by SFT (giving the model tons of good examples to imitate), it was able to reach human-level performance, but never surpass it.
The dotted line represents Lee Sedol's performance — one of the best Go players in the world.
This is because SFT is about replication, not innovation — it doesn't allow the model to discover new strategies beyond human knowledge.
However, RL enabled AlphaGo to play against itself, refine its strategies, and ultimately exceed human expertise (blue line).

RL represents an exciting frontier in AI — where models can explore strategies beyond human imagination when we train them on a diverse and challenging pool of problems to refine their thinking strategies.
RL foundations recap
Let's quickly recap the key components of a typical RL setup:

- Agent — The learner or decision maker. It observes the current situation (state), chooses an action, and then updates its behaviour based on the outcome (reward).
- Environment — The external system in which the agent operates.
- State — A snapshot of the environment at a given step t.
At each timestep, the agent performs an action in the environment, which may change the environment's state to a new one. The agent also receives feedback indicating how good or bad the action was.
This feedback is called a reward, and is represented in numerical form. A positive reward encourages that behaviour, and a negative reward discourages it.
By using feedback from different states and actions, the agent gradually learns the optimal strategy to maximise the total reward over time.
Policy
The policy is the agent's strategy. If the agent follows a good policy, it will consistently make good decisions, leading to higher rewards over many steps.
In mathematical terms, it is a function that determines the probability of different outputs for a given state — πθ(a|s).
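For an LLM, the policy is nothing exotic: it is simply the softmax over next-token logits, conditioned on the context so far. A tiny illustrative sketch (assuming a Hugging Face-style causal LM):

```python
import torch

def policy(model, input_ids):
    """π_θ(a | s): distribution over the next token (action),
    given the token context so far (state)."""
    logits = model(input_ids=input_ids).logits      # (batch, seq_len, vocab_size)
    return torch.softmax(logits[:, -1, :], dim=-1)  # probabilities for the next token
```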
Value function
An estimate of how good it is to be in a certain state, considering the long-term expected reward. For an LLM, the reward might come from human feedback or a reward model.
Actor-Critic architecture
It's a popular RL setup that combines two components:
- Actor — Learns and updates the policy (πθ), deciding which action to take in each state.
- Critic — Evaluates the value function (V(s)) to give feedback to the actor on whether its chosen actions are leading to good outcomes.
How it works:
- The actor picks an action based on its current policy.
- The critic evaluates the outcome (reward + next state) and updates its value estimate.
- The critic's feedback helps the actor refine its policy so that future actions lead to higher rewards.
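In code, one update of this loop might look like the following sketch. The `actor`, `critic` and `env` objects are hypothetical stand-ins (an actor returning action probabilities, a critic returning a scalar value, an environment returning the next state and reward), using the classic advantage formulation rather than any specific library's API.

```python
import torch

def actor_critic_step(actor, critic, env, state, actor_opt, critic_opt, gamma=0.99):
    # 1. Actor: sample an action from the current policy π_θ(a|s).
    dist = torch.distributions.Categorical(probs=actor(state))
    action = dist.sample()

    # 2. Environment: take the action, observe the next state and the reward.
    next_state, reward = env.step(action)

    # 3. Critic: the advantage measures how much better the outcome was
    #    than what the critic expected from this state.
    advantage = reward + gamma * critic(next_state).detach() - critic(state)

    # 4. Updates: the critic learns to predict returns more accurately,
    #    while the actor raises the probability of actions with positive advantage.
    critic_loss = advantage.pow(2).mean()
    actor_loss = -(dist.log_prob(action) * advantage.detach()).mean()

    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```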
Putting it all together for LLMs
The state is the current text (prompt or conversation), and the action is the next token to generate. A reward model (e.g. human feedback) tells the model how good or bad its generated text is.
The policy is the model's strategy for choosing the next token, while the value function estimates how beneficial the current text context is in terms of eventually producing high-quality responses.
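Written out with the notation from the recap (the shorthand below is my own, not taken from any particular paper):

```latex
s_t = (x,\, y_{<t}) \;\; \text{(state: prompt $x$ plus the tokens generated so far)} \\
a_t = y_t \;\; \text{(action: the next token)} \\
\pi_\theta(a_t \mid s_t) = p_\theta(y_t \mid x,\, y_{<t}) \;\; \text{(policy: the LLM's next-token distribution)} \\
R(x, y) \;\; \text{(reward for the completed response, e.g. from a reward model)}
```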
DeepSeek-R1 (published 22 Jan 2025)
To highlight RL's importance, let's explore DeepSeek-R1, a reasoning model that achieves top-tier performance while remaining open-source. The paper introduced two models: DeepSeek-R1-Zero and DeepSeek-R1.
- DeepSeek-R1-Zero was trained solely via large-scale RL, skipping supervised fine-tuning (SFT).
- DeepSeek-R1 builds on it, addressing the challenges encountered.
Let's dive into some of the key points.
1. RL algorithm: Group Relative Policy Optimisation (GRPO)
One key game-changing RL algorithm is Group Relative Policy Optimisation (GRPO), a variant of the widely popular Proximal Policy Optimisation (PPO). GRPO was introduced in the DeepSeekMath paper in Feb 2024.
Why GRPO over PPO?
PPO struggles with reasoning tasks because of:
- Dependency on a critic model.
PPO needs a separate critic model, effectively doubling memory and compute. Training the critic can also be complex for nuanced or subjective tasks.
- High computational cost.
RL pipelines demand substantial resources to evaluate and optimise responses.
- Absolute reward evaluations.
When you rely on an absolute reward — meaning there is a single standard or metric to judge whether an answer is "good" or "bad" — it can be hard to capture the nuances of open-ended, diverse tasks across different reasoning domains.
How GRPO addresses these challenges:
GRPO eliminates the critic model by using relative evaluation — responses are compared within a group rather than judged against a fixed standard.
Imagine students solving a problem. Instead of a teacher grading them individually, they compare answers with one another and learn from each other. Over time, performance converges toward higher quality.
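Concretely, "relative evaluation" means each response's reward is normalised against the other responses sampled for the same prompt. A minimal sketch of that group-relative advantage (the mean/std normalisation used in the DeepSeekMath paper; the helper name is mine):

```python
import torch

def group_relative_advantages(rewards, eps=1e-8):
    """Advantage of each response relative to its own group.

    rewards: one scalar reward per response sampled for the same prompt."""
    rewards = torch.as_tensor(rewards, dtype=torch.float32)
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Example: four responses to the same question, scored 0/1 for correctness.
# Responses above the group average get positive advantages, the rest negative.
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))
```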
How does GRPO fit into the whole training process?
GRPO modifies how the loss is calculated while keeping the other training steps unchanged:
- Gather data (queries + responses)
– For LLMs, queries are like questions
– The old policy (an older snapshot of the model) generates several candidate answers for each query
- Assign rewards — each response in the group is scored (the "reward").
- Compute the GRPO loss (a code sketch follows this list)
Traditionally, you'd compute a loss that measures the deviation between the model's prediction and the true label.
In GRPO, however, you measure:
a) How likely is the new policy to produce the old responses?
b) Are those responses relatively better or worse than the others in their group?
c) Apply clipping to prevent extreme updates.
This yields a scalar loss.
- Backpropagation + gradient descent
– Backpropagation calculates how each parameter contributed to the loss
– Gradient descent updates those parameters to reduce the loss
– Over many iterations, this gradually shifts the new policy to favour higher-reward responses
- Update the old policy periodically to match the new policy.
This refreshes the baseline for the next round of comparisons.
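Here is the promised sketch of the loss computation, assuming the per-response log-probabilities under the new and old policies, and the group-relative advantages from earlier, have already been gathered (the KL-penalty term and other details from the paper are omitted for brevity):

```python
import torch

def grpo_loss(new_logprobs, old_logprobs, advantages, clip_eps=0.2):
    """Simplified clipped, group-relative policy loss (KL penalty omitted).

    new_logprobs / old_logprobs: log-probability of each response under the
    new / old policy, shape (group_size,).
    advantages: group-relative advantages for the same responses."""
    # a) How likely is the new policy to produce the old responses?
    ratio = torch.exp(new_logprobs - old_logprobs)

    # b) Weight each response by how much better or worse it was than its group.
    unclipped = ratio * advantages

    # c) Clip the ratio to prevent extreme policy updates.
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages

    # Take the more pessimistic of the two and average — a scalar loss that
    # backpropagation + gradient descent can then act on.
    return -torch.min(unclipped, clipped).mean()
```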
2. Chain of thought (CoT)
Traditional LLM training follows pre-training → SFT → RL. However, DeepSeek-R1-Zero skipped SFT, allowing the model to explore CoT reasoning directly.
Like humans thinking through a tough question, CoT enables models to break problems into intermediate steps, boosting complex reasoning capabilities. OpenAI's o1 model also leverages this, as noted in its September 2024 report: o1's performance improves with more RL (train-time compute) and more reasoning time (test-time compute).
DeepSeek-R1-Zero exhibited reflective tendencies, autonomously refining its reasoning.
A key graph in the paper (below) showed increased thinking during training, leading to longer (more tokens), more detailed and better responses.

Without explicit programming, it began revisiting past reasoning steps, improving accuracy. This highlights chain-of-thought reasoning as an emergent property of RL training.
The model also had an "aha moment" (below) — a fascinating example of how RL can lead to unexpected and sophisticated outcomes.

Note: Unlike DeepSeek-R1, OpenAI does not show the full exact reasoning chains of thought in o1, as it is concerned about a distillation risk — where someone imitates those reasoning traces and recovers much of the reasoning performance simply through imitation. Instead, o1 shows only summaries of these chains of thought.
Reinforcement Learning from Human Feedback (RLHF)
For tasks with verifiable outputs (e.g. math problems, factual Q&A), AI responses can be evaluated easily. But what about areas like summarisation or creative writing, where there is no single "correct" answer?
This is where human feedback comes in — but naïve RL approaches are unscalable.

Let's take a look at the naive approach with some arbitrary numbers.
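As an illustration (these round numbers are mine, chosen only to show the scale), suppose each RL run involves:

```latex
1{,}000 \text{ update steps} \times 1{,}000 \text{ prompts per step} \times 1{,}000 \text{ sampled responses per prompt} = 1{,}000{,}000{,}000 \text{ human evaluations}
```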

That's one billion human evaluations needed! This is far too costly, slow and unscalable. Hence, a smarter solution is to train an AI "reward model" to learn human preferences, dramatically reducing human effort.
Ranking responses is also easier and more intuitive for humans than absolute scoring.
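This is the core trick behind RLHF: collect human rankings, train a reward model on them, then let that model act as an automatic judge during RL. Below is a minimal sketch of the usual pairwise (Bradley-Terry-style) reward-model objective; the function and variable names are my own.

```python
import torch.nn.functional as F

def reward_model_loss(reward_model, chosen_ids, rejected_ids):
    """Pairwise loss: the human-preferred response should score higher.

    chosen_ids / rejected_ids: token ids of the responses a human ranked
    higher / lower for the same prompt."""
    r_chosen = reward_model(chosen_ids)      # scalar score per sequence
    r_rejected = reward_model(rejected_ids)

    # -log σ(r_chosen - r_rejected) is minimised when the preferred response
    # gets a clearly higher score than the rejected one.
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```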

Upsides of RLHF
- Can be applied to any domain, including creative writing, poetry, summarisation, and other open-ended tasks.
- Ranking outputs is much easier for human labellers than producing creative outputs themselves.
Downsides of RLHF
- The reward model is an approximation — it may not perfectly reflect human preferences.
- RL is good at gaming the reward model — if run for too long, the model might exploit loopholes, generating nonsensical outputs that still get high scores.
Do note that RLHF is not the same as traditional RL.
For empirical, verifiable domains (e.g. math, coding), RL can run indefinitely and discover novel strategies. RLHF, on the other hand, is more like a fine-tuning step to align models with human preferences.
Conclusion
And that's a wrap! I hope you enjoyed Part 2 🙂 If you haven't already read Part 1 — do check it out here.
Got questions or ideas for what I should cover next? Drop them in the comments — I'd love to hear your thoughts. See you in the next article!