<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>PPO, GRPO, & DAPO: Core Concepts and Comparison</title>
    <link href="https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700&display=swap" rel="stylesheet">
    <style>
        body {
            font-family: 'Inter', sans-serif;
            line-height: 1.7;
            margin: 0;
            padding: 0;
            background-color: #fdfdff;
            /* Slightly off-white */
            color: #333;
        }

        .container {
            max-width: 900px;
            margin: 25px auto;
            padding: 25px;
            background-color: #ffffff;
            border-radius: 10px;
            box-shadow: 0 5px 15px rgba(0, 0, 0, 0.08);
        }

        h1,
        h2,
        h3,
        h4 {
            color: #005A9C;
            /* Deep blue for headings */
            margin-top: 1.6em;
            margin-bottom: 0.7em;
        }

        h1 {
            font-size: 2.2em;
            text-align: center;
            border-bottom: 2px solid #007BFF;
            /* Brighter blue accent */
            padding-bottom: 0.5em;
            margin-top: 0;
        }

        h2 {
            font-size: 1.75em;
            border-bottom: 1px solid #e0e0e0;
            /* Light grey separator */
            padding-bottom: 0.3em;
        }

        h3 {
            font-size: 1.4em;
            color: #0067B3;
            /* Medium blue */
        }

        h4 {
            font-size: 1.15em;
            color: #0072C6; /* Slightly lighter blue for sub-examples */
        }

        p {
            margin-bottom: 1.1em;
            text-align: left;
            /* Changed from justify for better readability */
            font-size: 1em;
        }

        code,
        .code-inline {
            background-color: #e8f4ff;
            /* Very light blue */
            padding: 0.2em 0.4em;
            border-radius: 4px;
            font-family: 'SFMono-Regular', Consolas, 'Liberation Mono', Menlo, Courier, monospace;
            font-size: 0.9em;
            border: 1px solid #cce0ff;
            /* Light blue border */
            color: #004a80;
        }

        .formula {
            display: block;
            background-color: #f5f9ff;
            /* Lighter blue-tinted background */
            border-left: 4px solid #007BFF;
            padding: 15px;
            margin: 15px 0;
            font-family: 'SFMono-Regular', Consolas, 'Liberation Mono', Menlo, Courier, monospace;
            font-size: 1em;
            overflow-x: auto;
            border-radius: 5px;
            box-shadow: 0 1px 3px rgba(0, 0, 0, 0.04);
        }

        .highlight-box {
            background-color: #e6f7ff;
            /* Light cyan/blue */
            padding: 15px;
            border-radius: 6px;
            margin-bottom: 1.1em;
            border: 1px solid #b3e0ff;
            /* Cyan/blue border */
            color: #004080;
            /* Darker blue text for contrast */
        }

        .highlight-box strong {
            color: #005A9C;
        }

        strong,
        .bold-text {
            font-weight: 600;
            color: #005A9C;
            /* Consistent bold color */
        }

        ul,
        ol {
            margin-bottom: 1.1em;
            padding-left: 22px;
        }

        li {
            margin-bottom: 0.5em;
        }

        .note-box {
            background-color: #fffbe6;
            /* Light yellow */
            border-left: 4px solid #ffdd77;
            /* Yellow border */
            padding: 10px 15px;
            margin: 15px 0;
            border-radius: 5px;
            color: #665200;
            /* Dark yellow text */
        }

        .example-box {
            background-color: #e6ffed;
            /* Light green */
            border-left: 4px solid #77dd77;
            /* Green border */
            padding: 10px 15px;
            margin: 15px 0;
            border-radius: 5px;
            color: #004d1a;
            /* Dark green text */
        }
         .example-box h4 {
            margin-top: 0.5em;
            color: #004d1a; /* Dark green to match box text */
        }

        table {
            width: 100%;
            border-collapse: collapse;
            margin: 20px 0;
            box-shadow: 0 1px 3px rgba(0, 0, 0, 0.06);
        }

        th,
        td {
            border: 1px solid #dce4f0;
            /* Lighter border for table */
            padding: 10px 12px;
            text-align: left;
            vertical-align: top;
        }

        th {
            background-color: #f0f6ff;
            /* Light blue for table header */
            font-weight: 600;
            color: #005A9C;
        }

        hr {
            border: 0;
            height: 1px;
            background-color: #d0d0d0;
            margin: 2em 0;
        }
    </style>
</head>

<body>
    <div class="container">
        <h1>PPO, GRPO, & DAPO: Core Concepts (Unified Example)</h1>

        <section id="intro">
            <h2>I. Introduction to Policy Optimization</h2>
            <p>
                In Reinforcement Learning (RL), an <strong class="bold-text">agent</strong> learns to make decisions by interacting with an <strong class="bold-text">environment</strong> to maximize cumulative <strong class="bold-text">rewards</strong>. The agent's strategy is its <strong class="bold-text">policy (π)</strong>.
            </p>
            <p>
                <strong class="bold-text">Policy Optimization</strong> algorithms directly learn or improve this policy. Instead of just learning values for states/actions, they find policy parameters (θ) that yield the highest rewards. PPO, GRPO, and DAPO are advanced policy optimization algorithms, particularly relevant for complex tasks like training Large Language Models (LLMs).
            </p>
        </section>
        <hr />

        <section id="ppo">
            <h2>II. Proximal Policy Optimization (PPO)</h2>
            <div class="highlight-box">
                <strong>Core Idea of PPO:</strong> Improve the policy with updates that are not too large (to avoid performance collapse) and not too small (to ensure progress). PPO aims for stable and reliable policy improvement.
            </div>

            <h3>A. Key Concepts in PPO</h3>
            <ul>
                <li>
                    <strong class="bold-text">Policy (π<sub>θ</sub>(a|s)):</strong> The agent's current strategy, mapping states <code class="code-inline">s</code> to action <code class="code-inline">a</code> probabilities, parameterized by <code class="code-inline">θ</code>. In LLMs, <code class="code-inline">s</code> is the current sequence of generated tokens (prompt + previous tokens), and <code class="code-inline">a</code> is the next token to generate.
                </li>
                <li>
                    <strong class="bold-text">Value Function (V(s)):</strong> Learned by a <strong class="bold-text">critic network</strong>, estimates the expected cumulative reward from state <code class="code-inline">s</code>. For an LLM, <code class="code-inline">V(s)</code> would estimate the quality (e.g., from a reward model) of the completion starting from the current token sequence <code class="code-inline">s</code>.
                </li>
                <li>
                    <strong class="bold-text">Advantage Function (A<sup>π</sup>(s,a) or Â<sub>t</sub>):</strong> Quantifies how much better action <code class="code-inline">a</code> is compared to the average action from state <code class="code-inline">s</code>. Often estimated using <strong class="bold-text">Generalized Advantage Estimation (GAE)</strong>:
                    <div class="formula">Â<sub>t</sub><sup>GAE</sup> = ∑<sub>l=0</sub><sup>T-t-1</sup> (γλ)<sup>l</sup>δ<sub>t+l</sub></div>
                    where <code class="code-inline">δ<sub>t+l</sub> = r<sub>t+l</sub> + γV(s<sub>t+l+1</sub>) - V(s<sub>t+l</sub>)</code> is the TD residual. <code class="code-inline">r<sub>t+l</sub></code> is the reward for generating the token <code class="code-inline">a<sub>t+l</sub></code>, and <code class="code-inline">V(s)</code> is the state-value function.
                </li>
                <li>
                    <strong class="bold-text">Probability Ratio (r<sub>t</sub>(θ)):</strong> Compares the probability of an action under the new policy (π<sub>θ</sub>) versus the old policy (π<sub>θ<sub>old</sub></sub>) that collected the data.
                    <div class="formula">r<sub>t</sub>(θ) = π<sub>θ</sub>(a<sub>t</sub>|s<sub>t</sub>) / π<sub>θ<sub>old</sub></sub>(a<sub>t</sub>|s<sub>t</sub>)</div>
                </li>
                <li>
                    <strong class="bold-text">Clipped Surrogate Objective Function (L<sup>CLIP</sup>(θ)):</strong> PPO's core.
                    <div class="formula">L<sup>CLIP</sup>(θ) = Ê<sub>t</sub> [ min( r<sub>t</sub>(θ)Â<sub>t</sub>, clip(r<sub>t</sub>(θ), 1-ε, 1+ε)Â<sub>t</sub> ) ]</div>
                    Here, <code class="code-inline">ε</code> (epsilon, e.g., 0.1 or 0.2) defines the clipping range.
                </li>
            </ul>
            <div class="note-box">
                <strong>Why clipping?</strong> It prevents the new policy from moving too far from the old policy in a single update, ensuring stability.
            </div>
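            <p>
                To make the formulas above concrete, here is a minimal, framework-free Python sketch of GAE and the clipped surrogate, using plain floats rather than tensors (the function names are illustrative, not taken from any particular RL library):
            </p>
            <pre class="formula">
def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation over one generated sequence.
    rewards[t] is the reward for token a_t; values[t] = V(s_t), with one
    extra entry values[T] for the state after the final token."""
    advantages = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # TD residual
        running = delta + gamma * lam * running
        advantages[t] = running
    return advantages

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Per-token clipped surrogate: min(r*A, clip(r, 1-eps, 1+eps)*A)."""
    clipped = max(1.0 - eps, min(1.0 + eps, ratio))
    return min(ratio * advantage, clipped * advantage)
</pre>
            <p>
                In LLM fine-tuning, one such objective value is computed per token; its negative, averaged over the batch, is the loss minimized by gradient descent.
            </p>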

            <h3>B. Example: LLM Answering "Capital of Australia?" with PPO</h3>
            <div class="example-box">
                <h4>Scenario: LLM generating an answer to "What is the capital of Australia?"</h4>
                <ol>
                    <li><strong>Prompt (Initial State):</strong> "What is the capital of Australia?"</li>
                    <li><strong>Data Collection (Rollout):</strong> The LLM (policy π<sub>θ<sub>old</sub></sub>) generates a sequence of tokens. Let's say it generates "Sydney".
                        <ul>
                            <li>State <code class="code-inline">s<sub>0</sub></code>: "What is the capital of Australia?"</li>
                            <li>Action <code class="code-inline">a<sub>0</sub></code>: Token "Sydney"</li>
                            <li>Reward <code class="code-inline">r<sub>0</sub></code>: A reward model evaluates "Sydney" in this context. Let's say it gives a low reward (e.g., 2/10) because it's incorrect.</li>
<li>Next State <code class="code-inline">s<sub>1</sub></code>: "What is the capital of Australia? Sydney" (treated as a completed sequence here to keep this step simple).</li>
                        </ul>
                    </li>
                    <li><strong>Value Estimation (Critic):</strong>
                        <ul>
                            <li>The critic <code class="code-inline">V(s<sub>0</sub>)</code> estimates the expected future reward from the prompt. It might have a moderate value if the LLM sometimes gets it right.</li>
                            <li>The critic <code class="code-inline">V(s<sub>1</sub>)</code> (value of the state *after* generating "Sydney") would likely be low, as "Sydney" is a poor completion.</li>
                        </ul>
                    </li>
                    <li><strong>Advantage Estimation (Â<sub>0</sub> for token "Sydney"):</strong>
                        Using a simplified GAE (single step for clarity): <code class="code-inline">Â<sub>0</sub> ≈ r<sub>0</sub> + γV(s<sub>1</sub>) - V(s<sub>0</sub>)</code>.
                        If <code class="code-inline">r<sub>0</sub>=2</code>, <code class="code-inline">V(s<sub>1</sub>)</code> is low (e.g., 1), and <code class="code-inline">V(s<sub>0</sub>)</code> was higher (e.g., 5, hoping for a better outcome), then Â<sub>0</sub> might be <code class="code-inline">2 + 0.9*1 - 5 = -2.1</code> (negative advantage). This indicates "Sydney" was a worse-than-average choice.
                    </li>
                    <li><strong>Probability Ratio (r<sub>0</sub>(θ)):</strong>
                        The new policy π<sub>θ</sub> will try to decrease the probability of generating "Sydney" given the prompt: <code class="code-inline">π<sub>θ</sub>("Sydney"|s<sub>0</sub>) < π<sub>θ<sub>old</sub></sub>("Sydney"|s<sub>0</sub>)</code>. This makes <code class="code-inline">r<sub>0</sub>(θ) < 1</code>.
                    </li>
                    <li><strong>Clipped Objective in Action:</strong>
                        Let <code class="code-inline">ε = 0.2</code>. Suppose <code class="code-inline">r<sub>0</sub>(θ) = 0.7</code> (policy wants to reduce probability by 30%). Â<sub>0</sub> is negative (-2.1).
                        <ul>
                            <li>Term 1: <code class="code-inline">r<sub>0</sub>(θ)Â<sub>0</sub> = 0.7 * (-2.1) = -1.47</code>.</li>
                            <li>Term 2: <code class="code-inline">clip(r<sub>0</sub>(θ), 1-ε, 1+ε)Â<sub>0</sub> = clip(0.7, 0.8, 1.2)Â<sub>0</sub> = 0.8 * (-2.1) = -1.68</code>.</li>
                            <li>The objective takes <code class="code-inline">min(-1.47, -1.68) = -1.68</code>, i.e., the clipped term. Because <code class="code-inline">r<sub>0</sub>(θ) = 0.7</code> is already below the lower bound <code class="code-inline">1-ε = 0.8</code>, the clipped term <code class="code-inline">0.8 · Â<sub>0</sub></code> no longer depends on θ, so the gradient from this sample provides no incentive to push the probability of "Sydney" down any further in this update. The policy is still discouraged from picking "Sydney", but the size of the decrease is bounded by the clip.
                            <br/>Conversely, if the LLM had generated "Canberra" (high reward, positive advantage), PPO would encourage increasing its probability, with the increase bounded at <code class="code-inline">(1+ε)Â<sub>t</sub></code>.
                            </li>
                        </ul>
                    </li>
                </ol>
            </div>
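            <p>
                As a quick check of the arithmetic in the example above (the numbers are the illustrative ones used in the text, not measured values):
            </p>
            <pre class="formula">
ratio, advantage, eps = 0.7, -2.1, 0.2
clipped_ratio = max(1 - eps, min(1 + eps, ratio))  # clip(0.7, 0.8, 1.2) = 0.8
term1 = ratio * advantage                          # 0.7 * -2.1 = -1.47
term2 = clipped_ratio * advantage                  # 0.8 * -2.1 = -1.68
print(min(term1, term2))                           # -1.68: the clipped term is used
</pre>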

            <h3>C. PPO Strengths & Limitations</h3>
            <ul>
                <li><strong>Strengths:</strong> Simpler to implement than TRPO, good empirical performance, stable, relatively sample efficient.</li>
                <li><strong>Limitations:</strong> Critic can be complex and resource-intensive for LLMs. Sensitive to initialization. Value estimation for long, sparse-reward sequences in LLMs is challenging (value initialization bias, reward signal decay).</li>
            </ul>
        </section>
        <hr />

        <section id="grpo">
            <h2>III. Group Relative Policy Optimization (GRPO)</h2>
            <div class="highlight-box">
                <strong>Core Idea of GRPO:</strong> Simplify PPO for LLMs by <strong class="bold-text">removing the critic network</strong>. Advantage is estimated by comparing a response's reward to the average reward of a "group" of responses generated for the <strong class="bold-text">same input prompt</strong>.
            </div>

            <h3>A. Key Concepts in GRPO</h3>
            <ul>
                <li>
                    <strong class="bold-text">Critic-less:</strong> No learned value function <code class="code-inline">V(s)</code>. This reduces memory and computation.
                </li>
                <li>
                    <strong class="bold-text">Group Sampling:</strong> For an input prompt <code class="code-inline">q</code>, generate <code class="code-inline">G</code> responses <code class="code-inline">{o<sub>1</sub>, ..., o<sub>G</sub>}</code> using the current policy.
                </li>
                <li>
                    <strong class="bold-text">Group-Relative Advantage Estimation (Â(o<sub>i</sub>)):</strong> The advantage for response <code class="code-inline">o<sub>i</sub></code> is its reward <code class="code-inline">R(o<sub>i</sub>)</code> standardized relative to the group's rewards:
                    <div class="formula">Â(o<sub>i</sub>) = (R(o<sub>i</sub>) - μ<sub>G</sub>) / (σ<sub>G</sub> + ϵ<sub>norm</sub>)</div>
                    where <code class="code-inline">μ<sub>G</sub></code> is the mean reward of the group, <code class="code-inline">σ<sub>G</sub></code> is the standard deviation of rewards in the group, and <code class="code-inline">ϵ<sub>norm</sub></code> is for numerical stability. This advantage is typically applied to all tokens in the response <code class="code-inline">o<sub>i</sub></code>.
                </li>
                <li>
                    <strong class="bold-text">Objective Function (L<sub>GRPO</sub>(θ)):</strong> Uses the PPO-style clipped objective but with the group-relative advantage. Often includes a KL-divergence penalty term <code class="code-inline">β · KL(π<sub>θ</sub> || π<sub>ref</sub>)</code>.
                     <div class="formula">L<sub>GRPO</sub>(θ) = E [min( r<sub>θ</sub>(o)Â(o), clip(r<sub>θ</sub>(o), 1-ε, 1+ε)Â(o) )] - β KL(π<sub>θ</sub> || π<sub>ref</sub>)</div>
                </li>
            </ul>
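            <p>
                A minimal Python sketch of this group-relative advantage computation (plain Python, no particular library assumed; <code class="code-inline">group_relative_advantages</code> is an illustrative name):
            </p>
            <pre class="formula">
def group_relative_advantages(rewards, eps_norm=1e-8):
    """Standardize each response's reward against its group: (R - mean) / (std + eps)."""
    g = len(rewards)
    mean = sum(rewards) / g
    std = (sum((r - mean) ** 2 for r in rewards) / g) ** 0.5  # population std
    return [(r - mean) / (std + eps_norm) for r in rewards]

# Rewards from the example below ("Sydney", "Canberra", "Melbourne"):
print(group_relative_advantages([2, 10, 3]))  # approx [-0.84, 1.40, -0.56]
</pre>
            <p>
                The example in the next subsection rounds σ<sub>G</sub> to 3.5, which gives the slightly different values -0.86, +1.43, and -0.57.
            </p>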

            <h3>B. Example: LLM Answering "Capital of Australia?" with GRPO</h3>
            <div class="example-box">
                <h4>Scenario: LLM generating an answer to "What is the capital of Australia?"</h4>
                <ol>
                    <li> <strong>Prompt:</strong> "What is the capital of Australia?"</li>
                    <li> <strong>Group Sampling (G=3 responses generated by current policy π<sub>θ<sub>old</sub></sub>):</strong>
                        <ul>
                            <li>o<sub>1</sub>: "Sydney" (Reward R(o<sub>1</sub>)=2 from a reward model)</li>
                            <li>o<sub>2</sub>: "Canberra" (Reward R(o<sub>2</sub>)=10)</li>
                            <li>o<sub>3</sub>: "Melbourne" (Reward R(o<sub>3</sub>)=3)</li>
                        </ul>
                    </li>
                    <li> <strong>Group Stats:</strong> Mean reward μ<sub>G</sub> = (2+10+3)/3 = 5. Population standard deviation σ<sub>G</sub> ≈ 3.5; ϵ<sub>norm</sub> is negligible here.</li>
                    <li> <strong>Group-Relative Advantages:</strong>
                        <ul>
                            <li>Â(o<sub>1</sub>) for "Sydney" = (2-5)/3.5 ≈ -0.86 (negative advantage)</li>
                            <li>Â(o<sub>2</sub>) for "Canberra" = (10-5)/3.5 ≈ +1.43 (positive advantage)</li>
                            <li>Â(o<sub>3</sub>) for "Melbourne" = (3-5)/3.5 ≈ -0.57 (negative advantage)</li>
                        </ul>
                        These advantages apply to the entire sequences. For example, every token in "Sydney" gets an advantage of -0.86.
                    </li>
                    <li><strong>Probability Ratio (r<sub>θ</sub>(o<sub>i</sub>)):</strong> This is <code class="code-inline">π<sub>θ</sub>(o<sub>i</sub>|prompt) / π<sub>θ<sub>old</sub></sub>(o<sub>i</sub>|prompt)</code>.</li>
                    <li> <strong>Policy Update:</strong>
                        <ul>
                            <li>For o<sub>1</sub> ("Sydney"): Â(o<sub>1</sub>) is negative. The policy π<sub>θ</sub> will be updated to decrease the probability of generating "Sydney". The update is clipped.</li>
                            <li>For o<sub>2</sub> ("Canberra"): Â(o<sub>2</sub>) is positive. The policy π<sub>θ</sub> will be updated to increase the probability of generating "Canberra". The update is clipped.</li>
                            <li>For o<sub>3</sub> ("Melbourne"): Â(o<sub>3</sub>) is negative. The policy π<sub>θ</sub> will be updated to decrease the probability of generating "Melbourne". The update is clipped.</li>
                        </ul>
                        The KL term helps ensure π<sub>θ</sub> doesn't stray too far from a reference policy (e.g., the SFT model).
                    </li>
                </ol>
            </div>

            <h3>C. GRPO Strengths & Limitations</h3>
            <ul>
                <li><strong>Strengths:</strong> Computationally efficient for LLMs (no critic), stable due to clipping and KL regularization, flexible reward sources.</li>
                <li><strong>Limitations:</strong>
                    <ul>
                        <li><strong class="bold-text">Zero-Advantage Problem:</strong> If all responses in a group have the same reward (e.g., all "Canberra" or all "Sydney"), <code class="code-inline">σ<sub>G</sub></code> is zero, leading to zero advantage for all samples in that group and thus no learning signal.</li>
                        <li>Potential optimization bias (e.g., favoring longer responses if length correlates with reward, as advantage is sequence-level).</li>
                        <li>Advantage is applied at sequence level, not token level, which can be less precise.</li>
                    </ul>
                </li>
            </ul>
        </section>
        <hr />

        <section id="dapo">
            <h2>IV. Decoupled Clip and Dynamic Sampling Policy Optimization (DAPO)</h2>
            <div class="highlight-box">
                <strong>Core Idea of DAPO:</strong> Refine GRPO for LLMs by addressing issues like <strong class="bold-text">entropy collapse</strong> and the <strong class="bold-text">zero-advantage problem</strong>, using techniques like decoupled clipping and dynamic sampling. It often uses token-level advantages.
            </div>

            <h3>A. Key Innovations in DAPO</h3>
            <ul>
                <li>
                    <strong class="bold-text">Decoupled Clipping ("Clip-Higher" Strategy):</strong>
                    Uses asymmetric clipping bounds for the probability ratio <code class="code-inline">r<sub>t</sub>(θ)</code>, e.g., <code class="code-inline">[1-ε<sub>low</sub>, 1+ε<sub>high</sub>]</code>, where <code class="code-inline">ε<sub>high</sub></code> (e.g., 0.28) > <code class="code-inline">ε<sub>low</sub></code> (e.g., 0.2).
                    <div class="note-box">
                        <strong>Rationale:</strong> When advantage is positive, allows larger probability increases for beneficial, initially low-probability tokens, promoting exploration.
                    </div>
                </li>
                <li>
                    <strong class="bold-text">Dynamic Sampling:</strong>
                    Filters out and replaces training prompts/groups where responses have homogenous rewards (low <code class="code-inline">σ<sub>G</sub></code>).
                    <div class="note-box">
                        <strong>Rationale:</strong> Mitigates GRPO's zero-advantage problem, ensuring batches have "informative" samples.
                    </div>
                </li>
                <li>
                    <strong class="bold-text">Token-Level Policy Gradient Loss:</strong>
                    Calculates loss at the token level and averages across all tokens in the batch. GRPO's group-relative advantage can be adapted for token-level application (e.g., by assigning the sequence advantage to each token, or using more fine-grained token-level reward signals if available).
                </li>
            </ul>
            <p>DAPO typically builds on GRPO's critic-less group-relative advantage estimation but incorporates these enhancements.</p>
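            <p>
                A minimal sketch of the two sampling-and-clipping ideas above in plain Python (the function names and the <code class="code-inline">min_std</code> threshold are illustrative assumptions, not an official DAPO API):
            </p>
            <pre class="formula">
def decoupled_clip_objective(ratio, advantage, eps_low=0.2, eps_high=0.28):
    """Clip-Higher: asymmetric bounds [1 - eps_low, 1 + eps_high] on the ratio."""
    clipped = max(1.0 - eps_low, min(1.0 + eps_high, ratio))
    return min(ratio * advantage, clipped * advantage)

def keep_group(rewards, min_std=1e-6):
    """Dynamic sampling filter: drop a group whose rewards are (near-)identical,
    since its group-relative advantages would all be ~0."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return std > min_std

print(keep_group([2, 10, 3]))     # True  -> keep (diverse rewards)
print(keep_group([10, 10, 10]))   # False -> filter out and resample another prompt
print(decoupled_clip_objective(1.35, 1.43))  # 1.28 * 1.43 ≈ 1.83, vs 1.2 * 1.43 with a symmetric clip
</pre>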

            <h3>B. Example: LLM Answering "Capital of Australia?" with DAPO</h3>
            <div class="example-box">
                <h4>Scenario: LLM generating an answer to "What is the capital of Australia?"</h4>
                <ol>
                    <li><strong>Prompt:</strong> "What is the capital of Australia?"</li>
                    <li><strong>Group Sampling (G=3, by π<sub>θ<sub>old</sub></sub>):</strong>
                        <ul>
                            <li>o<sub>1</sub>: "Sydney" (Reward R(o<sub>1</sub>)=2)</li>
                            <li>o<sub>2</sub>: "Canberra" (Reward R(o<sub>2</sub>)=10)</li>
                            <li>o<sub>3</sub>: "Melbourne" (Reward R(o<sub>3</sub>)=3)</li>
                        </ul>
                    </li>
                     <li> <strong>Group Stats & Advantages (as in GRPO):</strong> μ<sub>G</sub>=5, σ<sub>G</sub>≈3.5.
                        <ul>
                            <li>Â(o<sub>1</sub>) ≈ -0.86</li>
                            <li>Â(o<sub>2</sub>) ≈ +1.43</li>
                            <li>Â(o<sub>3</sub>) ≈ -0.57</li>
                        </ul>
                        These advantages are applied at the token level for DAPO's loss calculation. So, each token in "Canberra" gets an advantage of +1.43.
                    </li>
                    <li><strong>Dynamic Sampling in Action:</strong>
                        This group (rewards 2, 10, 3) is diverse (σ<sub>G</sub> is non-zero), so it's likely kept.
                        If another prompt, e.g., "What is 1+1?", produced three responses: o<sub>4</sub>:"2" (R=10), o<sub>5</sub>:"Two" (R=10), o<sub>6</sub>:"II" (R=10). Here, σ<sub>G</sub> would be 0. Dynamic Sampling might filter out this "1+1" prompt and its group, replacing it with a prompt that yields more diverse rewards to ensure effective gradients.
                    </li>
                    <li><strong>Decoupled Clipping ("Clip-Higher") in Action (Focus on o<sub>2</sub>: "Canberra"):</strong>
                        Assume the token "Canberra" (or its constituent tokens) had a relatively low probability under π<sub>θ<sub>old</sub></sub>, but it's the correct, high-reward answer. The advantage Â(o<sub>2</sub>) ≈ +1.43 is positive.
                        <ul>
                            <li>The new policy π<sub>θ</sub> aims to significantly increase the probability of "Canberra". Suppose this leads to a token probability ratio <code class="code-inline">r<sub>t</sub>(θ) = 1.35</code> for "Canberra".</li>
                            <li>DAPO uses <code class="code-inline">ε<sub>low</sub>=0.2</code>, <code class="code-inline">ε<sub>high</sub>=0.28</code>. Clipping range: <code class="code-inline">[1-0.2, 1+0.28] = [0.8, 1.28]</code>.</li>
                            <li>The clipped ratio is <code class="code-inline">clip(1.35, 0.8, 1.28) = 1.28</code>.</li>
                            <li>The update for "Canberra" tokens is based on <code class="code-inline">1.28 * Â(o<sub>2</sub>)</code>.
                                Standard PPO/GRPO (ε=0.2) would clip at <code class="code-inline">1.2 * Â(o<sub>2</sub>)</code>.
                                DAPO's "clip-higher" allows a larger update (<code class="code-inline">1.28 * Â(o<sub>2</sub>)</code > vs <code class="code-inline">1.2 * Â(o<sub>2</sub>)</code >), more strongly reinforcing this correct, high-advantage token, especially if it was initially unlikely.
                            </li>
                        </ul>
                    </li>
                    <li><strong>Token-Level Policy Gradient Loss:</strong> The loss is computed for each token in each response using its assigned advantage (e.g., all tokens in "Canberra" use Â(o<sub>2</sub>)). These token losses are then averaged across all tokens in the batch.</li>
                </ol>
            </div>
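            <p>
                To show what token-level averaging changes relative to GRPO's sequence-level averaging, a small illustrative sketch (the per-token losses here are made-up placeholders for values that would come from the clipped objective):
            </p>
            <pre class="formula">
def sequence_level_mean(losses_per_response):
    """GRPO-style: average within each response, then across responses."""
    per_response = [sum(toks) / len(toks) for toks in losses_per_response]
    return sum(per_response) / len(per_response)

def token_level_mean(losses_per_response):
    """DAPO-style: average over every token in the batch."""
    all_tokens = [t for toks in losses_per_response for t in toks]
    return sum(all_tokens) / len(all_tokens)

losses = [[0.5, 0.5, 0.5, 0.5], [2.0]]   # one long response, one short response
print(sequence_level_mean(losses))       # 1.25: each response weighs equally
print(token_level_mean(losses))          # 0.8:  each token weighs equally
</pre>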

            <h3>C. DAPO Strengths & Considerations</h3>
            <ul>
                <li><strong>Strengths:</strong> Addresses GRPO's zero-advantage problem and LLM entropy collapse. Improves training efficiency and stability for complex reasoning.</li>
                <li><strong>Considerations:</strong> More hyperparameters (ε<sub>low</sub>, ε<sub>high</sub>). Effectiveness of dynamic sampling can be task-dependent. Relies on good reward function design.
                </li>
            </ul>
        </section>
        <hr />

        <section id="comparison">
            <h2>V. Comparative Overview & Evolution</h2>
            <p>
                The progression from PPO to GRPO to DAPO reflects an evolution driven by the need to apply RL effectively to increasingly large and complex models, especially LLMs; the sections above trace that evolution through one shared running example.
            </p>

            <h3>Key Differences at a Glance (Unified Example: "Capital of Australia?"):</h3>
            <table>
                <thead>
                    <tr>
                        <th>Feature</th>
                        <th>PPO</th>
                        <th>GRPO</th>
                        <th>DAPO</th>
                    </tr>
                </thead>
                <tbody>
                    <tr>
                        <td><strong>Critic Usage</strong></td>
                        <td>Yes (Learned V(s) estimates value of "What is capital of Aus? ...token_sequence")</td>
                        <td>No (Critic-less)</td>
                        <td>No (Critic-less)</td>
                    </tr>
                    <tr>
                        <td><strong>Advantage Estimation for "Canberra"</strong></td>
                        <td>GAE: Uses reward for "Canberra" & V(s) from critic. Token-level.</td>
                        <td>Group-Relative: Compares R("Canberra") to R("Sydney"), R("Melbourne"). Sequence-level.</td>
                        <td>Group-Relative (like GRPO), but often applied for token-level loss.</td>
                    </tr>
                    <tr>
                        <td><strong>Clipping for "Canberra" (if <code class="code-inline">r<sub>t</sub>(θ)=1.35</code>, Â > 0)</strong></td>
                        <td>Symmetric (ε=0.2): <code class="code-inline">clip(1.35, 0.8, 1.2)Â = 1.2Â</code></td>
                        <td>Symmetric (ε=0.2): <code class="code-inline">clip(1.35, 0.8, 1.2)Â = 1.2Â</code></td>
                        <td>Decoupled (ε<sub>high</sub>=0.28): <code class="code-inline">clip(1.35, 0.8, 1.28)Â = 1.28Â</code> (allows larger increase)</td>
                    </tr>
                     <tr>
                        <td><strong>Handling Homogenous Rewards (e.g., all outputs "Canberra" R=10)</strong></td>
                        <td>Critic still provides value estimates; GAE can be non-zero if V(s) differs.</td>
                        <td>Zero-Advantage Problem: σ<sub>G</sub>=0, so Â=0 for all. No learning signal from this batch.</td>
                        <td>Dynamic Sampling: Filters out this batch to replace with more diverse one.</td>
                    </tr>
                    <tr>
                        <td><strong>Primary Stability/Exploration Mechanisms</strong></td>
                        <td>Clipping, Entropy Bonus (optional)</td>
                        <td>Clipping, KL Regularization, Group Normalization</td>
                        <td>Decoupled Clipping, Dynamic Sampling, Token-level Loss</td>
                    </tr>
                    <tr>
                        <td><strong>Primary Application Focus</strong></td>
                        <td>General RL</td>
                        <td>LLM fine-tuning (general)</td>
                        <td>Advanced LLM reasoning, mitigating specific LLM RL issues</td>
                    </tr>
                </tbody>
            </table>
        </section>
        <hr />
        <section id="conclusion">
            <h2>VI. Conclusion</h2>
            <p>
                PPO, GRPO, and DAPO represent a significant lineage of policy optimization algorithms. Using a consistent example like an LLM answering a factual question, we can see how PPO provides a robust foundation with its critic-based advantage. GRPO adapts these principles for resource-constrained LLM training by introducing a critic-less, group-based advantage, simplifying computation. DAPO further refines this with specialized techniques like decoupled clipping and dynamic sampling to tackle nuanced challenges in LLM training, such as maintaining exploration and improving data efficiency when rewards might be homogenous or certain correct tokens are initially rare. Understanding their core mechanisms and evolutionary path is key to applying them effectively.
            </p>
        </section>

    </div>
</body>

</html>