Commit: push
- dist/index.html (+1 -2)
- src/index.html (+1 -2)
dist/index.html

@@ -75,8 +75,7 @@
       <p>
         Thousands of GPUs humming in perfect harmony. That's what it takes to train today's most powerful AI models – a symphony of computing power that until recently was the exclusive domain of elite research labs. Open source has transformed this landscape, but not completely. Yes, you can download the latest <a href="https://huggingface.co/meta-llama">Llama</a> or <a href="https://huggingface.co/deepseek-ai">DeepSeek</a> models. Yes, you can read their <a href="https://ai.meta.com/research/publications/the-llama-3-herd-of-models/">technical</a> and <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf">experiment</a> reports. But the most challenging part – the training code, and the knowledge and techniques necessary to coordinate GPUs to train these massive systems – remains shrouded in complexity and spread across a series of disconnected papers and often private codebases.
       </p>
-      <aside>Reading time: a
-      <br>For the best reading experience, we recommend not using a mobile phone.</aside>
+      <aside>Reading time: 2-4 days. <br>For the best reading experience, we recommend not using a mobile phone.</aside>
       <p>
         This open-source book is here to change that. Starting from the basics, we'll walk you through the knowledge necessary to scale the training of large language models from one GPU to tens, hundreds, and even thousands of GPUs, illustrating theory with practical code examples and reproducible benchmarks.
       </p>
src/index.html

@@ -75,8 +75,7 @@
       <p>
         Thousands of GPUs humming in perfect harmony. That's what it takes to train today's most powerful AI models – a symphony of computing power that until recently was the exclusive domain of elite research labs. Open source has transformed this landscape, but not completely. Yes, you can download the latest <a href="https://huggingface.co/meta-llama">Llama</a> or <a href="https://huggingface.co/deepseek-ai">DeepSeek</a> models. Yes, you can read their <a href="https://ai.meta.com/research/publications/the-llama-3-herd-of-models/">technical</a> and <a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf">experiment</a> reports. But the most challenging part – the training code, and the knowledge and techniques necessary to coordinate GPUs to train these massive systems – remains shrouded in complexity and spread across a series of disconnected papers and often private codebases.
       </p>
-      <aside>Reading time: a
-      <br>For the best reading experience, we recommend not using a mobile phone.</aside>
+      <aside>Reading time: 2-4 days. <br>For the best reading experience, we recommend not using a mobile phone.</aside>
       <p>
         This open-source book is here to change that. Starting from the basics, we'll walk you through the knowledge necessary to scale the training of large language models from one GPU to tens, hundreds, and even thousands of GPUs, illustrating theory with practical code examples and reproducible benchmarks.
       </p>