Update readme
README.md
CHANGED
@@ -7,32 +7,22 @@ sdk: static
 pinned: false
 ---
 
-<
-This organization gathers all the collaborators who participated in the collaborative training of the model <b>Insert model name here with href</b>. <br>
-</p>
-<p class="lg:col-span-3">
-
-</
-
-
-
-
-
-π the frequently updated <a class="underline" >model</a> <br>
-</p>
-
-<a
-
-
-
-
-
-
-
-
-
-class="w-full h-40 mb-2 bg-gray-900 group-hover:bg-gray-850 rounded-lg flex items-start justify-start overflow-hidden"
->
-<iframe src="https://www.youtube.com/embed/zdVsg5zsGdc" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0"></iframe>
-</div>
-</a>
+This organization is a part of the NeurIPS 2021 demonstration <a href="https://training-transformers-together.github.io/">"Training Transformers Together"</a>.
+
+In this demo, we train a model similar to <a target="_blank" href="https://openai.com/blog/dall-e/">OpenAI DALL-E</a>,
+a Transformer "language model" that generates images from text descriptions.
+It is trained on <a target="_blank" href="https://laion.ai/laion-400-open-dataset/">LAION-400M</a>,
+the world's largest openly available image-text-pair dataset with 400 million samples. Our model is based on
+the <a target="_blank" href="https://github.com/lucidrains/DALLE-pytorch">dalle-pytorch</a> implementation
+by <a target="_blank" href="https://github.com/lucidrains">Phil Wang</a> with a few tweaks to make it communication-efficient.
+
+See details about how to join and how it works on <a target="_blank" href="https://training-transformers-together.github.io/">our website</a>.
+
+The organization gathers people participating in the collaborative training and provides links to the necessary resources:
+
+- π Starter kits for **Google Colab** and **Kaggle** (easy way to join the training)
+- π [Dashboard](https://huggingface.co/spaces/training-transformers-together/Dashboard) (the current training state: loss, number of peers, etc.)
+- π [Model](https://huggingface.co/training-transformers-together/dalle-demo) (the latest model checkpoint)
+- π [Dataset](https://huggingface.co/datasets/laion/laion_100m_vqgan_f8)
+
+Feel free to reach us on [Discord](https://discord.gg/uGugx9zYvN) if you have any questions π
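The updated README links to the dalle-pytorch repository but contains no code, so here is a minimal sketch of what a DALL-E-style text-to-image Transformer looks like with that package. Every size and hyperparameter below is an illustrative assumption rather than the demo's actual configuration, and the communication-efficiency tweaks mentioned in the README are not shown.

```python
# Minimal dalle-pytorch sketch (illustrative only; all sizes are assumptions,
# not the demo's real configuration).
import torch
from dalle_pytorch import DiscreteVAE, DALLE

# Small discrete VAE that turns images into token sequences. The demo's dataset
# is instead pre-encoded with a VQGAN f8 model, per the dataset link above.
vae = DiscreteVAE(
    image_size=256,
    num_layers=3,
    num_tokens=8192,
    codebook_dim=512,
    hidden_dim=64,
)

# DALL-E-style "language model" over concatenated text and image tokens.
dalle = DALLE(
    dim=512,
    vae=vae,                 # image sequence length / vocab inferred from the VAE
    num_text_tokens=10000,   # text vocabulary size
    text_seq_len=256,
    depth=4,
    heads=8,
)

text = torch.randint(0, 10000, (2, 256))   # dummy token ids
images = torch.randn(2, 3, 256, 256)       # dummy images

loss = dalle(text, images, return_loss=True)
loss.backward()                             # one ordinary local training step
```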
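The entries in the resource list are regular Hugging Face Hub repositories, so they can be fetched with standard tooling. A minimal sketch, assuming the `datasets` and `huggingface_hub` packages; the repo IDs come from the links above, while the `train` split name is an assumption:

```python
# Sketch of fetching the linked resources; repo IDs come from the README,
# but the "train" split name is an assumption.
from datasets import load_dataset
from huggingface_hub import snapshot_download

# Stream the image-text data (pre-encoded with a VQGAN f8 model) rather than
# downloading all 100M samples up front.
ds = load_dataset("laion/laion_100m_vqgan_f8", split="train", streaming=True)
print(next(iter(ds)))  # inspect one sample's fields

# Grab the latest collaboratively trained checkpoint files from the model repo.
checkpoint_dir = snapshot_download(repo_id="training-transformers-together/dalle-demo")
print(checkpoint_dir)
```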