robot-bengali-2 committed
Commit b95bf0a · unverified · 1 Parent(s): 4a196a2

Update readme

Files changed (1): README.md (+17, -27)
README.md CHANGED
@@ -7,32 +7,22 @@ sdk: static
  pinned: false
  ---

- <p class="lg:col-span-3">
- This organization gathers all the collaborators who participated in the collaborative training of the model <b>Insert model name here with href</b>. <br>
- </p>
- <p class="lg:col-span-3">
-
- </p>
- <p>
- Where to start?<br>
- <!-- TODO: add the links -->
- 👉 the <a class="underline" href="https://huggingface.co/spaces/training-transformers-together/how-to-join">"How to join?" spaces </a> <br>
- 👉 the <a href="https://huggingface.co/spaces/training-transformers-together/Dashboard" class="underline" >"Dashboard" spaces </a> <br>
- 👉 the frequently updated <a class="underline" >model</a> <br>
- </p>
-
- <a class="block overflow-hidden">
- <div
- class="w-full h-40 mb-2 bg-gray-900 group-hover:bg-gray-850 rounded-lg flex items-start justify-start overflow-hidden"
- >
- <iframe src="https://www.youtube.com/embed/v8ShbLasRF8" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0"></iframe>
- <div href="https://www.youtube.com/watch?v=v8ShbLasRF8&t=6s" class="underline">What is a collaborative training?</div>
- </div>
- </a>
- <a class="block overflow-hidden group">
- <div
- class="w-full h-40 mb-2 bg-gray-900 group-hover:bg-gray-850 rounded-lg flex items-start justify-start overflow-hidden"
- >
- <iframe src="https://www.youtube.com/embed/zdVsg5zsGdc" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0"></iframe>
- </div>
- </a>
+ This organization is a part of the NeurIPS 2021 demonstration <a href="https://training-transformers-together.github.io/">"Training Transformers Together"</a>.
+
+ In this demo, we train a model similar to <a target="_blank" href="https://openai.com/blog/dall-e/">OpenAI DALL-E</a>,
+ a Transformer "language model" that generates images from text descriptions.
+ It is trained on <a target="_blank" href="https://laion.ai/laion-400-open-dataset/">LAION-400M</a>,
+ the world's largest openly available image-text-pair dataset with 400 million samples. Our model is based on
+ the <a target="_blank" href="https://github.com/lucidrains/DALLE-pytorch">dalle-pytorch</a> implementation
+ by <a target="_blank" href="https://github.com/lucidrains">Phil Wang</a>, with a few tweaks to make it communication-efficient.
+
+ See details about how to join and how it works on <a target="_blank" href="https://training-transformers-together.github.io/">our website</a>.
+
+ The organization gathers people participating in the collaborative training and provides links to the necessary resources:
+
+ - 👉 Starter kits for **Google Colab** and **Kaggle** (an easy way to join the training)
+ - 👉 [Dashboard](https://huggingface.co/spaces/training-transformers-together/Dashboard) (the current training state: loss, number of peers, etc.)
+ - 👉 [Model](https://huggingface.co/training-transformers-together/dalle-demo) (the latest model checkpoint)
+ - 👉 [Dataset](https://huggingface.co/datasets/laion/laion_100m_vqgan_f8)
+
+ Feel free to reach out to us on [Discord](https://discord.gg/uGugx9zYvN) if you have any questions 🙂
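The updated README describes a DALL-E-style setup: captions and images are turned into discrete tokens (the linked dataset is a VQGAN-f8 encoding of LAION), concatenated into one sequence, and a Transformer "language model" predicts each next token. A minimal sketch of that token-stream idea follows; all vocabulary sizes and token IDs here are hypothetical illustrations, not the demo's actual configuration:

```python
# Hypothetical sizes for illustration only (not the demo's real config).
TEXT_VOCAB = 16384   # caption (BPE) vocabulary size
IMAGE_VOCAB = 8192   # VQGAN codebook size (f8: one code per 8x8 pixel patch)

def build_sequence(text_tokens, image_tokens):
    """Concatenate caption tokens and image-code tokens into one stream.

    Image-token IDs are shifted past the text vocabulary so both modalities
    can share a single embedding table without ID collisions.
    """
    return list(text_tokens) + [TEXT_VOCAB + t for t in image_tokens]

def shift_for_training(sequence):
    """Next-token prediction: the model reads sequence[:-1] and is trained
    to predict sequence[1:], one position at a time."""
    return sequence[:-1], sequence[1:]

caption = [12, 7, 305]         # made-up BPE IDs for a short caption
image_codes = [4090, 11, 777]  # made-up VQGAN codes (a real image yields many more)

seq = build_sequence(caption, image_codes)
inputs, targets = shift_for_training(seq)
```

At generation time the same model is fed only the caption tokens and samples the image codes autoregressively; a VQGAN decoder then maps those codes back to pixels.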