---
title: README
emoji: π
colorFrom: pink
colorTo: indigo
sdk: static
pinned: false
---
<!-- The classes below are necessary for correct rendering -->
<div class="lg:col-span-3">
<p class="mb-2">
This organization is a part of the NeurIPS 2021 demonstration <u><a href="https://training-transformers-together.github.io/">"Training Transformers Together"</a></u>.
</p>
<p class="mb-2">
In this demo, we train a model similar to <u><a target="_blank" href="https://openai.com/blog/dall-e/">OpenAI DALL-E</a></u>:
a Transformer "language model" that generates images from text descriptions.
Training happens collaboratively: volunteers from all over the Internet contribute to the training using hardware available to them.
We use <u><a target="_blank" href="https://laion.ai/laion-400-open-dataset/">LAION-400M</a></u>,
the world's largest openly available image-text-pair dataset with 400 million samples. Our model is based on
the <u><a target="_blank" href="https://github.com/lucidrains/DALLE-pytorch">dalle-pytorch</a></u> implementation
by <u><a target="_blank" href="https://github.com/lucidrains">Phil Wang</a></u> with a few tweaks to make it communication-efficient.
</p>
<p class="mb-2">
See details about how to join and how it works on <u><a target="_blank" href="https://training-transformers-together.github.io/">our website</a></u>.
</p>
<p class="mb-2">
This organization gathers people participating in the collaborative training and provides links to the necessary resources:
</p>
<ul class="mb-2">
<li>Starter kits for <u><a target="_blank" href="https://colab.research.google.com/drive/1BqTWcfsvNQwQqqCRKMKp1_jvQ5L1BhCY?usp=sharing">Google Colab</a></u> and <u><a target="_blank" href="https://www.kaggle.com/yhn112/training-transformers-together/">Kaggle</a></u> (an easy way to join the training)</li>
<li><u><a target="_blank" href="https://huggingface.co/spaces/training-transformers-together/Dashboard">Dashboard</a></u> (the current training state: loss, number of peers, etc.)</li>
<li><u><a target="_blank" href="https://colab.research.google.com/drive/1Vkb-4nhEEH1a5vrKtpL4MTNiUTPdpPUl?usp=sharing">Colab notebook for running inference</a></u></li>
<li><u><a target="_blank" href="https://huggingface.co/training-transformers-together/dalle-demo-v1">Model weights</a></u> (the latest checkpoint)</li>
<li>Weights &amp; Biases plots for <u><a target="_blank" href="https://wandb.ai/learning-at-home/dalle-hivemind/runs/3l7q56ht">aux peers</a></u> (aggregating the metrics) and actual <u><a target="_blank" href="https://wandb.ai/learning-at-home/dalle-hivemind-trainers">trainers</a></u> (contributing their GPUs)</li>
<li><u><a target="_blank" href="https://github.com/learning-at-home/dalle-hivemind">Code</a></u></li>
<li><u><a target="_blank" href="https://huggingface.co/datasets/laion/laion_100m_vqgan_f8">Dataset</a></u></li>
</ul>
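<p class="mb-2">
As a quick illustration (not part of the official materials), the checkpoint repository listed above can be addressed programmatically with the <code>huggingface_hub</code> client. The filename below is a placeholder assumption; check the repository's file list for the actual name:
</p>

```python
# Minimal sketch using the official huggingface_hub client.
# The checkpoint filename is an ASSUMPTION ("model.pt"); see the
# repository's file list for the real name.
from huggingface_hub import hf_hub_url

REPO_ID = "training-transformers-together/dalle-demo-v1"
FILENAME = "model.pt"  # hypothetical placeholder

# Build the direct "resolve" URL for the file (no network access needed).
url = hf_hub_url(repo_id=REPO_ID, filename=FILENAME)
print(url)

# To actually download the file (requires network access):
#   from huggingface_hub import hf_hub_download
#   local_path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME)
```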
<p class="mb-2">
Feel free to reach out to us on <u><a target="_blank" href="https://discord.gg/uGugx9zYvN">Discord</a></u> if you have any questions.
</p>
</div>