---
license: apache-2.0
datasets:
- poloclub/diffusiondb
- JourneyDB/JourneyDB
- PixArt-alpha/SAM-LLaVA-Captions10M
pipeline_tag: text-to-image
library_name: diffusers
tags:
- art
---

# AMD Nitro-T

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6355aded9c72a7e742f341a4/J-hy3Wb2lYA3d8mDGRPw-.png)

## Introduction

Nitro-T is a family of text-to-image diffusion models built for highly efficient training. Our models achieve scores on image generation benchmarks competitive with previous efficient-training models while requiring less than one day of training from scratch on 32 AMD Instinct™ MI300X GPUs. The release consists of:

* [Nitro-T-0.6B](https://huggingface.co/amd/Nitro-T-0.6B): a 512px DiT-based model
* [Nitro-T-1.2B](https://huggingface.co/amd/Nitro-T-1.2B): a 1024px MMDiT-based model

⚡️ [Open-source code](https://github.com/AMD-AIG-AIMA/Nitro-T)! Our GitHub repository provides training and data preparation scripts to reproduce our results. We hope this codebase for efficient diffusion model training enables researchers to iterate faster on ideas and lowers the barrier for independent developers to build custom models.

📝 Read our technical blog post for more details on the techniques we used to achieve fast training, along with results and evaluations.

## Details

* **Model architecture**: Nitro-T-0.6B is a text-to-image Diffusion Transformer that resembles the architecture of PixArt-α. It operates in the latent space of the Deep Compression Autoencoder (DC-AE) and uses a Llama 3.2 1B model for text conditioning. Additionally, several techniques were used to reduce training time; see our technical blog post for details.
* **Dataset**: The Nitro-T models were trained on a dataset of ~35M images drawn from both real and synthetic data sources that are openly available on the internet. See our GitHub repo for data processing scripts.
* **Training cost**: Nitro-T-0.6B requires less than 1 day of training from scratch on 32 AMD Instinct™ MI300X GPUs.
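One reason a high-compression autoencoder helps training efficiency is that it shrinks the token grid the transformer attends over. As a rough sketch of the arithmetic (assuming DC-AE's 32× spatial downsampling; the patch size of 1 and the f8 comparison point are illustrative assumptions, not values stated in this card):

```python
def dit_token_count(resolution: int, ae_downsample: int, patch_size: int = 1) -> int:
    """Number of spatial tokens a DiT processes for a square image:
    the autoencoder downsamples the image into a latent grid, and
    patchification further groups latent pixels into tokens."""
    latent_side = resolution // ae_downsample   # latent grid side length
    tokens_per_side = latent_side // patch_size
    return tokens_per_side * tokens_per_side

# DC-AE (32x downsampling) at 512px: 16x16 latent grid
print(dit_token_count(512, 32))  # 256
# A conventional 8x-downsampling VAE at the same resolution
print(dit_token_count(512, 8))   # 4096
```

Since self-attention cost grows quadratically with token count, a 16× reduction in tokens translates into a much larger reduction in per-step compute.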

## Quickstart

You need `diffusers>=0.34` to load the model from the Hugging Face Hub.

```python
import torch
from diffusers import DiffusionPipeline
from transformers import AutoModelForCausalLM

torch.set_grad_enabled(False)  # inference only

device = torch.device("cuda:0")
dtype = torch.bfloat16
resolution = 512
MODEL_NAME = "amd/Nitro-T-0.6B"

# Llama 3.2 1B provides the text conditioning for the pipeline
text_encoder = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B", torch_dtype=dtype)
pipe = DiffusionPipeline.from_pretrained(
    MODEL_NAME,
    text_encoder=text_encoder,
    torch_dtype=dtype,
    trust_remote_code=True,
)
pipe.to(device)

image = pipe(
    prompt="The image is a close-up portrait of a scientist in a modern laboratory. He has short, neatly styled black hair and wears thin, stylish eyeglasses. The lighting is soft and warm, highlighting his facial features against a backdrop of lab equipment and glowing screens.",
    height=resolution, width=resolution,
    num_inference_steps=20,
    guidance_scale=4.0,
).images[0]

image.save("output.png")
```
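The `guidance_scale=4.0` argument controls classifier-free guidance: at each denoising step the pipeline forms both a text-conditioned and an unconditional prediction and extrapolates between them. A minimal sketch of the standard combination rule (illustrative only; the exact implementation inside the pipeline may differ):

```python
def cfg_combine(uncond, cond, guidance_scale):
    """Classifier-free guidance: push the prediction away from the
    unconditional estimate, toward the text-conditioned one."""
    return [u + guidance_scale * (c - u) for u, c in zip(uncond, cond)]

# At guidance_scale=1.0 this reduces to the conditional prediction;
# larger values strengthen prompt adherence at some cost to diversity.
print(cfg_combine([0.0, 1.0], [1.0, 1.0], 4.0))  # [4.0, 1.0]
```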

For more details on training and evaluation, please visit the [GitHub repo](https://github.com/AMD-AIG-AIMA/Nitro-T) and read our technical blog post.

72
+ ## License
73
+ Copyright (c) 2018-2025 Advanced Micro Devices, Inc. All Rights Reserved.
74
+ Licensed under the Apache License, Version 2.0 (the "License");
75
+ you may not use this file except in compliance with the License.
76
+ You may obtain a copy of the License at
77
+ http://www.apache.org/licenses/LICENSE-2.0
78
+ Unless required by applicable law or agreed to in writing, software
79
+ distributed under the License is distributed on an "AS IS" BASIS,
80
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
81
+ See the License for the specific language governing permissions and
82
+ limitations under the License.