---
license: apache-2.0
datasets:
- poloclub/diffusiondb
- JourneyDB/JourneyDB
- PixArt-alpha/SAM-LLaVA-Captions10M
pipeline_tag: text-to-image
library_name: diffusers
tags:
- art
---
# AMD Nitro-T

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6355aded9c72a7e742f341a4/J-hy3Wb2lYA3d8mDGRPw-.png)

## Introduction
Nitro-T is a family of text-to-image diffusion models focused on highly efficient training. Our models achieve scores on image generation benchmarks competitive with previous efficiency-focused models while requiring less than 1 day of training from scratch on 32 AMD Instinct™ MI300X GPUs. The release consists of:

* [Nitro-T-0.6B](https://huggingface.co/amd/Nitro-T-0.6B): a 512px DiT-based model
* [Nitro-T-1.2B](https://huggingface.co/amd/Nitro-T-1.2B): a 1024px MMDiT-based model

⚡️ [Open-source code](https://github.com/AMD-AIG-AIMA/Nitro-T)! Our GitHub repository provides training and data preparation scripts to reproduce our results. We hope this codebase for efficient diffusion model training enables researchers to iterate faster on ideas and lowers the barrier for independent developers to build custom models.

📝 Read our [technical blog post](https://rocm.blogs.amd.com/artificial-intelligence/nitro-t-diffusion/README.html) for more details on the techniques we used to achieve fast training and for results and evaluations.

## Details

* **Model architecture**: Nitro-T-0.6B is a text-to-image Diffusion Transformer that resembles the architecture of PixArt-α. It operates in the latent space of the Deep Compression Autoencoder (DC-AE) and uses a Llama 3.2 1B model for text conditioning. Additionally, several techniques were used to reduce training time. See our technical blog post for more details.
* **Dataset**: The Nitro-T models were trained on a dataset of ~35M images consisting of both real and synthetic data sources that are openly available on the internet. See our GitHub repo for data processing scripts.
* **Training cost**: Nitro-T-0.6B requires less than 1 day of training from scratch on 32 AMD Instinct™ MI300X GPUs.
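
The DC-AE latent space is one reason training is fast: the transformer operates on a heavily downsampled latent grid rather than on pixels. As a rough illustration, assuming DC-AE's f32 configuration with a 32× spatial downsampling factor (the exact factor Nitro-T uses is an assumption here, not stated above), the latent grids are tiny:

```python
def latent_grid(resolution: int, downsample: int = 32) -> tuple[int, int]:
    """Spatial size of the latent grid a DiT operates on, given an
    assumed autoencoder downsampling factor (32 for DC-AE f32)."""
    side = resolution // downsample
    return side, side

print(latent_grid(512))   # (16, 16) for a 512px model
print(latent_grid(1024))  # (32, 32) for a 1024px model
```

Fewer latent tokens means each transformer forward pass is far cheaper, which compounds over the full training run.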


## Quickstart

You must use `diffusers>=0.34` to load the model from the Hugging Face Hub.

```python
import torch
from diffusers import DiffusionPipeline
from transformers import AutoModelForCausalLM

torch.set_grad_enabled(False)  # inference only

device = torch.device('cuda:0')
dtype = torch.bfloat16
resolution = 512  # Nitro-T-0.6B is a 512px model
MODEL_NAME = "amd/Nitro-T-0.6B"

# Llama 3.2 1B provides the text conditioning
text_encoder = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B", torch_dtype=dtype)
pipe = DiffusionPipeline.from_pretrained(
    MODEL_NAME,
    text_encoder=text_encoder,
    torch_dtype=dtype,
    trust_remote_code=True,  # the pipeline class is defined in the model repo
)
pipe.to(device)

image = pipe(
    prompt="The image is a close-up portrait of a scientist in a modern laboratory. He has short, neatly styled black hair and wears thin, stylish eyeglasses. The lighting is soft and warm, highlighting his facial features against a backdrop of lab equipment and glowing screens.",
    height=resolution, width=resolution,
    num_inference_steps=20,
    guidance_scale=4.0,
).images[0]

image.save("output.png")
```
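
The quickstart above produces a different image on each run. For reproducible outputs, diffusers pipelines accept a seeded `torch.Generator` via the standard `generator` call argument (e.g. `pipe(prompt, generator=gen, ...)`). The sketch below only demonstrates the seeding behavior itself, without loading the model:

```python
import torch

# Two generators seeded identically draw identical noise, so a pipeline
# given the same seed starts sampling from the same initial latents.
gen_a = torch.Generator(device="cpu").manual_seed(42)
gen_b = torch.Generator(device="cpu").manual_seed(42)

noise_a = torch.randn(4, generator=gen_a)
noise_b = torch.randn(4, generator=gen_b)
assert torch.equal(noise_a, noise_b)  # same seed, same latent noise
```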

For more details on training and evaluation please visit the [GitHub repo](https://github.com/AMD-AIG-AIMA/Nitro-T) and read our [technical blog post](https://rocm.blogs.amd.com/artificial-intelligence/nitro-t-diffusion/README.html).

## Examples of images generated by Nitro-T

| ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6355aded9c72a7e742f341a4/v5vZgVBtcpIqx8akEuVTB.png) | 
|:--:| 
| *Images generated by Nitro-T-1.2B at 1024px resolution* |

| ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6355aded9c72a7e742f341a4/E7AjjE9WySsM6Hb7DG5VN.png) | 
|:--:| 
| *Images generated by Nitro-T-0.6B at 512px resolution* |

## License
Copyright (c) 2018-2025 Advanced Micro Devices, Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
    http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.