PseudoTerminal X committed
Commit: c983add
Parent(s): 7c80735
Trained for 0 epochs and 9000 steps.
Trained with datasets ['text-embeds-pixart-filter', 'photo-concept-bucket', 'ideogram', 'midjourney-v6-520k-raw', 'sfwbooru', 'nijijourney-v6-520k-raw', 'dalle3'].
Learning rate 1e-06, batch size 24, and 1 gradient accumulation step.
Used the DDPM noise scheduler for training with epsilon prediction type and rescaled_betas_zero_snr=False.
Using 'trailing' timestep spacing.
Base model: terminusresearch/pixart-900m-1024-ft-v0.6
VAE: madebyollin/sdxl-vae-fp16-fix
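The settings above map fairly directly onto a diffusers inference setup. The sketch below is illustrative only and not part of this commit: it assumes the checkpoint loads with diffusers' PixArtSigmaPipeline and simply mirrors the stated noise-schedule options (DDPM, epsilon prediction, rescaled_betas_zero_snr=False, 'trailing' timestep spacing) together with the listed VAE; the prompt and step count are arbitrary.

```python
import torch
from diffusers import AutoencoderKL, DDPMScheduler, PixArtSigmaPipeline

base = "terminusresearch/pixart-900m-1024-ft-v0.6"  # base model named above

# VAE listed in this commit.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)

pipe = PixArtSigmaPipeline.from_pretrained(base, vae=vae, torch_dtype=torch.float16)

# Mirror the training-time noise schedule on top of the base scheduler config.
pipe.scheduler = DDPMScheduler.from_config(
    pipe.scheduler.config,
    prediction_type="epsilon",
    rescale_betas_zero_snr=False,
    timestep_spacing="trailing",
)

pipe.to("cuda")
image = pipe("a lighthouse at dusk, photograph", num_inference_steps=30).images[0]
image.save("sample.png")
```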
- README.md +1 -1
- optimizer.bin +1 -1
- random_states_0.pkl +2 -2
- scheduler.bin +1 -1
- training_state-dalle3.json +2 -2
- training_state-ideogram.json +0 -0
- training_state-midjourney-v6-520k-raw.json +0 -0
- training_state-nijijourney-v6-520k-raw.json +0 -0
- training_state-photo-concept-bucket.json +0 -0
- training_state.json +1 -1
- transformer/diffusion_pytorch_model.safetensors +1 -1
README.md
CHANGED
@@ -1562,7 +1562,7 @@ You may reuse the base model text encoder for inference.
 ## Training settings
 
 - Training epochs: 0
-- Training steps:
+- Training steps: 9000
 - Learning rate: 1e-06
 - Effective batch size: 192
 - Micro-batch size: 24
optimizer.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:db320a3a30003b84ce972a8199c2d6ec5c9e2adf58a70671613f6244e42219cc
 size 5451415117
random_states_0.pkl
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:b82e5a39ad6fa38decb37e51db4eb4c424849104cc2730bedfe9d4624e18d7af
+size 16036
scheduler.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:7a75a07a445e729186793318d013c36b066f1cd41dec2ae7f9542745153948c5
 size 1000
training_state-dalle3.json
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:9dd700e1f52211880f97997332585926e43a97bb66e0834fd792155b3f62c76d
+size 11534817
training_state-ideogram.json
CHANGED
The diff for this file is too large to render.
See raw diff
training_state-midjourney-v6-520k-raw.json
CHANGED
The diff for this file is too large to render.
See raw diff
training_state-nijijourney-v6-520k-raw.json
CHANGED
The diff for this file is too large to render.
See raw diff
training_state-photo-concept-bucket.json
CHANGED
The diff for this file is too large to render.
See raw diff
training_state.json
CHANGED
@@ -1 +1 @@
-{"global_step":
+{"global_step": 9000, "epoch_step": 9000, "epoch": 1, "exhausted_backends": ["sfwbooru"], "repeats": {"ideogram": 7, "sfwbooru": 0}}
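For context, training_state.json is the top-level resume state written alongside this checkpoint. The snippet below is only an illustrative way to inspect it; the field meanings noted in the comments are inferred from the key names rather than from documented behaviour.

```python
import json

# Load the resume state shown in the diff above.
with open("training_state.json") as f:
    state = json.load(f)

print("global_step:", state["global_step"])                      # 9000 at this commit
print("epoch:", state["epoch"], "epoch_step:", state["epoch_step"])

# Assumed semantics: backends whose data was fully consumed this epoch,
# and how many times each backend's data has been repeated so far.
print("exhausted_backends:", state.get("exhausted_backends", []))
print("repeats:", state.get("repeats", {}))
```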
transformer/diffusion_pytorch_model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:ef30c9c45d4932a0a889447742a96e924ed8371c4713b3571e21a594e9c0bdf9
 size 1816969728
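Each binary updated in this commit is stored as a Git LFS pointer, so the recorded sha256 oid and byte size can be used to verify a downloaded artifact. A minimal stdlib-only check; the file path is a placeholder for wherever the weights were fetched locally.

```python
import hashlib
import os

def verify_lfs_object(path: str, expected_oid: str, expected_size: int) -> bool:
    """Return True if the file's size and sha256 match the LFS pointer."""
    if os.path.getsize(path) != expected_size:
        return False
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_oid

# Values taken from the transformer pointer diff above.
print(verify_lfs_object(
    "transformer/diffusion_pytorch_model.safetensors",
    "ef30c9c45d4932a0a889447742a96e924ed8371c4713b3571e21a594e9c0bdf9",
    1816969728,
))
```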