xiaozaa committed on
Commit: 91af88a
Parent: 24e20da

some small fix

Files changed (1): README.md (+6 -1)
README.md CHANGED
@@ -8,6 +8,11 @@ Also inspired by [In-Context LoRA](https://arxiv.org/abs/2410.23775) for prompt
 ---
 **Latest Achievement**
 
+(2024/11/26):
+- Updated the weights. (Still training on the VITON-HD dataset only.)
+- Reduced the fine-tuning weights size (46GB -> 23GB).
+- Weights have better performance on small garment details/text.
+
 (2024/11/25):
 - Released LoRA weights. The LoRA weights achieved FID `6.0675811767578125` on the VITON-HD dataset. Test configuration: scale 30, step 30.
 - Revised Gradio demo. Added Hugging Face Spaces support.
@@ -34,7 +39,7 @@ LORA weights in Hugging Face: 🤗 [catvton-flux-lora-alpha](https://huggingface
 The model weights are trained on the [VITON-HD](https://github.com/shadow2496/VITON-HD) dataset.
 
 ## Prerequisites
-Make sure you are runing the code with VRAM >= 40GB. (I run all my experiments on a 80GB GPU, lower VRAM will cause OOM error. Will support lower VRAM in the future.)
+Make sure you are running the code with VRAM >= 40GB. (I ran all my experiments on an 80GB GPU; lower VRAM will cause an OOM error. Lower-VRAM support is planned.)
 
 ```bash
 bash
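
The Prerequisites line added in this commit sets a hard floor of 40GB of VRAM. As a minimal, optional sketch for checking that floor before launching (assuming `nvidia-smi` is available on the machine; the tool and the exact check are not part of this diff):

```bash
# Minimal sketch: confirm GPU 0 has at least 40GB (40960 MiB) of VRAM,
# matching the README's Prerequisites. Assumes nvidia-smi is installed;
# this helper is an illustration, not something the commit documents.
total_mib=$(nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits -i 0)
if [ "$total_mib" -lt 40960 ]; then
  echo "Only ${total_mib} MiB of VRAM on GPU 0; the README warns this will hit OOM errors." >&2
  exit 1
fi
echo "GPU 0 reports ${total_mib} MiB of VRAM; meets the >= 40GB requirement."
```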