idlebg committed on
Commit b1ebeb9
1 Parent(s): bba16b5

Update README.md

Files changed (1)
  1. README.md +6 -5
README.md CHANGED
@@ -275,16 +275,15 @@ extra_gated_fields:
   "I acknowledge the license agreement stated above and pledge to utilize the Software strictly for non-commercial research": checkbox
 
  ---
- [![Download](https://img.shields.io/badge/-Download%20Model-brightgreen?style=for-the-badge&logo=appveyor)](https://huggingface.co/FFusion/FFusionXL-09-SDXL/blob/main/FFusionXL-09-SDXL.safetensors)
 
  # FFXL Model Card
  <div style="display: flex; flex-wrap: wrap; gap: 2px;">
  <img src="https://img.shields.io/badge/%F0%9F%94%A5%20Refiner%20Compatible-Yes-success">
  <img src="https://img.shields.io/badge/%F0%9F%92%BB%20CLIP--ViT%2FG%20and%20CLIP--ViT%2FL%20tested-Yes-success">
+ <img src="https://img.shields.io/badge/%F0%9F%A7%A8%20FFXL%20Diffusers-available-brightgreen">
  </div>
 
-
- ![FFusionAI_00187_.png](https://cdn-uploads.huggingface.co/production/uploads/6380cf05f496d57325c12194/dtRkHom_cxGSzCV2ReeVc.png)
+ ![ffusionXL.jpg](https://cdn-uploads.huggingface.co/production/uploads/6380cf05f496d57325c12194/iM_2uykpHRQsZgLvIjJJl.jpeg)
 
  ## Model
 
@@ -293,7 +292,7 @@ First, we use a base model to generate latents of the desired output size.
  In the second step, we use a specialized high-resolution model and apply a technique called SDEdit (https://arxiv.org/abs/2108.01073, also known as "img2img")
  to the latents generated in the first step, using the same prompt.
 
- [![Download](https://img.shields.io/badge/-Download%20Model-brightgreen?style=for-the-badge&logo=appveyor)](https://huggingface.co/FFusion/FFusionXL-LoRa-SDXL-Potion-Art-Engine/resolve/main/FFusionXL-LoRa-SDXL-Potion-Art-Engine.safetensors)
+ [![Download](https://img.shields.io/badge/-Download%20Model-brightgreen?style=for-the-badge&logo=appveyor)](https://huggingface.co/FFusion/FFusionXL-09-SDXL/blob/main/FFusionXL-09-SDXL.safetensors)
 
 
  ### Model Description
@@ -304,6 +303,8 @@ to the latents generated in the first step, using the same prompt.
  - **Model Description:** This is a trained model based on SDXL that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses two fixed, pretrained text encoders ([OpenCLIP-ViT/G](https://github.com/mlfoundations/open_clip) and [CLIP-ViT/L](https://github.com/openai/CLIP/tree/main)).
  - **Resources for more information:** [SDXL paper on arXiv](https://arxiv.org/abs/2307.01952).
 
+ ![FFusionAI_00187_.png](https://cdn-uploads.huggingface.co/production/uploads/6380cf05f496d57325c12194/dtRkHom_cxGSzCV2ReeVc.png)
+
  ### Model Sources
 
  - **Demo:** soon
@@ -317,7 +318,7 @@ to the latents generated in the first step, using the same prompt.
 
 
  ### 🧨 Diffusers
- ![ffusionXL.jpg](https://cdn-uploads.huggingface.co/production/uploads/6380cf05f496d57325c12194/iM_2uykpHRQsZgLvIjJJl.jpeg)
+
  Make sure to upgrade diffusers to >= 0.18.0:
  ```
  pip install diffusers --upgrade
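For context, since the last hunk ends inside the `### 🧨 Diffusers` snippet: a minimal sketch, assuming the standard diffusers SDXL API, of how the checkpoint linked by the Download badge would be loaded after that `pip install`. The dtype, device, and prompt are illustrative assumptions, not part of the commit.

```python
# A minimal sketch (not part of the commit): loading the checkpoint that the
# Download badge links to, via the standard diffusers API (>= 0.18.0).
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "FFusion/FFusionXL-09-SDXL",   # repo id taken from the Download badge
    torch_dtype=torch.float16,     # assumption: fp16 to fit consumer GPUs
    use_safetensors=True,          # the badge links a .safetensors file
)
pipe.to("cuda")                    # assumption: a CUDA device is available

image = pipe(prompt="a glowing potion bottle on an alchemist's desk").images[0]
image.save("ffusionxl_sample.png")
```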
 
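Likewise, the two-step process described under `## Model` (a base pass produces latents, then SDEdit/"img2img" refines them with the same prompt) maps onto diffusers roughly as follows. The refiner checkpoint name is an assumption; the excerpt only says the model is "Refiner Compatible" and names none.

```python
# Sketch of the two-step flow described under "## Model": the base model
# emits latents, then an img2img (SDEdit) pass refines them with the same
# prompt. The refiner repo id below is an assumption; the excerpt names none.
import torch
from diffusers import DiffusionPipeline, StableDiffusionXLImg2ImgPipeline

prompt = "a glowing potion bottle on an alchemist's desk"

# Step 1: the base model generates latents of the desired output size.
base = DiffusionPipeline.from_pretrained(
    "FFusion/FFusionXL-09-SDXL",
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")
latents = base(prompt=prompt, output_type="latent").images

# Step 2: a specialized high-resolution model applies SDEdit ("img2img")
# to those latents, using the same prompt.
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-0.9",  # assumed refiner checkpoint
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")
image = refiner(prompt=prompt, image=latents).images[0]
image.save("ffusionxl_refined.png")
```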