Update app.py
app.py CHANGED
@@ -492,21 +492,18 @@ with gr.Blocks(css="style.css") as demo:



-
-
- 1.
-
-
-
- * Increasing the Image CFG weight, or
- * Decreasing the Text CFG weight
- 3. Try generating results with different random seeds by setting "Randomize Seed" and running generation multiple times. You can also try setting "Randomize CFG" to sample new Text CFG and Image CFG values each time.
- 4. Rephrasing the instruction sometimes improves results (e.g., "turn him into a dog" vs. "make him a dog" vs. "as a dog").
- 5. Increasing the number of steps sometimes improves results.
- 6. Do faces look weird? The Stable Diffusion autoencoder has a hard time with faces that are small in the image. Try:
- * Cropping the image so the face takes up a larger portion of the frame.
+ help_text1 = """
+ <b>Instructions</b>:
+ 1. To get results faster without waiting in queue, you can duplicate into a private space with an A100 GPU.
+ 2. To begin, you will have to get an identity-encoding model. You can either sample one from the **weights2weights** space by clicking `Sample New Model`, or invert the identity into a model by uploading an image and clicking `invert`. You can optionally draw over the face in the image to obtain better results. Sampling a model takes around 10 seconds and inversion takes around 2 minutes. After this is done, you can optionally download this model for later use. A model can be uploaded in the "Uploading a model" tab in the Advanced options.
+ 3. After getting a model, an image of the identity will be displayed on the right. You can sample from the model by changing seeds as well as prompts and then clicking `Generate`.
+ 4. The identity in the model can be edited by changing the sliders for various attributes. After clicking `Generate`, you can see how the identity has changed and that the effects are maintained across different seeds and prompts.
  """
+ help_text2 = """<b>Tips</b>:
+ 1. Editing
+ * Cropping the image so the face takes up a larger portion of the frame."""

- gr.Markdown(
+ gr.Markdown(help_text1)
+ gr.Markdown(help_text2)

  demo.queue().launch()
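To make the net effect of the diff easier to read, here is a minimal, runnable sketch of the pattern the commit moves to: the help text lives in two named strings and is rendered with two separate gr.Markdown calls inside the Blocks layout, rather than a single inline gr.Markdown(...) call. The help strings are abbreviated placeholders and the rest of the w2w demo UI is omitted, so this illustrates the structure only and is not the Space's actual app.py.

# Sketch under the assumptions above; the real Space also passes css="style.css" to gr.Blocks.
import gradio as gr

help_text1 = """
<b>Instructions</b>:
1. Get an identity-encoding model (sample a new one or invert an uploaded image), then click `Generate`.
"""

help_text2 = """<b>Tips</b>:
1. Editing
* Cropping the image so the face takes up a larger portion of the frame."""

with gr.Blocks() as demo:
    # ... the rest of the demo UI (image upload, attribute sliders, Generate button) ...
    gr.Markdown(help_text1)  # instructions section
    gr.Markdown(help_text2)  # tips section

demo.queue().launch()

Keeping the two blocks as separate variables means either section can later be moved, restyled, or placed in a different part of the layout without touching the other.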