Spaces:
Paused
Bolded text for user guide
app.py
CHANGED
@@ -81,8 +81,8 @@ generatorSeed = gradio.Slider(label="Generator Seed", info=generatorSeedDesc, ma
 staticLatents = gradio.Checkbox(label="Static Latents", info=staticLatentsDesc, value=True)
 pauseInference = gradio.Checkbox(label="Pause Inference", value=False)

-description = "This generative machine learning demonstration streams stable diffusion outpainting inference live from your camera on your computer or phone to expand your local reality and create an alternate world. High-quality frame-to-frame determinism is a hard problem to solve for latent diffusion models, as the generation is inherently relative to the input noise distributions for the latents, and many factors, such as the inherent Bayer noise from the camera images as well as anything that is altered between camera images (such as focus, white balance, etc.), cause non-determinism between frames. Some methods apply spatiotemporal attention, but this demonstration focuses on control over the input latents to navigate the latent space. Increase the lighting of your physical scene to improve the quality and consistency
-article = "This demonstration should initialize automatically from the default values and run relatively well, but if the output is not an ideal reconstruction of your physical local space from your camera's perspective, then you should adjust the generator seed to take large walks across the latent space. In addition, the static latents can be disabled to continuously walk the latent space and then set to static again when a better region of the embedded space is found, but this will increase frame-to-frame non-determinism. You can also condition the generation using prompts to reinforce or change aspects of the scene. <b>If you see a black image instead of a generated output image, then you are running into the safety checker.</b>"
+description = "This generative machine learning demonstration streams stable diffusion outpainting inference live from your camera on your computer or phone to expand your local reality and create an alternate world. High-quality frame-to-frame determinism is a hard problem to solve for latent diffusion models, as the generation is inherently relative to the input noise distributions for the latents, and many factors, such as the inherent Bayer noise from the camera images as well as anything that is altered between camera images (such as focus, white balance, etc.), cause non-determinism between frames. Some methods apply spatiotemporal attention, but this demonstration focuses on control over the input latents to navigate the latent space. <b>Increase the lighting of your physical scene from your camera's perspective, and avoid self-shadows of scene content, to improve the quality and consistency of the scene generation.</b>"
+article = "This demonstration should initialize automatically from the default values and run relatively well, but if the output is not an ideal reconstruction of your physical local space from your camera's perspective, then you should adjust the generator seed to take large walks across the latent space. In addition, the static latents can be disabled to continuously walk the latent space and then set to static again when a better region of the embedded space is found, but this will increase frame-to-frame non-determinism. You can also condition the generation using prompts to reinforce or change aspects of the scene. <b>If you see a black image instead of a generated output image, then you are running into the safety checker.</b> This can trigger inconsistently even when the generated content is purely PG. If this happens, then increase the lighting of the scene and also increase the number of inference steps to improve the generated prediction and reduce the likelihood of the safety checker triggering a false positive."

 inputs=[staticLatents, generatorSeed, inputImage, mask, pauseInference, prompt, negativePrompt, guidanceScale, numInferenceSteps]
 ux = gradio.Interface(fn=diffuse, title="View Diffusion", article=article, description=description, inputs=inputs, outputs=outputImage, live=True)
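The hunk above only wires up the interface; the diffuse function it references is not shown. As a rough illustration of the static-latents and generator-seed behaviour that the description and article strings talk about, here is a minimal sketch, assuming a diffusers StableDiffusionInpaintPipeline, PIL images from the camera and mask inputs, a 512x512 output (so a 1x4x64x64 latent tensor), and a simple module-level cache. The model ID, helper names, and caching details are assumptions, not the Space's actual code.

import torch
from diffusers import StableDiffusionInpaintPipeline

# Assumed model and device handling; the Space's actual setup may differ.
pipeline = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting")
pipeline = pipeline.to("cuda" if torch.cuda.is_available() else "cpu")

cachedLatents = None  # starting noise reused across frames
cachedSeed = None     # last slider value used to build cachedLatents

def makeLatents(seed, device):
    # One starting-noise tensor per frame: 4 latent channels at 1/8 of a 512x512 output.
    generator = torch.Generator(device=device).manual_seed(int(seed))
    return torch.randn((1, 4, 64, 64), generator=generator, device=device)

def diffuse(staticLatents, generatorSeed, inputImage, mask, pauseInference,
            prompt, negativePrompt, guidanceScale, numInferenceSteps):
    global cachedLatents, cachedSeed
    if pauseInference:
        return inputImage  # skip generation while "Pause Inference" is checked

    device = pipeline.device
    if staticLatents:
        # Reuse the same starting noise every frame (regenerating it only when the
        # seed slider moves) so consecutive frames stay as deterministic as possible.
        if cachedLatents is None or cachedSeed != generatorSeed:
            cachedLatents = makeLatents(generatorSeed, device)
            cachedSeed = generatorSeed
    else:
        # Fresh noise each frame walks the latent space; keeping the latest tensor in
        # the cache means switching back to "Static Latents" holds the region just found.
        cachedLatents = makeLatents(torch.seed(), device)
    latents = cachedLatents

    result = pipeline(prompt=prompt, negative_prompt=negativePrompt,
                      image=inputImage, mask_image=mask,
                      guidance_scale=guidanceScale,
                      num_inference_steps=int(numInferenceSteps),
                      latents=latents)
    return result.images[0]

Caching the most recent random tensor, rather than only the seeded one, is what would make the "walk, then freeze" workflow from the article work: the noise that produced the frame you liked is the noise you keep.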
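For the black-image case mentioned in the article, diffusers pipelines return a per-image nsfw_content_detected flag alongside the images, so a quick check can confirm that a blank frame came from the safety checker rather than the model. This reuses the assumed pipeline object from the sketch above; inputImage and mask are placeholders for a camera frame and its outpainting mask.

result = pipeline(prompt="a sunlit living room, wide angle",
                  image=inputImage, mask_image=mask,
                  num_inference_steps=25)
if result.nsfw_content_detected and result.nsfw_content_detected[0]:
    # The returned image is blacked out; brighten the scene or raise the step count and retry.
    print("Safety checker flagged this frame.")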