import torch
import requests
from io import BytesIO
from PIL import Image
from diffusers import AutoPipelineForImage2Image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"

# Download and resize the initial image.
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
response = requests.get(url)
original_image = Image.open(BytesIO(response.content)).convert("RGB")
original_image.thumbnail((768, 768))

image = pipe(prompt=prompt, negative_prompt=negative_prompt, image=original_image, num_inference_steps=25).images[0]

enable_model_cpu_offload(gpu_id=0)

Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet.

enable_sequential_cpu_offload(gpu_id=0)

Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the unet, text_encoder, vae and safety checker have their state dicts saved to CPU, are moved to torch.device('meta'), and are loaded to the GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower.
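A minimal sketch of enabling either offloading strategy, assuming a CUDA device and the same kandinsky-community/kandinsky-2-2-decoder checkpoint as above (use one strategy or the other, not both):

import torch
from diffusers import AutoPipelineForImage2Image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)

# Whole-model offloading: each model is moved to the GPU for its forward pass
# and stays there until the next model runs. Good speed, moderate memory savings.
pipe.enable_model_cpu_offload()

# Alternatively, submodule-level offloading: maximum memory savings, slower inference.
# pipe.enable_sequential_cpu_offload()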
KandinskyV22ControlnetImg2ImgPipeline

class diffusers.KandinskyV22ControlnetImg2ImgPipeline(unet: UNet2DConditionModel, scheduler: DDPMScheduler, movq: VQModel)

Parameters

scheduler (DDPMScheduler) - A scheduler to be used in combination with unet to generate image latents.
unet (UNet2DConditionModel) - Conditional U-Net architecture to denoise the image embedding.
movq (VQModel) - MoVQ decoder to generate the image from the latents.

Pipeline for ControlNet-guided image-to-image generation using Kandinsky 2.2. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all pipelines (such as downloading or saving, or running on a particular device).

__call__(image_embeds: Union, image: Union, negative_image_embeds: Union, hint: FloatTensor, height: int = 512, width: int = 512, num_inference_steps: int = 100, guidance_scale: float = 4.0, strength: float = 0.3, num_images_per_prompt: int = 1, generator: Union = None, output_type: Optional = 'pil', callback: Optional = None, callback_steps: int = 1, return_dict: bool = True) → ImagePipelineOutput or tuple

Parameters

image_embeds (torch.FloatTensor or List[torch.FloatTensor]) - The CLIP image embeddings for the text prompt, used to condition the image generation.
image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) - Image, or tensor representing an image batch, used as the starting point for the process. Can also accept image latents as image; if latents are passed directly, they will not be encoded again.
strength (float, optional, defaults to 0.3) - Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image is used as a starting point, and more noise is added to it the higher the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, the added noise is maximal and the denoising process runs for the full number of iterations specified in num_inference_steps. A value of 1 therefore essentially ignores image.
hint (torch.FloatTensor) - The ControlNet condition.
negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) - The CLIP image embeddings for the negative text prompt, used to condition the image generation.
height (int, optional, defaults to 512) - The height in pixels of the generated image.
width (int, optional, defaults to 512) - The width in pixels of the generated image.
num_inference_steps (int, optional, defaults to 100) - The number of denoising steps. More denoising steps usually lead to a higher-quality image at the expense of slower inference.
guidance_scale (float, optional, defaults to 4.0) - Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages generating images that are closely linked to the text prompt, usually at the expense of lower image quality.
num_images_per_prompt (int, optional, defaults to 1) - The number of images to generate per prompt.
generator (torch.Generator or List[torch.Generator], optional) - One or a list of torch generator(s) to make generation deterministic.
output_type (str, optional, defaults to "pil") - The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" (np.array) or "pt" (torch.Tensor).
callback (Callable, optional) - A function that is called every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
callback_steps (int, optional, defaults to 1) - The frequency at which the callback function is called. If not specified, the callback is called at every step.
return_dict (bool, optional, defaults to True) - Whether or not to return an ImagePipelineOutput instead of a plain tuple.

Returns

ImagePipelineOutput or tuple

Function invoked when calling the pipeline for generation.

Examples:
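A minimal usage sketch. The checkpoint names and the workflow of producing embeddings with a prior pipeline are assumptions based on the usual Kandinsky 2.2 ControlNet setup, not taken from this page; hint is assumed to be a precomputed ControlNet condition tensor (e.g. a normalized depth map) of shape (batch, 3, height, width).

import torch
from diffusers import KandinskyV22PriorPipeline, KandinskyV22ControlnetImg2ImgPipeline
from diffusers.utils import load_image

prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
pipe = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
).to("cuda")

original_image = load_image(
    "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
).resize((768, 768))

# Turn the prompt into CLIP image embeddings with the prior.
image_embeds, negative_image_embeds = prior(
    prompt="A fantasy landscape, Cinematic lighting",
    negative_prompt="low quality, bad quality",
).to_tuple()

# Placeholder hint: in practice this would be a real depth map of the input image,
# normalized to [0, 1], with shape (1, 3, 768, 768).
hint = torch.rand(1, 3, 768, 768, dtype=torch.float16, device="cuda")

image = pipe(
    image=original_image,
    image_embeds=image_embeds,
    negative_image_embeds=negative_image_embeds,
    hint=hint,
    height=768,
    width=768,
    num_inference_steps=50,
    strength=0.5,
).images[0]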
KandinskyV22InpaintPipeline

class diffusers.KandinskyV22InpaintPipeline(unet: UNet2DConditionModel, scheduler: DDPMScheduler, movq: VQModel)

Parameters

scheduler (DDPMScheduler) - A scheduler to be used in combination with unet to generate image latents.
unet (UNet2DConditionModel) - Conditional U-Net architecture to denoise the image embedding.
movq (VQModel) - MoVQ decoder to generate the image from the latents.

Pipeline for text-guided image inpainting using Kandinsky 2.2. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all pipelines (such as downloading or saving, or running on a particular device).

__call__(image_embeds: Union, image: Union, mask_image: Union, negative_image_embeds: Union, height: int = 512, width: int = 512, num_inference_steps: int = 100, guidance_scale: float = 4.0, num_images_per_prompt: int = 1, generator: Union = None, latents: Optional = None, output_type: Optional = 'pil', return_dict: bool = True, callback_on_step_end: Optional = None, callback_on_step_end_tensor_inputs: List = ['latents'], **kwargs) → ImagePipelineOutput or tuple

Parameters

image_embeds (torch.FloatTensor or List[torch.FloatTensor]) - The CLIP image embeddings for the text prompt, used to condition the image generation.
image (PIL.Image.Image) - Image, or tensor representing an image batch, to be inpainted, i.e. parts of the image will be masked out with mask_image and repainted according to the prompt.
mask_image (np.array) - Tensor representing an image batch, used to mask image. White pixels in the mask will be repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single channel (luminance) before use. If it is a tensor, it should contain one color channel (L) instead of 3, so the expected shape is (B, H, W, 1).
negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) - The CLIP image embeddings for the negative text prompt, used to condition the image generation.
height (int, optional, defaults to 512) - The height in pixels of the generated image.
width (int, optional, defaults to 512) - The width in pixels of the generated image.
num_inference_steps (int, optional, defaults to 100) - The number of denoising steps. More denoising steps usually lead to a higher-quality image at the expense of slower inference.
guidance_scale (float, optional, defaults to 4.0) - Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages generating images that are closely linked to the text prompt, usually at the expense of lower image quality.
num_images_per_prompt (int, optional, defaults to 1) - The number of images to generate per prompt.
generator (torch.Generator or List[torch.Generator], optional) - One or a list of torch generator(s) to make generation deterministic.
latents (torch.FloatTensor, optional) - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.
output_type (str, optional, defaults to "pil") - The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" (np.array) or "pt" (torch.Tensor).
return_dict (bool, optional, defaults to True) - Whether or not to return an ImagePipelineOutput instead of a plain tuple.
callback_on_step_end (Callable, optional) - A function that is called at the end of each denoising step during inference. The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs.
callback_on_step_end_tensor_inputs (List, optional) - The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as the callback_kwargs argument. You can only include variables listed in the ._callback_tensor_inputs attribute of your pipeline class.

Returns

ImagePipelineOutput or tuple

Function invoked when calling the pipeline for generation.

Examples:
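A minimal sketch of calling the inpaint pipeline. The checkpoint names and the toy mask are assumptions, not taken from this page; per the parameter description above, white mask pixels mark the region to repaint.

import numpy as np
import torch
from diffusers import KandinskyV22PriorPipeline, KandinskyV22InpaintPipeline
from diffusers.utils import load_image

prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
pipe = KandinskyV22InpaintPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16
).to("cuda")

init_image = load_image(
    "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
).resize((512, 512))

# Toy mask: repaint the top half of the image (1.0 = white = repaint, 0.0 = keep).
mask = np.zeros((512, 512), dtype=np.float32)
mask[:256, :] = 1.0

image_embeds, negative_image_embeds = prior(
    prompt="a castle on a mountain", negative_prompt="low quality, bad quality"
).to_tuple()

image = pipe(
    image=init_image,
    mask_image=mask,
    image_embeds=image_embeds,
    negative_image_embeds=negative_image_embeds,
    height=512,
    width=512,
    num_inference_steps=50,
).images[0]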
KandinskyV22InpaintCombinedPipeline

class diffusers.KandinskyV22InpaintCombinedPipeline(unet: UNet2DConditionModel, scheduler: DDPMScheduler, movq: VQModel, prior_prior: PriorTransformer, prior_image_encoder: CLIPVisionModelWithProjection, prior_text_encoder: CLIPTextModelWithProjection, prior_tokenizer: CLIPTokenizer, prior_scheduler: UnCLIPScheduler, prior_image_processor: CLIPImageProcessor)

Parameters

scheduler (Union[DDIMScheduler, DDPMScheduler]) - A scheduler to be used in combination with unet to generate image latents.
unet (UNet2DConditionModel) - Conditional U-Net architecture to denoise the image embedding.
movq (VQModel) - MoVQ decoder to generate the image from the latents.
prior_prior (PriorTransformer) - The canonical unCLIP prior to approximate the image embedding from the text embedding.
prior_image_encoder (CLIPVisionModelWithProjection) - Frozen image encoder.
prior_text_encoder (CLIPTextModelWithProjection) - Frozen text encoder.
prior_tokenizer (CLIPTokenizer) - Tokenizer of class CLIPTokenizer.
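The combined pipeline bundles the prior and the decoder, so a text prompt can be passed directly in one call. A minimal sketch (loading via AutoPipelineForInpainting and the mask construction are assumptions following the same conventions as above):

import numpy as np
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

pipe = AutoPipelineForInpainting.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

init_image = load_image(
    "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
).resize((512, 512))

mask = np.zeros((512, 512), dtype=np.float32)
mask[:256, :] = 1.0  # white = repaint

# The combined pipeline runs the prior internally, so it takes the text prompt directly.
image = pipe(
    prompt="a fantasy castle, cinematic lighting",
    negative_prompt="low quality, bad quality",
    image=init_image,
    mask_image=mask,
    num_inference_steps=50,
).images[0]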