---
base_model:
- stabilityai/stable-diffusion-2-inpainting
- stabilityai/stable-diffusion-2-1
pipeline_tag: image-to-image
---
```python
from transformers import AutoModel

# Load the gray-inpaint model (restores masked regions of a grayscale image)
gray_inpaintor = AutoModel.from_pretrained(
    'jwengr/stable-diffusion-2-gray-inpaint-to-rgb',
    subfolder='gray-inpaint',
    trust_remote_code=True,
)

# Load the gray2rgb model (colorizes a grayscale image)
gray2rgb = AutoModel.from_pretrained(
    'jwengr/stable-diffusion-2-gray-inpaint-to-rgb',
    subfolder='gray2rgb',
    trust_remote_code=True,
)

# Move both models to the GPU
gray_inpaintor.to('cuda')
gray2rgb.to('cuda')

# Enable xFormers memory-efficient attention
gray_inpaintor.unet.enable_xformers_memory_efficient_attention()
gray2rgb.unet.enable_xformers_memory_efficient_attention()

# Restore the masked grayscale images, then colorize them.
# `batch` and `inpaint_seed` must be defined beforehand; their exact format
# is determined by this model's custom pipeline code.
image_gray_restored = gray_inpaintor(batch, seed=inpaint_seed)
image_gray_restored = [img.convert('RGB') for img in image_gray_restored]
image_restored_pil = gray2rgb(image_gray_restored)                    # PIL images
image_restored_pt = gray2rgb(image_gray_restored, output_type='pt')  # PyTorch tensors
```
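The snippet above leaves `batch` and `inpaint_seed` undefined; their exact structure is set by the model's remote pipeline code. As a minimal sketch, assuming the pipeline accepts a list of grayscale PIL images, input preparation might look like this (the image path and batch shape here are illustrative assumptions, not documented API):

```python
from PIL import Image

# Assumption: the custom pipeline takes a list of grayscale ('L' mode)
# PIL images; check the model's remote code for the real batch format.
img = Image.new('L', (512, 512), color=128)  # placeholder grayscale image
# In practice: img = Image.open('your_image.png').convert('L')

batch = [img]      # hypothetical batch: a list of grayscale images
inpaint_seed = 42  # fixed seed for reproducible inpainting
```

Converting to `'L'` mode ensures the input is single-channel grayscale, which matches the gray-inpaint stage's expected input.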