You can inpaint on your own images, but you'll need to create a mask image for it. Use the Space below to easily create a mask image. Upload a base image to inpaint on and use the sketch tool to draw a mask. Once you're done, click Run to generate and download the mask image. Throughout this guide, the mask image is provided in all of the code examples for convenience.

Stable Diffusion Inpainting, Stable Diffusion XL (SDXL) Inpainting, and Kandinsky 2.2 Inpainting are among the most popular models for inpainting. SDXL typically produces higher resolution images than Stable Diffusion v1.5, and Kandinsky 2.2 is also capable of generating high-quality images.

Stable Diffusion Inpainting

Stable Diffusion Inpainting is a latent diffusion model finetuned on 512x512 images for inpainting. It is a good starting point because it is relatively fast and generates good quality images. To use this model for inpainting, you'll need to pass a prompt, base image, and mask image to the pipeline:

```python
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid

pipeline = AutoPipelineForInpainting.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16"
)
# remove the following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# load the base and mask images (replace with your own paths or URLs)
init_image = load_image("path/to/base_image.png")
mask_image = load_image("path/to/mask_image.png")

generator = torch.Generator("cuda").manual_seed(92)
prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k"
image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0]
make_image_grid([init_image, mask_image, image], rows=1, cols=3)
```

Stable Diffusion XL (SDXL) Inpainting

SDXL is a larger and more powerful version of Stable Diffusion v1.5. This model can follow a two-stage process (though each model can also be used alone): the base model generates an image, and a refiner model takes that image and further enhances its details and quality. Take a look at the SDXL guide for a more comprehensive guide on how to use SDXL and configure its parameters.

```python
pipeline = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1", torch_dtype=torch.float16, variant="fp16"
)

image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0]
make_image_grid([init_image, mask_image, image], rows=1, cols=3)
```

Kandinsky 2.2 Inpainting

The Kandinsky model family is similar to SDXL because it uses two models as well: the image prior model creates image embeddings, and the diffusion model generates images from them. You can load the image prior and diffusion model separately, but the easiest way to use Kandinsky 2.2 is to load it into the AutoPipelineForInpainting class, which uses the KandinskyV22InpaintCombinedPipeline under the hood.

```python
pipeline = AutoPipelineForInpainting.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16
)
```

You can also load a regular, non-inpaint specific checkpoint such as Stable Diffusion v1.5 into AutoPipelineForInpainting:

```python
pipeline = AutoPipelineForInpainting.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16"
)

prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
repainted_image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
```

Because the pipeline repaints the entire image, you can convert the mask to grayscale and use it to composite the unmasked areas of the original image back over the output:

```python
import numpy as np

mask_image_arr = np.array(mask_image.convert("L"))
```
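The grayscale mask can act as a per-pixel blending weight, so that only the masked (white) region is actually replaced and every unmasked pixel stays identical to the original. Here is a self-contained sketch of that compositing step; the helper name `blend_unmasked` and the tiny synthetic images are illustrative, not part of the guide:

```python
import numpy as np
from PIL import Image

def blend_unmasked(init_image: Image.Image, mask_image: Image.Image,
                   repainted_image: Image.Image) -> Image.Image:
    """Composite the repainted image over the original so that only the
    masked (white) region is replaced and the rest stays untouched."""
    # convert the mask to grayscale and scale it to [0, 1] blending weights
    mask_arr = np.array(mask_image.convert("L"))[:, :, None].astype(np.float32) / 255.0
    init_arr = np.array(init_image.convert("RGB")).astype(np.float32)
    repainted_arr = np.array(repainted_image.convert("RGB")).astype(np.float32)
    # keep original pixels where the mask is black, repainted pixels where white
    blended = (1 - mask_arr) * init_arr + mask_arr * repainted_arr
    return Image.fromarray(blended.round().astype(np.uint8))

# tiny synthetic example: the left half is masked (white), the right half preserved
init = Image.new("RGB", (4, 2), (255, 0, 0))       # original: red
repainted = Image.new("RGB", (4, 2), (0, 0, 255))  # pipeline output: blue
mask = Image.new("L", (4, 2), 0)
for y in range(2):
    for x in range(2):
        mask.putpixel((x, y), 255)
out = blend_unmasked(init, mask, repainted)
```

With real pipeline outputs you would pass `init_image`, `mask_image`, and `repainted_image` from the earlier code instead of the synthetic images.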
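The Space mentioned earlier generates a mask interactively with a sketch tool. If you would rather script one, a minimal sketch using PIL can produce the same kind of black-and-white mask image; the helper name `make_rect_mask` and the rectangle coordinates are arbitrary placeholders, not from the guide:

```python
from PIL import Image, ImageDraw

def make_rect_mask(size: tuple, box: tuple) -> Image.Image:
    """Build an inpainting mask: white (255) marks pixels to repaint,
    black (0) marks pixels to keep."""
    mask = Image.new("L", size, 0)                  # start fully preserved
    ImageDraw.Draw(mask).rectangle(box, fill=255)   # region to repaint
    return mask

# 512x512 mask with a white square in the middle
mask = make_rect_mask((512, 512), (128, 128, 384, 384))
```

The resulting `mask` can be passed directly as `mask_image` to any of the inpainting pipelines above.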