ComfyUI inpainting workflows: notes collected from Reddit.

I use the "Load Image" node and "Open in MaskEditor" to draw my masks. ComfyUI - SDXL basic to advanced workflow tutorial - 4 - upgrading your workflow Heya, tutorial 4 from my series is up, it covers the creation of an input selector switch, use of some math nodes and has a few tips and tricks. it is supporting it is working well with High Res Images + SDXL + SDXL Lightening + FreeU2+ Self Attention gaudiness+ Fooocus inpainting + SAM + Manual mask Composition + Lama mate models + Upscale, IPAdaptern, and more. Inpainting with a standard Stable Diffusion model Welcome to the unofficial ComfyUI subreddit. - What is the Difference between "IMAGE" and "image"? - How can I pass the image on for painting the mask in ComfyUI? thx! I also have a lot of controls over mask, letting me switch between txt2img, img2img, inpainting, (inverted inpainting), and "enhanced inpainting" which includes the entire image with the mask to the sampler, also a "image blend" so i have my img2img, and a secondary image, and those latents both get blended together, optionally, before my first Welcome to the unofficial ComfyUI subreddit. This is the concept: Generate your usual 1024x1024 Image. In fact, there's a lot of inpainting stuff you can do with comfyui that you can't do with automatic1111. I saw that I can expose the "image" input in the "Load image" node. You do a manual mask via Mask Editor, then it will feed into a ksampler and inpaint the masked area. If you have any questions, please feel free to leave a comment here or on my civitai article. How can I inpaint with ComfyUI such that unmasked areas are not altered? Welcome to the unofficial ComfyUI subreddit. 4" - Free Workflow for ComfyUI. I want to create a workflow which takes an image of a person and generate a new person’s face and body in the exact same clothes and pose. I loaded it up and input an image (the same image fyi) into the two image loaders and pointed the batch loader at a folder of random images and it produced an interesting but not usable result. Will post workflow in the comments. I put together a workflow doing something similar, but taking a background and removing the subject, inpaint the area so i got no subject. I’m hoping to use InstantID as part of an inpainting process to change the face of an already existing image but can’t seem to figure it out. What works: It successfully identifies the hands and creates a mask for inpainting What does not work: it does not create anything close to a desired result All suggestions are welcome /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. Is there anything similar available in ComfyUI? I'm specifically looking for an outpainting workflow that can match the existing style and subject matter of the base image similar to what LaMa is capable of. Creating such workflow with default core nodes of ComfyUI is not possible at the moment. The mask can be created by:- hand with the mask editor- the SAMdetector, where we place one or m EDIT: Fix Hands - Basic Inpainting Tutorial | Civitai (Workflow Included) It's not perfect, but definitely much better than before. Release: AP Workflow 8. I just recorded this video tutorial that explains, in just ten minutes, how to do very fast inpainting only on masked areas in ComfyUI. I will record the Tutorial ASAP. This update includes new features and improvements to make your image creation process faster and more efficient. 
Is there a way I can add a node to my workflow so that I pass in the base image + mask and get nine options out to compare? JAPANESE GUARDIAN - this was the simplest possible workflow and probably shouldn't have worked (it didn't before), but the final output is 8256x8256, all within Automatic1111.

If for some reason you cannot install missing nodes with the ComfyUI Manager, here are the nodes used in this workflow: ComfyLiterals, Masquerade Nodes, Efficiency Nodes for ComfyUI, pfaeff-comfyui, MTB Nodes. Also make sure you install missing nodes with ComfyUI Manager.

Anyone have a good workflow for inpainting parts of characters for better consistency using the newer IPAdapter models? I have an idea for a comic and would like to generate a base character with a predetermined appearance, including outfit, and then use IPAdapter to inpaint and correct some of the inconsistency I get from generating the same character in different poses and contexts.

Quick tip for beginners: you can change the default settings of Stable Diffusion WebUI (AUTOMATIC1111) in ui-config.json to enhance your workflow. If you want more resolution, you can simply add another Ultimate SD Upscale node. Being the control freak that I am, I took the base refiner image into Automatic1111 and inpainted the eyes and lips. The Clipdrop "uncrop" gave really good results.

Hey, I need help with masking and inpainting in ComfyUI, I'm relatively new to it. I have some idea of how masking, segmenting, and inpainting work but cannot pinpoint how to get the desired result. Link: Tutorial: Inpainting only on masked area in ComfyUI. A somewhat decent inpainting workflow in ComfyUI can be a pain in the ass to make. It is not perfect and has some things I want to fix some day. Update 8/28/2023: thanks to u/wawawa64 I was able to get a working, functional workflow that looks like this! It's messy right now, but it does the job.

I just installed SDXL 0.9 and ran it through ComfyUI. This was really a test of ComfyUI. TLDR question: I want to take a 512x512 image that I generate in txt2img and then, in the same workflow, send it to ControlNet inpaint to make it 740x512 by extending the left and right sides of it.

Fast ~18 steps, 2-second images, with full workflow included! No ControlNet, no ADetailer, no LoRAs, no inpainting, no editing, no face restoring, not even Hires Fix (and obviously no spaghetti nightmare). Raw output, pure and simple txt2img. No refiner. 512x512. I like to create images like that one: end result. See comments for more details.

I'm working on a project to generate furnished interiors from images of empty rooms using ComfyUI and Stable Diffusion, but I want to avoid using inpainting. Goal - input: an image of an empty room; output: the same room with conventional furniture and decor. Constraints: no inpainting, maintain perspective and room size.

You want to use VAE Encode (for Inpainting) OR Set Latent Noise Mask, not both. My rule of thumb is that if I need to completely replace a feature of my image, I use VAE Encode (for Inpainting) with an inpainting model.
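To make the "one or the other, not both" advice concrete, here is a sketch of the two alternative latent paths in ComfyUI's API format. The node class names (VAEEncodeForInpaint, VAEEncode, SetLatentNoiseMask) are core nodes; the node IDs and the upstream references are placeholders carried over from the earlier example.

```python
# Two ways to prepare the latent for an inpainting KSampler -- pick ONE.
# References like ["2", 0] (image), ["2", 1] (mask), ["1", 2] (VAE) are placeholders.

# Option A: "VAE Encode (for Inpainting)" -- erases the masked region in latent space.
# Pair it with an inpainting checkpoint and denoise = 1.0.
option_a = {
    "5": {"class_type": "VAEEncodeForInpaint",
          "inputs": {"pixels": ["2", 0], "mask": ["2", 1], "vae": ["1", 2],
                     "grow_mask_by": 6}},
}

# Option B: plain "VAE Encode" + "Set Latent Noise Mask" -- keeps the original
# content under the mask, so lower denoise values (e.g. 0.5-0.8) still make sense
# and a regular (non-inpainting) checkpoint can be used.
option_b = {
    "5": {"class_type": "VAEEncode",
          "inputs": {"pixels": ["2", 0], "vae": ["1", 2]}},
    "6": {"class_type": "SetLatentNoiseMask",
          "inputs": {"samples": ["5", 0], "mask": ["2", 1]}},
}
```

Chaining both (encode-for-inpaint followed by a latent noise mask) is exactly the combination the comments here warn against.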
But the workflow is dead simple - model: dreamshaper_7; positive prompt: "sexy ginger heroine in leather armor, anime"; negative prompt: "ugly"; sampler: euler; steps: 20; cfg: 8; seed: 674367638536724. That's it.

I've noticed that the output image is altered in areas that have not been masked, and with every pass the image (outside the mask!) gets worse. Does anyone know why? I would have guessed that only the area inside of the mask would be modified.

Setting the crop_factor to 1 considers only the masked area for inpainting, while increasing the crop_factor incorporates context around the mask. Before inpainting, the workflow blows the masked region up to 1024x1024 to get a nice resolution, then resizes it before pasting it back. If instead you want no original context at all, you need to do what gxcells posted and use something like the Paste by Mask custom node to merge the two images using that mask. For "only masked" behaviour, using the Impact Pack's detailer simplifies the process. You can also do it with Masquerade nodes.

The workflow correctly uses the refiner (unlike most ComfyUI or any A1111/Vlad workflows) by using the Fooocus KSampler; it takes ~18 seconds per picture on a 3070; it saves as WebP, which takes up 1/10 the space of the default PNG save; it has inpainting, img2img, and txt2img all easily accessible; and it is actually simple to use and to modify. Release: AP Workflow 7.0 for ComfyUI - now with a next-gen upscaler (competitive against Magnific AI and Topaz Gigapixel!) and higher-quality mask inpainting with the Fooocus inpaint model.

Just getting up to speed with ComfyUI (love it so far) and I want to get inpainting dialled in. Inpainting in ComfyUI has not been as easy and intuitive as in AUTOMATIC1111 (Jan 20, 2024). Even the word "workflow" has been bastardized to mean the node graphs in ComfyUI. Node-based editors are unfamiliar to lots of people, so even with the ability to have images loaded in, people might get lost or just overwhelmed to the point where it turns them off, even though they could handle it (like how people have an "ugh" reaction to math).

So I tried to create the outpainting workflow from the ComfyUI example site. However, I cannot connect the VAE Decode here with the Image input. The center image flashes through the 64 random images it pulled from the batch loader, and the outpainted portion seems to correlate to them. It takes less than 5 minutes with my 8GB VRAM graphics card: generate with txt2img, for example.

Seen a lot of people asking for something similar; it can be refined, but it works great for quickly changing the image to run back through an IPAdapter or something similar. I always thought you had to use "VAE Encode (for Inpainting)"; turns out you just VAE-encode and set a latent noise mask. I usually just leave the inpaint ControlNet between 0.5 and 1.

You can try to use CLIPSeg with a query like "man" to automatically create an inpainting mask, and pass it into an inpainting workflow with your new prompt or a LoRA/IPAdapter setup.
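For the CLIPSeg suggestion above, the same idea can be prototyped outside ComfyUI with the Hugging Face transformers port of CLIPSeg (the CLIPSeg custom nodes for ComfyUI wrap the same model). This is a rough sketch, assuming the CIDAS/clipseg-rd64-refined checkpoint and a local photo.png; the 0.4 threshold is an arbitrary starting point.

```python
# Sketch: build a text-driven inpainting mask with CLIPSeg ("man" -> binary mask).
import numpy as np
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

image = Image.open("photo.png").convert("RGB")
inputs = processor(text=["man"], images=[image], return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # low-resolution heatmap (~352x352)

heat = torch.sigmoid(logits).squeeze().numpy()
heat = np.array(Image.fromarray((heat * 255).astype(np.uint8))
                .resize(image.size, Image.BILINEAR)) / 255.0

mask = (heat > 0.4).astype(np.uint8) * 255   # threshold is a judgment call
Image.fromarray(mask, mode="L").save("mask_man.png")  # load this as the inpaint mask
```

Inside ComfyUI itself, a CLIPSeg node produces a MASK output that can be wired straight into VAEEncodeForInpaint or SetLatentNoiseMask instead of a hand-drawn mask.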
"Truly Reborn" | Version 3 of Searge SDXL for ComfyUI | overhauled user interface | all features integrated in one single workflow | multiple prompting styles, from "simple" for a quick start to the unpredictable and surprising "overlay" mode | text-to-image, image-to-image, and inpainting supported. A follow-up to my last vid, showing how you can use zoned noise to better control inpainting.

But with ComfyUI, I spend all my time setting up graphs and almost zero time doing actual work. I don't think "if you're too newb to figure it out, try again later" is a productive way to introduce a technique. Yes, I can use ComfyUI just fine.

Dec 23, 2023 - this is an inpaint workflow for Comfy I did as an experiment. We take an existing image (image-to-image) and modify just a portion of it (the mask) within the latent space. Then I ported it into Photoshop for further finishing: a slight gradient layer to enhance the warm-to-cool lighting, and the Camera Raw Filter to add just a little sharpening.

Hello, I'm wondering if anyone can help! I am currently trying to figure out how to build a crude video-inpainting workflow that will allow me to create rips or tears in the surface of a video, so that I can create a video that looks similar to a paper collage - meaning that in the hole of the "torn" video you can see an animation peeking through. I have included an example of the type of masking I am imagining.

In my workflow I want to generate some images and then pass them on for mask painting. Image-to-image sender, latent out to Set Latent Noise Mask; mask painted with the image receiver, mask out from there to Set Latent Noise Mask. Senders save their input in a temporary location, so you do not need to feed them new data every gen.

Suuuuup :D So, with Set Latent Noise Mask it is trying to turn that blue/white sky into a spaceship, and this may not be enough for it - a higher denoise value is more likely to work in this instance. Also, if you want to creatively inpaint, inpainting models are not as good, because they want to use what already exists in the image more than a normal model does. VAE Encode (for Inpainting) requires 1.0 denoise to work correctly, and as you are running it at 0.3 it's still wrecking the result even though you have set a latent noise mask. I'm not 100% sure because I haven't tested it myself, but I do believe you can use a higher noise ratio with ControlNet inpainting than with normal inpainting. You might be able to automate the process if the profiles of the characters are similar, but otherwise you might need manual masking for inpainting.

Inpainting is inherently context-aware (at least that's how I see it). Note that when inpainting it is better to use checkpoints trained for the purpose. But one thing I've noticed is that the image outside of the mask isn't identical to the input; I tested and found that VAE encoding is adding artifacts. The blurred latent mask does its best to prevent ugly seams.
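One common fix for the "pixels outside the mask drift" problem is to composite the original image back over the inpainted result, using a slightly blurred mask so the seam is feathered. This is the same idea that compositing nodes such as ImageCompositeMasked implement inside ComfyUI; below is a stand-alone sketch with Pillow/NumPy, assuming original.png, inpainted.png, and mask.png (white = inpainted area) already exist, and that an 8 px feather is acceptable.

```python
# Sketch: keep unmasked pixels bit-exact by pasting the original back,
# feathering only a narrow band around the mask edge to hide the seam.
import numpy as np
from PIL import Image, ImageFilter

original = np.asarray(Image.open("original.png").convert("RGB"), dtype=np.float32)
inpainted = np.asarray(Image.open("inpainted.png").convert("RGB"), dtype=np.float32)

mask = Image.open("mask.png").convert("L")
feathered = mask.filter(ImageFilter.GaussianBlur(radius=8))  # adjust feather to taste
alpha = np.asarray(feathered, dtype=np.float32)[..., None] / 255.0

# alpha = 1 inside the mask (use inpainted pixels), 0 outside (keep original pixels)
out = inpainted * alpha + original * (1.0 - alpha)
Image.fromarray(out.clip(0, 255).astype(np.uint8)).save("composited.png")
```

Doing this after every inpainting pass means the repeated VAE round trips only ever touch the masked region.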
I'm looking for a workflow for ComfyUI that can take an uploaded image and generate an identical one, but upscaled, using Ultimate SD Upscale. I don't want any changes or additions to the image, just a straightforward upscale and quality enhancement. You need to use the various ControlNet methods/conditions in conjunction with inpainting to get the best results (which the OP semi-shot-down in another post).

Inpainting is very effective in Stable Diffusion, and the workflow in ComfyUI is really simple. This post hopes to bridge the gap by providing the following bare-bones inpainting examples with detailed instructions for ComfyUI. To download a workflow, go to the website linked at the top, save the image of the workflow, and drag it into ComfyUI. I'm following the inpainting example from the ComfyUI Examples repo, masking with the mask editor. Overall, I've had great success using this node to do a simple inpainting workflow. Put your folder in the top-left text input.

This workflow allows you to change clothes or objects in an existing image. If you know the required style, you can work with the IP-Adapter and upload a reference image; and if you want new ideas or directions for the design, you can create a large number of variations in a process that is mostly automatic. You can try using my inpainting workflow if interested - after spending 10 days, my new inpainting workflow is finally ready to run in ComfyUI.

Under 4K: generate at base SDXL size with extras like character models or ControlNets -> face / hand / manual-area inpainting with Differential Diffusion -> UltraSharp 4x -> unsampler -> second KSampler with a mixture of inpaint and tile ControlNet (I found that using only the tile ControlNet blurs the image). I got a makeshift ControlNet/inpainting workflow started with SDXL for ComfyUI (WIP), at 784x512.

The first is the original background, from which the background remover crappily removed the background, right? The others look way worse because inpainting is not really capable of repainting an entire background without it looking like a cheap background replacement, plus unwanted artifacts appear. For some reason, it struggles to create decent results. The idea is that sometimes the area to be masked may be different from the semantic segment found by CLIPSeg, and the area may also not be properly fixed by automatic segmentation.

With inpainting we can change parts of an image via masking. I am creating a workflow that allows me to fix hands easily using ComfyUI; the Inpaint Crop and Stitch nodes can be downloaded using ComfyUI-Manager - just look for "Inpaint-CropAndStitch". In my inpaint workflow I do some manipulation of the initial image (add noise, then use a blurred mask to re-paste the original over the area I do not intend to change), and it generally yields better inpainting around the seams; I also noted some of the other nodes I use. It's a 2x upscale workflow. I'll copy and paste my description from a prior post: I built this inpainting workflow as an effort to imitate the A1111 masked-area-only inpainting experience, and it took me hours to get one I'm more or less happy with - feathering the mask (feather nodes usually don't work how I want, so I use mask-to-image, blur the image, then image-to-mask) and getting "only masked area" behaviour where it also applies to the ControlNet (applying it to the ControlNet was probably the worst part).
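The crop-and-stitch idea above (and the earlier note about blowing the masked region up to 1024x1024 before pasting it back) boils down to a handful of image operations. Here is a stand-alone Pillow sketch of that pattern; `run_inpaint` is a hypothetical stand-in for whatever actually performs the inpainting (a ComfyUI graph, a diffusers pipeline, etc.), and the 1024 working size and 32 px context padding are arbitrary choices.

```python
# Sketch: inpaint only a masked crop at high resolution, then stitch it back.
from PIL import Image

def crop_upscale_inpaint_stitch(image: Image.Image, mask: Image.Image,
                                run_inpaint, work_size=1024, pad=32) -> Image.Image:
    # 1. Bounding box of the mask, padded so the model sees some context.
    left, top, right, bottom = mask.getbbox()
    left, top = max(0, left - pad), max(0, top - pad)
    right, bottom = min(image.width, right + pad), min(image.height, bottom + pad)
    box = (left, top, right, bottom)

    # 2. Crop and blow the region up to the working resolution.
    crop_img = image.crop(box).resize((work_size, work_size), Image.LANCZOS)
    crop_mask = mask.crop(box).resize((work_size, work_size), Image.NEAREST)

    # 3. Inpaint the enlarged crop (placeholder callback).
    result = run_inpaint(crop_img, crop_mask)

    # 4. Shrink back to the original crop size and paste it over the source,
    #    using the mask so untouched pixels stay untouched.
    result = result.resize((right - left, bottom - top), Image.LANCZOS)
    small_mask = mask.crop(box)
    stitched = image.copy()
    stitched.paste(result, box, small_mask)
    return stitched
```

This is essentially what the Inpaint-CropAndStitch and Impact Pack detailer nodes automate: the model only ever sees the masked region at a comfortable resolution, and everything outside the mask is carried over unchanged.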
If you see a few red boxes, be sure to read the Questions section on the page. I have a basic workflow that I would like to modify to get a grid of nine outputs. Note that if force_inpaint is turned off, inpainting might not occur because of the guide_size. Inpainting checkpoints are generally named with the base model name plus "inpainting". My current workflow generates decent pictures at 4x upscale, with minor glitches. Unfortunately, Reddit strips the workflow info from uploaded PNG files. Thanks! Updated: inpainting only on the masked area, outpainting, and seamless blending (includes custom nodes, workflow, and video tutorial).

I made one (FaceDetailer > Ultimate SD Upscale > EyeDetailer > EyeDetailer). I gave it a try on a 2304x2304 image and the result was perfect. Does this same workflow work for basically any size of image? I got a bit confused about what you mean by DetailerForEach - do you mean Detailer (SEGS)?

It comes a time when you need to change a detail on an image, or maybe you want to expand it on one side, and the resources for inpainting workflows are scarce and riddled with errors. I've done it in Automatic1111, but it's not been the best result - I could spend more time and get better at it, but I've been trying to switch to ComfyUI.

In this workflow we try to merge two masks, one from CLIPSeg and another from mask inpainting, so that the combined mask acts as a placeholder for image generation. Usually, or almost always, I like to inpaint the face - or, depending on the image I am making, I know what I want to inpaint; there is always something that has a high probability of needing to be inpainted, so I create the mask automatically using GroundingDINO + Segment Anything, have it ready in the workflow (which is tailored to the picture I am making), and feed it into the Impact Pack. In A1111 I tried the Batch Face Swap extension for creating a mask of the face only, but then I have to run the batch three times (first for the mask, second for inpainting with the masked face, and third for the face only with ADetailer). In addition to whole-image inpainting and mask-only inpainting, I also have other workflows. Release: AP Workflow 7.0 for ComfyUI - now with support for Stable Diffusion Video, a better upscaler, a new caption generator, a new inpainter (with inpainting/outpainting masks), a new watermarker, and support for Kohya Deep Shrink, Self-Attention, StyleAligned, Perp-Neg, and IPAdapter attention masks.

I think DALL-E 3 does a good job of following prompts to create images, but Microsoft Image Creator only supports 1024x1024 sizes, so I thought it would be nice to outpaint with ComfyUI. I am not very familiar with ComfyUI, but maybe it allows making a workflow like that?

I'm learning how to do inpainting in ComfyUI and I'm doing multiple passes. Below is a source image; I've run it through VAE encode / decode five times in a row to exaggerate the issue and produce the second image.
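That encode/decode round trip is easy to reproduce outside ComfyUI. Here is a minimal sketch with diffusers, assuming the stand-alone stabilityai/sd-vae-ft-mse VAE (any SD 1.5-class VAE shows the same drift) and a local source.png:

```python
# Sketch: push an image through the SD VAE five times to exaggerate
# the degradation that otherwise creeps in pass after pass.
import numpy as np
import torch
from PIL import Image
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()

img = Image.open("source.png").convert("RGB").resize((512, 512))
x = torch.from_numpy(np.asarray(img)).float().permute(2, 0, 1)[None] / 127.5 - 1.0

with torch.no_grad():
    for _ in range(5):
        latents = vae.encode(x).latent_dist.sample()   # lossy step 1
        x = vae.decode(latents).sample.clamp(-1, 1)    # lossy step 2

out = ((x[0].permute(1, 2, 0) + 1.0) * 127.5).round().clamp(0, 255)
Image.fromarray(out.byte().numpy()).save("roundtrip_x5.png")
```

Comparing source.png and roundtrip_x5.png makes it obvious why people composite the untouched pixels back after each inpainting pass instead of re-encoding the whole image every time.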
Introducing "Fast Creator v1.4" - free workflow for ComfyUI. Hey everyone! I'm excited to share the latest update to my free workflow for ComfyUI, "Fast Creator v1.4". This update includes new features and improvements to make your image creation process faster and more efficient. Workflow is in the description of the vid.

Apr 21, 2024 - inpainting is a blend of the image-to-image and text-to-image processes.

Thank you for this interesting workflow - this was just great! I was suffering from inpainting also lowering the quality of the areas surrounding the mask, when they should remain intact. With everyone focusing almost all of their attention on ComfyUI, ideas for incorporating SD into professional workflows have fallen by the wayside.

Hello! I am starting to work with ComfyUI, transitioning from A1111. I know there are so many workflows published to Civitai and other sites; I am hoping to dive in and start working with ComfyUI without wasting much time on mediocre or redundant workflows, and I'd appreciate someone pointing me toward a resource for finding some of the better-developed Comfy workflows.

This is my inpainting workflow: I gave the SDXL refiner's latent output to the DreamShaper XL model as latent input (as inpainting) with a slightly changed prompt - I added hand-focused terms such as "highly detailed hand" and increased their weight.