ComfyUI Deforum workflow (Reddit)


The Real Housewives of Atlanta; The Bachelor; Sister Wives; 90 Day Fiance; Wife Swap; The Amazing Race Australia; Married at First Sight; The Real Housewives of Dallas Welcome to the unofficial ComfyUI subreddit. In Deforum I was able to import a color scheme from a PNG and apply it to all rendered video frames. EDIT: For example this workflow shows the use of the other prompt windows. Continuing with the car analogy, Learning ComfyUI is a bit like learning to driving with manual shift. Makeing a bit of progress this week in ComfyUI. This is a simple guide through deforum I explain basically how it works and some tips for trouble shooting if you have any issues. Upcoming tutorial - SDXL Lora + using 1. Share and Run ComfyUI workflows in the cloud. 0 is the first step in that direction. ComfyUI provides a constant source of experimentation, even for me who has but a minimal understanding of how the whole process works. its the kind of thing thats a bit fiddly to use so using someone elses workflow might be of limited use to you. Stage A >> \models\vae\SD Cascade stage_a. Welcome to the unofficial ComfyUI subreddit. However, this can be clarified by reloading the workflow or by asking questions. 0 for ComfyUI (SDXL Base+Refiner, XY Plot, ControlNet XL w/ OpenPose, Control-LoRAs, Detailer, Upscaler, Prompt Builder) I published a new version of my workflow, which should fix the issues that arose this week after some major changes in some of the custom nodes I use. For now I got this: A gorgeous woman with long light-blonde hair wearing a low cut tanktop, standing in the rain on top of a mountain, highly detailed, artstation, concept art, sharp focus, illustration, art by artgerm and alphonse mucha, trending on Behance, very detailed, by the best painters Hello Fellow ComfyUI users, this is my workflow for testing different methods to improve image resolution. Learned from the following video: Stable Cascade in ComfyUI Made Simple 6m 56s posted Feb 19, 2024 by How Do? channel on YouTube . The preview is broken but if download the workflow and drop it in Comfy, it is correct. Sep 5, 2023 · Dark Horror Animation - SD 1. I set up a workflow for first pass and highres pass. I have tried to build this workflow but comfy ui , man its just like spaghetti I want it to have 2 lora + controlnet + Hires. ai Hello to everyone because people ask here my full workflow, and my node system for ComfyUI but here what I am using : - First I used Cinema 4D with the sound effector mograph to create the animation, there is many tutorial online how to set it up. It is a powerful workflow that let's your imagination run wild. 5 and HiRes Fix, IPAdapter, Prompt Enricher via local LLMs (and OpenAI), and a new Object Swapper + Face Swapper, FreeU v2, XY Plot, ControlNet and ControlLoRAs, SDXL Base + Refiner, Hand Detailer, Face Detailer, Upscalers, ReVision [x-post] Can we make deforum work with comfyUI Discussion I use mainly automatic1111 and comfyUI only for SDXL (because it hang my PC if i use on automatic1111) comfy UI handles it with no problem on OMEN 16. K12sysadmin is for K12 techs. I am so sorry but my video is outdated now because ComfyUI has officially implemented the a SVD natively, update ComfyUI and copy the previously downloaded models from the ComfyUI-SVD checkpoints to your comfy models SVD folder and just delete the custom nodes ComfyUI-SVD. A short animation made it with: Stable Diffusion v2. 
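One of the posts above mentions importing a color scheme from a PNG in Deforum and applying it to every rendered frame. The idea behind that color coherence is plain histogram matching, so it can be reproduced on ComfyUI output frames with a short script. This is only a rough sketch (hypothetical helper names, per-channel RGB matching, no LAB conversion or frame-to-frame blending), not Deforum's actual implementation:

```python
import numpy as np
from PIL import Image

def match_channel(src, ref):
    # Classic histogram matching: remap src values so its CDF follows ref's CDF.
    s_vals, s_counts = np.unique(src.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts).astype(np.float64) / src.size
    r_cdf = np.cumsum(r_counts).astype(np.float64) / ref.size
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return mapped[np.searchsorted(s_vals, src.ravel())].reshape(src.shape)

def apply_palette(frame_path, palette_path, out_path):
    # Force a rendered frame to adopt the color distribution of a reference PNG.
    frame = np.asarray(Image.open(frame_path).convert("RGB")).astype(np.float64)
    ref = np.asarray(Image.open(palette_path).convert("RGB")).astype(np.float64)
    out = np.stack([match_channel(frame[..., c], ref[..., c]) for c in range(3)], axis=-1)
    Image.fromarray(np.clip(out, 0, 255).astype(np.uint8)).save(out_path)
```

Running every output frame through apply_palette against the same reference PNG keeps the palette from drifting over a long animation, which is the effect the Deforum setting is after.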
I call it 'The Ultimate ComfyUI Workflow', easily switch from Txt2Img to Img2Img, built-in Refiner, LoRA selector, Upscaler & Sharpener. fix + upscale after the… I just installed ComfyUI with Sytan's SDXL/Superscale workflow and it's not lightning quick but I'm getting stellar 2048x2048 images out the other end without spontaneously combusting my PC. I found a tile model but could not figure it out as lllite seems to require input image match output so unsure how it works for scaling with tile. 5 / Deforum - (Workflow and settings in comments) comment sorted by Best Top New Controversial Q&A Add a Comment TheFunkSludge • We would like to show you a description here but the site won’t allow us. This is what it's supposed to look like in the preview. Hey everyone! I'm excited to share the latest update to my free workflow for ComfyUI, "Fast Creator v1. Is there a workflow for inpainting at full resolution? /r/StableDiffusion is back open after the protest of Welcome to the unofficial ComfyUI subreddit. Feb 2, 2023 · Guide: Workflow for Creating Animations using animatediff-cli-prompt-travel Step-by-Step r/StableDiffusion • "When I first tried Time Jumping, I was discombobulated as hell. DEFORUM in ComfyUI /r/StableDiffusion is back open after the protest of Reddit killing open API access Jul 29, 2023 · Users of ComfyUI which premade workflows do you use? I read through the repo but it has individual examples for each process we use - img2img, controlnet, upscale and all. Screenshots have all json data embedded into image metadata. The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface. 5 Inpainting tutorial. It makes things a lot easier in terms of locking seeds etc. JAPANESE GUARDIAN - This was the simplest possible workflow and probably shouldn't have worked (it didn't before) but the final output is 8256x8256 all within Automatic1111. Is there a workflow with all features and options combined together that I can simply load and use ? Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this… Please note the workflow is using the clip text encode ++ that is converting the prompt weights to a1111 behavior. Following this, I utilize… sorry to hijack your post, but just curious if there is some kind of way to sort of export your comfyui workflow (as its a json file) and run prompts through the workflow from cli? im just looking to build a workflow using comfyui and then pushing using cli to utilize the workflow, kind of? Tell me about it. 5 manage workflows, generated images gallery, saving versions history, tags, insert subwokflow Additionally, I also used Deforum, where part of the knowledge is incorporated into the extension. Thanks for posting! I've been looking for something like this. I was doing: Hi! I just made the move from A1111 to ComfyUI a few days ago. jpg -r 60 -vframes 120 OUTPUT_A. So download the workflow picture and dragged it to comfyui but it doesn't load anything, looks like the metadata is not complete. This workflow looks complicated because the same variables (image width & height) and the prompts (pos+neg) have to be carried around the workflow a dozen times by pipes. THANK YOU. But for a base to start at it'll work. Ending Workflow. Oct 17, 2023 · /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. 
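A recurring question on this page is whether a workflow exported as a JSON file can be run from the command line instead of the browser. The ComfyUI backend queues prompts over HTTP, so a small script can do it. A minimal sketch, assuming a default local install listening on 127.0.0.1:8188 and a workflow saved in API format; the node id "6" for the positive prompt is just an example, check your own export for the right id:

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"   # default local ComfyUI address

def queue_workflow(api_json_path, positive_text=None, node_id=None):
    with open(api_json_path) as f:
        workflow = json.load(f)
    # Optionally overwrite the text of one CLIPTextEncode node before queueing.
    if positive_text is not None and node_id is not None:
        workflow[node_id]["inputs"]["text"] = positive_text
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(f"{COMFY_URL}/prompt", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())   # response includes a prompt_id you can poll later

# e.g. queue_workflow("my_workflow_api.json", "a foggy mountain at dawn", node_id="6")
```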
This RAVE workflow in combination with AnimateDiff allows you to change a main subject character into something completely different. Hello beautiful people,this is my first tutorial on Youtube so bare with me. When you say "workflow" do you mean a single file that somehow configures ComfyUI for some sort of task? This is confusing for me as an artist because when I speak of "workflow" I'm talking about various techniques, medium organization, idea development, and so on. SECOND UPDATE - HOLY COW I LOVE COMFYUI EDITION: Look at that beauty! Spaghetti no more. Hello, I'm a beginner looking for a somewhat simple all in one workflow that would work on my 4070 Ti super with 16gb vram. json Quirky thing at first, but after few days using it I find it way better than pnginfo in A1111. Follow the steps below depending on your method of preference. Every time I want to learn something new from a workflow I see someone post--especially if it is an older workflow--I inevitably have to deconstruct, reverse engineer, and carefully examine exactly how it works. io, the premier marketplace for AI-generated artwork. Nov 1, 2023 · Welcome to the unofficial ComfyUI subreddit. AP Workflow 6. Some of them were pretty nice and I liked them. be/pZ6Li3qF-Kk?si=cP4awsdbc8niz8sB. And above all, BE NICE. hopefully this will be useful to you. I have not seen that much in the meta data stored in the image, is there a central repository that the code saves the details when you save the image ? ComfyUI Workspace manager v1. (Zoom functions, etc. Jan 26, 2024 · Image interpolation is a powerful technique based on creating new pixels surrounding an image: this opens up the door to many possibilities, such as image resizing and upscaling, as well as merging… This subreddit is an unofficial community about the video game "Space Engineers", a sandbox game on PC, Xbox and PlayStation, about engineering, construction, exploration and survival in space and on planets. 1 / fking_scifi v2 / Deforum v0. A lot of people are just discovering this technology, and want to show off what they created. Help me make it better! Welcome to the unofficial ComfyUI subreddit. This probably isn't the completely recommended workflow though as it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L, and NEG_R which is part of SDXL's trained prompting format. I understand how outpainting is supposed to work in comfyui (workflow here /r/StableDiffusion is back open after the protest of Reddit killing open API access Starting workflow. But is there a way to then to create animations like I can in Deforum, where ai manages the transition from one keyframe prompt to the next? It's ComfyUI, with the latest version you just need to drop the picture of the linked website into ComfyUI and you'll get the setup. Dive into a world where technology meets artistry, and discover the limitless boundaries of creativity powered by artificial intelligence. 22K subscribers in the comfyui community. I put together a workflow doing something similar, but taking a background and removing the subject, inpaint the area so i got no subject. I have also experienced that ComfyUI has lost individual cable connections for no comprehensible reason or nodes have not worked until they have been replaced by the same node with the same wiring. ner model. 
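Several posts here touch on the JSON that ComfyUI embeds in saved images, and on pictures that refuse to load a workflow when dragged into the UI. The data lives in PNG text chunks, which is why screenshots and re-compressed copies lose it. A small sketch for checking an image before dragging it in (the "prompt" and "workflow" key names are what ComfyUI's default save node is generally known to write; treat them as an assumption if your save node differs):

```python
import json
from PIL import Image

def extract_workflow(png_path):
    """Pull the workflow JSON that ComfyUI embeds in the PNGs it saves."""
    info = Image.open(png_path).info          # PNG text chunks end up in this dict
    found = {}
    for key in ("workflow", "prompt"):        # graph layout vs. API-format prompt
        if key in info:
            found[key] = json.loads(info[key])
    return found

meta = extract_workflow("ComfyUI_00001_.png")
if not meta:
    print("No embedded workflow: the image was probably screenshotted, re-saved, or stripped.")
else:
    with open("recovered_workflow.json", "w") as f:
        json.dump(meta.get("workflow", meta.get("prompt")), f, indent=2)
```

If the function returns nothing, that matches the "metadata is not complete" symptom described on this page: image hosts and screenshots strip the chunks, so the drag-and-drop load has nothing to read.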
Merge 2 images together with this ComfyUI workflow: View Now: ControlNet Depth Comfyui workflow: Use ControlNet Depth to enhance your SDXL images: View Now: Animation workflow: A great starting point for using AnimateDiff: View Now: ControlNet workflow: A great starting point for using ControlNet: View Now: Inpainting workflow: A great starting 17 votes, 14 comments. Now you can condition your prompts as easily as applying a CNet! Welcome to the unofficial ComfyUI subreddit. made a simple workflow to help folks get a better understanding of the Advanced Ksampler. I've seen a lot of comments about people having trouble with inpainting and some saying that inpainting is useless. 3) with the SDXL or the SD 1. Here are the models that you will need to run this workflow:- Loosecontrol Model ControlNet_Checkpoint Welcome to the unofficial ComfyUI subreddit. Nov 23, 2023 · Trying to use Comfyui Workflow Father & Mother = CHILD Question - Help I downloaded the workflow for taking 2 images you have, of someone you call father and the other you call mother and you run it and it combines them both to make the child. 5 + SDXL Refiner Workflow : StableDiffusion. I'm still learning so any input on how I could improve these workflows is appreciated, though keep in mind my goal is to balance the complexity with the ease This site is more like sharing workflow but people can’t run it to generate images. Dont forget you can still make dozens of variations of each sketch (even in a simple ComfyUI workflow) and than cherry pick the one that stands out. We recommend using a virtual environment. Introducing "Fast Creator v1. They depend on complex pipelines and/or Mixture of Experts (MoE) that enrich the prompt in many different ways. More of this, please. Sorry for the breathing sounds and my horrible accent. I have a ComfyUI workflow that produces great results. Please share your tips, tricks, and workflows for using this… This workflow isn’t img2vid as there isn’t a controlnet involved but an ipadapter which works differently. Please keep posted images SFW. I come from a distant gal From chatgpt: Guide to Enhancing Illustration Details with Noise and Texture in StableDiffusion (Based on 御月望未's Tutorial) Overview. fix. Is something like this possible in ComfyUI? My video-to-video workflow is based on the one from Civitai video guide, nothing fancy. 4". AP Workflow v3. 600 frames) Welcome to the unofficial ComfyUI subreddit. There is no tiling in the default A1111 hires. For example, see this: SDXL Base + SD 1. Hello! Looking to dive into animatediff and am looking to learn from the mistakes of those that walked the path before me🫡🙌🫡🙌🫡🙌🫡🙌 Are people using… Is there a way to make comfyUI loop back on itself so that it repeats/can be automated? Essentially I want to make a workflow that takes the output and feeds it back in on itself similar to what deforum does for x amount of images. B: More flexible and powerful for the deep-diving workflow crafters, code nerds who make their own nodes, and wonks who build UIs to put in front of it. the example pictures do load a workflow, but they don't have a label or text that indicates if its version 3. 0. Tried the llite custom nodes with lllite models and impressed. While I was kicking around in LtDrData's documentation today, I noticed the ComfyUI Workflow Component, which allowed me to move all the mask logic nodes behind the scenes. Went looking everywhere with no luck, lol. 👏 欢迎来到我的 ComfyUI 工作流集合地! 
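One post above shares a simple workflow for getting to grips with the Advanced KSampler, and elsewhere on the page someone describes calculating the steps needed to reach the desired denoise and applying only those. The arithmetic behind that trick, as a hedged sketch (this mirrors the common A1111-style convention, not any particular node's source code):

```python
def denoise_to_steps(total_steps: int, denoise: float) -> tuple[int, int]:
    """Translate a 0..1 denoise fraction into start/end steps for an advanced sampler.

    denoise=1.0 runs the whole schedule (full txt2img); denoise=0.35 runs roughly
    the last third of it, which is typical img2img / hires-fix behaviour.
    """
    start_at = round(total_steps * (1.0 - denoise))
    return start_at, total_steps

# e.g. 30 steps at denoise 0.35 -> start at step 20, end at step 30
print(denoise_to_steps(30, 0.35))
```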
为了给大家提供福利,粗糙地搭建了一个平台,有什么反馈优化的地方,或者你想让我帮忙实现一些功能,可以提交 issue 或者邮件联系我 theboylzh@163. Dec 23, 2023 · Welcome to the unofficial ComfyUI subreddit. You may want to note the seed used. Adding LORAs in my next iteration. This guide, inspired by 御月望未's tutorial, explores a technique for significantly enhancing the detail and color in illustrations using noise and texture in StableDiffusion. It provides workflow for SDXL (base + refiner). 5. I'm trying to get dynamic prompts to work with comfyui but the random prompt string won't link with clip text encoder as is indicated on the diagram I have here from the Github page. What is the best workflow that people have used with the most capability without using custom nodes? Welcome to the unofficial ComfyUI subreddit. AP Workflow 5. I want a ComfyUI workflow that's compatible with SDXL with base model, refiner model, hi-res fix, and one LORA all in one go. Mar 30, 2024 · The workflow I'm using is screenshot below, very basic. K12sysadmin is open to view and closed to post. Have a look at this video where I show how it is done: https://youtu. 7 colab notebook, and upscaled x4 with RealESRGAN model on Cupscale (12. 0 for ComfyUI - Now featuring SUPIR next-gen upscaler, IPAdapter Plus v2 nodes, a brand new Prompt Enricher, Dall-E 3 image generation, an advanced XYZ Plot, 2 types of automatic image selectors, and the capability to automatically generate captions for an image directory We would like to show you a description here but the site won’t allow us. But it separates LORA to another workflow (and it's not based on SDXL either). You can also animate the subject while the composite node is being schedules as well! Drag and drop the image in this link into ComfyUI to load the workflow or save the image and load it using the load button. Personally I prefer Auto1111 because I love Parseq with Deforum but Comfy is just so much better with Animatediff specifically. link to deforum discord / discord link to the deforum Aug 4, 2023 · View community ranking In the Top 1% of largest communities on Reddit. When I switch to 3D, the Zoom option goes away. I only recently got the "HiRes-Fix" code set sort-of working - really quite chuffed with how it's all going. I'll also share the inpainting methods I use to correct any issues that might pop up. csv file to remove some incompatible characters (mostly accents). Here's the link to the workflow ComfyUI. Belittling their efforts will get you banned. 5 (+ Controlnet,PatchModel. I uploaded an image to civitai but not any workflow details . I had good results with zavychromaxl, but what I do is a workflow with no SDXL refiner but SD 1. Posted by u/qstone75 - 288 votes and 17 comments The noise strength formula works great and will probably be forever a part of my deforum workflow. To add content, your account must be vetted/verified. a search of the subreddit Didn't turn up any answers to my question. I or Magnific AI in comfyui? I've seen the websource code for Krea AI and I've seen that they use SD 1. 5 , denoise 0. json file Made up movie trailer ComfyUI is lighter on the system than Auto1111, so if anything you’ll see a performance increase when doing the same things. mp4 (The -start_number value defines a custom file name integer start frame, Hello! 
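The FFmpeg command quoted on this page for stitching Deforum's frame sequence back into a clip was split apart by the formatting. Reassembled, and wrapped in Python so it can sit inside a batch script, it looks roughly like this (frame pattern, frame rate, and output name are the ones from the original command):

```python
import subprocess

# Re-encode Deforum's frame sequence into a 60 fps clip, starting at frame 0031
# and keeping 120 frames, as in the command quoted above.
subprocess.run([
    "ffmpeg",
    "-f", "image2",
    "-framerate", "60",
    "-start_number", "0031",   # first file-name index to read (frame0031.jpg)
    "-i", "frame%04d.jpg",     # zero-padded frame pattern written by Deforum
    "-r", "60",                # output frame rate
    "-vframes", "120",         # stop after 120 frames
    "OUTPUT_A.mp4",
], check=True)
```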
I am currently trying to figure out how to build a crude video inpainting workflow that will allow me to create rips or tears in the surface of a video so that I can create a video that looks similar to a paper collage- meaning that in the hole of the ‘torn’ video, you can see an animation peaking through the hole- I have included an example of the type of masking I am imagining This is something I have been chasing for a while. As far as I can tell (please correct me if I'm wrong), ComfyUI only runs only whatever has changed in the workflow. The graphic style Hey all- I'm attempting to replicate my workflow from 1111 and SD1. 760 frames) Release: AP Workflow 9. This update includes new features and improvements to make your image creation process faster and more efficient. This time around I used Deforum's guided image function to transition between Deforum and AnimateDiff (with varying success). 8K subscribers in the comfyui community. We would like to show you a description here but the site won’t allow us. You will likely never use "Deforum" in ComfyUI, as it is unlikely to fit well into ComfyUI. Is there some way to have a look into your comfyUI workflow to see how the warp style video2video was completed and what is the style of inputs and models it can work with. I tried to keep the noodles under control and organized so that extending the workflow isn't a pain. Please share your tips, tricks, and workflows for using this… Aug 20, 2023 · I would like to experiment more with warpfusion and deforum, but - even though people consider it dead tech - I still favor the results I'm getting from img2img / ebsynth using manually selected keyframes and post correction. I don't know what Magnific AI uses. - comfyanonymous/ComfyUI Interesting idea, but I'd hope bullets 2 and 3 could be solved by something that leverages the API, preferably by injecting variables anywhere in the GUI-loaded or API-provided workflow. ) We would like to show you a description here but the site won’t allow us. when I went back it had added my prompt and a lot of the generation detail. Dec 26, 2023 · Welcome to the unofficial ComfyUI subreddit. 1 or not. 5 , denoise ~0. ) I haven't managed to reproduce this process in Comfyui yet. I think it’s a fairly decent starting point for someone transitioning from Automatic1111 and looking to expand from there. Join the largest ComfyUI community. The other night I was riffing off of a prompt that I saw and ended up with an end of time/apocalpyse/cosmic explosion thing going on. The new versions uses two ControlNet inputs : a 9x9 openpose faces, and a single openpose face. Ignore the prompts and setup /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. 10 or these nodes will not work. And now for part two of my "not SORA" series. C: Supports newer stuff that A1111 doesn't (or doesn't yet) When you say controller are you referring to the menu ribbon with buttons like queue image? Or are you referring to one of the nodes in your workflow? Sep 12, 2023 · Welcome to the unofficial ComfyUI subreddit. 5 Lora with SDXL, Upscaling Future tutorials planned: Prompting practices, post processing images, batch trickery, networking comfyUI in your home network, Masking and clipseg awesomeness, many more. 5 by using XL in comfy. This example showcases the Noisy Laten Composition workflow. 
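One comment on this page wishes the API allowed injecting variables anywhere in a GUI-loaded or API-provided workflow. With an API-format export that is mostly a dictionary walk, since every node is keyed by id and carries a class_type plus an inputs dict. A minimal, hypothetical helper (the node and field names below are common defaults, not a fixed API):

```python
import json
import random

def set_inputs(workflow: dict, class_type: str, **fields):
    """Overwrite input fields on every node of a given type in an API-format workflow."""
    for node in workflow.values():
        if node.get("class_type") == class_type:
            node["inputs"].update(fields)

with open("my_workflow_api.json") as f:
    wf = json.load(f)

set_inputs(wf, "KSampler", seed=random.randint(0, 2**32 - 1))   # fresh seed each run
set_inputs(wf, "EmptyLatentImage", width=768, height=1152)      # swap the resolution
# then queue `wf` with the /prompt call sketched earlier on this page
```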
Aug 3, 2023 · Discover the Ultimate Workflow with ComfyUI in this hands-on tutorial, where I guide you through integrating custom nodes, refining images with advanced tool Sep 7, 2023 · I am one of the developers working on nodes to support Deforum-like flows in ComfyUI. . I also created the workflow based on Olivio's video, and replaced the positive and negative nodes with the new styles node. 7 colab notebook, audioreactive keyframes generated on Framesync, and upscaled x4 with RealESRGAN model on Cupscale (5. Jul 9, 2023 · I love the idea of finally having control over areas of an image for generating images with more precision like Comfyui can provide. If you want to post and aren't approved yet, click on a post, click "Request to Comment" and then you'll receive a vetting form. In 1111 using image to image, you can batch load all frames of a video, batch load control net images, or even masks, and as long as they share the same name as the main video frames they will be associated with the image when batch processing. May 16, 2024 · Has anyone managed to implement Krea. Custom node support is critical IMO because any significantly complex workflow will be leveraging something custom. Image Realistic Composite & Refine ComfyUI Workflow . Please share your tips, tricks, and workflows for using this… ComfyUI - Ultimate Starter Workflow + Tutorial Heya, ive been working on this workflow for like a month and its finally ready, so I also made a tutorial on how to use it. 0 Refiner for very quick image generation. There are many regional conditioning solutions available, but as soon as you try to add LoRA data to the conditioning channels, the LoRA data seems to overrun the whole generation. Good for depth, open pose so far so good. Official Deforum animation pipeline tools that provide a unique way to create frame-by-frame generative motion art. I haven't tried A1111 recently so this isn't a comparison, I'm just stoked to start playing with all this stuff again. To get started with Deforum Comfy Nodes, please make sure ComfyUI is installed and you are using Python v3. Civitai has few workflows as well. Ignore the rest until you feel comfortable with those. Aug 6, 2023 · My main issue with this one is that it doesn't seem able to support SDXL and exports its results as gifs with lots of visual artifacts (that and I do really like the ability to music-sync things like in deforum, which I don't think AnimateDiff can support) Sep 12, 2023 · 3K subscribers in the comfyui community. This node is particularly useful for AI artists who want to create high-quality images by leveraging advanced sampling techniques. I use the rgthree seed node to deal with this. Will post workflow in the comments. Share and Run ComfyUI workflows in the cloud. the diagram doesn't load into comfyui so I can't test it out. A lot of this process appears to involve where you put what. These instructions assume you have ComfyUI installed and are familiar with how everything works, including installing missing custom nodes, which you may need to if you get errors when loading the workflow. Has anyone tried or is still trying? Generate a character you like with a basic image generation workflow. It's installable through the ComfyUI and lets you have a song or other audio files to drive the strengths on your prompt scheduling. Allo! 
I am beginning to work with ComfyUI moving from a1111 - I know there are so so many workflows published to civit and other sites- I am hoping to find a way to dive in and start working with ComfyUI without wasting much time with mediocre/redundant workflows and am hoping someone can help me by pointing be toward a resource to find some of the better developed comfy workflows Guys just a little Question , assuming that I have a workflow in comfy that requires for example, just an image, there Is anyway I can create a gradio interface, for an end user in which the users only uploads a photo and waits for the result? Welcome to the unofficial ComfyUI subreddit. Comfy1111 SDXL Workflow for ComfyUI Just a quick and simple workflow I whipped up this morning to mimic Automatic1111's layout. My long-term goal is to use ComfyUI to create multi-modal pipelines that can reach results as good as the ones from the AI systems mentioned above without human intervention. It uses the amplitude of the frequency band and normalizes it to strengths that you can add to the fizz nodes. Working on a workflow What are the main options that comfyui users change a lot? Welcome to the unofficial ComfyUI subreddit. but mine do include workflows for the most part in the video description. Aug 3, 2023 · If you understand how the pipes fit together, then you can design your own unique workflow (text2image, img2img, upscaling, refining, etc). 3. basically calculates the steps needed to reach the desired denoise , and applies the steps. it is a simple way to compare these methods, it is a bit messy as I have no artistic cell in my body. Anyone have a decent turoial or workflow for batch img2img for comfyui? I'm looking at doing more vudeo render type deals but comfyui tutorials are all about sdxl. I am open to approaching this a variety of ways- using svd, animatediff, or even flicker/deforum ish approaches Is anyone aware of an existing workflow that allows for video masking similar to what I am describing? I improved on my previous expressions workflow for ComfyUI by replacing the attention couple nodes by area composition ones. saw your video sample and it is a nice work ! An audioreactive animation made it with: Stable Diffusion v2. In over my head, sticking to A1111 for this kind of stuff for now, hope there's more movement on this side of things some day. Grab the ComfyUI workflow JSON here. safetensors 73. It is much more coherent and relies heavily on the IPAdapter source image as you can see in the gallery. I also had to edit the styles. You can do infinite zoom animations using an outpainting workflow and the 'Impact' custom nodes (imageSender & imageReciever) 3. The value schedule node schedules the latent composite node's x position. I can show you more about what I built if you have any Comfy workflow want to host. Please share your tips, tricks, and workflows for using this… One thing I can't figure out in my video to video workflow is how to add a coherence option. Jul 23, 2023 · r/StableDiffusion • 1. Multiple characters from separate LoRAs interacting with each other. If you want to use RealESRGAN_x4plus_anime_6B you need work in pixel space and forget any latent upscale. 7 MB Welcome to the unofficial ComfyUI subreddit. With the extension "ComfyUI manager" you can install almost automatically the missing nodes with the "install missing custom nodes" button. You can also share the . 
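Someone here asks whether a ComfyUI workflow that only needs a single image can be hidden behind a Gradio page, so an end user just uploads a photo and waits for the result. A bare-bones sketch of that wrapper is below; run_workflow is a hypothetical stand-in for the queue-and-poll logic (for instance the /prompt call sketched earlier on this page), not part of any ComfyUI API:

```python
import gradio as gr

def run_workflow(image_path: str) -> str:
    # Placeholder: upload image_path to your ComfyUI instance, queue the workflow,
    # poll the job until it finishes, then return the path of the output image.
    return image_path  # echo the upload for now so the page runs end to end

demo = gr.Interface(
    fn=run_workflow,
    inputs=gr.Image(type="filepath", label="Source photo"),
    outputs=gr.Image(label="Result"),
    title="ComfyUI one-click workflow",
)

if __name__ == "__main__":
    demo.launch()   # serves a simple upload-and-wait page on localhost
```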
When you have your ComfyUI running just drag image file from your downloads to ComfyUi opened in browser and it will load entire workflow as if you load . With the Deforum video generated, we made a new video of the original frames with FFmpeg, up to but excluding the initial Deforum Init frame: ffmpeg -f image2 -framerate 60 -start_number 0031 -i frame%04d. I simply combined the two for use in ComfyUI. I've been using the newer ones listed here [GUIDE] ComfyUI AnimateDiff Guide/Workflows Including Prompt Scheduling - An Inner-Reflections Guide | Civitai because these are the ones that work with Prompt Scheduling, using GitHub /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. Beginners' guide for ComfyUI 😊 We discussed the fundamental comfyui workflow in this post 😊 You can express your creativity with ComfyUI #ComfyUI #CreativeDesign #ImaginativePictures #Jarvislabs. This workflow uses SDXL 1. If you have a random or incremental seed, the workflow will run everything from that point (which is almost all the workflow most of the times). If you switch it to comfUI it will be a major pain to recreate the results which sometimes make me think is there an underlying issue with animateDiff in comfyUI that nobdoy noticed so far or is it just me? Train a lora for the style, or dreambooth or full fine tune to lock in the style/character Experiment with various controlnets to buld a workflow. A. You can also easily upload & share your own ComfyUI workflows, so that others can build on top of them! :) Why I built this: I just started learning ComfyUI, and really like how it saves the workflow info within each image it generates. /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. 5 refiner (one or two) to have 3 results (SDXL, + two refined), in upscaling I do an Ultimate Upscale with the SDXL Checkpoint (x 1. I applied massive slow motion to the AnimateDiff clips to give them a less hectic feel. Below is how I used That's the one I'm referring to. Welcome to AIStoxiaArt, the official community for Stoxia. Here's a basic example of using a single frequency band range to drive one prompt: Workflow I'm trying out deforum for 1111 for the first time and I have a few questions for more experienced users first i would like to fix the lines and not this zoom impression i would like to make the spaces in between more interesting i have used 3 CN to create here Download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow. Show/Share your ComfyUi Workflow. ive got 3 tutorials that can teach you how to set up a decent comfyui inpaint workflow. 4) and after that depending on the result another upscale (x1. BUT, the zoom & translation z settings have me confused. May 15, 2024 · The above animation was created using OpenPose and Line Art ControlNets with full color input video. Problem is when I set the batch to 80 in latent nodes I get 80 completely unrelated images from the example workflow when I run the workflow. 1 Laptop with 3050TI. /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app On the ComfyUI project page, there are much smaller workflows that are ideal for beginners. Thankfully you guys provided an option. 4" - Free Workflow for ComfyUI. 
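The audio-reactive idea mentioned on this page, where a song drives the strengths in your prompt scheduling by taking one frequency band's amplitude and normalizing it for the fizz/scheduling nodes, can be prototyped outside ComfyUI first. A sketch that assumes librosa for the analysis and guesses at the usual "frame:(value)" keyframe string format:

```python
import numpy as np
import librosa

def band_strengths(audio_path, video_frames, fps=12, band=(80, 250), lo=0.4, hi=1.0):
    """Map the loudness of one frequency band onto per-frame prompt strengths."""
    y, sr = librosa.load(audio_path, sr=None, mono=True)
    mag = np.abs(librosa.stft(y))                               # magnitude spectrogram
    freqs = librosa.fft_frequencies(sr=sr)
    amp = mag[(freqs >= band[0]) & (freqs <= band[1])].mean(axis=0)
    amp = (amp - amp.min()) / (amp.max() - amp.min() + 1e-8)    # normalize to 0..1
    t_audio = np.linspace(0, len(y) / sr, num=len(amp))         # audio timeline (seconds)
    t_video = np.arange(video_frames) / fps                     # animation timeline
    return lo + (hi - lo) * np.interp(t_video, t_audio, amp)

strengths = band_strengths("track.wav", video_frames=120)
print(", ".join(f"{i}:({v:.2f})" for i, v in enumerate(strengths)))
```

Paste the printed schedule into the scheduling node's keyframe field (adjust the string format to whatever your node expects) and the prompt strength will ride the chosen band of the track.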
comfy uis inpainting and masking aint perfect. Not only I was able to recover a 176x144 pixel 20 year old video with this, in addition it supports the brand new SD15 model to Modelscope nodes by exponentialML, an SDXL lightning upscaler (in addition to the AD LCM one), and a SUPIR second stage, for a total a gorgeous 4k native output from comfyUI! Jun 25, 2024 · (deforum) Integrated Pipeline: The DeforumSingleSampleNode is designed to facilitate the generation of a single sample image using the Deforum framework. In almost all cases, it is loaded with custom nodes I've never heard of that Sep 3, 2023 · If necessary, updates of the workflow will be made available on Github. , may still be missing, which could be added later or if another custom node already has this capability and is incorporated into the workflow. Please share your tips, tricks, and workflows for using this software to create your AI art. ?? Welcome to the unofficial ComfyUI subreddit. Generate a face closeup for that character, ideally in a square format, using the same workflow but a different prompt. So you workflow should look like this: 21K subscribers in the comfyui community. Warning (OP may know this, but for others like me): There are 2 different sets of AnimateDiff nodes now. So all the motion calculations are made separately like in a regular txt2vid workflow with the ipadapter only affecting the “look” of the output. Ignore the LoRA node that makes the result look EXACTLY like my girlfriend. Just download it, drag it inside ComfyUI, and you’ll have the same workflow you see above. Share, discover, & run thousands of ComfyUI workflows. 0 for ComfyUI - Now with support for SD 1. om 。 This animation is a combo of AnimateDiff and Deforum. Then i take another picture with a subject (like your problem) removing the background and making it IPAdapter compatible (square), then prompting and ipadapting it into a new one with the background. The Deforum parts were also deflickered with AnimateDiff. May 11, 2024 · 2 Person LORA ComfyUI workflow? /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper Welcome to the unofficial ComfyUI subreddit. Thanks. It's based on the wonderful example from Sytan, but I un-collapsed it and removed upscaling to make it very simple to understand. gg ks tn ew lj bo vy zz co wy