ComfyUI speed-up notes: tips, benchmarks, and issue reports collected from GitHub (comfyanonymous/ComfyUI and related custom-node repositories).



Setup and general notes

ComfyUI describes itself as "the most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface"; the notes below collect speed-related tips, benchmarks, and bug reports from its README, commit log, and issue tracker.

- Follow the ComfyUI manual installation instructions for Windows and Linux, then install the dependencies. If you have another Stable Diffusion UI installed and working with its own Python venv, you can reuse that venv (and its dependencies) to run ComfyUI. Launch it with `python main.py --force-fp16`; note that --force-fp16 only works if you installed a recent PyTorch nightly.
- Launch speed differs between local and cloud: one user's local machine launches ComfyUI in ~10 seconds (despite having far more custom nodes installed), while their GPU cloud service takes ~40 seconds.
- ComfyUI Flux Accelerator can generate images up to 37.25% faster than the default settings (RTX 4090 examples are quoted further down).
- AITemplate is an open-source alternative to TensorRT; the ComfyUI extension is at https://github.com/FizzleDorf/ComfyUI-AIT.
- Flux variants: speed is fairly comparable between models, usually only going up a percentage for FP16 Dev.
- A1111 comparison (user report): a 512x512, 20-step Euler image renders in well under two seconds in auto1111, but the same image with the same settings takes ~3 seconds in Comfy; roughly half the speed, and a pretty big slowdown from auto1111.
- Hardware report: Windows portable build, fully up to date, on a 13900K, 32 GB RAM, Windows 11, 4090 with the newest drivers. 1.5 and 2.0 flows work fine, but SDXL loading the checkpoints takes about 19 GB of VRAM and then pushes to 24 GB. "The fact it works the first time but fails on the second makes me think there is something to improve, but I am definitely playing with the limit of my system (resolution around 1024x768, plus other things in my workflow)."
- GPU load: hours of auto-queue generation never overheat the card; ComfyUI does not hold a constant 100% load the way synthetic tests, rendering, or gaming do.
- HunyuanVideo wrapper (kijai/ComfyUI-HunyuanVideoWrapper): at 720p with 24 GB of video memory, batch_size can be raised to 40, for a speed-up of about 40%.
- Iterative high-res upscaling is relatively slow and VRAM-hungry, since it requires multiple iterations at high resolution, while Deep Shrink/HiDiffusion actually speed up generation while the scaling effect is active.
- Commit note: "This has a very slight hit on inference speed and zero hit on memory use; initial tests indicate it's absolutely worth using."
- Running the equivalent command line outside ComfyUI nets roughly 8 minutes if everything is configured properly.
- Canvas panning request: "Please match every other software out there to pan the canvas" (the spacebar-pan behavior is described near the end of these notes).
- Multi-view: MV-Adapter custom nodes for ComfyUI (huanngzh/ComfyUI-MVAdapter).
Checkpoint loading and model settings

- Try using an fp16 model config in the CheckpointLoader node; it should be at least as fast as the a1111 UI if you do that.
- Loader speed: "as soon as I switch back to CheckpointLoaderSimple, my generation speeds shoot back up to 3-5 it/s. Is there a way to get the speed of CheckpointLoaderSimple while still being able to set clip skip to 2?"
- To speed up generation while testing, you can lower the "scale_by" parameter to 1 if you just need to quickly check that the assembly works.
- Make sure you update ComfyUI to the latest version (update/update_comfyui.bat if you are using the standalone build). Changelog items seen along the way: 12/17/2024 support for ModelScope (ModelScope demo); better compatibility with third-party checkpoints (compatible free third-party checkpoints are being collected continuously).
- --fast regression (RTX 4090): after a recent update, Flux inference slowed down, and there is no speed difference between running with and without --fast (a fuller expected/actual report appears under "Issue reports" below).
- WD14 tagger (pythongosssss/ComfyUI-WD14-Tagger): "I have a 3080 GPU, but it takes 9 seconds to infer from one image to text. I am using fp16 precision."
Benchmarks, AMD notes, and environment variables

- BAT file for quick installation of git versions of "ComfyUI" & "SUPIR" v.2 (NVIDIA): Nestorchik/ComfyUI-SUPIR-BAT.
- TensorRT benchmark, sd1.5 model (realisticvisionV51) at 512x768: base speed 5 it/s with a ~4.1 GB model; TensorRT static 8.2 it/s with a ~1.7 GB model (64% speed increase, since 8.2/5 = 1.64); TensorRT dynamic 7.9-8 it/s, also ~1.7 GB (60% speed increase).
- SDXL reference: "UPDATE: In Automatic1111, my 3060 (12GB) can generate a 20 base-step, 10 refiner-step 1024x1024 Euler a image in just a few seconds over a minute. I'll try it in ComfyUI later, once I set up the refiner workflow, which I've yet to do."
- Another comparison: A1111 gives one user 10.30 it/s at 512x512, Euler a, 100 steps, CFG 15; ComfyUI with the same settings is only 9.70 it/s.
- "What sampler/scheduler are you seeing the speed increase with?" "Euler / Simple." Suggested flags to try: --force-fp32 or --force-fp16, and if there is no improvement, --use-split-cross-attention.
- AMD: on a dual-boot Ubuntu/Win11 system with a 6900 XT, Ubuntu gives around 10 it/s at default settings (py -3.10 main.py), while Windows with DirectML gives 1 it/s, usually less.
- Low-RAM code change: "changed some code for lowram; the inference speed of a 4070 12G is now 20x faster than before (21 s at 20 steps)."
- Upscaling: slow speed/inactivity regardless of which upscale model, e.g. 4xUltraSharp and 4xFFHQDAT.
- LoRA on GGUF (Q8): with one LoRA there is no noticeable drop in speed, but with two or more the speed drops several times; generation slows significantly with each added LoRA, and with four LoRAs it drops about 3x. The problem was solved after the latest update, at least on Q8.
- fastblend for ComfyUI, plus other video-generation nodes (rebatch image, openpose): AInseven/ComfyUI-fastblend.
- GLM4 node (JcandZero/ComfyUI_GLM4Node): the GLM4 vision-understanding function loads successfully in ComfyUI; given a URL, you can chat with a GLM4 agent, and any user can plug in their own API key.
- You can also try setting the environment variable PYTORCH_TUNABLEOP_ENABLED=1, which might speed things up at the cost of a very slow initial run (a small launcher sketch follows below).
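A minimal way to combine that environment variable with the launch flags quoted in these notes is a tiny wrapper script. The flags are real ComfyUI options mentioned above; the wrapper itself is just an illustrative sketch, assumed to sit in the ComfyUI root next to main.py:

```python
# launch_comfy.py - illustrative launcher sketch, run from the ComfyUI root.
import os
import subprocess
import sys

env = dict(os.environ)
# Opt-in PyTorch tunable ops: may help, at the cost of a very slow first run.
env["PYTORCH_TUNABLEOP_ENABLED"] = "1"

args = [
    sys.executable, "main.py",
    "--force-fp16",                 # fp16 weights (needs a recent PyTorch)
    "--use-split-cross-attention",  # alternative attention implementation
    # "--disable-cuda-malloc",      # try if allocation behavior seems slow
    # "--lowvram",                  # only if you are running out of VRAM
]
sys.exit(subprocess.run(args, env=env).returncode)
```

Setting the variable in the environment before Python starts (e.g. in the .bat file of the standalone build) works just as well; the wrapper only makes the combination explicit.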
Custom nodes and model formats

- FreeU and PatchModelAddDownscale are now supported experimentally.
- "flux1-dev-bnb-nf4" is a new Flux model that is nearly 4 times faster than the Flux Dev version and 3 times faster than the Flux Schnell version; it is loaded through the "bitsandbytes_NF4" custom node.
- GGUF (city96/ComfyUI-GGUF): "how to increase speed on a GGUF model? A 4-step image takes 34 seconds with a 6 GB GGUF model, but a 6 GB unet model generates in 18-19 seconds." Thanks to city96 for active development of the node.
- stable-fast roadmap (from one acceleration project): convert the model using stable-fast (estimated speed-up 2x); train an LCM LoRA for the denoise unet (estimated speed-up 5x); optionally train a new model on a better dataset to improve result quality; continuous research, always moving towards something better and faster.
- Custom prediction: all custom nodes are provided under Add Node > sampling > prediction. For fully custom prediction, use the sampling > prediction > Sample Predictions node as your sampler; the sampler input comes from sampling > custom_sampling > samplers, and generally you'll use KSamplerSelect. An example workflow is in examples/avoid_and_erase.json.
- NSFW filter node: adapts the original model and inference code from nudenet for use with Comfy. A small 10 MB default model, 320n.onnx, is provided; if you wish to use other models from that repository, download the ONNX model, place it in the models/nsfw directory, and set the appropriate detect_size. From initial testing, the filtering effect is better than classifier models.
- Ollama Image Describer: add the node via Ollama -> Ollama Image Describer. Inputs: model (7b, 13b, or 34b; the greater the parameter count, the longer inference takes), images (some models, such as llava, accept more than one image), use_kv_cache (enable the key/value cache to speed up inference), seed (a random seed for generating output), and control_after_generate (how the seed changes on every run).
- HyperTiling: try the "HyperTiling" node under _nodes_for_testing. It is not obvious, but hypertiling is an attention optimization that improves on xformers and friends, and the speed gain grows as the image size increases.
- Running ComfyUI with --disable-cuda-malloc may optimize speed further; see https://github.com/comfyanonymous/ComfyUI/issues/1992.
- Filtering: issue "Use Torch primitives for Gaussian blur to vastly speed it up" (#41, opened by ttulttul in May); a sketch of the idea follows below.
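The gist of that issue, as quoted, is replacing a slow blur with PyTorch's native convolutions. A hedged sketch of a separable Gaussian blur built only on torch primitives follows; this is not the actual node's code, just one standard way to implement the technique (two 1-D depthwise passes instead of one 2-D kernel):

```python
import math
import torch
import torch.nn.functional as F

def _gaussian_kernel1d(sigma: float, radius: int) -> torch.Tensor:
    # Normalized 1-D Gaussian sampled at integer offsets.
    x = torch.arange(-radius, radius + 1, dtype=torch.float32)
    kernel = torch.exp(-0.5 * (x / sigma) ** 2)
    return kernel / kernel.sum()

def gaussian_blur(img: torch.Tensor, sigma: float) -> torch.Tensor:
    """img: (B, C, H, W). Separable blur: O(radius) work per pixel per pass."""
    radius = max(1, math.ceil(3.0 * sigma))
    k = _gaussian_kernel1d(sigma, radius).to(img.device, img.dtype)
    channels = img.shape[1]
    # Depthwise weights: one copy of the kernel per channel, groups=C.
    kx = k.view(1, 1, 1, -1).expand(channels, 1, 1, -1).contiguous()
    ky = k.view(1, 1, -1, 1).expand(channels, 1, -1, 1).contiguous()
    img = F.pad(img, (radius, radius, 0, 0), mode="reflect")
    img = F.conv2d(img, kx, groups=channels)   # horizontal pass
    img = F.pad(img, (0, 0, radius, radius), mode="reflect")
    img = F.conv2d(img, ky, groups=channels)   # vertical pass
    return img

# Usage: blurred = gaussian_blur(torch.rand(1, 3, 512, 512), sigma=2.0)
```

On a GPU this runs as two cuDNN convolutions, which is typically far faster than any Python-level per-pixel loop.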
CFG tricks, saving, and VRAM

- Automatic CFG: up to a 28.5% speed increase; in short, turning off the guidance makes the steps go twice as fast (the changelog also mentions negative weighting). "Don't worry, this might be just a question of a few days or maybe hours. You could try my acceleration nodes."
- Counterpoints: "I don't recommend it, because in the cases I tried the inference is faster but you'll need many more steps." Another user reports very slow generation when using AutoCFG.
- Checkpoint loading can itself be sped up: ComfyUI-Workflows-Speedup (ccssu/ComfyUI-Workflows-Speedup) aims to speed up the loading of checkpoints with ComfyUI. See also the mirror at Comfy-Org/ComfyUI-Mirror.
- Slow saving: "A bit ago I tried saving in batches asynchronously and then changing the date metadata post-save so everything was in the correct order, but I couldn't get the filename/date handling right and gave up. Anything to speed up my workflows." The suspected culprit was the filename-prefix loop or the repeated regex, but "I can confirm, everything false still sees extremely slow save speed."
- VRAM management: some wrapper nodes expose an "unload_model" option that frees VRAM, which makes them suitable for workflows that need a lot of it, such as FLUX.1-dev and CogVideoX-5b(-I2V).
- "No uncond" node: completely disables the negative prompt and doubles the speed, while rescaling the latent space in the post-CFG function until the sigmas are at 1; it can be done without any loss in quality when the sigmas are low enough (~1). A sketch of where that post-CFG hook lives follows below.
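For orientation, here is a hedged sketch of where such a hook attaches. set_model_sampler_post_cfg_function is a real ComfyUI ModelPatcher API; the node name, threshold parameter, and logic below are illustrative only. Note that a post-CFG hook by itself does not skip the negative pass (ComfyUI only skips it automatically when CFG is 1.0), so this shows the attachment point rather than the full speed trick:

```python
# Hypothetical ComfyUI custom node, loosely imitating the "no uncond" idea:
# below a sigma threshold, ignore the CFG mix and return the positive
# prediction alone.
class SkipUncondBelowSigma:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "model": ("MODEL",),
            "sigma_threshold": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 100.0}),
        }}

    RETURN_TYPES = ("MODEL",)
    FUNCTION = "patch"
    CATEGORY = "model_patches"

    def patch(self, model, sigma_threshold):
        m = model.clone()  # patch a clone so the original model is untouched

        def post_cfg(args):
            # args includes "denoised", "cond_denoised", "sigma", among others.
            if args["sigma"].max().item() <= sigma_threshold:
                return args["cond_denoised"]  # act as if guidance were off
            return args["denoised"]

        m.set_model_sampler_post_cfg_function(post_cfg)
        return (m,)

NODE_CLASS_MAPPINGS = {"SkipUncondBelowSigma": SkipUncondBelowSigma}
```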
Core features and workflow parameters

- README features worth knowing: starts up very fast; works fully offline and will never download anything; a config file sets the search paths for models; upscale models (ESRGAN, ESRGAN variants, SwinIR, Swin2SR, etc.) are supported.
- Keybinds:
  - Ctrl + Enter: queue up the current graph for generation
  - Ctrl + Shift + Enter: queue up the current graph as first for generation
  - Ctrl + Alt + Enter: cancel the current generation
  - Ctrl + Z / Ctrl + Y: undo/redo
- Note: only parts of the graph that have an output with all the correct inputs will be executed.
- Managing extensions: "I personally install nodes (in practice, currently most are node packs) that seem like they may be useful; my point was that managing them individually can easily get impractical. I have roughly 100 ComfyUI extensions."
- Multi-GPU feature idea: allow memory to split across GPUs. With the arrival of Flux, even 24 GB cards are maxed out and models have to be swapped in and out during image creation, which is slow; with two GPUs this would be a massive speed-up.
- PyTorch: "How to use PyTorch 2.5 and CUDA to modify related code to speed up image generation" (comfyanonymous/ComfyUI issue #5535).
- Stable-Fast supports Unet compilation that can change a model's weights without triggering a recompilation, while still keeping the speed benefits of a compiled model.
- The prompt enhancer is based on the example in THUDM's convert_demo.
- Update notes from one node pack: ComfyUI design patterns and model management are used where possible now; slightly lower VRAM usage (0.3-0.8 GB depending on workflow); motion model caching speeds up consecutive sampling. If you experience any issues you did not have before, please report them so they can be fixed quickly.
- Inpainting context parameters: context_expand_pixels grows the context area (i.e. the area used for the sampling) around the original mask, in pixels; context_expand_factor does the same as a factor, e.g. 1 is the original area and 1.1 grows it by 10% of the mask size (this provides more context for the sampling); fill_mask_holes controls whether holes in the mask are fully filled; invert_mask fully inverts the mask. A geometry sketch follows below.
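To make the growth rules concrete, here is a small self-contained sketch; the helper name and the NumPy representation are mine, not the node's actual code:

```python
# Hypothetical helper mirroring context_expand_pixels / context_expand_factor:
# grow the sampling context box around a binary inpainting mask.
import numpy as np

def context_box(mask: np.ndarray, expand_pixels: int = 0, expand_factor: float = 1.0):
    """mask: (H, W) bool array. Returns (x0, y0, x1, y1) of the grown context."""
    ys, xs = np.nonzero(mask)
    x0, x1 = xs.min(), xs.max() + 1
    y0, y1 = ys.min(), ys.max() + 1
    w, h = x1 - x0, y1 - y0
    # factor 1.0 = original box; 1.1 grows it by 10% of the mask size overall
    grow_x = expand_pixels + int(w * (expand_factor - 1.0) / 2)
    grow_y = expand_pixels + int(h * (expand_factor - 1.0) / 2)
    H, W = mask.shape
    return (max(0, x0 - grow_x), max(0, y0 - grow_y),
            min(W, x1 + grow_x), min(H, y1 + grow_y))

mask = np.zeros((512, 512), dtype=bool)
mask[200:300, 220:320] = True
print(context_box(mask, expand_pixels=8, expand_factor=1.1))
# -> (207, 187, 333, 313): 8 px plus 5% of the mask size on each side
```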
Issue reports

- "Hi, I've been using stable diffusion for a while now and have always enjoyed making artwork and images. A while back I got into training AI models when DreamBooth first came out as an extension. What should I do to improve speed and shorten time? Thanks."
- --fast regression, filed formally. Expected behavior: --fast should be faster; the inference speed and VRAM usage should have remained the same. Actual behavior: inference is about 20% slower, and VRAM usage is lower as well. Steps to reproduce: a default SDXL workflow with a LoRA; ComfyUI was started with --lowvram --disable-all-custom-nodes.
- Inspire pack: as mentioned after many tests in ltdrdata/ComfyUI-Inspire-Pack#135, everything works fine at commits ComfyUI 17bbd83 + Inspire pack ltdrdata/ComfyUI-Inspire-Pack@cf9bae0; updates following those commits crash the KSamplerAdvancedProgress //Inspire node when using the AYS scheduler with the LCM sampler.
- IPAdapter FaceID: using the full workflow with FaceID, drawing did not start for 60 seconds and all nodes ran at a very slow speed, which was very frustrating. In the same case, just deleting the IPAdapter node restored speed.
- Warm-up weirdness: "Every time I start ComfyUI the first image is processed quickly." Conversely: "I just did one that took 3 minutes at first generation and normal speed on the upscale and face detailer, but then I redid the same settings, prompt, and seed and it got slower; replacing the input image once slowed down the processing." And: "Wonder if this might have anything to do with the warning below, or is my 6 GB of VRAM just too little for this node?"
- Sampler spike: "I'm seeing iterations go from 2-3 s/it to 40-70+ s/it, running on an i9 11900K, 32 GB RAM, NVIDIA RTX 4070 12 GB (I know I'm kind of pushing it on VRAM, so not sure if this sampler is just a bit stricter with VRAM requirements)."
- Hang on ClipTextEncode: ComfyUI executes ClipTextEncode, then after a while the computer hangs for ~3 seconds and automatically reboots (with --lowvram). The crash usually happens when ComfyUI visually executes ClipTextEncode, but run on its own it doesn't seem to be the issue. In another report there is no progress at all: ComfyUI pegs one CPU core at 100% and the computer becomes unusably slow, to the point of freezing.
- Conditioning deltas: conditioning vectors obtained by subtracting one prompt conditioning from another. The result is a latent vector between the two prompts that can be added to another prompt (a toy sketch follows below).
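A toy illustration of that arithmetic on raw embedding tensors; real ComfyUI conditioning is a list of [tensor, options] pairs, and the shapes and prompt names below are just stand-ins:

```python
import torch

# Stand-ins for three prompt encodings (batch, tokens, embedding dim).
cond_king = torch.randn(1, 77, 768)
cond_man = torch.randn(1, 77, 768)
cond_woman = torch.randn(1, 77, 768)

delta = cond_king - cond_man          # direction from "man" toward "king"
queen_ish = cond_woman + 0.8 * delta  # add a scaled delta to another prompt
print(queen_ish.shape)                # torch.Size([1, 77, 768])
```

The scale factor (0.8 here) controls how far along the delta the resulting conditioning sits between the two prompts.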
UI, environments, and housekeeping

- Stall report (CUDA Toolkit 12.5, Python 3.x): just leave ComfyUI running and wait 6-10 hours; during this time, ComfyUI will stop, without any errors or information in the log about the stop.
- LCM quality: "Am I doing anything wrong? I thought I got all the settings right, but the results are straight up demonic. Is that just how bad the LCM LoRA performs, even on base SDXL?" (workflow used: Example3.json)
- CatVTON: about 12 seconds per image on an A100 GPU, and that is with CatVTON's models already loaded; the inputs are 512x768 images for both the clothes and the model, so not big images. When running this, it seems abnormally slow.
- 7900 XTX (32 GB RAM, Windows 10, Radeon 24.1 driver): the speed is the same though, about 2 seconds per iteration.
- Zoom speed: open ComfyUI\web\lib\litegraph.js in a text editor, search for `scale *= 1.1`, and replace the 1.1 with a larger number like 1.5 for a faster zooming-in speed, or a smaller number like 1.05 for a slower one.
- Vast.ai: is there a chance to speed up the installation process? The environment uses only one CPU core for the pip install, which can take a long time (up to 2 hours) depending on the instance.
- Docker: there is a Docker image for ComfyUI (tag v2-rocm-6.0-runtime-22.04-v0.7) that makes it extremely easy to run ComfyUI on Linux and Windows WSL2, and it also includes the ComfyUI Manager extension. After the container has started, navigate to localhost:8188 to access ComfyUI.
- Desktop beta: after installing the beta version of desktop ComfyUI and starting performance tests, the first thing noticed was that the UI recognizes the 120 Hz display while idle (not generating).
- T-GATE implementation for ComfyUI (JettHu/ComfyUI_TGate): 10%-50% speed-up for different diffusion models, applied via the TGate Apply node.
- DWPose: added onnxruntime support to speed up DWPose (see the Q&A); fixed "TypeError: expected size to be one of int or Tuple[int]...".
- HelloMeme: 12/08/2024 added HelloMemeV2 (select "v2" in the version option of the LoadHelloMemeImage/Video node), with improved expression consistency between the generated video and the driving video.
- Other video nodes: experimental stable-video-diffusion (kijai/ComfyUI-SVD), DynamiCrafter wrapper (kijai/ComfyUI-DynamiCrafterWrapper), MimicMotion (AIFSH/ComfyUI-MimicMotion), and an open question, "How to control video motion speed" (#56).
- Misc node packs: asagi4/comfyui-utility-nodes; yichengup/Comfyui_Redux_Advanced (Redux style adds more controls).
- Open issue: "speed" (#191, opened by 4lt3r3go on Dec 20, 2024).
- VRAM cleanup: comfyui-purgevram (T8star1984) can be added after any node to clean up VRAM and memory (a sketch of the general mechanism follows below).
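What such cleanup nodes generally do, sketched against ComfyUI's comfy.model_management module; the actual comfyui-purgevram implementation may differ, and this only runs inside a ComfyUI install:

```python
# Sketch of a passthrough "purge VRAM" node, assuming ComfyUI's
# comfy.model_management API (unload_all_models / soft_empty_cache).
import gc
import torch
import comfy.model_management as mm

class PurgeVRAMSketch:
    @classmethod
    def INPUT_TYPES(cls):
        # Image passthrough so it can be chained after an image-producing node.
        return {"required": {"images": ("IMAGE",)}}

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "purge"
    CATEGORY = "utils"

    def purge(self, images):
        gc.collect()                 # drop unreferenced Python objects
        mm.unload_all_models()       # release models ComfyUI keeps resident
        mm.soft_empty_cache()        # ask ComfyUI to free cached device memory
        if torch.cuda.is_available():
            torch.cuda.empty_cache() # return cached CUDA blocks to the driver
        return (images,)

NODE_CLASS_MAPPINGS = {"PurgeVRAM (sketch)": PurgeVRAMSketch}
```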
Upstream commits and final numbers

Speed-related entries from the upstream commit log:
- Speed up Sharpen node.
- Speed up TAESD preview.
- Speed up fp8 matrix mult by using better code.
- Speed up hunyuan dit inference a bit.
- Speed up inference on nvidia 10 series on Linux.
- Try to speed up the test-ui workflow.

2B vs i2v: if you test the 2B model and get a certain speed, copy the same switches (all off, really) and re-run to match it in the i2v model; the various GPU speeds were posted in the 2B main question thread.

Panning: hold spacebar and the cursor turns into the hand icon; click and drag to pan the canvas, and release spacebar when done panning.

FP8 vs NF4 (Flux): with the single-file FP8 version, generating a 1024x1024 image takes about 14 GB of VRAM with a peak of 31 GB of RAM; with the NF4 version, about 12.7 GB of VRAM with a peak of about 16 GB of RAM. Both run at about the same speed, and the reduction in video memory usage doesn't seem to be as large as hoped (a measurement sketch follows below).
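To reproduce that kind of measurement yourself, PyTorch's peak-memory counters are enough. This counts only torch allocations, not total driver usage, and the workload below is a dummy stand-in for an actual sampling call:

```python
import torch

def report_peak(label: str, fn):
    # Requires a CUDA device; the counters track torch's own allocations.
    torch.cuda.reset_peak_memory_stats()
    fn()
    torch.cuda.synchronize()
    peak_gib = torch.cuda.max_memory_allocated() / 1024**3
    print(f"{label}: peak {peak_gib:.1f} GiB allocated")

# Swap the lambda for your real generation call to compare FP8 vs NF4 runs.
report_peak(
    "dummy matmul",
    lambda: torch.randn(8192, 8192, device="cuda") @ torch.randn(8192, 8192, device="cuda"),
)
```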